\section{Introduction}
\label{sec:introduction}
The size-frequency distribution of the Main Belt asteroid population is an equilibrium between destruction and creation. Destruction occurs through two mechanisms: collisions and rotational disruption. Both produce fragments---new asteroids of smaller sizes. The equilibrium established when considering only the role of collisions is well studied~\citep[e.g. ][]{BottkeJr:2005gd}, but the role of rotational disruption has yet to be explored. We use an asteroid rotational evolution model combined with a collision evolution model to produce a new size-frequency distribution that accounts for both mechanisms.
Rotational disruption is driven by the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect~\citep{Rubincam:2000fg,Taylor:2007kp}, which can rotationally accelerate asteroids to their critical spin disruption rates~\citep{Bottke:2006en,Scheeres:2007io,Walsh:2008gk}. The YORP effect changes the spin rate:
\begin{equation}
\dot{\omega} = \frac{Y}{2 \pi \rho R^2} \left( \frac{F_\odot }{a_\odot^2 \sqrt{1 - e_\odot^2}} \right)
\label{eqn:yorp}
\end{equation}
where $Y$ is the YORP coefficient determined by the asymmetric shape of the asteroid, $\rho$ is the density, $R$ is the radius of the asteroid, $a_\odot$ and $e_\odot$ are the heliocentric semi-major axis and eccentricity, and $F_\odot = 10^{14}$ kg km s$^{-2}$ is the solar radiation constant~\citep{Scheeres:2007kv}. The YORP effect has a strong size dependence. If the YORP coefficient $Y > 0$, then the spin rate accelerates towards a critical surface disruption limit.
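Equation~\ref{eqn:yorp} is straightforward to evaluate numerically. The sketch below uses the representative values from the timescale analysis later in the text ($Y = 0.01$, $a_\odot = 2.5$ AU) together with assumed illustrative values $\rho = 2$ g cm$^{-3}$, $R = 1$ km, $e_\odot = 0$:

```python
import math

# Illustrative evaluation of Equation (1); rho = 2 g cm^-3, R = 1 km and
# e = 0 are assumed values, Y and a follow the timescale analysis in the text.
Y = 0.01                       # non-dimensional YORP coefficient
rho = 2.0e12                   # density: 2 g cm^-3 expressed in kg km^-3
R = 1.0                        # radius [km]
a_sun = 2.5 * 1.495978707e8    # semi-major axis: 2.5 AU in km
e_sun = 0.0                    # eccentricity
F_sun = 1.0e14                 # solar radiation constant [kg km s^-2]

# d(omega)/dt in rad s^-2
omega_dot = (Y / (2 * math.pi * rho * R**2)
             * F_sun / (a_sun**2 * math.sqrt(1 - e_sun**2)))
print(omega_dot)   # ~5.7e-19 rad s^-2 for this km-scale body
```

The tiny instantaneous value illustrates why the effect only matters when integrated over millions of years, and why the $R^{-2}$ dependence shuts it off for large bodies.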
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{RadiusPeriodObservationsPlot.eps}
\end{center}
\caption{Spin period distribution as a function of radius for near-Earth (NEA), Mars crossing (MCA) and Main Belt (MBA) asteroids as reported in the Asteroid Lightcurve Database~\citep{Warner:2009ds}. The dashed lines indicate the critical surface disruption period $P_d \sim 2.33$ h for radii $R > 250$ m.}
\label{fig:spindistribution}
\end{figure}
Rotational disruption occurs when centrifugal and gravitational accelerations become equal inside a rubble pile asteroid. Created by collisional processing, rubble pile asteroids are a collection of gravitationally bound boulders with a distribution of size scales and with very little or no tensile strength between them~\citep{Harris:1996tn,Asphaug:2002vp}. Evidence for rubble pile geophysics includes measured low bulk densities implying high porosities~\citep{Yeomans:1997fp,Ostro:2006dq}, the resolved surface of 243 Itokawa~\citep{Fujiwara:2006ca}, the observed critical spin limit amongst the asteroid population $P_d = \sqrt{3 \pi / \rho G} \sim$ 2.33 h (see Figure~\ref{fig:spindistribution}) where $G$ is the gravitational constant~\citep{Harris:1996tn,Pravec:2007ki}, and evidence that asteroid pairs form from rotational fission events~\citep{Pravec:2010kt}. Due to this strengthless internal structure, an asteroid eventually disrupts into components when it rotates at this disruption spin rate~\citep{Scheeres:2007io}. This simple story of rotational disruption is complicated, but ultimately reaffirmed, when the asteroid's shape is allowed to evolve~\citep{Walsh:2008gk,Walsh:2012jt,Sanchez:2012hz,Sanchez:2013vm}.
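For reference, the quoted spin barrier follows directly from the formula above; the bulk density $\rho = 2$ g cm$^{-3}$ is an assumed value, chosen because it reproduces $P_d \sim 2.33$ h:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
rho = 2000.0         # assumed bulk density: 2 g cm^-3 in kg m^-3

# P_d = sqrt(3 pi / (rho G)) -- critical surface disruption period
P_d = math.sqrt(3 * math.pi / (rho * G))
print(P_d / 3600)    # ~2.33 h, the spin barrier in Figure 1
```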
The disruption spin rate is also size dependent. Asteroids smaller than 250 m in radius are able to rotate faster than the critical disruption limit (see Figure~\ref{fig:spindistribution}). The strength holding these small bodies together is hypothesized to come from cohesive forces~\citep{Holsapple:2007eg,Scheeres:2012tj} if these bodies are still rubble piles, or from the strength of the rock itself if they are monolithic components~\citep{Pravec:2000dr,Pravec:2007ki}. It is unclear what is happening at these small sizes.
Since the YORP effect is proportional to the radius squared (see Equation~\ref{eqn:yorp}), there is not a population of large asteroids spinning near the critical surface disruption limit (see Figure~\ref{fig:spindistribution}). We quantify this upper size limit by comparing rotational acceleration rates to collision rates using a timescale analysis. There are two possibly relevant collisional timescales: (1) the disruption timescale $\tau_\text{disr}$, how long before a collision occurs that removes more than half the mass of the asteroid, and (2) the rotational timescale $\tau_\text{rot}$, how long before a collision occurs that adds or subtracts angular momentum on the same order of magnitude as the asteroid's spin state.~\citet{Farinella:1998ff} provides an estimate of both timescales:
\begin{equation}
\tau_\text{disr} = 633\text{ My} \left( \frac{R}{1\text{ km}} \right)^{\frac{1}{2}} \quad \tau_\text{rot} = 188\text{ My} \left( \frac{R}{1\text{ km}} \right)^{\frac{3}{4}}
\end{equation}
We compare these timescales to an asteroid with a heliocentric orbit at $a_\odot = 2.5$ AU and a YORP coefficient of $Y = 0.01$:
\begin{equation}
\tau_\text{YORP} \sim \frac{ 2 \pi \omega_d \rho R^2 a_\odot^2 }{Y F_\odot} = 42 \text{ My} \left( \frac{R}{1\text{ km}} \right)^2
\end{equation}
where $\omega_d = \sqrt{4 \pi \rho G / 3}$ is the critical disruption spin rate for a gravitationally bound spherical object. The timescale for YORP-induced rotational acceleration is always shorter than the collision-driven rotation timescale, and it is shorter than the collision-driven disruption timescale for asteroids with radii $R \lesssim$ 6 km. The YORP-induced rotational disruption timescale is longer than the age of the Solar System for $R \gtrsim$ 10 km, explaining the lack of rapid rotators in Figure~\ref{fig:spindistribution} at large sizes. Therefore, we focus our rotational evolution model on asteroids with radii between 250 m and 15 km since in this range rotational disruption has a significant effect on the creation-destruction equilibrium which sets the size-frequency distribution.
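The quoted crossover radii follow directly from these scalings (a sketch; the $\sim$4500 My Solar System age is an assumed round number):

```python
# Timescales in My as functions of radius R in km, as given in the text
tau_disr = lambda R: 633.0 * R ** 0.5    # collisional disruption
tau_rot  = lambda R: 188.0 * R ** 0.75   # collisional spin change
tau_yorp = lambda R: 42.0 * R ** 2       # YORP spin-up (Y = 0.01, a = 2.5 AU)

# Radius where YORP spin-up overtakes collisional disruption:
# 42 R^2 = 633 R^(1/2)  =>  R = (633/42)^(2/3)
R_disr = (633.0 / 42.0) ** (2.0 / 3.0)
print(R_disr)                # ~6.1 km, the quoted R ~ 6 km limit

# Radius where tau_YORP exceeds the assumed age of the Solar System (~4500 My)
R_age = (4500.0 / 42.0) ** 0.5
print(R_age)                 # ~10.4 km, the quoted R ~ 10 km limit
```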
\section{Methods}
\label{sec:asteroidpopulationevolutionmodel}
To understand the effects of rotational disruption on the evolution of the size-frequency distribution of the Main Belt, we have used two separate codes. The first model computes the frequency with which small ($0.25 < R < 15$ km) asteroids spin up to disruption. The second is a collisional evolution model where we have added the effects of rotational disruption by exploiting the outcome of the first code.
\subsection{The rotational evolution model}
The model computing the rotational evolution of small asteroids is a continuation of the code presented in~\citet{Marzari:2011dx}, which studied the rotational evolution of the Main Belt asteroid (MBA) population including both the YORP effect and collisions since both evolve the spin rate and direction. That code was itself an improvement and continuation of earlier projects by~\citet{Rossi:2009kz} and~\citet{Scheeres:2004bd}, which studied the near-Earth asteroid population. Similar to~\citet{Marzari:2011dx}, we use a Monte Carlo approach to individually simulate $2 \times 10^6$ asteroids for $4 \times 10^9$ years; this evolution assumes conditions that were only present after the late giant planet instability~\citep{Tera:1973tf,Levison:2011gt}. The spin rate and obliquity of each asteroid evolve constantly due to the YORP effect and collisions as in~\citet{Marzari:2011dx}. Unlike in the previous works, when the rotation rate of an asteroid exceeds a specified spin limit, the asteroid rotationally disrupts.
Since the exact rotational break-up spin rate is a complex function of the internal component distribution, the asteroid rotation evolution model utilizes the simple approximation that all ``rubble piles'' rotationally disrupt at the critical surface disruption spin limit for an ellipsoidal object: $\omega_d = S \sqrt{4 \pi \rho G/3}$ where $S$ is a shape factor determined by elliptic integrals from the semi-axes of the ellipsoidal figure~\citep{Scheeres:1994cb}. This approximation requires the system to rotationally accelerate for a longer period of time before undergoing rotational fission. With respect to the YORP timescale for rotational fission, this may accurately reflect delays in rotational fission due to shape evolution. Each asteroid is also assigned a shape from an ellipsoidal semi-axis ratio distribution for the purpose of calculating the critical spin limit. From largest to smallest, the tri-axial semi-axes are $a$, $b$, and $c$ and the axis ratios are drawn from normal distributions such that for $b/a$, the mean $\mu = 0.6$ with a standard deviation $\sigma = 0.18$ and for $c/a$, $\mu = 0.4$ and $\sigma = 0.05$~\citep{Giblin:1998io}. This shape distribution is in agreement with Hayabusa observations of boulders on 243 Itokawa and photometry of small, fast-rotating asteroids~\citep{Michikami:2010cr}, as well as with the $b/a$ ratio inferred from the mean amplitude of asteroids with diameters between 0.2 and 10 km~\citep{Pravec:2000dr}.
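The shape assignment can be sketched as follows; the rejection of unphysical draws (requiring $0 < c/a \le b/a \le 1$) is our assumption, as the text specifies only the two normal distributions:

```python
import random

def draw_axis_ratios(rng=random):
    """Draw ellipsoid semi-axis ratios b/a and c/a from the normal
    distributions quoted in the text, redrawing until the ratios are
    physically sensible (assumed constraint: 0 < c/a <= b/a <= 1)."""
    while True:
        b_a = rng.gauss(0.6, 0.18)
        c_a = rng.gauss(0.4, 0.05)
        if 0.0 < c_a <= b_a <= 1.0:
            return b_a, c_a

random.seed(1)
sample = [draw_axis_ratios() for _ in range(10000)]
mean_ba = sum(b for b, _ in sample) / len(sample)
mean_ca = sum(c for _, c in sample) / len(sample)
print(mean_ba, mean_ca)   # near 0.6 and 0.4, slightly shifted by the cut
```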
In addition to its shape, each asteroid is characterized by a number of fixed and evolving parameters including a diameter and a fixed semi-major axis $a_\odot$ and eccentricity $e_\odot$ from a Main Belt asteroid orbital element distribution. Both the YORP effect and collisions evolve the spin rate $\omega$ and the obliquity $\epsilon$ of each asteroid. The initial spin rate is drawn from a Maxwellian distribution with $\sigma = 1.99$ corresponding to a mean period of $7.56$ h~\citep{Fulchignoni:1995um,Donnison:1999iv}. \citet{Rossi:2009kz} demonstrated for models similar to the asteroid rotational evolution model that the steady-state spin rate distribution is independent of the initial spin rate distribution. We draw the initial obliquity of each asteroid from a flat distribution. The relative change in obliquity is used by the model to update the YORP coefficient; however, the absolute obliquity is not currently used by the model. Thus the rotational evolution output is insensitive to the initial obliquity distribution, but it is a feature of the model that could be utilized in the future to compare input and output obliquity distributions.
In order to calculate the rotation evolution due to the YORP effect, each object is also assigned a non-dimensional YORP coefficient\footnote{\citet{Rossi:2009kz} and~\citet{Marzari:2011dx} notated the non-dimensional coefficient $Y$ as $C_Y$.} $Y$ from a Gaussian distribution with a mean of $0$ and a standard deviation of $0.0125$ motivated by the measured values of 1862 Apollo $Y = 0.022$ \citep{Kaasalainen:2007hq} and 54509 YORP $Y = 0.005$ \citep{Taylor:2007kp}. In~\citet{Rossi:2009kz}, the results were found to be invariant, to within the uncertainty of the model, with respect to the particular distribution used. The YORP coefficient is re-drawn whenever the obliquity changes by more than $0.2$ rad and evolves according to $Y_\text{new} = Y_\text{old} \left( 3 \cos^2 \Delta \epsilon - 1 \right) / 2$ for smaller changes in the obliquity due to collisions, as in~\citet{Nesvorny:2008by}. A similar scheme was utilized in the past~\citep{Scheeres:2007kv,Rossi:2009kz,Marzari:2011dx}. If the YORP coefficient $Y < 0$, then the spin rate is decelerating and the asteroid may enter a tumbling state. Since this model cannot assess the evolution of this state, an artificial lower spin barrier is enforced: asteroids have a set maximum spin period limit of $10^5$ hours. At this very slow rotation rate the YORP torque switches directions, which is modeled by switching the sign of the YORP coefficient. Collisions often control the spin state of bodies with such low rotation rates since even the smallest projectiles can deliver impulsive torques that are the same order of magnitude as the angular momentum of the target body.
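The coefficient update after a collision-induced obliquity change can be sketched as below; we assume the re-draw uses the same Gaussian quoted above:

```python
import math
import random

def update_yorp_coefficient(Y_old, delta_eps, rng=random):
    """Evolve the non-dimensional YORP coefficient after a collision changes
    the obliquity by delta_eps (radians), following the scheme in the text."""
    if abs(delta_eps) > 0.2:
        # large obliquity change: re-draw (assumed: from the Gaussian above)
        return rng.gauss(0.0, 0.0125)
    # small change: Y_new = Y_old * (3 cos^2(delta_eps) - 1) / 2
    return Y_old * (3.0 * math.cos(delta_eps) ** 2 - 1.0) / 2.0

random.seed(0)
print(update_yorp_coefficient(0.01, 0.0))   # unchanged for delta_eps = 0
print(update_yorp_coefficient(0.01, 0.5))   # re-drawn from the Gaussian
```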
The effects of collisions on the rotation rate follows a similar protocol as~\citet{Marzari:2011dx}. The population of potential impactors is derived from the Sloan Digital Sky Survey size-frequency distribution of asteroids~\citep{Ivezic:2001ct} distributed over logarithmic size bins\footnote{Diameter bins are created so that the upper diameter of a bin is $D_i = D_m D_w ^i$, where $D_m$ is the minimum diameter and $D_w = 1.25992$ is the bin width. This is similar to \citet{Spaute:1991hv}.} from $1$ m to $40$ km. Using Poisson statistics, the number of collisions and their timing is computed for each asteroid with projectiles from each size bin using the intrinsic probability of collision for the Main Belt $\left< P_i \right> = 2.7 \times 10^{-18}$ km$^{-2}$ yr$^{-1}$ \citep{Farinella:1992im,BottkeJr:1994kr}. Each collision is assigned an impact velocity of $5.5$ km s$^{-1}$~\citep{BottkeJr:1994kr} and a random geometry within the limits of the Main Belt orbital distribution\footnote{The strongest constraint is on the velocity along the absolute z-axis which cannot exceed that predicted by the average inclination of the Main Belt.}, in order to determine from these parameters the change in spin rate due to each collision.
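The binning and collision sampling described above can be sketched as follows; the projectile population in the example is a placeholder, and note that $D_w = 1.25992 \approx 2^{1/3}$, i.e.\ one factor of two in mass per diameter bin:

```python
import numpy as np

rng = np.random.default_rng(0)

D_min, D_w = 1e-3, 1.25992      # 1 m minimum diameter in km; bin width
n_bins = int(round(np.log(40.0 / D_min) / np.log(D_w)))
D_upper = D_min * D_w ** np.arange(1, n_bins + 1)   # upper edges D_i = D_m * D_w^i

# D_w^3 ~ 2: each diameter bin spans roughly a factor of two in mass
assert abs(D_w ** 3 - 2.0) < 1e-4

P_int = 2.7e-18                 # intrinsic collision probability [km^-2 yr^-1]

def sample_collisions(R_target, R_proj, n_proj, dt_yr):
    """Poisson-sample the number of impacts on one target asteroid from
    n_proj projectiles of radius R_proj over dt_yr years (radii in km)."""
    lam = P_int * (R_target + R_proj) ** 2 * n_proj * dt_yr
    return rng.poisson(lam)

# Hypothetical example: 10^12 ten-metre projectiles vs a 1 km target, 100 My
print(sample_collisions(1.0, 0.01, 1e12, 1e8))
```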
Cratering collisions do not appreciably change the mass or size of the target asteroid, but they do change its angular momentum. The angular momentum of the projectile and the target and the geometry of the collision determine the new angular momentum of the cratered asteroid. This new angular momentum vector is used to update both the spin rate and the obliquity. Sub-catastrophic impacts create a random walk in spin rate if there is no significant YORP effect rotational acceleration~\citep{Marzari:2011dx}. If the collision is too large for a cratering event, then the original asteroid is shattered and a new object is created with the same size but a new initial spin state and YORP coefficient. Shattering collisions are defined as those that deliver specific kinetic energy greater than the critical specific energy of the target, which is defined as the energy per unit target mass delivered by the collision required for catastrophic disruption (i.e. such that one-half the mass of the target body escapes)~\citep{Benz:1999cj,Davis:2002ts}.
Asteroid system destruction, whether through a catastrophic collision or rotational disruption, is a mass transfer from one size asteroid (the progenitor in the case of a binary) into two or more smaller bodies. Each asteroid in the asteroid rotational evolution model resides in a logarithmic diameter bin and the model tracks this mass flow from larger bins into smaller bins after each destructive event. This mass flow from large asteroids into smaller asteroids is a well-studied phenomenon in the context of collisional evolution of an asteroid population~\citep[e.g.][]{Davis:1979wc,Davis:2002ts,CampoBagatin:1994ki,Marzari:1995ga,Obrien:2003jk,BottkeJr:2005gd}.
After a destructive event, the asteroid is replaced with another asteroid from the original diameter bin. This replacement is motivated by the constant flux of material into the original bin from even larger bins~\citep{Farinella:1992wn,Marzari:2011dx}. In this way, the asteroid rotational evolution model maintains a steady-state size-frequency distribution. Therefore it does not feature collisional evolution with full feedback, but the output of the asteroid rotational evolution model includes destruction statistics that we then incorporate into a collisional evolution model~\citep{Davis:1989vn,Davis:2002ts}, from which we generate a new size-frequency distribution. Thus the asteroid rotational evolution model and the collisional evolution model complete a cycle. New impact probabilities and a new projectile distribution could be generated from the size-frequency distribution output from the collisional evolution code, and with these inputs the asteroid rotational evolution model could be re-run. This iterative process could be followed multiple times, refining the results with each iteration. These iterations are left for future work. For now, we use the tracked mass flow from the asteroid rotational evolution model in the collisional evolution model to determine the first-order corrected size-frequency distribution due to rotational fission.
\subsection{The collisional evolution model}
To evaluate the effects of YORP fissioning on the overall collisional evolution of asteroids in the Main Belt, we have used a simple 1--dimensional collisional evolution code~\citep{CampoBagatin:1994ki,Marzari:1995ga,BottkeJr:2005gd}. The size distribution of the Main Belt asteroids is modeled by a set of discrete logarithmic bins spaced by a factor 2 in mass and the time evolution is simulated through a sequence of timesteps. At each timestep the expected number of collisions involving bodies belonging to any pair of different bins is computed from $\left< P_i \right>$. This modeled distribution is appropriate for comparison to the Main Belt after 3.8 Gyr of evolution (to account for the possible Late Heavy Bombardment~\citep{Tera:1973tf}) or after 4.5 Gyr. The outcome of each collision is computed in terms of cratering or breakup and all the fragments, at the end of the timestep, are allocated in their new size bins. The number of asteroids is then updated in each bin according to the results of the mutual collisions. The size-strength scaling adopted in the model is similar to that described in~\citet{BottkeJr:2005gd}. Our innovative approach consists of including in this model the additional erosion mechanism of rotational disruption. The rotational evolution code described in the previous section gives as output the frequency of rotational disruption events for different asteroid sizes. We use this frequency to compute, for each size bin of the collisional evolution code, the number of bodies undergoing fission during the timestep; we subtract this number from each bin and add, at the same time, the fragments to the lower size bins. Their relative sizes are chosen from a flat size distribution. Once we run the collisional evolution code with the additional grinding mechanism related to YORP, we produce a size-frequency distribution that we compare to observations.
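The fission sink added to the collisional code can be sketched as follows; the bin structure (factor 2 in mass), the per-timestep fission fraction, and the flat mass-fraction draw (a stand-in for the flat fragment size distribution) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

n_bins = 20
mass = 2.0 ** np.arange(n_bins)      # bin masses, spaced by a factor of 2
N_bin = np.full(n_bins, 1000.0)      # bodies per bin (placeholder population)

def apply_rotational_fission(N_bin, fission_frac, rng):
    """Remove a fraction of each bin's bodies by rotational disruption and
    re-deposit each body's mass as two fragments into lower bins, conserving
    total mass.  The flat mass-fraction draw stands in for the flat fragment
    size distribution of the text."""
    N_new = N_bin.copy()
    for i in range(1, n_bins):
        n_fiss = fission_frac * N_bin[i]
        N_new[i] -= n_fiss
        f = rng.uniform(0.1, 0.5)    # assumed fragment mass-fraction range
        for m_frag in (f * mass[i], (1.0 - f) * mass[i]):
            j = max(0, min(int(np.floor(np.log2(m_frag))), i - 1))
            N_new[j] += n_fiss * m_frag / mass[j]  # fractional counts conserve mass
    return N_new

N_after = apply_rotational_fission(N_bin, 0.05, rng)
print((N_bin * mass).sum(), (N_after * mass).sum())   # total mass is conserved
```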
These smaller fragments can also be the two components of the binary created by the rotational disruption. In this case, it is possible that the rotational evolution of the binary members is significantly affected by their membership. Since $\sim$15\% of small asteroids (diameters between 0.3 and 10 km) are binaries~\citep{Pravec:1999wt,Margot:2002fe}, this is an important second-order effect to be dealt with in future work.
\section{Results: size-frequency distribution}
\label{sec:sizefrequencydistribtuion}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figx.eps} \\
\includegraphics[width=\columnwidth]{figy.eps}
\caption{Incremental size-frequency distributions corresponding to different initial distributions. Each is assumed for the Main Belt at the end of the accretion phase or after the Late Heavy Bombardment. They are either a steep power-law $N\left( > R \right) \propto R^{-4}$ for the top model~\citep{Weidenschilling:2011kt} or the same but smoothly truncated below $D = 100$ km for the bottom model to simulate the scenario where `Asteroids were born big'~\citep{Morbidelli:2009dd}. In both plots the initial size distribution is shown as red squares. The observed distribution, the reference for the modeling, is shown as magenta empty squares and is computed by plugging the results from the SKADS survey~\citep{Gladman:2009cx} into the~\citet{BottkeJr:2005gd} size distribution at 10 km in diameter. The size-frequency distribution of the collisional evolution model, without rotational fission, is shown on both panels as green empty squares after 3.8 Gy. The outcome of the complete model is shown as blue stars. The effects of rotational breakup begin at $D \sim 15$ km in diameter but are most noticeable for diameters less than 6 km. Continuing both simulations to 4.5 Gy does not significantly change the results.}
\label{fig:sizefrequencyplots1}
\end{figure}
The asteroid rotation evolution model and the collision model evolve initial asteroid populations into size-frequency distributions that share many of the features of the observed size-frequency distributions of~\citet{Gladman:2009cx}. Figure~\ref{fig:sizefrequencyplots1} shows the initial and final size-frequency distributions for two different initial populations. The initial size-frequency distributions are shown as red squares and represent either a canonical accretion scenario (top)~\citep{Weidenschilling:2011kt} or an ``Asteroids were born big'' scenario (bottom)~\citep{Morbidelli:2009dd}.
From each scenario, we conducted two experiments. First, asteroids did not rotationally evolve due to the YORP effect and so there was no YORP-induced rotational fission. These are the green empty squares in Figure~\ref{fig:sizefrequencyplots1}. Second, asteroids did rotationally evolve due to the YORP effect, and so there was YORP-induced rotational fission. These are the blue stars in Figure~\ref{fig:sizefrequencyplots1}. Consistent with the prediction from the simple timescale analysis, these size-frequency distributions are very similar for asteroids with radii $R \gtrsim 6$ km regardless of the initial asteroid population. Collisions solely determine the size-frequency distribution equilibrium at large sizes, where the YORP effect is irrelevant.
However, at radii $R \lesssim 6$ km, these model size-frequency distributions diverge. The second experiment, which included YORP-induced rotational fission, has far fewer asteroids in each size bin than the first experiment, which did not include YORP-induced rotational fission. This shallowing of the size distribution reflects a new equilibrium. The asteroid population at this new equilibrium experiences enhanced destruction due to rotational disruption. In other words, the collisional cascade, which produces the mass within these bins, is not able to produce new asteroids fast enough. Since this model does not include a full feedback loop, it is possible that this deficit of smaller asteroids will influence the destruction rate of asteroids that refill these size bins. This effect is likely to be small since most new mass in smaller bins is the result of catastrophic impacts between asteroids of similar sizes.
This new equilibrium matches observations. The results from the Sub-Kilometer Asteroid Diameter Survey (SKADS) are plotted as magenta open squares and extend past the $\sim$18 magnitude ($\sim$0.86 km) limit of the survey~\citep{Gladman:2009cx}. The transition in the SKADS data at a radius of $\sim$5 km corresponds closely to the transitions observed in the models including YORP-induced rotational fission.
An accurate study of how different strength scaling laws or collisional parameters influence the collisional evolution is beyond the scope of this paper. Our goal is to select a ``standard'' case that reproduces reasonably well the observed size distribution at large diameters and to plug in the rotational disruption algorithm to test its effect at the small-size end. We emphasize that rotational fission is an additional mechanism that must be accounted for in modeling the evolution of the Asteroid Belt.
\section{Conclusions}
Rotational disruption is a new size-dependent mechanism that alters the collisional steady-state size-frequency distribution equilibrium of Main Belt asteroids. We find that this mechanism becomes important at radii $R\lesssim6$ km from both a timescale analysis and a detailed numerical model. It nicely explains the change to a shallower slope observed in the size distribution of asteroids at small sizes (i.e., SKADS). This finding appears to be robust since we obtain the same result even with different initial size distribution populations. Future models of the collisional evolution of asteroids must include this effect in their algorithms.
\section*{Acknowledgments}
SAJ would like to thank the NASA Earth and Space Science Fellowship program.
\bibliographystyle{mn2e}
\section{Introduction}
\vspace{-1ex}
The calculation of quark propagators remains a major bottleneck in lattice
QCD. The problem is to solve the matrix equation
\begin{equation} \label{TheEquations}
M \psi = \eta
\end{equation}
for $\psi$
given several right-hand sides ({\em sources}) $\eta$.
The fermion matrix $M$ is non-hermitian
(we use Wilson fermions with clover improvement),
sparse and very large.
Recent attempts to accelerate the solution of \eqref{TheEquations}
have focused on:\\
\hspace*{5mm} 1. improved iterative methods\\
\hspace*{5mm} 2. improved preconditioners, or\\
\hspace*{5mm} 3. solving related systems.
There is growing consensus that the first of these areas is now mature
\cite{Forcrand,FrommerReview}, with BiCGSTAB
emerging as the method to beat.
Finding a better preconditioner is complicated by the now universal
requirement of scalability on parallel computers;
red-black preconditioning (used in the present study) is beginning to give
way to LL-SSOR \cite{Fischer}, but the last word on preconditioning
has not been said.
The third area is the exploitation of information
found in the solution of one system to accelerate the convergence of another.
Keeping the source fixed and varying $\kappa$ leads
to a family of multiple mass tricks \cite{ManyMasses,Glassner,Beat}.
Keeping the matrix fixed and varying the source leads to the ideas of deflation
\cite{Forcrand} and block algorithms, the several systems being solved
simultaneously in the latter but sequentially in the former.
Two properties of $M$ are relevant here.
\vspace{-1ex}
\begin{enumerate} \setlength{\parskip}{0 ex}
\item \label{SymmetryProp}
$M = \gamma_5 M^\dagger \gamma_5 $ is $\gamma_5$-symmetric.
This has been exploited to halve the computational costs
of QMR and BiCG \cite{ManyMasses}.
\item \label{ShiftedProp}
$\frac{1}{\kappa} M$
is a shifted matrix with respect to its dependence on $\frac{1}{\kappa}$.
Multiple-mass tricks depend on this property.
\end{enumerate}
\vspace{-1ex}
Red-black preconditioning preserves property (\ref{SymmetryProp}),
but destroys property (\ref{ShiftedProp}) except in the unimproved case
of $C_{SW}=0$.
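Both properties can be checked on a toy operator. Writing $M = I - \kappa D$ with a random matrix $D$ satisfying $\gamma_5 D^\dagger \gamma_5 = D$ (a small stand-in for the hopping-plus-clover term, not an actual lattice operator), property (1) follows from the symmetry of $D$ and property (2) is the shifted structure in $1/\kappa$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy stand-in: gamma5 as a signature matrix and D = g5 @ K with K hermitian,
# so that g5 @ D^dagger @ g5 == D (gamma5-symmetry)
g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = K + K.conj().T
D = g5 @ K

kappa = 0.14
M = np.eye(n) - kappa * D

# Property 1: M is gamma5-symmetric
assert np.allclose(g5 @ M.conj().T @ g5, M)

# Property 2: (1/kappa) M = (1/kappa) I - D is shifted in 1/kappa
assert np.allclose(M / kappa, (1.0 / kappa) * np.eye(n) - D)
```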
\vspace{-1ex}
\section{Block Algorithms}
\vspace{-1ex}
Using a Krylov subspace method $s$ times to solve $s$ systems
$M \psi^{(1)} = \eta^{(1)}, \ldots, M \psi^{(s)} = \eta^{(s)}$
leads to the construction of several overlapping Krylov subspaces.
In the worst case, i.e.\ when the number of iterations required for
convergence equals the order $N$ of $M$, the overlap will be complete.
By solving the $s$ systems simultaneously, block algorithms eliminate
the redundant matrix-vector operations in the above approach.
One assembles the $s$ right-hand sides into an $N \times s$ matrix
$ \Eta = ( \eta^{(1)}, \ldots, \eta^{(s)} ) $
and solves
\begin{equation} \label{BlockEquation}
M \Psi = \Eta
\end{equation}
for
$ \Psi = ( \psi^{(1)}, \ldots, \psi^{(s)} ) $.
On the other hand,
a perfect preconditioner (one which coincides exactly with $M^{-1}$)
solves the system with a single multiplication;
in this case there is no gain to be had from the block algorithm.
In practice we hope to have good preconditioners, so that we usually
solve the point ($s=1$) problem to the desired accuracy in much less
than $N$ multiplications.
These considerations lead one to expect that blocking will be
most effective on badly conditioned systems and/or small volumes, i.e.
when the number of iterations required for the point algorithm
to converge is comparable to $N$.
Blocking introduces certain overheads. The first is memory;
the storage requirements for vectors in the point algorithm
are multiplied by $s$ when going to the block algorithm.
Secondly, vector-vector operations such as
\begin{equation}
y = \alpha x + y \ {\rm and} \ \beta = y^\dagger x \nonumber
\end{equation}
in the point algorithm generalise to
\begin{equation}
Y = X \alpha + Y \ {\rm and} \ \beta = Y^\dagger X \nonumber
\end{equation}
where $X$ and $Y$ are $N \times s$ matrices and $\alpha$ and $\beta$
have become $s \times s$ matrices. Thus the number of
vector-vector operations required per iteration scales as $s^2$.
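In a matrix language the generalisation reads as follows ($N$ and $s$ here are arbitrary illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
N, s = 1000, 4

X = rng.standard_normal((N, s))
Y = rng.standard_normal((N, s))
alpha = rng.standard_normal((s, s))

# The point-algorithm axpy and dot generalise to their block forms:
Y = X @ alpha + Y                 # Y = X alpha + Y: s^2 axpy-equivalents
beta = Y.conj().T @ X             # beta = Y^dagger X, now an s x s matrix

assert beta.shape == (s, s)
```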
\vspace{-1ex}
\subsection{B-CGNR and B-Lanczos}
Block Conjugate Gradient \cite{OLeary} can be applied to the normal equation
$M^\dagger M \Psi = M^\dagger \Eta$.
Increasing $s$ yields a clear improvement in convergence, but not enough to
defray the cost of squaring the condition number.
Reference \cite{Henty2} studied a method based on the hermitian block Lanczos
process and applied it to
$\gamma_5 M \Psi = \gamma_5 \Eta$.
B-Lanczos clearly outperforms B-CGNR.
Unfortunately, $\gamma_5$ is a bad
preconditioner for \eqref{BlockEquation}.
\vspace{-1ex}
\subsection{Block B-BiCG($\gamma_5$)}
The algorithm presented here is a special case of, and easily derived from,
the Block Bi-Conjugate Gradient algorithm of O'Leary \cite{OLeary}.
I have used $\gamma_5$-symmetry to eliminate multiplications by
$M^\dagger$, and have followed her important suggestion of
orthonormalising the columns of $P_k$.
\begin{center}
\vspace{3pt} {\bf B-BiCG($\gamma_5$)} \\
\end{center}
\begin{eqnarray}
&& R_0 = \Eta - M \Psi_0 \nonumber \\
&& \rho_0 = R_0^\dagger \gamma_5 R_0 \nonumber \\
&& P_0 \delta_0 = R_0 \label{bicg_g5_qr_a} \\
&& {\rm for\ } k = 0, 1, 2, \ldots
{\rm \ until\ convergence\ do\ } \{ \nonumber \\
&& \hspace*{5mm} T = M P_k \nonumber \\
&& \hspace*{5mm} \alpha_k = \left( P_k^\dagger \gamma_5 T \right)^{-1}
\delta_k^{-\dagger} \rho_k \nonumber \\
&& \hspace*{5mm} \Psi_{k+1} = P_k \alpha_k + \Psi_k \nonumber \\
&& \hspace*{5mm} R_{k+1} = -T \alpha_k + R_k \nonumber \\
&& \hspace*{5mm} \rho_{k+1} = R_{k+1}^\dagger \gamma_5 R_{k+1} \nonumber \\
&& \hspace*{5mm} \beta_k = \delta_k \rho_k^{-1} \rho_{k+1} \nonumber \\
&& \hspace*{5mm} T = R_{k+1} + P_k \beta_k \nonumber \\
&& \hspace*{5mm} P_{k+1} \delta_{k+1} = T \label{bicg_g5_qr_b} \\
&& \} \nonumber
\end{eqnarray}
The operations (\ref{bicg_g5_qr_a}) and (\ref{bicg_g5_qr_b})
are $QR$ decompositions, as are
(\ref{BQMRMGSa}), (\ref{BQMRMGSb}) and (\ref{BQMRQR}) below.
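As a concrete sketch, B-BiCG($\gamma_5$) transcribes almost line-for-line into numpy. The test matrix below (a random $\gamma_5$-symmetric $M = \gamma_5 H$ with $H$ hermitian positive definite, and a signature matrix standing in for $\gamma_5$) is purely illustrative, not a lattice Dirac operator:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 40, 2

# gamma5-symmetric test matrix: M = g5 @ H with H hermitian positive definite
g5 = np.diag(np.r_[np.ones(N // 2), -np.ones(N // 2)])
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = A @ A.conj().T + N * np.eye(N)
M = g5 @ H
Eta = rng.standard_normal((N, s)) + 1j * rng.standard_normal((N, s))

Psi = np.zeros((N, s), dtype=complex)
R = Eta - M @ Psi                              # R_0 = Eta - M Psi_0
rho = R.conj().T @ g5 @ R                      # rho_0 = R_0^dag g5 R_0
P, delta = np.linalg.qr(R)                     # QR step: P_0 delta_0 = R_0

for k in range(300):
    T = M @ P
    # alpha_k = (P^dag g5 T)^{-1} delta^{-dag} rho
    alpha = np.linalg.solve(P.conj().T @ g5 @ T,
                            np.linalg.solve(delta.conj().T, rho))
    Psi = P @ alpha + Psi
    R = -T @ alpha + R
    if np.linalg.norm(R) < 1e-10 * np.linalg.norm(Eta):
        break
    rho_next = R.conj().T @ g5 @ R
    beta = delta @ np.linalg.solve(rho, rho_next)   # beta_k = delta rho^{-1} rho_next
    P, delta = np.linalg.qr(R + P @ beta)           # QR: P_{k+1} delta_{k+1} = R + P beta
    rho = rho_next

print(np.linalg.norm(Eta - M @ Psi) / np.linalg.norm(Eta))  # small relative residual
```

Note that only products with $M$ appear; the $\gamma_5$ bilinear forms replace every multiplication by $M^\dagger$ that plain block BiCG would require.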
\vspace{-1ex}
\subsection{Block QMR}
The original QMR (Quasi-Minimal Residual) algorithm
\cite{FreundAndNachtigal} used blocks of variable sizes
in look-ahead steps to avoid Lanczos breakdown. Boyse and Seidl \cite{Boyse}
described a block version of QMR for complex symmetric matrices,
using fixed-size blocks to accelerate convergence.
Subsequently Freund and Malhotra \cite{FreundAndMalhotra}
discovered a non-hermitian version.
The version presented here was developed by me independently
of \cite{FreundAndMalhotra}.
\begin{center}
\vspace{3pt} {\bf B-QMR($\gamma_5$)}
\end{center}
\begin{eqnarray}
&& P_0 = P_{-1} = V_0 = 0 \nonumber \\
&& c_0 = b_{-1} = b_0 = 0;
\ a_0 = d_{-1} = d_0 = I \nonumber \\
&& R_0 = \tilde{V}_1 = \Eta - M \Psi_0 \nonumber \\
&& V_1 \rho_1 = \tilde{V}_1 \label{BQMRMGSa} \\
&& \tilde{\tau}_1 = \rho_1 \nonumber \\
&& {\rm for\ } k = 1, 2, \ldots
{\rm \ until\ convergence\ do\ } \{ \nonumber \\
&& \hspace*{5mm} \delta_k = V_k^\dagger \gamma_5 V_k \nonumber \\
&& \hspace*{5mm} \beta_k = \delta_{k-1}^{-1} \rho_k^\dagger \delta_k \nonumber \\
&& \hspace*{5mm} T = M V_k - V_{k-1} \beta_k \nonumber \\
&& \hspace*{5mm} \alpha_k = \delta_k^{-1} V_k^\dagger \gamma_5 T \nonumber \\
&& \hspace*{5mm} \tilde{V}_{k+1} = T - V_k \alpha_k \nonumber \\
&& \hspace*{5mm} V_{k+1} \rho_{k+1} = \tilde{V}_{k+1} \label{BQMRMGSb} \\
&& \hspace*{5mm} \theta_k = b_{k-2} \beta_k \nonumber \\
&& \hspace*{5mm} \epsilon_k = a_{k-1} d_{k-2} \beta_k + b_{k-1} \alpha_k \nonumber \\
&& \hspace*{5mm} \tilde{\zeta}_k = c_{k-1} d_{k-2} \beta_k + d_{k-1} \alpha_k \nonumber \\
&& \hspace*{5mm}
\left( \begin{array}{cc} a_k & b_k \\ c_k & d_k \end{array} \right) ^\dagger
\left( \begin{array}{c} \zeta_k \\ 0 \end{array} \right)
=
\left( \begin{array}{c} \tilde{\zeta}_k \\ \rho_{k+1} \end{array} \right)
\label{BQMRQR} \\
&& \hspace*{5mm} P_k = (V_k - P_{k-1} \epsilon_k
- P_{k-2} \theta_k) \zeta_k^{-1} \nonumber \\
&& \hspace*{5mm} \tau_k = a_k \tilde{\tau}_k;
\ \tilde{\tau}_{k+1} = c_k \tilde{\tau}_k \nonumber \\
&& \hspace*{5mm} \Psi_k = \Psi_{k-1} + P_k \tau_k \nonumber \\
&& \} \nonumber
\end{eqnarray}
The algorithm does not give a recurrence for the residual $R_k$,
but the fact that ${\rm Tr}\ \tilde{\tau}_k^\dagger \tilde{\tau}_k$
is of the same order of magnitude as ${\rm Tr}\ R_k^\dagger R_k$
is useful in formulating a stopping criterion.
This version does nothing to address the problems of Lanczos breakdown
and (near) linear dependence in the columns of $V_k$.
These can be brought under control, as shown in \cite{FreundAndMalhotra}.
\begin{minipage}{72mm}
\begin{center}
\leavevmode
{\setlength{\epsfxsize}{70mm} \setlength{\epsfysize}{60mm}
\epsfbox{BBiCG4000cvghist.eps}}
{\setlength{\epsfxsize}{70mm} \setlength{\epsfysize}{60mm}
\epsfbox{BQMR4000cvghist.true.eps}}
{\small
Convergence histories for B-BiCG($\gamma_5$) and B-QMR($\gamma_5$).\\
$s = 1$ (---), $2$ ($\cdot \cdot \cdot$), $3$ ($\cdot - \cdot$),
$4$ (- - -), $6$ (-- --); \\
$\kappa = 0.14$,
$V=8^4$, $\beta=5.7$,
$C_{SW} = 1.5678$.
}
\end{center}
\end{minipage}
\vspace{-1ex}
\section{Conclusions}
\vspace{-1ex}
Block algorithms can reduce the number of matrix-vector operations
required for convergence at the expense of more vector-vector operations.
This does not necessarily lead to a reduction in wall-clock time.
B-QMR($\gamma_5$) and B-BiCG($\gamma_5$) are clearly faster than
B-CGNR and B-Lanczos,
and in some regimes (light masses and small volumes) they
significantly outperform BiCGSTAB.
However, on lattices of realistic size, the improvements from blocking
are marginal at best.
At $\beta=6.0$ quenched, $V=16^3 \times 48$, I found that
$s=1$ is near-optimal for both
B-QMR($\gamma_5$) and B-BiCG($\gamma_5$), unless the configuration
is exceptional.
At $\beta=5.2$, $N_f=2$, $V=12^3 \times 24$,
I found $s=1$ to be optimal for the same algorithms
at all $\kappa_{valence}$ and $\kappa_{sea}$ combinations studied in
\cite{Talevi}.
These conclusions should be reviewed if the relative cost of matrix-vector
to vector-vector operations increases significantly. Block algorithms
may yet have a role to play in conjunction with highly-improved actions
on coarse lattices.
\vspace{-1ex}
\section*{ACKNOWLEDGMENTS}
\vspace{-1ex}
I thank R.~Freund, U.~Gl\"{a}ssner and D.~Henty for helpful communications and
discussions. I used resources funded by
PPARC grant GR/L22744 and EPSRC grants GR/K41663 and GR/K55745.
\vspace{-1ex}
\section{Introduction}
The \textit{Kepler} exoplanet survey has revolutionized our understanding of planetary systems around other stars \citep[e.g.][]{2017PAPhS.161...38B}. During the first four years of the mission, hereafter \textit{Kepler}\xspace, it discovered thousands of exoplanets and exoplanet candidates, the majority of which are smaller than Neptune and orbit close to their host stars.
Because transit surveys can detect only a small fraction of the exoplanets, a completeness correction is necessary to understand the true population of exoplanets \citep[e.g.][]{2014PNAS..11112647B}.
The planet occurrence rate, defined as the average number of planets per star, is typically found to be of order unity for planets in the orbital period and planet radius range detectable with \textit{Kepler} \citep{2011ApJ...742...38Y,2013ApJ...766...81F}.
Much progress has been made in recent years in understanding the survey detection efficiency of the \textit{Kepler}\xspace mission \citep[e.g.][]{2015ApJ...810...95C,2016ApJ...828...99C}.
While earth-sized planets in the habitable zone of sun-like stars are just below the detection limits of \textit{Kepler} \citep{2018ApJS..235...38T}, occurrence rates of earth-sized planets in the habitable zone, $\eta_\oplus$, can be estimated by considering smaller M and K dwarf stars \citep{2013ApJ...767...95D,2015ApJ...807...45D} or by extrapolating from shorter orbital periods and/or larger planet radii \citep{2013PNAS..11019273P,2014ApJ...795...64F,2015ApJ...809....8B}.
The exoplanet population discovered by the \textit{Kepler}\xspace mission has been characterized in great detail. Key findings include that planet occurrence rates increase with decreasing planet size \citep{2012ApJS..201...15H} and remain roughly constant for planets smaller than $3 ~R_\oplus$ \citep{2013ApJ...770...69P,2014ApJ...791...10M,2015ApJ...814..130M}. Recently, a gap in the planet radius distribution has been identified around $1.5-2.0~R_\oplus$ \citep{2017AJ....154..109F,2017arXiv171005398V}. The occurrence of sub-Neptunes increases with distance from the host star out to an orbital period of $\sim10$ days, after which it remains roughly constant \citep{2012ApJS..201...15H,2015ApJ...798..112M}, in contrast to planets larger than Neptune, whose occurrence increases with orbital period \citep{2013ApJ...778...53D}.
The presence of multiple transiting planets around the same star provides important additional constraints on planetary orbital architectures.
The large number of observed systems with multiple transiting planets indicate that planets are preferentially located in systems with small mutual inclinations \citep{2011ApJS..197....8L,2012ApJ...750..112L,2012ApJ...761...92F}. Their orbital spacings follow a peaked distribution without a clear preference for being in orbital resonances \citep{2014ApJ...790..146F}.
However, the majority of planets in multi-planet systems are not transiting, and correcting planet occurrence rates derived from the observed planetary architectures for non-transiting or otherwise undetected planets is not straightforward. The true multiplicity of planetary systems can only be constrained by combining the observed multiplicity with assumptions about the mutual inclination distribution (\citealt{2012AJ....143...94T}, see also \citealt{2016ApJ...821...47B}).
Ensemble populations of systems with six or more planets with mutual inclinations of a few degrees provide a good match to the observed population of multi-planet systems \citep{2011ApJS..197....8L,2012ApJ...761...92F,2014ApJ...790..146F,2016ApJ...816...66B}.
However, these simulations under-predict the number of single transiting systems by a factor of 2, indicating that an additional population of planets must be present that is either intrinsically single or has high mutual inclinations.
The occurrence rates and planetary architectures place strong constraints on planet formation models \citep[e.g.][]{2013ApJ...775...53H,2016ApJ...822...54D}.
However, the quantitative comparison between planet formation and orbital evolution models and the observed exoplanet population has been greatly complicated by observational biases. Published comparisons have therefore resorted to explaining the presence of specific planet sub-populations by identifying processes that can give rise to such planets. While these studies are essential, such qualitative comparisons cannot make full use of the wealth of information represented by the overall exoplanet population statistics.
To address this limitation we introduce \texttt{EPOS}\xspace\footnote{\url{https://github.com/GijsMulders/epos}}, the Exoplanet Population Observation Simulator, that provides a \texttt{Python} interface comparing planet population synthesis models to exoplanet survey data, taking into account the detection biases that skew the observed distributions of planet properties. \texttt{EPOS}\xspace uses a forward modeling approach to simulate observable exoplanet populations and constrain their properties via Markov Chain Monte Carlo simulation using \texttt{emcee}\xspace \citep{2013PASP..125..306F}, and has already been employed in two different studies \citep{2018arXiv180209602K,2018arXiv180300777P}.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{fig1.pdf}
\caption{
Flowchart of the Exoplanet Population Observation Simulator.
A description of all mathematical symbols can be found in table \ref{t:sym}.
}
\label{f:flowchart}
\end{figure*}
In this paper, we verify this approach using parametric models of planet populations, that we compare to the final data release of the \textit{Kepler}\xspace mission, \texttt{DR25}.
From this, we are able to make a statistical evaluation of the properties of exoplanetary systems.
Among other results we report on the location of the innermost planet in planetary systems to place our Solar System in the context of exoplanet systems. We examine how the innermost planets in most of the \textit{Kepler}\xspace systems are located much closer in than Mercury and Venus.
In an upcoming paper, we will use \texttt{EPOS}\xspace to make a direct comparison between planet formation models and exoplanet populations.
\section{Code description}\label{s:code}
The Exoplanet Population Observation Simulator, \texttt{EPOS}\xspace, employs a forward modeling approach to constrain exoplanet populations through the following iterative procedure:
\begin{description}
\item [Step 1] Define a distribution of planetary systems from analytic forms for planet occurrence rates and planetary architectures or from a planet population synthesis model.
\item [Step 2] From this derive a transiting planet population by assigning random orientations to each system and evaluating which planets transit their host stars.
\item [Step 3] Determine which of the transiting planets would be detected by \textit{Kepler}\xspace, accounting for detection efficiency.
\item [Step 4] Compare the detectable planet population with exoplanet survey data using a summary statistic.
\item [Step 5] Repeat steps 1-4 until the simulated detectable planet population matches the observed planet population to constrain the intrinsic distribution of planetary systems.
\end{description}
In this section we will describe each step in greater detail.
Steps 1-4 take less than a second to run on a single-core CPU, allowing for an efficient sampling of the parameter space using MCMC methods.
Figure \ref{f:flowchart} summarizes these steps and their quantitative implementation in a flowchart. A description of all mathematical symbols used in this paper can be found in the Appendix, Table \ref{t:sym}.
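The iteration can be summarized in a few lines of code. The sketch below is a toy version of steps 1--3 with placeholder distributions and a sun-like star; it is not the actual \texttt{EPOS}\xspace implementation, and every function name and number in it is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(eta, n_star=160_000):
    """Steps 1-3: draw planets, keep the transiting ones, then keep
    the detectable ones. All distributions here are placeholders."""
    # Step 1: eta planets per star, log-uniform in period and radius.
    n_p = int(eta * n_star)
    P = 10 ** rng.uniform(np.log10(0.5), np.log10(730), n_p)   # days
    R = 10 ** rng.uniform(np.log10(0.3), np.log10(20), n_p)    # R_Earth
    # Step 2: geometric transit probability R_star/a for a sun-like star.
    a = (P / 365.25) ** (2 / 3)                  # au, Kepler's third law
    transits = rng.random(n_p) < 0.00465 / a     # 0.00465 au = 1 R_sun
    # Step 3: a toy detection efficiency rising with planet radius.
    detected = transits & (rng.random(n_p) < np.clip(R / 20, 0, 1))
    return P[detected], R[detected]

P_d, R_d = forward_model(eta=2.0)
# Step 4 would compare {P_d, R_d} to the observed Kepler planets via a
# summary statistic; step 5 wraps the whole loop in an MCMC sampler.
```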
\subsection{Step 1: Planet Distributions}\label{s:step:1}
Planet populations are generated using a Monte Carlo simulation by random draws from a multi-dimensional probability distribution function, $f$, which represents the intrinsic distribution of planets and planetary systems. We simulate a planet survey equal in size to the \textit{Kepler}\xspace survey, roughly $160,000$ stars.
The parametric descriptions of $f$ are based on studies of planet occurrence rate and planet multiplicity from \textit{Kepler}\xspace. Distributions based on planet formation models will be described in an upcoming paper.
Here we describe two parametric descriptions for the planet population $f$ that correspond to two different modes in \texttt{EPOS}\xspace:
\begin{description}
\item[Occurrence rate mode] Simulates only the distribution of planet radius, $R$, and orbital period, $P$, as $f=\ensuremath{f_{\rm pl}}\xspace(R,P)$. This mode is similar to occurrence rate calculations that estimate the average number of planets per star, $\eta$, in a certain period and radius range.
This mode can also be used to estimate the planet mass distribution for comparison with microlensing data, see \cite{2018arXiv180300777P} for details.
\item[Multi-planet mode] Simulates multiple planets per system, taking into account their relative spacing and mutual inclination, $\Delta i$, to estimate orbital architectures. The properties of each planet in the system are drawn from a distribution $f=f_k(R,P,i)$, where $i$ is the planet's orbital inclination and $k$ is an index for each planet in the system.
\end{description}
\begin{figure}
\includegraphics[width=\linewidth]{fig2.pdf}
\caption{Example planet probability distribution function, $\ensuremath{f_{\rm pl}}\xspace(P,R)$.
The color indicates the planet occurrence rate in percent per unit area of $d\ln P d\ln R$.
The side panels show the marginalized distributions, $f_P(P)$ in units of $d\ln P$ and $f_R(R)$ in units of $d\ln R$.
}
\label{f:panels}
\end{figure}
\subsubsection{Occurrence rate mode}\label{s:para}
The central assumption we use in this paper is that the planet occurrence rate distribution is a separable function in planet radius and orbital period,
\begin{equation}\label{eq:pdf}
\ensuremath{f_{\rm pl}}\xspace(P,R) \propto f_P(P) f_R(R).
\end{equation}
The distribution is normalized such that integral over the simulated period and planet radius range equals the number of planets per star, $\eta$:
\begin{equation}
\eta=\int_P \int_R \ensuremath{f_{\rm pl}}\xspace(P,R) ~d{\log}P ~d{\log}R
\end{equation}
The planet orbital period distribution is described by a broken power law
\begin{equation}\label{eq:period}
f_P(P)=
\begin{cases}
(P/P_{\rm break})^{a_P},& \text{if } P < P_{\rm break}\\
(P/P_{\rm break})^{b_P},& \text{otherwise}
\end{cases}
\end{equation}
where the break reflects the observed flattening of sub-Neptune occurrence rates around an orbital period of ten days \citep[e.g.][]{2012ApJS..201...15H,2015ApJ...798..112M}. In this example we use $a_P=1.5$ for the power law index of the increase interior to $P_{\rm break}=10$ days, and a flat occurrence rate $b_P=0$, at longer periods but we will refine these values later.
The planet radius distribution function is similarly described by a broken power law
\begin{equation}\label{eq:radius}
f_R(R)=
\begin{cases}
(R/R_{\rm break})^{a_R},& \text{if } R < R_{\rm break}\\
(R/R_{\rm break})^{b_R},& \text{otherwise}
\end{cases}
\end{equation}
reflecting the observed increase of small planets with decreasing radius ($b_R=-4$), and the ``plateau'' ($a_R=0$) of constant occurrence rates for planets smaller than $R_{\rm break}=3 ~R_\oplus$ \citep[e.g.][]{2012ApJS..201...15H, 2013ApJ...770...69P}.
The probability distribution function of planet occurrence thus has 7 free parameters ($\eta$, $P_{\rm break}$, $a_P$, $b_P$, $R_{\rm break}$, $a_R$, $b_R$). An example for $\eta=2$ is shown in Figure \ref{f:panels}.
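For concreteness, Equations \ref{eq:pdf}--\ref{eq:radius} can be evaluated as follows (a sketch in \texttt{Python}; the normalization constant that makes the distribution integrate to $\eta$ is omitted for brevity):

```python
import numpy as np

def broken_powerlaw(x, x_break, a, b):
    """Broken power law, continuous and equal to 1 at x = x_break."""
    x = np.asarray(x, dtype=float)
    return np.where(x < x_break, (x / x_break) ** a, (x / x_break) ** b)

def f_pl(P, R, P_break=10.0, a_P=1.5, b_P=0.0,
         R_break=3.0, a_R=0.0, b_R=-4.0):
    """Un-normalized f_pl(P, R) = f_P(P) f_R(R), with P in days and
    R in Earth radii, using the example parameter values."""
    return (broken_powerlaw(P, P_break, a_P, b_P)
            * broken_powerlaw(R, R_break, a_R, b_R))

# Continuous across the period break at 10 days:
assert np.isclose(f_pl(10 - 1e-9, 1.0), f_pl(10 + 1e-9, 1.0))
# Occurrence drops as R^-4 above the radius break at 3 R_Earth:
assert np.isclose(f_pl(5.0, 6.0) / f_pl(5.0, 3.0), 2.0 ** -4)
```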
\subsubsection{Multi-planet mode}\label{s:multi}
In its multi-planet mode \texttt{EPOS}\xspace simulates a planetary system for each star. Each planet in the system, denoted by subscript $k$, is characterized by its radius, orbital period, and orbital inclination with respect to the observer, $i$, and there are $m$ planets per system.
Because the properties of planets in multi-planet systems tend to be correlated, we draw a typical set of planet properties for each system ($R_s,P_s,i_s$) on which the properties of each planet in the system ($R_k,P_k,i_k$) are dependent:
\begin{equation}\label{eq:pdf:multi}
\begin{aligned}
f_{\text{pl},k}(R,P,i|R_s,P_s,i_s) \propto
&f_{R,k}(R|R_s)\\
&f_{P,k}(P|P_s)\\
&f_{i,k}(i|i_s).
\end{aligned}
\end{equation}
where we make the assumption that distributions of planet size, period, and inclination are not interdependent.
Previous studies have shown that planets in multi-planet systems tend to be more similar in size and more regularly spaced than random pairings from the overall distribution \citep{2017ApJ...849L..33M,2018AJ....155...48W} and have low mutual inclinations of a few degrees \citep[e.g.][]{2014ApJ...790..146F}. This motivates our choice to not draw the properties of each planet independently from a distribution $\ensuremath{f_{\rm pl}}\xspace(R,P,i)$.
The system properties ($R_s,P_s,i_s$) are drawn from a distribution
\begin{equation}
g_\text{pl}(R_s,P_s,i_s) \propto
g_R(R_s) g_P(P_s) g_i(i_s)
\end{equation}
where once again we make the assumption that the distributions of planet size, period, and inclination are separable functions of each variable. The distribution $g_\text{pl}$ is normalized such that the integral over the simulated parameter range is equal to the fraction of stars with planetary systems, $\eta_s$:
\begin{equation}
\eta_s=\int_P \int_R \int_i g_\text{pl}(P,R,i) ~d{\log}P ~d{\log}R ~di
\end{equation}
The parameterization of the system and planet properties is described below. The system architecture is illustrated in Figure \ref{f:art}.
\begin{figure}
\includegraphics[width=\linewidth]{fig3.pdf}
\caption{Illustration of the planetary system architecture.
$P_\text{in}$ denotes the orbital period of the innermost planet, while $\mathcal{P}$ denotes the period ratio between adjacent planets.
The system is inclined with respect to the observer by an angle $i_s$,
and each planet has a mutual inclination $\delta i$ with respect to this system inclination.
Note that each planet has a different mutual inclination and therefore a different inclination with respect to the line of sight.
All orbits are assumed to be circular, and the ellipticity of the projected orbits is due to their respective inclinations.
The probability distributions for $i_s$, $\delta i$, $\mathcal{P}$, and $P_\text{in}$ are described in the text.
\label{f:art}
}
\end{figure}
\paragraph{Orbital Inclination}
A planet's orbital inclination with respect to the line of sight, $i_p$, depends on the inclination of the system, $i_s$, the mutual inclination of the planet with respect to that system, $\delta i$, and the longitude of the ascending node, $\delta \Omega$.
We assume that the planet mutual inclinations, $\delta i$, follow a Rayleigh distribution:
\begin{equation}\label{eq:pldeltainc}
f_{\delta i,k}(\delta i)= \frac{\delta i}{{\Delta i}^2} e^{-\delta i^2/(2 {\Delta i}^2)},
\end{equation}
where $\Delta i$ is the mode of the mutual inclination distribution of planetary orbits, which is typically $1-3\degr$ \citep{2014ApJ...790..146F}. The distribution of system inclinations with respect to the observer, $i_s$, is proportional to $\cos(i_s)$:
\begin{equation}\label{eq:sysinc}
g_i(i_s) \propto \cos(i_s)
\end{equation}
where $i_s=0$ is an edge-on orbit.
The distribution of planet inclinations with respect to the observer is then
\begin{equation}\label{eq:plinc}
f_{i,k}(i|i_s)= |i_s + f_{\delta i,k}(\delta i) \cos(\pi \delta \Omega)|
\end{equation}
where $\delta \Omega$ is the longitude of the ascending node with respect to the observer.
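These draws are straightforward to implement. A sketch in \texttt{Python} (drawing one planet per system for brevity; note that NumPy's Rayleigh scale parameter coincides with the mode $\Delta i$, and the $2\degr$ value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# System inclinations: g_i(i_s) ~ cos(i_s) on [0, pi/2]; the CDF is
# sin(i_s), so inverse-transform sampling gives i_s = arcsin(chi).
i_s = np.arcsin(rng.random(n))            # radians; i_s = 0 is edge-on

# Mutual inclinations: Rayleigh distribution with mode Delta_i = 2 deg.
delta_i = rng.rayleigh(scale=np.radians(2.0), size=n)

# Random ascending node, then the line-of-sight inclination of each
# planet's orbit: i_k = |i_s + delta_i * cos(pi * delta_Omega)|.
delta_Omega = rng.random(n)
i_k = np.abs(i_s + delta_i * np.cos(np.pi * delta_Omega))

# Sanity check: the mean of a cos-distributed inclination is pi/2 - 1.
assert abs(i_s.mean() - (np.pi / 2 - 1)) < 0.01
```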
\paragraph{Orbital Period}
We assume that the planets are regularly spaced in the logarithm of the orbital period,
with the location of the first planet ($k=0$) defining the location of additional planets ($k=1...m$) in the system. The period ratio of adjacent planets is denoted by $\mathcal{P}_k \equiv P_{k+1}/P_{k}$ where $P_{k+1}$ is always the planet with the larger orbital period.
The location of the first planet in the system, ($P_0 \equiv P_s$), is parameterized by the broken power--law distribution described by Equation \ref{eq:period}, but with a steeper decline in planet occurrence with orbital period ($b_P\lesssim 0$) that we will constrain in the fitting process.
Observed orbital spacings follow a broad range in period ratios, and we follow \cite{2015ApJ...808...71M} in parameterizing the distribution of dimensionless spacings, $D_k=2\frac{\mathcal{P}_k^{2/3}-1}{\mathcal{P}_k^{2/3}+1}$, as a log-normal distribution.
The orbital period distribution of the k-th planet in the system is then given by
\begin{equation}\label{eq:dP}
f_{P,k}(P|P_{k-1})= \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(\log D_k-D)^2}{2 \sigma^2}}
\end{equation}
with respect to the orbital period of the previous planet, $P_{k-1}$. $D$ and $\sigma$ are free parameters that characterize the median and width of the distribution, with typical values of $D\approx-0.4$ and $\sigma \approx 0.2$.
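Sampling from this spacing distribution amounts to drawing $\log D_k$ from a normal distribution and inverting the definition of $D_k$, which gives $\mathcal{P}_k = \left((2+D_k)/(2-D_k)\right)^{3/2}$. A sketch (the clip just below $D_k=2$ is our addition, to keep the inversion finite for rare tail draws):

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_period_ratio(D=-0.4, sigma=0.2, size=1):
    """Draw adjacent-planet period ratios P_{k+1}/P_k by sampling
    log10 D_k ~ Normal(D, sigma) and inverting the definition
    D_k = 2 (Pr^{2/3} - 1) / (Pr^{2/3} + 1)."""
    D_k = 10 ** rng.normal(D, sigma, size)
    # D_k -> 2 corresponds to an infinite period ratio; clip the
    # (very rare) tail draws so the inversion stays finite.
    D_k = np.minimum(D_k, 1.99)
    return ((2 + D_k) / (2 - D_k)) ** 1.5

ratios = draw_period_ratio(size=100_000)

# The median spacing 10^-0.4 ~ 0.4 maps to a period ratio near 1.8,
# typical of adjacent Kepler planet pairs.
median_expected = ((2 + 10 ** -0.4) / (2 - 10 ** -0.4)) ** 1.5
assert np.all(ratios > 1)
assert abs(np.median(ratios) - median_expected) < 0.02
```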
\paragraph{Planet Radius}
We assume all planets in the system are of equal size
\begin{equation}
f_{R,k}(R|R_s)= g_R(R_s)
\end{equation}
where we assume the radius distribution follows Equation \ref{eq:radius}.
We choose equal-sized planets over randomly assigned sizes because planets in multi-planet systems tend to be of similar size \citep{2011ApJS..197....8L,2017ApJ...849L..33M,2018AJ....155...48W}.
While there are observed trends of increasing planet size with orbital period, these trends are strongest at short orbital periods \citep[e.g.][]{2018arXiv180405069C}
and for large planet sizes \citep{2013ApJ...763...41C,2016ApJ...825...98H} that we will exclude from our observational comparison, see \S \ref{s:obs}.
Thus the number of free parameters describing the population of planetary systems is 11 ($\eta$, $m$, $\Delta i$, $P_\text{break}$, $a_P$, $b_P$, $D$, $\sigma$, $R_\text{break}$, $a_R$, $b_R$).
\begin{figure}
\includegraphics[width=\linewidth]{fig4.pdf}
\caption{Simulated sample of transiting (pink) and detectable (blue) planets generated in occurrence rate mode, see Figure \ref{f:panels}.
}
\label{f:PR:transit}
\end{figure}
\subsection{Step 2: Transiting Planet Populations}\label{s:step2}
The synthetic planet population is simulated using a Monte Carlo approach by sampling the distribution function of planet properties (Eq. \ref{eq:pdf} or \ref{eq:pdf:multi}) $n_p$ times. We simulate planetary systems in the period range $[0.5,730]$ days and $[0.3,20] R_\oplus$ (Figure \ref{f:PR:transit}). For each planet, 2 random numbers are drawn to determine its properties. The first random number is used to determine the planet orbital period from the cumulative distribution function of Eq. \ref{eq:period}. The second random number is used to determine the planet radius from the cumulative distribution function of Eq. \ref{eq:radius}.
The total number of draws in the synthetic survey is $n_p=\eta\, n_\star$: the number of stars in the survey ($n_\star \approx160,000$) multiplied by the average number of planets per star, $\eta$. In occurrence rate mode, with $\eta\approx2$, the simulated sample $\{R,P\}_p$ consists of $n_p\approx240,000$ planets. In multi-planet mode, with $\eta_s\approx0.35$ and $m\approx 6$, the simulated sample $\{R,P,i\}_s$
consists of the same number of planets distributed across $n_s\approx40,000$ systems. Each star in the survey is assigned a unique identifier, {\rm ID}\xspace, to keep track of the observable planet multiplicity.
In occurrence rate mode, the subset of transiting planets can be simulated by considering the geometric transit probability
\begin{equation}\label{eq:fgeo}
\ensuremath{f_{\rm geo}}\xspace = R_\star/a.
\end{equation}
The semi-major axis $a$ is calculated using Kepler's third law. The stellar mass and radius are the average values of the surveyed stars (see next section).
A planet is transiting if
\begin{equation}\label{eq:mc:geo}
\chi < \ensuremath{f_{\rm geo}}\xspace
\end{equation}
where $\chi$ is a continuous random variable between $0$ and $1$. The transiting planet sample, $\{R,P\}_t$, typically contains $10,000$ planets. An example is shown in Figure \ref{f:PR:transit}.
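The transit draw of Equations \ref{eq:fgeo} and \ref{eq:mc:geo} reduces to a few lines. A sketch using the median stellar mass and radius of the surveyed stars (see next section); the log-uniform period draws are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

M_star, R_star = 0.95, 0.94    # median stellar properties, solar units
R_SUN_AU = 0.00465             # one solar radius in au

# Illustrative period draws (log-uniform over the simulated range):
P = 10 ** rng.uniform(np.log10(0.5), np.log10(730), 200_000)  # days

# Kepler's third law, a/au = (M/M_sun)^(1/3) (P/yr)^(2/3):
a = M_star ** (1 / 3) * (P / 365.25) ** (2 / 3)

f_geo = R_star * R_SUN_AU / a              # geometric transit probability
transiting = rng.random(P.size) < f_geo    # the Monte Carlo transit draw

# Only a few percent of the drawn planets transit:
assert 0.01 < transiting.mean() < 0.15
```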
\subsubsection{Multi-planet transit probability}
The Monte Carlo simulation of transiting planetary systems goes as follows.
First, the system inclination, $i_s$, is drawn according to Equation \ref{eq:sysinc}. Each draw from a distribution is performed by generating a random number between 0 and 1 and interpolating the parameter value from the normalized cumulative distribution.
Next, the mutual inclination of the orbit of each planet in the system, $\delta i_{p,k}$, is drawn from Equation \ref{eq:pldeltainc}. Then the longitude of the ascending node with respect to the observer, $\delta \Omega$, is drawn for each planet from a uniform distribution, $\delta \Omega= \chi$. The inclination, $i_k$, of each planet's orbit with respect to the line of sight can then be calculated from Equation \ref{eq:plinc}.
The transiting planet population, $\{R,P\}_t$, is defined by planets that traverse the stellar limb, given by
\begin{equation}\label{eq:multi:fgeo}
i_k<\arcsin(\ensuremath{f_{\rm geo}}\xspace),
\end{equation}
where \ensuremath{f_{\rm geo}}\xspace is the geometric transit probability from Eq. \ref{eq:fgeo}. We note that for a single planet ($\delta i_{p,k}=0$), this expression is equivalent to the geometric transit probability of Eq. \ref{eq:fgeo}. This expression is only valid for small mutual inclinations.
To account for the \textit{Kepler dichotomy}, the apparent excess of single transiting systems \citep[e.g.][]{2016ApJ...816...66B}, we assume that a fraction \ensuremath{f_{\rm iso}}\xspace of planetary systems has an isotropic distribution of orbits described by Eq. \ref{eq:fgeo} instead of Eq. \ref{eq:multi:fgeo}. Typically, $\ensuremath{f_{\rm iso}}\xspace \approx 0.5$, but we will treat it as a free parameter to be constrained from the data.
\begin{figure*}
\includegraphics[width=0.33\linewidth]{fig5a.pdf}
\includegraphics[width=0.33\linewidth]{fig5b.pdf}
\includegraphics[width=0.33\linewidth]{fig5c.pdf}
\caption{Planet detection efficiency for dwarf stars from the \textit{Kepler}\xspace mission for the \texttt{DR25} data release as function of orbital period and planet radius.
Left Panel: The detection efficiency, \ensuremath{f_{\rm S/N}}\xspace, the probability that a planet candidate is detected based on the signal-to-noise of the transit.
Middle Panel: The vetting efficiency, \ensuremath{f_{\rm vet}}\xspace, the probability that a planet is classified as a reliable planet candidate (a \textit{Robovetter} disposition score of $\geq 0.9$).
Right Panel: The survey completeness, \ensuremath{f_{\rm det}}\xspace, which includes both the detection and vetting efficiency as well as the geometric transit probability, $\ensuremath{f_{\rm det}}\xspace= \ensuremath{f_{\rm geo}}\xspace ~\ensuremath{f_{\rm S/N}}\xspace ~\ensuremath{f_{\rm vet}}\xspace$.}
\label{f:eff}
\end{figure*}
\subsection{Step 3: Survey Detection Efficiency}\label{s:step3}
We simulate a detectable planet sample, $\{R,P\}_d$, from the simulated sample of transiting planets, $\{R,P\}_t$, by taking into account the survey detection efficiency, $\ensuremath{f_{\rm S/N}}\xspace(R,P)$, and the vetting completeness, $\ensuremath{f_{\rm vet}}\xspace(R,P)$, described below and displayed in Figure \ref{f:eff}. A planet is detectable if:
\begin{equation}\label{eq:snr}
\chi < \ensuremath{f_{\rm S/N}}\xspace \ensuremath{f_{\rm vet}}\xspace
\end{equation}
where $\chi$ is a continuous random variable between 0 and 1. $\ensuremath{f_{\rm S/N}}\xspace(R,P)$ is the detection efficiency based on the combined signal-to-noise ratio of all planet transits. $\ensuremath{f_{\rm vet}}\xspace(R,P)$ is the detection efficiency of the \textit{Kepler}\xspace \textit{Robovetter} for a planet candidate sample with high reliability.
We calculate detection efficiency contours for each individual star observed by \textit{Kepler}\xspace using \texttt{KeplerPORTs}\footnote{\url{https://github.com/nasa/KeplerPORTs}} \citep{2017ksci.rept...19B}.
We included all stars that were fully searched by the Kepler pipeline, for which stellar properties were available in the \cite{2017ApJS..229...30M} catalog, and for which all detection metrics were available on the exoplanet archive\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/docs/Kepler_completeness_reliability.html}}.
The detection efficiencies were evaluated on a grid with 20 logarithmically-spaced orbital period bins between $0.2$ and $730$ days assuming planets on circular orbits, and 21 logarithmically spaced planet radius bins between $0.2$ and $20 ~R_\oplus$.
After removing giant and sub-giant stars according to the effective-temperature-dependent surface gravity criterion in \cite{2016ApJS..224....2H}, the sample consists of $n_\star= 159,238$ stars, with a median mass of $M_\star=0.95 ~M_\odot$ and median radius of $R_\star=0.94 ~R_\odot$.
We calculate the survey detection efficiency, $\ensuremath{f_{\rm S/N}}\xspace(R,P)$, by averaging the individual contributions of each star; the result is displayed in the left panel of Figure \ref{f:eff}.
Not all transiting planets that are detectable based on the detection efficiency are vetted as planet candidates by the \textit{Kepler}\xspace \textit{Robovetter} \citep{2016ApJS..224...12C}. Including only reliable planet candidates, here defined with a disposition score larger than $0.9$, further reduces the vetting completeness.
The vetting efficiencies were calculated following \cite{2018ApJS..235...38T} based on the \textit{Kepler}\xspace simulated data products\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/docs/KeplerSimulated.html}}.
The vetting efficiency was evaluated on the same radius and period grid as the detection efficiency and using the same criterion to select main-sequence stars.
We use a power-law in planet radius and a broken power-law in orbital period to obtain a smooth function for the vetting completeness \ensuremath{f_{\rm vet}}\xspace
\begin{equation}
\ensuremath{f_{\rm vet}}\xspace= c ~R^{a_R}
\begin{cases}
(P/P_{\rm break})^{a_P},& \text{if } P < P_{\rm break}\\
(P/P_{\rm break})^{b_P},& \text{otherwise}
\end{cases}
\end{equation}
with $c=0.63$, $a_R= 0.19$, $P_{\rm break}=53$ days, $a_P=-0.07$, $b_P=-0.39$ (see Appendix \ref{s:vet} for details).
The best-fit vetting efficiency is shown in Figure \ref{f:eff}. The vetting efficiency varies from close to 100\% for Neptune-sized planets at short orbital periods to 25\% near the habitable zone.
The break in the power-law is needed to match the reduced vetting efficiency for planets at orbital periods larger than $\sim 100$ days as described in \cite{2018ApJS..235...38T}.
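The fitted completeness model is simple to evaluate. A sketch with the best-fit constants above (the cap at unity is our addition, so the power-law extrapolation to large radii stays a probability):

```python
import numpy as np

def f_vet(P, R, c=0.63, a_R=0.19, P_break=53.0, a_P=-0.07, b_P=-0.39):
    """Smooth vetting completeness: a power law in planet radius times
    a broken power law in orbital period (P in days, R in R_Earth)."""
    P, R = np.asarray(P, dtype=float), np.asarray(R, dtype=float)
    period = np.where(P < P_break,
                      (P / P_break) ** a_P,
                      (P / P_break) ** b_P)
    # Capped at 1 so the extrapolation remains a valid probability.
    return np.minimum(c * R ** a_R * period, 1.0)

# Nearly complete for warm Neptune-sized planets, but much lower for
# small planets at long orbital periods, as quoted in the text:
assert f_vet(10.0, 4.0) > 0.85
assert f_vet(400.0, 1.0) < 0.35
```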
\begin{figure}
\includegraphics[width=\linewidth]{fig6.pdf}
\caption{Simulated sample of detectable planetary systems. Planets with no additional planets detected in the system are color-coded in gray.
Colors indicate the number of observed planets per system.
}
\label{f:PR:multi}
\end{figure}
The detectable planet sample, $\{R,P\}_d$, typically contains about $4,000$ planets (Figure \ref{f:PR:transit}). In multi-planet mode, \texttt{EPOS}\xspace also keeps track of which planets are observable as part of multi-planet systems (Figure \ref{f:PR:multi}). From the observed multi-planet systems we generate a set of summary statistics: $N_k$, the number of stars with $k$ detectable planets;
$\mathcal{P}$, the orbital period ratio between adjacent planets (Fig. \ref{f:in:dP}); $P_\text{in}$, the orbital period of the innermost planet in the system (Fig. \ref{f:in:Pinner}).
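The multiplicity bookkeeping reduces to counting detected planets per star {\rm ID}\xspace. A sketch with a hypothetical set of detections (the sample sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical star IDs of detected planets: ~4,000 detections
# spread over ~40,000 simulated systems.
star_id = rng.integers(0, 40_000, size=4_000)

# Number of detected planets per star with at least one detection,
# then N_k = number of stars with exactly k detectable planets.
_, planets_per_star = np.unique(star_id, return_counts=True)
k_values, N_k = np.unique(planets_per_star, return_counts=True)

# With so few detections per system, most stars with a detection
# host exactly one observable planet:
assert N_k[0] > N_k[1:].sum()
```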
\begin{figure}
\includegraphics[width=\linewidth]{fig7.pdf}
\caption{Period ratio distribution of adjacent planets in multi-planet systems, $\mathcal{P}=P_{k+1}/P_k$. The intrinsic distribution is shown with the red line while the solid blue histograms show the distribution of detections in the simulated survey. The observable distribution is skewed towards shorter orbital period ratios by detection biases.
The hatched region indicates the observable planet pairs with at least one non-detected planet between them.
}
\label{f:in:dP}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{fig8.pdf}
\caption{Orbital period distribution of the innermost planet in each system. The intrinsic distribution is shown with the red line while the solid blue histograms show the distribution of detections in the simulated survey. The observable distribution is skewed towards shorter orbital periods by detection biases.
}
\label{f:in:Pinner}
\end{figure}
It is worth noting that the properties of the observable multi-planet systems are significantly different from the distributions from which they are generated. When the intrinsic population contains only planetary systems with $m \geq 7$ planets, the majority of systems have $k=2-6$ transiting detectable planets. In addition, the observable period and period ratio distributions are skewed by detection biases.
The geometric transit probability favors the detection of planets at shorter orbital periods. The distribution of the location of the innermost {\em observed} planet in the systems peaks at $P_\text{in}\sim 6$ days compared to $P_\text{in}= 10$ days in the intrinsic distribution (Fig. \ref{f:in:Pinner}). We also find that planetary systems at very short orbital periods ($P_\text{in}\sim 1$ day) are overrepresented in the detectable distribution by an order of magnitude, while systems with orbital periods similar to the terrestrial planets ($P_\text{in}\sim 100$ days) are underrepresented by an order of magnitude, due to the same detection biases (Fig. \ref{f:in:Pinner}).
Pairs of planets with smaller orbital period ratios are more likely to be both transiting, shifting the peak of the observable orbital period ratio distribution to smaller period ratios (Fig. \ref{f:in:dP}). The hatched area in the bottom panel of Figure \ref{f:in:dP} shows simulated planet pairs that are observable as adjacent but have a non-transiting planet between them. These planet pairs dominate the period ratio distribution at large orbital period ratios ($\mathcal{P}\gtrsim 4$).
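The magnitude of this geometric bias is easy to sketch: for a circular orbit the transit probability is $R_\star/a$, which for a sun-like star falls off as $P^{-2/3}$. A minimal Python illustration (the solar values and the specific pair of periods compared are ours, for illustration only):

```python
import numpy as np

R_SUN_AU = 0.00465  # solar radius in au

def transit_probability(period_days, r_star_au=R_SUN_AU, m_star_msun=1.0):
    """Geometric transit probability p = R_star / a for a circular orbit.
    The semi-major axis follows from Kepler's third law:
    a [au] = (M_star [M_sun] * (P [yr])^2)^(1/3)."""
    a_au = (m_star_msun * (period_days / 365.25) ** 2) ** (1.0 / 3.0)
    return r_star_au / a_au

# A planet at P = 6 days is ~6.5x more likely to transit than one at 100 days:
bias = transit_probability(6.0) / transit_probability(100.0)
```

The ratio equals $(100/6)^{2/3} \approx 6.5$ for a sun-like star, which is why short-period planets dominate the detectable samples.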
\subsection{Step 4: Observational Comparison}\label{s:obs}
In this step we compare the simulated planet populations to the observed exoplanet properties to evaluate how well the simulated planet population reproduces the collection of observed planetary systems. We generate a set of summary statistics for both the simulated data and the \textit{Kepler}\xspace survey.
We evaluate the survey as a whole, and do not consider dependencies on stellar properties such as stellar mass \citep{2015ApJ...798..112M,2015ApJ...814..130M} or metallicity \citep{2016AJ....152..187M,2018AJ....155...89P}.
In occurrence rate mode, the summary statistics are the planet radius distribution, $\{R\}_d$, the orbital period distribution $\{P\}_d$, and the total number of planets, $N$. In multi-planet mode, additional summary statistics are calculated for the number of stars with $k$ planets, $N_k$, the period ratio distribution between adjacent planets, $\mathcal{P}$, and the period of the innermost planet in the system, $P_\text{in}$.
We evaluate these summary statistics in the range
of $R=[0.5,6] R_\oplus$ and $P=[2,400]$ days. These ranges exclude two regions where the assumption of separability of parameters clearly breaks down. The maximum planet size of $6 R_\oplus$ is chosen to exclude giant planets, which have a different distribution of orbital periods than sub-Neptunes \citep{2013ApJ...778...53D,2016A&A...587A..64S}, are less often part of multi-planet systems \citep{2012PNAS..109.7982S}, or have very dissimilar sizes from other planets in the system \citep{2016ApJ...825...98H}. The minimum orbital period of 2 days is chosen to exclude the photo-evaporation desert \citep[e.g.][]{2016NatCo...711201L} where the planet-radius distribution deviates significantly from that at larger orbital periods. The other bounds are chosen because there are very few planet detections outside this range.
\begin{figure}
\includegraphics[width=\linewidth]{fig9.pdf}
\caption{Observed sample of planetary systems. Planets with no additional planets detected in the system are color-coded in gray. Colors indicate the number of observed planets per system.
}
\label{f:multi:planets}
\end{figure}
We then generate the same summary statistics from the \textit{Kepler} \texttt{DR25} catalog.
The planet candidate list is taken from \cite{2018ApJS..235...38T}.
We include only main sequence planet hosts by removing giant and sub-giant stars according to the effective temperature dependent surface gravity criterion of \citet{2016ApJS..224....2H}. We also use a disposition score cut of 0.9 to select a more reliable sample of planet candidates (see \citealt{2018ApJS..235...38T} for details).
We account for the lower completeness of this high-reliability planet sample by explicitly taking into account the vetting completeness in the calculation of the survey detection efficiency.
The final list containing 3,041 planet candidates is shown in Figure \ref{f:multi:planets}.
The list contains 1,840 observed single systems and 324 double, 113 triple, 38 quadruple, 10 quintuple, and 2 sextuple systems within the region where the summary statistics are evaluated.
\begin{figure}
\includegraphics[width=\linewidth]{fig10.pdf}
\caption{Comparison of simulated planets for the example model (blue) with detected planets (orange). The comparison region (black box) excludes hot Neptunes ($P<2$ days) and giant planets ($R>6 ~R_\oplus$).
}
\label{f:PR:single}
\end{figure}
\subsubsection{Occurrence rate mode}\label{s:obs:occ}
Figure \ref{f:PR:single} shows how the summary statistics of planet radius and orbital period are generated from the detectable planet population (blue) and from the \textit{Kepler}\xspace exoplanet population (orange). We compare the planet radius distributions and orbital period distributions separately. While this approach ignores any covariances between planet radius and orbital period that are present in the \textit{Kepler}\xspace data, it is consistent with the assumption made in Eq. \ref{eq:pdf} that these functions are separable. We quantify the distance between the two distributions using the two-sample Kolmogorov-Smirnov (KS) test and calculate the associated probabilities, $p_P$ and $p_R$, that the observed and simulated distributions are drawn from the same parent distribution.
We will minimize the differences between these distributions in the fitting step.
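A sketch of this comparison using \texttt{scipy.stats.ks\_2samp}, with synthetic stand-ins for the simulated and observed samples (the distributions drawn below are illustrative only, not the EPOS output):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for the simulated and observed detections; in EPOS these would
# be the arrays of detected orbital periods (days) and radii (Earth radii).
P_sim = 10 ** rng.uniform(np.log10(2), np.log10(400), size=1000)
P_obs = 10 ** rng.uniform(np.log10(2), np.log10(400), size=500)
R_sim = 10 ** rng.normal(0.3, 0.2, size=1000)
R_obs = 10 ** rng.normal(0.3, 0.2, size=500)

# Two-sample KS probabilities that each pair of samples shares a parent
# distribution; these are the p_P and p_R entering the likelihood.
p_P = ks_2samp(P_sim, P_obs).pvalue
p_R = ks_2samp(R_sim, R_obs).pvalue
```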
\begin{figure}
\includegraphics[width=\linewidth]{fig11.pdf}
\caption{
Simulated versus observed frequency of multi-planet systems. The blue histogram shows the example model with an average mutual inclination of $\Delta i=2\degr$.
Multi-planet statistics from Kepler derived in the same radius and period range are shown in orange.
The hatched region indicates the excess of single transiting planets, here $40\%$ of systems ($\ensuremath{f_{\rm iso}}\xspace=0.4$).
Crosses indicate a population of planetary systems on co-planar orbits (green, $\Delta i=0$) and on isotropic orbits (red, $\ensuremath{f_{\rm iso}}\xspace= 1.0$).
}
\label{f:multi:obs}
\end{figure}
\subsubsection{Multi-planet statistics}\label{s:obs:multi}
We calculate three additional summary statistics for multi-planet systems: the frequency of multi-planet systems ($N_k$, Fig. \ref{f:multi:obs}); the period ratio of adjacent planet pairs ($\mathcal{P}$, Fig. \ref{f:obs:dP}); and the distribution of the locations of the innermost planet in each system ($P_\text{in}$, Fig. \ref{f:obs:inner}), all evaluated within $R=[0.5,6] R_\oplus$ and $P=[2,400]$ days. As discussed in the previous section, the observed planet populations are subject to detection biases, and
a proper comparison requires applying the same biases to the simulated distributions, which is what \texttt{EPOS}\xspace does.
All summary statistics are calculated in the same way for observed planets (orange lines) and simulated planets (blue histograms) from the set of orbital periods and host star IDs.
We use 2-sample KS tests to calculate the probabilities that the distribution of inner planet orbital periods ($p_\text{in}$) and orbital period ratios ($p_\mathcal{P}$) are drawn from the same distribution as the observations.
We use a Pearson $\chi^2$ test to calculate the probability, $p_{N}$, that the multi-planet frequencies of the simulated sample are drawn from the same distribution as the observations.
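The Pearson $\chi^2$ comparison of the multiplicity counts can be sketched with \texttt{scipy.stats.chisquare}; the observed counts below are those quoted above for the \texttt{DR25} sample, while the simulated counts are made-up illustrative values:

```python
import numpy as np
from scipy.stats import chisquare

# Observed Kepler DR25 multiplicity counts N_k for k = 1..6 (from the text)
N_obs = np.array([1840, 324, 113, 38, 10, 2])

# Hypothetical counts from one simulated survey (illustrative values only)
N_sim = np.array([1750.0, 350.0, 120.0, 30.0, 12.0, 3.0])

# Pearson chi-squared test; scipy requires both samples to share a total,
# so the simulated counts are rescaled to the observed total first.
N_exp = N_sim * N_obs.sum() / N_sim.sum()
res = chisquare(N_obs, f_exp=N_exp)
p_N = res.pvalue
```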
The frequency distribution of planets in multi-planet systems is shown in Figure \ref{f:multi:obs} for a model with $m=7$ planets per system, a mode for the mutual inclination of $\Delta i=2\degr$, and $\ensuremath{f_{\rm iso}}\xspace=0.4$.
The hatched region indicates simulated planets that are on isotropic orbits to match the excess of single-transiting systems.
We also show a model with only co-planar orbits ($i=0\degr$ and $\ensuremath{f_{\rm iso}}\xspace=0$, green) that over-predicts the frequency of multi-planet systems, particularly at high numbers of planets per system. A model with planets on isotropic orbits ($\ensuremath{f_{\rm iso}}\xspace=1$, red) under-predicts the frequency of all multi-planet systems.
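The inclination mixture described above can be sketched as follows. This is a simplified stand-in for the EPOS implementation, not its actual code; note that the mode of a Rayleigh distribution equals its scale parameter:

```python
import numpy as np

def draw_mutual_inclinations(n_systems, di_mode_deg=2.0, f_iso=0.4, seed=0):
    """Draw mutual inclinations (degrees) for n_systems planetary systems:
    a fraction f_iso of systems gets isotropic orientations (uniform in
    cos i), the rest a Rayleigh distribution whose mode (= scale
    parameter) is di_mode_deg."""
    rng = np.random.default_rng(seed)
    iso = rng.random(n_systems) < f_iso
    inc = rng.rayleigh(scale=di_mode_deg, size=n_systems)
    inc[iso] = np.degrees(np.arccos(rng.uniform(-1.0, 1.0, size=iso.sum())))
    return inc, iso

inc, iso = draw_mutual_inclinations(20000)
```

Feeding these inclinations into the transit geometry is what produces the mixture of multi-transiting and apparently single systems discussed above.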
\begin{figure}
\includegraphics[width=\linewidth]{fig12.pdf}
\caption{Simulated (blue) versus observed (orange) period ratio between adjacent planets in multi-planet systems. The red line shows, for comparison, a simulation where planet orbits are randomly drawn from the period-radius distribution (Figure \ref{f:panels}) instead of regularly spaced.
}
\label{f:obs:dP}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{fig13.pdf}
\caption{Simulated (blue) versus observed (orange) location of the innermost planet in each system.
}
\label{f:obs:inner}
\end{figure}
The period ratio of adjacent planet pairs is shown in Figure \ref{f:obs:dP}. The blue histogram shows the observable period ratios of the example model where planets are regularly spaced according to the period ratio distribution of Eq. \ref{eq:dP}. The shape of the period ratio distribution qualitatively reproduces the observed distribution (orange), with a peak near a period ratio of $\mathcal{P}\approx 2$ and a tail towards large orbital period ratios.
For comparison, the red line shows a simulation where planetary systems are not regularly spaced, which is constructed by randomly drawing $m=7$ planets from the period-radius distribution of Eq. \ref{eq:pdf} (Figure \ref{f:panels}). These simulated systems have an observable period ratio distribution that is much wider than observed, indicating that planetary systems are regularly spaced (See also \citealt{2018AJ....155...48W}).
The distribution of the location of the innermost detected planet in each multi-planet system is shown in Figure \ref{f:obs:inner}.
We focus only on the innermost planet of detected multi-planet systems,
because for observed single-planet systems it cannot be determined from the observables whether they are intrinsically single or part of a multi-planet system with non-transiting planets.
The distribution of innermost detected planets constrains the fraction of planetary systems without planets at short ($\lessapprox 20$ days) orbital periods.
\subsection{Step 5: Fitting Procedure}\label{s:mcmc}
With the framework to compare the parameterized distributions of exoplanets to observables in place, we proceed to constrain the parameters that provide the best match to the observed planetary systems. The runtime of steps 1-4 is less than a tenth of a second in occurrence rate mode and less than a second in multi-planet mode, allowing for a comprehensive sampling of parameter space. We use \texttt{emcee}\xspace \citep{2013PASP..125..306F}, an open-source \texttt{Python}\xspace implementation of the algorithm by \cite{Goodman:2010et}, to sample the parameter space and estimate the posterior distribution of parameters.
\subsubsection{Occurrence rate mode}
In occurrence rate mode, we explore the 7-dimensional parameter space consisting of the number of planets per star, $\eta$, the orbital period distribution, $P_\text{break}$, $a_P$, $b_P$, and the planet radius distribution, $R_\text{break}$, $a_R$, $b_R$. The summary statistics to evaluate the model given the observations are the number of detected planets, $N$; the planet orbital period distribution, $\{P\}$; and the planet radius distribution $\{R\}$.
We use Fisher's method \citep{fisher1925statistical} to combine the probabilities from the summary statistics into a single parameter:
\begin{equation}
\mathcal{L}_\text{occ}= -2 (\ln(p_N)+\ln(p_P)+\ln(p_R))
\end{equation}
which we use as the likelihood of the model given the data in \texttt{emcee}\xspace, i.e. $\mathcal{L} \propto \mathcal{L}(N,\{P\},\{R\}|\eta, P_\text{break}, a_P, b_P,$ $R_\text{break}, a_R, b_R)$.
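Fisher's combined statistic is straightforward to compute. Under the null hypothesis it follows a $\chi^2$ distribution with $2k$ degrees of freedom for $k$ independent probabilities, which also yields a combined p-value; the input probabilities below are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvalues):
    """Fisher's method: the statistic -2 * sum(ln p_i) follows a
    chi-squared distribution with 2k degrees of freedom for k independent
    p-values, from which a combined p-value follows."""
    p = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.sum(np.log(p))
    return stat, chi2.sf(stat, df=2 * p.size)

# Illustrative probabilities for the three summary statistics (N, {P}, {R}):
L_occ, p_combined = fisher_combine([0.5, 0.1, 0.3])
```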
\begin{figure}
\includegraphics[width=\linewidth]{fig14.pdf}
\caption{Period-radius distribution of detected planets in occurrence rate mode. The green population shows the population generated from the best-fit parameters.
The black box and dashed lines indicate the range of orbital periods and planet sizes that is included in the observational comparison.
The side histograms show the marginalized simulated distributions compared with the observed populations in orange. The blue lines show 30 samples from the posterior.
}
\label{f:single:mcmc}
\end{figure}
\begin{table}
\centering
\begin{tabular}{lll}
\hline\hline
Parameter & Example & Best-fit \\
\hline
$\eta$ & $2.0$ & $4.9^{+1.3}_{-1.2}$ \\
$P_\text{break}$(days) & $10$ & $12^{+5}_{-3}$ \\
$a_P$ & $1.5$ & $1.5^{+0.5}_{-0.3}$ \\
$b_P$ & $0.0$ & $0.3^{+0.1}_{-0.2}$ \\
$R_\text{break}$($R_\oplus$) & $3.0$ & $3.3^{+0.3}_{-0.4}$ \\
$a_R$ & $0.0$ & $-0.5^{+0.2}_{-0.2}$ \\
$b_R$ & $-4$ & $-6^{+2}_{-3}$ \\
\hline\hline
\end{tabular}
\caption{Fit parameters for the example model and the best-fit solutions with $1\sigma$ confidence intervals.}
\label{t:fit:single}
\end{table}
We run \texttt{emcee}\xspace with 200 walkers for 5,000 iterations, allowing for a 1000-step burn-in to reach convergence. For the initial positions of the walkers we use the parameters of the example model, see Table \ref{t:fit:single}. The MCMC chain and parameter covariances are shown in the Appendix (Figures \ref{f:chain} and \ref{f:corner}). The best-fit values and their $1\sigma$ confidence intervals are calculated as the $50\%$, and $16\%$ and $84\%$ percentiles, respectively. The simulated model for the best-fit parameters and for 30 samples of the posterior is shown in Figure \ref{f:single:mcmc}.
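The percentile bookkeeping can be sketched with a synthetic Gaussian chain standing in for the real samples; with \texttt{emcee}\xspace v3 the call \texttt{sampler.get\_chain(discard=1000, flat=True)} would supply the flattened post-burn-in chain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a flattened post-burn-in chain of shape (n_samples, n_dim);
# the means and widths below are arbitrary illustrative values.
chain = rng.normal(loc=(12.0, 1.5), scale=(4.0, 0.4), size=(50000, 2))

def summarize(chain):
    """Best-fit value and 1-sigma interval per parameter from the
    16th, 50th, and 84th percentiles of the posterior samples."""
    lo, med, hi = np.percentile(chain, [16, 50, 84], axis=0)
    return med, med - lo, hi - med

best, err_lo, err_hi = summarize(chain)
```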
\begin{figure}
\includegraphics[width=\linewidth]{fig15a.pdf}
\includegraphics[width=\linewidth]{fig15b.pdf}
\caption{Posterior orbital period distribution (top) and planet radius distribution (bottom). The red bars show the occurrence rates estimated using the inverse detection efficiencies for comparison.
Note that the occurrence rates underestimate the true distribution in bins that include regions where \textit{Kepler}\xspace has not detected any planet candidates, in particular $R<1.5 R_\oplus$ and $P>50$ days (see Figure \ref{f:planets}).
}
\label{f:posterior}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{fig16.pdf}
\caption{\textit{Kepler}\xspace \texttt{DR25} candidate list, color coded by survey completeness. The sample includes only dwarf stars ($\log g < 4.2$) and planet candidates with a disposition score larger than 0.9. The planet occurrence rate estimated from Eq. \ref{eq:occ} is $\eta_\text{obs}=1.40\pm0.03$.
}
\label{f:planets}
\end{figure}
The posterior distributions of planet radius and orbital period are shown in Figure \ref{f:posterior}. As a sanity check, we calculate occurrence rates as a function of planet radius and orbital period using the inverse detection efficiency method. The occurrence rate per bin is calculated as
\begin{equation}\label{eq:occ}
\eta_\text{bin}= \frac{1}{n_\star} \sum_{j=1}^{n_p} \frac{1}{\text{comp}_j}
\end{equation}
where ${\rm comp}_j$ is the survey completeness evaluated at the radius and orbital period of each planet in the bin (Figure \ref{f:planets}). The posterior distributions provide a decent match to the binned occurrence rates, with two notable deviations. At $P \gtrsim 50$ days and $R \lesssim 1.4 ~R_\oplus$ the binned occurrence rates are lower than the posterior. We attribute this to these bins including regions where \textit{Kepler}\xspace has not detected planet candidates (see Figure \ref{f:planets}), which causes the occurrence rates to be underestimated.
The broken power-law in planet radius does not describe the population of giant planets at $\sim 10 ~R_\oplus$, and we therefore restrict the observational comparison in the following to $R < 6 ~R_\oplus$.
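Eq. \ref{eq:occ} amounts to a one-line weighted count; a minimal sketch with illustrative completeness values:

```python
import numpy as np

def occurrence_per_bin(completeness, n_star):
    """Inverse detection efficiency estimate of the occurrence rate in a
    bin: each detected planet counts as 1/comp_j planets per surveyed
    star, where comp_j is the survey completeness at the planet's (P, R)."""
    comp = np.asarray(completeness, dtype=float)
    return np.sum(1.0 / comp) / n_star

# Three detections in a bin, with completeness 50%, 25%, and 10% at the
# planets' positions (illustrative values), among 100,000 surveyed stars:
eta_bin = occurrence_per_bin([0.5, 0.25, 0.1], n_star=1e5)  # 16 planets / 1e5 stars
```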
\begin{table}
\centering
\begin{tabular}{lll}
\hline\hline
Parameter & Example & Best-fit \\
\hline
$\eta_s$ & $0.4$ & $0.67^{+0.17}_{-0.12}$ \\
$P_\text{in}$ (days)& $10$ & $12^{+3}_{-2}$ \\
$a_P$ & $1.5$ & $1.6^{+0.4}_{-0.2}$ \\
$b_P$ & $-1$ & $-0.9^{+0.4}_{-0.5}$ \\
$R_\text{break}$ ($R_\oplus$) & $3.0$ & $3.3$ \\
$a_R$ & $0.0$ & $-0.5$ \\
$b_R$ & $-4$ & $-6$ \\
\hline
$\log D$ & $0.4$ & $-0.39^{+0.07}_{-0.05}$ \\
$\sigma$ & $0.2$ & $0.18^{+0.05}_{-0.04}$ \\
\hline
$m$ & $10$ & $10$ \\
$\Delta i$ ($\degr$) & $2$ & $2.1^{+1.0}_{-0.8}$ \\
\ensuremath{f_{\rm iso}}\xspace & $0.4$ & $0.38^{+0.08}_{-0.08}$ \\
\hline\hline
\end{tabular}
\caption{Fit parameters for the example model in multi-planet mode and the best-fit solutions with $1\sigma$ confidence intervals. Parameters $R_\text{break}$, $a_R$, and $b_R$ were fixed to their best-fit solutions from occurrence rate mode.
}
\label{t:fit:multi}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.45\linewidth]{fig17a.pdf}
\includegraphics[width=0.45\linewidth]{fig17b.pdf}
\includegraphics[width=0.45\linewidth]{fig17c.pdf}
\includegraphics[width=0.45\linewidth]{fig17d.pdf}
\caption{Posterior predictive plots of simulated planetary systems. 30 samples from the posterior are shown in blue and the observed systems are shown in orange. The top left panel shows the best-fit period-radius distribution in green, see Figure \ref{f:single:mcmc}.
The black box and dashed lines indicate the range of orbital periods and planet sizes that is included in the observational comparison.
The top right panel shows the frequency distribution of multi-planet systems with $k$ planets. The bottom left panel shows the period ratio distribution of adjacent planets. The bottom right panel shows the orbital period distribution of the innermost planet in each multi-planet system.
}
\label{f:mcmc:multi}
\end{figure*}
\subsubsection{Multi-planet mode}
In multi-planet mode, we explore the 9-dimensional parameter space consisting of:
the fraction of stars with planetary systems, $\eta_s$;
the mode of the mutual inclination distribution, $\Delta i$;
the orbital period distribution of the inner planet, $P_\text{break}$, $a_P$, $b_P$;
the period ratio distribution, $D$ and $\sigma$;
and the fraction of isotropic systems, $\ensuremath{f_{\rm iso}}\xspace$.
The summary statistics to evaluate the model given the observations are the frequency of multi-planet systems, $\{N_k\}$; the orbital period distribution, $\{P\}$; the planet orbital period ratio distribution, $\{\mathcal{P}\}$; and the distribution of the innermost planet in the system, $\{P_\text{in}\}$.
We use Fisher's method to combine the probabilities from the summary statistics into a single parameter:
\begin{equation}
\mathcal{L}_\text{multi}= -2 (\ln(p_N)+ \ln(p_{N,k})+\ln(p_{P})+\ln(p_\mathcal{P})+ \ln(p_\text{in}))
\end{equation}
which we use as the likelihood of the model given the data in \texttt{emcee}\xspace, i.e., $\mathcal{L} \propto \mathcal{L}(N_k,\{P\},\{\mathcal{P}\}, \{P_\text{in}\}|\eta_s, P_\text{break},$ $a_P, b_P, D, \sigma, \Delta i, \ensuremath{f_{\rm iso}}\xspace)$.
The planet radius distribution is fixed to minimize the number of free parameters, and we use the best-fit values constrained in the previous section and listed in Table \ref{t:fit:multi}. The number of planets per system, $m$, is fixed to $10$ because it is not well constrained in the fitting: Systems with fewer than 6 planets do not reproduce the observed multiplicity distribution, as there are two detected sextuple systems ($N_6=2$).
Systems with more than 7 planets have the same observational signature, as the 8th, 9th, etc., planet in the system will typically remain undetected: the combined likelihood of detecting all planets in such systems is too small to allow detections given the size of the \textit{Kepler}\xspace sample.
We sample the parameter space with \texttt{emcee}\xspace using 100 walkers for 2,000 iterations, allowing for a 500-step burn-in. The MCMC chain and parameter covariances are shown in the appendix (Figure \ref{f:multi:corner}). The simulated observables of the best-fit models and of 30 samples from the posterior are shown in Figure \ref{f:mcmc:multi}. We will discuss the results in the next section.
\section{Results}\label{s:results}
Using \texttt{EPOS}\xspace in parametric mode, we find that the planet occurrence rates of the \textit{Kepler}\xspace mission are well described by a broken power-law in orbital period and planet radius in the region $2 < P < 400$ days and $0.5 ~R_\oplus < R < 6 ~R_\oplus$. We estimate the planet occurrence rate, the \textit{average} number of planets per star, to be $\eta= 2.4^{+0.5}_{-0.5}$ in this regime and $\eta=4.9^{+1.3}_{-1.2}$ in the simulated range ($0.4 < P < 730$ days and $0.3 ~R_\oplus < R < 20 ~R_\oplus$). This number is higher than the occurrence rate calculated from inverse detection efficiencies ($\eta_\text{inv}= 1.40\pm0.03$), which underestimate the occurrence rates for small planets at long orbital periods.
These results are largely consistent with previous occurrence rate studies collected in the SAG13\footnote{\url{https://exoplanets.nasa.gov/system/internal_resources/details/original/680_SAG13_closeout_8.3.17.pdf}} literature study on planet occurrence rates (see \citealt{2018arXiv180209602K} for details).
The observed population of exoplanets is well described by a population of regularly-spaced planetary systems with 7 or more planets that orbit $\eta_s=67^{+17}_{-12}\%$ of stars.
We find that the mutual inclinations are consistent with the Kepler dichotomy: Half the planet population is in planetary systems that have nearly co-planar orbits, here described by a Rayleigh distribution with $\Delta i=2.1^{+1}_{-0.8} \degr$, consistent with previous estimates that find ranges between $1-3\degr$ \citep{2011ApJS..197....8L,2012ApJ...761...92F,2014ApJ...790..146F,2016ApJ...816...66B}. The other half of planets appear as single-planet transiting systems, which we model as multi-planet systems with isotropically distributed orbital inclinations.
The typical orbital period ratio between adjacent planets is $\mathcal{P}=1.8$ but with a wide distribution, consistent with previous analysis of spacings between planets \citep{2013ApJ...767..115F}.
We discover that the inner edges of planetary systems are clustered around an orbital period of 10 days (Fig. \ref{f:posterior:inner}), reminiscent of the protoplanetary disk inner edge, which is located at $\sim 0.1$ au for pre-main-sequence sun-like stars \citep{2007prpl.conf..539M}. While the break in planet occurrence rate around 10 days has been previously connected to the disk inner edge \citep{2015ApJ...798..112M,2017ApJ...842...40L}, this is the first time a peak in the occurrence rate distribution of sub-Neptunes has been identified. The posterior distribution of the innermost planet peaks at an orbital period of $12\pm2$ days and decays towards shorter and longer orbital periods with power-law indices of $a_P=1.6^{+0.4}_{-0.2}$ and $b_P=-0.9^{+0.4}_{-0.5}$, respectively.
The decay towards long orbital periods is surprising, since planet occurrence rates are constant or slightly increasing in this range (green line). Planets exterior to the break are therefore mostly the 2nd, 3rd, etc., planet in the system.
We also show that systems with inner planets at orbital periods of $\sim 100$ days are intrinsically rare, and we discuss the implications for planet formation theories and the origins of the solar system below.
\begin{figure}
\includegraphics[width=\linewidth]{fig18.pdf}
\caption{Marginalized orbital period distribution of the innermost planet in the system.
The green line indicates the distribution of all planets in the system.
}
\label{f:posterior:inner}
\end{figure}
\section{Discussion}
\subsection{How rare is the solar system?}
Our modeling analysis indicates that multi-planet systems without planets interior to $P_\text{in} \sim 100$ days are intrinsically rare. We estimate, based on parametric distributions of planet parameters, that $8^{+10}_{-5}\%$ of planetary systems have no planet interior to the orbit of Mercury and $3^{+5}_{-2}\%$ have no planet interior to Venus. This implies that the solar system may simply be in the tail of the distribution of exoplanet systems.
On the other hand, this comparison with the solar system is made under the assumption that planet orbital architectures (inclinations, spacings, inner planet location) are independent of radius and orbital period. This assumption will need to be verified with additional data. The orbital architectures of the \textit{Kepler} planetary systems are most constrained by planets that are larger and closer in, typically $P\sim 10$ days and $R\sim 2 ~R_\oplus$ (see Figure \ref{f:single:mcmc}). A solar system analogue, if detected, would most likely appear as a single transiting system in the \textit{Kepler}\xspace data due to the low probability that multiple terrestrial planets transit \citep[e.g.][]{2016ApJ...821...47B}. Hence, we cannot rule out that the solar system may be part of a population of planetary systems with different orbital characteristics than those detected by the \textit{Kepler}\xspace mission.
We make a direct estimate of how many planetary systems without planets interior to Mercury or Venus could be present in the Kepler data, without extrapolating the distribution of orbital periods from closer-in planets. We generate a population of planetary systems with similar orbital properties as the solar system, where we place the innermost planet at the orbital period of Mercury (or Venus) but otherwise keep the same parameters as in the best-fit model. Such a simulated planet population with $\eta_s=67\%$ and $\ensuremath{f_{\rm iso}}\xspace=0$ matches planet occurrence rates exterior to $P=50$ days for $P_\text{in}=88$ days. The simulated observations predict $\sim 30$ multi-planet systems with an innermost planet exterior to $P=50$ days, while only $4$ are observed. This indicates that if a population of planetary systems without planets interior to Mercury existed it would be detectable in the Kepler data. We do note that most detectable planets in this range are larger than one earth radius, so the data do not directly constrain true solar system analogues.
Using a mixture of co-planar systems and highly inclined systems (which appear as intrinsically single), we can rule out that more than $10\%$ of systems ($\ensuremath{f_{\rm iso}}\xspace=0.9$) have $P_\text{in}=88$ days and $\Delta i=2\degr$.
We repeat this exercise for systems where the innermost planet shares the orbital period of Venus ($P=225$ days). None of these simulated systems would be detectable as multi-planet systems, and no planetary systems with an orbital period larger than $150$ days are detected with \textit{Kepler}\xspace. Hence we cannot rule out that the \textit{Kepler}\xspace exoplanet population contains a significant population of multi-planet systems without planets interior to Venus, though we do not find evidence that such a population exists.
\begin{figure}
\includegraphics[width=\linewidth]{fig19.pdf}
\caption{Best-fit distribution of planetary system properties from Kepler (blue) compared to the solar system terrestrial planets (red letters). The inclinations of the terrestrial planets are with respect to the invariable plane.
The orbital periods, radii, inclinations, and period ratios of the terrestrial planets lie near the peak of the distribution of \textit{Kepler}\xspace systems. The only notable exception is the orbital period of the innermost planet, where Mercury (93rd percentile) and Venus (97th percentile) lie in the tail of the distribution.
}
\label{f:ss}
\end{figure}
Overall, the solar system seems to be a typical planetary system in most diagnostics of orbital architectures (planetary radii, periods, period ratios, inclinations, and planets per system, Figure \ref{f:ss}). The only clear difference is in the period of the innermost planet where the solar system is an outlier.
While the lack of super-earths/mini-Neptunes in the solar system is also notable \citep{2015ApJ...810..105M}, earth-sized planets are not intrinsically rare, and the size of the terrestrial planets does not make the solar system an outlier in the exoplanet distribution.
\subsection{Systems with habitable zone planets}
We also estimate $\eta_\oplus$, the number of earth-sized planets with earth-like orbital periods (``habitable zone'' planets), here defined as $0.9 P_\oplus < P < 2.2 P_\oplus$ and $0.7 R_\oplus < R < 1.5 R_\oplus$ based on the \cite{Kopparapu:2013fu} conservative habitable zone for a sun-like star that is representative of the most common star in the \textit{Kepler}\xspace sample. By integrating the posterior distribution over the radius and orbital period range we find $\eta_\oplus= 36^{+14}_{-14}\%$ or $\Gamma_\oplus= \frac{\eta_\oplus}{d\ln P ~d\ln R}= 53^{+20}_{-21}\%$. These results are consistent with the estimate of $\Gamma_\oplus=60\%$ for GK dwarfs by \cite{2015ApJ...809....8B} though we note that there is a large dispersion in the literature\footnote{\url{https://exoplanets.nasa.gov/system/internal_resources/details/original/680_SAG13_closeout_8.3.17.pdf}}
on habitable zone planet occurrence rates based on the adopted completeness correction, the method of extrapolation into the habitable zone, and the planet sample selection.
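The integration behind $\eta_\oplus$ can be sketched by normalizing the separable broken power-laws over the simulated range and integrating over the habitable zone box. The conventions below (slopes per logarithmic interval, continuity at the break) and the use of the central fit values are our assumptions for this sketch; the result lands in the same range as the quoted $\eta_\oplus$:

```python
import numpy as np

def broken_powerlaw(x, x_break, a, b):
    """Broken power law, continuous at the break: slope a below x_break
    and slope b above it, per logarithmic interval of x."""
    x = np.asarray(x, dtype=float)
    return np.where(x < x_break, (x / x_break) ** a, (x / x_break) ** b)

def integrate_dlnx(f, lo, hi, n=20000):
    """Trapezoidal integral of f(x) d(ln x) over [lo, hi]."""
    x = np.logspace(np.log10(lo), np.log10(hi), n)
    y, lnx = f(x), np.log(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lnx)))

# Central fit values and total eta over the simulated range
# (0.4-730 days, 0.3-20 Earth radii):
eta = 4.9
f_P = lambda P: broken_powerlaw(P, 12.0, 1.5, 0.3)
f_R = lambda R: broken_powerlaw(R, 3.3, -0.5, -6.0)
norm = integrate_dlnx(f_P, 0.4, 730.0) * integrate_dlnx(f_R, 0.3, 20.0)

# Habitable zone box: 0.9-2.2 Earth orbital periods, 0.7-1.5 Earth radii
P_E = 365.25
hz = integrate_dlnx(f_P, 0.9 * P_E, 2.2 * P_E) * integrate_dlnx(f_R, 0.7, 1.5)

eta_earth = eta * hz / norm  # habitable zone planets per star
```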
The confidence intervals on $\eta_\oplus$ include counting statistics and systematic uncertainties in extrapolating the planet radius and orbital period distribution out to the habitable zone. They do not include systematic uncertainties on the adopted stellar parameters, in particular the stellar radii which directly impacts the planet radii. For example, large uncertainties in the stellar radius of unresolved stellar binaries lead to over-estimating the occurrence of small planets \citep{2015ApJ...805...16C,2015ApJ...799..180S}.
Better observational constraints on the stellar properties are expected based on parallax measurements from the ESA \textit{Gaia} mission.
As a consistency check, we have repeated all calculations in this paper with the improved stellar radii and giant star classification from \citet{2018arXiv180500231B}. The best-fit parametric distributions are consistent with those in Table \ref{t:fit:single} within errors, and the habitable zone planet occurrence rate of $\eta_\oplus= 40^{+14}_{-14}\%$ is not significantly different.
The habitable zone planet occurrence rate we estimate is consistent with that of \cite{2015ApJ...809....8B} based on the Q1-Q16 catalog, but about a factor of four higher than the earliest estimate based on \textit{Kepler}\xspace data from \cite{2013PNAS..11019273P}, which we attribute to an improved understanding of the detection efficiency of \textit{Kepler}\xspace.
This rate is comparable to that of M dwarfs estimated with inverse detection efficiency methods \citep{2015ApJ...807...45D}, though we leave a comparison between M dwarfs and FGK stars with consistent methodology for a future paper.
The majority of simulated planets at long orbital periods ($P\sim 100$ days) are not intrinsically single, but are part of multi-planet systems where the innermost planet is typically at an orbital period of $P\sim10$ days. While the multi-planet statistics do not directly constrain planetary systems of earth-sized planets in the habitable zone of sun-like stars, the presence of planets on orbital periods of 10 days does provide a way to identify targets for future direct imaging missions, such as the HabEx and LUVOIR mission concepts.
\cite{2017MNRAS.465.3495K} have shown that the presence of transiting planets at short orbital periods increases the chance of finding transiting planets at larger orbital periods. Our analysis indicates that, because most planets in multi-planet systems are not transiting, the probability of finding \textit{any} planet (transiting or non-transiting) is even higher.
For example, the detection of an earth-mass planet at an orbital period of ten days with radial velocity measurements indicates a higher probability of finding a planet in the habitable zone. Depending on whether single-transiting planets are intrinsically single or highly-inclined multiples, $50-100\%$ of systems with close-in planets may also have planets in the habitable zone.
\subsection{Protoplanetary disk inner edges}
The peak in the location of the innermost planet in the system points to a preferred location of planet formation/migration in protoplanetary disks. This location may reflect either the inner edge of the region where planets form \citep{2013MNRAS.431.3444C,2015ApJ...798..112M,2017ApJ...842...40L} or that of a planet trap where planet migration stalls \citep{2007ApJ...654.1110T}. While the break in planet occurrence at $\sim 10$ days has been attributed to the disk inner edge before \citep{2015ApJ...798..112M,2017ApJ...842...40L}, the peak in the location of the innermost planet presents solid evidence that there is a preferred location for planet formation at 0.1 au, rather than a continuum of locations between 0.1 and 1 au. In particular, this result is inconsistent with a wide region between 0.1 and 1 au acting as a trap for individual planets \citep[e.g.][]{2014A&A...567A.121D}, and is more reminiscent of inward migration of multiple planets where the first planet is trapped at the inner disk edge and halts the migration of the other planets \citep{2014A&A...569A..56C,2016MNRAS.457.2480C,2017MNRAS.470.1750I}.
\begin{figure*}
\includegraphics[width=\linewidth]{fig20.pdf}
\caption{Illustration of how the estimated fraction of stars with planetary systems depends on the distribution of planets across stars. Nearly-coplanar systems orbit $31\%$ of stars (green) while the distribution of observed single transiting planets (purple) is less well constrained.
}
\label{f:dichotomy}
\end{figure*}
\subsection{Do most stars have planets?}
Our analysis confirms that the \textit{average} number of planets per (sun-like) star is larger than one (\S \ref{s:para}). However, this does not imply that most stars have planets, since planets can be unevenly distributed among stars.
In \S \ref{s:para} we show that $\eta_s=67^{+17}_{-12}\%$ of stars can host multi-planet systems. An important assumption we made is that the apparent excess of single transits is because $\ensuremath{f_{\rm iso}}\xspace=38\%$ of multi-planet systems have high mutual inclinations. The inclination distribution of exoplanets cannot be uniquely constrained from transit survey data \citep{2012AJ....143...94T}. Different assumptions on the mutual inclination distribution lead to different estimates of the fraction of stars with planets, illustrated in Figure \ref{f:dichotomy}.
On the conservative end, the highly-inclined planets in otherwise coplanar multi-planet systems could contribute to the observed number of single transits (Figure \ref{f:dichotomy}, left panel). The dichotomy would then be an artifact of the assumption that mutual inclinations follow a Rayleigh distribution: a distribution with a larger tail towards higher inclinations could possibly fit the multi-planet statistics with a single population, and the fraction of stars with planets drops to $\sim 41\%$. Alternatively, a population of intrinsically single planets could contribute to the observed single transiting systems (Figure \ref{f:dichotomy}, right panel), in which case the estimated fraction of stars with planets increases to unity.
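The two bookends can be reproduced with simple bookkeeping. The numbers below are the ones quoted in this paper; the assumption that the hypothetical inclined systems are as planet-rich as the coplanar ones is ours:

```python
# Numbers quoted in this paper; the assumption that the hypothetical inclined
# systems are as planet-rich as the coplanar ones is ours.
eta = 2.4            # average number of planets per star
f_multi = 0.62       # fraction of planets in high-multiplicity coplanar systems
f_star_multi = 0.42  # fraction of stars hosting those systems

mult = eta * f_multi / f_star_multi   # planets per star in a coplanar system
leftover = eta * (1.0 - f_multi)      # remaining planets, counted per star

# Scenario A: remaining planets are intrinsically single -> one host star each.
f_stars_single = min(1.0, f_star_multi + leftover)
# Scenario B: remaining planets sit in highly inclined systems of the same richness.
f_stars_inclined = f_star_multi + leftover / mult

print(f"intrinsically single: {100 * f_stars_single:.0f}% of stars have planets")
print(f"inclined multiples:   {100 * f_stars_inclined:.0f}% of stars have planets")
```

Scenario A saturates at $100\%$ of stars, while scenario B lands near the nominal $\eta_s$ estimate, bracketing the range discussed above.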
A way of discriminating between the two scenarios may be stellar obliquity. Transiting planets with high mutual inclinations or isotropic distributions have large obliquities with respect to the line of sight. There are indications that the stellar obliquity is larger for single-transit systems \citep{2014ApJ...796...47M}, suggesting that they may have highly inclined orbits.
Several mechanisms have been proposed to disrupt initially co-planar systems, reducing their multiplicity and/or increasing their mutual inclinations, and giving rise to the population of single transiting systems.
These mechanisms include dynamical instabilities within systems \citep{2015ApJ...806L..26V,2015ApJ...807...44P},
external perturbations from giant planets or stars \citep{2012ApJ...758...39J,2017AJ....153...42L,2017MNRAS.467.1531H,2017MNRAS.470.1750I,2018MNRAS.474.5114C} and host-star misalignment \citep{2016ApJ...830....5S}.
Alternatively, planets may form as intrinsically single systems in half the cases \citep{2016ApJ...832...34M}. Additional constraints on the single planet population are needed to distinguish between these scenarios. However, we stress that roughly half of the observed single transiting systems are part of multi-planet systems and have non-transiting or non-detected planets in the system, and hence the sample of ``singles'' is diluted with multi-planets, weakening any potential trends.
Transit timing variations can also be used to detect additional non-transiting planets in the system \citep{2012Sci...336.1133N}.
The \textit{Kepler}\xspace mission only detects planets out to $\sim1$ au and down to $\sim0.5 R_\oplus$. Hence, any estimate of the fraction of stars with planets is likely to be a lower limit. We do not see a clear indication that the planet occurrence rate decreases towards the detection limits of \textit{Kepler}\xspace. While planet occurrence rates calculated using the inverse detection efficiency seem to decrease for earth-size and smaller planets (Fig.~\ref{f:posterior}), this is also the region where the \textit{Kepler}\xspace exoplanet list becomes incomplete. The posterior planet size distribution does not show this trend and is almost flat in $\log R_p$, indicating that planets below the detection limit of \textit{Kepler}\xspace could be extremely common. Since planetary systems tend to have planets of similar size \citep{2017ApJ...849L..33M,2018AJ....155...48W}, planetary systems with only planets smaller than the detection limit would be missed. Extending the planet size distribution down to the size of Ceres ($0.07 ~R_\oplus$) would increase the fraction of stars with planets to $100\%$.
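That last extrapolation can be made explicit with a deliberately crude estimate. It assumes a distribution exactly flat in $\log R_p$ and that the additional small planets orbit stars without already-detected planets, neither of which is guaranteed:

```python
import math

def log_decades(r_min, r_max):
    """Width of a radius interval in decades of log10(R_p)."""
    return math.log10(r_max / r_min)

observed = log_decades(0.5, 6.0)   # radius range probed by Kepler, Earth radii
extended = log_decades(0.07, 6.0)  # same range extended down to Ceres

boost = extended / observed        # roughly 1.8x more planets per star
f_stars = min(1.0, 0.67 * boost)   # 67%: nominal fraction of stars with planets
print(f"planet count boost: {boost:.2f}; "
      f"fraction of stars with planets -> {100 * f_stars:.0f}%")
```

Even under these simplistic assumptions, the extended radius range nearly doubles the planet count and pushes the fraction of stars with planets to saturation.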
We find that the orbital period distribution of sub-Neptunes is nearly flat in logarithm of orbital period. However, planets at long orbital periods are mostly members of systems with planets also at shorter orbital periods: there is no evidence for a large population of planetary systems with only long-period planets. Extrapolating the distribution of planetary systems to orbital periods larger than a year would add planets to existing systems, increasing the number of planets per star but not the fraction of stars with planets. Of course, such an extrapolation does not take into account the presence of additional populations of planets, which could be important, for example, if planets form more frequently at the snow line (exterior to $1$ au). Giant planets orbit $10-20\%$ of sun-like stars \citep{2008PASP..120..531C}, most of them beyond 1 au.
Microlensing surveys hint at the existence of a population of Neptune-mass planets around a significant fraction of M dwarfs \citep{2012Natur.481..167C,2016ApJ...833..145S}. Since these populations remain mostly undetected in the Kepler survey, it is not clear if they belong to the same stars, though there are some indications that they do \citep{2016ApJ...825...98H}.
\section{Summary}
We present the Exoplanet Population Observation Simulator, \texttt{EPOS}\xspace, a \texttt{Python} code to constrain the properties of exoplanet populations from biased survey data. We showcase how planet occurrence rates and orbital architectures can be constrained from the latest \textit{Kepler}\xspace data release, \texttt{DR25}. We find that:
\begin{itemize}
\item The \textit{Kepler}\xspace exoplanet population between orbital periods of $2-400$ days and planet radii of $0.5-6 R_\oplus$ is well described by broken power-laws. The planet occurrence rate in this regime is $\eta=2.4\pm0.5$, consistent with previous works. The estimated planet occurrence rate in the habitable zone is $\eta_\oplus= 36\pm14\%$.
\item The observed multi-planet frequencies are consistent with ensemble populations that are a mix of nearly co-planar systems and a population of planets whose orbital architectures are unconstrained, consistent with the previously reported Kepler dichotomy. $62\pm8 \%$ of exoplanets are in systems with 6 or more planets, orbiting $42\%$ of sun-like stars. The remaining $38\pm8 \%$ of exoplanets could be intrinsically single planets or be part of multi-planet systems with high mutual inclinations, raising the fraction of stars with planets to somewhere between $45\%$ and $100\%$.
\item The mutual inclinations of planetary orbits can be described by a Rayleigh distribution with a mode of $i=2\pm1\degr$. The spacing between adjacent planets follows a wide distribution with a peak at an orbital period ratio $\mathcal{P}=1.8$, though detection biases shift the peak of the observed distribution to shorter orbital period ratios.
\item The distribution of the innermost planet in the system peaks at an orbital period of $P\approx 10$ days. Planetary systems without planets interior to the orbits of Mercury and Venus are rare, at $8\%$ and $3\%$ respectively.
\end{itemize}
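The Rayleigh inclination model quoted above can be sketched numerically (an illustration only, not the \texttt{EPOS}\xspace implementation; the sample size and seed are arbitrary):

```python
import math
import random

random.seed(0)
sigma = 2.0  # degrees; the mode of a Rayleigh distribution equals its scale

# inverse-CDF sampling: i = sigma * sqrt(-2 ln(1 - U)) with U uniform on [0, 1)
incl = [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
        for _ in range(200_000)]

sample_median = sorted(incl)[len(incl) // 2]
analytic_median = sigma * math.sqrt(2.0 * math.log(2.0))  # ~2.35 degrees
print(f"median mutual inclination: {sample_median:.2f} deg "
      f"(analytic {analytic_median:.2f} deg)")
```

With a mode of $2\degr$ the median mutual inclination is about $2.35\degr$, small enough that compact systems frequently show multiple transiting planets.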
The \texttt{EPOS}\xspace code presented in this paper provides a first step in a larger effort to constrain planetary orbital architectures from exoplanet populations. The next step will be to use more complex models of planetary system architectures based on planet formation models. Future directions include incorporating additional exoplanet survey data from radial velocity, microlensing, and direct imaging, as well as from transit surveys such as K2 and TESS.
\acknowledgments
We thank an anonymous referee for a constructive review of the manuscript.
We also thank Shannon Dulz and Peter Plavchan for feedback on an early manuscript draft, Ed Bedrick for advice on statistical methods, and Daniel Huber for advice regarding stellar properties. We acknowledge helpful conversations with Carsten Dominik, Eric Ford, Daniel Carrera, Renu Malhotra, and Rachel Fernandes.
This material is based upon work supported by the National Aeronautics and Space Administration under Agreement No. NNX15AD94G for the program “Earths in Other Solar Systems”. The results reported herein benefited from collaborations and/or information exchange within NASA’s Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA’s Science Mission Directorate.
\software{
NumPy \citep{numpy}
SciPy \citep{scipy}
Matplotlib \citep{pyplot}
Astropy \citep{astropy}
EPOS \citep{epos}
emcee \citep{2013PASP..125..306F}
corner \citep{corner}
KeplerPORTs \citep{2017ksci.rept...19B}
}
\ifhasbib
\setcounter{equation}{0}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{assumption}[theorem]{Assumption}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\def\scal#1{\langle #1 \rangle}
\newcommand{\vn}[1]{{\vert\kern-0.23ex\vert\kern-0.23ex\vert #1
\vert\kern-0.23ex\vert\kern-0.23ex\vert}}
\newcommand{\bn}[1]{{[\kern-0.5ex] #1
[\kern-0.5ex]}}
\allowdisplaybreaks
\newcommand\Z{\mathbb{Z}}
\newcommand\bR{\mathbf{R}}
\newcommand\bone{\mathbf{1}}
\newcommand\cR{\mathcal{R}}
\newcommand\CR{{\mathcal{R}}}
\newcommand\cK{\mathcal{K}}
\newcommand\cD{\mathcal{D}}
\newcommand\CD{\mathcal{D}}
\newcommand\cB{\mathcal{B}}
\newcommand\cA{\mathcal{A}}
\newcommand\cC{\mathcal{C}}
\newcommand\CC{{\mathcal{C}}}
\newcommand\cN{\mathcal{N}}
\newcommand\CN{\mathcal{N}}
\newcommand\cI{\mathcal{I}}
\newcommand\cJ{\mathcal{J}}
\newcommand\cQ{\mathcal{Q}}
\newcommand\cZ{\mathcal{Z}}
\newcommand\cS{\mathcal{S}}
\newcommand\cG{\mathcal{G}}
\newcommand\cV{\mathcal{V}}
\newcommand\CV{\mathcal{V}}
\newcommand\cM{\mathcal{M}}
\newcommand\cW{\mathcal{W}}
\newcommand\CW{\mathcal{W}}
\newcommand\cU{\mathcal{U}}
\newcommand\cP{\mathcal{P}}
\newcommand\cF{\mathcal{F}}
\newcommand\CO{\mathcal{O}}
\newcommand\CH{\mathcal{H}}
\newcommand\scR{\mathscr{R}}
\newcommand\scK{\mathscr{K}}
\newcommand\scZ{\mathscr{Z}}
\newcommand\scD{\mathscr{D}}
\newcommand\scB{\mathscr{B}}
\newcommand\scT{\mathscr{T}}
\newcommand\frK{\mathfrak{K}}
\newcommand\frs{\mathfrak{s}}
\newcommand\frm{\mathfrak{m}}
\begin{document}
\title{Singular SPDEs in domains with boundaries}
\author{M\'at\'e Gerencs\'er${}^1$ and Martin Hairer${}^2$}
\institute{IST Austria, \email{[email protected]} \and
University of Warwick, \email{[email protected]}}
\date{\today}
\maketitle
\begin{abstract}
We study spaces of modelled distributions with singular behaviour near the boundary of a domain that, in the context of the
theory of regularity structures, allow one to give robust solution theories for singular stochastic PDEs with boundary
conditions. The calculus of modelled distributions established in Hairer (\emph{Invent. Math.} \textbf{198}, 2014) is extended to this
setting. We formulate and solve fixed point problems in these spaces with a class of kernels that is sufficiently large to
cover in particular the Dirichlet and Neumann heat kernels. These results are then used to provide solution theories
for the KPZ equation with Dirichlet and Neumann boundary conditions and for
the 2D generalised parabolic Anderson model with Dirichlet boundary conditions.
In the case of the KPZ equation with Neumann boundary conditions, we show that, depending
on the class of mollifiers one considers, a ``boundary renormalisation'' takes place. In other words,
there are situations in which a certain boundary condition is applied to an approximation to the KPZ
equation, but the limiting process is the Hopf-Cole solution to the KPZ equation with
a \textit{different} boundary condition.
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\section{Introduction}
The theory of regularity structures, recently developed in \cite{H0}, was in large part motivated by, and very successful in dealing with, singular stochastic partial differential equations (SPDEs). These SPDEs are typically semilinear perturbations of the stochastic heat equation, with their formal right-hand side including expressions that are not well-defined even for functions that are as regular as the solution of the linear part. One well-known example is the KPZ equation
$$
\partial_t u=\Delta u+(\partial_x u)^2+\xi,
$$
where $\xi$ is the $1+1$-dimensional space-time white noise. From the linear theory we know that $u$ is not expected to have better (parabolic) regularity than $1/2$, so its spatial derivative is a distribution, which, in general, one cannot take the square of. The theory developed in \cite{H0} provided a robust concept of solution to equations like KPZ \cite{H_KPZ}, $\Phi^4_3$, the parabolic Anderson model in both two \cite{H0} and three \cite{HP15} dimensions, the dynamical Sine-Gordon model \cite{HShen} on the torus, or such equations on the whole Euclidean space \cite{HL15}.
As neither the torus nor the whole space has boundaries, the spatial behaviour in these examples is `uniform', and the only blow-up of the generalised abstract Taylor expansions - also referred to as `modelled distributions' - that describe the solutions occurs at the $\{t=0\}$ hyperplane of the initial time.
The aim of the present article is to provide a framework within the context of this theory, with which one can provide
solution theories for initial-boundary problems for singular SPDEs. The appropriate spaces of modelled distributions
introduced here are flexible enough to account for singular behaviour at the spatial boundary. These are similar to the
singularities at the initial time treated in \cite{H0} and indeed a similar calculus can be built on them. One could hope
that, provided such a generalisation of the abstract calculus is obtained, coupling it with the rest of the theory automatically
gives solution theories for the same equations that were previously considered without boundary conditions or with periodic ones,
now with for instance Dirichlet or Neumann boundary conditions. However, a subtle-looking but notable difference is that the codimension $2$ of the initial time hyperplane is replaced by the codimension $1$ of the spatial boundary. Therefore, dual elements of spaces of test functions supported away from the boundary which are uniformly `locally in $\cC^\alpha$' for $\alpha<-1$ have no canonical extensions as bona fide distributions - a simple example of such a situation is the function $1/|x|$, considered as an element of $\cD'({\mathbb{R}}\setminus\{0\})$. As elements with (local) regularity less than $-1$ are quite common in applications (unlike elements with regularity less than $-2$), for each such object one has to make sense of its extension in a consistent way, so that the required continuity properties are preserved.
Although, unlike the rest of the theory, the treatment of this issue is not performed in a systematic way,
the methods used to treat the examples discussed in the next section are likely to be relevant to different situations.
\subsection{Applications}\label{subsec:applications}
We now give a few examples of singular SPDEs to which the framework developed in this article can be applied.
The proofs of the results stated here are postponed to Section~\ref{sec:applications}.
Our first example is the Dirichlet problem for the two-dimensional generalised parabolic Anderson model given by
\begin{equs}[eq:0PAM]
\partial_t u&=\Delta u+f_{ij}(u)\partial_iu\partial_j u+g(u)\xi\quad & \text{on } &{\mathbb{R}}_+\times D,\\
u&=0, & \text{on } &{\mathbb{R}}_+\times\partial D,\\
u&=u_0 & \text{on } &\{0\}\times D.
\end{equs}
Here $\xi$ denotes two-dimensional spatial white noise, $g$ and $f_{ij}$, $i,j=1,2$ are smooth functions, $D$ is the square $(-1,1)^2$, and $u_0$ belongs to $\cC^\delta(\bar D)$ for some $\delta>0$ and vanishes on $\partial D$.
Take a smooth compactly supported function $\rho$ on ${\mathbb{R}}^2$ integrating to 1, define $\rho_\varepsilon(x)=\varepsilon^{-2}\rho(\varepsilon^{-1}x)$ and set $\xi_\varepsilon=\rho_\varepsilon\ast\xi$. Consider then the renormalised approximating initial / boundary value problem
\begin{equs}[2][eq:0PAM approx]
\partial_t u^\varepsilon &=\Delta u^\varepsilon+f_{ij}(u^\varepsilon)(\partial_iu^\varepsilon\partial_j u^\varepsilon-\delta_{ij}C_\varepsilon g^2(u^\varepsilon)) &&\\
&\quad +g(u^\varepsilon)(\xi_\varepsilon-2C_\varepsilon g'(u^\varepsilon))\quad & \text{on } &{\mathbb{R}}_+\times D,\\
u^\varepsilon&=0, & \text{on } &{\mathbb{R}}_+\times \partial D,\\
u^\varepsilon&=u_0, & \text{on } &\{0\}\times D,
\end{equs}
for some constants $C_\varepsilon$. One can solve \eqref{eq:0PAM approx} in the classical sense, and in the $\varepsilon\rightarrow0$ limit this provides a concept of local solution to \eqref{eq:0PAM} in the following sense.
\begin{theorem}\label{thm:PAM}
There exists a choice of diverging constants $C_\varepsilon$ and a random time $T>0$ such that the sequence $u^\varepsilon\bone_{[0,T]}$ converges in probability to a continuous function $u$. Furthermore,
provided that the constants $C_\varepsilon$ are suitably chosen, the limit does not depend on the choice of the mollifier $\rho$.
\end{theorem}
\begin{remark}
We believe that the choice $D = (-1,1)^2$ is not essential; the restriction to the square case
is mostly for the sake of convenience, since it is easier to verify our conditions when the explicit form
of the Green's function is known.
\end{remark}
\begin{remark}
One could easily deal with inhomogeneous Dirichlet data of the type $u^\varepsilon=g$ on $\partial D$
by considering the equation for $u^\varepsilon - \hat g$, where $\hat g$ is the harmonic extension of $g$ to
all of $D$.
\end{remark}
Our next example is the KPZ equation with $0$ Dirichlet boundary condition.
Write this time $\xi$ for space-time white noise and choose $u_0\in\cC^\delta([-1,1])$ for some $\delta>0$ with
$u_0(\pm 1) = 0$.
Taking a smooth,
compactly supported function $\rho$ integrating to $1$, define $\rho_\varepsilon(t,x)=\varepsilon^{-3}\rho(\varepsilon^{-2}t,\varepsilon^{-1}x)$ and set $\xi_\varepsilon=\rho_\varepsilon\ast\xi$. The approximating equations then read as
\begin{equs}[eq:0KPZ approx]
\partial_t u^\varepsilon &= \textstyle{1\over 2} \partial_x^2 u^\varepsilon+ (\partial_x u^\varepsilon)^2-C_\varepsilon+\xi_\varepsilon \quad & \text{on } &{\mathbb{R}}_+ \times [-1,1],\\
u^\varepsilon &=0, & \text{on } &{\mathbb{R}}_+ \times \{\pm 1\},\\
u^\varepsilon&=u_0 & \text{on } & \{0\}\times[-1,1].
\end{equs}
\begin{remark}
We have chosen to include the arbitrary constant $\textstyle{1\over 2}$ in front of the term $\partial_x^2 u$ so that the corresponding
semigroup at time $t$ is given by the Gaussian with variance $t$.
\end{remark}
We then have the following analogous result on local solvability.
\begin{theorem}\label{thm:KPZ D}
If $\rho$ satisfies the condition $\rho(t,x) = \rho(t,-x)$, then the
statement of Theorem~\ref{thm:PAM} also holds for $u^\varepsilon$ defined in \eqref{eq:0KPZ approx}.
\end{theorem}
\begin{remark}
If the additional symmetry on $\rho$ fails, then an analogous result holds,
but an additional drift term appears in general, see for example \cite{HS15}.
\end{remark}
A more interesting situation arises when trying to define solutions to the KPZ equation with Neumann boundary
conditions. First, in this case, it is much less clear \textit{a priori} what such a boundary condition
even means since solutions are nowhere differentiable. It is however possible to define a notion of
``KPZ equation with Neumann boundary conditions'' via the Hopf-Cole transform. Indeed, it suffices to realise
that, at least \textit{formally}, if $u$ solves
\begin{equ}[e:KPZ]
\partial_t u = \textstyle{1\over 2} \partial_x^2 u + (\partial_x u)^2 + \xi\;,\qquad \partial_x u(t,\pm 1) = c_\pm\;,
\end{equ}
then the process $Z = \exp(2u)$ solves
\begin{equ}[e:SHE]
\partial_t Z = \textstyle{1\over 2} \partial_x^2 Z + 2 Z\,\xi\;,\qquad \partial_x Z(t,\pm 1) = 2c_\pm Z(t,\pm 1)\;.
\end{equ}
The latter equation is well-posed as an It\^o stochastic PDE in mild form \cite{DPZ} (with the boundary condition
encoded in the choice of heat semigroup for the mild formulation), so that
we can \textit{define} the ``Hopf-Cole solution'' to \eqref{e:KPZ} by $u = \textstyle{1\over 2} \log Z$ with $Z$ solving \eqref{e:SHE}.
This is the point of view that was taken in \cite{2016arXiv161004931C} where the authors showed that the height function
associated to a large
but finite discrete system of particles performing a weakly asymmetric simple exclusion
process converges to the solutions to \eqref{e:KPZ}
with boundary conditions $c_\pm$ that are related to the boundary behaviour of the discrete system in a straightforward way. In particular, if the `net flow' of particles at each boundary is $0$, then $c_\pm=0$.
One of the main results of the present article is to show that the values of $c_\pm$ are very ``soft'' in the
sense that they in general depend in a rather non-trivial way on the fine details of the particular approximation
one considers for \eqref{e:KPZ}. This is not too surprising: after all, the solution itself is
not differentiable, so it is not so clear what we mean when we impose the value of its derivative at the
boundary.
To formulate this more precisely, consider $\xi_\varepsilon=\rho_\varepsilon\ast\xi$ and $\hat u_0 \in \CC^\delta([-1,1])$
as before (except that we do not impose that $\hat u_0$ vanishes at the boundaries)
and let $\hat u^\varepsilon$ be the solution to
\begin{equs}[eq:0KPZ N approx]
\partial_t \hat u^\varepsilon &=\textstyle{1\over 2}\partial_x^2 \hat u^\varepsilon+(\partial_x \hat u^\varepsilon)^2+\xi_\varepsilon \quad & \text{on } &{\mathbb{R}}_+\times [-1,1],\\
\partial_x \hat u^\varepsilon &= \hat b_\pm, & \text{on } &{\mathbb{R}}_+\times \{\pm 1\},\\
\hat u^\varepsilon &=\hat u_0 &\text{on }&\{0\}\times[-1,1].
\end{equs}
We then have the following result.
\begin{theorem}\label{thm:KPZ N}
There exist constants $C_\varepsilon$ with $\lim_{\varepsilon \to 0} C_\varepsilon = \infty$, as well as
constants $a, c \in {\mathbb{R}}$ such that, setting
\begin{equ}[e:renormu]
u^\varepsilon(t,x) = \hat u^\varepsilon(t,x) - C_\varepsilon t - cx\;,
\end{equ}
the sequence $u^\varepsilon$ converges, locally uniformly and in probability, to a limit $u$
solving the KPZ equation \eqref{e:KPZ} in the Hopf-Cole sense with boundary data
$b_\pm = \hat b_\pm - c \pm a$ and with initial condition $u_0(x)=\hat u_0(x)-cx$.
In the particular case where $\rho(t,x) = \rho(t,-x)$, one has $c = 0$.
\end{theorem}
\begin{remark}
Even in the symmetric case, one can have $a \neq 0$, so that one can end up
with non-zero boundary conditions
in the limit, although one imposes zero boundary conditions for the approximation.
\end{remark}
\begin{remark}
The effect of subtracting $cx$ in \eqref{e:renormu} is the same as that of adding a drift
term $2c \partial_x u^\varepsilon$ to the right hand side of \eqref{eq:0KPZ N approx} and changing the boundary condition
$\hat b_\pm$ into $\hat b_\pm - c$, which is the reason for the form of the constants $b_\pm$.
\end{remark}
\begin{remark}
At first sight, this may appear to contradict the results of \cite{Ismael} where the authors consider
the three-dimensional parabolic Anderson model in a rather general setting which covers
that of domains with boundary. Since this scales in exactly the same way as the KPZ equation
(after applying the Hopf-Cole transform), one would expect to observe a similar ``boundary renormalisation''
in this case. The reason why there is no contradiction with our results is that there is no
statement on the behaviour of the renormalisation term $\lambda^\varepsilon$ in \cite[Thm~1]{Ismael} as a
function of position. What our result suggests is that, at least in the flat case, one should
be able to take $\lambda^\varepsilon$ of the form $\lambda^\varepsilon = C_\varepsilon + \mu$, where $C_\varepsilon$ is a
constant and $\mu$ is some measure concentrated on the boundary of the domain.
\end{remark}
\begin{remark}
The recent result \cite{Nicolas} is consistent with our result in the sense that it shows
that the ``natural'' notion of solution to \eqref{e:KPZ} with homogeneous Neumann boundary condition
(i.e.\ $c_\pm = 0$) does \textit{not} coincide with the Hopf-Cole solution with homogeneous
boundary data.
In this particular case, one possible interpretation is that, for any fixed time, the solution
to the KPZ equation is a forward / backwards semimartingale (in its own filtration) near the
right / left boundary point. It is then natural to define the ``space derivative'' at the boundary
to be the derivative of its bounded variation component. When performing the Hopf-Cole
transform, one then picks up an It\^o correction term, which is precisely what one sees in \cite{Nicolas}.
Note however that it is not clear at all whether the homogeneous Neumann solution of \cite{Nicolas} can
be obtained by considering \eqref{eq:0KPZ N approx} with $\hat b_\pm = 0$ for some mollifier
$\rho$. This is because, with our conventions for units, this corresponds to the Hopf-Cole solution
with $b_\pm = \pm 1$, while in our case one has $|a| \le {1\over 2}$ as a consequence of
the explicit formula \eqref{eq:constant a} for typical choices of the mollifier, i.e.\ those
with $\rho \ge 0$.
\end{remark}
One has explicit expressions for $c$ and $a$ in terms of $\rho$:
with the notation $\bar\rho(s,y)=\rho(-s,-y)$ and $\mathop{\mathrm{Erf}}$ standing for the error function,
one has the identities
\begin{equs}
a&=\int_{{\mathbb{R}}^2}(\bar\rho\ast\rho)(s,y)\Big({1\over 2} - {1\over 2}\mathop{\mathrm{Erf}}\Big(\frac{|y|}{\sqrt{2|s|}}\Big) - 2|y| \CN(y,s)\Big)\,ds\,dy\;,
\label{eq:constant a}
\\
c&=2\int_{{\mathbb{R}}^2}(\bar\rho\ast\rho)(s,y)\, y\CN(y,s)\,ds\,dy\;,
\label{eq:constant c}
\end{equs}
where $\CN$ denotes the heat kernel, see Section~\ref{sec:KPZNeumann} below. Note that in both cases the function
integrated against $\bar\rho\ast\rho$ vanishes at $s = 0$ for any fixed value of $y$, so that
$a = c = 0$ if we consider the KPZ equation driven by purely spatial regularisations
of white noise. To the best of our knowledge, this is the first observed instance of ``boundary renormalisation''
for stochastic PDEs. On the other hand, it is somewhat similar to the effects one observes in the
analysis of (deterministic) singularly perturbed problems in the presence of boundary layers, see for example
\cite{Hinch,Holmes}.
The remainder of the article is structured as follows. After recalling some elements of the theory of
regularity structures in Section~\ref{sec:H0}, mostly to fix our notations, we introduce in Section~\ref{sec:def}
the spaces of modelled distributions that are relevant for solving singular stochastic PDEs on domains.
Section~\ref{sec:calculus} is then devoted to a rederivation of the calculus developed in \cite{H0}, adapted to these
spaces, with an emphasis on those aspects that actually differ in the present context.
In Section~\ref{sec:FPP}, we then ``package'' these results into a rather general fixed point theorem,
which is finally applied to the above examples in Section~\ref{sec:applications}.
\subsection*{Acknowledgements}
{\small
MH gratefully acknowledges support by the Leverhulme Trust and by an ERC consolidator grant,
project 615897.
MG thanks the support of the LMS Postdoctoral Mobility Grant.
}
\section{Elements of the theory of regularity structures}\label{sec:H0}
First let us summarise the relevant definitions, constructions, and results from the theory of regularity structures that we will need in the sequel.
\subsection{Main definitions}
\begin{definition}
A regularity structure $\scT=(A,T,G)$ consists of the following elements.
\begin{claim}
\item An index set $A\subset{\mathbb{R}}$ which is locally finite and bounded from below.
\item A graded vector space $T=\bigoplus_{\alpha\in A}T_\alpha$ with each $T_\alpha$ a finite-dimensional
normed vector space.
\item A group $G$ of linear operators $\Gamma:T\rightarrow T$, such that, for all $\Gamma\in G$, $\alpha\in A$, $a\in T_\alpha$, one has
$
\Gamma a-a\in\bigoplus_{\beta<\alpha}T_\beta
$.
\end{claim}
We will furthermore always consider situations where $T_0$ contains a distinguished element $\bone$
of unit norm which is fixed by the action of $G$.
\end{definition}
\begin{definition}
Given a regularity structure and $\alpha\leq 0$, a sector $V$ of regularity $\alpha$ is a $G$-invariant subspace of $T$ of the form $V=\bigoplus_{\beta\in A}V_{\beta}$ such that $V_\beta\subset T_\beta$ and $V_\beta=\{0\}$ for $\beta<\alpha.$
\end{definition}
With $V$ as above, we will always use the notations
$
V_\alpha^+=\bigoplus_{\gamma\geq\alpha}V_\gamma$ and $V_{\alpha}^-=\bigoplus_{\gamma<\alpha}V_\gamma
$,
with the convention that the empty direct sum is $\{0\}$.
Some further notations will be useful. For $a\in T$, its component in $T_\alpha$ will be denoted either by $\cQ_\alpha a$ or by $(a)_\alpha$ and the norm of $(a)_\alpha$ in $T_\alpha$ is $\|a\|_\alpha$. The projection onto $T_\alpha^-$ is denoted by $\cQ_\alpha^-$. The coefficient of $\bone$ in $a$ is denoted by $\langle \bone,a\rangle$.
We henceforth fix a \textit{scaling} $\frs$ on ${\mathbb{R}}^d$, which is just an element of $\mathbb{N}^d$.
We use the notations $|\frs|=\sum_{i=1}^d\frs_i$, and, for any $d$-dimensional
multiindex $k$, we write $|k|_\frs=\sum_{i=1}^d\frs_i k_i$.
A scaling also induces a metric on ${\mathbb{R}}^d$ by
$d_\frs(x,y)=\sum_{i=1}^d|x_i-y_i|^{1/\frs_i}$,
and this quantity will also sometimes be denoted by $\|x-y\|_\frs$. This is homogeneous under the mappings $\cS_\frs^\delta$ defined by
$$
\cS_\frs^\delta(x_1,\ldots,x_d)=(\delta^{-\frs_1}x_1,\ldots,\delta^{-\frs_d}x_d)
$$
in the sense that $\|\cS_\frs^\delta x\|_\frs=\delta^{-1}\|x\|_\frs$. The ball with center $x$ and radius $r$, in the above sense, is denoted by $B(x,r)$. We also define the mapping $\cS_{\frs,x}^\delta$, acting on $L_1({\mathbb{R}}^d)$ by
$$
(\cS_{\frs,x}^\delta \varphi)(y)=\delta^{-|\frs|}\varphi(\cS_\frs^\delta(y-x)).
$$
We will also sometimes use the shortcut $\varphi_x^{\delta} = \cS_{\frs,x}^\delta \varphi$.
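Concretely (a hypothetical numerical check, using the parabolic scaling $\frs=(2,1)$ in $d=2$ and an arbitrary point), the homogeneity $\|\cS_\frs^\delta x\|_\frs=\delta^{-1}\|x\|_\frs$ can be verified directly:

```python
import math

frs = (2, 1)  # parabolic scaling in d = 2: one time-like, one space-like direction

def norm_s(x):
    """the quantity ||x||_s = sum_i |x_i|^(1/s_i)"""
    return sum(abs(xi) ** (1.0 / si) for xi, si in zip(x, frs))

def scale_s(delta, x):
    """the map S_s^delta x = (delta^{-s_1} x_1, ..., delta^{-s_d} x_d)"""
    return tuple(delta ** (-si) * xi for xi, si in zip(x, frs))

x = (0.36, -0.7)
delta = 0.2
lhs = norm_s(scale_s(delta, x))  # ||S^delta x||_s
rhs = norm_s(x) / delta          # delta^{-1} ||x||_s
print(lhs, rhs)
```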
One important regularity structure is that of the polynomials in $d$ commuting variables,
which we denote by $X_1,\ldots,X_d$. For any nonzero multiindex $k$, we denote
$$
X^k=X_1^{k_1}\cdots X_d^{k_d},
$$
and also use the notation $X^0=\bone.$ We define the index set $\bar A=\mathbb{N}$ and, for any $n\in\mathbb{N}$, the subspaces
$$
\bar T_n=\text{span}\{X^k:|k|_\frs=n\},
$$
and for any $h\in{\mathbb{R}}^d$, the linear operator $\bar\Gamma_h$ by
$$
(\bar\Gamma_h P)(X)=P(X+h).
$$
It is straightforward to verify that this defines a regularity structure $\bar \scT$, with structure group $\bar G=\{\bar\Gamma_h:h\in{\mathbb{R}}^d\}\approx{\mathbb{R}}^d$.
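As a concrete sanity check (a numerical illustration, not part of the theory), the defining properties of $\bar\Gamma_h$ - the group law $\bar\Gamma_h\bar\Gamma_{h'}=\bar\Gamma_{h+h'}$ and the fact that $\bar\Gamma_h a - a$ only has components of lower degree - can be verified for polynomials in one variable via the binomial theorem:

```python
import math

def gamma(h, coeffs):
    """coeffs[k] is the coefficient of X^k; return the coefficients of
    (Gamma_h P)(X) = P(X + h), expanded via the binomial theorem."""
    out = [0.0] * len(coeffs)
    for k, c in enumerate(coeffs):
        for j in range(k + 1):
            out[j] += c * math.comb(k, j) * h ** (k - j)
    return out

p = [0.0, 0.0, 0.0, 1.0]  # P(X) = X^3

# group law: Gamma_h Gamma_h' = Gamma_{h + h'}
lhs = gamma(0.25, gamma(0.5, p))
rhs = gamma(0.75, p)

# triangularity: Gamma_h P - P has no component of top degree
diff = [a - b for a, b in zip(gamma(0.5, p), p)]
print(lhs, rhs, diff)
```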
In most of the following we consider $d$, $\scT$, and $\frs$ to be fixed.
We will always assume that our regularity structures contain $\bar \scT$ in the sense of \cite[Sec.~2.1]{H0}.
A concise definition of the H\"older spaces of all (non-integer) exponents that are used in the sequel is the following.
\begin{definition}
A distribution $\xi \in \CD'({\mathbb{R}}^d)$ is said to be of class $\cC^\alpha$, if for every compact set $\frK\subset{\mathbb{R}}^d$ it holds that
\begin{equation}\label{eq:holder def}
|\xi (\varphi_x^{\delta})|\lesssim\delta^\alpha
\end{equation}
uniformly over $\delta\leq 1$, $x\in\frK$, and over test functions $\varphi$ supported on $B(0,1)$ that furthermore
have all their derivatives up to order $(\lceil-\alpha\rceil+1) \vee 0$ bounded by $1$ and satisfy
$\int\varphi(x)x^k\,dx=0$ for every multiindex $k$ with $|k|_\frs < \alpha$.
The best proportionality constant in \eqref{eq:holder def} is denoted by $\|\xi\|_{\alpha;\frK}$.
\end{definition}
We shall also use the notation $\cB^r$ for smooth functions $\varphi$ supported on $B(0,1)$ and having derivatives up to order $r$ bounded by $1$.
\begin{definition}
A model for a regularity structure $\scT$ on ${\mathbb{R}}^d$ with a scaling $\frs$ consists of the following elements.
\begin{claim}
\item A map $\Gamma:{\mathbb{R}}^d\times{\mathbb{R}}^d\rightarrow G$ such that $\Gamma_{xy}\Gamma_{yz}=\Gamma_{xz}$ for all $x$, $y$, $z\in{\mathbb{R}}^d$.
\item A collection of continuous linear maps $\Pi_x: T\rightarrow\cS'({\mathbb{R}}^d)$ such that $\Pi_x=\Pi_y\circ\Gamma_{xy}$ for all $x$, $y\in{\mathbb{R}}^d$.
\end{claim}
Furthermore, for every $\gamma>0$ and compact $\frK\subset{\mathbb{R}}^d$, the bounds
\begin{equation}\label{eq: model bounds}
|(\Pi_x a)(\cS_{\frs,x}^\delta\varphi)|\lesssim\|a\|_l\delta^l,\quad\quad
\|\Gamma_{xy}a\|_m\lesssim\|a\|_l\|x-y\|_\frs^{l-m}
\end{equation}
hold uniformly in $x,y\in\frK$, $\delta\in(0,1]$, $\varphi\in\cB^r$, $l<\gamma$, $m<l$, and $a\in T_l$. Here, $r$ is the smallest integer such that $l>-r$ for all $l\in A$.
The best proportionality constants in \eqref{eq: model bounds} are denoted by $\|\Pi\|_{\gamma,\frK}$ and $\|\Gamma\|_{\gamma,\frK}$, respectively.
\end{definition}
We shall always assume that all models under consideration are compatible with the polynomials in the sense that $(\Pi_x X^k)(y)=(y-x)^k$ for any multiindex $k$.
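For the polynomial structure $\bar\scT$ itself, the canonical choice is $\Gamma_{xy}=\bar\Gamma_{y-x}$, and the first bound in \eqref{eq: model bounds} can be verified directly: by the change of variables $z=\cS_\frs^\delta(y-x)$,
$$
(\Pi_xX^k)(\cS_{\frs,x}^\delta\varphi)=\int(y-x)^k\delta^{-|\frs|}\varphi\big(\cS_\frs^\delta(y-x)\big)\,dy=\delta^{|k|_\frs}\int z^k\varphi(z)\,dz,
$$
which is indeed of order $\delta^{|k|_\frs}$, uniformly over $\varphi\in\cB^r$.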
A central notion of the theory is that of a modelled distribution, spaces of which are defined as follows.
\begin{definition}
Let $V$ be a sector and $(\Pi,\Gamma)$ be a model. Then, for $\gamma\in{\mathbb{R}}$, the space $\cD^\gamma(V;\Gamma)$ consists of all functions $f:{\mathbb{R}}^d\rightarrow V_\gamma^-$ such that, for every compact set $\frK$,
\begin{equ}\label{eq:vnorm}
\vn{f}_{\gamma,\frK}=
\sup_{\substack{x,y\in\frK \\ \|x-y\|_\frs\leq1}}\sup_{l<\gamma}\frac{\|f(x)-\Gamma_{xy}f(y)\|_l}{\|x-y\|_\frs^{\gamma-l}}<\infty,
\end{equ}
where the supremum in $l$ runs over elements of $A$.
\end{definition}
Although the spaces $\cD^\gamma$ depend on $\Gamma$, in many situations where there can be no confusion about the model, this dependence will be omitted from the notation. The name `modelled distribution' is justified by the following result.
\begin{theorem}\label{thm: standard reco}
Let $V$ be a sector of regularity $\alpha$ and let $r=\lceil-\alpha+1\rceil$. Then for any $\gamma>0$ there exists a continuous linear map $\cR:\cD^\gamma(V)\rightarrow\cC^\alpha$ such that for every $C>0$, the bound
\begin{equation}\label{eq:standard reco estimate}
|(\cR f-\Pi_yf(y))(\psi_x^{\lambda})|\lesssim\lambda^{\gamma}\vn{f}_{\gamma,\mathop{\mathrm{supp}} \psi_x^\lambda}\;,
\end{equation}
holds locally uniformly over $x \in {\mathbb{R}}^d$ and uniformly over $\psi\in\cB^r$, over $\lambda\in(0,1]$, over
$y \in \mathop{\mathrm{supp}} \psi_x^\lambda$, and over models satisfying $\|\Pi\|_{\gamma,B(x,2)}\leq C$.
Furthermore, \eqref{eq:standard reco estimate} specifies $\cR f$ uniquely.
\end{theorem}
It is clear from \eqref{eq:standard reco estimate} that the reconstruction operator $\cR$ is local,
so in particular one can `reconstruct' modelled distributions that only locally lie in $\cD^\gamma$.
\begin{remark}
While in \cite{H0} in the bound \eqref{eq:standard reco estimate}, $y=x$ is assumed, this version is essentially equivalent: for all $y\in\mathop{\mathrm{supp}}\psi_x^\lambda$, one can simply rewrite $\psi_x^\lambda$ as $\bar\psi_y^{2\lambda}$ with some $\bar\psi\in\cB^r$.
Let us also note that in the literature the use of the notation $\vn{\cdot}$ is slightly inconsistent: sometimes it is defined as in \eqref{eq:vnorm},
in some other instances it includes the term $\sup_{x\in\frK}\sup_{l<\gamma}\|f(x)\|_l$.
We will also be guilty of this: while for now, in the unweighted setting, \eqref{eq:vnorm} is convenient since that is what appears in
the bounds for reconstructions like \eqref{eq:standard reco estimate} above and \eqref{eq:reco bound away negative m} below, the weighted versions of $\vn{\cdot}$ introduced in Section \ref{sec:def} \emph{do} include controls over $\|f(z)\|$.
\end{remark}
\begin{definition}
A continuous bilinear map $\star:T\times T\rightarrow T$ is called a product if,
for $a\in T_\alpha$ and $b\in T_\beta$, one has $a\star b\in T_{\alpha+\beta}$,
and $\bone\star a=a\star\bone=a$ for all $a\in T$. The products arising in this article
will always be associative and commutative, at least on some sufficiently large subspace.
A pair of sectors $(V,W)$ is said to be $\gamma$-regular with respect to the product $\star$
if $(\Gamma a)\star(\Gamma b)=\Gamma(a\star b)$ for all $\Gamma\in G$ and $a\in V_\alpha$, $b\in W_\beta$,
satisfying $\alpha+\beta<\gamma$. A sector is called $\gamma$-regular, if the pair $(V,V)$ is $\gamma$-regular.
Given two $T$-valued functions $f$ and $\bar f$, we also denote by $f\star_\gamma\bar f$ the function $x\rightarrow\cQ_\gamma^-(f(x)\star\bar f(x))$.
\end{definition}
For $\gamma>0$, a sector $V$ of regularity 0, a product $\star$ such that $V\star V\subset V$, and a smooth function $F:{\mathbb{R}}^n\rightarrow{\mathbb{R}}$ one can then define a function $\hat F_\gamma:V^n\rightarrow V$ by setting
\begin{equation}\label{def: F}
\hat F_\gamma(a)=\cQ_\gamma^-\sum_{k}\frac{D^kF(\bar a)}{k!}\tilde a^{\star k},
\end{equation}
where the sum runs over all possible $n$-dimensional multiindices, with the conventions $\bar a=\langle\bone,a\rangle$, $\tilde a=a-\bar a\bone$, $k!=k_1!\cdots k_n!$, $\tilde{a}^{\star k}=\tilde a_1^{\star k_1}\star\cdots\star \tilde a_n^{\star k_n}$ for $k\neq 0$, and $\tilde{a}^{\star0}=\bone$.
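As a simple illustration, for $n=1$ and $F(a)=a^2$, only the multiindices $k\leq2$ contribute, and \eqref{def: F} reads
$$
\hat F_\gamma(a)=\cQ_\gamma^-\big(\bar a^2\bone+2\bar a\tilde a+\tilde a\star\tilde a\big)=\cQ_\gamma^-(a\star a)=a\star_\gamma a,
$$
using that $\bone$ is a unit for $\star$; for polynomial $F$, the construction thus reduces to the projected abstract product.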
The abstract version of differentiation is quite straightforward.
\begin{definition}
Given a sector $V$, a family of operators $\scD_i: V\rightarrow V$ with $i=1,\ldots,d$
is called an abstract gradient if for every $i$, every $\alpha$ and every $a\in V_\alpha$, one has $\scD_i a\in V_{\alpha-\frs_i}$
and $\Gamma \scD_i a=\scD_i \Gamma a$ for all $\Gamma\in G$.
A model $(\Pi,\Gamma)$ is called compatible with $\scD$, if for all $a\in V$, $x\in{\mathbb{R}}^d$, and for all $i$, it holds that
$$
D_i\Pi_x a=\Pi_x\scD_i a,
$$
where $D_i$ is the usual distributional differentiation in the $i$-th unit direction.
\end{definition}
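In the polynomial structure, the natural example is $\scD_iX^k=k_iX^{k-e_i}$, extended linearly: this lowers the homogeneity by exactly $\frs_i$, since $|k-e_i|_\frs=|k|_\frs-\frs_i$, and it commutes with the maps $\bar\Gamma_h$ because differentiation commutes with translation. Any model acting on polynomials as above is compatible with it, as
$$
D_i(\Pi_xX^k)(y)=D_i(y-x)^k=k_i(y-x)^{k-e_i}=(\Pi_x\scD_iX^k)(y).
$$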
The final important operation on modelled distributions is integration against singular kernels, the aim of which is to `lift' convolutions with Green functions to the abstract setting. The first ingredient is the abstract integral operator.
\begin{definition}
Given a sector $V$, a linear map $\cI:V\rightarrow T$ is an abstract integration map of order $\beta>0$ if:
\begin{claim}
\item $\cI(V_\alpha)\subset T_{\alpha+\beta}$ for all $\alpha\in A$.
\item $\cI a=0$ for all $a\in V\cap \bar T$.
\item $\cI\Gamma a-\Gamma\cI a\in \bar{T}$ for all $a\in V$ and $\Gamma\in G$.
\end{claim}
\end{definition}
In our applications $\beta$ will always be 2, but for most of the analysis the one important property required of $\beta$ is that for each $\alpha\in A$, $\alpha+\beta\in\mathbb{Z}$ implies $\alpha\in\mathbb{Z}$. In particular, under this assumption, $\cI$ does not produce any components in integer homogeneities. The class of kernels we will want to lift is characterised as follows.
\begin{definition}\label{def: K}
For $\beta>0$ the class $\scK_\beta$ of functions ${\mathbb{R}}^d\times{\mathbb{R}}^d\setminus\{x=y\}\rightarrow{\mathbb{R}}$ consists of elements $K$ that can be decomposed as
$
K(x,y)=\sum_{n\geq0}K_n(x,y)
$,
where the functions $K_n$ have the following properties:
\begin{claim}
\item For all $n\geq0$, $K_n$ is supported on $\{(x,y):\|x-y\|_\frs\leq2^{-n}\}$.
\item For any two multiindices $k$ and $l$,
$
|D_1^kD_2^lK_n(x,y)|\lesssim 2^{n(|\frs|+|k+l|_\frs-\beta)}
$,
where the proportionality constant only depends on $k$ and $l$, but not on $n$, $x$, $y$.
\item For any two multiindices $k$ and $l$, $y\in{\mathbb{R}}^d$, $i=1,2$, it holds, for all $n\geq0$,
$$
\Big|\int_{{\mathbb{R}}^d}(x-y)^lD_i^kK_n(x,y)\,dx\Big|\lesssim 2^{-\beta n}
$$
where the proportionality constant only depends on $k$ and $l$.
\item For a given $r>0$,
$
\int_{{\mathbb{R}}^d}K_n(x,y)P(y)dy=0
$,
for all $n\geq0$, $x\in{\mathbb{R}}^d$, and every polynomial $P$ of (scaled) degree at most $r$.
\end{claim}
\end{definition}
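The motivating example is the heat kernel: with the parabolic scaling $\frs=(2,1,\ldots,1)$, the Green's function $G$ of $\partial_t-\Delta$ on ${\mathbb{R}}^d$ satisfies the exact scaling relation $G(\delta^2t,\delta x)=\delta^{2-|\frs|}G(t,x)$ and, after multiplication by a suitable smooth cutoff, admits a decomposition $K=\sum_{n\geq0}K_n$ with the above properties for $\beta=2$, up to a smooth remainder which can be treated separately; see \cite[Sec.~5]{H0}.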
To introduce the appropriate `remainder' terms, we set $\cJ(x) a$, for $a\in T_\alpha$ as
\begin{equation}
\cJ(x) a=\sum_{n\geq 0}\cJ^{(n)}(x)a=\sum_{n\geq 0}\sum_{|k|_\frs<\alpha+\beta}\frac{X^k}{k!}(\Pi_x a)(D_1^kK_n(x,\cdot)).
\end{equation}
\begin{definition}
Given a sector $V$ and an abstract integration map $\cI$ acting on $V$ we say that a model $(\Pi,\Gamma)$ realises $K$ for $\cI$ if, for every $\alpha\in A$, every $a\in V_\alpha$, every $x\in{\mathbb{R}}^d$ one has the identity
$$
\Pi_x\cI a=\int_{{\mathbb{R}}^d}K(\cdot,z)(\Pi_xa)(dz)-\Pi_x\cJ(x)a.
$$
\end{definition}
Note that both sides are distributions, so the equality should be understood in the distributional sense.
For $\gamma>0$ we also define an operator $\cN_\gamma$ which maps any $f\in\cD^\gamma$ into a $\bar T$-valued function by
\begin{equation}
(\cN_\gamma f)(x)=\sum_{n\geq 0}(\cN_\gamma^{(n)} f)(x)=\sum_{n\geq 0}\sum_{|k|_\frs<\gamma+\beta}\frac{X^k}{k!}(\cR f-\Pi_x f(x))(D_1^kK_n(x,\cdot)).
\end{equation}
The key result on a Schauder-type estimate for integration on $\cD^\gamma$ then reads as follows.
\begin{theorem}\label{thm:standard int}
Let $K\in\scK_\beta$ for some $\beta>0$, let $\cI$ be an abstract integration map acting on $V$, and let $(\Pi,\Gamma)$ be a model realising $K$ for $\cI$. Then, for $\gamma>0$, the operator $\cK_\gamma$ defined by
\begin{equation}
(\cK_\gamma f)(x)=\cI f(x)+\cJ(x)f(x)+(\cN_\gamma f)(x),
\end{equation}
maps $\cD^{\gamma}(V)$ into $\cD^{\gamma+\beta}$ and the identity
\begin{equation}
\cR\cK_\gamma f=K\ast\cR f
\end{equation}
holds for every $f\in\cD^\gamma$.
\end{theorem}
\subsection{Preliminaries}
For negative values of $\gamma$, a statement similar to Theorem~\ref{thm: standard reco} still holds, but the
``uniqueness'' part is lost.
It will be useful for our purposes to have a family of ``reconstruction operators'' defined
similarly to \cite[Eq.~3.38]{H0}, but depending additionally on some small cut-off scale.
We define the sets
$
\Lambda^n_\frs=\big\{\sum_{j=1}^d2^{-n\frs_j}k_je_j:k_j\in\mathbb{Z}\big\},
$
where $e_j$ is the $j$-th unit vector of ${\mathbb{R}}^d$, $j=1,\ldots,d$, and we use the notation
$$
\eta_x^{n,\frs}=2^{-n|\frs|/2}\eta_x^{2^{-n}}
$$
for locally integrable functions $\eta$. Then, as shown in \cite{Daub}, for any integer $r>0$,
there exist a compactly supported $\cC^r$ function $\varphi$ and a finite family of compactly
supported $\cC^r$ functions $\Psi$ with the following properties.
\begin{claim}
\item For each $m$, the set
$
\{\varphi_x^{m,\frs}:x\in\Lambda^m_\frs\}\cup\{\psi_x^{n,\frs}:n\geq m,x\in\Lambda_\frs^n,\psi\in\Psi\}
$
forms an orthonormal basis of $L^2({\mathbb{R}}^d)$.
\item For every $\psi\in\Psi$ and polynomial $P$ of degree at most $r$, one has
$
\int\psi(x)P(x)dx=0
$.
\end{claim}
In fact much more is known about these functions, but this will suffice for our purposes.
We then set
\begin{equation}\label{eq:Rm}
\cR^mf=\sum_{n\geq m}\sum_{x\in\Lambda_\frs^n}\sum_{\psi\in\Psi}(\Pi_xf(x))(\psi_x^{n,\frs})\psi_x^{n,\frs}+\sum_{x\in\Lambda_\frs^m}(\Pi_xf(x))(\varphi_x^{m,\frs})\varphi_x^{m,\frs}.
\end{equation}
With this notation, we have the following result which is a strengthening of the $\gamma < 0$ part of
\cite[Thm~3.10]{H0}.
\begin{lemma}
Let $\gamma<0$, $m\geq0$ be an integer, $f\in\cD^\gamma(V)$ with a sector $V$ of regularity $\alpha \le 0$. Then $\cR^mf\in\cC^\alpha$ and for every $r>|\alpha|$ there exists $c$ such that, uniformly over $\eta\in\cB^r$ and $\lambda\in(0,1]$ and locally uniformly over $x$, one has the bound
\begin{equation}\label{eq:reco bound away negative m}
|(\cR^mf-\Pi_xf(x))(\eta_x^\lambda)|\lesssim\lambda^{\gamma-\alpha}(\lambda\wedge2^{-m})^\alpha
\vn{f}_{\gamma,B(x,c\lambda+2^{-m})}\;.
\end{equation}
\end{lemma}
\begin{proof}
The fact that $\cR^mf\in\cC^\alpha$ is immediate, since the above construction only differs by a $\cC^r$ function from the reconstruction operator given in \cite[Eq.~3.38]{H0}.
To show \eqref{eq:reco bound away negative m}, we assume without loss of generality that
$\vn{f}_{\gamma,B(x,\lambda+2^{-m})} \le 1$. Note first that
$$
|(\psi_y^{n,\frs},\eta_x^\lambda)|\lesssim 2^{n|\frs|/2} \bigl(2^n \lambda \vee 1\bigr)^{-|\frs|-r}\;,
$$
and that $(\psi_y^{n,\frs},\eta_x^\lambda)=0$ for $\|x-y\|_\frs\geq\lambda+c2^{-n}$ for some fixed constant $c$.
We also have, for $n\geq m$, and for $\|x-y\|_\frs\leq\lambda+2^{-n}$,
\begin{align}
|(\cR^m f-\Pi_x f(x))(\psi_y^{n,\frs})|&=|(\Pi_yf(y)-\Pi_x f(x))(\psi_y^{n,\frs})|
=|(\Pi_y(f(y)-\Gamma_{yx} f(x))(\psi_y^{n,\frs})|
\nonumber\\&
\lesssim\sum_{l<\gamma}\|x-y\|_\frs^{\gamma-l}2^{-n|\frs|/2-nl}.\label{eq:Rm4}
\end{align}
Denoting the first (triple) sum in \eqref{eq:Rm} by $\cR^m_0$, and the projection of $\Pi_xf(x)$ to $\text{span}\{\psi_y^{n,\frs}:y\in\Lambda_\frs^n,n\geq m\}$ by $(\Pi_xf(x))_0$, we can write
\[
|(\cR^m_0f-(\Pi_xf(x))_0)(\eta_x^\lambda)|\leq\sum_{n\geq m}\sum_{y\in\Lambda_\frs^n}\sum_{\psi\in\Psi}|(\cR^mf-\Pi_xf(x))(\psi_y^{n,\frs})(\psi_y^{n,\frs},\eta_x^\lambda)|=:\sum_{n\geq m}I_n.
\]
We consider the cases $2^{-m}\gtrless\lambda$ separately. If $\lambda< 2^{-m}$, then considering that for $2^{-n}\leq\lambda$, the number of nonzero terms in the sum over $y\in\Lambda_\frs^n$ is of order $\lambda^{|\frs|}2^{n|\frs|}$, by estimating each of them using the bounds above, we have
\begin{equation}\label{eq:Rm0}
\sum_{2^{-n}\leq\lambda}I_n\lesssim
\sum_{2^{-n}\leq\lambda}\lambda^{|\frs|}2^{n|\frs|}2^{-n|\frs|/2-nr}\lambda^{-|\frs|-r}\sum_{l<\gamma}(\lambda+2^{-n})^{\gamma-l}2^{-n|\frs|/2-nl}\lesssim\lambda^\gamma,
\end{equation}
due to $r+l>0$. On the other hand, for $\lambda<2^{-n}$, the number of nonzero terms in the sum over $y$ is of order $1$, so we can write
\begin{equation}\label{eq:Rm1}
\sum_{\lambda<2^{-n}\leq2^{-m}}I_n\lesssim\sum_{\lambda<2^{-n}\leq2^{-m}}2^{n|\frs|/2}\sum_{l<\gamma}(\lambda+2^{-n})^{\gamma-l}2^{-n|\frs|/2-nl}\lesssim\lambda^\gamma,
\end{equation}
where we used the negativity of $\gamma$, and this bound is of the required order.
In the case $2^{-m}\leq\lambda$, we have, similarly to before,
\begin{align}
\sum_{n\geq m}I_n&\lesssim
\sum_{n\geq m}\lambda^{|\frs|}2^{n|\frs|}2^{-n|\frs|/2-nr}\lambda^{-|\frs|-r}\sum_{l<\gamma}(\lambda+2^{-n})^{\gamma-l}2^{-n|\frs|/2-nl}\nonumber
\\&\lesssim\sum_{l<\gamma}2^{-m(r+l)}\lambda^{\gamma-l-r}\leq\sum_{l<\gamma}2^{-ml}\lambda^{\gamma-l},\label{eq:Rm2}
\end{align}
and since $l\geq\alpha$, this gives the required bound.
For the second sum in \eqref{eq:Rm}, denoted for the moment by $\cR^m_1$, with the projection of $\Pi_xf(x)$ onto $\text{span}\{\varphi_y^{m,\frs}:y\in\Lambda_\frs^m\}$ denoted by $(\Pi_xf(x))_1$, we proceed similarly. This time, one has
$$
|(\varphi_y^{m,\frs},\eta_x^\lambda)|\lesssim 2^{m|\frs|/2} \bigl(2^m \lambda \vee 1\bigr)^{-|\frs|}\;,
$$
and $(\varphi_y^{m,\frs},\eta_x^\lambda)=0$ for $\|x-y\|_\frs\geq\lambda+c2^{-m}$, that is, for all but of order $2^{m|\frs|}\lambda^{|\frs|}$ instances of $y\in\Lambda_{\frs}^m$ in the case $2^{-m}\leq\lambda$, and for all but of order $1$ instances of $y\in\Lambda_\frs^m$ in the case $\lambda<2^{-m}$. The quantity $(\cR^m f-\Pi_x f(x))(\varphi_y^{m,\frs})$ can
then be bounded exactly as in \eqref{eq:Rm4}. Combining these bounds, we arrive at
\begin{align}
|(\cR^m_1f-(\Pi_xf(x))_1)(\eta_x^\lambda)|&\leq\sum_{y\in\Lambda_\frs^m}|(\cR^mf-\Pi_xf(x))(\varphi_y^{m,\frs})(\varphi_y^{m,\frs},\eta_x^\lambda)|
\nonumber\\
&\lesssim2^{m|\frs|/2}\sum_{l<\gamma}(\lambda+2^{-m})^{\gamma-l}2^{-m|\frs|/2-ml}\nonumber
\\&
\lesssim \lambda^{\gamma-\alpha}2^{-m\alpha} \vee 2^{-m\gamma}
\lesssim \lambda^{\gamma-\alpha}2^{-m\alpha} \vee \lambda^\gamma\;,
\end{align}
as required. Here, the last inequality comes from the fact that $\gamma < 0$ and that the second term
dominates when $2^{-m} \ge \lambda$, so that $2^{-m\gamma} \le \lambda^\gamma$.
\end{proof}
Next we recall some results on extending dual elements of a space of smooth functions that are supported away from a submanifold, to distributions, at least locally. This is essentially the content of \cite[Prop.~6.9]{H0}, but we slightly reformulate the statements in order to fit the needs of Section~\ref{subsec:reco} below better.
Whenever here and in the sequel we refer to a `boundary' $P$, we mean the following. Assume that ${\mathbb{R}}^d$ is decomposed as
${\mathbb{R}}^d={\mathbb{R}}^{d_1}\times\cdots\times{\mathbb{R}}^{d_m}$, such that $\frs_1=\cdots=\frs_{d_1}$, $\frs_{d_1+1}=\cdots=\frs_{d_2}$, etc.
We then assume $P$ to be of the form
$$
P=M_1\times\cdots\times M_m
$$
where each $M_i$ is either ${\mathbb{R}}^{d_i}$ or is a piecewise $\cC^1$ boundary of a domain, satisfying the strong cone condition. Denoting the codimension of $M_i$ by $m_i$, the codimension of $P$ is then defined to be $\sum_{i=1}^m m_i\frs_{d_{i-1}+1}$, with the convention $d_0=0$. We will need the following version of a well-known
``folklore'' fact:
\begin{proposition}\label{prop:extension}
Let $P$ be a boundary of codimension $\frm$, $D\subset{\mathbb{R}}^d$ be a bounded domain and let $\xi$ be an element of the dual of smooth functions compactly supported in $D\setminus P$. Suppose furthermore that $0\geq\alpha>-\frm$ and for an integer $r>|\alpha|$ one has
\begin{equation}\label{eq:extension}
|\xi(\psi_x^\lambda)|\lesssim\lambda^\alpha
\end{equation}
uniformly over $x \in D\setminus P$, over $\psi\in\cB^r$, and over $\lambda\in(0,1]$ satisfying
furthermore $2\lambda\leq d_\frs(x,P)$ and $\mathop{\mathrm{supp}}\psi_x^\lambda\subset D$.
Then there exists a unique element $\xi'$ in the dual of smooth functions compactly supported in $D$ that agrees with $\xi$ on test functions supported away from $P$ and for which the bound \eqref{eq:extension} holds with $\xi'$ in place of $\xi$, uniformly in $x$, in $\psi\in\cB^r$, and in $\lambda\in(0,1]$ satisfying $\mathop{\mathrm{supp}} \psi_x^\lambda\subset D$.
\end{proposition}
\begin{proof}
By considering a suitable partition of unity,
thanks to the strong cone condition,
we see that for any compact set $K \subset D$ with diameter $\lambda$,
and any $n$ with $2^{-n} \le \lambda$, we can find
smooth functions $\Phi_n \colon K \to [0,1]$ such that
$\Phi_n(y) = 1$ if $d_\frs(y,P) \in [2^{1-n},2^{2-n}]$, $\Phi_n(y) = 0$ if $d_\frs(y,P) \notin [2^{-n},2^{3-n}]$, and satisfying the following
property.
For every $n \ge 1$, one can find sequences $\{x_k\}_{k=1}^{N}$ with $N \le C \lambda^{|\frs|-\frm} 2^{(|\frs|-\frm)n}$
and functions $\phi_k , \tilde \phi_k \in \cB^r$ such that, setting $\mu = 2^{-n}$, one has
\begin{equ}[e:propPhi]
\Phi_n = \mu^{|\frs|}\sum_{k=1}^N \phi_{k,x_k}^\mu\;,\qquad
\Phi_n - \Phi_{n+1} = \mu^{|\frs|}\sum_{k=1}^N \tilde \phi_{k,x_k}^\mu\;.
\end{equ}
Fix now a test function of the type $\psi_x^\lambda$ with support $K \subset D$, then the sequence
$\xi(\psi_x^\lambda (1-\Phi_n))$ is Cauchy since
\begin{equs}
|\xi(\psi_x^\lambda (\Phi_{n+1}-\Phi_n))| &\le \sum_{k=1}^N\mu^{|\frs|} |\xi(\psi_x^\lambda \tilde \phi_{k,x_k}^\mu)|
\lesssim \lambda^{-|\frs|} N \mu^{\alpha + |\frs|}
\le C \lambda^{-|\frs|} \lambda^{|\frs|-\frm} 2^{(|\frs|-\frm)n} \mu^{\alpha+|\frs|} \\
& = C \lambda^{-\frm} 2^{-(\frm + \alpha)n}\;\label{corr},
\end{equs}
where in the second inequality we made use of the bound \eqref{eq:extension}. Thanks to the assumption $\alpha+\frm>0$, the right-hand side of \eqref{corr} converges to $0$ exponentially fast, as claimed.
The same bound also shows that the limit is bounded
by some constant times $\lambda^{\alpha}$, as required. The uniqueness of $\xi'$ follows in a similar way
by comparing $\xi'(\psi (1-\Phi_n))$ to $\xi'(\psi)$ and using the first identity of \eqref{e:propPhi}.
\end{proof}
\section{Definition of \texorpdfstring{$\cD_P^{\gamma,w}$}{DPgw} and basic properties}\label{sec:def}
Our main tool for dealing with domains is to introduce spaces analogous to the spaces $\CD^{\gamma,\eta}$
used in \cite{H0} to deal with initial conditions, but allowing for blow-ups at the boundary of
the domain as well. One subtlety arises in the handling of the ``double singularity'' arising
on the boundary at time $0$.
Let $P_0$ and $P_1$ be two fixed boundaries with respective codimensions $\frm_0$, $\frm_1$
and such that $P_\cap = P_0 \cap P_1$ is itself a boundary of codimension $\frm=\frm_0+\frm_1$.
We also write $P=P_0\cup P_1$ and we assume that $P$ satisfies the (uniform) cone condition, which
forces the two boundaries to intersect in a transverse manner.
For $i=0,1$, denote
$$
|x|_{P_i}=1\wedge d_\frs(x,P_i),\quad\quad|x,y|_{P_i}=|x|_{P_i}\wedge|y|_{P_i},
$$
and for any compact set $\frK$,
$$
\frK_P=\{(x,y)\in(\frK\setminus P)^2:\,x\neq y\,\,\,\text{and}\,\,\,2\|x-y\|_\frs\leq|x,y|_{P_0}\wedge|x,y|_{P_1}\}.
$$
To slightly ease notation, in the following $w$ will always stand for an element in ${\mathbb{R}}^3$, with coordinates $w=(\eta,\sigma,\mu)$, corresponding to exponents for the `weights' at $P_0$, $P_1$, and their intersection, respectively.
\begin{remark} It might at first sight be surprising to have not two, but three different orders of singularity. While the role of the exponent $\mu$ will become clear in the subsequent calculus, it is worth mentioning a simple example in which the singularities at the two boundaries do not in any way determine the one at the intersection: consider the solution of $\partial_t u=\Delta u$, $u_0\equiv 1$, with $0$ Dirichlet boundary conditions on some domain $D$. Then, while away from the ``corner'' $\{(0,x):x\in\partial D\}$ all derivatives of $u$ are continuous up to both the temporal and the spatial boundaries, the $k$-th derivative exhibits a blow-up of order $|k|_\frs$ at the corner.
\end{remark}
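For instance, if $D=(0,\infty)\subset{\mathbb{R}}$ (with the parabolic scaling), this behaviour is completely explicit: the solution is $u(t,x)=\tfrac{2}{\sqrt\pi}\int_0^{x/(2\sqrt t)}e^{-s^2}\,ds$, so that
$$
\partial_xu(t,x)=(\pi t)^{-1/2}e^{-x^2/(4t)},
$$
which extends continuously to each of the two boundaries away from the corner, but blows up along the parabola $x=\sqrt t$ at rate $d_\frs\big((t,x),(0,0)\big)^{-1}$, in accordance with the blow-up of order $|k|_\frs=1$ for the first spatial derivative.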
\begin{definition}
Let $V$ be a sector, $\gamma>0$ and $w=(\eta,\sigma,\mu)\in{\mathbb{R}}^3$. Then the space $\cD_P^{\gamma,w}(V)$ consists of all functions $f:{\mathbb{R}}^d\setminus P\rightarrow V_\gamma^-$ such that for every compact set $\frK\subset{\mathbb{R}}^d$ one has
\begin{align}
\vn{f}_{\gamma,w;\frK}&:=
\sup_{(x,y)\in\frK_P}\sup_{l<\gamma}
\frac{\|f(x)-\Gamma_{xy}f(y)\|_l}
{\|x-y\|_{\frs}^{\gamma-l}|x,y|_{P_0}^{\eta-\gamma}|x,y|_{P_1}^{\sigma-\gamma}(|x,y|_{P_0}\vee|x,y|_{P_1})^{\mu-\eta-\sigma+\gamma}}\nonumber
\\
&\quad+\sup_{x\in\frK:\,0<|x|_{P_0}\leq|x|_{P_1}}\sup_{l<\gamma}\frac{\|f(x)\|_l}{|x|_{P_1}^{\mu-l}\left(\tfrac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l)\wedge0}}\nonumber
\\
&\quad+\sup_{x\in\frK:\,0<|x|_{P_1}\leq|x|_{P_0}}\sup_{l<\gamma}\frac{\|f(x)\|_l}{|x|_{P_0}^{\mu-l}\left(\tfrac{|x|_{P_1}}{|x|_{P_0}}\right)^{(\sigma-l)\wedge0}}\,\,<\infty.\label{def:spaces}
\end{align}
The sum of the second and third term above will also be denoted by $\|f\|_{\gamma,w;\frK}$.
Similarly to before, these spaces do depend on the model, but if no confusion can arise, this dependence will not be denoted.
For two models $(\Pi,\Gamma)$ and $(\bar\Pi,\bar\Gamma)$, and for $f\in\cD_P^{\gamma,w}(V;\Gamma)$ and $\bar f\in\cD_P^{\gamma,w}(V;\bar \Gamma)$, we also set
\begin{align*}
\vn{f;\bar f}_{\gamma,w;\frK}&=\sup_{(x,y)\in\frK_P}\sup_{l<\gamma}
\frac{\|f(x)-\bar f(x)-\Gamma_{xy}f(y)+\bar\Gamma_{xy}\bar f(y)\|_l}
{\|x-y\|_{\frs}^{\gamma-l}|x,y|_{P_0}^{\eta-\gamma}|x,y|_{P_1}^{\sigma-\gamma}(|x,y|_{P_0}\vee|x,y|_{P_1})^{\mu-\eta-\sigma+\gamma}}
\\
&\quad+\|f-\bar f\|_{\gamma,w;\frK}.
\end{align*}
\end{definition}
This notation is slightly ambiguous since the knowledge of $P$ does not, of course, imply the knowledge of
$P_0$ and $P_1$. One should therefore really interpret the instance of $P$ appearing in $\cD_{P}^{\gamma,w}$
as meaning $P = \{P_0,P_1\}$ rather than $P = P_0 \cup P_1$, which is used whenever we view $P$
as a subset of ${\mathbb{R}}^d$.
It will also sometimes be useful to consider functions in $\cD_{P}^{\gamma,w}$ that are slightly better behaved when
approaching one of the two boundaries. This is the purpose of the following definition.
\begin{definition}
We denote by $\cD_{P,\{0\}}^{\gamma,w}$ the set of those elements $f \in \cD_{P}^{\gamma,w}$ for which the map $x\mapsto \cQ_\eta^- f(x)$ extends continuously to ${\mathbb{R}}^d\setminus P_1$ in such a way that $\cQ_\eta^-f(x)=0$ for all $x\in P_0\setminus P_1$.
The space $\cD_{P,\{1\}}^{\gamma,w}$ is defined analogously.
Finally, writing $\frK_0 = \{x\in\frK:\, 0<|x|_{P_0}\leq|x|_{P_1}\}$ and similarly for $\frK_1$,
we set
\begin{align*}
\bn{f}_{\gamma,w,\{0\};\frK}&=\sup_{x\in\frK_0}\sup_{l<\gamma}\frac{\|f(x)\|_l}{|x|_{P_1}^{\mu-l}\left(\tfrac{|x|_{P_0}}{|x|_{P_1}}\right)^{\eta-l}}\nonumber
+\sup_{x\in\frK_1}\sup_{l<\gamma}\frac{\|f(x)\|_l}{|x|_{P_0}^{\mu-l}\left(\tfrac{|x|_{P_1}}{|x|_{P_0}}\right)^{(\sigma-l)\wedge0}},
\end{align*}
and also define $\bn{f}_{\gamma,w,\{1\};\frK}$ in the same way, but with the exponents $\eta-l$
and $(\sigma-l)\wedge 0$ replaced
by $(\eta-l)\wedge 0$ and $\sigma-l$ respectively.
\end{definition}
We shall assume throughout the article that these exponents satisfy $\eta\vee\sigma\vee\mu\leq\gamma$.
\begin{remark}\label{remark:mu}
Denoting the regularity of the sector $V$ by $\alpha$, the definition is set up so that, when $\mu\leq\alpha$ and there exists an $x$ with $|x|_{P_0}\sim|x|_{P_1}\sim 1$ and
$
\sup_{l<\gamma}\|f(x)\|_{l}\sim1
$,
then the first term in \eqref{def:spaces} bounds the second and third. For $\mu>\alpha$, one would actually need to add $|x|_{P_1}^{(\mu-l)\wedge0}$ to the denominator in the second term and $|x|_{P_0}^{(\mu-l)\wedge0}$ in the third. As this would
make the calculations significantly longer, we omit this modification and deal with the slight difficulties arising from this
restriction later.
\end{remark}
\begin{proposition}\label{prop:1}
Let $V$ be a sector of regularity $\alpha$, and $f\in\cD_{P,\{1\}}^{\gamma,w}(V)$. Suppose furthermore that $\frK$ is a compact set such that for each $x\in\frK$ the line connecting $x$ and the closest point to $x$ on $P_1$ is contained in $\frK$. Then it holds that
\begin{equation}\label{eq:vmi0}
\bn{f}_{\gamma,w,\{1\};\frK}\lesssim\vn{f}_{\gamma,w;\frK}.
\end{equation}
If $(\bar \Pi,\bar\Gamma)$ is another model for $\scT$ and $\bar f\in\cD_{P,\{1\}}^{\gamma,w}(V;\bar\Gamma)$, then one has
\begin{equation}\label{eq:vmi1}
\bn{f-\bar f}_{\gamma,w,\{1\};\frK}\lesssim\vn{f;\bar f}_{\gamma,w;\frK}+\|\Gamma-\bar\Gamma\|_{\gamma;\frK}(\vn{f}_{\gamma,w;\frK}+\vn{\bar f}_{\gamma,w;\frK}),
\end{equation}
and, for any $\kappa \in [0,1]$,
\begin{equation}\label{eq:vmi2}
\vn{f;\bar f}_{\bar\gamma,\bar w;\frK}\lesssim\bn{f-\bar f}_{\bar \gamma,w,\{1\};\frK}^\kappa(\vn{f}_{\gamma,w;\frK}+\vn{\bar f}_{\gamma,w;\frK})^{1-\kappa},
\end{equation}
where $\bar\gamma=(1-\kappa)\gamma+\kappa\alpha$ and $\bar w=(\bar \eta,\sigma,\mu)$ with $\bar\eta=\eta+\kappa((\alpha-\eta)\wedge0)$.
\end{proposition}
\begin{proof}
We prove the bounds separately on the sets $\frK_i=\frK\cap\{|x|_{P_i}\leq|x|_{P_{1-i}}\}$, $i=0,1$.
For $\frK_1$, further introducing $\frK_1^n=\frK_1\cap\{2^{-n}\leq|x|_{P_0}\leq 2^{-n+1}\}$, the bounds with $\frK_1^n$ in place of $\frK$ follow immediately from \cite[Lemmas~6.5 and~6.6]{H0}, uniformly in $n$. Since there is no dependence on $n$ in the bounds, and since, for any pair $(x,y)\in(\frK_1)_P$, the indices $n_x$ and $n_y$ for which $x\in\frK_1^{n_x}$, $y\in\frK_1^{n_y}$, differ by at most $1$, the estimates carry over to $\frK_1$.
For $\frK_0$, the bounds \eqref{eq:vmi0} and \eqref{eq:vmi1} are trivial. As for \eqref{eq:vmi2}, we have
$$
\|f(x)-\bar f(x)-\Gamma_{xy}f(y)+\bar\Gamma_{xy}\bar f(y)\|_l\leq(\vn{f}_{\gamma,w;\frK_0}+\vn{\bar f}_{\gamma,w;\frK_0})\|x-y\|_\frs^{\gamma-l}|x,y|_{P_0}^{\eta-\gamma}|x,y|_{P_1}^{\mu-\eta}
$$
as well as
$$
\|f(x)-\bar f(x)-\Gamma_{xy}f(y)+\bar\Gamma_{xy}\bar f(y)\|_l\lesssim\bn{f-\bar f}_{\gamma,w,\{1\};\frK_0}|x,y|_{P_1}^{\mu-l}\left(\frac{|x,y|_{P_0}}{|x,y|_{P_1}}\right)^{(\eta-l)\wedge0}.
$$
Therefore, we can bound the quantity $\|f(x)-\bar f(x)-\Gamma_{xy}f(y)+\bar\Gamma_{xy}\bar f(y)\|_l$ by the right-hand side of \eqref{eq:vmi2} times
\begin{align*}
\|x-y&\|_\frs^{(1-\kappa)(\gamma-l)}|x,y|_{P_1}^{(1-\kappa)(\mu-\eta)+\kappa(\mu-l)-\kappa((\eta-l)\wedge0)}|x,y|_{P_0}^{(1-\kappa)(\eta-\gamma)+\kappa((\eta-l)\wedge0)},
\\
&\lesssim \|x-y\|_\frs^{\bar\gamma-l}\|x-y\|_{\frs}^{\kappa(l-\alpha)}|x,y|_{P_1}^{\mu-\eta-\kappa(l-\eta+(\eta-l)\wedge0)}|x,y|_{P_0}^{\eta-\bar\gamma+\kappa(\alpha-\eta+(\eta-l)\wedge0)}.
\end{align*}
Considering that $\|x-y\|_\frs\leq|x,y|_{P_0}$ and that the minimum value of $a_l:=(l-\eta+(\eta-l)\wedge0)$ is $a_\alpha=(\alpha-\eta)\wedge 0$, we can estimate the right-hand side above by
$$
\|x-y\|_\frs^{\bar\gamma-l}|x,y|_{P_1}^{\mu-\bar\eta}|x,y|_{P_1}^{\kappa(a_\alpha-a_l)}|x,y|_{P_0}^{\bar\eta-\bar\gamma}|x,y|_{P_0}^{\kappa(a_l-a_\alpha)},
$$
and since we are in the situation $|x,y|_{P_0}\leq|x,y|_{P_1}$, this gives the required bound. The estimate for $\|f(x)-\bar f(x)\|_l$, $x\in\frK_0$ is straightforward, since one has the bound
$$
\frac{\|f(x)-\bar f(x)\|_l}{|x|_{P_1}^{\mu-l}\left(\tfrac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l)\wedge0}}\lesssim \bn{f-\bar f}_{\gamma,w,\{1\};\frK}\wedge(\vn{f}_{\gamma,w;\frK}+\vn{\bar f}_{\gamma,w;\frK})\;,
$$
thus concluding the proof.
\end{proof}
\begin{proposition}\label{prop3}
If $f\in\cD_{P,\{1\}}^{\gamma,w}$ then, for any $\delta>0$ and compact $\frK\subset\{|x|_{P_1}\vee\delta\leq|x|_{P_0} \le 2\delta\}$, it holds that
\begin{equation}\label{eq:bode1}
\vn{\hat f}_{\sigma;\frK}\lesssim\delta^{\mu-\sigma}\vn{f}_{\gamma,w;\frK}\;,
\end{equation}
with $\hat f = \cQ_\sigma^- f$. In particular, away from $P_0$, $\hat f$ locally belongs to $\cD^\sigma$.
\end{proposition}
\begin{proof}
We assume without loss of generality $\vn{f}_{\gamma,w;\frK}\leq 1$.
For $2\|x-y\|_\frs\leq|x,y|_{P_1}$, simply by the definition of the spaces $\cD_{P}^{\gamma,w}$ we get
\begin{equation}\label{eq:bode2}
\frac{\|\hat f(x)-\Gamma_{xy}\hat f(y)\|_{l}}{\|x-y\|_\frs^{\sigma-l}}\lesssim\|x-y\|_\frs^{\gamma-\sigma}|x,y|_{P_1}^{\sigma-\gamma}|x,y|_{P_0}^{\mu-\sigma}\leq\delta^{\mu-\sigma},
\end{equation}
since $\sigma\leq\gamma$. In the case $|x,y|_{P_1}\leq 2\|x-y\|_\frs$, noting that $|x|_{P_1}\vee|y|_{P_1}\leq3\|x-y\|_\frs$, we can write, using the estimate \eqref{eq:vmi0},
\begin{align*}
\|\hat f(x)-\Gamma_{xy}\hat f (y)\|_l&\leq \|\hat f(x)\|_l+\sum_{l\leq m<\sigma}\|x-y\|_\frs^{m-l}\|\hat f(y)\|_m
\\
&\leq \delta^{\mu-\sigma}|x|_{P_1}^{\sigma-l}+\sum_{l\leq m<\sigma}\|x-y\|_\frs^{m-l}\delta^{\mu-\sigma}|y|_{P_1}^{\sigma-m}
\lesssim\delta^{\mu-\sigma}\|x-y\|_\frs^{\sigma-l},
\end{align*}
as required.
The fact that $\hat f$ is locally in $\cD^\sigma$ then follows, since on
$\{\delta\leq|x|_{P_0}\leq|x|_{P_1}\}$, $ f$ actually belongs to $\cD^\gamma$,
so its projection $\hat f$ belongs to $\cD^\sigma$,
and $\delta>0$ was arbitrary.
\end{proof}
\begin{remark}
One simplification that we will often use is based on the fact that for pairs $(x,y)\in\frK_P$, we have
$$
|x|_{P_i}\sim|y|_{P_i}\sim|x,y|_{P_i}
$$
for $i=0,1$. As a consequence, in the proofs of Section~\ref{sec:calculus} below we will repeatedly interchange the above quantities without much explanation. Also, for such pairs, even though $|x|_{P_0}\leq|x|_{P_1}$ does not imply $|y|_{P_0}\leq|y|_{P_1}$ or $|x,y|_{P_0}\leq|x,y|_{P_1}$, it holds that
$$
\|f(y)\|_l\lesssim\|f\|_{\gamma,w;\frK}|y|_{P_1}^{\mu-l}\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{(\eta-l)\wedge0},
$$
and
$$
\|f(x)-\Gamma_{xy}f(y)\|_l\lesssim\vn{f}_{\gamma,w;\frK}\|x-y\|_\frs^{\gamma-l}|x,y|_{P_0}^{\eta-\gamma}|x,y|_{P_1}^{\mu-\eta}.
$$
This, and the corresponding symmetric implications (swapping the roles of $P_0$ and $P_1$),
will also often be used.
\end{remark}
\section{Calculus of the spaces \texorpdfstring{$\cD_P^{\gamma,w}$}{DPgw}}\label{sec:calculus}
In order to reformulate our stochastic PDEs as fixed point problems in $\cD_P^{\gamma,w}$,
one first needs to know how the standard operations like multiplication, differentiation,
or convolution with singular kernels, act on these spaces. The aim of this section is to
recover the calculus of \cite{H0} in the present context.
\begin{remark}
This of course means that a certain degree of repetition of arguments is inevitable. We shall try to
minimise the overlap and concentrate on the aspects that are different due to the additional
weights and do not just follow trivially from \cite{H0}. This in particular applies to the
continuity statements: since the space of models is not linear, boundedness of the operations does
not imply their continuity. However, in practice they usually follow from the same principles, with
an added level of notational inconvenience. We therefore only give the complete proof of
continuity for the multiplication, after which the reader is hopefully convinced that obtaining
the other similar continuity results is a lengthy but straightforward combination of the
corresponding arguments in \cite{H0} and the treatment of the additional weights as described
in the `boundedness' part of the corresponding statements. Alternatively, the continuity statements
can also be obtained by using the trick introduced in the proof of \cite[Prop.~3.11]{HP15}, which
allows one, to some extent, to ``linearise'' the space of models.
\end{remark}
\begin{remark}
Let us mention an important point on how integration against singular kernels will be handled.
While Green's functions of boundary value problems are not translation invariant, they can typically be decomposed into a translation invariant part and a smooth one, which however is singular at the boundary. The simplest example of this is the $1+1$-dimensional Neumann heat kernel on $({\mathbb{R}}_+)^2$:
\begin{equ}
G((t,x),(s,y))=\frac{1}{\sqrt{4\pi(t-s)}}\Big(e^{-\frac{(x-y)^2}{4(t-s)}}+e^{-\frac{(x+y)^2}{4(t-s)}}\Big),
\end{equ}
for a more general discussion, see Example~\ref{example} below.
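In this example the decomposition is explicit (we write $G_0$ for the whole-space heat kernel; this notation is ad hoc and used only here):
\begin{equ}
G((t,x),(s,y))=G_0(t-s,x-y)+G_0(t-s,x+y)\;,\qquad G_0(t,z)=\frac{1}{\sqrt{4\pi t}}e^{-\frac{z^2}{4t}}\;.
\end{equ}
The first summand is translation invariant, while the second one is smooth as long as $x+y$ stays bounded away from $0$, and is singular precisely where $t-s$ and $x+y$ vanish simultaneously, that is, at the spatial boundary.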
The advantage of such a decomposition is that only the former part plays a role in constructing the regularity structure itself and the corresponding admissible models, for which one can use the general machinery of \cite{BHZ, CH, H0}. Integration against the latter part simply produces functions described by polynomial symbols, albeit with blow-ups at the boundaries which need to be sufficiently controlled.
\end{remark}
\subsection{Multiplication}
\begin{lemma}\label{lem:mult}
For $i=1,2$, let $f_i\in \cD_P^{\gamma_i,w_i}(V_i)$ with $\gamma_i > 0$, where $V_i$ is a sector of regularity $\alpha_i \le 0$. Suppose furthermore that the pair $(V_1,V_2)$ is $\gamma:=(\gamma_1+\alpha_2)\wedge(\gamma_2+\alpha_1)$-regular with respect to the product $\star$. Then $f:=f_1\star_\gamma f_2$ belongs to $\cD_P^{\gamma,w}$, where
$
w=(\eta,\sigma,\mu)
$
with $\mu=\mu_1+\mu_2$ and
\begin{equs}
\eta &=(\eta_1+\alpha_2)\wedge(\eta_2+\alpha_1)\wedge(\eta_1+\eta_2)\;,\\
\sigma&=(\sigma_1+\alpha_2)\wedge(\sigma_2+\alpha_1)\wedge(\sigma_1+\sigma_2)\;.
\end{equs}
Moreover, if $(\bar\Pi,\bar\Gamma)$ is another model for $\scT$, and $g_i\in\cD_P^{\gamma_i,w_i}(V_i;\bar\Gamma)$ for $i=1,2$, then, for $g=g_1\star_\gamma g_2$ and any $C>0$
\begin{equation}\label{eq:multi cont}
\vn{f;g}_{\gamma,w;\frK}\lesssim\vn{f_1;g_1}_{\gamma_1,w_1;\frK}+\vn{f_2;g_2}_{\gamma_2,w_2;\frK}+\|\Gamma-\bar\Gamma\|_{\gamma_1+\gamma_2;\frK},
\end{equation}
holds uniformly in $f_i$ and $g_i$ with $\vn{f_i}_{\gamma_i,w_i;\frK}+\vn{g_i}_{\gamma_i,w_i;\frK}\leq C$ and models with $\|\Gamma\|_{\gamma_1+\gamma_2;\frK}+\|\bar \Gamma\|_{\gamma_1+\gamma_2;\frK}\leq C$.
\end{lemma}
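\begin{remark}
To illustrate the exponent bookkeeping (the numerical values here are purely illustrative and not tied to any particular equation), take $f_1=f_2=f\in\cD_P^{1,(0,-1/2,0)}(V)$ for a sector $V$ of regularity $\alpha=-1/2$. Then $\gamma=(1-\tfrac12)\wedge(1-\tfrac12)=\tfrac12$, and the lemma yields $f\star_{1/2}f\in\cD_P^{1/2,w}$ with
$$
w=\Big(\big(0-\tfrac12\big)\wedge\big(0-\tfrac12\big)\wedge0,\;\big(-\tfrac12-\tfrac12\big)\wedge\big(-\tfrac12-\tfrac12\big)\wedge(-1),\;0\Big)=\big(-\tfrac12,-1,0\big)\;,
$$
so the product in general loses both the exponent $\gamma$ and the weight exponents $\eta$ and $\sigma$, while the exponents $\mu$ simply add up.
\end{remark}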
\begin{proof}
We fix a compact $\frK$ and assume, without loss of generality, that both $f_1$ and $f_2$ are of norm $1$ on $\frK$. Then, for $|x|_{P_0}\leq|x|_{P_1}$ and $l < \gamma$,
\begin{align*}
\|f(x)\|_l&\leq\sum_{l_1+l_2=l}\|f_1(x)\|_{l_1}\|f_2(x)\|_{l_2}
\leq\sum_{l_1+l_2=l}|x|_{P_1}^{\mu_1+\mu_2-l_1-l_2}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta_1-l_1)\wedge0+(\eta_2-l_2)\wedge0}
\\
&\leq|x|_{P_1}^{\mu-l}\sum_{l_1+l_2=l}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{{-l+\eta_1\wedge l_1+\eta_2\wedge l_2}}.
\end{align*}
It remains to notice that, since $l_i\geq\alpha_i$ for $i=1,2$, we have $\eta_1\wedge l_1+\eta_2\wedge l_2\geq \eta\wedge l$ by construction, and hence
$$
\|f(x)\|_l\lesssim |x|_{P_1}^{\mu-l}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l)\wedge0}.
$$
Next we bound $f(x)-\Gamma_{xy}f(y)$. As usual, we assume $|x,y|_{P_0}\leq|x,y|_{P_1}$.
For $l < \gamma$, the triangle inequality yields
\begin{align}
\|f(x)-\Gamma_{xy}f(y)\|_l&\leq\|\Gamma_{xy}f(y)-(\Gamma_{xy}f_1(y))\star(\Gamma_{xy}f_2(y))\|_l
\nonumber\\
&\quad+\|(\Gamma_{xy}f_1(y)-f_1(x))\star(\Gamma_{xy}f_2(y)-f_2(x))\|_l
\nonumber\\
&\quad+\|(\Gamma_{xy}f_1(y)-f_1(x))\star f_2(x)\|_l
\nonumber\\
&\quad+\|f_1(x)\star\big(\Gamma_{xy}f_2(y)-f_2(x)\big)\|_l.\label{multi1}
\end{align}
Thanks to the $\gamma$-regularity of $(V_1,V_2)$, the first term in this expression can be bounded by
\begin{align}
A&:=\|\Gamma_{xy}f(y)-(\Gamma_{xy}f_1(y))\star(\Gamma_{xy}f_2(y))\|_l
\nonumber
\\&
\leq\Big\|\sum_{m+n\geq\gamma}(\Gamma_{xy}\cQ_mf_1(y))\star(\Gamma_{xy}\cQ_nf_2(y))\Big\|_l
\nonumber
\\
&\leq\sum_{m+n\geq\gamma}\sum_{\beta_1+\beta_2=l}\|\Gamma_{xy}\cQ_mf_1(y)\|_{\beta_1}\|\Gamma_{xy}\cQ_nf_2(y)\|_{\beta_2}
\nonumber
\\
&\leq\sum_{m+n\geq\gamma}\sum_{\beta_1+\beta_2=l}\|\Gamma\|_{\gamma_1+\gamma_2}^2\|f_1(y)\|_m\|f_2(y)\|_n\|x-y\|_\frs^{m+n-\beta_1-\beta_2}.\label{multi3}
\end{align}
The factor $\|\Gamma\|_{\gamma_1+\gamma_2}^2$ can of course be incorporated into the proportionality constant,
but it will be useful in the sequel to view the dependence on it as above. We can continue by writing
\begin{align}
A&\lesssim\sum_{m+n\geq\gamma}\|x-y\|_\frs^{m+n-l}\|f_1(y)\|_m\|f_2(y)\|_n
\nonumber\\
&\leq\|x-y\|_\frs^{\gamma-l}\sum_{m+n\geq\gamma}\|x-y\|_\frs^{m+n-\gamma}
|y|_{P_1}^{\mu_1+\mu_2-m-n}
\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{(\eta_1-m)\wedge0+(\eta_2-n)\wedge0}
\nonumber\\
&\leq\|x-y\|_\frs^{\gamma-l}|y|_{P_1}^{\mu}|y|_{P_0}^{-\gamma}\sum_{m+n\geq\gamma}|y|_{P_0}^{m+n}|y|_{P_1}^{-m-n}\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{(\eta_1-m)\wedge0+(\eta_2-n)\wedge0}
\nonumber\\
&=\|x-y\|_\frs^{\gamma-l}|y|_{P_1}^{\mu}|y|_{P_0}^{-\gamma}\sum_{m+n\geq\gamma}\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{\eta_1\wedge m+\eta_2\wedge n},\label{multi2}
\end{align}
where we used $\|x-y\|_\frs\leq|y|_{P_0}$ to get the third line. As before, we have $\eta_1\wedge m+\eta_2\wedge n\geq\eta\wedge\gamma=\eta$, and recalling that $|y|_{P_i}\sim|x,y|_{P_i}$ we see that this is indeed the bound we need in \eqref{def:spaces}. The second term on the right-hand side of \eqref{multi1} is bounded by a constant times
\begin{align*}
\sum_{m+n=l}&\|\Gamma_{xy}f_1(y)-f_1(x)\|_m\|\Gamma_{xy}f_2(y)-f_2(x)\|_n
\\
&\leq\sum_{m+n=l}\|x-y\|_\frs^{\gamma_1+\gamma_2-m-n}|x|_{P_1}^{\mu_1+\mu_2-\eta_1-\eta_2}|x|_{P_0}^{\eta_1+\eta_2-\gamma_1-\gamma_2}
\\
&\lesssim\|x-y\|_\frs^{\gamma-l}|x|_{P_1}^{\mu-\eta_1-\eta_2}|x|_{P_0}^{\eta-\eta_1-\eta_2}|x|_{P_0}^{\eta-\gamma_1-\gamma_2}\|x-y\|_\frs^{\gamma_1+\gamma_2-\gamma}.
\end{align*}
Since $\gamma_1+\gamma_2\geq\gamma$, $\eta_1+\eta_2\geq\eta$, and $\|x-y\|_\frs\leq|x|_{P_0}\leq|x|_{P_1}$, this gives the required bound. The third term on the right-hand side of \eqref{multi1} is bounded by a constant times
\begin{align}
\sum_{m+n=l}&\|\Gamma_{xy}f_1(y)-f_1(x)\|_m\|f_2(x)\|_n
\nonumber\\
&\lesssim\sum_{m+n=l}\|x-y\|_\frs^{\gamma_1-m}|x|_{P_1}^{\mu_1-\eta_1}|x|_{P_0}^{\eta_1-\gamma_1}|x|_{P_1}^{\mu_2-n}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta_2-n)\wedge0}
\nonumber\\
&\leq \|x-y\|_\frs^{\gamma-l}\sum_{m+n=l}\|x-y\|_\frs^{\gamma_1+n-\gamma}|x|_{P_1}^{\mu-\eta_1-\eta_2\wedge n}|x|_{P_0}^{\eta_1-\gamma_1+\eta_2\wedge n-n}
\nonumber\\
&\lesssim\|x-y\|_\frs^{\gamma-l}|x|_{P_1}^{\mu-\eta}
\sum_{m+n=l}\|x-y\|_\frs^{\gamma_1+n-\gamma}|x|_{P_1}^{\eta-\eta_1-\eta_2\wedge n}|x|_{P_0}^{\eta_1-\gamma_1+\eta_2\wedge n-n}.\label{multi4}
\end{align}
Inside the sum, the exponent of $\|x-y\|_\frs$ is nonnegative, due to the relation $\gamma\leq\gamma_1+\alpha_2$, while the exponent of $|x|_{P_1}$ is nonpositive, due to $\eta\leq\eta_1+\eta_2\wedge\alpha_2$. Using $\|x-y\|_\frs\leq|x|_{P_0}\leq|x|_{P_1}$ as before, we get the required bound. Finally, the fourth term on the right-hand side of \eqref{multi1} is bounded similarly, reversing the roles played by $f_1$ and $f_2$.
To prove the continuity estimate \eqref{eq:multi cont}, we of course need only consider the first part of the definition of $\vn{\cdot;\cdot}$; the bound on the second part already follows from the above by linearity. We then write
\begin{align}
f(x)-g(x)&-\Gamma_{xy}f(y)+\bar\Gamma_{xy}g(y)
\nonumber\\
&=-\Gamma_{xy}f(y)+\bar\Gamma_{xy}g(y)+\Gamma_{xy}f_1(y)\star\Gamma_{xy}f_2(y)-\bar\Gamma_{xy}g_1(y)\star\bar\Gamma_{xy}g_2(y)
\nonumber\\
&\quad+(f_1(x)-g_1(x)-\Gamma_{xy}f_1(y)+\bar\Gamma_{xy}g_1(y))\star f_2(x)
\nonumber\\
&\quad+\Gamma_{xy}f_1(y)\star(f_2(x)-g_2(x)-\Gamma_{xy}f_2(y)+\bar\Gamma_{xy}g_2(y))
\nonumber\\
&\quad+\bar\Gamma_{xy}(g_1(y)-f_1(y))\star(\bar\Gamma_{xy}g_2(y)-g_2(x))
\nonumber\\
&\quad+(\bar\Gamma_{xy}f_1(y)-\Gamma_{xy}f_1(y))\star(\bar \Gamma_{xy}g_2(y)-g_2(x))
\nonumber\\
&\quad+(g_1(x)-\bar \Gamma_{xy}g_1(y))\star(f_2(x)-g_2(x))
\nonumber\\
&=:T_0+T_1+T_2+T_3+T_4+T_5.
\end{align}
For $T_0$, repeating the argument in \eqref{multi3}, we need to estimate, for $m+n\geq\gamma$, terms of the form
\begin{align*}
\Gamma_{xy}\cQ_mf_1(y)\star\Gamma_{xy}\cQ_nf_2(y)&-\bar\Gamma_{xy}\cQ_mg_1(y)\star\bar\Gamma_{xy}\cQ_ng_2(y)
\\
&=\Gamma_{xy}\cQ_mf_1(y)\star\Gamma_{xy}(\cQ_nf_2(y)-\cQ_ng_2(y))
\\
&\quad+\Gamma_{xy}\cQ_mf_1(y)\star(\Gamma_{xy}\cQ_ng_2(y)-\bar\Gamma_{xy}\cQ_ng_2(y))
\\
&\quad+\Gamma_{xy}(\cQ_mf_1(y)-\cQ_mg_1(y))\star\bar\Gamma_{xy}\cQ_ng_2(y)
\\
&\quad+(\Gamma_{xy}\cQ_mg_1(y)-\bar\Gamma_{xy}\cQ_mg_1(y))\star\bar\Gamma_{xy}\cQ_ng_2(y).
\end{align*}
Continuing as in \eqref{multi3}, we get
\begin{align*}
\|T_0\|_l\lesssim\sum_{m+n\geq\gamma}\|x-&y\|_\frs^{m+n-l}\Big[\|f_1(y)\|_m\|f_2(y)-g_2(y)\|_n+\|f_1(y)\|_m\|\Gamma-\bar\Gamma\|_{\gamma_1+\gamma_2}\|g_2(y)\|_n
\\
&\quad\quad
+\|f_1(y)-g_1(y)\|_m\|g_2(y)\|_n+\|\Gamma-\bar\Gamma\|_{\gamma_1+\gamma_2}\|g_1(y)\|_m\|g_2(y)\|_n\Big].
\end{align*}
From here we get the desired bound \eqref{eq:multi cont} by repeating the calculation in \eqref{multi2}.
For the further terms, we shall make use of the fact that for any $\bar\gamma$, $\bar w$, and $h\in\cD_P^{\bar \gamma,\bar w}$, and for pairs $(x,y)$ under consideration, $\Gamma_{xy}h(y)$ satisfies bounds analogous to those on $h(x)$:
\begin{align}
\|\Gamma_{xy}h(y)\|_l&\leq\sum_{m\geq l}\|x-y\|_\frs^{m-l}\|h(y)\|_m
\lesssim\sum_{m\geq l}\|x-y\|_\frs^{m-l}|y|_{P_1}^{\bar\mu-m}
\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{(\bar\eta-m)\wedge0}
\nonumber\\
&\lesssim |x|_{P_1}^{\bar\mu-l}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\bar\eta-l)\wedge0}.\label{eq:multi Gamma}
\end{align}
For $T_1$, we write
\begin{align*}
\|T_1\|_l\lesssim\vn{f_1;g_1}_{\gamma_1,w_1}\sum_{m+n=l}\|x-y\|_\frs^{\gamma_1-m}|x|_{P_1}^{\mu_1-\eta_1}|x|_{P_0}^{\eta_1-\gamma_1}|x|_{P_1}^{\mu_2-n}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta_2-n)\wedge0},
\end{align*}
and as we recognise the sum from \eqref{multi4}, the required bound follows.
For $T_2$, we use \eqref{eq:multi Gamma} with $h=f_1$, and then proceed just like for $T_1$, with the role of the indices reversed.
To bound $T_3$, we use \eqref{eq:multi Gamma}, this time with $h=g_1-f_1$, to get
\begin{align*}
\|T_3\|_{l}\leq\|f_1-g_1\|_{\gamma_1,w_1}\sum_{m+n=l}|y|_{P_1}^{\mu_1-m}\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{(\eta_1-m)\wedge0}\|x-y\|_\frs^{\gamma_2-n}|x|_{P_1}^{\mu_2-\eta_2}|x|_{P_0}^{\eta_2-\gamma_2},
\end{align*}
and the sum is again of the same form.
The bound for the term $T_5$ is obtained similarly to that for $T_3$, with the indices reversed, and so is the bound for $T_4$, with the only difference that the prefactor of the sum is $\|\Gamma-\bar\Gamma\|_{\gamma_1+\gamma_2}\vn{f_1}_{\gamma_1,w_1}$.
\end{proof}
\subsection{Composition with smooth functions}
\begin{lemma}\label{lem:comp}
Let $V$ be a sector of regularity $0$ with $V_0 = \scal{\bone}$
that is $\gamma$-regular with respect to the product $\star$ and furthermore $V\star V\subset V$.
Let $f_1,\ldots,f_n\in\cD_P^{\gamma,w}(V)$ with $w=(\eta,\sigma,\mu)$ such that $\eta,\sigma,\mu\geq0$, and write $f=(f_1,\ldots,f_n)$. Let furthermore $F:{\mathbb{R}}^n\rightarrow{\mathbb{R}}$ be a smooth function. Then $\hat F_\gamma(f)$ belongs to $\cD_P^{\gamma,w}(V)$. Furthermore, $\hat F_\gamma:\cD_P^{\gamma,w}\rightarrow\cD_P^{\gamma,w}$ is locally Lipschitz continuous in any of the seminorms $\|\cdot\|_{\gamma,w;\frK}$ and $\vn{\cdot}_{\gamma,w;\frK}$.
\end{lemma}
\begin{remark}\label{remark:comp boxnorm}
If two modelled distributions $f$, $\bar f$ are such that $f-\bar f\in\cD_{P,\{1\}}^{\gamma,w}$, then clearly $\hat F_\gamma(f)-\hat F_\gamma(\bar f)$ also has $0$ limit at $P_1\setminus P_0$. In this case the analogous Lipschitz bound for $\hat F$ in the seminorms $\bn{\cdot}_{\gamma,w;\frK}$ also holds.
\end{remark}
\begin{remark}
One can use the same construction as in \cite[Prop.~3.11]{HP15} to obtain local Lipschitz continuity when
comparing two modelled distributions modelled on two different models.
\end{remark}
\begin{proof}
We only give a sketch of the proof, as the majority of the argument is exactly the same as that of the proof of Theorem~4.16 and Proposition~6.12 in \cite{H0}. We prove the main estimates which are somewhat different due to the additional weights and refer the reader to \cite{H0} to confirm that these indeed imply the theorem.
As usual, we consider the situation $2\|x-y\|_\frs\leq|x,y|_{P_0}\leq|x,y|_{P_1}$. We denote $L=\lfloor \gamma/\zeta\rfloor$, where $\zeta$ is the lowest nonzero homogeneity such that $V_\zeta\neq\{0\}$, or, if that index is larger than $\gamma$, we set $\zeta=\gamma$. The essential quantities to bound are
\begin{align*}
R_1&:=\sum_{l:\sum l_i\geq\gamma}\Gamma_{xy}\cQ_{l_1}\tilde{f}(y)\star\cdots\star\Gamma_{xy}\cQ_{l_n}\tilde{f}(y),
\\
R_f&:=\Gamma_{yx}f(x)-f(y),
\\
R_2&:=\sum_{|k|\leq L}(\Gamma_{yx}\tilde{f}(x))^{\star k}-(\Gamma_{yx}\tilde{f}(x)+R_f)^{\star k},
\\
R_3&:=\sum_{|k|\leq L}|\bar f(x)-\bar f(y)|^{\gamma/\zeta-|k|}(\tilde{f}(y)-(\bar f(y)-\bar f(x))\bone)^{\star k},
\end{align*}
each of which has to be estimated in the following way, for all $\beta<\gamma$:
\begin{equation}\label{eq:comp bound}
\|R_i\|_\beta\lesssim\|x-y\|_\frs^{\gamma-\beta}|x,y|_{P_1}^{\mu-\eta}|x,y|_{P_0}^{\eta-\gamma}.
\end{equation}
Note that there is a slight abuse of notation here in that $R_f$ is vector-valued. By \eqref{eq:comp bound} we then understand that such an estimate holds for each coordinate, and this convention is applied in the other analogous situations below whenever vector-valued functions are considered.
We further invoke two elementary inequalities from the proof of \cite[Prop.~6.12]{H0}: for $\eta\geq0$, $n\in\mathbb{N}$, $l_1,\ldots,l_n\in\mathbb{N}$, we have
\begin{equation}\label{eq:comp1}
\sum_{i=1}^n (\eta-l_i)\wedge0\geq \Big(\eta-\sum_{i=1}^n l_i\Big)\wedge0,
\end{equation}
and for any multiindex $k$ with $|k|\leq L$, integer $0\leq m\leq |k|$, real numbers $0<\zeta\leq\gamma$, $0\leq \beta,\eta\leq\gamma$, and integers $l_1,\ldots,l_m$ satisfying $\sum l_i=\beta$ and $l_i\geq\zeta$, it holds that
\begin{align}
N+M:=\Big[&(|k|\zeta-\gamma-|k|\eta+(\gamma\eta/\zeta))\wedge0\Big]
\nonumber\\
&+\Big[\beta-\zeta m+(|k|-m)((\eta-\zeta)\wedge0)+\sum_{i=1}^m(\eta-l_i)\wedge0\Big]\geq\eta-\gamma.\label{eq:comp2}
\end{align}
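For the reader's convenience, let us briefly indicate the argument for \eqref{eq:comp1}: if no $l_i$ exceeds $\eta$, the left-hand side vanishes and the claim is trivial, since the right-hand side is nonpositive; otherwise, denoting by $I\neq\emptyset$ the set of indices with $l_i>\eta$, we have
$$
\sum_{i=1}^n(\eta-l_i)\wedge0=\sum_{i\in I}(\eta-l_i)=|I|\eta-\sum_{i\in I}l_i\geq\eta-\sum_{i=1}^nl_i,
$$
using $|I|\geq1$, $\eta\geq0$, and $l_i\geq0$.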
The term $R_1$ looks very similar to what we encountered in \eqref{multi3}, and indeed by the same argument we can write
\begin{align*}
\|R_1\|_\beta&\lesssim\sum_{\sum l_i\geq\gamma}\|x-y\|_\frs^{\sum l_i-\beta}\prod_i\|\tilde{f}(y)\|_{l_i}
\\
&\lesssim\|x-y\|_\frs^{\gamma-\beta}\sum_{\sum l_i\geq\gamma}\|x-y\|_\frs^{\sum l_i-\gamma}\prod_i|y|_{P_1}^{\mu-l_i}\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{(\eta-l_i)\wedge0}
\\
&\lesssim\|x-y\|_\frs^{\gamma-\beta}\sum_{\sum l_i\geq\gamma}|y|_{P_0}^{-\gamma}|y|_{P_1}^{n\mu}\left(\frac{|y|_{P_0}}{|y|_{P_1}}\right)^{\sum(\eta-l_i)\wedge0+\sum l_i}.
\end{align*}
By \eqref{eq:comp1}, the exponent of the fraction above is bounded from below by $\eta\wedge\sum l_i=\eta$, and since $n\mu\geq\mu$ due to $\mu$ being nonnegative, this yields the required bound.
The bound for $R_f$ follows from the definition. For $R_2$, notice that
\begin{align*}
\|\Gamma_{yx}\tilde{f}(x)\|_l&\lesssim\sum_{l'\geq l}\|x-y\|_\frs^{l'-l}\|\tilde{f}(x)\|_{l'}
\\&
\lesssim\sum_{l'\geq l}\|x-y\|_\frs^{l'-l}|x|_{P_1}^{\mu-l'}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l')\wedge0}
\lesssim\|x-y\|_\frs^{-l}.
\end{align*}
Therefore, for any nonzero multiindex $m$ and any multiindex $m'$,
\begin{align*}
\|R_f^{\star m}\star(\Gamma_{yx}\tilde{f}(x))^{\star m'}\|_\beta&\lesssim\sum_{\substack{l_1+\ldots +l_{|m|} \\ +l'_1+\ldots+l'_{|m'|}=\beta}}\prod_{i=1}^{|m|}\|x-y\|_\frs^{\gamma-l_i}|x|_{P_1}^{\mu-\eta}|x|_{P_0}^{\eta-\gamma}\prod_{i'=1}^{|m'|}\|x-y\|_\frs^{-l'_{i'}}
\\
&\lesssim\|x-y\|_\frs^{\gamma-\beta}|x|_{P_1}^{\mu-\eta}|x|_{P_0}^{\eta-\gamma}\left(\|x-y\|_\frs^{\gamma}|x|_{P_1}^{\mu-\eta}|x|_{P_0}^{\eta-\gamma}\right)^{|m|-1},
\end{align*}
and since the quantity in the parentheses is of order one due to $\gamma,\eta,\mu\geq0$ and $\|x-y\|_\frs\leq|x,y|_{P_0}\leq|x,y|_{P_1}$, the bound \eqref{eq:comp bound} for $R_2$ follows.
For $R_3$, fix $k$ and first write
\begin{align}
|\bar f(x)-\bar f(y)|&\leq\|\Gamma_{xy}\tilde{f}(y)\|_0+\|f(x)-\Gamma_{xy}f(y)\|_0
\nonumber\\
&\lesssim\sum_{\zeta\leq l\leq\gamma}\|x-y\|_\frs^l|x|_{P_1}^{\mu-l}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l)\wedge0}\label{eq:semmmi}
,\end{align}
where $l$ runs over indices in $A\cup\{\gamma\}$ in the specified range. If the exponent of $\|x-y\|_\frs$ were $l-\zeta$ instead of $l$, we would be in the exact same situation as in \eqref{eq:multi Gamma}. Taking this extra $\|x-y\|_\frs^\zeta$ out of the sum, we therefore get the bound
\begin{equation}\label{eq:comp4}
|\bar f(x)-\bar f(y)|\lesssim\|x-y\|_\frs^{\zeta}|x|_{P_1}^{\mu-\zeta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\zeta)\wedge0},
\end{equation}
and, recalling the notation $N$ from \eqref{eq:comp2},
\begin{equation}\label{eq:comp3}
|\bar f(x)-\bar f(y)|^{\gamma/\zeta-|k|}\lesssim\|x-y\|_\frs^{\gamma-|k|\zeta}|x|_{P_1}^{(\gamma/\zeta-|k|)(\mu-\zeta)}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{N}.
\end{equation}
Moving to the other constituent of $R_3$, by \eqref{eq:comp4} and the bounds on $\tilde{f}(y)$ from the definition of the spaces $\cD_P^{\gamma,w}$, we can write
\begin{align*}
\|(\tilde{f}&(y)-(\bar f(y)-\bar f(x))\bone)^{\star k}\|_\beta
\\
&\lesssim\sum_{0\leq m\leq|k|}\sum_{\substack{\sum_{i=1}^m l_i=\beta \\ l_i\geq\zeta}}
\|x-y\|_\frs^{\zeta(|k|-m)}|x|_{P_1}^{(\mu-\zeta)(|k|-m)}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{((\eta-\zeta)\wedge0)(|k|-m)}
\\
&\quad\quad\times\prod_{i=1}^m|x|_{P_1}^{\mu-l_i}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l_i)\wedge0}.
\end{align*}
As the sum has finitely many terms, it suffices to treat them separately, and therefore we fix $m$ and $l_i$ as above. Then, since $\beta=\sum l_i\geq\sum\zeta=m\zeta$, we obtain the bound
\begin{align*}
\|x-y\|_\frs^{\zeta|k|-\beta}|x|_{P_0}^{\beta-m\zeta}|x|_{P_1}^{|k|\mu-\zeta|k|}|x|_{P_1}^{m\zeta-\beta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{((\eta-\zeta)\wedge0)(|k|-m)+\sum(\eta-l_i)\wedge0}.
\end{align*}
Moving the second and fourth factor into the fifth one, we get that the exponent of the fraction above becomes $M$, as defined in \eqref{eq:comp2}. Combining this with \eqref{eq:comp3}, we get
\begin{align*}
\|R_3\|_\beta\lesssim\|x-y\|^{\gamma-\beta}|x|_{P_1}^{(\gamma/\zeta)\mu-\gamma}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{N+M},
\end{align*}
and by \eqref{eq:comp2} and the fact that $(\gamma/\zeta)\mu\geq\mu$, we arrive at \eqref{eq:comp bound} for $R_3$.
\end{proof}
\subsection{Reconstruction}\label{subsec:reco}
Recall that, since reconstruction is a local operation, there exists an element $\tilde{\cR}f$ in the dual of smooth functions supported away from $P$ such that the bound \eqref{eq:standard reco estimate} is satisfied if $\lambda\ll|x|_{P_0}\wedge|x|_{P_1}$. A natural guess for the target space of the extension of the reconstruction operator acting on $\cD^{\gamma,w}_P(V)$ would be $\cC^{\eta\wedge\sigma\wedge\mu\wedge\alpha}$.
While this certainly does hold, we need some finer control over the behaviour at the different boundaries. To this end, we introduce weighted versions of H\"older spaces as follows.
\begin{definition}\label{def:weightedHolder}
Let $a = (a_0,a_1,a_\cap) \in {\mathbb{R}}_-^3$, write $a_\wedge = a_0\wedge a_1\wedge a_{\cap}$, and let $P = (P_0,P_1)$ be as above. Then, we define $\cC^{a}_P$ as the set of distributions $u\in\cC^{a_\wedge}$ that furthermore satisfy the following two properties.
\begin{enumerate}[(a)]
\item For any $x\in\{|x|_{P_0}\leq2|x|_{P_1}\}$, $\lambda\in(0,1]$ satisfying $2\lambda\leq |x|_{P_1}$, and every $\psi\in\cB^r$, where $r=\lceil-a_0+1\rceil$,
\begin{equation}\label{eq:defHolder1}
|u(\psi_x^{\lambda})|\lesssim |x|_{P_1}^{a_{\cap}-a_0}\lambda^{a_0}.
\end{equation}
\item For any $x\in\{|x|_{P_1}\leq2|x|_{P_0}\}$, $\lambda\in(0,1]$ satisfying $2\lambda\leq |x|_{P_0}$, and every $\psi\in\cB^r$, where $r=\lceil-a_1+1\rceil$,
\begin{equation}\label{eq:defHolder2}
|u(\psi_x^{\lambda})|\lesssim |x|_{P_0}^{a_{\cap}-a_1}\lambda^{a_1}.
\end{equation}
\end{enumerate}
For a compact $\frK$, the maximum of the best proportionality constants in \eqref{eq:defHolder1} and \eqref{eq:defHolder2} over $x\in\frK$ is denoted by $\|u\|_{a;\frK}$.
\end{definition}
\begin{proposition}\label{prop:extension3}
Let $u \in \CD'({\mathbb{R}}^d\setminus (P_0\cap P_1))$ be such that the bounds \eqref{eq:defHolder1}-\eqref{eq:defHolder2} are satisfied.
Then, provided $a_\wedge>-\frm$, there exists a unique distribution
$u'\in\cC^{a}_P$ that agrees with $u$ on test functions supported away from $P_0\cap P_1$.
\end{proposition}
\begin{proof}
Such a $u'$ clearly satisfies (a)-(b) of Definition~\ref{def:weightedHolder}, so it only needs to be shown that there exists a unique extension of $u$ in $\cC^{a_\wedge}$.
By Proposition~\ref{prop:extension}, it suffices to obtain the bound
\begin{equation}\label{eq:reco2}
|u(\psi_x^{\lambda})|\lesssim \lambda^{a_\wedge}\;,
\end{equation}
uniformly over $\psi \in \cB^r$ (for some fixed large enough $r$)
and $\lambda \in (0,1]$, for $c\lambda\leq d_\frs(x,P_0\cap P_1)$ with some fixed $c>1$. For sufficiently large $c$ (depending only on the dimension), one can find smooth functions $\phi_i^{(\lambda)}$ with $i = 0,1$ with the following properties:
\begin{enumerate}[(i)]
\item The $\phi_i^{(\lambda)}$ are supported on $\{x: |x|_{P_i}\geq4\lambda,2 |x|_{P_i}\geq |x|_{P_{1-i}}\}$.
\item If $x\in{\mathbb{R}}^d$ is such that $d_\frs(x,P_0\cap P_1)\geq (c-1)\lambda,$ then $
\phi_0^{(\lambda)}(x)+\phi_1^{(\lambda)}(x)=1$.
\item For any multiindex $k$, the bound $|D^k\phi_i^{(\lambda)}(x)|\lesssim \lambda^{-|k|_\frs}$ is satisfied for all $x\in{\mathbb{R}}^d$.
\end{enumerate}
The functions $\psi_x^\lambda\phi_i^{(\lambda)}$ then satisfy the bounds
$$
\sup_{y\in{\mathbb{R}}^d}|D^k(\psi_x^\lambda\phi_i^{(\lambda)})(y)|\lesssim \lambda^{-|\frs|-|k|_\frs}
$$
and have support with diameter less than $2\lambda$. One can therefore find points $z_i$ with $2|z_i|_{P_i}\geq|z_i|_{P_{1-i}} \vee 8\lambda$, as well as
functions $\xi^{(i,\lambda)} \in \cB^r$ such that $
\psi_x^\lambda\phi_i^{(\lambda)}=\xi_{z_i}^{(i,\lambda),2\lambda}
$. Applying the estimates \eqref{eq:defHolder1} and \eqref{eq:defHolder2} to $\xi^{(1,\lambda)}$ and $\xi^{(0,\lambda)}$, respectively, we get
$$
|u(\psi_x^\lambda)| \le |u(\xi_{z_0}^{(0,\lambda),2\lambda})|+|u(\xi_{z_1}^{(1,\lambda),2\lambda})|\lesssim\lambda^{(a_\cap-a_1)\wedge0+a_1}+\lambda^{(a_\cap-a_0)\wedge0+a_0},
$$
and since the minimum of the two exponents on the right-hand side is $a_\wedge$, \eqref{eq:reco2} holds indeed.
\end{proof}
\begin{theorem}\label{thm:reco}
Let $f\in\cD_P^{\gamma,w}(V)$, where $V$ is a sector of regularity $\alpha$ and suppose that
\begin{equation}\label{eq:exponents}
\eta\wedge\alpha>-\frm_0,\quad\sigma\wedge\alpha>-\frm_1,\quad\mu>-\frm\;.
\end{equation}
Then, setting $a=(\eta\wedge\alpha,\sigma\wedge\alpha,\mu)$, there exists a unique distribution
$$
\cR f\in \cC^{a}_P
$$
such that $(\cR f)(\psi)=(\tilde{\cR} f)(\psi)$ for smooth test functions that are compactly supported away from $P$. In particular, $\cR f\in\cC^{a_\wedge}$.
Moreover, if $(\bar\Pi,\bar\Gamma)$ is another model for $\scT$ and $f\in\cD_{P}^{\gamma,w}(V,\Gamma)$, $\bar f\in\cD_{P}^{\gamma,w}(V,\bar\Gamma)$, then one has the bounds, for any $C>0$ and $\frK$ compact
\begin{equation}\label{eq:reco9}
\|\cR f-\bar\cR\bar f\|_{a;\frK}\lesssim \vn{f;\bar f}_{\gamma,w;\bar\frK}+\|\Pi-\bar\Pi\|_{\gamma,\bar\frK}+\|\Gamma-\bar\Gamma\|_{\gamma,\bar\frK},
\end{equation}
uniformly in $f,\bar f$, and the two models being bounded by $C$, where $\bar\frK$ denotes the $1$-fattening of $\frK$.
\end{theorem}
\begin{proof}
By virtue of Proposition~\ref{prop:extension3}, it suffices to extend $\tilde\cR f$ to an element of
$\CD'({\mathbb{R}}^d \setminus (P_0\cap P_1))$ in such a way that \eqref{eq:defHolder1}-\eqref{eq:defHolder2} hold with the
desired exponents.
By \eqref{eq:standard reco estimate}, it holds, uniformly in $x\in\{|x|_{P_0}\leq2|x|_{P_1}\}$ over compacts, uniformly in $\psi\in\cB^r$, and uniformly in $\lambda\in(0,1]$ such that $4\lambda\leq|x|_{P_0}$, that
\begin{equation}\label{eq:reco6}
|(\tilde{\cR}f-\Pi_xf(x))(\psi_x^\lambda)|\lesssim \lambda^\gamma|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}\lesssim\lambda^{\eta}|x|_{P_1}^{\mu-\eta}.
\end{equation}
Also, in the same situation, we have
\begin{equation}
|(\Pi_xf(x))(\psi_x^\lambda)|\lesssim \sum_l\lambda^l|x|_{P_1}^{\mu-l}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-l)\wedge0}.
\end{equation}
Since $\lambda\lesssim|x|_{P_0}\wedge|x|_{P_1}$, this sum is of the same form that we encountered before, for example in \eqref{eq:semmmi}. By the same argument we get
\begin{equation}\label{eq:reco again-1}
|(\Pi_xf(x))(\psi_x^\lambda)|\lesssim \lambda^\alpha|x|_{P_1}^{\mu-\alpha}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\alpha)\wedge0}\lesssim \lambda^{\eta\wedge\alpha}|x|_{P_1}^{\mu-(\eta\wedge\alpha)}.
\end{equation}
Combining this with \eqref{eq:reco6}, by Proposition~\ref{prop:extension} we can extend $\tilde\cR f$ to an element $\tilde\cR_0 f\in \CD'({\mathbb{R}}^d \setminus P_1)$ such that the bound
\begin{equation}\label{eq:rec again0}
|(\tilde\cR_0f)(\psi_x^\lambda)|\lesssim\lambda^{\eta\wedge\alpha}|x|_{P_1}^{\mu-(\eta\wedge\alpha)}
\end{equation}
holds uniformly in $x\in\{|x|_{P_0}\leq2|x|_{P_1}\}$ over compacts, uniformly in $\psi\in\cB^r$, and uniformly in $\lambda\in(0,1]$ such that $2\lambda\leq|x|_{P_1}$.
One can similarly construct $\tilde{\cR}_1 f \in \CD'({\mathbb{R}}^d \setminus P_0)$ such that
$
|(\tilde\cR_1f)(\psi_x^\lambda)|\lesssim\lambda^{\sigma\wedge\alpha}|x|_{P_0}^{\mu-(\sigma\wedge\alpha)}
$
holds in the symmetric situation. Since $\tilde \cR_0 f$ and $\tilde\cR_1 f$ agree on the intersection of their domains, they can be pieced together to get the claimed extension of $\tilde{\cR} f$.
The proof of continuity is again analogous and is omitted here.
\end{proof}
Keeping in mind that our goal is to apply this calculus to singular SPDEs with boundary conditions on some domain $D$, $P_1$ will typically stand for ${\mathbb{R}}\times \partial D$. With a parabolic scaling we have $\frm_1=1$, and so condition \eqref{eq:exponents}, in particular the requirement $\sigma\wedge\alpha>-1$, is rather strict and will often be violated. In these situations a $\cC^{(\eta\wedge\alpha,\sigma\wedge\alpha,\mu)}_P$ extension of $\tilde\cR f$ is not unique, and hence it will sometimes be more suggestive to write $\hat\cR f$ for particular choices of such extensions.
On some occasions this choice will be made `by hand', but there is also another generic situation when a canonical choice can be made, as follows.
\begin{theorem}\label{thm:reco hat'}
Let $f\in\cD_{P,\{1\}}^{\gamma,w}(V)$, where $V$ is a sector of regularity $\alpha$, and let $\gamma>0$ and $w$ be such that
\begin{equs}[eq:exponents']
0>\sigma>-\frm_1\geq\alpha,\quad
\eta\wedge\alpha>-\frm_0,\quad\mu>-\frm\;.
\end{equs}
Then there exists a unique distribution
$
\hat \cR f\in\cC^{(\eta\wedge\alpha,\alpha,\mu)}_P
$
such that for smooth functions $\psi$ compactly supported away from $P$, $\hat\cR f(\psi)=\tilde \cR f(\psi)$ and that furthermore,
\begin{equation}\label{eq:reco on P1}
|\hat\cR f(\psi_x^\lambda)|\lesssim \lambda^\sigma|x|_{P_0}^{\mu-\sigma}
\end{equation}
holds uniformly in $x$ over relatively compact subsets of $P_1\setminus P_0$, in $\psi\in\cB^r$, and in $\lambda\in(0,1]$ such that $2\lambda\leq|x|_{P_0}$.
Moreover, if $(\bar\Pi,\bar\Gamma)$ is another model for $\scT$ and $f\in\cD_{P,\{1\}}^{\gamma,w}(V,\Gamma)$, $\bar f\in\cD_{P,\{1\}}^{\gamma,w}(V,\bar\Gamma)$, then one has the bound, for all $C>0$ and compact $\frK$
\begin{equation}\label{eq:reco7'}
\|\hat\cR f-\hat{\bar{\cR}} \bar f\|_{\eta\wedge\alpha,\alpha,\mu;\frK}\lesssim \vn{f;\bar f}_{\gamma,w;\bar\frK}+\|\Pi-\bar\Pi\|_{\gamma,\bar\frK}+\|\Gamma-\bar\Gamma\|_{\gamma,\bar\frK},
\end{equation}
uniformly in $f,\bar f$, and the two models being bounded by $C$, where $\bar\frK$ denotes the $1$-fattening of $\frK$.
Finally, if for all $a\in V$, $\Pi_x a$ is a continuous function, then
\begin{equation}\label{eq:reco8'}
\hat\cR f(\psi)=\int_{{\mathbb{R}}^d\setminus P}(\Pi_xf(x))(x)\,\psi(x)\,dx\;.
\end{equation}
\end{theorem}
\begin{proof}
First notice that such a $\hat\cR f$ has to be unique: any two extensions of $\tilde{\cR} f$ differ by
a distribution concentrated on $P$, which, due to the conditions on the exponents and the constraint \eqref{eq:reco on P1}, has to vanish.
An extension $\tilde\cR_0 f$ with the `right behaviour' on ${\mathbb{R}}^d\setminus P_1$ is constructed in the proof of Theorem~\ref{thm:reco}. Concerning the behaviour outside $P_0$ we claim that, with $\hat f = \cQ_\sigma^- f$, it suffices to construct an extension $\hat \cR_1 f\in\cD'({\mathbb{R}}^d\setminus P_0)$ of $\tilde{\cR} f$ that satisfies the bound
\begin{equation}\label{eq:reco01}
|(\hat\cR_1 f-\Pi_x\hat f(x))(\psi_x^\lambda)|\lesssim \lambda^\sigma|x|_{P_0}^{\mu-\sigma}
\end{equation}
uniformly in $x\in\{|x|_{P_1}\leq2|x|_{P_0}\}$ over compacts, uniformly in $\psi\in\cB^r$, and uniformly in $\lambda\in(0,1]$ such that $2\lambda\leq|x|_{P_0}$.
Indeed, \eqref{eq:reco on P1} then follows from the fact that $\hat f(x) = 0$ for $x \in P_1\setminus P_0$
by the definition of $\cD_{P,\{1\}}^{\gamma,w}$.
Furthermore, by Propositions~\ref{prop3} and~\ref{prop:1}, we have
$$
|(\Pi_x\hat f(x))(\psi_x^\lambda)|\lesssim\sum_{\alpha\leq l<\sigma}|x|_{P_0}^{\mu-\sigma}|x|_{P_1}^{\sigma-l}\lambda^l
\lesssim|x|_{P_0}^{\mu-\alpha}\lambda^\alpha\;,
$$
where the last bound follows from the facts that $|x|_{P_1} \le |x|_{P_0}$, $\alpha \le l$, and $\lambda \le |x|_{P_0}$.
Therefore, by \eqref{eq:reco01}, the same bound holds for $\hat \cR_1 f$, and so piecing $\tilde\cR_0 f$ and $\hat \cR_1 f$ together, the resulting element of $\cD'({\mathbb{R}}^d\setminus (P_0\cap P_1))$ satisfies the conditions of Proposition~\ref{prop:extension3} with $a_0=\eta\wedge\alpha$, $a_1=\alpha$, and $a_\cap=\mu$. Applying the proposition, we get the claimed $\hat\cR f$. Notice further that it is in fact enough to show \eqref{eq:reco01} for each $m\in\mathbb{N}$ in the case where $x$ is further restricted to run over $A_m:=\{|x|_{P_0}\in[2^{-m-2},2^{-m}]\}$. Indeed, all functions $\psi_x^\lambda$ that are considered in \eqref{eq:reco01} have support that intersects at most two of the $A_m$'s, and therefore a straightforward partition of unity argument, like for instance the one in the proof of Proposition~\ref{prop:extension3}, completes the proof.
To get $\hat\cR_1f$ on $A_m$, first consider $\cR^m\hat f$ defined as in \eqref{eq:Rm},
which is a meaningful expression thanks to Proposition~\ref{prop3}. Furthermore, by
\eqref{eq:reco bound away negative m} and using Proposition~\ref{prop3}, one has the bound
\begin{equation}\label{eq:rec again2'}
|(\cR^m \hat f-\Pi_x\hat f(x))(\psi_x^\lambda)|\lesssim\lambda^{\sigma-\alpha}(\lambda\wedge|x|_{P_0})^{\alpha}|x|_{P_0}^{\mu-\sigma}\lesssim\lambda^\sigma|x|_{P_0}^{\mu-\sigma},
\end{equation}
uniformly in $x\in\{|x|_{P_1}\leq2|x|_{P_0}\}\cap A_m$ over compacts, uniformly over $\psi\in\cB^r$, and uniformly over $\lambda\in(0,1]$ such that $4\lambda\leq|x|_{P_1}$. One also has, by \eqref{eq:standard reco estimate} and the basic properties of the model,
\begin{align}
|(\tilde{\cR}f-\Pi_x\hat f(x))(\psi_x^\lambda)|&\leq|(\tilde{\cR}f-\Pi_x f(x))(\psi_x^\lambda)|+|(\Pi_xf(x)-\Pi_x\hat f(x))(\psi_x^\lambda)|\nonumber\\
&\lesssim \lambda^\gamma|x|_{P_1}^{\sigma-\gamma}|x|_{P_0}^{\mu-\sigma}+\sum_{l>\sigma}\lambda^l|x|_{P_0}^{\mu-\sigma}|x|_{P_1}^{\sigma-l}
\lesssim \lambda^{\sigma}|x|_{P_0}^{\mu-\sigma}\label{eq:rec again1'}
\end{align}
with the same uniformity. Thus the same bound holds for the difference $\tilde\cR f-\cR^m\hat f$, which therefore, by Proposition~\ref{prop:extension}, has a unique extension $\Delta_m \cR f\in\cD'(\{|x|_{P_1}\leq2|x|_{P_0}\}\cap A_m)$ for which the same bound holds even when $\lambda$ is only restricted by $2\lambda\leq|x|_{P_0}$. Hence $\cR^m \hat f+\Delta_m\cR f$ satisfies the required bound \eqref{eq:reco01} (on $A_m$), and it trivially agrees with $\tilde{\cR} f$ on functions supported away from $P$.
As for the last statement of the theorem, one simply has to check that the right-hand side of \eqref{eq:reco8'} satisfies the claimed properties. It trivially coincides with $\tilde\cR f$ away from $P$, and the bound \eqref{eq:reco on P1} follows from the fact that, thanks to Proposition~\ref{prop:1},
$$
|(\Pi_x f(x))(x)|\lesssim|x|_{P_0}^{\mu-\sigma}|x|_{P_1}^\sigma
$$
if $|x|_{P_1}\leq|x|_{P_0}$, where in this particular case the proportionality constant also depends on the local supremum bounds of the continuous functions $\Pi_x a$. Since this additional dependence does not affect the uniqueness part of the statement, the proof is complete.
\end{proof}
\subsection{Differentiation}
\begin{lemma}\label{lem:diff}
Let $\mathscr{D}$ be an abstract gradient and let $f\in\cD_P^{\gamma,w}(V)$, where $\gamma>\frs_i$ and $w=(\eta,\sigma,\mu)\in{\mathbb{R}}^3$. Then $\mathscr{D}_{i} f\in\cD_P^{\gamma-\frs_i,(\eta-\frs_i,\sigma-\frs_i,\mu-\frs_i)}$.
\end{lemma}
This lemma is a direct consequence of the definition of abstract gradients, and since the proof is a trivial modification of that of \cite[Prop.~5.28]{H0}, it is omitted here.
\subsection{Integration against singular kernels}
\label{sec:kernel}
As seen above, in certain situations the distribution $\cR f$ is not uniquely defined as there might be many distributions $\zeta$ with the appropriate regularity that extend $\tilde\cR f$. For any such $\zeta$, let us denote by $\cN_\gamma^\zeta f$ and $\cK^\zeta_\gamma f$ the modelled distributions defined analogously to $\cN_\gamma f$ and $\cK_\gamma f$, but with $\cR f$ replaced by $\zeta$.
Before stating the result on the integration operator in the weighted spaces, let us recall the following identities from \cite{H0}, which hold for any multiindex $k$,
with the usual convention that empty sums vanish:
\begin{equs}[eq:rearrange K 1]
(\Gamma_{xy}\cN_\gamma^{\zeta,(n)}f(y))_k&=\frac{1}{k!}\sum_{|k+l|_\frs<\gamma+\beta}
\frac{(x-y)^l}{l!}(\zeta-\Pi_x f(x))(D_1^{k+l}K_n(y,\cdot)),
\\
(\Gamma_{xy}\cJ^{(n)}(y)f(y))_k&=(\cJ^{(n)}(x)\Gamma_{xy}f(y))_k=\frac{1}{k!}\sum_{\delta>|k|_\frs-\beta}(\Pi_x\cQ_\delta\Gamma_{xy}f(y))(D_1^kK_n(x,\cdot)).\nonumber
\end{equs}
In particular, choosing $x=y$, these identities also cover the formulas for the coefficient of $X^k$ in $\cN_{\gamma}^{\zeta,(n)} f(x)$ and $\cJ^{(n)}(x)f(x)$, respectively.
Another nontrivial rearrangement of terms gives
\begin{align}
k!(\Gamma_{xy}\cN_\gamma^{\zeta,(n)}f(y)&+\Gamma_{xy}\cJ^{(n)}(y)f(y)-\cN_\gamma^{\zeta,(n)}f(x)-\cJ^{(n)}(x)f(x))_k
\nonumber\\
&=(\Pi_y f(y)-\zeta)(K_{n;xy}^{k,\gamma})
\nonumber\\
&\quad-\sum_{\delta\leq|k|_\frs-\beta}(\Pi_x\cQ_\delta(\Gamma_{xy}f(y)-f(x)))(D_1^k K_n(x,\cdot)),
\label{eq:rearrange K 0}
\end{align}
where we define, for $\alpha\in{\mathbb{R}}$,
$$
K_{n;xy}^{k,\alpha}(z)=D_1^kK_n(y,z)-\sum_{|k+l|_\frs<\alpha+\beta}\frac{(y-x)^l}{l!}D_1^{k+l}K_n(x,z).
$$
We will also make use of the fact that the following Taylor remainder formula holds:
\begin{equation}\label{eq:rearrange K Taylor}
K_{n;xy}^{k,\alpha}(z)=\sum_{l\in \partial A_{\alpha}}\int_{{\mathbb{R}}^d}D_1^{k+l}K_n(\bar y,z)Q^l(x-y,d\bar y),
\end{equation}
where all we need from the yet undefined objects is that $\partial A_{\alpha}$ is a finite set of multiindices $l$ which all satisfy $|l|_\frs\geq\alpha+\beta-|k|_\frs$ and that $Q^l(x-y,\cdot)$ is a measure supported on the set $\{\bar y:\|x-\bar y\|_\frs\leq\|x-y\|_\frs\}$, with total mass bounded by a constant times $\|x-y\|_\frs^{|l|_\frs}$.
For a proof of this, see for example \cite[Appendix~A]{H0}.
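For orientation, in the simplest scalar case $d=1$, $\frs=(1)$, $k=0$ and $\alpha+\beta\in(1,2)$, the formula \eqref{eq:rearrange K Taylor} reduces to the classical integral form of the Taylor remainder; the identification of the measure below is a sketch for this special case only, with $F:=K_n(\cdot,z)$ and, say, $y>x$:

```latex
% Here \partial A_\alpha = \{2\}, and Q^2(x-y,d\bar y) = (y-\bar y)\,d\bar y on [x,y],
% a measure supported in {\bar y : |x-\bar y| \le |x-y|} with total mass (y-x)^2/2,
% consistent with the stated bound of order \|x-y\|_\frs^{|l|_\frs}.
\begin{equation*}
K_{n;xy}^{0,\alpha}(z) \;=\; F(y)-F(x)-(y-x)F'(x)
\;=\;\int_x^y F''(\bar y)\,(y-\bar y)\,d\bar y\;, \qquad F:=K_n(\cdot,z)\;.
\end{equation*}
```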
\begin{lemma}\label{lem:int}
Fix $\gamma > 0$, $w = (\eta,\sigma,\mu)$, let $V$ be a sector of regularity $\alpha$, and set $a = (\eta\wedge\alpha,\sigma\wedge\alpha,\mu)$.
(i) Let $f\in\cD_P^{\gamma,w}(V)$ and let $K$ be as in Theorem~\ref{thm:standard int} for some $\beta>0$ and abstract integration map $\cI$.
Let $\zeta\in\cC^{a}$ such that $\zeta(\psi)=(\tilde \cR f)(\psi)$ for all $\psi\in C_0^\infty({\mathbb{R}}^d\setminus P)$
and set
\begin{equation}\label{eq:exponents2}
\bar \gamma = \gamma + \beta,\quad
\bar\eta=(\eta\wedge\alpha)+\beta, \quad
\bar\sigma=(\sigma\wedge\alpha)+\beta,\quad
\bar\mu\leq(a_\wedge+\beta)\wedge0,\quad
\bar\alpha=(\alpha+\beta)\wedge0.
\end{equation}
Suppose furthermore that none of $\bar \gamma$, $\bar\eta$, $\bar\sigma$, or $\bar\mu$ are integers and that
these exponents satisfy the condition \eqref{eq:exponents}. Then $\cK^\zeta_\gamma f\in\cD_P^{\bar \gamma,\bar w}$, where $\bar w=(\bar\eta,\bar\sigma,\bar\mu)$.
Furthermore, if $(\bar\Pi,\bar\Gamma)$ is a second model realising $K$ for $\cI$ and $\bar f\in\cD_P^{\gamma,w}(V,\bar\Gamma)$, $\bar \zeta\in\cC^a$ are as above, then for any $C>0$ the bound
$$
\vn{\cK^\zeta_\gamma f;\bar\cK^{\bar \zeta}_\gamma\bar f}_{\bar\gamma,\bar w;\frK}\lesssim\vn{f;\bar f}_{\gamma,w;\bar \frK}+\|\Pi-\bar\Pi\|_{\gamma;\bar\frK}+\|\Gamma-\bar\Gamma\|_{\bar\gamma;\bar\frK}+\|\zeta-\bar\zeta\|_{a,\bar\frK}
$$
holds uniformly in models and modelled distributions both satisfying $\vn{f}_{\gamma,w;\bar \frK}+\|\Pi\|_{\gamma;\bar\frK}+\|\Gamma\|_{\bar\gamma;\bar\frK}+\|\zeta\|_{a,\bar \frK}\leq C$, where $\bar\frK$ denotes the $1$-fattening of $\frK$.
Finally, the identity
\begin{equation}\label{eq:reco identity}
\cR\cK_\gamma^\zeta f=K\ast\zeta
\end{equation}
holds.
(ii) If $f\in\cD_{P,\{1\}}^{\gamma,w}$ and the coordinates of $w$ satisfy \eqref{eq:exponents'}, then choosing $\hat \cR f$ in the above in place of $\zeta$, the same conclusions hold, but with the definition of $\bar \sigma$
in \eqref{eq:exponents2} replaced by $\bar\sigma=\sigma+\beta$.
\end{lemma}
\begin{proof}
The argument showing that $\cN^\zeta_\gamma f$ (and therefore $\cK^\zeta_\gamma f$) is actually well-defined
is exactly the same as in \cite{H0}. Also, the fact that the required bounds trivially hold for components of $(\cK^\zeta_\gamma f)(x)$ and $(\cK^\zeta_\gamma f)(x)-\Gamma_{yx}(\cK^\zeta_{\gamma}f)(y)$, whose homogeneity is non-integer, does not change in our setting.
For integer homogeneities, we shall make use of the decomposition of $K$ and use different arguments on different scales. We start by bounding the second term in \eqref{def:spaces}. First consider the case $2^{-n+2}\leq|x|_{P_0}\leq|x|_{P_1}$. We then have, for any multiindex $l$, due to \eqref{eq:standard reco estimate},
\begin{equation}\label{eq:int1}
|(\tilde \cR f-\Pi_x f(x))(D_1^lK_n(x,\cdot))|\lesssim 2^{n(|l|_\frs-\beta-\gamma)}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}.
\end{equation}
After summation over the relevant values of $n$, we get a bound of order
$$
|x|_{P_0}^{\eta+\beta-|l|_\frs}|x|_{P_1}^{\mu-\eta}\leq|x|_{P_1}^{\mu+\beta-|l|_\frs}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{\eta+\beta-|l|_\frs},
$$
as required, since $\bar\mu\leq\mu+\beta$.
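The summation step used here (and repeatedly below) is an elementary geometric series estimate; since $|l|_\frs<\gamma+\beta$ for the relevant components, the exponent of $2^n$ is negative and the sum is dominated by the term with the smallest admissible $n$, for which $2^{-n}\sim|x|_{P_0}$:

```latex
% Geometric series over the scales 2^{-n+2} <= |x|_{P_0}:
\begin{equation*}
\sum_{n:\,2^{-n+2}\leq|x|_{P_0}} 2^{n(|l|_\frs-\beta-\gamma)}
\lesssim |x|_{P_0}^{\gamma+\beta-|l|_\frs}\;,
\end{equation*}
```

which, multiplied by the factor $|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}$ from \eqref{eq:int1}, yields the bound above.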
As for $\cJ^{(n)}(x)f(x)$, for any integer $l$ we have
$$
\|\cJ^{(n)}(x)f(x)\|_l\lesssim\sum_{\delta>l-\beta}2^{n(l-\beta-\delta)}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}.
$$
Summing over $n$, we get
\begin{align*}
\sum_{2^{-n+2}\leq|x|_{P_0}} &\|\cJ^{(n)}(x)f(x)\|_l\lesssim\sum_{\delta>l-\beta}|x|_{P_0}^{\delta+\beta-l}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}
\\
&=\sum_{\delta>l-\beta}|x|_{P_0}^{\beta-l}|x|_{P_1}^{\mu}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{\eta\wedge\delta}
\lesssim|x|_{P_1}^{\mu+\beta-l}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{\eta\wedge\alpha+\beta-l}
,\end{align*}
where we made use of $\delta\geq\alpha$ in the last step.
Next, consider the case $|x|_{P_0}\leq 2^{-n+2}\leq|x|_{P_1}$. Since then $d_{\frs}(\mathop{\mathrm{supp}} D^l_1K_n(x,\cdot),P_1)\sim|x|_{P_1}$, we can invoke part (a) of Definition~\ref{def:weightedHolder}. For any multiindex $l$, we get
\begin{align*}
|(\zeta-\Pi_x f(x))&(D_1^lK_n(x,\cdot))+(\cJ^{(n)}(x)f(x))_l|
\\
&\leq|\zeta(D_1^lK_n(x,\cdot))|+\sum_{\delta\leq|l|_\frs-\beta}|(\Pi_x\cQ_\delta f(x))(D_1^l K_n(x,\cdot))|
\\
&\lesssim 2^{n(|l|_\frs-\beta-(\eta\wedge\alpha))}|x|_{P_1}^{\mu-(\eta\wedge\alpha)}+\sum_{\delta\leq |l|_\frs-\beta}2^{n(|l|_\frs-\beta-\delta)}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}.
\end{align*}
Notice that here we in fact only use estimates of $\zeta$ tested against functions centred on the boundary; this observation will be useful in particular in the proof of part (ii) of the lemma. Let us denote the two terms above by $A_n$ and $B_n$. Summing $A_n$ over the relevant values of $n$, we have two cases, depending on the sign of $|l|_\frs-\beta-(\eta\wedge\alpha)=|l|_\frs-\bar\eta$. If this exponent is positive, we get, after summation,
$$
|x|_{P_0}^{(\eta\wedge\alpha)+\beta-|l|_\frs}|x|_{P_1}^{\mu-(\eta\wedge\alpha)}\leq
|x|_{P_1}^{\mu+\beta-|l|_\frs}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{\bar\eta-|l|_{\frs}},
$$
which gives the required bound. If, on the other hand, $|l|_\frs-\bar\eta<0$ (equality cannot occur, by assumption), then the sum of the $A_n$'s over the relevant values of $n$ is bounded by a constant times
$$
|x|_{P_1}^{(\eta\wedge\alpha)+\beta-|l|_\frs+\mu-(\eta\wedge\alpha)}=|x|_{P_1}^{\mu+\beta-|l|_\frs},
$$
which is also of the required order. The treatment of $B_n$ is momentarily postponed.
In the final case we have $|x|_{P_0}\leq|x|_{P_1}\leq 2^{-n+2}$. Similarly to the above, recalling that $\zeta\in\cC^{a_\wedge}$, we get
\begin{align}
|(\zeta-\Pi_x f(x))&(D_1^lK_n(x,\cdot))+(\cJ^{(n)}(x)f(x))_l|
\nonumber\\
&\leq|\zeta(D_1^lK_n(x,\cdot))|+\sum_{\delta\leq|l|_\frs-\beta}|(\Pi_x\cQ_\delta f(x))(D_1^l K_n(x,\cdot))|
\nonumber\\
&\lesssim 2^{n(|l|_\frs-\beta-a_\wedge)}+\sum_{\delta\leq |l|_\frs-\beta}2^{n(|l|_\frs-\beta-\delta)}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}.\label{eq:int0}
\end{align}
Recognising the second term as $B_n$, we consider its sum over the values of $n$ arising both in this and in the previous case. Notice that the exponent of $2^n$ is strictly positive: indeed, $\delta+\beta\in\mathbb{N}$ implies $\delta\in\mathbb{N}$, but since $K_n$ and its derivatives annihilate polynomials, such terms do not contribute to the sum. The resulting quantity is bounded by a constant times
\begin{align*}
\sum_{\delta\leq |l|_\frs-\beta}|x|_{P_0}^{\beta+\delta-|l|_\frs}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}&\leq
\sum_{\delta\leq |l|_\frs-\beta}|x|_{P_1}^{\mu+\beta-|l|_\frs}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta\wedge\delta)+\beta-|l|_\frs}
\\
&\lesssim |x|_{P_1}^{\mu+\beta-|l|_\frs}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta\wedge\alpha)+\beta-|l|_\frs}
\end{align*}
as required. Moving on to the first term on the right-hand side of \eqref{eq:int0}, recall that $\bar\mu\leq a_\wedge+\beta$, and hence
\begin{equation}\label{eq:mu1}
\sum_n2^{n(|l|_\frs-\beta-a_\wedge)}\leq\sum_n2^{n(|l|_\frs-\bar\mu)}\lesssim|x|_{P_1}^{\bar\mu-|l|_\frs},
\end{equation}
where the sum runs over the relevant values of $n$, and we also made use of the fact that $\bar\mu\leq0$ holds, and in fact, by assumption, with strict inequality. This concludes the estimation of the second, and by symmetry, third term in \eqref{def:spaces}.
Turning to bounding $\|\cK^\zeta_\gamma f(x)-\Gamma_{xy}\cK_\gamma^\zeta f(y)\|$, recall that we need only consider pairs $(x,y)$ where $2\|x-y\|_\frs\leq|x,y|_{P_0}\leq|x,y|_{P_1}$. As before, this implies $|x|_{P_i}\sim|y|_{P_i}\sim|x,y|_{P_i}$.
We separate into different scales again, starting with the scale $2^{-n+2}\leq2\|x-y\|_\frs\leq|x,y|_{P_0}\leq|x,y|_{P_1}$. As in \eqref{eq:int1}, we have
$$
|(\cN_{\gamma}^{\zeta,(n)}f(x))_l|\lesssim 2^{n(|l|_\frs-\beta-\gamma)}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}.
$$
Summing over the relevant values of $n$, we get a bound of order
$$
\|x-y\|_\frs^{\gamma+\beta-|l|_\frs}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta},
$$
as required. Similarly,
$$
|(\Gamma_{xy}\cN_\gamma^{\zeta,(n)}f(y))_l|\lesssim \sum_{|k+l|_\frs<\gamma+\beta}\|x-y\|_\frs^{|k|_\frs}2^{n(|k+l|_\frs-\beta-\gamma)}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta},
$$
which, after summation, yields an estimate of order
$$
\sum_{|k+l|_\frs<\gamma+\beta}\|x-y\|_\frs^{|k|_\frs}\|x-y\|_\frs^{\gamma+\beta-|k+l|_\frs}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta},
$$
which is again of the required order. Next, using \eqref{eq:rearrange K 1}, we have
\begin{align*}
|(\cJ^{(n)}(x)f(x)-\Gamma_{xy}\cJ^{(n)}(y)f(y))_l|&\leq\sum_{\delta>|l|_\frs-\beta}|(\Pi_x\cQ_\delta(f(x)-\Gamma_{xy}f(y)))(D_1^{l}K_n(x,\cdot))|
\\
&\lesssim\sum_{\delta>|l|_\frs-\beta}\|x-y\|^{\gamma-\delta}_\frs|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}2^{n(|l|_\frs-\beta-\delta)}.
\end{align*}
Summing over the relevant values $n$, we get the bound
$$
\sum_{\delta>|l|_\frs-\beta}\|x-y\|^{\gamma-\delta}_\frs|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}\|x-y\|_\frs^{\delta+\beta-|l|_\frs},
$$
as required.
Moving on to larger scales, we will then use the identity \eqref{eq:rearrange K 0}. Starting with the second term,
\begin{align*}
|\sum_{\delta\leq|l|_\frs-\beta}(\Pi_x\cQ_\delta(\Gamma_{xy}f(y)-f(x)))(D_1^l K_n(x,\cdot))|\lesssim\sum_{\delta\leq|l|_\frs-\beta}\|x-y\|_\frs^{\gamma-\delta}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}2^{n(|l|_\frs-\beta-\delta)}.
\end{align*}
This can be treated for all the remaining scales at once: summing over $n$ such that $\|x-y\|_\frs\leq2^{-n+2}$ (the strict positivity of the exponent of $2^n$ can be argued exactly as in the previous similar situation), we get a bound of order
$$
\sum_{\delta\leq|l|_\frs-\beta}\|x-y\|_\frs^{\gamma-\delta}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}\|x-y\|_\frs^{\delta+\beta-|l|_\frs},
$$
which is of required order.
We are left to estimate
$$
|(\Pi_y f(y)-\zeta)(K_{n;xy}^{l,\gamma})|.
$$
Rewriting the above quantity as in the formula \eqref{eq:rearrange K Taylor}, and making use of the properties mentioned following it, we have
\begin{align*}
|(\Pi_y f(y)&-\zeta)(K_{n;xy}^{l,\gamma})|
\\
&\leq\sum_{|k|_\frs\geq\gamma+\beta-|l|_\frs}\|x-y\|^{|k|_\frs}_\frs\sup_{\|x-\bar y\|_\frs\leq\|x-y\|_\frs}|(\Pi_y f(y)-\zeta)(D_1^{k+l}K_n(\bar y,\cdot))|
\\
&\leq \|x-y\|^{\gamma+\beta-|l|_\frs}_\frs\sum_{|k|_\frs\geq\gamma+\beta-|l|_\frs}\|x-y\|_{\frs}^{|k+l|_\frs-\gamma-\beta}\sup_{\|x-\bar y\|_\frs\leq\|x-y\|_\frs}|(\Pi_y f(y)-\zeta)(D_1^{k+l}K_n(\bar y,\cdot))|.
\end{align*}
Therefore it remains to show that, for any multiindex $k$ satisfying $|k|_\frs\geq\gamma+\beta-|l|_\frs$ and any $\bar y$ satisfying $\|x-\bar y\|_\frs\leq\|x-y\|_\frs$, the following bound holds:
\begin{equation}\label{eq:int2}
\|x-y\|_{\frs}^{|k+l|_\frs-\gamma-\beta}|(\Pi_y f(y)-\zeta)(D_1^{k+l}K_n(\bar y,\cdot))|\lesssim|x|_{P_0}^{\bar\eta-\bar\gamma}|x|_{P_1}^{\bar\mu-\bar\eta}.
\end{equation}
Notice that in particular, as before, $|x|_{P_i}\sim|\bar y|_{P_i}\sim|x,\bar y|_{P_i}$. To show \eqref{eq:int2} we again treat the remaining different scales separately. First, take $n$ such that $\|x-y\|_\frs\leq2^{-n+2}\leq|x,y|_{P_0}\leq|x,y|_{P_1}$. We write
\begin{align}
|(\Pi_y f(y)-\zeta)(D_1^{k+l}K_n(\bar y,\cdot))|&\leq|(\Pi_{\bar y}f(\bar y)-\zeta)(D_1^{k+l}K_n(\bar y,\cdot))|\nonumber
\\
&\quad+|(\Pi_{\bar y}(\Gamma_{\bar y y}f(y)-f(\bar y))(D_1^{k+l}K_n(\bar y,\cdot))|.\label{eq:int3}
\end{align}
Summing the first term over the relevant values of $n$, we get a bound of order
$$
\sum_{n}2^{n(|k+l|_\frs-\beta-\gamma)}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}\lesssim \|x-y\|_{\frs}^{-|k+l|_\frs+\gamma+\beta}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta},
$$
so the prefactor in \eqref{eq:int2} cancels and we get the required bound. Similarly to before, we used that, while we only required $|k|_\frs\geq\gamma+\beta-|l|_\frs$, equality cannot in fact occur due to the assumptions of the theorem, so the exponent of $2^n$ is strictly positive. The second term in \eqref{eq:int3} is estimated by
$$
\sum_{\delta\leq\gamma}\|x-y\|_\frs^{\gamma-\delta}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}2^{n(|k+l|_\frs-\beta-\delta)}.
$$
After summation over $n$, we get the bound
$$
\sum_{\delta\leq\gamma}\|x-y\|_\frs^{\gamma-\delta}|x|_{P_0}^{\eta-\gamma}|x|_{P_1}^{\mu-\eta}\|x-y\|_\frs^{-|k+l|_\frs+\beta+\delta},
$$
which, just as before, is of required order.
Turning to the scale $\|x-y\|_\frs\leq|x,y|_{P_0}\leq2^{-n+2}$, we estimate the actions of the two distributions appearing on the left-hand side of \eqref{eq:int2} separately. First,
$$
|(\Pi_y f(y))(D_1^{k+l}K_n(\bar y,\cdot))|\lesssim\sum_{\alpha\leq\delta\leq\gamma}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}2^{n(|k+l|_\frs-\beta-\delta)}.
$$
As before, the exponent of $2^n$ is strictly positive. Therefore
\begin{align*}
\sum_{|x,y|_{P_0}\leq 2^{-n+2}}&\|x-y\|_{\frs}^{|k+l|_\frs-\gamma-\beta}(\Pi_y f(y))(D_1^{k+l}K_n(\bar y,\cdot))
\\
&\lesssim\sum_{\alpha\leq\delta\leq\gamma}|x|_{P_0}^{|k+l|_\frs-\gamma-\beta}|x|_{P_1}^{\mu-\delta}\left(\frac{|x|_{P_0}}{|x|_{P_1}}\right)^{(\eta-\delta)\wedge0}|x|_{P_0}^{-|k+l|_\frs+\beta+\delta}
\\
&\lesssim\sum_{\alpha\leq\delta\leq\gamma}|x|_{P_1}^{\mu-\delta-(\eta-\delta)\wedge0}|x|_{P_0}^{\eta\wedge\delta-\gamma}
\lesssim|x|_{P_1}^{\mu-\eta}|x|_{P_0}^{\eta\wedge\alpha-\gamma}\;,
\end{align*}
as required.
To treat the other distribution in \eqref{eq:int2}, we further divide the scales, and consider first $\|x-y\|_\frs\leq|x,y|_{P_0}\leq2^{-n+2}\leq|x,y|_{P_1}$. In this case the support of $K_n(\bar y,\cdot)$ is separated away from $P_1$, so we have
$$
|\zeta (D_1^{k+l}K_n(\bar y,\cdot))|\lesssim 2^{n(|k+l|_\frs-\beta-(\eta\wedge\alpha))}|x|_{P_1}^{\mu-(\eta\wedge\alpha)}.
$$
After summation on $n$ and multiplying by the prefactor in \eqref{eq:int2}, using $\|x-y\|_\frs\leq|x|_{P_0}$, we obtain a bound of order
$$
|x|_{P_0}^{|k+l|_\frs-\gamma-\beta}|x|_{P_0}^{(\eta\wedge\alpha)+\beta-|k+l|_\frs}|x|_{P_1}^{\mu-(\eta\wedge\alpha)},
$$
which is again of required order.
Finally, when $\|x-y\|_\frs\leq|x,y|_{P_0}\leq|x,y|_{P_1}\leq2^{-n+2}$, we can write
$$
|\zeta(D_1^{k+l}K_n(\bar y,\cdot))|\lesssim 2^{n(|k+l|_\frs-\beta-a_\wedge)}\leq 2^{n(|k+l|_\frs-\bar \mu)}.
$$
Summing over $n$ and multiplying by the prefactor in \eqref{eq:int2}, we arrive at the bound
\begin{equation}\label{eq:mu2}
|x|_{P_0}^{|k+l|_\frs-\gamma-\beta}|x|_{P_1}^{\bar \mu-|k+l|_\frs}=
|x|_{P_0}^{\bar\eta-\bar\gamma}|x|_{P_0}^{|k+l|_\frs-\bar\eta}|x|_{P_1}^{\bar \mu-|k+l|_\frs},
\end{equation}
and since $|k+l|_\frs-\bar\eta\geq0$, the middle term can be estimated by $|x|_{P_1}^{|k+l|_\frs-\bar\eta}$, which completes the estimate.
The proof of continuity again goes in an analogous way and is omitted here.
As for the identity \eqref{eq:reco identity}, inspecting the proof of \cite[Thm.~5.12]{H0}, one can notice that this boils down to obtaining the estimate
$$
\Big|\sum_{n\geq 0}\int(\Pi_x f(x)-\tilde\cR f)(K_{n;yx}^{0,\gamma})\psi_{x}^\lambda(y)\,dy\Big|\lesssim\lambda^{\gamma+\beta}
$$
for $\lambda\ll|x|_{P_0}\wedge|x|_{P_1}$. This however is a local statement and therefore the argument in \cite{H0} carries through for our case virtually unchanged.
(ii) In the case $f\in\cD_{P,\{1\}}^{\gamma,w}$, when repeating the above arguments, one only needs to take care to obtain the improved exponent $\bar\sigma=\sigma+\beta$ in place of $(\sigma\wedge\alpha)+\beta=\alpha+\beta$. This improvement is a consequence of the improved bound on $\|f(x)\|_l$ near $P_1$, thanks to Proposition~\ref{prop:1}, and of the improved regularity of $\hat \cR f$ when tested against functions centred on $P_1$, thanks to \eqref{eq:reco on P1}.
\end{proof}
\begin{remark}\label{remark:mu2}
The ``slight difficulty'' foreshadowed in Remark~\ref{remark:mu} is the constraint $\bar\mu\leq0$ in the above
lemma. Indeed, in all three of the concrete examples mentioned in the introduction, it turns out one needs to
choose $\bar\mu > 0$. Note that the only two places in the proof where the condition $\bar\mu\leq0$ was used
are \eqref{eq:mu1} with $l=0$ and \eqref{eq:mu2}. In the latter case, one actually only needs $\bar\mu\leq\bar\gamma$, which holds as soon as we choose $\gamma$ sufficiently large so that $\mu\leq\gamma$. Therefore, provided that $\zeta$
is such that the bound
$$
\sum_{2^{-n+2}\geq|x|_{P_1}}|\zeta(D_1^lK_n(x,\cdot))|\lesssim|x|_{P_1}^{\bar \mu-|l|_\frs}\;,
$$
holds for $|x|_{P_0}\leq|x|_{P_1}$, and the corresponding symmetric bound holds for $|x|_{P_1}\leq|x|_{P_0}$,
for all $|l|_\frs\leq\bar\mu$, and $\bar\mu\leq a_\wedge+\beta$, then the conclusions of Lemma~\ref{lem:int}
still hold. This appears to be a very strong condition, but in the standard case where $K$ is a non-anticipative
kernel and $\zeta$ is supported on positive times, it is actually quite reasonable, see Proposition~\ref{prop:improved mu} below.
\end{remark}
\subsection{Integration against smooth remainders with singularities at the boundary}
\label{subsec:int remainder}
From this point on we move to a more concrete setting, and in particular $P_0$ and $P_1$ will play different roles. We shall view ${\mathbb{R}}^d$ as ${\mathbb{R}}\times{\mathbb{R}}^{d-1}$, denoting its points by either $z$ or by $(t,x)$, where $t\in{\mathbb{R}}$, $x\in {\mathbb{R}}^{d-1}$. Furthermore we assume that $P_0$ is given by $\{(0,x):x\in{\mathbb{R}}^{d-1}\}$.
\begin{definition}\label{def:Z}
Denote by $\scZ_{\beta,P}$ the set of functions $Z:({\mathbb{R}}^d\setminus P)^2 \to {\mathbb{R}}$ that can be written in the form
$
Z(z,z')=\sum_{n\geq0}Z_n(z,z')
$
where, for each $n$, $Z_n$ satisfies the following
\begin{claim}
\item $Z_n$ is supported on $\{(z,z')=((t,x),(t',x')):\,|z|_{P_1}+|z'|_{P_1}+|t-t'|^{1/\frs_0}\leq C 2^{-n}\}$,
where $C$ is a fixed constant depending only on the domain $D$.
\item For any ($d$-dimensional) multiindices $k$ and $l$,
$$
|D_1^kD_2^lZ_n(z,z')|\lesssim2^{n(|\frs|+|k+l|_\frs-\beta)},
$$
where the proportionality constant may depend on $k$ and $l$, but not on $n$, $z$, $z'$.
\end{claim}
\end{definition}
The relevance of this definition is illustrated by the following example, which shows that if we consider a
heat kernel on a domain obtained by the reflection principle, then it can always be decomposed into
an element of $\scK_\beta$ and an element of $\scZ_{\beta,P}$.
\begin{example}\label{example}
Our main example will be of the following form. Suppose that $G^0$ is a function on ${\mathbb{R}}^d\times{\mathbb{R}}^d\setminus\{(z,z'):z=z'\}$ with the following properties:
\begin{claim}
\item We have a decomposition $G^0=K^0+R^0$, where $K^0\in\scK_\beta$, while $R^0$ is a globally smooth function.
\item For any two multiindices $k$ and $l$ and any number $a$, there exists a constant $C_{k,l,a}$ such that it holds that
$
|D_1^kD_2^lR^0(z,z')|\leq C_{k,l,a}(|x-x'|\vee 1)^a
$.
\end{claim}
As shown in \cite{H0}, the heat kernel in any dimension satisfies these conditions with $\beta=2$. Suppose then that we have a discrete group $\cG$ of isometries of ${\mathbb{R}}^{d-1}$ with a bounded fundamental domain $D$, and with the property that the following implication holds:
$$
g\in\cG\setminus\{\mathrm{id}\},\; x,y\in D,\; \|x-g(y)\|_{\frs}\leq 2^{-n}\quad\Rightarrow\quad d_\frs(x,\partial D)\vee d_{\frs}(y,\partial D)\leq 2^{-n}.
$$
Let $a \colon \cG \to \{-1,1\}$ be a group morphism and write
\begin{equation}\label{eq:G}
G((t,x),(s,y))=\sum_{g\in\cG}a_g G^0((t,x),(s,g(y))).
\end{equation}
A concrete example to have in mind is when $D=[-1,1]$ and $\cG$ is generated by the maps $y\mapsto -2-y$ and
$y\mapsto 2-y$. Then, the trivial morphism $a_g\equiv 1$ yields the Neumann heat kernel on $D$, while the
morphism whose kernel consists of
the orientation-preserving $g$'s yields the Dirichlet heat kernel. Obvious higher-dimensional analogues include
the Neumann and Dirichlet heat kernels on $(d-1)$-dimensional cubes.
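To see why the second choice gives the Dirichlet kernel, one can check the boundary cancellation directly. In the one-dimensional example above, the group consists of the translations $y\mapsto y+4k$ (on which $a_g=1$) and the reflections $y\mapsto 2m-y$ with $m$ odd (on which $a_g=-1$); writing $p$ for the translation-invariant heat kernel, so that $G^0((t,x),(s,y))=p(t-s,x-y)$ in this model case, a brief sketch of the computation at the boundary point $x=1$ reads:

```latex
% Pair the translation indexed by k with the reflection about m = 1-2k (odd):
% 1-(2(1-2k)-y) = y-1+4k = -(1-y-4k), and p(t,u)=p(t,-u), so the terms cancel pairwise.
\begin{equation*}
G((t,1),(s,y))=\sum_{k\in\mathbb{Z}}\Big(p(t-s,\,1-y-4k)-p\big(t-s,\,1-(2(1-2k)-y)\big)\Big)=0\;.
\end{equation*}
```

The same pairing at $x=-1$, and the analogous computation with all signs $+1$ for the normal derivative, recover the Dirichlet and Neumann boundary conditions respectively.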
For functions $f$ and $g$ on $({\mathbb{R}}^d)^2$, write $f \sim g$ if $f(z) = g(z)$ for $z \in ([0,1]\times D)^2$.
We claim that, setting $P_1={\mathbb{R}}\times\partial D$, there exist $K\in\scK_\beta$, $Z\in\scZ_{\beta,P}$, such that
$G \sim K + Z$.
First, due to the decay properties of $R^0$, the sum $\tilde R = \sum_g a_gR^0((t,x),(s,g(y)))$ converges and defines a globally smooth function which we can truncate in a smooth way outside of $([0,1]\times D)^2$, so that it belongs to
$\scZ_{\beta,P}$. For $K^0$, we divide the sum
$$\sum_{g\in\cG}a_g K^0((t,x),(s,g(y)))$$
into three parts. For $g=\mathrm{id}$, we simply set $K=K^0$, which belongs to $\scK_\beta$ by assumption.
The terms with $g$ such that $y\in D$
implies $d_\frs(g(y),D)>1$ may safely be discarded since they are supported outside of $([0,1]\times D)^2$.
For the remaining finitely many terms, say $g_1,g_2,\ldots,g_m$, we use our assumption on $\cG$, by which we can write, for each $i$,
\begin{equ}
K^0_n((t,x),(s,g_i(y))) \sim \varphi_n(x,y) K^0_n((t,x),(s,g_i(y)))\;,
\end{equ}
where $\varphi_n$ is $1$ on $\{(x,y):d_\frs(x,\partial D)\vee d_{\frs}(y,\partial D)\leq 2^{-n}\}$, is supported on $\{(x,y):d_\frs(x,\partial D)\vee d_{\frs}(y,\partial D)\leq 2^{-n+1}\}$, and for all multiindices $k$ and $l$, $D_1^kD_2^l\varphi_n$ is bounded by $2^{n|k+l|_\frs}$, up to a universal constant. Let furthermore $\varphi$ be a smooth compactly supported function that equals $1$ on $D\times D$. We can then set
$$
Z_0((t,x),(s,y))=\sum_{i=1}^m\varphi(x,y)K_0^0((t,x),(s,g_i(y)))+\varphi(x,y)\tilde R((t,x),(s,y)),
$$
and for $n>0$
$$
Z_n((t,x),(s,y))=\sum_{i=1}^m\varphi_n(x,y)K_n^0((t,x),(s,g_i(y))),
$$
which does indeed yield an element of $\scZ_{\beta,P}$.
\end{example}
\begin{lemma}\label{lem:Z}
Let $a\in{\mathbb{R}}_-^3$ and $a_\wedge$ be as in Definition~\ref{def:weightedHolder}, $u\in\cC_P^{a}$ and $Z\in\scZ_{\beta,P}$. Then the function
\begin{equation}\label{eq:Z}
v \colon z\mapsto\sum_{n\geq 0}\scal{u ,Z_n(z,\cdot)}
\end{equation}
is a smooth function on ${\mathbb{R}}^d\setminus P$, and its lift to $\bar T$ via its Taylor expansion,
which we also denote by $v$, belongs to $\cD_P^{\gamma,w}(\bar T)$, where $\sigma=a_1+\beta$,
$\gamma\geq\sigma\vee 0$, and $\eta$ and $\mu$ satisfy
\begin{equation}\label{eq:mu3}
\eta\leq\gamma, \quad\mu\leq(a_\wedge+\beta)\wedge0,
\end{equation}
provided neither $\sigma$ nor $\mu$ is an integer.
If $u$ furthermore satisfies $|\scal{u,\psi_z^\lambda}|\lesssim\lambda^{\bar a_1}|z|_{P_0}^{a_\cap-\bar a_1}$ for $z\in P_1\setminus P_0$ and $2\lambda\leq|z|_{P_0}$ with some $\bar a_1\geq a_1$, then the conclusions hold with the definition of $\sigma$ replaced by $\sigma=\bar a_1+\beta$.
\end{lemma}
\begin{proof}
Notice that in \eqref{eq:Z} only the terms where $2^{-n}\geq|z|_{P_1}$ give nonzero contributions. In particular, since the sum is finite, any differentiation on $v$ can be carried inside. If $|z|_{P_0}\leq 2|z|_{P_1}$, then we simply use the fact that $u\in\cC^{a_\wedge}$, to get, for any multiindex $l$
\begin{equation}\label{eq:badmuagain1}
|D^l v(z)|\lesssim\sum_{2^{-n}\geq|z|_{P_1}}2^{n(|l|_\frs-\beta-a_\wedge)}
\leq\sum_{2^{-n}\geq|z|_{P_1}}2^{n(|l|_\frs-\mu)}\leq|z|_{P_1}^{\mu-|l|_\frs},
\end{equation}
where we used $\mu\leq a_\wedge+\beta$ as well as $\mu<0$. If $2|z|_{P_1}\leq|z|_{P_0}$, then we distinguish two cases. First, if $2|z|_{P_1}\leq2^{-n}\leq|z|_{P_0}$, then the support of $Z_n(z,\cdot)$ is away from $P_0$, and so we make use of part (b) of the definition of $\cC_P^{a}$:
\begin{equ}\label{eq:Zproof}
|\scal{u,D_1^lZ_n(z,\cdot)}|\leq 2^{n(|l|_\frs-\beta-a_1)}|z|_{P_0}^{a_\cap-a_1}.
\end{equ}
If $\sigma=a_1+\beta<|l|_\frs$, then summing up yields
$$
\sum_{2|z|_{P_1}\leq2^{-n}\leq|z|_{P_0}}|\scal{u,D_1^lZ_n(z,\cdot)}|\lesssim |z|_{P_1}^{\sigma-|l|_\frs}|z|_{P_0}^{a_\cap-a_1}=|z|_{P_0}^{a_\cap-a_1+\sigma
-|l|_\frs}\left(\frac{|z|_{P_1}}{|z|_{P_0}}\right)^{\sigma-|l|_\frs},
$$
which is as required, since $-a_1+\sigma=\beta$. If, on the other hand, $\sigma>|l|_\frs,$ then
$$
\sum_{2|z|_{P_1}\leq2^{-n}\leq|z|_{P_0}}|\scal{u,D_1^lZ_n(z,\cdot)}|\lesssim|z|_{P_0}^{a_\cap+\beta-|l|_\frs}.
$$
On the scale $2|z|_{P_1}\leq|z|_{P_0}\leq 2^{-n}$, we simply use the fact that $u\in\cC^{a_\wedge}$ again, in the same way as before, to get
\begin{equation}\label{eq:badmuagain2}
\sum_{|z|_{P_0}\leq 2^{-n}}|\scal{u,D_1^lZ_n(z,\cdot)}|\lesssim \sum_{|z|_{P_0}\leq 2^{-n}}2^{n(|l|_\frs-\beta-a_\wedge)}\leq|z|_{P_0}^{\mu-|l|_\frs}.
\end{equation}
Putting the above estimates together, we conclude that
\begin{equation}\label{eq:ZZ}
\|v(z)\|_{|l|_\frs} = {1\over l!} |D^l v(z)| \lesssim |z|_{P_1}^{\mu-|l|_\frs}\left(\frac{|z|_{P_0}}{|z|_{P_1}}\right)^{(\eta-|l|_\frs)\wedge0}
\end{equation}
if $|z|_{P_0}\leq|z|_{P_1}$, and the corresponding symmetric estimate holds when $|z|_{P_1}\leq|z|_{P_0}$. In particular, the second and third terms in \eqref{def:spaces} are finite for any finite $\gamma$. To bound the first term, it remains to recall that since $v$ is the lift of a smooth function, for any positive integer $\gamma$ and $(z,z')\in\frK_P$
$$
\|v(z)-\Gamma_{zz'}v(z')\|_l\leq \|z-z'\|_\frs^{\gamma-l}\sup_{\bar z\in\frK:|z|_{P_i}\sim|\bar z|_{P_i}\sim|z,z'|_{P_i}}|D^\gamma v(\bar z)|.
$$
Applying \eqref{eq:ZZ} (and its symmetric counterpart) with $l=\gamma,$ we get
$$
|D^\gamma v(\bar z)|\lesssim|z|_{P_0}^{\eta-\gamma}|z|_{P_1}^{\sigma-\gamma}(|z|_{P_0}\vee|z|_{P_1})^{\mu-\eta-\sigma+\gamma},
$$
as required. For $\gamma$ non-integer, it suffices to apply the above with $\gamma$
replaced by $\bar \gamma = \lceil \gamma \rceil$ and to note that, for every $\gamma \in (\bar\gamma-1,\bar \gamma)$,
one has $\cD_P^{\bar \gamma,w} \subset \cD_P^{\gamma,w}$.
(To see this, write $f = f \star_{\gamma} \bone$ and
apply Lemma~\ref{lem:mult}, noting that $\bone \in \cD_P^{\gamma,\bar w}$ with
$\bar \eta = \eta \vee 0$, $\bar \sigma = \sigma \vee 0$ and $\bar \mu = 0$.)
For the last statement of the lemma, one can simply notice that in \eqref{eq:Zproof} $u$ is tested against functions centred on $P_1\setminus P_0$, and use the additional assumption on $u$.
\end{proof}
\begin{remark}
The mapping $u\mapsto\cQ_{\gamma+\beta}^-v$, where $v$ is as in \eqref{eq:Z}, will also be denoted by $Z_\gamma$. As all models that we consider act in the same way on polynomials, the usual continuity estimates are in this case direct consequences of the above result.
\end{remark}
\begin{remark}\label{remark:mu3}
It is again worth pointing out that the $\mu<0$ condition, used in \eqref{eq:badmuagain1} and \eqref{eq:badmuagain2}, can be omitted if one can derive
$$
\sum_{2^{-n+2}\geq|z|_{P_1}\vee|z|_{P_0}}|\scal{u,D_1^lZ_n(z,\cdot)}|\lesssim(|z|_{P_1}\vee|z|_{P_0})^{ \mu-|l|_\frs}
$$
for $|l|_\frs\leq\mu$ by some other means.
\end{remark}
One can easily verify that the actions of $\cK_\gamma$ and $Z_\gamma$ are compatible in the following sense: take $f\in\cD_P^{\gamma,w}$ and an extension $\zeta$ of $\tilde\cR f$ as in Lemma~\ref{lem:int} (i). Then $Z_{\gamma+\beta}\zeta\in \cD_{P}^{\gamma+\beta,\bar w}$, where $\bar w$ is as in Lemma~\ref{lem:int} (i).
\section{Solving the abstract equation}\label{sec:FPP}
In addition to the setting of Section~\ref{subsec:int remainder} we now assume that, for a bounded domain $D\subset{\mathbb{R}}^{d-1}$ with a Lipschitz boundary $\partial D$ satisfying the cone condition, $P_1$ is given by $P_1={\mathbb{R}}\times\partial D$. We shall denote by $\bar D$ the $1$-fattening of the closure of $D$, and we introduce the $T$-valued function
$$
\bR^{D}_+(t,x)=\left\{\begin{array}{lr}
\bone, & \text{if } t>0,x\in D,\\
0, & \text{otherwise.}
\end{array}\right.
$$
It is straightforward to see that $\bR^{D}_+\in\cD_P^{\infty,(\infty,\infty,0)}$, and in particular that
multiplication by $\bR^{D}_+$ maps any $\cD_P^{\gamma,w}$ space into itself.
\subsection{Non-anticipative kernels}
In a typical application of the theory to SPDEs, an important property of the kernel $K$, beyond the quite general setting of Definition~\ref{def: K}, is that it is non-anticipative in the sense that
\begin{equation}
t<s\quad\Rightarrow\quad K((t,x),(s,y))=0.
\end{equation}
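All the kernels arising in our applications are of this type: for instance, if $G$ is the Dirichlet or Neumann heat kernel of a domain as in Example~\ref{example}, then
$$
G((t,x),(s,y))=0\qquad\text{whenever } t<s,
$$
and the decomposition $G\sim K+Z$ used below is chosen so that both $K$ and $Z$ inherit this property.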
We shall use the notations $O=[-1,2]\times \bar D$ and $O_\tau=(-\infty,\tau]\times\bar D$ as well as the shorthand $\vn{f}_{\gamma,w;\tau}$ for $\vn{f}_{\gamma,w;O_\tau}$ and similarly for other norms involving dependence on compact sets.
First of all, this allows us to improve our conditions on $\mu$.
\begin{proposition}\label{prop:improved mu}
\begin{enumerate}[(i)]
\item In the setting of Lemma~\ref{lem:int} (i), suppose that $K$ is non-anticipative, that $f$ is of the form $\bR^{D}_+g$ for some $g\in\cD_P^{\gamma,w}$, and that $\zeta$ annihilates test functions supported on negative times. Let furthermore $\varepsilon>0$ be such that $\frm_0-\beta+\varepsilon>0$ and assume $a_\wedge+\frm_0\geq 0$. Then, modifying the condition on $\bar \mu$ from \eqref{eq:exponents2} to
$$
\bar\mu\leq a_\wedge+\beta-\varepsilon,
$$
the conclusions of Lemma~\ref{lem:int} (i) still hold.
\item The analogous statement holds for Lemma~\ref{lem:int} (ii), where the modified condition on $\bar\mu$ reads as
$$
\bar\mu\leq\eta\wedge\mu\wedge\alpha+\beta-\varepsilon.
$$
\item In the setting of Lemma~\ref{lem:Z}, suppose that $Z$ is non-anticipative and that $u$ annihilates test functions supported on negative times and let $\varepsilon>0$ be as above. Then, modifying the condition on $\mu$ from \eqref{eq:mu3} to
$$
\mu\leq a_\wedge +\beta-\varepsilon,
$$
the conclusions of Lemma~\ref{lem:Z} still hold.
\end{enumerate}
\end{proposition}
\begin{proof}
(i) By Remark~\ref{remark:mu2}, we only need to obtain the bound
\begin{equation}\label{eq:ind1}
\sum_{2^{-n+2}\geq|z|_{P_1}\vee|z|_{P_0}}|\zeta(D_1^lK_n(z,\cdot))|\lesssim(|z|_{P_1}\vee|z|_{P_0})^{\bar \mu-|l|_\frs}
\end{equation}
for $|l|_\frs\leq\bar\mu$.
For all $m\in\mathbb{N}$, define the grid
$$
\Lambda_m=\{(s,y):s=2^{-m\frm_0 },y=\sum_{j=1}^{d-1}2^{-m\frs_j}k_j e_j, k_j\in\mathbb{Z}\},
$$
where $e_j$ is the $j$-th unit vector of ${\mathbb{R}}^{d-1}$, $j=1,\ldots,d-1$. Let furthermore $\varphi$ be a function that satisfies
$$
\sum_{y\in\Lambda_0}\varphi(t,x-y)=1\quad\forall t\in[-1,2],x\in{\mathbb{R}}^{d-1},
$$
and define $\varphi_y^{m,\frs}=2^{-m|\frs|}\varphi_y^{2^{-m}}$.
To show \eqref{eq:ind1}, we first write, choosing $m$ such that $2^{-m}\leq|z|_{P_0}\leq2^{-m+1}$,
$$
\zeta(D_1^lK_n(z,\cdot))=\sum_{y\in\Lambda_m}\zeta(\varphi_y^{m,\frs}(\cdot)D_1^lK_n(z,\cdot)).
$$
Indeed, the function $D_1^lK_n(z,\cdot)-\sum_{y\in\Lambda_m}\varphi_y^{m,\frs}D_1^lK_n(z,\cdot)$ is supported on strictly negative times, and therefore vanishes under the action of $\zeta$. Each of the functions $\varphi_y^{m,\frs}D_1^lK_n(z,\cdot)$ has support of size of order $2^{-m|\frs|}$ and its $k$th derivative is bounded by $2^{n(|\frs|+|l|_\frs-\beta)}2^{m|k|_\frs}$. Recalling that $\zeta\in\cC^{a_\wedge}$, this yields
$$
|\zeta(\varphi_y^{m,\frs}(\cdot)D_1^lK_n(z,\cdot))|\lesssim 2^{-m a_\wedge}2^{-m|\frs|}2^{n(|\frs|+|l|_\frs-\beta)}.
$$
Combining this with the fact that the number of points $y\in\Lambda_m$ for which the support of $\varphi_y^{m,\frs}$ actually intersects the support of $D_1^lK_n(z,\cdot)$ is of order $2^{-n(|\frs|-\frm_0)}2^{m(|\frs|-\frm_0)}$, we get
$$
|\zeta(D_1^lK_n(z,\cdot))|\lesssim 2^{-m(a_\wedge+\frm_0)}2^{n(\frm_0+|l|_\frs-\beta)}.
$$
Multiplying by $2^{n\varepsilon}\geq1$ only increases the right-hand side, and by our assumptions this guarantees that the exponent of $2^n$ becomes positive. Therefore, recalling that $2^{-m}\sim|z|_{P_0}$, we obtain
$$
\sum_{2^{-n+2}\geq|z|_{P_1}\vee|z|_{P_0}}|\zeta(D_1^lK_n(z,\cdot))|\lesssim|z|_{P_0}^{a_\wedge+\frm_0}(|z|_{P_1}\vee|z|_{P_0})^{\beta+\varepsilon-\frm_0-|l|_\frs},
$$
which, using $a_\wedge+\frm_0\geq0$, gives the required bound.
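For completeness, the geometric-sum step behind the last display is the following: setting $c:=\frm_0+|l|_\frs-\beta+\varepsilon$, one has $c>0$ thanks to $\frm_0-\beta+\varepsilon>0$ and $|l|_\frs\geq0$, so that
$$
\sum_{2^{-n+2}\geq|z|_{P_1}\vee|z|_{P_0}}2^{nc}\lesssim(|z|_{P_1}\vee|z|_{P_0})^{-c},
$$
the sum being dominated by its largest term.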
The proof of (ii) goes in the same way, and, in light of Remark~\ref{remark:mu3}, so does that of (iii).
\end{proof}
The other important consequence of the non-anticipativity of our kernel is the following short-time control.
\begin{lemma}\label{lem:short time K}
In the setting of Proposition~\ref{prop:improved mu} (i), suppose that $K$ is non-anticipative. Set, for a $\kappa>0,$ $w'=(\eta',\sigma',\mu'):=(\bar\eta-\kappa,\bar\sigma,\bar\mu-\kappa)$. Then it holds, for any $C>0$
\begin{align}
\vn{\cK^\zeta_\gamma\bR^{D}_+g}_{\bar\gamma,w';\tau}&\lesssim \tau^{\kappa/\frs_0}(\vn{g}_{\gamma,w;\tau}+\|\zeta\|_{a;\tau}),
\nonumber\\
\vn{\cK^\zeta_\gamma\bR^{D}_+g;\bar\cK^{\bar\zeta}_\gamma\bR^{D}_+\bar g}_{\bar\gamma,w';\tau}&\lesssim \tau^{\kappa/\frs_0}(\vn{g;\bar g}_{\gamma,w;\tau}+\|\Pi-\bar\Pi\|_{\gamma,O}+\|\Gamma-\bar\Gamma\|_{\gamma,O}
\nonumber\\
&\quad\quad\quad+\|\zeta-\bar\zeta\|_{a;\tau})\label{eq:short time}
\end{align}
uniformly in $\tau\in(0,1]$ and in models bounded by $C$. For the second bound, $g$ and $\bar g$ are also assumed to be bounded by $C$.
If we are instead in the situation of Proposition~\ref{prop:improved mu} (ii), then the analogous statement holds, with $\zeta$ replaced by $\hat \cR f$, and hence the last term on the right-hand side of \eqref{eq:short time} can be omitted.
\end{lemma}
\begin{proof}
First, by the fact that $K$ is non-anticipative, using \eqref{eq:standard reco estimate}
we can improve Lemma~\ref{lem:int} to
$$
\vn{\cK^\zeta_\gamma\bR^{D}_+g}_{\bar\gamma,\bar w;\tau}\lesssim \vn{g}_{\gamma,w;\tau}+\|\zeta\|_{a;\tau}.
$$
This already takes care of bounding the first and third term in \eqref{def:spaces}, since, using the shorthand $F=\cK^\zeta_\gamma\bR^{D}_+g$, for $(z,z')\in (O_\tau)_P$
$$
\frac{\|F(z)-\Gamma_{zz'}F(z')\|_l}
{\|z-z'\|_{\frs}^{\bar\gamma-l}|z,z'|_{P_0}^{\eta'-\bar\gamma}|z,z'|_{P_1}^{\bar\sigma-\bar\gamma}(|z,z'|_{P_0}\vee|z,z'|_{P_1})^{\mu'-\eta'-\bar\sigma+\bar\gamma}}
\lesssim|z,z'|_{P_0}^{\bar\eta-\eta'}\vn{F}_{\bar\gamma,\bar w;\tau},
$$
where we used that $\mu'-\eta'=\bar\mu-\bar\eta$. Similarly, for $z\in O_\tau\cap\{|z|_{P_1}\leq|z|_{P_0}\}$,
$$
\frac{\|F(z)\|_l}{|z|_{P_0}^{\mu'-l}\left(\tfrac{|z|_{P_1}}{|z|_{P_0}}\right)^{(\bar\sigma-l)\wedge0}}\lesssim|z|_{P_0}^{\mu'-\bar\mu}\vn{F}_{\bar\gamma,\bar w;\tau}.
$$
Keeping in mind that $|z|_{P_0}\leq\tau^{1/\frs_0}$ for $z\in O_\tau$, by the definition of the exponents $w'$, these are indeed the required bounds. Similarly, we have for $z\in O_\tau\cap\{|z|_{P_0}\leq|z|_{P_1}\}$
$$
\frac{\|F(z)\|_l}{|z|_{P_1}^{\mu'-l}\left(\tfrac{|z|_{P_0}}{|z|_{P_1}}\right)^{(\eta'-l)\wedge0}}\leq
\frac{\|F(z)\|_l}{|z|_{P_1}^{\mu'-\eta'}|z|_{P_0}^{\eta'-l}}\lesssim
|z|_{P_0}^{\eta'-\bar\eta}\bn{F}_{\bar\gamma,\bar w,\{0\};\tau},
$$
and hence, by virtue of Proposition~\ref{prop:1}, the proof is complete if we can show that $F=\cK^\zeta_\gamma\bR^{D}_+g\in\cD_{P,\{0\}}^{\bar\gamma,\bar w}$. This, on the other hand, follows from the proof of \cite[Thm.~7.1]{H0}, given that away from $P_1$, $\zeta$ belongs to $\cC^{\eta\wedge\alpha}$, which is exactly the situation considered therein.
The bound on the difference again follows in an analogous way.
\end{proof}
The corresponding results hold for the singular remainder as well.
\begin{lemma}\label{lem:short time Z,R}
Let $Z\in\scZ_{\beta,P}$, $f$, $\zeta$, $\gamma$, $\bar\gamma$, $w$, and $w'$ be as in Lemma~\ref{lem:short time K}.
Then it holds, for any $C>0$
\begin{align*}
\vn{Z_{\gamma}\zeta}_{\bar\gamma,w';\tau}&\lesssim \tau^{\kappa/\frs_0}\|\zeta\|_{a;\tau},
\end{align*}
uniformly in $\tau\in(0,1]$.
\end{lemma}
\begin{proof}
The proof goes precisely as in the previous lemma, with the only difference that we cannot refer to \cite{H0}
to argue that $F:=Z_\gamma\zeta\in\cD_{P,\{0\}}^{\bar\gamma,\bar w}$. We therefore need to show that $(F)_k$ has limit $0$ at points of
$P_0\setminus P_1$ whenever $|k|_\frs\leq\eta\wedge\alpha+\beta$. This is simply due to the fact that, for such $k$, the function
$$
z\mapsto\zeta(Z(z,\cdot))
$$
is continuous away from $P_1$, and is $0$ for negative times.
\end{proof}
\subsection{On initial conditions}
The class of admissible initial conditions depends on the particular choice of the kernel: in addition to a regularity requirement, some boundary behaviour may be imposed. In the setting of Example~\ref{example}, which is general enough to cover all of our examples, this can be formalised as follows.
\begin{lemma}\label{lem:initial 2}
Let $\cG$ and $G$ be as in Example~\ref{example} and let $u_0$ be a function on $D$ such that the function $\bar u_0$ defined by
$$
\bar u_0(x)=a_g u_0(g^{-1}x)
$$
for the $g\in\cG$ such that $g^{-1}x\in D$, has a continuous extension that belongs to $\cC^\alpha({\mathbb{R}}^{d-1})$. Then the function
$$
v(t,x)=\int_DG((t,x),(0,y)) u_0(y)dy
$$
is smooth on $(0,\infty)\times D$. Extending it by $0$ to ${\mathbb{R}}^d\setminus\big((0,\infty)\times D\big)$, for any multiindex $l$, the pointwise lift of its $l$-th derivative via its Taylor expansion belongs to $\cD_P^{\gamma,(\alpha-|l|_\frs,\sigma,(\alpha-|l|_\frs)\wedge0)}$ for any $0\leq\sigma\leq\gamma$.
\end{lemma}
\begin{proof}
We can write
$$
v(t,x)=\int_{{\mathbb{R}}^d}G^0((t,x),(0,y))\bar u_0(y)dy.
$$
By assumption, the conditions of \cite[Lem.~7.5]{H0} are satisfied, and hence $v$ satisfies the bounds
$$
|D^l v(z)|\lesssim|z|_{P_0}^{(\alpha-|l|_\frs)\wedge0},\qquad z=(t,x).
$$
This already gives the right bounds for $\|D^l v(z)\|_k$, $k=0,1,\ldots$. From this one can deduce the bound for the quantity $\|D^lv(z)-\Gamma_{zz'}D^lv(z')\|_k$ precisely as in the proof of Lemma~\ref{lem:Z}.
\end{proof}
\subsection{The fixed point problem}
At this point everything is in place to solve the abstract equations that will arise as `lifts' of equations
similar to the ones in Section~\ref{subsec:applications}.
As the notation is already quite involved, we refrain from the full generality concerning the kernel $K+Z$ and the
scaling $\frs$ and only state the result in a form that is sufficient to treat nonlinear perturbations of
the stochastic heat equation with some boundary conditions. Our main goal is to formulate a fixed point
argument that is just general enough to cover the examples mentioned in the introduction, as well as some
related problems.
Our setup will involve families of Banach spaces
depending on some parameter $\tau>0$ (which will represent the time over which we solve our equation).
We will henceforth talk of a ``time-indexed space $\CV$'' for a family $\CV = \{\CV_\tau\}_{\tau > 0}$ of Banach
spaces as well as contractions $\pi_{\tau'\leftarrow \tau}\colon \CV_\tau \to \CV_{\tau'}$ for all $\tau' < \tau$ with the property
that $\pi_{\tau''\leftarrow \tau'} \circ \pi_{\tau'\leftarrow \tau} = \pi_{\tau''\leftarrow \tau}$. We consider $\CV$ itself as a
Fr\'echet space whose elements are collections $\{v_\tau\}_{\tau > 0}$ satisfying the consistency
condition $v_{\tau'} = \pi_{\tau'\leftarrow \tau} v_\tau$ and with the topology given by the collections
of seminorms $\|\cdot\|_\tau$ inherited from the spaces $\CV_\tau$. We will write $\pi_\tau\colon \CV \to \CV_\tau$ for the
natural projection.
Given a bounded and piecewise $\cC^1$ domain $D \subset {\mathbb{R}}^{d-1}$,
a typical example of a time-indexed space
is given by the space $\CV = \CD^{\gamma,w}_P$ with $\pi_\tau$ given by the restriction to $[0,\tau] \times D$
and norms $\|\cdot\|_\tau$ given by $\vn{\cdot}_{\gamma,w;D_\tau}$, where $D_\tau = [0,\tau]\times D$.
Similarly, we write again $\cC^w_P$ for the time-indexed space consisting of distributions
on ${\mathbb{R}}^d$ which vanish outside of ${\mathbb{R}}_+ \times D$, endowed with the norms
of Definition~\ref{def:weightedHolder}, but restricted to test functions $\psi$, points $x$
and constants $\lambda$ such that the support of $\psi_x^\lambda$ lies in $(-\infty,\tau]\times {\mathbb{R}}^{d-1}$.
Given two time-indexed spaces $\CV$ and $\bar \CV$, we call a map
$A \colon \CV \to \bar \CV$ `adapted' if there are
maps $A_\tau \colon \CV_\tau \to \bar \CV_\tau$ such that $\pi_\tau A = A_\tau \pi_\tau$.
If $A$ is linear, we will furthermore assume that the norms of $A_\tau$ are uniformly bounded
over bounded subsets of ${\mathbb{R}}_+$. Similarly, we call $A$ ``locally Lipschitz'' if each of the $A_\tau$
is locally Lipschitz continuous and, for every $K>0$ and $\tau>0$, the Lipschitz constant of
$A_{\tau'}$ over the centred ball of radius $K$ in $\CV_{\tau'}$ is bounded, uniformly over $\tau' \in (0,\tau]$.
With these preliminaries in place, our setup is the following.
\begin{claim}
\item Fix $d\geq 2$, $\beta=2$, the scaling $\frs=(2,1,\ldots,1)$ on ${\mathbb{R}}^d=\{(t,x):t\in{\mathbb{R}},x\in{\mathbb{R}}^{d-1}\}$, and a regularity structure $\scT$.
\item Let $\gamma$, $\gamma_0$ be two positive numbers satisfying $\gamma<\gamma_0 + 2$ and
let $V$ be a sector of regularity $\alpha \le 0$ and such that $\bar T \subset V$.
\item Set $P_0=\{(0,x):x\in{\mathbb{R}}^{d-1}\}$ and $P_1=\{(t,x):t\in{\mathbb{R}},x\in\partial D\}$, where $D$ is a domain in ${\mathbb{R}}^{d-1}$ with a piecewise $\cC^1$ boundary, satisfying the cone condition.
\item We assume that we have an abstract integration map $\cI$ of order $2$ as well as non-anticipative
kernels $K\in\scK_2$ and $Z\in\scZ_{2,P}$. We then construct the operator $Z_\gamma$ and, for every admissible model $(\Pi,\Gamma)$,
the operator $\cK_\gamma$ as in Sections~\ref{sec:kernel} and~\ref{subsec:int remainder}.
\item We fix a family $((\Pi^\varepsilon,\Gamma^\varepsilon))_{\varepsilon\in(0,1]}$ of admissible models converging to $(\Pi^0,\Gamma^0)$ as $\varepsilon\rightarrow0$.
\item We fix a collection of time-indexed spaces $\cV_\varepsilon$ with $\varepsilon \in [0,1]$ endowed
with adapted linear maps $\hat \CR^\varepsilon \colon \cV_\varepsilon \to \bigoplus_{i=0}^n\cC^{w_i}_P$
and $\iota_\varepsilon\colon \cV_\varepsilon \to \bigoplus_{i=0}^n\cD^{\gamma_0,w_i}_P(V_i,\Gamma^\varepsilon)$,
where $V_i$ are sectors of regularity $\alpha_i$, satisfying $\cI(V_i) \subset V$ and $w_i\in{\mathbb{R}}^3$.
Finally, we assume that for every $\varepsilon \in [0,1]$ and every $v \in \cV_\varepsilon$, one has
\begin{equ}\label{eq:hatR0}
\bigl(\tilde \cR\bR^D_+\iota_\varepsilon v\bigr)(\psi) = \bigl(\hat \CR^\varepsilon v\bigr)(\psi)
\end{equ}
for any $\psi \in \cC_0^\infty({\mathbb{R}}^d \setminus P)$.
Denote $\tilde \cC=\bigoplus_{i=0}^n\cC^{w_i}_P$ and $\tilde \cD=\bigoplus_{i=0}^n\cD^{\gamma_0,w_i}_P(V_i,\Gamma^\varepsilon)$, which are themselves time-indexed spaces equipped with the natural norms.
\item We fix a collection of time-indexed spaces $\CW_\varepsilon$ of modelled distributions
such that the linear maps
\begin{equ}
\cP_\gamma^{(\varepsilon)} v = \sum_{i=0}^n \bigl(\cK_\gamma^{(\hat\cR^\varepsilon v)_i} (\bR_+^D\iota_\varepsilon v)_i + Z_\gamma (\hat\cR^\varepsilon v)_i \bigr)\;,
\end{equ}
are bounded from $\cV_\varepsilon$ into $\cW_\varepsilon$, with a bound of order $\tau^\theta$ for some $\theta>0$ for their restrictions to time $\tau \in (0,1]$,
uniformly over $\varepsilon \in [0,1]$.
\item For $\varepsilon \in [0,1]$, we fix a collection of adapted locally Lipschitz continuous maps
$F_\varepsilon:\cD^{\gamma,w}_P(V,\Gamma^\varepsilon) \to \cV_\varepsilon$.
\item There are `distances' $\vn{\cdot;\cdot}_{\cW;\tau}$ (possibly also depending on $\varepsilon$)
defined on $\cW_\varepsilon \times \cW_0$
that are compatible
with the maps $F_\varepsilon$ and $\cP_\gamma$ in the sense that, for $u\in\cV_\varepsilon$, $v\in\cV_0$,
and $\tau\in(0,1]$, one has
\begin{equ}
\tau^{-\theta}\vn{\cP^{(\varepsilon)}_\gamma u;\cP^{(0)}_\gamma v}_{\cW;\tau}\lesssim
\vn{\iota_\varepsilon u; \iota_0v}_{\tilde\cD; D_\tau}
+ \|\hat\CR^\varepsilon u- \hat \CR^0v\|_{\tilde\cC;D_\tau} +o(1)\;,
\end{equ}
as $\varepsilon \to 0$. Similarly, uniformly over modelled distributions $f\in\cW_\varepsilon$, $g\in\cW_0$
bounded by an arbitrary constant $C$ and uniformly over $\tau\in(0,1]$, one has
\begin{equ}[e:conteps]
\vn{\iota_\varepsilon F_\varepsilon(f); \iota_0 F_0(g)}_{\tilde\cD; D_\tau}
+ \|\hat\CR^\varepsilon F_\varepsilon(f)- \hat \CR^0 F_0(g)\|_{\tilde\cC;D_\tau} \lesssim\vn{f;g}_{\cW;\tau} + o(1)
\;,
\end{equ}
as $\varepsilon \to 0$.
\end{claim}
\begin{remark}
The reader may wonder what the point of this rather complicated setup is. By choosing for
$\CV_\varepsilon$ a direct sum of spaces of the type defined in Section~\ref{sec:def}, it
allows us to decompose the right hand side
of our equation into a sum of terms with well-controlled behaviour at the boundary.
This gives us the flexibility to exploit different features of each term to control the
corresponding ``reconstruction operator'' $\hat \CR^\varepsilon_i$.
For example, in the case of 2D gPAM, the term $\hat f_{ij}(u) \star \scD_i(u)\star \scD_j(u)$ can be
reconstructed because the corresponding weight exponents are sufficiently large, the term $(\hat g(u) - g(0)\bone) \star \Xi$
can be reconstructed because it vanishes on the boundary, and
the term $g(0)\Xi$ can be reconstructed because it corresponds to (a constant times) white noise,
multiplied by an indicator function.
\end{remark}
We then have the following result.
\begin{theorem}\label{thm:FPP}
In the above setting, there exists $\tau>0$ such that, for every $\varepsilon\in[0,1]$ and every $v \in \cW_\varepsilon$, the equation
\begin{equation}\label{eq:main eq}
u=\cP^{(\varepsilon)}_{\gamma_0} F_\varepsilon(u)+ v\;,
\end{equation}
admits a unique solution $u^\varepsilon\in \cW_\varepsilon$ on $(0,\tau)$. The solution map $\cS_\tau:(v,\varepsilon)\mapsto u^\varepsilon$ is furthermore jointly continuous at $(v,0)$.
\end{theorem}
\begin{proof}
By assumption $\cP^{(\varepsilon)}_{\gamma_0}$ is an adapted linear map from
$\CV_\varepsilon$ to $\cW_\varepsilon$ with control on its
norm that is uniform over $\varepsilon \in [0,1]$. It has the additional property that, when restricted to time $\tau$,
its operator norm is bounded by $\CO(\tau^\theta)$ for some exponent $\theta>0$, uniformly in $\varepsilon$.
Combining this with the uniform local Lipschitz continuity of the maps $F_\varepsilon$, it is immediate that,
for every $C> 2\|v\|_{\cW;1}$, there exists $\tau\in (0,1]$ such that
the right hand side of \eqref{eq:main eq} is a contraction and therefore admits a unique fixed point
in the centred ball of radius $C$ in $\cW_\varepsilon$.
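In symbols (a sketch, writing $L_C$ for a uniform, in $\varepsilon$ and $\tau$, Lipschitz constant of $F_\varepsilon$ on the centred ball of radius $C$): for $u,\bar u$ in that ball,
$$
\|\cP^{(\varepsilon)}_{\gamma_0}F_\varepsilon(u)-\cP^{(\varepsilon)}_{\gamma_0}F_\varepsilon(\bar u)\|_{\cW;\tau}\lesssim\tau^\theta L_C\|u-\bar u\|_{\cW;\tau},
$$
so it suffices to choose $\tau$ small enough for the right-hand side to be at most $\frac12\|u-\bar u\|_{\cW;\tau}$, while, since $\|v\|_{\cW;\tau}\leq\|v\|_{\cW;1}<C/2$, a possibly smaller choice of $\tau$ also ensures that the centred ball of radius $C$ is mapped into itself.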
To show that this is the unique fixed point in all of $\cW_\varepsilon$ is also standard:
assume by contradiction that there
exists a second fixed point $\bar u$ (which necessarily has norm strictly greater than $C$). Then, for every $\tau' < \tau$,
the restrictions of both $u$ and $\bar u$ to time $\tau'$ are fixed points in $\cW_\varepsilon$. However, since the
norm of the restriction of $\cP^{(\varepsilon)}_{\gamma_0}$ to time $\tau'$ is bounded by $\CO((\tau')^\theta)$, one has uniqueness of the fixed point in a ball of
radius $\bar C(\tau')$ of $\cW_\varepsilon$ with $\lim_{\tau' \to 0} \bar C(\tau') = \infty$,
so that one reaches a contradiction by choosing $\tau'$ small enough.
The continuity of the solution map at $(v,0)$ then follows immediately from \eqref{e:conteps}.
\end{proof}
\section{Singular SPDEs with boundary conditions}\label{sec:applications}
The next three subsections are devoted to the proofs of Theorems~\ref{thm:PAM},~\ref{thm:KPZ D}, and~\ref{thm:KPZ N}, respectively. We do rely on the results of the corresponding statements without boundary conditions from \cite{H_KPZ,H0}, in particular the specific regularity structures, models, and their convergence do not change in our setting. Therefore we only specify details about these objects to the extent that is sufficient to cover the new aspects of our setting.
\subsection{2D gPAM with Dirichlet boundary condition}
The regularity structure for the equation \eqref{eq:0PAM} is built as in \cite[Sec.~8]{H0},
and the models $(\Pi^\varepsilon,\Gamma^\varepsilon)_{\varepsilon\in[0,1]}$ as in \cite[Sec.~10]{H0},
and we will
use the notations from there without further ado.
We use the periodic model with sufficiently large period: if the truncated heat kernel $K^0$ is chosen to have support of diameter $1$,
then the periodic model on $[-2,2]^2$ suffices, since convolution with $K^0$ and convolution with its periodic symmetrisation agree on $[-1,1]^2$.
The homogeneity of the symbol $\Xi$ is denoted by
$-1-\kappa$, where $\kappa\in(0,(1/3)\wedge\delta)\setminus\mathbb{Q}$, with $\delta$ being the regularity of the initial condition.
Our setup to apply Theorem~\ref{thm:FPP} is the following. The sectors we are working with are
$$
V =\cI(T)+\bar T,\quad V_0 =T_0^+\star\scD(V)\star\scD(V),\quad V_1 =T_0^+\star\Xi
,\quad V_2=\scal{\Xi}
$$
and we set the exponents $\gamma =1+2\kappa$, $\gamma_0=\kappa$,
\begin{equs}
\alpha &=0, &\quad
\eta &=\kappa,&\quad \sigma &=1/2+\kappa,&\quad \mu &=-\kappa;\\
\alpha_0 &=-2\kappa, &\quad
\eta_0 &=2\kappa-2,&\quad \sigma_0 &=2\kappa-1,&\quad \mu_0 &=2\kappa-2;\\
\alpha_1 &=-1-\kappa, &\quad
\eta_1 &=-1,&\quad
\sigma_1 &=-1/2,&\quad
\mu_1 &=-1-\kappa;\\
\alpha_2 &=-1-\kappa, &\quad
\eta_2 &=-1-\kappa,&\quad
\sigma_2 &=-1-\kappa,&\quad
\mu_2 &=-1-\kappa.
\end{equs}
We then set
\begin{equ}[e:space]
\cV_\varepsilon=\cD_P^{\gamma_0,w_0}(V_0,\Gamma^\varepsilon)\oplus\cD_{P,\{1\}}^{\gamma_0,w_1}(V_1,\Gamma^\varepsilon)\oplus\cD_P^{\gamma_0,w_2}(V_2,\Gamma^\varepsilon)\;,
\end{equ}
and we let $\iota_\varepsilon$ be the identity.
As for $\hat \cR^\varepsilon$, it is chosen to act coordinate-wise, and in the first two coordinates there is no choice to be made: one simply applies Theorems~\ref{thm:reco}--\ref{thm:reco hat'}. The definition of the action
of $\hat \cR^\varepsilon$ on the third coordinate is momentarily postponed.
We take $G$ to be the Dirichlet heat kernel of the domain $D=(-1,1)^2$ continued to all of ${\mathbb{R}}^2$ as
in Example~\ref{example}. We also consider the decomposition $G \sim K + Z$ given there and construct
$\cK_{\gamma_0}$ and $Z_{\gamma_0}$ accordingly. Furthermore, by Schauder's estimate,
it follows that, for all $f\in\cC^\alpha$ with $\alpha>-2$, the function
$$
(t,x) \mapsto \int_{[0,t]\times D}G((t,x),(s,y))f(s,y)\,ds\,dy
$$
is continuous and vanishes on ${\mathbb{R}}_+ \times \partial D$.
In particular, for any $v\in\cV_\varepsilon$, the modelled distribution
$$
h=(\cK_{\gamma_0}^{(\varepsilon)} + Z_{\gamma_0}\hat\cR^\varepsilon)v
$$
satisfies $\langle \bone,h(t,x)\rangle=0$ for all $t>0$ and $x \in \partial D$. Since the only basis element in $V$
with homogeneity lower than $\sigma$ is $\bone$, we conclude that one has $h\in\cD_{P,\{1\}}^{\gamma,w}$.
We exploit this by setting the time-indexed space $\CW_\varepsilon$ to be
\begin{equ}
\CW_\varepsilon = \bigl\{u \in \cD_{P,\{1\}}^{\gamma,(\eta,\sigma,0)}\,:\, \scD_i u\in\cD_{P}^{\gamma-1,(\eta-1,\sigma-1,\kappa-1)},\,i = 1,2\bigr\}\;.
\end{equ}
The reason for only imposing a slightly weaker condition on $u$ itself (i.e.\ we use $0$ instead of $\kappa$ as the third
singularity index) is to be able to deal with initial conditions.
Indeed, let $v$ be the lift of the solution of the linear equation
\begin{equs}\label{eq:initial}
\partial_t v=\Delta v,\quad v|_{\partial D}=0,\quad v|_{\{0\}\times D}=u_0.
\end{equs}
Combining our assumption that $u_0 \in \CC^\delta$ with Lemma~\ref{lem:initial 2} and the definition of the various
exponents, we then note that indeed $v \in \CW_\varepsilon$ as required, but this would \textit{not} be the case
had we simply replaced $\CW_\varepsilon$ by $\cD_{P,\{1\}}^{\gamma,(\eta,\sigma,\kappa)}$. Due to the above choice of exponents, the required estimate of
order $\tau^\theta$ of the short time norm of $\cP_\gamma^{(\varepsilon)}$
from $\cV_\varepsilon$ to $\cW_\varepsilon$ follows from
Lemmas \ref{lem:short time K}-\ref{lem:short time Z,R}, with the choice
$$
\vn{f;g}_{\cW;\tau}:=\vn{f;g}_{\gamma,(\eta,\sigma,0);\tau}+\vn{\scD f;\scD g}_{\gamma-1,(\eta-1,\sigma-1,\kappa-1);\tau}.
$$
We now define the functions $F_\varepsilon$. They are given as local operations with formal
expressions that do not depend on $\varepsilon$, and we define their three components according to
the decomposition \eqref{e:space} separately.
We first set
$$
F^{(0)}(u)=\hat f_{ij}(u)\star\scD_i(u)\star\scD_j(u).
$$
Here $\hat f_{ij}$ are the lifts of the functions $f_{ij}$ in \eqref{eq:0PAM}. By Lemmas~\ref{lem:comp},~\ref{lem:mult}, and~\ref{lem:diff}, $F^{(0)}$ is indeed a mapping from $\CW_\varepsilon$ to $\cD_{P}^{\gamma_0,w_0}(V_0)$.
At this stage we note that the fact that the derivatives of
elements of $\cW_\varepsilon$ have a better corner singularity than $\mu-1$ is crucial, since otherwise we would have had to choose
$\mu_0 \le -2$ which would violate the condition $\mu_0 + 2 > (\mu\vee 0)$ appearing in the conditions of
Theorem~\ref{thm:FPP}.
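Concretely, with the exponents chosen above, this condition is satisfied with a margin of $2\kappa$:
$$
\mu_0+2=(2\kappa-2)+2=2\kappa>0=\mu\vee0,
$$
since $\mu=-\kappa<0$, while $\mu_0\leq-2$ would give $\mu_0+2\leq0$.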
Next, set
$$
F^{(1)}(u)=(\hat g(u)-g(0)\bone)\star\Xi.
$$
Again, using Lemmas~\ref{lem:comp} and~\ref{lem:mult}, it is easy to see that $F^{(1)}$ maps from $\cW_\varepsilon$ to $\cD_{P}^{\gamma_0,w_1}(V_1)$. To see that it in fact maps to $\cD_{P,\{1\}}^{\gamma_0,w_1}(V_1)$, we need only check the coefficient of $\Xi$, since $\Xi$ is the only basis element in $V_1$ with homogeneity less than $\sigma_1$. Since $\langle \bone,u(z)\rangle$ has $0$ limit at $P_1\setminus P_0$, so does $\langle \bone, (\hat g(u))(z)-g(0)\bone\rangle$, and therefore so does the coefficient of $\Xi$ in $F^{(1)}(u)$.
Finally, the third coordinate is the constant modelled distribution
$$
F^{(2)}(u)= g(0)\Xi.
$$
It remains to define $\hat\cR^\varepsilon$ on $\cD_P^{\gamma_0,w_2}(\scal{\Xi})$. To this end, let us recall that for the
model constructed for this equation
in \cite[Sec.~10.4]{H0} (which coincides with the canonical BPHZ model defined more generally in
\cite{BHZ,CH}) $\Pi^0_x\Xi$ is the spatial white noise $\xi$ for all $x$, while
$\Pi_x^\varepsilon\Xi$ is the smoothed noise $\xi_\varepsilon$ for all $x$.
Also notice that any $f\in\cD_P^{\gamma_0,w_2}(\scal{\Xi})$ is necessarily constant on ${\mathbb{R}}_+\times D$,
and therefore in fact it suffices to define $\hat\cR^\varepsilon(\bR^D_+\Xi)$ in a way that the continuity property \eqref{e:conteps} holds.
Defining $\hat\cR^0(\bR^{D}_+\Xi)$ as $\mathbf{1}_{[0,\infty)\times D}\xi$ (which is of course a meaningful expression)
and $\hat\cR^\varepsilon(\bR^{D}_+\Xi)$ as $\mathbf{1}_{[0,\infty)\times D}\xi_\varepsilon$ we therefore only
need to show that the convergence
$$
\|\mathbf{1}_{[0,\infty)\times D}\xi-\mathbf{1}_{[0,\infty)\times D}\xi_\varepsilon\|_{-1-\kappa;[0,1]\times D}\xrightarrow{\varepsilon\rightarrow0}0
$$
holds in probability for \eqref{e:conteps} to hold. This however follows in a more or less standard way
from a Kolmogorov continuity type argument, see for example \cite[Prop.~9.5]{H0} for a very similar statement.
Therefore we can apply Theorem~\ref{thm:FPP} to get that the equation
$$
u=(\cK^{(\varepsilon)}_{\gamma_0}+Z_{\gamma_0}\hat\cR^\varepsilon)\big((F^{(0)},F^{(1)},F^{(2)})(u)\big)
+v
$$
has a unique local solution $u^\varepsilon\in\cD_{P,\{1\}}^{\gamma,w}(V,\Gamma)$ for each of the models $(\Pi^\varepsilon,\Gamma^\varepsilon)$, for $\varepsilon\in[0,1]$.
The fact that these correspond to the approximating equations in the sense that $\cR u^\varepsilon$ is the classical solution of \eqref{eq:0PAM approx}, for $\varepsilon>0$, follows exactly as in \cite{H0}:
indeed, this is a property of the models and the compatibility of the abstract integration operators with the corresponding convolutions, neither of which changed in our setting.
One also has, by Theorem~\ref{thm:FPP}, that $u^\varepsilon$ converges to $u^0$ in probability, with respect to the `distance' $\vn{\cdot;\cdot}_{\gamma,w;\tau}$.
Therefore, $\cR u^\varepsilon$ also converges to $\cR u^0$ in probability, which proves Theorem~\ref{thm:PAM}.
\begin{remark}\label{remark:initialPAM}
If we replace $u_0$ in \eqref{eq:initial} by $(\cR u^0)(s,\cdot)$, $s<\tau$, where $\tau$ is the solution time from Theorem~\ref{thm:FPP}, then $v$ still belongs to $\CW_\varepsilon$; in fact, one even has
$$
v\in\cD_P^{\gamma,(1-\kappa,1-\kappa,0)},\quad\scD_i v\in\cD_P^{\gamma-1,(-\kappa,-\kappa,-\kappa)},\,i=1,2.
$$
Therefore the solution can be restarted from time $s$ and these solutions can be patched together by the arguments in \cite[Sec.~7.3]{H0}. One then sees that the only way that the solution may fail to be global is if $\|\hat\cR^0 F(u^0)\|_{-1-\kappa;s}$, and consequently, $\|(\cR u^0)(s,\cdot)\|_{1-\kappa,\bar D}$ blows up in finite time.
\end{remark}
\subsection{KPZ equation with Dirichlet boundary condition}
The construction of the regularity structure and models (as before, with a sufficiently large period)
for the KPZ equation can be for example found in \cite[Sec.~15]{FH}. The homogeneity of the symbol $\Xi$ is now denoted by $-3/2-\kappa$, where $\kappa\in(0,(1/8)\wedge\delta)\setminus\mathbb{Q}$, with $\delta$ being the regularity of the initial condition.
Similarly to the previous subsection, we let $v$ be the lift of the solution to the linear problem with initial
condition $u_0$ (and Dirichlet boundary conditions). We also choose $K\in\scK_2$ and $Z\in\scZ_{2,P}$, as obtained
from $G$, the Dirichlet heat kernel on the domain $D=(-1,1)$ as in Example~\ref{example}.
We also set $\gamma=3/2+\kappa$, $\gamma_0=\kappa$, and define
\begin{equ}\label{eq:Psi}
\Psi=\Psi^\varepsilon=(\cK_{\gamma_0}^{(\varepsilon)}+Z_{\gamma_0}\hat\cR^\varepsilon)(\bR^D_+\Xi),
\end{equ}
where we define the distributions $\hat \cR^\varepsilon\bR_+^D\Xi$ as in the previous subsection, with the obvious modification that $\xi$ now stands for the $(1+1)$-dimensional space-time white noise.
We then write the abstract fixed point problem for the remainder of a one step expansion
\begin{equation}\label{eq:kpz2}
u=(\cK_{\gamma_0}+Z_{\gamma_0}\hat\cR)((F^{(0)},F^{(1)},F^{(2)})(u))+v,
\end{equation}
with
\begin{equs}
F^{(0)}(u)=(\scD u)^{\star 2},\quad
F^{(1)}(u)=2(\scD\Psi)\star(\scD u),\quad
F^{(2)}(u)\equiv(\scD\Psi)^{\star 2}.
\end{equs}
We further set
$$
V =\cI(T_{-1-2\kappa}^+)+\bar T,
\quad \quad
V_0=(\scD V)^{\star 2},
\quad \quad
V_1=(\scD V)\star T_{-1/2-\kappa}^+,
\quad \quad
V_2=T_{-1-2\kappa}^+,
$$
which obviously implies $\alpha=0$, $\alpha_0=-4\kappa$, $\alpha_1=-1/2-3\kappa$, and $\alpha_2=-1-2\kappa$. As for the weight exponents, let
\begin{equs}
\eta&=\kappa, &\quad \sigma&=1/2+2\kappa, &\quad \mu&=-\kappa,\\
\eta_0&=2\kappa-2, &\quad \sigma_0&=2\kappa-1, &\quad \mu_0&=2\kappa-2,\\
\eta_1&=-3/2, &\quad \sigma_1&=\kappa-1, &\quad \mu_1&=-3/2,\\
\eta_2&=-1-2\kappa, &\quad \sigma_2&=-1-2\kappa, &\quad \mu_2&=-1-2\kappa.
\end{equs}
We then set
similarly to above
\begin{equ}
\CW_\varepsilon = \bigl\{u \in \cD_{P}^{\gamma,(\eta,\sigma,0)}\,:\, \scD_i u\in\cD_{P}^{\gamma-1,(\eta-1,\sigma-1,\kappa-1)},\,i = 1,2\bigr\}\;,
\end{equ}
as well as
$$
\cV_\varepsilon=\cD_P^{\gamma_0,w_0}(V_0,\Gamma^\varepsilon)\oplus\cD_{P}^{\gamma_0,w_1}(V_1,\Gamma^\varepsilon)\oplus\scal{\bR^D_+(\scD\Psi^\varepsilon)^{\star 2}}\;,
$$
and $\iota_\varepsilon$ to be the identity. As before, it is straightforward to check that the conditions on $\cV_\varepsilon$ and $\CW_\varepsilon$ are satisfied, and also that regarding the first two coordinates of $\hat\cR^\varepsilon$ one has
a canonical choice given by Theorem~\ref{thm:reco}.
It remains to define $\hat \cR^\varepsilon\bR^D_+(\scD\Psi)^{\star 2}.$ Recall that $\tilde \cR$ stands for the local reconstruction operator and that the issue with the singularity of low order is that $\tilde \cR \bR^{D}_+(\scD(G_\gamma\Xi))^{\star 2}$ does not have a canonical extension as a distribution in $\cC^{-1-2\kappa}$. Of course, for the approximating models this is just a bounded function, so it could even be extended as an element of $\cC^0$, but these extensions may not converge in the $\varepsilon\rightarrow0$ limit. Therefore some modification of these natural extensions is required at the boundary.
\begin{remark}
This process is very similar to the situation when one takes the sequence of distributions $1/(|x|+\varepsilon)$. This sequence of course does not converge to any distribution as $\varepsilon\rightarrow0$, but $1/(|x|+\varepsilon)+2\log(\varepsilon)\delta_0$ does, in $\cC^{-1-\rho}$ for any $\rho>0$. Moreover, the limiting distribution agrees with $1/|x|$ on test functions supported away from $0$.
\end{remark}
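This mechanism is easy to see numerically. The following sketch (an illustration only, not part of the argument; the test function $\varphi(x)=e^{-x^2}$, the interval $[-1,1]$ and the grid size are arbitrary choices) pairs $1/(|x|+\varepsilon)$ against $\varphi$ with and without the counterterm $2\log(\varepsilon)\varphi(0)$:

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule (avoids the np.trapz / np.trapezoid rename)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Arbitrary smooth, even test function with phi(0) = 1.
phi = lambda x: np.exp(-x**2)
phi0 = 1.0

def naive_pairing(eps):
    """<1/(|x|+eps), phi> over [-1,1]; diverges like -2*log(eps)*phi(0)."""
    x = np.linspace(0.0, 1.0, 200_001)
    # subtract phi(0) so the integrand stays bounded near x = 0,
    # and integrate the phi(0)-part analytically
    smooth = trapezoid((phi(x) - phi0) / (x + eps), x)
    return 2.0 * (smooth + phi0 * (np.log1p(eps) - np.log(eps)))

def renormalised_pairing(eps):
    """The same pairing plus the counterterm 2*log(eps)*phi(0)."""
    return naive_pairing(eps) + 2.0 * np.log(eps) * phi0
```

Only the renormalised pairing stabilises as $\varepsilon\to0$; the naive one drifts apart by $2\varphi(0)\log(\varepsilon'/\varepsilon)$ between two values of the regularisation parameter.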
First, for the models $(\Pi^\varepsilon,\Gamma^\varepsilon)$, $\varepsilon>0$, we denote by $\cR \bR^{D}_+(\scD\Psi)^{\star 2}$
the natural extension of $\tilde \cR \bR^{D}_+(\scD\Psi)^{\star 2}$ which, as just mentioned, is a bounded function
and can be written in the form
$$
(\cR \bR^{D}_+(\scD\Psi)^{\star 2})(z)=A_2^\varepsilon(z)+A_0^\varepsilon(z),
$$
where $A_i^\varepsilon(z)$ are random variables belonging to the $i$-th homogeneous Wiener chaos for $i=0,2$.
To write them more explicitly, introduce the notations $\bar f(s,y)=f(-s,-y)$ for any function $f$, set
\begin{equs}
\tilde{K}_{Q,\varepsilon}(z,z')&=(\bar\rho_\varepsilon\ast (D_1K(z,\cdot)\mathbf{1}_{Q}(\cdot)))(z'),\\
\tilde{Z}_{Q,\varepsilon}(z,z')&=(\bar\rho_\varepsilon\ast (D_1Z(z,\cdot)\mathbf{1}_{Q}(\cdot)))(z'),
\end{equs}
and define $\tilde G_{Q,\varepsilon}=\tilde K_{Q,\varepsilon}+\tilde Z_{Q,\varepsilon}$ for any $Q\subset{\mathbb{R}}^d$, and with the convention that for $\varepsilon=0$ we substitute the convolution $\bar\rho_\varepsilon\ast$ with the identity. We can then write
\begin{equs}\label{eq:A2epsilon}
A^\varepsilon_2(z)&=\int (\tilde G_{[0,\infty)\times D,\varepsilon})(z,z')(\tilde G_{[0,\infty)\times D,\varepsilon})(z,z'')\,\xi(dz')\,\xi(dz''),\\
A_0^\varepsilon(z)&=\int (\tilde G_{[0,\infty)\times D,\varepsilon}(z,z'))^2-\tilde K_{{\mathbb{R}}^d,\varepsilon}^2(z,z')\, dz'.
\label{eq:kpzz}
\end{equs}
Note that the reason for the subtraction in \eqref{eq:kpzz} is the renormalisation already built in the model $(\Pi^\varepsilon,\Gamma^\varepsilon)$. Similarly, for the limiting model $(\Pi^0,\Gamma^0)$,
$$
\tilde \cR \bR^{D}_+(\scD\Psi)^{\star 2}=A_2+A_0,
$$
where $A_2$ and $A_0$ are given by setting $\varepsilon=0$ with the above mentioned convention in \eqref{eq:A2epsilon} and \eqref{eq:kpzz}, respectively.
The convergence of $A_2^\varepsilon$ to $A_2$ in the $\varepsilon\rightarrow0$ limit in $\cC^{-1-\kappa}$ follows from essentially the same power counting argument as in the case without boundary conditions.
The term $A_0^\varepsilon(z)$ however is more delicate. While it is not difficult to show that it converges pointwise to the
smooth function $A_0(z)$ on $(0,\infty)\times D$, the convergence in $\cC^{-1-\kappa}$ is not a priori clear. In fact,
without using the specific form of $G$, one cannot even rule out that the limit exhibits a non-integrable singularity
at the spatial boundary. To see how this can be `countered', first define
\begin{align}
B_0^\varepsilon(z)&=\int (\tilde G_{(-\infty,0)\times D,\varepsilon})
(\tilde G_{{\mathbb{R}}\times D,\varepsilon}+\tilde G_{[0,\infty)\times D,\varepsilon})
(z,z')
dz',\nonumber\\
C_0^\varepsilon(z)&=\int (\tilde K_{{\mathbb{R}}\times D,\varepsilon}+\tilde Z_{{\mathbb{R}}\times D,\varepsilon})^2(z,z')-\tilde K_{{\mathbb{R}}^d,\varepsilon}^2(z,z')dz'
\nonumber\\
&=\int 2\tilde K_{{\mathbb{R}}\times D,\varepsilon}\tilde Z_{{\mathbb{R}}\times D,\varepsilon}(z,z')+
\tilde Z_{{\mathbb{R}}\times D,\varepsilon}^2(z,z')-\tilde K_{{\mathbb{R}}\times D^c,\varepsilon}^2(z,z')
\nonumber\\
&\quad -2\tilde K_{{\mathbb{R}}\times D^c,\varepsilon}\tilde K_{{\mathbb{R}}\times D,\varepsilon}(z,z')\,dz',\label{eq:C0}
\end{align}
for $z\in(0,\infty)\times D$, and extending them by $0$ otherwise, we have $A_0^\varepsilon=-B_0^\varepsilon+C_0^\varepsilon$. We can similarly write $A_0=-B_0+C_0$, where $B_0$ and $C_0$ are defined by formally setting $\varepsilon=0$ in the above definitions, that is, replacing the convolution with $\bar\rho_\varepsilon$ by the identity.
First we claim that for $z\in(0,\infty)\times D$
\begin{equation}\label{kpz:blowup t}
|B_0^\varepsilon(z)|\lesssim 1/(|z|_{P_0}+\varepsilon)=1/(t^{1/2}+\varepsilon).
\end{equation}
It is easy to see that one has the decomposition
\begin{equation}\label{kpz:decom}
(\tilde G_{(-\infty,0)\times D,\varepsilon})(z,\cdot)=\sum_{n\geq0}\tilde{G}^{(n)}(\cdot),
\end{equation}
where, for each $n$, the function $\tilde{G}^{(n)}$ is supported on $\{z':|z'|_{P_0}\leq\varepsilon,\|z-z'\|_\frs\leq 2^{-n}+\varepsilon\}$, and is bounded by $2^{-n}(\varepsilon\vee 2^{-n})^{-3}$. Furthermore, the function $(\tilde G_{{\mathbb{R}}\times D,\varepsilon}+\tilde G_{[0,\infty)\times D,\varepsilon})
(z,\cdot)$ is also bounded by $2^{-n}(\varepsilon\vee 2^{-n})^{-3}$ on the support of $\tilde G^{(n)}$. Hence in the case $|z|_{P_0}\geq 3\varepsilon$, noting that the only nonzero terms in the sum \eqref{kpz:decom} are those where $2^{-n}\geq(|z|_{P_0}/3)$, we can bound
\begin{align*}
B_0^\varepsilon(z)
&\lesssim\int\sum_{(|z|_{P_0}/3)\leq 2^{-n}}2^{-3n}2^{2n}2^{2n}\lesssim 1/|z|_{P_0}
\end{align*}
as required. On the other hand, in the case $|z|_{P_0}\leq3\varepsilon$, we have
\begin{align*}
B_0^\varepsilon(z)\lesssim \sum_{2^{-n}>\varepsilon}2^{-3n}2^{2n}2^{2n}+
\sum_{2^{-n}\leq\varepsilon}\varepsilon^32^{-n}\varepsilon^32^{-n}\varepsilon^3\lesssim1/\varepsilon,
\end{align*}
as required. The estimate
$
|B_0(z)|\lesssim 1/t^{1/2}
$
can be obtained analogously. Since $B_0^\varepsilon$ (extended by $0$ outside of $(0,\infty)\times D$) converges to $B_0$ locally uniformly on $(0,\infty)\times D$ and since by the above estimates $(B_0^\varepsilon)_{\varepsilon\in(0,1]}$
and $B_0$ are uniformly bounded in $\cC^{-1-\kappa/2}$, the convergence also holds in $\cC^{-1-\kappa}$.
Moving on to $C_0^\varepsilon$, first notice that it only depends on the variable $x$. Furthermore, by similar calculations as above, one obtains a bound analogous to \eqref{kpz:blowup t}, namely
\begin{equation}\label{kpz:blowup}
|C^\varepsilon_0(z)|\lesssim 1/(|z|_{P_1}+\varepsilon)=1/\big((x+1)\wedge(1-x)+\varepsilon\big)
\end{equation}
for $z\in(0,\infty)\times D$.
We then define the distribution $\hat C^\varepsilon_0$ by
\begin{equ}[e:defC0hat]
(\hat C^\varepsilon_0,\varphi):=\int C^\varepsilon_0(z)[\varphi(z)-\chi(x+1)\varphi(t,-1)-\chi(x-1)\varphi(t,1)]dz,
\end{equ}
where $\chi$ is a smooth symmetric cutoff function in the $x$ variable which is
$1$ on $\{x':|x'|\leq 1/8\}$, and is supported on $\{x':|x'|\leq 1/4\}$.
The estimate \eqref{kpz:blowup}, together with the local uniform convergence of $C_0^\varepsilon$, then implies that $\hat C^\varepsilon_0$ converges in $\cC^{-1-\kappa}$ to a limit, which we denote by $\hat C^\ast_0$.
Moreover, since $\hat C^\varepsilon_0$ agrees with $C^\varepsilon_0$ on test functions supported away from $P$, $\hat C^\ast_0$ also agrees with $C_0$ on the same class of test functions. In other words, defining
\begin{equation}\label{eq:kpz hat epsilon}
\hat \cR^\varepsilon \bR^{D}_+(\scD\Psi)^{\star 2}=A_2^\varepsilon-B_0^\varepsilon+\hat C^\varepsilon_0,
\end{equation}
as well as
\begin{equation}\label{eq:kpz hat}
\hat \cR^0 \bR^{D}_+(\scD\Psi)^{\star 2}=A_2-B_0+\hat C_0^\ast,
\end{equation}
the desired properties \eqref{eq:hatR0}-\eqref{e:conteps} of $(\hat\cR^\varepsilon)_{\varepsilon\in[0,1]}$ hold.
Therefore by Theorem~\ref{thm:FPP} we can conclude \eqref{eq:kpz2} has a unique local solution $u^\varepsilon\in\cD_P^{\gamma,w}(V,\Gamma^\varepsilon)$
for each $\varepsilon\in[0,1]$,
and $\cR(u^\varepsilon+\Psi^\varepsilon)$ converges to $\cR( u^0+\Psi^0)$. To conclude the proof of Theorem~\ref{thm:KPZ D},
it remains to confirm that for $\varepsilon>0$, $\cR( u^\varepsilon+\Psi^\varepsilon)$ solves \eqref{eq:0KPZ approx}.
This would again follow in exactly the same manner as in \cite{H_KPZ} if we used the `natural' reconstructions everywhere, which we only steered away from in the previous construction. However, since $\hat \cR^\varepsilon$ and $\cR$ only differ by some (finite) Dirac mass on the boundary, and since $G$, the Dirichlet heat kernel, vanishes on the boundary, we have
\begin{align}
\cR (\cK_{\gamma_0}^{(\varepsilon)}+Z_{\gamma_0}\hat\cR^\varepsilon)\bR^D_+(\scD\Psi^\varepsilon)^{\star 2}&
=G\ast\hat\cR^\varepsilon(\bR^D_+(\scD\Psi^\varepsilon)^{\star 2})
\nonumber\\&=G\ast\cR(\bR^D_+(\scD\Psi^\varepsilon)^{\star 2}).\label{eq:kpz hat r}
\end{align}
The previous modification is therefore not visible after the application of the reconstruction operator, and this
concludes the proof of Theorem~\ref{thm:KPZ D}.
\subsection{KPZ equation with Neumann boundary condition}
\label{sec:KPZNeumann}
Most of the arguments of the previous subsection carry through if the Dirichlet heat kernel is replaced by the Neumann heat kernel, with the sole exception of \eqref{eq:kpz hat r}. Instead, we have
\begin{align}
\cR (\cK_{\gamma_0}^{(\varepsilon)}+Z_{\gamma_0}\hat\cR^\varepsilon)\bR^D_+(\scD\Psi^\varepsilon)^{\star 2}&
=G\ast\hat\cR^\varepsilon(\bR^D_+(\scD\Psi^\varepsilon)^{\star 2})
\nonumber\\&=G\ast(\cR(\bR^D_+(\scD\Psi^\varepsilon)^{\star 2})-c^-_\varepsilon\delta_{-1}-c^+_\varepsilon\delta_1),
\end{align}
where $\delta_{\pm1}$ is the Dirac distribution at $x=\pm1$, and
$$
c^-_\varepsilon=\int_{[-1,-3/4]}C_0^\varepsilon(x)\chi(x+1)\,dx,\quad
c^+_\varepsilon=\int_{[3/4,1]}C_0^\varepsilon(x)\chi(x-1)\,dx.
$$
(We henceforth view $C_0^\varepsilon$ and $C_0$ as functions of the spatial variable $x$ only, since we already noted
that these functions, as defined in \eqref{eq:C0}, do not depend on the time variable.)
Since these Dirac masses now do not cancel, one needs more concrete information about $c^-_\varepsilon$ and $c^+_\varepsilon$, and we begin with the former.
First, it will be convenient to shift the equation to the right, so that the left boundary is at $x=0$.
Furthermore, we note that we can add a globally smooth component to $K$ and $Z$ in the definitions
of $C_0^\varepsilon$ and $C_0$ without changing the conclusion that $\hat C_0^\varepsilon$ as defined
by \eqref{e:defC0hat} converges to a limit $\hat C_0^*$. In particular, setting
\begin{equ}[e:notationN]
\CN(x,\sigma) = {\bone_{\sigma > 0} \over \sqrt{2\pi\sigma}}\exp\Big(-{x^2\over 2\sigma}\Big)\;,
\end{equ}
we can assume that for $x\in[0,1/4]$, one has
\begin{equs}
K((0,x),(-s,y)) = \CN(x-y,s),\qquad
Z((0,x),(-s,y)) = \CN(x+y,s)\;.
\end{equs}
With the notations
\begin{equs}
f^{(1)}_x(s,y)=\bone_{y>0}\frac{x-y}{s}\CN(x-y,s),
\quad
f^{(2)}_x(s,y)=\bone_{y>0}\frac{x+y}{s}\CN(x+y,s),
\end{equs}
as well as $f^{(3)}_x(s,y) = f^{(1)}_x(s,y) + f^{(2)}_x(s,-y)$,
and after a trivial change of variables in $s$, we can then write, recalling the notation $\bar f(s,y)=f(-s,-y)$ for any function $f$ of time and space,
\begin{equs}
C_0^\varepsilon(x)&=\int_{{\mathbb{R}}^2}(\bar\rho_\varepsilon\ast(f^{(1)}_x+f^{(2)}_x))^2(s,y)-(\bar\rho_\varepsilon\ast f^{(3)}_x)^2(s,y)\,ds\,dy,\label{eq:C0varepsilon}
\\
C_0(x)&=\int_{{\mathbb{R}}^2}(f^{(1)}_x+f^{(2)}_x)^2(s,y)-(f^{(3)}_x)^2(s,y)\,ds\,dy.
\end{equs}
Note that our modifications of $K$ and $Z$ are only valid for $x\in[0, 1/4]$, and so \eqref{eq:C0varepsilon} also holds for these values of $x$. But since other values do not play a role in computing $c_\varepsilon^-$, for the duration of this computation we can simply define $C_0^\varepsilon(x)$ as the right-hand side of \eqref{eq:C0varepsilon} for other values of $x$. We can then write the decomposition
$$
c_\varepsilon^-=\bar c_\varepsilon^--\hat c_\varepsilon^-:=\int_0^\infty C_0^\varepsilon(x)\,dx-\int_0^\infty(1-\chi(x))C_0^\varepsilon (x)\,dx.
$$
We first show that the second term in this decomposition does not matter.
\begin{proposition}
With the above notations, one has $C_0(x) = 0$ for every $x \neq 0$. Furthermore, for every $\kappa \in (0,1)$,
there exists a constant $C$ such that, for $|x| \ge C\varepsilon$, one has the bound
$|C_0^\varepsilon(x)| \le C\varepsilon^{1-\kappa} |x|^{\kappa-2}$.
\end{proposition}
\begin{proof}
The first statement follows from the second one since $C_0^\varepsilon \to C_0$ locally uniformly, so
it remains to show that the claimed bound on $C_0^\varepsilon(x)$ holds.
We will assume without loss of generality that $x>C\varepsilon$ for some sufficiently large $C$ ($C=6$ will do)
and we write $z=(0,x)$ and $z'=(s,y)$. Since $f^{(3)}_x=f^{(1)}_x+f^{(3)}_x\bone_{y<0}$ almost everywhere, one has
\begin{equs}
C_0^\varepsilon(x)&=\int_{{\mathbb{R}}^2}2(\bar\rho_\varepsilon\ast f^{(1)}_x)(\bar\rho_\varepsilon\ast f^{(2)}_x)\,dz'
+\int_{{\mathbb{R}}^2}(\bar\rho_\varepsilon\ast f^{(2)}_x)^2-(\bar\rho_\varepsilon\ast (f^{(3)}_x\bone_{y<0}))^2\,dz'
\\
&\quad-\int_{{\mathbb{R}}^2}2(\bar\rho_\varepsilon\ast (f^{(3)}_x\bone_{y<0}))(\bar\rho_\varepsilon\ast (f^{(3)}_x\bone_{y>0}))\,dz'
\\
&=:2J_1+J_2-2J_3.
\end{equs}
With the usual convention $\bar\rho_0\ast$ standing for the identity, we can furthermore write
\begin{equs}
J_1&=\int f^{(1)}_x f^{(2)}_x\,dz'+
\int(\bar\rho_\varepsilon\ast f^{(1)}_x)((\bar\rho_\varepsilon-\bar\rho_0)\ast f^{(2)}_x)\,dz'
+\int((\bar\rho_\varepsilon-\bar\rho_0)\ast f^{(1)}_x)f^{(2)}_x\,dz'
\\
&=:I_1+I_2+I_3.
\end{equs}
The expression $I_1$ actually vanishes, since
\begin{align*}
I_1&=\int_{s>0}\frac{x^2-y^2}{s^{2}}\CN(x,s)\CN(y,s)\,dz'=\int_{s>0}{x^2-s \over s^2} \CN(x,s)\,ds \\
&=\int_{r>0}{r^2 - 1 \over |x|}\CN(r,1)\,dr = 0\;.
\end{align*}
To estimate $I_2$, we first note that the scaling properties of $f^{(2)}_x$, together with the fact that its only discontinuity is at $y=0$, immediately imply that one can write
\begin{equ}
(\bar\rho_\varepsilon-\bar\rho_0)\ast f_x^{(2)} = f_{x,\varepsilon}^{(2,1)} + f_{x,\varepsilon}^{(2,2)}\;,
\end{equ}
where $f_{x,\varepsilon}^{(2,2)}$ is supported on ${\mathbb{R}} \times [-2\varepsilon,2\varepsilon]$ and, for any $\kappa \in [0,1]$,
one has the bounds
\begin{equ}[e:boundsf2]
|f_{x,\varepsilon}^{(2,1)}(z')|\lesssim {\varepsilon^{1-\kappa} \over \|z'+z\|^{3-\kappa}}\;,\qquad
|f_{x,\varepsilon}^{(2,2)}(z')|\lesssim {1 \over \|z'+z\|^{2}} \lesssim {1\over s+x^2}\;.
\end{equ}
It follows immediately from standard properties of convolutions (see for example
\cite[Lem.~10.14]{H0}) that
\begin{equ}
\Bigl|\int (\bar\rho_\varepsilon\ast f^{(1)}_x)f_{x,\varepsilon}^{(2,1)}\,dz'\Bigr| \lesssim \varepsilon^{1-\kappa} |x|^{\kappa-2}\;,
\end{equ}
as required. Regarding the term involving $f_{x,\varepsilon}^{(2,2)}$, it follows from the support properties of that
function that
\begin{equ}[e:boundTermEps]
\Bigl|\int (\bar\rho_\varepsilon\ast f^{(1)}_x)f_{x,\varepsilon}^{(2,2)}\,dz'\Bigr| \lesssim \varepsilon \int_0^\infty {ds \over (s+x^2)^2} \lesssim \varepsilon |x|^{-2}\le \varepsilon^{1-\kappa} |x|^{\kappa-2}\;.
\end{equ}
The term $I_3$ can be bounded in exactly the same way.
To bound $J_2$, we use the notation $\tilde \rho_\varepsilon(t,x) = \bar \rho_\varepsilon(t,-x)$.
Since $(f^{(3)}_x\bone_{y<0})(s,y) = f^{(2)}_x(s,-y)$, we can then rewrite $J_2$ as
\begin{equ}
J_2=\int_{{\mathbb{R}}^2}((\bar\rho_\varepsilon-\tilde \rho_\varepsilon)\ast f_x^{(2)})((\bar\rho_\varepsilon+\tilde\rho_\varepsilon)\ast f_x^{(2)})
\,dz'\;.
\end{equ}
Exactly as above, we can decompose the first factor as
\begin{equ}
(\bar\rho_\varepsilon-\tilde\rho_\varepsilon)\ast f_x^{(2)} = f_{x,\varepsilon}^{(2,1)} + f_{x,\varepsilon}^{(2,2)}\;,
\end{equ}
so that the bounds \eqref{e:boundsf2} hold and $f_{x,\varepsilon}^{(2,2)}(z') = 0$ for $y \not \in [-2\varepsilon,2\varepsilon]$.
This time, we exploit the fact that the second factor itself satisfies the bound
\begin{equ}
|((\bar\rho_\varepsilon+\tilde\rho_\varepsilon)\ast f_x^{(2)})(z')| \lesssim \|z+z'\|^{-2}\;,
\end{equ}
uniformly in $\varepsilon$, and that the support of both factors is included in
the set $\|z+z'\| \ge |x|/2$. As a consequence, the term involving $f_{x,\varepsilon}^{(2,1)}$ is bounded
by
\begin{equ}
\int_{\|z'\| \ge |x|/2} {\varepsilon \over \|z'\|^5}\,dz' \lesssim \varepsilon |x|^{-2}\;,
\end{equ}
while the other term is bounded exactly as in \eqref{e:boundTermEps}.
Finally, regarding $J_3$, the product is supported on ${\mathbb{R}} \times [-\varepsilon,\varepsilon]$ and each
factor is bounded by $(s+x^2)^{-1}$ there, so that the corresponding integral is again bounded
as in \eqref{e:boundTermEps}, thus concluding the proof.
\end{proof}
Let us now return to the computation of the constant $\bar c_\varepsilon^-$.
Using the identity $(f\ast g,h)_{L^2({\mathbb{R}}^2)}=(g,\bar f\ast h)_{L^2({\mathbb{R}}^2)}$ and the commutativity of the convolution, we
can rewrite it as
\begin{equs}\label{eq:bar c0 as a product}
\bar c_\varepsilon^-= (\bar\rho_\varepsilon\ast\rho_\varepsilon, F)_{L^2({\mathbb{R}}^2)},
\end{equs}
where
$$
F=F_1+F_2:= \int_{\mathbb{R}}\bar f^{(1)}_x\ast f^{(2)}_x\,dx+{1\over 2}\int_{\mathbb{R}}(\bar f^{(1)}_x\ast f^{(1)}_x+\bar f^{(2)}_x\ast f^{(2)}_x-\bar f^{(3)}_x\ast f^{(3)}_x)\,dx.
$$
We will use again the notation \eqref{e:notationN} and we will make use of the
identities
\begin{equs}
\CN(x,\sigma)\,\CN(y,\eta) &= \CN(x \pm y,\sigma+\eta)\, \CN \Bigl({\eta x \mp \sigma y \over \sigma+\eta},{\sigma \eta \over \sigma+\eta}\Bigr)\;,\\
\partial_x \CN(x,\sigma) &= -(x/\sigma)\CN(x,\sigma)\;.
\end{equs}
The first identity can be obtained by considering a jointly Gaussian centred random variable
$(X,Y)$ with $\mathbf{Var}(Y) = \sigma$, $\mathbf{E} (X\,|\,Y) = Y$, $\mathbf{Var}(X\,|\,Y) = \eta$ and noting that one then
has $\mathbf{Var}(X) = \sigma+ \eta$, $\mathbf{E}(Y\,|\,X) = {\sigma X \over \sigma+\eta}$, and $\mathbf{Var}(Y\,|\,X) = {\sigma \eta \over \sigma+\eta}$.
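Since the computations below lean repeatedly on the first of these identities, here is a quick pointwise numerical check of both sign choices (purely illustrative; the sampled points are arbitrary):

```python
import numpy as np

def N(x, sigma):
    # the Gaussian kernel \CN from the text, for sigma > 0
    return np.exp(-x**2 / (2.0 * sigma)) / np.sqrt(2.0 * np.pi * sigma)

def check_identity(x, y, sigma, eta):
    lhs = N(x, sigma) * N(y, eta)
    # upper choice of signs in the identity
    rhs_plus = N(x + y, sigma + eta) * N((eta * x - sigma * y) / (sigma + eta),
                                         sigma * eta / (sigma + eta))
    # lower choice of signs
    rhs_minus = N(x - y, sigma + eta) * N((eta * x + sigma * y) / (sigma + eta),
                                          sigma * eta / (sigma + eta))
    return lhs, rhs_plus, rhs_minus
```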
Exploiting this identity, we can rewrite $F_1$ as
\begin{equs}
F_1 &= \int \bone_{y'>y \vee 0} {x-y'+y \over s'-s}{x+y'\over s'} \CN(x-y'+y,s'-s)\CN(x+y',s')\,dz'\,dx \\
&={1\over 4}\int \bone_{y'>y \vee 0} {(2x+y)^2 - (2y'-y)^2 \over s'(s'-s)} \CN(2y'-y,2s'-s)\\
&\qquad \times \CN\Big(x+{y\over 2} - {s(2y'-y) \over 2(2s'-s)},{s'(s'-s) \over 2s'-s}\Big)\,dz'\,dx\;.
\end{equs}
We now perform the change of variables $2y'-y \mapsto y'$ and $2s'-s \mapsto s'$ which in
particular maps $dz'$ to ${1\over 4}dz'$ and $s'(s'-s)$ to $((s')^2-s^2)/4$ so that
\begin{equs}
F_1 &={1\over 4}\int \bone_{y'>|y|} {(2x+y)^2 - (y')^2 \over (s'+s)(s'-s)} \CN(y',s')
\CN\Big(x+{y\over 2} - {s y' \over 2s'},{(s'-s)(s'+s) \over 4 s'}\Big)\,dz'\,dx\\
&= {1\over 4}\int \bone_{y'>|y| \atop s' > |s|}{1\over s'} \Big({1- {(y')^2 \over s'}}\Big) \CN(y',s')\,dz'\\
&= {1\over 4}\int \bone_{y'>|y| \atop s' > |s|}{1\over \sqrt{s'}} \Big({1- {(y')^2 \over s'}}\Big) \CN(y'/\sqrt{s'},1)\,{dz' \over s'}\;.
\end{equs}
At this stage, for fixed $y'$, we perform the change of variables $r = y'/\sqrt{s'}$, so that
$dz'/s' = 2 dy'\,dr/r$, thus yielding
\begin{equs}
F_1(z) &= {1\over 2}\int_{|y|}^\infty {1\over y'} \int_0^{y' \over \sqrt{|s|}} \big({1- r^2}\big) \CN(r,1)\,dr\,dy'
= -{1\over 2}\int_{|y|}^\infty {1\over y'} \int_0^{y' \over \sqrt{|s|}} \partial_r^2 \CN(r,1)\,dr\,dy' \\
&= -{1\over 2}\int_{|y|}^\infty {1\over y'} (\partial_1 \CN)\Big({y' \over \sqrt{|s|}},1\Big) \,dy'
= {1\over 2}\int_{|y|}^\infty {1\over \sqrt{|s|}} \CN\Big({y' \over \sqrt{|s|}},1\Big) \,dy' \\
&= {1\over 2}\int_{|y| \over \sqrt{|s|}}^\infty \CN(q,1) \,dq = {1\over 4} - {1\over 4}\mathop{\mathrm{Erf}} \Bigl({|y| \over \sqrt{2|s|}}\Bigr)\;.
\end{equs}
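As a numerical sanity check of this closed form (illustrative only and not part of the proof; the evaluation point $(s,y)=(0.3,0.5)$ and the grid sizes are arbitrary choices), one can compare it with the double-integral representation in the first line of the display:

```python
import numpy as np
from math import erf, sqrt

def trapezoid(y, x):
    # plain trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def N(x, sigma):
    return np.exp(-x**2 / (2.0 * sigma)) / np.sqrt(2.0 * np.pi * sigma)

def F1_double_integral(s, y):
    """(1/2) int_{|y|}^inf (1/y') int_0^{y'/sqrt|s|} (1 - r^2) N(r,1) dr dy'."""
    # the outer integrand decays like a Gaussian, so truncating the
    # y'-integral at |y| + 12 sqrt(|s|) is harmless
    yp = np.linspace(abs(y), abs(y) + 12.0 * sqrt(abs(s)), 4001)
    inner = np.empty_like(yp)
    for i, a in enumerate(yp / sqrt(abs(s))):
        r = np.linspace(0.0, a, 4001)
        inner[i] = trapezoid((1.0 - r**2) * N(r, 1.0), r)
    return 0.5 * trapezoid(inner / yp, yp)

def F1_closed(s, y):
    return 0.25 - 0.25 * erf(abs(y) / sqrt(2.0 * abs(s)))
```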
Let us now turn to $F_2$. Setting $f_x(z) = {x-y\over s}\CN(x-y,s)$, a simple calculation shows that
\begin{equs}
F_2(z) &= {1\over 2} \int f_x(z-z')f_x(-z') \bigl(\bone_{y'<(0\wedge y)} + \bone_{y'>(0 \vee y)} - 1\bigr)\,dz'\,dx \\
&= -{1\over 2} \int f_x(z-z')f_x(-z') \bone_{-|y| < 2y'-y < |y|}\,dz'\,dx\\
&= {1\over 2} \int {x-y + y'\over s - s'}{x + y'\over s'}\CN(x -y+ y', s-s') \CN(x + y', -s') \bone_{|2y'-y| < |y|}\,dz'\,dx\\
&= - {1\over 8} \int {(2x+y')^2-y^2 \over (s')^2 - s^2} \CN(y,s') \CN \Bigl(x + {y'\over 2} + {ys\over 2s'}, {(s')^2 - s^2 \over 4s'}\Bigr) \bone_{|y'| < |y|} \,dz'\,dx\\
&= - {1\over 8} \int_{-|y|}^{|y|} \int_{|s|}^\infty \Bigl({1\over s'} - {y^2 \over (s')^2}\Bigr) \CN(y,s') \,ds'\,dy'\\
&= {|y|\over 4} \int_{|s|}^\infty {1\over s'}\Bigl({y^2 \over s'} - 1\Bigr) \CN(y,s') \,ds'
= -{|y| \over 2} \CN(y,|s|)\;,
\end{equs}
where the last equality was obtained in exactly the same way as above.
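This last integral identity can likewise be checked numerically (again illustrative only; the evaluation point $(s,y)=(0.3,0.7)$ and the truncation of the $s'$-integral are arbitrary choices):

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule on a (possibly nonuniform) grid
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def N(x, sigma):
    return np.exp(-x**2 / (2.0 * sigma)) / np.sqrt(2.0 * np.pi * sigma)

def F2_integral(s, y, cutoff=1e8, n=200_001):
    """(|y|/4) int_{|s|}^cutoff (1/s') (y^2/s' - 1) N(y,s') ds'."""
    # log-spaced grid: the integrand only decays like s'^(-3/2)
    sp = np.exp(np.linspace(np.log(abs(s)), np.log(cutoff), n))
    integrand = (abs(y) / 4.0) * (y**2 / sp - 1.0) * N(y, sp) / sp
    return trapezoid(integrand, sp)

def F2_closed(s, y):
    return -0.5 * abs(y) * N(y, abs(s))
```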
Combining these identities with \eqref{eq:bar c0 as a product} and exploiting the fact that $F$ is $0$-homogeneous
under the parabolic scaling, we finally obtain
\begin{equ}\label{eq:bar c0}
\bar c_\varepsilon^-=\int_{{\mathbb{R}}^2}(\bar\rho\ast\rho)(s,y)\,\Big({1\over 4} - \frac{1}{4}\mathop{\mathrm{Erf}}\Big(\frac{|y|}{\sqrt{2|s|}}\Big)-{|y| \over 2} \CN(|y|,|s|)\Big)\,ds\,dy=\frac{a}{2},
\end{equ}
where $a$ is the quantity given in \eqref{eq:constant a}.
If momentarily one also includes the dependence of $c^{\pm}_\varepsilon$ on $\rho$, one has, by symmetry, $c^+_\varepsilon(\rho)=c^-_\varepsilon(\hat \rho)$, with $\hat \rho(t,x) = \rho(t,-x)$. Therefore by \eqref{eq:bar c0}, $c_\varepsilon^+=\bar c_\varepsilon^+-\hat c_\varepsilon^+$, where $\hat c_\varepsilon^+\rightarrow 0$ as $\varepsilon\rightarrow0$ and $\bar c_\varepsilon^+$ is given by
\begin{equ}\label{eq:bar c1}
\bar c_\varepsilon^+ = \int_{{\mathbb{R}}^2}(\bar{\hat\rho}\ast\hat\rho)(s,y)F(s,y)\,ds\,dy
=\int_{{\mathbb{R}}^2}(\bar\rho\ast\rho)(s,y)F(s,-y)\,ds\,dy = \frac{a}{2},
\end{equ}
since $F$ is even in each of its arguments.
We can conclude that, for any fixed constants $\hat b_\pm \in {\mathbb{R}}$, setting
\begin{equation}\label{eq:kpz hat epsilon N}
\hat \cR^\varepsilon \bR^{D}_+(\scD\Psi)^{\star 2}=A_2^\varepsilon-B_0^\varepsilon+ C^\varepsilon_0- {1\over 2}\bone_{t>0}\big((a-\hat b_-)\delta_{-1}+ (a+\hat b_+)\delta_1\big),
\end{equation}
for the models $(\Pi^\varepsilon,\Gamma^\varepsilon)$ and
\begin{equation}\label{eq:kpz hat N}
\hat \cR^0 \bR^{D}_+(\scD\Psi)^{\star 2}=A_2-B_0+ C_0 - {1\over 2}\bone_{t>0}\big(\hat b_+\delta_1- \hat b_-\delta_{-1}\big),
\end{equation}
for the limiting model,
the desired properties \eqref{eq:hatR0}-\eqref{e:conteps} of $(\hat\cR^\varepsilon)_{\varepsilon\in[0,1]}$ hold.
Similarly to before, but accounting for the additional Dirac masses, we then see that for any
fixed $\varepsilon > 0$ the function $h^\varepsilon = \cR( u^\varepsilon+\Psi^\varepsilon)$
(there is no ambiguity for the reconstruction operator as far as the solution $u^\varepsilon$ is concerned, it is trivially given
simply by the component in the direction $\bone$) solves
\begin{equs}[eq:0KPZ N approx 2]
\partial_t h^\varepsilon &=\tfrac{1}{2}\partial_x^2 h^\varepsilon+(\partial_x h^\varepsilon)^2+2c\partial_x h^\varepsilon-C_\varepsilon+\xi_\varepsilon \quad & \text{on } &{\mathbb{R}}_+\times [-1,1],\\
\partial_x h^\varepsilon &= \mp a + b_{\pm} & \text{on } &{\mathbb{R}}_+\times \{\pm 1\},\\
h^\varepsilon &=u_0 &\text{on }&\{0\}\times[-1,1],
\end{equs}
where $c$ is given by \eqref{e:defc} below. Hence, clearly, $\hat h^\varepsilon = h^\varepsilon +cx+(C_\varepsilon+c^2) t$ solves \eqref{eq:0KPZ N approx} with boundary data $\hat b_\pm=\mp a+ b_\pm+c$ and $\hat u_0(x)=u_0(x)+cx$.
Applying again Theorem~\ref{thm:FPP}, combined with the results of \cite{HS15} regarding the convergence of the
corresponding admissible models, we conclude that, for any choice of $b_\pm$, the solution
to \eqref{eq:0KPZ N approx 2} (which is precisely the same as \eqref{e:renormu} provided
that the constant $C_\varepsilon$ is adjusted in the appropriate way) converges locally
as $\varepsilon \to 0$ to a limit which depends on the choice of $b_\pm$ but is independent of
the choice of mollifier $\rho$. It remains to show that this limit coincides with the
Hopf-Cole solution to the KPZ equation with Neumann boundary data given by $b_\pm$.
This follows by considering the special case $\rho(t,x) = \delta(t)\hat \rho(x)$, which is covered
by the above proof, the only minor modification being the proof of convergence of the corresponding
admissible model to the same limit, which can be obtained in a way very similar to \cite{H_KPZ,H0}.
As already mentioned at the end of Section~\ref{subsec:applications}, one has $a = c = 0$ in this case,
so that in particular
$\hat b_\pm = b_\pm$.
In this case, we can apply It\^o's formula to perform the Hopf-Cole transform and obtain convergence
to the corresponding limit by classical means \cite{DPZ}, which concludes the proof.
\subsubsection{Expression for the drift term}
It follows from \cite{HS15} that the constant $c$ appearing in \eqref{eq:0KPZ N approx 2}
is given by
\begin{equ}[e:defc]
c = -2\scal{\rho * \bar \rho, \partial_x P * \partial_x P * \overline{\partial_x P} } =: \scal{\rho * \bar \rho, F_{0}}\;,
\end{equ}
where $P$ is the heat kernel.
Similarly to above, we obtain the identity
\begin{equs}
(\partial_x P * \partial_x P)(t,x) &= \int {x-y\over t-s} {y\over s} \CN(y,s)\CN(x-y,t-s) \,dy\,ds\\
&= \CN(x,t) \int {x-y\over t-s} {y\over s} \CN\Bigl(y-{s x \over t},{s(t-s) \over t}\Bigr) \,dy\,ds\\
&= \CN(x,t) \int_0^t {x^2 - t \over t^2} \,ds
= \CN(x,t) {x^2 - t\over t}\;,
\end{equs}
which then implies that the function $F_0$ is indeed given by
\begin{equs}
F_0(t,x) &= 2\int {y^2-s\over s} {x-y\over t-s} \CN(y,s)\CN(x-y,s-t) \bone_{s \ge 0\vee t}\,dy\,ds \\
&= 2 \int {y^2-s\over s} {x-y\over t-s} \CN(x,2s-t) \CN\Bigl(y-{s x \over 2s-t},{s(s-t) \over 2s-t}\Bigr) \bone_{s \ge 0\vee t}\,dy\,ds\\
&= 2 \int {(2y^2-r-t)(y-x)\over r^2-t^2} \CN(x,r) \CN\Bigl(y-{(r+t) x \over 2r},{r^2-t^2 \over 4r}\Bigr) \bone_{r \ge |t|}\,dy\,dr\\
&= \int_{|t|}^\infty {(r+t) x \over 2r^2} \Bigl(3-{x^2 \over r}\Bigr) \CN(x,r) \,dr
= \mathop{\mathrm{Erf}}(x/\sqrt{2|t|}) + 2x\CN(x,t)\;.
\end{equs}
To obtain \eqref{eq:constant c}, it remains to note that the first term is odd under the
substitution $(t,x) \leftrightarrow (-t,-x)$, while $\rho * \bar \rho$ is even, so that this
does not contribute to the value of $c$.
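For $t>0$, the closed form for $F_0$ can also be compared numerically with the last integral representation above (illustrative only; the evaluation point and cutoff are arbitrary choices):

```python
import numpy as np
from math import erf, sqrt

def trapezoid(y, x):
    # plain trapezoidal rule on a (possibly nonuniform) grid
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def N(x, sigma):
    return np.exp(-x**2 / (2.0 * sigma)) / np.sqrt(2.0 * np.pi * sigma)

def F0_integral(t, x, cutoff=1e8, n=200_001):
    """int_{|t|}^cutoff ((r+t) x / (2 r^2)) (3 - x^2/r) N(x,r) dr, for t > 0."""
    r = np.exp(np.linspace(np.log(abs(t)), np.log(cutoff), n))
    integrand = (r + t) * x / (2.0 * r**2) * (3.0 - x**2 / r) * N(x, r)
    return trapezoid(integrand, r)

def F0_closed(t, x):
    # valid for t > 0, where the indicator in the definition of N equals 1
    return erf(x / sqrt(2.0 * abs(t))) + 2.0 * x * N(x, t)
```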
\section{Introduction}
Complex Langevin (CL) methods have turned out to be quite useful in the
calculation (simulation) of high dimensional integrals over complex
valued weight functions of the form $e^{-S}$, where $S$ is the action
or the Hamiltonian of some physical system. Since there is no formal
restriction to a real valued drift term for Langevin equations, the
application of CL is convincingly simple \cite{Klaud1}. Unfortunately
one has to deal with two problems of uncertainty. The first is that it
is a priori unknown whether the process will converge at all. The second
problem is that, even when the process has converged, it will not
necessarily give the correct answer. That is, long-time averages
of such a process do not necessarily reproduce the complex valued weight
function integrals. CL is known to sometimes give the wrong answer (see
e.g. \cite{Klaud2}).
Several attempts have been made to understand CL (e.g. see references
\cite{Klaud2, Amb, Hay}). For some simple actions the behavior of CL
can be improved by modifying the drift term with an appropriate kernel,
but for general problems the choice of the kernel is not clear
\cite{Schul}. Recently progress has been made in the comprehension of
the results which one gets from a convergent process
\cite{GauLee,Lee1}. In particular the assumptions needed to guarantee
correct results for convergent processes on certain compact manifolds
($S_1$, $S_2$) turn out to be surprisingly simple and easy to verify in
a numerical simulation. Contrary to that, many assumptions are used to
prove the behavior of processes on ${\mbox{\math R}}^n$ driven by polynomial
actions and moreover these assumptions are rather technical
\cite{GauLee}.
For polynomial actions a lot of attention has been given to the
existence of a pseudo Fokker-Planck (F-P) equation which describes the
dynamics of a possibly equivalent complex valued weight function
\cite{GauLee}. In earlier investigations especially the spectrum of
this operator played a major role \cite{Klaud1}. But statements on the
properties of the spectrum are not sufficient to draw conclusions on
the correctness or the convergence of CL \cite{GauLee}. Certainly, if
one can show that the pseudo F-P equation exists and that the real
part of the spectrum of the operator is semidefinite then CL converges
but not necessarily to the desired result. Further conditions must hold
(see \cite{GauLee}). Except for very simple cases it is hard and most
unlikely to get exact information on the complete spectrum. Certainly
there always exists the real F-P operator for the process, and the
convergence of the process follows if one can prove that the operator
has a unique nonnegative integrable solution to the zero eigenvalue.
But this is also very hard and so far there is no classification scheme
for actions which have the suitable properties. So, to get information
on the convergence for any problem one must check either the existence
and the whole spectrum of the pseudo F-P operator or the zero
eigenvalue properties of the real F-P operator. In practice therefore
the question of convergence still remains a matter of experiment and
experience.
\section{Proper Convergence}
Let us now turn to the main purpose of the paper and examine in a
rigorous fashion the conditions under which CL, if convergent, gives
the right answer. To demonstrate this, several conditions at finite
time have been put on the process in reference \cite{GauLee}. In this
approach fewer conditions on the process are used and these conditions
are put forward to $t \rightarrow \infty$. For simplicity the
discussion and the formulas are restricted to the one dimensional case.
All following statements allow for an immediate generalization to
${\mbox{\math R}}^n$. It will be assumed that the system of interest is described
by a complex polynomial action of degree $N$
\begin{equation}
S(x) = \sum_{n=0}^N a_n x^n \;\; .
\end{equation}
$S:{\mbox{\math R}} \rightarrow {\mbox{\math C}}$ such that $e^{-S} \in \cal{S}({\mbox{\math R}})$.
$\cal{S}({\mbox{\math R}})$ is the Schwartz space of $C^{\infty}$ functions of
rapid decrease. With $g(x)$ a polynomial of degree $M$ it is thus
guaranteed that the quantities of physical interest
\begin{equation}\label{1}
\langle g(x) \rangle \equiv {1 \over \cal{N}
}\int_{\mbox{\mathi R}} g(x) e^{-S(x)} dx,
\end{equation}
\begin{equation}
{\cal{N}} = \int_{\mbox{\mathi R}} e^{-S(x)} dx \;\;\;,
\end{equation}
do exist, provided $0<|\cal{N}|$. If $S$ were real valued,
everything would be straightforward ergodic theory, and the
long-time averages computed with the Langevin equation would reproduce
the ensemble averages of the system (see \cite{RiskGar}).
In the complex case analytic continuation leads to the
following stochastic differential equation.
\begin{equation}\label{2}
dZ(t) = F(Z(t)) dt + dW(t) \;\;,
\end{equation}
with the drift term
\begin{equation}\label{3}
F(z) =-{1 \over 2} {{dS(z)} \over {dz}} \;\;.
\end{equation}
$W(t)$ is a standard Wiener process with zero mean and
cova\-riance
\begin{equation}\label{4}
E \left( W(t_1)W(t_2) \right) = \mbox{min} (t_1,t_2).
\end{equation}
Equation \ref{2} is the so-called CL equation. This equation has a
locally unique solution which is defined up to a random explosion time
\cite{Arn}. In particular, equation \ref{2} describes a two-dimensional
diffusion process:
\begin{equation} \label{5}
dX(t) = G(X(t),Y(t)) dt + dW(t) \;,
\end{equation}
\begin{equation}\label{6}
dY(t) = H(X(t),Y(t)) dt \;\;.
\end{equation}
With $S(z)=u(x,y) + iv(x,y)$, we have
\begin{equation}
G(x,y) =-{1 \over 2} {{\partial u(x,y)} \over {\partial x}} \;, \;\;\;
H(x,y) = {1 \over 2} {{\partial u(x,y)} \over {\partial y}} \;\;.
\end{equation}
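As a concrete illustration of equations (\ref{5})-(\ref{6}), the following sketch integrates the CL equation with a simple Euler-Maruyama scheme for a hypothetical complex Gaussian action $S(z)=\sigma z^{2}/2$ with $\mathrm{Re}\,\sigma>0$, for which the exact result $\langle z^{2}\rangle=1/\sigma$ is known. The action, step size, and ensemble size are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0 + 0.5j          # hypothetical Gaussian action S(z) = sigma*z**2/2

def drift(z):
    # F(z) = -(1/2) dS/dz; for S(z) = sigma*z**2/2 this is -(sigma/2)*z
    return -0.5 * sigma * z

dt, n_steps, n_walkers = 0.01, 2000, 50000
z = np.zeros(n_walkers, dtype=complex)          # Z(0) = 0
for _ in range(n_steps):
    # Euler-Maruyama step for dZ = F(Z) dt + dW with a *real* Wiener increment,
    # so Re Z diffuses while Im Z follows the drift alone (cf. eqs. (5)-(6))
    z += drift(z) * dt + rng.normal(0.0, np.sqrt(dt), n_walkers)

est = (z ** 2).mean()       # long-time CL estimate of <z**2>
exact = 1.0 / sigma         # exact ensemble average for this action
print(est, exact)
```

For this Gaussian case the long-time average agrees with the ensemble average; for genuinely complex non-Gaussian actions such agreement is precisely what the assumptions discussed below are designed to guarantee.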
A special feature of this process is that equation \ref{6} looks like a
deterministic equation, due to its vanishing diffusion coefficient.
Nevertheless it is a stochastic equation through its dependence on
$X(t)$. The singular diffusion matrix causes considerable problems:
contrary to the real-action case, it is in general not possible to
determine from the drift and diffusion terms whether there exists a
unique stationary distribution density for this process \cite{RiskGar}.
As already mentioned in the introduction, there is no general proof of
the existence of a stationary distribution density. For the moment let
us assume that for the process $X(t),Y(t)$ there exists a unique
stationary distribution density $\hat f(x,y)$. The idea behind CL is
then that
\begin{equation}\label{7}
\begin{array} {rcl}
&& \displaystyle \lim_{t \rightarrow \infty} E\left (g(X (t)+
i Y(t)) \right) =
\nonumber \\ \\
&& \displaystyle \int_{{\mbox{\mathi R}}^2} g(x+iy) \hat f(x,y) dxdy =
{1 \over \cal{N} }\int_{\mbox{\mathi R}} g(x) e^{-S(x)} dx
\end{array}
\end{equation}
might hold.
\newpage
Assume:
\begin{enumerate}
\item
$S$ is a complex valued polynomial action of degree $N$ such that
\begin{equation}
e^{-S} \in \cal{S}({\mbox{\math R}})
\end{equation}
and
\begin{equation}
\left | \int_{\mbox{\mathi R}} e^{-S(x)} dx \right |> 0.
\end{equation}
\item
For
\begin{equation}\label{20}
c(k,t) \equiv E(e^{ikZ(t)}) = \int_{{\mbox{\mathi R}}^2} e^{ik(x+iy)} f(x,y,t)
dxdy
\end{equation}
the limit $t \rightarrow \infty$ exists pointwise and
\begin{equation}
\lim_{ t \to \infty } c(k,t) \equiv c_\infty(k) \in
{\cal S}( {\mbox{\math R}})\;\;.
\end{equation}
\item
Further
\begin{equation}
\lim_{t \rightarrow \infty} \left| E(Z^n(t) e^{ikZ(t)}) \right|
< \infty \mbox{ for all }
0 \leq n \leq N-1, k \in {\mbox{\math R}} \;\;\;.
\end{equation}
\end{enumerate}
Equation \ref{7} then holds at least for $g(z)$ a polynomial of degree
$M \leq N-1$. Moreover, equation \ref{7} holds for any higher moment
$E(Z^n(t)), \; n \geq N$, that exists for $t \rightarrow \infty$.
{}From assumption 2 we know that there is a $t_0 < \infty $ such that
$c(k,t)$ exists and from assumption 3 that there is a $t_1 < \infty$
such that $E(Z^n(t) e^{ikZ(t)})$ exists. Applying the It\^o rule one
gets with $F(z)$ as defined in \ref{3}
\begin{equation}\label{8}
{{ \partial E(e^{ikZ(t)}) } \over {\partial t}} = ik E \left(
e^{ikZ(t)}F(Z(t)) \right) - {k^2 \over 2}E \left( e^{ikZ(t)} \right)
\end{equation}
Due to assumptions 2 and 3, equation \ref{8} exists for $t > t^\prime =
\mbox{max} (t_0,t_1)$. As a side result we get that, if $c(k,t) \in
C^{N-1}({\mbox{\math R}})$ with respect to $k$, equation \ref{8} can be understood
as the dynamical equation for $c(k,t)$:
\begin{equation}\label{9}
{{ \partial c(k,t) } \over {\partial t}} =
-{ik \over 2} \sum_{n=1}^N n a_n (-i { \partial \over
{\partial k}})^{n-1}c(k,t) - {k^2 \over 2} c(k,t) \;\;.
\end{equation}
Note that if assumption 3 does not hold, equation \ref{8} cannot even
be defined in the sense of distributions. This is because we are not
simply dealing with Fourier transforms but with their analytic
continuations, which may not exist.
Let us define now $\hat h(x)$ as
\begin{equation}\label{10}
\hat h(x) = {1 \over {2 \pi}} \int_{{\mbox{\mathi R}}} c_\infty (k) e^{-ikx}
dk \;\;.
\end{equation}
{}From assumption 2 it follows that $\hat h(x) \in {\cal S}( {\mbox{\math R}})$.
Using equation \ref{10} and assumption 3,
\begin{equation} \label{11}
\lim_{t \rightarrow \infty} E(Z^n(t) e^{ikZ(t)}) =
\int_{{\mbox{\mathi R}}}x^n e^{ikx}\hat h(x) dx
\end{equation}
for $0 \leq n \leq N-1$ and $k \in {\mbox{\math R}}$.
Applying the above result to \ref{8}
one obtains in the limit $t \rightarrow \infty$
\begin{equation}\label{12}
0 = ik \int_{{\mbox{\mathi R}}} e^{ikx} F(x) \hat h(x) dx -
{k^2 \over 2} \int_{{\mbox{\mathi R}}} e^{ikx} \hat h(x) dx
\end{equation}
Integrating the right hand side of equation \ref{12} by parts shows
that $\hat h(x)$ is an $L^1({\mbox{\math R}},dx)$ zero-eigenvalue solution of a
F-P type differential operator with a complex drift term (the pseudo
F-P operator):
\begin{equation}\label{14}
{1 \over 2} {\partial \over {\partial x}} \left[
{{\partial S(x)} \over {\partial x}} + {\partial \over {\partial x}}
\right] \hat h(x) \equiv {\cal T} \hat h(x) = 0 .
\end{equation}
${\cal T}$ has two zero eigenvalue solutions. One is
\begin{equation} \label{15}
\hat h_1(x) \sim e^{-S(x)} \in {\cal S}( {\mbox{\math R}})
\end{equation}
which fits to assumption 2, since as the Fourier transform of
$c_\infty(k)$ it must be a Schwartz function. For the second solution
\begin{equation}
\hat h_2(x) \sim e^{-S(x)} \int^{x}_{x_0} e^{S(y)} dy
\end{equation}
one can show that
\begin{equation}
\hat h_2(x) = {\cal{O}} ( { 1 \over {x^{N-1} }}) \mbox{ for }
|x| \rightarrow \infty \;\;,
\end{equation}
which contradicts assumption 2. So the only possible solution is the
one proportional to $e^{-S}$, and
\begin{equation}
\lim_{t \rightarrow \infty} E(Z^n(t) e^{ikZ(t)}) =
{1 \over \cal{N} }\int_{\mbox{\mathi R}} x^n e^{ikx} e^{-S(x)} dx
\end{equation}
for $0 \leq n \leq N-1$ and $k \in {\mbox{\math R}}$. If, further, the moments
$E(Z^n(t)), \; n \geq N$, exist for $t \rightarrow \infty$, then
\begin{equation}
\lim_{t \rightarrow \infty} E(Z^n(t)) =
\left.{{d^n c_\infty(k)} \over {dk^n}} \right |_{k=0} =
{1 \over \cal{N} }\int_{\mbox{\mathi R}} x^n e^{-S(x)} dx.
\end{equation}
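The statement that $\hat h_1 \propto e^{-S}$ is annihilated by ${\cal T}$ of equation (\ref{14}) can be checked symbolically. The sketch below does this for a hypothetical quartic action, chosen for illustration only and not taken from the text.

```python
import sympy as sp

x = sp.symbols('x', real=True)
# illustrative complex polynomial action of degree N = 4 (not from the text)
S = x**4 + x**2 + sp.I * x**3
h1 = sp.exp(-S)

# pseudo F-P operator: T h = (1/2) d/dx [ S'(x) h + h' ]
Th1 = sp.Rational(1, 2) * sp.diff(sp.diff(S, x) * h1 + sp.diff(h1, x), x)

# S'(x) e^{-S} + (e^{-S})' cancels identically, so T h1 = 0
zero = sp.simplify(sp.expand(Th1))
print(zero)   # -> 0
```

The second solution $\hat h_2$ satisfies $S'\hat h_2 + \hat h_2' = \mathrm{const}$ instead, which is why it decays only algebraically and is excluded by assumption 2.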
Let us now briefly discuss the assumptions. Polynomial actions are very
natural since most physical systems defined on ${\mbox{\math R}}^n$ have polynomial
actions. Since these actions must be bounded from below it follows that
$e^{-S} \in {\cal S}$. Condition 2 must hold, since otherwise the
solution $\hat h_2(x)$ cannot be excluded. With the correctness
requirement on CL that
\begin{equation}
\lim_{t \rightarrow \infty} E(e^{ikZ(t)}) =
{1 \over \cal{N} }\int_{\mbox{\mathi R}} e^{ikx} e^{-S(x)} dx
\end{equation}
this condition is also a necessary condition. Assumption 3 looks
technical, but is so far required to relate $\hat h(x)$, the Fourier
transform of $c_\infty(k)$, to the Fokker-Planck type operator $\cal
T$. This condition finally allows one to show the correctness of CL. It
would be nice to eliminate assumption 3 by showing that it follows from
assumption 2. Unfortunately the integral transform defined by equation
\ref{20} is not an injective mapping. To the author's knowledge the
nature of this integral transform has not been analyzed in the
literature. At present, without more detailed information on the
probability density (in general not available), it is perhaps
impossible to draw conclusions on the properties of the function from
the properties of its image. In a numerical simulation such
mathematical criteria are certainly hard to verify exactly.
Nevertheless, experience tells us that when plotting such expectation
values ($c_\infty(k)$, $E(Z^ne^{ikZ})$) one gets a very clear sign of
the quality of the result \cite{Lee2}.
\section{Conclusions}
The criteria under which a convergent CL simulation leads to correct
results have been significantly simplified. The assumptions used in the
present proof are much closer to a numerical verification than those
used in reference \cite{GauLee}. Unfortunately a complete theory of CL
is still lacking. However, whereas it was previously in general
possible neither a priori nor a posteriori to prove convergence to the
desired result, a simple a posteriori proof is now available.
\newpage
\section{Introduction}
High T$_c$ cuprate superconductors differ from their low T$_c$ counterparts in their
extreme type II character with a Ginzburg-Landau parameter $\kappa
\thicksim200$ corresponding to extremely short coherence length $\xi\thicksim1$ nm
and high upper critical fields $B_{c2}\thicksim100$ T. The small Abrikosov vortex core size of order $\xi$ results in a series of
bound states with energies situated in the gap whose mean energy spacing is of
order $<\delta\epsilon>\thicksim\Delta^{2}/E_{F}\thicksim\hbar^{2}/m\xi^{2}$
where $\Delta$ is the superconductor gap energy (modulus of the order
parameter), $E_{F}$ the Fermi energy and $m$ the effective mass. The core
structure can affect the mobility for translational motion of the vortices by
modifying the spectral properties of the electron momentum transfer.
Dissipation in type II superconductors in magnetic field $H>H_{c1}$ in slowly
varying electromagnetic fields is dominated by the dynamics of Abrikosov
vortices \cite{kopnin}. Material details of the superconductor enter mainly
via the \textquotedblleft friction coefficient\textquotedblright\ that gives
the vortex velocity in terms of the driving Magnus-Lorentz force. The friction
is controlled by the spectral density and relaxation properties of the
low-lying \textquotedblleft core states\textquotedblright, i.e., quasiparticle
states localized to the vortex core. In dirty superconductors, where the mean
free path $\ell\ll\xi$, the friction coefficient is related to the normal state
resistivity by the Bardeen-Stephen (BS) rule \cite{bardeen1965}:
\begin{equation}
\rho_{f}=\alpha\rho_{n}B/B_{c2}\ ,
\label{eq:bs}
\end{equation}
where $\rho_{f}$ is the flux flow resistivity arising from vortex motion in
the absence of pinning, $\rho_{n}$ is the normal state resistivity, $B$ is the
magnetic field, $B_{c2}$ the upper critical field and $\alpha=1$. A more
careful examination of the core states allowed the extension of Eq. (\ref{eq:bs}) with
$\alpha\approx1$ to the moderately clean limit $\ell>\xi$ as well
\cite{kopnin}. The BS law has been experimentally confirmed for a broad range
of conventional ($s$-wave) superconductors \cite{parks}.
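Equation (\ref{eq:bs}) is simple enough to encode directly. The sketch below uses illustrative material parameters, not values for any specific superconductor discussed here.

```python
def flux_flow_resistivity(B, rho_n, B_c2, alpha=1.0):
    """Bardeen-Stephen flux-flow resistivity, Eq. (1): rho_f = alpha*rho_n*B/B_c2."""
    return alpha * rho_n * B / B_c2

# illustrative numbers for a dirty type II superconductor
rho_n = 1.0e-6                                   # normal-state resistivity (ohm m)
rho_f = flux_flow_resistivity(B=2.0, rho_n=rho_n, B_c2=20.0)
print(rho_f)                                     # rho_n/10 at B = B_c2/10
```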
In unconventional superconductors with gap nodes, BCS theory suggests a high
density of core states at the Fermi energy. Nevertheless calculation
\cite{kopnin1997} of the flux-flow resistivity revealed that as long as one
stays in the moderately clean limit, Eq. (\ref{eq:bs}) remains valid with a possible
enhancement of $\alpha$ but still of order $1$. Recent measurements
\cite{alpha} on several anisotropic non-high-$T_{c}$ superconductors that most
likely exhibit gap nodes confirm Eq. (\ref{eq:bs}) with a moderately enhanced
$1.6<\alpha<4.7.$
The problem of low-temperature vortex dynamics in BSCCO (Bi$_{2}$Sr$_{2}%
$CaCu$_{2}$O$_{8+\delta})$ is particularly interesting in this respect in the
light of the recent results on the structure of the vortices in this material.
Scanning tunneling microscope (STM) spectroscopy \cite{maggio-aprile} revealed
that the zero-energy peak in the density of core states is missing. This,
together with the results of inelastic neutron scattering \cite{ins} and NMR
\cite{nmr} experiments, led to the suggestion that some concurrent,
non-superconducting order exists at the vortex cores when the
superconducting order parameter is suppressed \cite{kv}. The idea
received direct support from
STM spectroscopy \cite{hoffman}, where a periodic modulation of the local
density of electronic states around the vortex cores was observed. Since the
decay length of this modulation is much longer than the superconducting
coherence length, not only the structure of the core, but also the structure
of the flow field for vortex transport should be different from conventional
Abrikosov vortices with probable consequences for the velocity-force relation.
The experimental situation in the high-$T_{c}$ superconductors, and in
particular in BSCCO, is not clear. Low-frequency transport measurements
\cite{bs-bscco} in the vortex liquid phase close to the critical temperature
$T_{c}$ are in reasonable agreement with the BS law. Microwave and millimeter
wave impedance measurements \cite{microwave} at low temperatures, on the other
hand, indicate a large and nearly $B$-independent dissipation in a broad field range.
To clarify the situation and to see if these very special aspects of the
vortex structure influence the dynamics, we have performed a systematic
investigation of the flux-flow resistance in BSCCO single crystals by
measuring the $a$-$b$ plane voltage-current ($V$-$I$) characteristics up to
currents well above the threshold current for dissipation in $c$-directed
magnetic field. The free flux-flow resistance as measured on an $ab$ face can
be approximately described as $\propto B^{1/2}$ for low fields followed by
saturation (becomes field independent) above about 1 T in the low-temperature
vortex solid phase. As the saturated value corresponds quite well with the
extrapolation of the normal resistance to low temperatures, it might be
naively interpreted as reflecting $\alpha\sim100$ in Eq. (\ref{eq:bs}), at least at 1 T,
indicating that vortices move up to two orders of magnitude faster than
predicted by the BS law in low fields. Although it is known that vortex
velocities much larger than the BS value may result from a nonlinear
instability at high vortex velocities \cite{larkin}, not only does its
onset in BSCCO films \cite{bs-bscco} occur at current densities about
two orders of magnitude higher than those investigated in this study,
but the characteristic signature \cite{bs-bscco,kunchur} of a nonlinear
runaway in the $V$-$I$ curve is also absent in our case.
\section{Experiment}
Measurements were made on nine single crystals from three different batches of
Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ fabricated by a melt cooling
technique \cite{sample}. The typical dimensions of the crystals were
$1\times0.5\times0.003$ mm$^{3}$ with the shortest dimension corresponding to
the poorly conducting $c$ axis. Most of the crystals have been close to
optimal doping with a resistance-determined critical temperature $T_{c}%
\approx89$~K and transition width about 2~K in zero field. The
diamagnetism in a $10~$Oe field set in uniformly and progressively
below $T_{c}$, reaching nearly $100\%$ at low temperature. Two
underdoped crystals exhibited a resistive $T_{c}\approx51$~K and
$T_{c}\approx53$~K with width 6-8 K, a slow initial onset of
diamagnetism down to about $30~$K, and about $40\%$ flux exclusion at
low temperature.
The resistance measurements were made in the usual four-point
configuration on an $ab$ face of the crystals. The contacts to inject
and withdraw the
current are either on the top $a$-$b$ face or encompass the ends of the
crystal extending to both opposing $a$-$b$ faces. The features reported here
are common to both geometries. Four more contacts on the same $a$-$b$ face as
the current injection serve to measure the voltage parallel and transverse to
the current flow. The contacts are made by bonding 25~$\mathrm{\mu m}$ gold
wires with silver epoxy fired at $900$~K in an oxygen atmosphere resulting in
contact resistances of less than 3~$\mathrm{\Omega}$ for the current contacts.
The sample is inserted in the bore of a superconducting magnet with the field
along the $c$ direction.
To measure at the high currents required for the free flux flow regime without
significant Joule heating, we apply short (typically 50 $\mu$s or less)
current pulses of isosceles triangular shape with a repetition period of 0.2
to 1 s. Further technical details, and an analysis of Joule heating
for such experimental conditions, are given in Ref.\ \cite{sas2000},
with the conclusion that the temperature change in the area between the
voltage
contacts is negligible for the duration of the pulse.
Typical $V$-$I$ characteristics measured at different temperatures and in a
field of 3 T are shown in Fig.\ \ref{fig:iv}. At temperatures below the
vanishing of zero current resistance, which we interpret as the freezing of
the vortex system (in $B=3$~T this happens at $T_{m}=33$~K), dissipation sets
in abruptly beyond a sharply defined threshold current $I_{th}$. Usually, but
not always \cite{sas2000}, a linear segment appears above the threshold
current (see inset), the slope of which we denote by $R_{th}$. At higher
currents, the differential resistance increases further and saturates at a
value $R_{f}\gg R_{th}$ in the high-current limit. The high-current linear
segment of the $V$-$I$ characteristic extrapolates to finite current $I_{f}\gg
I_{th}$ at zero voltage. Above the melting temperature, a finite differential
resistance $R_{I\rightarrow0}$ is observed in the $I\rightarrow0$ limit. The
$V$-$I$ curve, however, is still nonlinear, and $I_{f}$ remains finite
up to a temperature $T^{\ast}$ situated between the melting temperature
and the critical temperature $T_{c}$.
\begin{figure}[ptb]
\includegraphics[width=8.5cm]{figure1.eps} \caption{Typical
voltage-current characteristics in $B = 3$ T at temperatures (from left to
right) 60, 45, 38, 19, 11, 7, and 5 K. Inset: Magnified view of the
low-current region. The dashed lines are linear fits. (See text for the
definition of $I_{f}$, $I_{th}$, $R_{f}$, and $R_{th}$.)}%
\label{fig:iv}%
\end{figure}
\begin{figure}[ptb]
\includegraphics[height=8.5cm, angle=-90]{figure2.ps} \caption{Temperature
dependence of the low-current ($R_{I \to0}$) and high-current ($R_{f}$)
differential resistances in 1 and 3 T magnetic fields. $R_{I \to0}$ is also
shown for zero field for reference. (See text for the definition of $T_{m}$,
$T_{c}$, and $T^{\ast}$.)}
\label{fig:RvsT}
\end{figure}
\begin{figure}[ptb]
\includegraphics[height=8.5cm, angle=-90]{figure3.ps} \caption{Magnetic field
dependence of the high-current differential resistance $R_{f}$ at different
temperatures. Inset: the low-current ($R_{th}$) differential
resistance.}
\label{fig:RvsB}
\end{figure}
The temperature dependence of the low-current and high-current differential
resistances, $R_{I\rightarrow0}$ and $R_{f}$, are shown in
Fig. \ref{fig:RvsT} for two magnetic fields. Between $T_{c}$ and a field
dependent characteristic temperature $T^{\ast}$, these two differential
resistances are identical (the $V$-$I$ curve is linear). At $T^{\ast}$ the two
resistance curves bifurcate; $R_{I\rightarrow0}$ decreases rapidly with
decreasing temperature and reaches zero ($<100$ $\mu \Omega$) at what is taken to be the melting temperature, $T_{m}$, while the
high-current resistance $R_{f}$ remains finite and varies smoothly across the
melting line. The \textquotedblleft pinned liquid\textquotedblright\ domain
extends over $T_{m}<T<T^{\ast}$ where signs of pinning are still present.
Below $T_{m}$, in the vortex solid phase, $R_{I\rightarrow0}$ is zero and
$R_{f}$ attains a finite, temperature-independent value visible in Fig. \ref{fig:RvsT}.
Moreover, this value is the same for 1 T and 3 T and agrees reasonably with a
linear extrapolation of the normal phase resistance to low temperature.
More insight into the low-temperature saturation of the high-current
resistance $R_{f}$ is gained from its field dependence, shown for
several temperatures in Fig. \ref{fig:RvsB}. In the liquid phase, we
find an approximately $B^{1/2}$ field
dependence over the whole field range. In the solid phase, however, $R_{f}$
becomes independent of magnetic field above about 1 T. On one sample the
measurements were extended to 17 T at 5 K, and no variation of $R_{f}$
in excess of the experimental uncertainty of about 10~\% was found in
the field range 1 to 17 T, in contrast to the 17-fold increase
predicted for the
$ab$ resistivity by Eq. (\ref{eq:bs}). The differential resistance close to the
threshold current, $R_{\mathrm{th}}$, behaves similarly (see inset). In fact,
in the parameter range where both quantities can be measured, we find a field
and temperature independent proportionality between $R_{\mathrm{th}}$ and
$R_{f}$.
The behavior of the characteristic currents $I_{\mathrm{th}}$ and
$I_{\mathrm{f}}$ further corroborates the similarity of the dissipation
mechanisms close to threshold and in the high-current limit, underlined
by the same $B^{-1/2}$ field dependence of both quantities in the
low-temperature region. All these observations suggest that
the same mechanism is responsible for the dissipation in the vicinity of the
threshold current as in the high-current limit. The similarity of behavior is
attributed to the way in which the resistive front propagates from the current
contacts towards the middle of the sample as the current is
increased\cite{currdist}. The current distribution reaches the resistive limit
characterised by $R_{f}$ only after the two resistive fronts meet. As
described earlier\cite{sas2000} at low temperature there is a difference
between field cooled (FC) and zero field cooled (ZFC) preparation. The ZFC
data for both $I_{\mathrm{th}}$ and $I_{\mathrm{f}}$ exhibit a characteristic
peak at the same line in the $(B,T)$ plane which defines a low temperature
region where the FC prepared state is only metastable. Both FC and ZFC
preparations give rise to the same $R_{f}$.
A first analysis, to get our bearings, consists of normalizing the
resistance to the resistance measured in the normal phase and the field
to $B_{c2}$. A refinement on this is to estimate what the normal
resistance would have been at the temperature of the measurement were
the sample not superconducting. Following this procedure, and noting
that the field-saturated value of the magnetoresistance approximates
the expected normal resistance, we hypothesise that it is just that and
use this value to better interpolate the normal resistance. For the
upper critical field, we use the form
$B_{c2}(T)=(120\ \mathrm{T})[1-(T/T_{c})^{2}]$ yielding $\left.
dB_{c2}/dT\right\vert _{T_{c}}=-2.7$ T/K \cite{li1993}. To estimate $R_{n}$ in
the $T<T_{c}$ range, we use a quadratic interpolating function to connect with
the $T>T_{c}$ range on the hypothesis that the field saturation value for
$R_{f}$ corresponds to $R_{n}(T)$. For the very low field data below about
$0.5$ T where we could not apply sufficient current to reach the $R_{f}$
asymptote, we used the temperature and field independent scaling factor
between $R_{f}$ and $R_{th}$ found at higher fields to estimate $R_{f}$ from
the measurement of $R_{th}.$ The flux flow resistance data treated in this way
are displayed in Fig. \ref{fig:Rperrn}.
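The reduction of the data uses the quoted $B_{c2}(T)$ form; as a numerical check (parameter values from the text, implementation ours):

```python
def B_c2(T, T_c=89.0, B0=120.0):
    """Upper critical field B_c2(T) = (120 T)[1 - (T/T_c)^2] used in the text."""
    return B0 * (1.0 - (T / T_c) ** 2)

# slope at T_c: d(B_c2)/dT = -2*B0*T/T_c**2, i.e. -2*B0/T_c at T = T_c
slope_at_Tc = -2.0 * 120.0 / 89.0
print(B_c2(75.0))      # ~ 35 T: the field scale entering the 75-K data
print(slope_at_Tc)     # ~ -2.7 T/K, as quoted
```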
\begin{figure}[ptb]
\includegraphics[width=8.5cm]{figure4.eps} \caption{Reduced
differential resistance ($R_{f}/R_{n}$) as a function of the reduced magnetic
field ($B/B_{c2}$). The BS law for the flux flow resistivity is represented by
a straight line and corresponds to the right hand scale.}
\label{fig:Rperrn}
\end{figure}
It is evident from Fig. \ref{fig:Rperrn} that the ratio of flux flow resistance to normal
resistance is very different from the field proportional BS $\rho_{ab}$
resistivity ratio of Eq. (\ref{eq:bs}), ranging from a $\sqrt{B}$ like variation at low
field to a saturation value for the low temperature vortex solid phase.
Nonetheless there is a trend that with increasing temperature, in the vortex
liquid phase, the saturation feature disappears, at least for the field values
accessible to us. In the solid phase at low temperatures there is
strictly no field dependence of $R_{f}$ whatsoever above about 1 T,
although, as mentioned above, its value is in good agreement with the
linear extrapolation to $T=0$ of the normal resistance measured above
$T_{c}$. Since the field where saturation occurs is of the order of
$10^{-2}B_{c2}$, a simple application of the BS relation might suggest
that for fields below saturation the vortices move $\sim10^{2}$ times
faster than expected, or that the vortex friction coefficient is
$\sim10^{2}$ smaller.
\section{Framework for understanding}
The experiment measures resistance and the ratio of resistances is only
proportional to the ratio of resistivities if the current distributions are
identical. This is not the case in anisotropic materials like BSCCO where the
current penetration depth is determined by the anisotropy of the resistivity.
The latter is not only different in the superconducting state but is also
field dependent. From this point of view we might interpret the comparison
made in Fig. \ref{fig:Rperrn} as an indication that the current penetration is considerably
less in the low temperature vortex solid phase, progressively approaching the
normal phase penetration depth at higher temperatures in the vortex liquid
phase. Indeed in the liquid phase, the data are in rough agreement with BS law
if the effects of inhomogeneous current distribution within the sample are
taken into account.
The voltage measured on the top plane of the sample is affected by the
$c$-axis properties because they influence the distribution of the transport
current within the sample \cite{currdist}. In the simplest approach we could
suppose that well into the ohmic response regime the current is distributed as
for a normal anisotropic ohmic conductor\cite{busch}, a surmise which is borne
out by numerical simulations based on the superconductor model of
Ref.\cite{currdist}. The penetration depth $d$ for a length $\ell$ between
current injection points on the surface is then given by $d^{2}\approx
(\sigma^{c}/\sigma^{ab})\ell^{2}$ (provided $d\ll$ sample thickness) and the
resistance $R$ along the length of the sample will scale like $R\approx
\rho^{ab}\ell/(dw)\approx \sqrt{\rho^{ab}\rho^{c}}/w$, with $w$ the
sample width. If $\rho^{ab}=\rho_{f}^{ab}$ of Eq. (\ref{eq:bs}) and the
resistive response of the Josephson coupled
planes in the superconducting phase is represented by $\rho_{s}^{c}\,$, one
obtains $R_{f}/R_{n}=(\rho_{s}^{c}/\rho_{n}^{c})^{1/2}\sqrt{\alpha B/B_{c2}}$.
Results close to $T_{c}$ are indeed well described by this form. Putting
$\rho_{s}^{c}=\rho_{n}^{c},$ the 75-K data in Fig. \ref{fig:Rperrn}, for instance, are
reasonably well fitted with $\alpha=4$, which, given the uncertainties of the
simple resistive thick-sample estimate, we regard as rough agreement with BS
law. The crux of the matter then, from this viewpoint, is in $\rho_{s}%
^{c}(B,T).$
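The thick-sample estimate above can be summarized in a few lines. The parameter values below are illustrative, chosen to mimic the 75-K fit quoted in the text ($\rho_{s}^{c}=\rho_{n}^{c}$, $\alpha=4$, $B_{c2}(75\,\mathrm{K})\approx35$ T).

```python
import math

def thick_sample_ratio(B, B_c2, rho_ratio_c=1.0, alpha=1.0):
    """R_f/R_n for current fed in on an ab face of a thick anisotropic sample:
    R ~ sqrt(rho_ab*rho_c), so the ratio factorizes into a c-axis factor
    sqrt(rho_s^c/rho_n^c) and the Bardeen-Stephen factor sqrt(alpha*B/B_c2)."""
    return math.sqrt(rho_ratio_c) * math.sqrt(alpha * B / B_c2)

ratio = thick_sample_ratio(B=1.0, B_c2=35.0, rho_ratio_c=1.0, alpha=4.0)
print(ratio)
```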
\begin{figure}
[ptb]
\begin{center}
\includegraphics[width=8.5cm]{figure5.eps}
\caption{Schematic illustration of voltage-current response between Josephson
coupled planes. Part (a) shows a standard SIS junction between s-wave
superconductor planes. (b) shows d-wave response as found in the experiment of
Ref.\ \cite{Latyshev}, where the d-node quasi-particle shunt conductance
$\sigma_{qp}^{c}\approx\sigma_{n}^{c}/40$. (c) shows the effect of spatial
dephasing across the junction.}
\label{Josephson junction}
\end{center}
\end{figure}
The $c$ axis conductivity $\sigma_{s}^{c}$ can be obtained from the $I-V$
characteristics of the interplane Josephson junctions. The usual
representation of a Josephson junction in a classical s-wave superconductor
subject to a spatially uniform phase difference is illustrated on Fig. \ref{Josephson junction}a: the voltage response to current is nil up to the Josephson current $J_{J}^{s}$ at
which point the junction opens to $V=V_{g}=2\Delta/e$ and responds to further
current according to the normal phase tunnel conductance $\sigma_{n}%
=1/\rho_{c}^{n}$ which is related to the critical current by the
Ambegaokar-Baratoff relation $J_{J}^{s}=\pi\sigma_{n}\Delta/2e=(\pi
/4)\sigma_{n}V_{g}$. For currents $J\gg J_{J}^{s}$ the $c$-axis resistivity is
that appropriate to the normal state, $\rho_{s}^{c}=\rho_{n}^{c}$. Assuming
the penetration depth to be limited by the resistive anisotropy gives the
result quoted above, but this regime would only be attained for extreme
current densities $\sim10^{4}$ Acm$^{-2}$, corresponding to $I\geqslant20$ A
for our samples. That is clearly not the regime which concerns us here.
The answer to this problem seems to be that the Josephson critical currents
are much lower than the Ambegaokar-Baratoff prediction. Measurements
\cite{Latyshev} on micrometer size mesa stacks of BSCCO junctions at zero
field show much lower critical currents, $J_{J}^{d}\approx(\pi/2)\sigma
_{qp}V_{g}$ where the quasi-particle conductivity $\sigma_{qp}\approx
\sigma_{n}/40$ as $V\rightarrow0$ at low temperature and $V_{g}\approx50$ mV
yielding $J_{J}^{d}\approx500$ Acm$^{-2}.$ As illustrated schematically on
Fig. \ref{Josephson junction}b the opening of the junction is followed by a voltage response at higher
currents with slope $\rho_{n}$ when $J>J_{J}^{s}\gg J_{J}^{d}.$ But even this
intermediate r\'{e}gime is not observed: a penetration depth of $150$ nm
corresponding to an anisotropy factor $\gamma\sim3000$ would involve $100$
planes and a voltage response of $5$ V, still a factor of $\sim10^{2}$ higher
than observed in our measurements. To understand what we see, we must take
account of the non-uniformity of the phase across the planes due to the random
positioning of vortices from plane to plane. The critical current for the
junction is then reduced to $J_{J}^{\alpha}=\left\langle \cos\alpha
(\mathbf{r})\right\rangle $ $J_{J}^{d}$ where $\alpha(\mathbf{r})$ represents
the phase difference across the junction at position $\mathbf{r}$ in the plane
and $\left\langle \cos\alpha(\mathbf{r})\right\rangle $ the value of
$\cos\alpha$ averaged over the junction area on a Josephson penetration length
scale\cite{shib, gaif}. It is typically of order $10^{-2}.$ As illustrated on
Fig. \ref{Josephson junction}c, the junction will then open to a potential difference of $\left\langle
\cos\alpha(\mathbf{r})\right\rangle V_{g} \ll V_{g}$ due to phase
slippage at $J=J_{J}^{\alpha}$, beyond which it should continue to see a
tunneling conductance $\sigma_{qp}^{c}$ of quasi-particles in the gap until
$V>V_{g},$ when the tunneling of quasi particles at the gap edge will result
in the usual normal phase dynamic conductance. The gap quasi-particle
conductance is thus experienced over a wide current range between
$J_{J}^{\alpha}$ and $J_{J}^{\alpha}/\left\langle \cos\alpha(\mathbf{r}%
)\right\rangle $ over which the resistance ratio is considerably enhanced to
$R_{f}/R_{n}=(\sigma_{n}^{c}/\sigma_{qp}^{c})^{1/2}\sqrt{\alpha B/B_{c2}\text{
}}$ where $(\sigma_{n}^{c}/\sigma_{qp}^{c})\approx30-40$ at low
temperature\cite{Latyshev}, again if we assume Eq. (\ref{eq:bs}) to be valid for the second
square root term. The magnetic field dependence of $\sigma_{qp}\simeq
\sigma_{qp}^{0}(1+\beta(T)B/B_{c2})$ was measured in a subsequent
experiment\cite{Morozov}, also on micrometer size mesa samples. If finally we
include also the effect of quasi-particle conductance $\sigma_{qp}^{ab}(B)$ in
the $ab$ plane, we find
\begin{equation}
\frac{R_{f}}{R_{n}}=\sqrt{\frac{\sigma_{n}^{c}}{\sigma_{qp}^{c}}}\sqrt{\frac{\alpha B/B_{c2}}{(1+\beta B/B_{c2})(1+(\sigma_{qp}^{ab}/\sigma_{n}^{ab})B/B_{c2})}}
\label{eq:RfperRn}
\end{equation}
This is the expression that we compare with the experimental results on Figs. \ref{Comparison}(a)-(d).
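The hierarchy of Josephson scales invoked above can be tabulated. The conductance is left in arbitrary units, and the numbers ($\sigma_{qp}\approx\sigma_{n}/40$, $V_{g}\approx50$ mV, $\left\langle\cos\alpha\right\rangle\sim10^{-2}$) are the order-of-magnitude values quoted in the text.

```python
import math

sigma_n = 1.0                  # tunnel conductance, arbitrary units
V_g = 50e-3                    # gap voltage (V)
cos_avg = 1e-2                 # <cos alpha(r)> from random vortex positions

J_s = (math.pi / 4) * sigma_n * V_g          # Ambegaokar-Baratoff critical current
J_d = (math.pi / 2) * (sigma_n / 40) * V_g   # measured d-wave junction value
J_alpha = cos_avg * J_d                      # dephasing-reduced critical current
V_open = cos_avg * V_g                       # opening voltage << V_g

print(J_d / J_s)        # 1/20: far below the Ambegaokar-Baratoff prediction
print(V_open)           # ~ 0.5 mV instead of 50 mV
```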
\begin{figure*}
[ptb]
\begin{center}
\subfigure[]{
\includegraphics[width=8.5cm]{figure6a.eps}}
\subfigure[]{
\includegraphics[width=8.5cm]{figure6b.eps}}
\subfigure[]{
\includegraphics[width=8.5cm]{figure6c.eps}}
\subfigure[]{
\includegraphics[width=8.5cm]{figure6d.eps}}
\caption{Comparison of experimental measurements with Eq. (\ref{eq:RfperRn}) for the flux flow
resistance as determined by anisotropy using the Bardeen-Stephen flux flow
relation. The raw experimental results are given in a linear representation
(a) and (c) and in a reduced field representation which takes account of the
temperature variation of $B_{c2}=B_{c2}(T)=120(1-(T/T_{c})^{2})$~T
in a log-log representation in (b) and (d). The dashed
lines represent the formula of Eq. (\ref{eq:RfperRn}) using the parameters indicated in the
text. The same parameters are used for both optimal and underdoped samples
for want of data specific to the underdoped case. The solid lines are simply
an indication of a ($B/B_{c2})^{1/2}$ power dependence.}
\label{Comparison}
\end{center}
\end{figure*}
\section{Interpretation of results}
Equation (\ref{eq:RfperRn}) is compared with the experimental findings for both the optimal
doped and the underdoped samples on Figs. \ref{Comparison}. For simplicity a single plot of
Eq. (\ref{eq:RfperRn}) is drawn as a dashed line on the log-log plot against
$B/B_{c2}(T)$ where $B_{c2}(T)$ is the temperature corrected value as
described above. It corresponds to a choice of parameter values appropriate to
the optimally doped situation at low temperature $T\rightarrow0$:
$B_{c2}(T=0)=120$~T, $\sigma_{n}^{c}/\sigma_{qp}^{c}=30$\cite{Latyshev},
$\alpha=1$, $\beta=1$\cite{Morozov}, $\sigma_{qp}^{ab}/\sigma_{n}%
^{ab}\rightarrow0$ and $R_{n}/R_{n}(T_{c}^{+})=1/3$ corresponding to a linear
extrapolation to $T=0$. A parallel solid line is also drawn as the best
$\sqrt{B/B_{c2}}$ fit to the data.
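As a quick numerical cross-check of Eq. (\ref{eq:RfperRn}) with the low-temperature parameter values listed above ($\sigma_{n}^{c}/\sigma_{qp}^{c}=30$, $\alpha=\beta=1$, $\sigma_{qp}^{ab}/\sigma_{n}^{ab}\rightarrow0$), a minimal sketch (Python; the function name and sampling points are ours):

```python
import math

def rf_over_rn(b, sig_ratio=30.0, alpha=1.0, beta=1.0, ab_ratio=0.0):
    """Eq. (RfperRn): R_f/R_n as a function of the reduced field b = B/Bc2(T)."""
    return math.sqrt(sig_ratio) * math.sqrt(
        alpha * b / ((1.0 + beta * b) * (1.0 + ab_ratio * b)))

# At small reduced fields the ratio follows the bare (B/Bc2)^(1/2) power law;
# the shunt corrections in the denominator only matter as b -> 1.
for b in (0.01, 0.1, 1.0):
    print(b, rf_over_rn(b))
```

For $b\ll1$ this reduces to the $\sqrt{\alpha B/B_{c2}}$ behavior indicated by the solid lines in Figs. \ref{Comparison}.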
We note that:
1. The order of magnitude is correct to within a factor of 2 for both optimal
and underdoped samples, with the notable exception of the saturation plateau
values in the vortex solid phase, especially in the underdoped sample.
2. The square root behavior predicted by the BS relation is approximately
correct, again with the obvious exception of the plateaus.
3. The magneto-resistance plateaus are all situated in the vortex solid phase.
It is to be remarked that the $20$~K magneto-resistance in the underdoped
sample begins to saturate, but on reaching about $0.5$ T it reverts to an
increasing power law. The melting field for $20$ K is estimated to be about
$0.3$ T from the appearance of finite $R_{I\rightarrow0}$.
Clearly Equation (\ref{eq:RfperRn}) does not describe all the features of the
magneto-resistance, even if it does reasonably well for temperatures above the
vortex solid melting and in the solid region for fields below the saturation.
It has a structure which could in principle describe a plateau if the $\beta$
coefficient in the field dependence of the shunt conductance of the Josephson
coupled layers were to be much larger $(\sim10^{2})$ such as might be
introduced by aligned vortex core NIN tunneling junctions or possibly $d$-wave
node quasi-particle to vortex core tunneling. Such effects have not however
been seen in the mesa magneto-conductivity measurements\cite{Morozov},
although these were performed at relatively high temperature ($>20$ K) and
high field, so in the vortex liquid phase.
\section{Conclusion}
It is difficult to conclude that the BS relation fails, either by a
multiplicative factor or in its functional form. To determine the multiplicative factor
requires a good way of estimating the normal resistivities in the
superconducting phase. Also, because the agreement is reasonably good at high
temperature and the vortex entity is not expected to change nature from high
to low temperature, the serious disagreement at low temperature is much more
likely to be related to the nature of the solid phase as the BS relation is a
single vortex property.
It can however be concluded from the sublinear, approximately $B^{1/2}$, field
dependence that if the linear field dependence of the BS relation is correct,
the current distribution is not constant and its penetration depth is
controlled by the resistive anisotropy and thus by $\rho_{f}$.
\section{Acknowledgements}
We acknowledge with pleasure fruitful discussions with F. Portier, I. Tutto,
L. Forro and T. Feher, and the help and technical expertise of F. Toth. L. Forro
and the EPFL laboratory in Lausanne have contributed in a very essential way
to sample preparation and characterisation. Finally we acknowledge with
gratitude the OTKA funding agency for OTKA grant no. K 62866.
\section{1. Introduction}
In recent years there has been growing interest both in brane universes \cite{RS,brane1}
and in the higher-order gravity theories, of which the simplest is $f(R)$ gravity \cite{f(R)}. In this talk we combine both ideas and formulate the higher-order gravities on the brane. The formulation turns out to be non-trivial, since one faces ambiguities arising from quadratic delta function contributions to the field equations. We explain how to avoid these problems and show how the Israel junction conditions for such higher-order brane gravity models can be formulated.
\section{2. Fourth-order gravities.}
When one considers the general gravity theories (e.g. \cite{clifton,braneR2}):
\begin{equation}
\label{XYZ}
S = \chi^{-1} \int d^D x \sqrt{-g} f(X,Y,Z)
\end{equation}
in a D-dimensional spacetime ($\chi =$ const.), where $X,Y,Z$ are curvature invariants
\begin{equation}
X = R,\hspace{0.4cm} Y= R_{ab}R^{ab},\hspace{0.4cm} Z= R_{abcd}R^{abcd},
\end{equation}
then one immediately faces the 4th order field equations, except
when they reduce to the theories with Euler densities of the n-th order $I^{(n)}$
\cite{lovelock}
\begin{eqnarray}
\label{euler}
S = \int_M d^D x \sqrt{-g} \sum_n \kappa_n I^{(n)}~,
\end{eqnarray}
the lowest of them being the cosmological constant $I^{(0)} = 1$ ($\kappa_0 = -2\Lambda(2\kappa^2)^{-1} = -2\Lambda/(16 \pi G)$), the Ricci scalar $I^{(1)} =R$ ($\kappa_1 = (2\kappa^2)^{-1}$), and the Gauss-Bonnet density $I^{(2)} = R_{GB} = R^2 - 4 R_{ab}R^{ab} + R_{abcd}R^{abcd}$ ($\kappa_2=\alpha(2\kappa^2)^{-1}$, $\alpha=$ const.).
However, the theories based on the Lagrangians which are the functions of the Euler densities
such as
\begin{eqnarray}
f(R) = f(X), \hspace{1.5cm} f(R_{GB}) = f(Z-4Y+X^2)~,\hspace{1.5cm} f = f(I^{(n)})
\end{eqnarray}
are again fourth-order.
\section{3. Formulation of the 4th order gravities on the brane - Israel formalism.}
In the context of the recent interest in string/M-theory, it is interesting to formulate the general gravity theories (\ref{XYZ}) within the framework of the brane models \cite{PRD08}. The full brane action for such a theory reads as
\begin{eqnarray}
\label{XYZB}
S &=& \chi^{-1} \int_{M} d^{D}x \sqrt{-g} f(X,Y,Z) + S_{brane} + S_{m}~,
\end{eqnarray}
with the total energy-momentum tensor
\begin{eqnarray}
\label{Tab}
T_a^{~b}=T_{a}^{~b~-}\theta(-w) + T_{a}^{~b~+}\theta(w) +
\delta(w)S_a^{~b},
\end{eqnarray}
where $S_a^{~b}$ is the energy-momentum tensor on the brane, and
$T_{a}^{~b~\pm}$ are the energy-momentum tensors on the both sides of the brane,
$\theta(w)$ is the Heaviside step function, and $\delta(w)$ is the Dirac delta function.
We assume Gaussian normal coordinates, i.e.,
$(\mu,\nu = 0, 1, 2,\ldots,D-2;w=D)$
\begin{eqnarray}
\label{bm}
ds^2=g_{ab} dx^a dx^b = \epsilon dw^2+h_{\mu\nu}dx^{\mu}dx^{\nu}~,
\end{eqnarray}
where $\epsilon = \vec{n} \cdot \vec{n} = +1$ for a spacelike hypersurface,
$\epsilon= -1$ for a timelike hypersurface, and $h_{ab} = g_{ab} - \epsilon n_a n_b$
is a projection tensor onto a $(D-1)$-dimensional hypersurface, $\vec{n}$ is the normal vector to the hypersurface. In these coordinates the extrinsic
curvature is
\begin{eqnarray}
K_{\mu\nu}=-{1\over 2}{\partial h_{\mu\nu}\over\partial w}~,
\end{eqnarray}
and the Gauss-Codazzi equations read \cite{brane2}
\begin{eqnarray}
\label{GC}
R_{w\mu w\nu}&=& {\partial K_{\mu\nu}\over \partial w}+K_{\rho\nu}K^{\rho}_{\,\,\mu}, \\
R_{w\mu\nu\rho}&=&\nabla_{\nu}K_{\mu\rho}-\nabla_{\rho}K_{\mu\nu}, \\
R_{\lambda\mu\nu\rho}&=&~^{(D-1)} R_{\lambda\mu\nu\rho}+
\epsilon\left[K_{\mu\nu}K_{\lambda\rho}
-K_{\mu\rho}K_{\lambda\nu}\right]~.
\end{eqnarray}
In the standard Israel approach \cite{israel66} one assumes that at the brane position $w=0$:
\begin{eqnarray}
\label{cont1}
h^{-}_{\mu\nu} &=& h^{+}_{\mu\nu}~,\\
\label{cont2}
h^{-}_{\mu\nu,w} & \neq & h^{+}_{\mu\nu,w}~, \hspace{0.5cm} K^{-}_{\mu\nu}
\neq K^{+}_{\mu\nu}~,
\end{eqnarray}
i.e., the {\it metric is continuous} but it has a kink, its first derivative has {\it a step function} discontinuity, and its second derivative gives the {\it delta function} contribution.
In terms of $\theta(w)$ and $\delta(w)$ functions this is equivalent to
\begin{eqnarray}
h_{\mu\nu}(w) &=& h^{-}_{\mu\nu}(w) \theta(-w) + h^{+}_{\mu\nu}(w)
\theta(w) ~,\\
{\partial h_{\mu\nu} \over \partial w} &=& {\partial h^{-}_{\mu\nu} \over
\partial w} \theta(-w) + {\partial h^{+}_{\mu\nu} \over \partial w} \theta(w)~, \\
{\partial^{2} h_{\mu\nu} \over \partial w^{2}} &=& {\partial^{2} h^{-}_{\mu\nu}
\over \partial w^{2}} \theta(-w) + {\partial^{2} h^{+}_{\mu\nu} \over \partial w^{2}}
\theta(w) \nonumber \\ &+& \left( {\partial h^{+}_{\mu\nu} \over \partial w} -
{\partial h^{-}_{\mu\nu} \over \partial w} \right)\delta(w)~.
\end{eqnarray}
For the standard brane models with the Einstein-Hilbert action in the bulk
\begin{eqnarray}
S = \frac{1}{2\kappa^2}\int_{M} d^{D}x \sqrt{-g} R + S_{brane} +
S_{m}
\end{eqnarray}
the field equations read as \cite{brane2}
\begin{eqnarray}
\label{Gww}
G^w_{~w}&=&-{1\over 2}~^{(D-1)}R+{1\over 2}\epsilon\left[K^2-Tr(K^2)\right]=\kappa^2 T^w_{~w}, \\
\label{Gwm}
G^w_{~\mu}&=&\epsilon\left[\nabla_{\mu}K-\nabla_{\nu}K^{\nu}_{\,\,\mu}\right]=\kappa^2
T^w_{~\mu}, \\
\label{Gmm}
G^{\mu}_{~\nu}&=&~^{(D-1)}G^{\mu}_{~\nu}
+\epsilon\left[{\partial K^{\mu}_{~\nu}\over\partial w}-\delta^{\mu}_{~\nu}
{\partial K\over\partial w}\right]\\
&+& \epsilon\left[-
K K^{\mu}_{~\nu}+{1\over 2}\delta^{\mu}_{~\nu}Tr(K^2)+{1\over 2}\delta^{\mu}_{~\nu}
K^2\right]=\kappa^2 T^{\mu}_{~\nu}~.\nonumber
\end{eqnarray}
and in the limit $ \lim_{w \to 0} \int_{-w}^{w}$, which ``fishes out'' the delta function contributions, one gets the {\it standard Israel junction conditions} as \cite{brane2}:
\begin{eqnarray}
\label{jcE}
\epsilon \{ [K^{\mu}_{~\nu}]-\delta^{\mu}_{~\nu}[K]\} &=& \kappa^2 {S}^{\mu}_{~\nu},
\hspace{0.5cm} [K^{\mu}_{~\nu}] \equiv K^{\mu~+}_{~\nu}-K^{\mu~-}_{~\nu}.
\end{eqnarray}
By $[X] = X^{+} - X^{-}$ we denote the jump of an appropriate quantity $X$ at the brane.
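The role of the limit $\lim_{w \to 0}\int_{-w}^{w}$ is simply to pick out the coefficient of $\delta(w)$, i.e. the jump of any step-discontinuous quantity across the brane. A minimal numerical illustration (Python; the branch values $W^{\mp}$ are arbitrary):

```python
# A quantity with a step discontinuity at the brane position w = 0:
# W(w) = Wm*theta(-w) + Wp*theta(w), so dW/dw contains (Wp - Wm)*delta(w),
# and integrating the derivative across w = 0 returns exactly the jump [W].
def W(w, Wm=-1.0, Wp=2.0):
    return Wm if w < 0 else Wp

eps = 1e-9
jump = W(eps) - W(-eps)  # [W] = Wp - Wm = 3.0
```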
However, for the general $f(X,Y,Z)$ theory on the brane, the standard continuity relations (\ref{cont1})-(\ref{cont2}) do not work. This can be seen from the field equations
of the action (\ref{XYZ})
\begin{eqnarray}
\label{XYZ1}
P_{a b}&=&\frac{\chi}{2} T_{a b}, \\
\label{XYZ2}
P^{a b} &=& -\frac{1}{2} f g^{a b} + f_X R^{a b}+2 f_Y R^{c (a} {R^{b)}}_{c}+2
f_Z R^{e d c (a} {R^{b)}}_{c d e} \nonumber \\ &+& f_{X; c d}(g^{a
b} g^{c d}-g^{a c} g^{b d}) + \square (f_Y R^{a b}) + g^{a b} (f_Y
R^{c d})_{;c d} \nonumber \\ &-& 2 (f_Y R^{c (a})_{;\; \; c}^{\;
b)}-4 (f_Z R^{d (a b) c})_{;c d},
\end{eqnarray}
where $f_X = {\partial f / \partial X}$ etc.
Take, for example, the Gauss-Codazzi decomposition of the Ricci scalar
\begin{eqnarray}
R&=&~^{(D-1)}R+\epsilon\left[2h^{\mu\nu}{\partial K_{\mu\nu}\over\partial w}
+3Tr(K^2)-K^2\right]~,\nonumber
\end{eqnarray}
where $K\equiv K^{\mu}_{\,\,\mu}$ , $Tr(K^2)\equiv K^{\mu\nu}K_{\mu\nu}$,
and analogously for the Ricci and the Riemann tensors. The squares of these decompositions produce terms of the type
\begin{eqnarray}
\label{terms}
{\partial^2 h^{\mu \nu} \over \partial w^2}{\partial K_{\mu \nu} \over \partial w},
{\partial K_{\mu \nu} \over \partial w}{\partial K^{\mu \nu} \over \partial w},
\left({\partial K \over \partial w}\right)^2~,
\end{eqnarray}
which are proportional to $\delta^2(w)$, and so they are {\it ambiguous}.
Amazingly, all these ambiguous terms cancel each other exactly in the case of the Euler densities \cite{meissner01}. In fact, the junction conditions for one of the Euler densities -- the Gauss-Bonnet density, were already obtained as \cite{deruelle00,davis}
\begin{eqnarray}
2 \alpha \left( 3 [J_{\mu\nu}] - [J] h_{\mu\nu}
- 2 [P]_{\mu\rho\nu\sigma} [K]^{\rho\sigma} \right)
+ [K_{\mu\nu}] - [K] h_{\mu\nu} = - \kappa^2 S_{\mu\nu}~,
\end{eqnarray}
where
\begin{eqnarray}
P_{\mu\rho\nu\sigma} &=& R_{\mu\rho\nu\sigma} + 2 h_{\mu[\sigma}R_{\nu]\rho}
+ 2 h_{\rho[\nu}R_{\sigma]\mu}
+ R h_{\mu[\nu}h_{\sigma]\rho}~, \\
J_{\mu\nu} &=& \frac{1}{3} \left( 2KK_{\mu\sigma}K^{\sigma}_{\nu} +
K_{\sigma\rho}K^{\sigma\rho}K_{\mu\nu} -
2K_{\mu\rho}K^{\rho\sigma}K_{\sigma\nu}
- K^2 K_{\mu\nu} \right)~.
\end{eqnarray}
In the limit $\alpha \to 0$, they just give Einstein-Hilbert action junction conditions (\ref{jcE}).
In view of the ambiguities of the terms in (\ref{terms}), we find two ways to formulate the junction conditions for general $f(X,Y,Z)$ theories on the brane.
\subsection{A. Smoothing out the continuity conditions for the metric tensor at the brane}
In order to do that we impose more regularity onto the metric tensor at the brane position, i.e., we consider {\it a singular hypersurface of the order three} \cite{israel66} which fulfills the conditions (compare (\ref{cont1})-(\ref{cont2}))
\begin{eqnarray}
\label{hh1}
h^{-}_{\mu\nu} &=& h^{+}_{\mu\nu}~,\\
\label{hh2}
h^{-}_{\mu\nu,w} & = & h^{+}_{\mu\nu,w}~, \hspace{0.5cm} K^{-}_{\mu\nu}
= K^{+}_{\mu\nu}~,\\
\label{hh3}
h^{-}_{\mu\nu,ww} &=& h^{+}_{\mu\nu,ww}~, \hspace{0.5cm} K^{-}_{\mu\nu,w} =
K^{+}_{\mu\nu,w}~,\\
\label{hh4}
h^{-}_{\mu\nu,www} & \neq & h^{+}_{\mu\nu,www}~, \hspace{0.5cm} K^{-}_{\mu\nu,ww}
\neq K^{+}_{\mu\nu,ww}~,
\end{eqnarray}
i.e., the metric and its first derivative are regular, the {\it second derivative of the
metric is continuous}, but possesses a kink, the third derivative of the metric
has {\it a step function} discontinuity, and no sooner than the fourth derivative of the
metric on the brane produces the {\it delta function} contribution.
In terms of the second-order theory, the physical interpretation is that there is a jump of the first derivative of the energy-momentum tensor (e.g. a jump of a pressure gradient) at the brane.
In his seminal work, Israel \cite{israel66} proposed {\it a singular hypersurface of order two}, which physically corresponded to a boundary surface characterized by a jump of the energy-momentum tensor (e.g. a boundary surface separating a star from the surrounding vacuum) which was characterized by
\begin{eqnarray}
\label{hhb1}
h^{-}_{\mu\nu} &=& h^{+}_{\mu\nu}~,\\
\label{hhb2}
h^{-}_{\mu\nu,w} & = & h^{+}_{\mu\nu,w}~, \hspace{0.5cm} K^{-}_{\mu\nu}
= K^{+}_{\mu\nu}~,\\
\label{hhb3}
h^{-}_{\mu\nu,ww} &\neq& h^{+}_{\mu\nu,ww}~, \hspace{0.5cm} K^{-}_{\mu\nu,w} \neq K^{+}_{\mu\nu,w}~,
\end{eqnarray}
i.e., the metric is regular, the {\it first derivative of the
metric is continuous}, but possesses a kink, the second derivative of the metric has {\it a step function} discontinuity, and the third derivative of the metric on the brane produces the {\it delta function} contribution.
The appropriate junction conditions can be obtained as follows.
We rewrite the field equations (\ref{XYZ1})-(\ref{XYZ2}) as
\begin{eqnarray}
\label{Wabd}
\sqrt{-g}C_{ab}{W^{abd}}_{;d} + \sqrt{-g}C_{ab}V^{ab} =
{\chi \over 2} T^{ab}C_{ab}\sqrt{-g}~,
\end{eqnarray}
where we have introduced an arbitrary tensor field $C_{ab}$, and
\begin{eqnarray}
W^{abd}&=&f_{X; c }(g^{a b} g^{c d}-g^{(a c} g^{b) d}) + (f_Y
R^{ab})^{;d} \\ \nonumber
&+& g^{ab}(f_Y R^{cd})_{;c} -2 (f_Y R^{d(a})^{;b)}
- 4(f_Z R^{d(ab)c})_{;c}~,\\
V^{ab} &=& -\frac{1}{2} f g^{a b} + f_X R^{a b}+2 f_Y R^{c (a}
{R^{b)}}_{c} \nonumber \\
&+& 2 f_Z R^{e d c (a} {R^{b)}}_{c d e}~,
\end{eqnarray}
contain third derivatives of the metric, which have a step function discontinuity, so that ${W^{abd}}_{;d}$ is proportional to $\delta(w)$. Then, we integrate both sides of the formula (\ref{Wabd}) over the volume $V = G1 + G2$ which contains the brane; the integration domains $G1$, $G2$, $A1$, $A2$, and $A0$ are defined in the caption of Fig. \ref{fig1}.
\begin{figure}[h]
\includegraphics[width=8cm]{brane.png}
\caption{A schematic picture illustrating the domains of integration
used in derivation of the junction conditions. Here $V= G1+G2$ is the
total volume, $G1$, $G2$ - are the
left-hand-side and the right-hand-side bulk volumes which are
separated by the brane, $A1=
\partial G1 + A0$, $A2= \partial G2 - A0$ are the boundaries of
these volumes, and $A0$ is the brane, whose orientation is given by
the direction of the normal vector $\vec{n}$.}
\label{fig1}
\end{figure}
We have
\begin{eqnarray}
\int_{G1+G2}{\sqrt{-g}C_{ab}{W^{abd}}_{;d} d\Omega}
+ \int_{G1+G2} {\sqrt{-g}C_{ab}V^{ab}d\Omega}
=\int_{G1+G2}{{\chi \over 2}T^{ab}C_{ab}\sqrt{-g}d\Omega}~,
\end{eqnarray}
and so
\begin{eqnarray}
&& \int_{G1+G2}\sqrt{-g}(C_{ab}W^{abd})_{;d} d\Omega
- \int_{G1+G2}\sqrt{-g}C_{ab;d}W^{abd}
d\Omega
+ \int_{G1+G2}\sqrt{-g}C_{ab}V^{ab}d\Omega \nonumber \\
&=& \int_{G1}{\chi \over 2} T^{ab}C_{ab}\sqrt{-g}d\Omega + \int_{G2}{\chi \over 2}
T^{ab}C_{ab}\sqrt{-g}d\Omega
+ \int_{A0}{\chi \over 2} S^{ab}C_{ab}\sqrt{-\gamma}d\sigma~,
\end{eqnarray}
of which the first term can be converted into a boundary integral over $A1+A2$, and then the limit $V \to A0$ (or $\lim_{w \to 0} \int_{-w}^{w}$ in Gaussian coordinates) is taken.
The final form of the junction conditions which generalize (\ref{jcE}) onto the fourth-order gravity are
\begin{eqnarray}
\label{ws}
[W]^{abd}n_d - {\chi \over 2} S^{ab} &=& 0~, \hspace{.3cm} [W]^{abd} = W^{abd+} - W^{abd-}.
\end{eqnarray}
It is remarkable that these junction conditions involve the higher derivatives of the scale factor. To see this, take, for example, $f(X,Y,Z)=f(R)$ theory in $D=5$ dimensions with metric
\begin{eqnarray}
\label{bw1}
ds^2=-dt^2
+ a^2(t,w)[dr^2 +r^2(d\Theta^2 +\sin^2\Theta\, d\phi^2)]+dw^2~~.
\nonumber
\end{eqnarray}
The junction conditions (\ref{ws}) give a jump of the third derivative of $a(t,w)$, as expected
\begin{eqnarray}
[a'''] &=& {\chi \over 2}{a_0} {p_0}~, \\
p_0&=& \rho_0~,
\end{eqnarray}
where $(\ldots)' = \partial / \partial w$, $a_0=a(w=0)$, and the
brane energy-momentum tensor is $S_{\mu}^{\nu} =
(-\rho_0,p_0,p_0,p_0)$.
\subsection{B. Reduction to an equivalent 2nd order theory}
Yet another way to obtain the junction conditions is the reduction of the action (\ref{XYZ}) to a second-order action. This gives equivalent junction conditions, though at the expense of introducing a new tensor field $H^{abcd}$ (tensoron). In fact, starting from the action \cite{kijowski}
\begin{eqnarray} \label{r} S_{G} &=& \chi^{-1} \int_{M} d^{D}x \sqrt{-g}
f(g_{ab},R_{abcd}).
\end{eqnarray}
we may transform to an equivalent 2nd order action in the form
\begin{eqnarray} \label{equiv}
S_{I} = \chi^{-1} \int_{M} d^{D}x \sqrt{-g} \{ H^{ghij}(R_{ghij}-
\phi_{ghij}) + f(g_{ab},\phi_{cdef}) \}~,
\end{eqnarray}
where
\begin{eqnarray} \label{H} H^{ghij} \equiv {\partial f(g_{ab},\phi_{abcd}) \over
\partial \phi_{ghij}}~, \hspace{1.cm} det \left[{\partial^2 f(g_{ab},\phi_{abcd}) \over
\partial \phi_{ghij} \partial \phi_{klmn}} \right] \neq 0.
\end{eqnarray}
This transition for $f(R)$ theory requires a new scalar $H=f'(Q)$ (a scalaron) with the condition that $f''(Q) \neq 0$, and the equation of motion $Q=R$. Similarly, for $f(R_{GB})$ theory, one defines a scalar $H=f'(A)$, with the equation of motion $A=R_{GB}$.
In order to get junction conditions, we have to slightly redefine the tensoron
\begin{eqnarray}
A^{abcd}={1 \over 2} \{H^{acdb} &+& H^{abdc}-H^{cbda} - H^{acbd} - H^{abcd}+H^{cbad}\}
\end{eqnarray}
which in a particular case of $f(X,Y,Z)$ theory takes the form
\begin{eqnarray}
A^{abcd}= f_{X}(g^{ad} g^{cb}-g^{cd} g^{ba}) + f_{Y}(2R^{ad}g^{bc} - R^{cd}g^{ba} -R^{ba}g^{cd}) + 4f_{Z}R^{acbd}~.
\end{eqnarray}
The field equations for the equivalent action (\ref{equiv}) read as
\begin{eqnarray} \label{em1}
R_{ghij} &=& - {\partial V(g_{ab},H^{cdef}) \over \partial H^{ghij}}~, \\
\label{em2} {1\over 2} g^{ab}f &+& {\partial f \over \partial
g_{ab}} + H^{becd} \phi ^{a}_{~ecd}(g_{ab},H^{klmn})
+\{A^{(ab)cd}\}_{;dc}
= - {\chi \over 2} T^{ab}~,
\end{eqnarray}
where
\begin{eqnarray}
V(g_{ab},H^{cdef}) = - H^{hgij} \phi_{ghij}(g_{ab},H^{cdef}) +
f(g_{ab},\phi_{klmn}(g_{ab},H^{cdef}))
\end{eqnarray}
In fact, the possibility to express the fields $\phi_{abcd}$ as a function of
$g_{ab}$ and $H^{cdef}$ is guaranteed by the condition
(\ref{H}) (which is an analogue of the condition $f''(Q) \neq 0$).
One can show that junction conditions of the second-order theory are equivalent to junction
conditions of the fourth-order theory \cite{PRD08}.
Applying the same method as in the previous case (i.e., taking the limit $V \to A0$), we notice that the first three terms of (\ref{em2}) do not give any contribution to the junction conditions (since they do not contain delta functions at all), which now have the form:
\begin{eqnarray}
\label{jc1}
[{A^{(ab)cd}}_{;d}]n_{c} = - {\chi \over 2} S^{ab}~.
\end{eqnarray}
Assuming that
\begin{eqnarray} \label{f}
f(g_{ab},\phi_{abcd})=f({\phi_{ab}}^{ab},{\phi_{acb}}^{c}{\phi^{acb}}_{c},\phi_{abcd}
\phi^{abcd}),
\end{eqnarray}
we get the same result as in the fourth-order theory
\begin{eqnarray} \label{equivH}
{[A^{(ab)cd}}_{;d}]n_{c}&=&{[A^{(ab)cd}}_{;c}]n_{d} = [- \{f_{X; c
}(g^{ab} g^{c d}-g^{c(a} g^{b)d}) \nonumber \\ &+& (f_Y R^{ab})^{;d} + g^{ab}(f_Y R^{cd})_{;c} \\
\nonumber -2(f_YR^{d(a})^{;b)} &-& 4(f_Z R^{d(ab)c})_{;c}\}]n_{d}=
-[W^{abd}]n_{d}. \end{eqnarray}
A similar approach was used for less general $f(R)$ theories of gravity on the brane by
Borzeszkowski and Frolov \cite{borzeszkowski}, Parry et al. \cite{branef(R)}, and Deruelle et al. \cite{deruelle07}, and for $f(X,Y,Z) = aX^2 + bY + cZ$ ($a,b,c =$ const.) theories by Nojiri and Odintsov \cite{braneR2}.
\section{4. Formulation of the 4th order gravities on the brane - Gibbons-Hawking Boundary Terms}
In this approach, following the idea of Gibbons and Hawking \cite{GH}, we do not assume any vanishing of the first derivative of the variation of the metric tensor $\delta g_{ab;c}$ on the boundary of the integration volume while using the variational principle. Strictly speaking, only the assumption of the vanishing of the normal derivative of the variation of the metric tensor $\delta g_{ab,w}$ is required. Instead, we postulate that some extra terms are added to the action and that these terms ``kill'' the first derivatives of the metric variation. These terms are now called Gibbons-Hawking boundary terms. In fact, the Gibbons-Hawking boundary term for the Einstein-Hilbert action is composed of the trace of the extrinsic curvature and it was found by Gibbons and Hawking themselves \cite{GH}. Then, for the action being the combination of the square of the Weyl tensor and an arbitrary function of the scalar curvature they were found by Hawking and Luttrell \cite{lutrell} and Barrow and Madsen \cite{madsen}.
For the Gauss-Bonnet and other Lovelock densities they were found by Bunch \cite{bunch81}, Mueller-Hoissen and Myers \cite{surface}, Davis \cite{davis}, and Gravanis and Willison \cite{gravanis}. The boundary terms for the action being an arbitrary function of the curvature invariants were found by Barvinsky and Solodukhin \cite{barvinsky}.
For the theories which are of interest for this talk, the Gibbons-Hawking boundary terms have
the following form \cite{JCAP09}: for the $f(R)$ theory the term reads as
\begin{eqnarray}
\label{gib}
S_{GH,p}= -2(-1)^{p}\epsilon \int_{\partial M_p} \sqrt{-h} H K
d^{D-1}x~,
\end{eqnarray}
where $H=f'(Q)$ is the scalaron,
while for the $f(X,Y,Z)$ theory it reads as
\begin{eqnarray}
\label{gib2}
S_{GH,p} =
- (-1)^{p} \int_{\partial M_p}
d^{D-1}x \sqrt{-h} A^{(ab)cd}n_{c} n_{d}\mathcal{L}_{\vec{n}}g_{ab}~,
\end{eqnarray}
where $A^{(ab)cd}$ is the tensoron.
Using the method of the boundary terms we derived the most general Israel junction conditions for $f(R)$ theory as \cite{JCAP09}:
\begin{eqnarray}
\label{jc2}
[K]&=& 0~, \\
\label{jc21}
S^{ab}n_{a}n_{b}&=& 0~, \\
\label{jc22}
S^{ab}h_{ac}n_{b}&=& 0~, \\
\label{jc23}
-(D-1)[H_{;c}n^{c}]-D[H]K &=& \epsilon {\chi \over 2} S^{ab}h_{ab}~,\\
\label{jc24}
-h_{ab}[H_{;c}n^{c}]-[H]Kh_{ab} &+& [HK_{ab}] \\
&=&
\epsilon {\chi \over 2}S^{cd}h_{ca}h_{db}. \nonumber
\end{eqnarray}
The generality of these conditions refers to the fact that no assumption about the
continuity of the scalaron on the brane has been made. They reduce to the conditions already obtained in the literature, if one assumes $[H] =0$ \cite{deruelle07}.
On the other hand, the most general Israel junction conditions for the $f(X,Y,Z)$ theory,
with no assumption about the continuity of the tensoron on the brane, are \cite{JCAP09}:
\begin{eqnarray}
\label{JCXYZ1}
&&[KA^{(ab)cd}] n_{c} n_{d} + [\mathcal{L}_{\vec{n}}A^{(ab)cd}] n_{c} n_{d}
\\ \nonumber &-& \epsilon[A^{(ab)cd}K_{cd}] - g^{ab}[A^{(ef)cd} K_{ef}]n_{c} n_{d} \\
\nonumber &+& 2 \epsilon [D_{s}A^{(ef)cd}n_{c} n_{d}]h^{s}_{e}h^{(a}_{f}n^{b)}
- 2\epsilon[{A^{(ab)cd}}_{;(c}]n_{d)} = {\chi \over 2} S^{ab}~,
\\
\label{JCXYZ2}
&&
n_{b} n_{c}[\mathcal{L}_{\vec{n}}g_{ad}]-
n_{a} n_{c}[\mathcal{L}_{\vec{n}}g_{db}]-n_{b} n_{d}[\mathcal{L}_{\vec{n}}g_{ac}]
+n_{a} n_{d}[\mathcal{L}_{\vec{n}}g_{cb}]=0~.
\end{eqnarray}
They reduce to the conditions (\ref{jc1}), if one assumes continuity of the tensoron on the brane
\begin{equation}
[A^{(ab)cd}] = 0~.
\end{equation}
\section{5. Fourth-order gravities and statefinders}
We point out that the fact that general $f(R,R_{ab}R^{ab},R_{abcd}R^{abcd})$ theories are of fourth order may have advantageous consequences for their observational verification
by the application of the statefinder diagnosis of the universe.
In fact, statefinders are the higher-order characteristics of the universe expansion which go
beyond the Hubble parameter $H$ and the deceleration parameter $q$:
\begin{eqnarray}
\label{hubb}
H &=& \frac{\dot{a}}{a}~,\hspace{0.5cm} q = - \frac{1}{H^2} \frac{\ddot{a}}{a} = - \frac{\ddot{a}a}{\dot{a}^2}~.
\end{eqnarray}
They can generally be expressed as ($i \geq 2$)
\begin{eqnarray}
\label{dergen}
x^{(i)} &=& (-1)^{i+1}\frac{1}{H^{i}} \frac{a^{(i)}}{a} = (-1)^{i+1}
\frac{a^{(i)} a^{i-1}}{\dot{a}^{i}}~,
\end{eqnarray}
and the lowest order of them are known as:
jerk, snap ("kerk"), crack ("lerk")
\begin{eqnarray}
\label{jerk}
j &=& \frac{1}{H^3} \frac{\dot{\ddot{a}}}{a} =
\frac{\dot{\ddot{a}}a^2}{\dot{a}^3}~, \hspace{0.5cm} k = -\frac{1}{H^4} \frac{\ddot{\ddot{a}}}{a} = -\frac{\ddot{\ddot{a}}a^3}{\dot{a}^4}~, l = \frac{1}{H^5} \frac{a^{(5)}}{a} =
\frac{a^{(5)} a^4}{\dot{a}^5}~,
\end{eqnarray}
and pop ("merk"), "nerk", "oerk", "perk" etc. \cite{statefind}.
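For a power-law scale factor $a(t)=t^{n}$ the whole hierarchy (\ref{dergen}) can be evaluated in closed form: $a^{(i)}/a = n(n-1)\cdots(n-i+1)\,t^{-i}$, so the time dependence cancels and $x^{(i)}$ depends on $n$ only. A minimal sketch (Python; the choice $n=2/3$, i.e. matter-dominated expansion, is illustrative):

```python
from fractions import Fraction
from math import prod

def statefinder(i, n):
    """x^(i) = (-1)^(i+1) a^(i) a^(i-1) / adot^i, evaluated for a(t) = t^n,
    where the time dependence cancels and only n survives."""
    return Fraction((-1) ** (i + 1)) * prod(n - m for m in range(i)) / n ** i

n = Fraction(2, 3)  # matter-dominated (Einstein-de Sitter) expansion
q, j, k, l = (statefinder(i, n) for i in range(2, 6))
print(q, j, k, l)  # -> 1/2 1 7/2 35/2
```

This reproduces the familiar Einstein-de Sitter values $q=1/2$ and $j=1$, and gives $k=7/2$, $l=35/2$ for the next two statefinders, with the alternating signs of (\ref{dergen}) built in.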
In the case of the 4th order gravities, statefinders may become powerful tools to constrain such theories observationally, since they enter observational relations
in the higher orders of redshift $z$ (see \cite{statefR} for non-brane case diagnosis).
Apparently, a blow-up of statefinders may also be linked to an
emergence of exotic singularities in the universe \cite{blowup}.
\section{6. Conclusions}
We conclude the following:
\begin{itemize}
\item The formulation of the fourth-order gravity theories on the brane is non-trivial because of the powers of {\it delta function ambiguities}.
\item Two methods were applied: \\
A. {\it Smoothing out} the continuity conditions for the metric tensor at the brane; \\
B. Reduction to an {\it equivalent} 2nd order theory.
\\In both cases the Israel {\it junction conditions} have been obtained and they {\it are} also mutually {\it equivalent}.
\item The method of the {\it GH boundary terms} was also applied and the most general junction conditions (with no continuity of the scalaron and tensoron on the brane assumed) were obtained that way, too.
\item Higher-order brane gravities contain {\it higher-order derivatives} of the geometric quantities (in a Friedmann model it is just the scale factor) which may manifest themselves in the {\it higher-order characteristics of expansion} such as statefinders (jerk, snap, lerk/crack, merk/pop).
\item A blow-up of statefinders may be linked to an {\it emergence of exotic singularities} in the universe.
\end{itemize}
\begin{theacknowledgments}
We acknowledge the support of the Polish Ministry of Science
and Higher Education grant No N N202 1912 34 (years 2008-10).
\end{theacknowledgments}
\section{Introduction}
A charge density wave transition is a metal-to-insulator transition
originating from the inherent instability of a one dimensional
charge system coupled to a three dimensional
lattice\cite{Pei30,Pei55,Gru88,Gru94}. Due to the electron-phonon
coupling the electron density condenses in a charge modulated state
(modulation wavelength $\lambda=\pi/k_F$, with $k_F$ the Fermi
wavevector), and a charge density wave gap opens in the single
particle excitation spectrum at the Fermi energy. Above a certain
temperature (the Peierls temperature $T_p$) these materials are
quasi one dimensional metals, below $T_p$ they are either insulators
or semi-metals. Charge density wave systems exhibit a number of
intriguing phenomena, ranging from Luttinger liquid like behavior in
the metallic state\cite{Wan06} to highly non-linear conduction and
quasi periodic conductance oscillations in the charge ordered
state\cite{Fle78,Gru94}. The non-linear conduction seems to be a
property which is not unique to charge density wave systems, as it
is observed in other low dimensional charge ordering systems as
well\cite{Sir06}. Among the well-known inorganic systems exhibiting
charge density wave transitions are the blue bronzes\cite{Sch86}.
The term bronze is applied to a variety of crystalline phases of the
transition metal oxides. They have a common formula A$_{0.3}$MoO$_{3}$, where the
alkali metal A can be K, Rb, or Tl, and are often referred to as
{\em blue} bronzes because of their deep blue color. The crystal
structure of blue bronze contains rigid units comprised of clusters
of ten distorted MoO$_{6}$ octahedra, sharing corners along the
monoclinic b-axis \cite{Tra81}. This corner sharing provides an easy
path for the conduction electrons along the [102] directions. The
band filling is 3/4 \cite{Gru88}. The particular material addressed
in this paper is K$_{0.3}$MoO$_{3}$, a quasi-one-dimensional metal which
undergoes the metal-to-insulator transition ($T_\mathrm{CDW}$\ = 183 K) through
the Peierls channel.\cite{Tra81,Pou83}
Apart from the usual single particle excitations (quasi particles),
charge density wave systems possess two other fundamental
excitations, which are of a collective nature. They arise from the
modulation of the charge density
$\rho(r)=\rho_0+\rho_1\cos(2k_Fr+\phi)$, and are called phasons and
amplitudons for collective phase ($\phi$) and amplitude ($\rho_1$)
oscillations, respectively. Ideally, the phason is the gapless
Goldstone mode, leading to the notion of Fr\"ohlich
superconductivity\cite{Fro54}. However, due to the electrostatic
interactions of the charges with the underlying lattice and possibly
with impurities and imperfections, the translational symmetry of the
state is broken leading to a finite gap in the phason dispersion
spectrum. In contrast the amplitudon has an intrinsic gap in its
excitation spectrum \cite{Lee74}. In centrosymmetric media the
phason has a {\em ungerade} symmetry, and is therefore infrared
active, and the amplitudon mode (AM) has a gerade symmetry and hence
Raman active\cite{Gru88,Deg91}. The phason mode is relatively well
studied by for instance neutron scattering\cite{Esc87,Hen92} and far
infrared spectroscopy\cite{Gru88,Deg91}, and plays an important role
in the charge density wave transport. The amplitudon, {\em i.e.} the
transverse oscillation of the coupled charge-lattice system, has
been observed experimentally in, for instance, Raman experiments
\cite{Tra81}. Transient experiments have proven to be versatile
tools in studies on the properties of CDW materials as well.
Optically induced transient oscillatory conductivity experiments
have shown that one can increase the coherence length of the CDW
correlations by exciting quasi particles from the CDW
condensate\cite{Loo02}. An important breakthrough was the
observation that one can coherently excite the amplitudon mode in
pump-probe spectroscopy experiments\cite{Dem99,Dem02,Tsv03}. These
experiments open the possibility to study the temporal dynamics of
the collective and single particle charge density wave excitations,
as well as their interactions with quasi particles and vibrational
excitations. Demsar \textit{et al.}\cite{Dem99} observed the
amplitudon in K$_{0.3}$MoO$_{3}$~ as a real-time coherent modulation of the
transient reflectivity with the frequency of the amplitudon mode.
The frequency and the decay time of the AM oscillation were measured
as 1.67 THz and 10 ps, respectively. It was also found that the
single particle excitations across the CDW gap appear as a rapidly
decaying contribution to the transient reflectivity. The mechanism
of the coherent AM generation was speculated to be Displacive
Excitation of Coherent Phonons (DECP \cite{Zei92}) which is the
mechanism which describes the generation of coherent phonons in
absorbing media \cite{Val91,Val94,Gan97}. The experiment was
performed with the pump and the probe wavelength fixed at 800 nm and
at a low pump fluency of 1 $\mu$J/cm$^{2}$. The aim of the present
study is to obtain a better understanding of the transient response
of the charge density wave material K$_{0.3}$MoO$_{3}$, and more in particular of
the generation and dephasing mechanisms of coherent amplitudon
oscillations. These issues are addressed using variable energy
pump-probe spectroscopy, ellipsometry and Raman scattering
experiments.
\section{Experimental}
A regenerative Ti:Sapphire amplifier seeded by a mode-locked
Ti:Sapphire laser was used to generate laser pulses at 800 nm with a
temporal width of 150 fs, operating at 1 kHz repetition rate. The
relatively low repetition rate minimizes heating effects due to the pile-up of the
pulses. To obtain laser pulses of continuously tunable energies a
Traveling Wave Optical Parametric Amplifier of Superfluorescence
(TOPAS) was used. The TOPAS contains a series of nonlinear crystals
based on ultrashort pulse parametric frequency converters that allows
for a continuous tuning over a wide wavelength range. The output of
TOPAS is used as the pump-pulse and the wavelength of the probe is
kept at 800 nm throughout the experiment. The pump and the probe
were focused to spots of 100 and 50 microns diameter,
respectively. In the wavelength dependent measurements the
pump-fluency was kept constant and in the pump-fluency-dependent
measurement the wavelength of the pump was kept constant. The
polarization of the incident pump pulse was parallel to the b-axis
along which the charge density wave ordering develops. The
experiments are performed in a reflection geometry with the angle of
the pump and probe pulses close to normal incidence with respect to the
sample surface. The sample was placed in a He-flow cryostat, which
allows the temperature to be varied between 4.2 and 300 K (stability $\pm$0.1 K).
Raman experiments have been performed using a standard triple grating
spectrometer, using 532 nm excitation. The ellipsometry experiments have
been performed using a Woollam spectroscopic ellipsometer with the sample
placed in a special home-built UHV optical cryostat.
\section{Transient reflectivity}
\begin{figure}[ht]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig1.eps}}
\caption{\label{tcdrr}
The transient reflectivity response of K$_{0.3}$MoO$_{3}$\ above and below $T_\mathrm{CDW}$.
}
\end{figure}
\begin{figure}[ht]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig2.eps}}
\caption{\label{reflectivity}
The transient reflectivity response of K$_{0.3}$MoO$_{3}$\ at $T=4.2$~K, using a 1150 nm pump pulse (grey line).
The dark line represents a fit of Eq. 1 to the data.
{\it Inset:} Fourier spectrum showing the amplitudon mode at 1.67~THz, and two zone
folded phonons at 2.25~THz and 2.5~THz.}
\end{figure}
\begin{figure}[ht]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig3.eps}}
\caption{\label{raman}
The (bb) polarized Raman spectrum of K$_{0.3}$MoO$_{3}$\ above and below the CDW transition temperature,
showing the appearance of the amplitudon mode and the two fully symmetric modes.
}
\end{figure}
The formation of the charge density wave in blue bronze leads to
drastic changes in the nature of the transient reflectivity. This is
exemplified in Fig.\ref{tcdrr} which shows two representative
transient reflectivity traces recorded above and below
$T_\mathrm{CDW}$~obtained using 800 nm for both the pump and the probe
wavelengths. Above the phase transition (T = 240 K curve) the
material is metallic and gapless, leading to a featureless, very fast
decay (faster than the time resolution of $\sim$150 fs) of the excited
electrons, followed by a slower decay which may be attributed to
electron-phonon coupling induced heating effects. In the charge
density wave state (Fig. \ref{tcdrr}, T = 40 K curve, and Fig.
\ref{reflectivity}) the response is more interesting due to the
presence of various coherent excitations and a slowing down of the
decay of the excited quasi particles resulting from the opening of
the CDW gap in the electronic excitation spectrum. A Fourier
analysis of the response (see inset Fig. \ref{reflectivity}) shows
the presence of three coherent excitations. The strongest component
found at 1.67 THz has been attributed to the coherent excitation of
the collective amplitudon mode \cite{Dem99}. The two additional
modes at 2.25 and 2.5 THz can be attributed to Raman active phonons
which are activated in the CDW state due to folding of the Brillouin
zone \cite{Pou83}. Polarized frequency domain Raman scattering
experiments have indeed confirmed this interpretation (see Fig.
\ref{raman}). The excitation wavelength for this experiment was 532
nm with the polarization of the incoming and scattered light
parallel to the b-axis of the crystal. Above the phase transition
temperature (T = 295 K spectrum) the Raman spectrum is rather featureless
in the region of the amplitudon mode. In contrast, the 2.7 K
spectrum shows the appearance of just the three modes which are also
observed in the time domain traces.
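Identifying these modes in the inset of Fig.~\ref{reflectivity} amounts to locating the dominant peak of a discrete Fourier transform of the time trace. A minimal sketch of that step (the damped cosine below is a synthetic stand-in with the 1.67 THz frequency and 3.5 ps damping discussed later; amplitude and sampling are illustrative):

```python
import math

def dft_peak(signal, dt):
    """Return the frequency (in inverse units of dt) of the largest
    positive-frequency DFT component of a real signal."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k / (n * dt)

# Synthetic damped cosine: 1.67 THz oscillation with a 3.5 ps decay time.
dt = 0.02                                  # ps per sample
trace = [math.exp(-i * dt / 3.5) * math.cos(2 * math.pi * 1.67 * i * dt)
         for i in range(1024)]
print(dft_peak(trace, dt))                 # close to 1.67 (THz)
```

In practice the measured $\Delta R/R$ trace takes the place of the synthetic one, and the three strongest peaks recover the amplitudon and the two zone-folded phonons.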
The electronic contribution to $\Delta R/R$~can be discussed in terms of
their relaxation time scales. Since the energy of the pump pulse is
much larger than the CDW gap ($\Delta_{CDW}=0.12$ eV \cite{Deg91}),
one expects that the optical pumping excites a large number of quasi
particles ($\frac{E_{Pump}}{\Delta}$~$\sim$ 30-50 QP's per photon)
across the CDW-gap. After this photo excitation a very fast internal
thermalization of the highly non-equilibrium electron distribution
occurs on a time scale of a few fs, which is followed by
electron-phonon thermalization that occurs on a time scale less than
100 fs \cite{Dem99}. Although the temporal resolution of the current
experiments is not sufficient to observe these effects, once the quasi
particles have internally thermalized and relaxed to states near the
Fermi level, further energy relaxation is delayed by the presence of
the CDW-gap, which acts as a ``bottleneck'' for the thermalized quasi
particles \cite{bil93,odi01}. The typical time for relaxation over the CDW gap
is found to be 0.6 ps in the present experiments. However, the
observed decay is not a simple exponential but it is rather a
stretched exponential decay which is typical for a system with a
distribution of relaxation times as is for instance found in systems
with an anisotropic gap such as 1{\it T}-TaS$_2$\cite{Dem02}. Even
though blue bronze is only quasi-one dimensional, there is no
evidence for an anisotropic gap in this system. It is more likely
that the stretched decay originates from a distribution of relaxation
times resulting from a possible glassy nature of the CDW ground
state\cite{bil93,odi01}. Finally, like in the high temperature
phase, a long time relaxation of $\Delta R/R$\ is observed, which may be
attributed to heating effects. No qualitative difference with the
high temperature relaxation is observed here, ruling out a possible
contribution of phasons to the response as has been suggested
earlier \cite{Dem99}.
To summarize, the observed transient reflectivity response in the
CDW phase can be described by
\begin{equation}\label{eq:one}
\frac{\Delta R}{R}= A e^{(-t/\tau_{QP})^n} + \sum_j A_j~
e^{-t/\tau_j} \cos(\omega_j t) + B e^{-t/\tau_{L}}\ \ \ ,
\end{equation}
where the first term describes the quasi particle response using a
stretched exponential decay with time constant $\tau_{QP}$ and
stretch index $n$ which takes the relaxation of the quasi particles
across the CDW gap into account. The second term in this expression
accounts for the observed coherent amplitudon and phonon
oscillations with frequencies $\omega_j$ and decay times $\tau_j$,
and the last term represents the observed long time response with a
decay time $\tau_L$ presumably originating from heating effects.
A fit of Eq.~\ref{eq:one} to the data generally leads to excellent
agreement with the data, as is for instance shown in
Fig.~\ref{reflectivity}. For this fit the quasi particle decay time is found
to be $\tau_{QP}=0.65$~ps with $n=1/2$, and the amplitudon lifetime
$\tau_{AM}=3.5$~ps. The decay times of the two coherent phonons are
of the order of 20$\pm$5 ps, whereas the cooling time is too slow to
be determined with any accuracy ($\tau_{L}>60$~ps). It is interesting to
note that the coherent amplitudon is very short lived when compared
to the two coherent phonons. This heavy damping of the coherent
amplitudon presumably results from amplitudon-quasi particle
scattering leading either to a decay or to a dephasing of the
coherent amplitudon excitation. The observation that the decay time
in the present experiment is shorter than in the experiments by
Demsar \textit{et al.}\cite{Dem99}, who reported $\tau_{AM}=10$~ps, is
consistent with this. The density of excited quasi particles in the
current experiments is substantially higher (several orders of
magnitude) than in the previous experiments,
leading to this faster decay of the coherent amplitudon mode,
and also to a larger magnitude of the induced response
(here as large as 0.1, compared to 10$^{-4}$ in
Ref.\cite{Dem99}). The quasi particle lifetime measured for the
lowest pump fluency is about 0.6 ps which is close to the value
measured by Demsar et al. (0.5 ps) \cite{Dem99}. Besides the
quasi particle density, the main difference is that the experimental
temperature in our case is T = 4.2 K whereas in \cite{Dem99} it is T
= 45 K. The stretched exponential behavior observed in the present
study, in contrast to the single exponential in Demsar's study, might
be due to the presence of a glassy state, which has recently been
discussed in the literature\cite{bil93,odi01,Star04}.
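For concreteness, the response model of Eq.~\ref{eq:one} with the fitted time constants and frequencies quoted above can be evaluated directly. In the sketch below only the decay times and mode frequencies are taken from the fit reported in the text; the amplitudes $A$, $A_j$ and $B$ are illustrative placeholders, not the fitted values:

```python
import math

def transient_reflectivity(t, tau_qp=0.65, n=0.5, tau_L=60.0,
                           modes=((0.02, 3.5, 1.67),    # amplitudon
                                  (0.01, 20.0, 2.25),   # zone-folded phonon
                                  (0.01, 20.0, 2.50)),  # zone-folded phonon
                           A=0.1, B=0.02):
    """Eq. (1): stretched-exponential quasi-particle decay, damped coherent
    oscillations, and a slow 'heating' tail. Times in ps, frequencies in
    THz (cos(omega_j t) with omega_j = 2*pi*f_j); amplitudes illustrative."""
    qp = A * math.exp(-((t / tau_qp) ** n))
    osc = sum(a * math.exp(-t / tau_j) * math.cos(2 * math.pi * f * t)
              for a, tau_j, f in modes)
    return qp + osc + B * math.exp(-t / tau_L)

print(transient_reflectivity(0.0))   # at t = 0: A + sum of mode amplitudes + B = 0.16
```

Evaluating this on a fine time grid reproduces the qualitative shape of the trace in Fig.~\ref{reflectivity}: a fast decaying peak modulated by a short-lived 1.67~THz oscillation on top of a slowly decaying background.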
\section{Pump fluency dependence\label{sec:fluency}}
\begin{figure}[hbt]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig4.eps}}
\caption{\label{pumpdep}(Color online)
Normalized time resolved reflectivity traces for 800 nm pump-probe experiments
with a pump fluency varying between 0.1 and 10 mW. The data is normalized to
the zero delay response. The inset shows
the position of the first beating of the coherent mode evidencing a phase shift
upon increasing fluency.}
\end{figure}
In order to address the generation mechanism of the coherent AM,
pump-fluency-dependent (this section) and pump-wavelength-dependent
(see Section \ref{wave}) experiments have been carried out.
The experiments reported in Fig. \ref{pumpdep} are performed with a
relatively high pump fluency of 1-10 mJ/cm$^2$. Note that the data
are scaled to the strength of the initial response, {\em i.e.} to the
strength of the quasi particle peak. The strength of the quasi particle peak itself
is found to be linearly dependent on the pump fluency (see Fig. \ref{parameter} (b)).
This linear dependence is consistent with
the expectation that the density of the excited electrons scales
with the mean number of photons in the light pulse. Hence, if the
generation mechanism of the coherent AM modes can be described, as
suggested in \cite{Dem99}, by the simple theory of displacive
excitation (DECP) \cite{Zei92}, a linear dependence of the coherent
excitation amplitude with the pump fluency is expected. On the
contrary, Fig.\ref{pumpdep} and Fig.\ref{parameter}(a) clearly show that the intensity of
the coherent amplitudon oscillations decreases dramatically with
increasing the pump fluency. This immediately rules out the
possibility of a linear relationship between the number of photons
of the pump pulse and the observed amplitude of the coherent AM mode.
A second striking feature of the pump fluency dependence of the
time resolved reflectivity traces is the substantial increase in the
quasi particle lifetime with the pump power (see Fig.\ref{parameter}(b)).
\begin{figure}[hb!]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig5.eps}}
\caption{\label{parameter}(Color online)
Dependence of the AM amplitude (a), and the quasi particle lifetime and amplitude of the quasi particle
peak (b) on the pump fluency. The QP lifetime and the quasi particle peak show a linear increase with pump fluency, while the AM amplitude shows a dramatic decrease. The solid lines are guides to the eye.}
\end{figure}
The increasing lifetime of the quasi particles, which is probably
due to state filling effects, together with the decreasing
amplitudon amplitude seems to indicate that the coherent amplitudons
are not directly generated by a coupling to the photons, but rather
by a coupling to the decay of the quasi particle excitations; {\em
i.e.}, the coherent amplitudon oscillations are generated by the decay
processes of the excited electron population. The increase of the
quasi particle lifetime leads to a reduction of the coherence of the
amplitudons generated, and thereby to a decrease in the amplitude of
the coherent response. In a very simple approach, neglecting the
temporal width of the pump-pulse and the stretched exponential
behavior of the decay of the quasi particle, one can model the quasi
particle response by a simple exponential decay
($e^{-t/\tau_{QP}}/\tau_{QP}$ for $t>0$), and a linear coupling
between the quasi particle decay and the amplitudon generation. The
resulting coherent amplitudon response is then given by:
\begin{equation}\label{AM-Size}
\int_{0}^{\infty}\frac{A}{\tau_{QP}}\,e^{-t'/\tau_{QP}}\cos(\Omega_{AM}(t+t'))\,dt'=\frac{A}{1+\Omega_{AM}^{2}\tau_{QP}^{2}}\cos(\Omega_{AM}
t+\arctan(\Omega_{AM}\tau_{QP}))
\end{equation}
where $A$ is the integrated area of the quasi particle peak, $\Omega_{AM}$
is the frequency of the AM mode, and $\tau_{QP}$ is the quasi particle
lifetime. The product $\Omega_{AM}\tau_{QP}$ plays the role of a decoherence factor.
For this special case, the dependence of the
size of the coherent amplitudon response on $\Omega_{AM}\tau_{QP}$ is
Lorentzian. This is valid as long as the quasi particle response is slower than the temporal pump pulse width.
For short quasi particle relaxation times, the quasi particle response becomes more symmetric, leading to
a decaying exponential dependence of the coherent signal on $\Omega_{AM}\tau_{QP}$.
\begin{figure}[hb!]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig6.eps}}
\caption{\label{decaytime}(Color online)
Scaled amplitude (see text) of the coherent AM as a function of the decoherence factor $\tau_{QP}\Omega_{AM}$ (symbols).
The decrease with increasing quasi particle lifetime shows the
loss of coherence due to the delayed relaxation of the quasi particles. The solid line
displays the amplitude of Eq. \ref{AM-Size} as a function of the coherence factor.}
\end{figure}
To elucidate the point, Fig.\ref{decaytime} shows the amplitude of the
coherent amplitudon response, normalized to the quasi particle response,
as a function of the decoherence factor (symbols).
The quasi particle response is approximated as the product of
its intensity and lifetime for a given fluency.
Inspection of Fig.\ref{decaytime} shows that
the intensity of the coherent amplitudon decreases rapidly as the
decoherence factor reaches unity, beyond which the coherence is lost
due to the increase of the quasi particle lifetime and hence dephasing
of the generated amplitudons. The same figure also shows the amplitude
obtained from Eq.\ref{AM-Size}.
Given the simplicity of the model and the fact that
there are no adjustable parameters, the agreement
between the experimental data and Eq.\ref{AM-Size} is rather striking.
The small deviation for low decoherence factor is probably due to the assumption
of an exponentially decaying quasi particle population. This can
be improved by using a more realistic quasi particle response.
In addition to the decreasing size of the coherent response,
the model of Eq. \ref{AM-Size} also predicts
a phase shift of the coherent amplitudon response proportional to
$\arctan(\Omega_{AM}\tau_{QP})$.
This is indeed what is experimentally observed.
Comparing just the lowest and the highest fluency (with $\tau_{QP}=$~0.3 and 2.0 ps, respectively),
the expected phase shift is $\simeq2\pi/8$, which is consistent with the experimentally
observed shift (see inset of Fig. \ref{pumpdep}).
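The two numbers quoted here follow from Eq.~\ref{AM-Size} by elementary arithmetic. In the sketch below the amplitudon frequency is taken as $\Omega_{AM}\approx 1.67$ ps$^{-1}$ (the convention that reproduces the quoted $\simeq 2\pi/8$ shift for $\tau_{QP}=0.3$ and $2.0$ ps; this choice, and the unit amplitude, are assumptions for illustration):

```python
import math

OMEGA_AM = 1.67          # ps^-1, amplitudon frequency as it enters Eq. (2) (assumed convention)

def lorentzian_amplitude(tau_qp, A=1.0):
    """Coherent-AM amplitude from Eq. (2) for quasi-particle lifetime tau_qp (ps)."""
    return A / (1.0 + (OMEGA_AM * tau_qp) ** 2)

def phase(tau_qp):
    """Phase offset arctan(Omega_AM * tau_qp) of Eq. (2), in radians."""
    return math.atan(OMEGA_AM * tau_qp)

# Lowest vs highest fluency: tau_QP = 0.3 ps and 2.0 ps.
shift = phase(2.0) - phase(0.3)
print(lorentzian_amplitude(0.3), lorentzian_amplitude(2.0))  # strong suppression at high fluency
print(shift, 2 * math.pi / 8)                                # ~0.82 rad vs ~0.79 rad
```

The same two lifetimes give a coherent amplitude suppressed by roughly an order of magnitude at the highest fluency, consistent with the dramatic decrease seen in Fig.~\ref{parameter}(a).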
Some further evidence for the coupling of the photo excited
quasi particles to the collective amplitudon mode of the charge
density wave state may be found from the pump-wavelength dependent
experiments discussed in the next section.
\section{The Wavelength Dependence\label{wave}}
Before discussing the pump-wavelength dependent transient CDW
dynamics it is instructive to consider the linear optical
response of the system. The linear optical response at various
wavelengths provides information on possible absorption bands of
the material, eventually giving more insight into the coupling of
the electronic transitions with the CDW excitations.
Fig.\ref{dielectric} shows the optical response in the
wavelength region ranging from 270 nm to 1700 nm.
\begin{figure}[ht]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig7.eps}}
\caption{\label{dielectric}
Optical response $\epsilon_{1}$ and $\epsilon_{2}$ of K$_{0.3}$MoO$_{3}$. {\it Inset:}
The energy loss function calculated from the data in the main panel.}
\end{figure}
The sharp band in $\epsilon_{2}$ rising from 500 nm toward the
lower wavelength side is due to the ``p-d'' transitions involving the
electronic excitations from the oxygen ``2p'' to the molybdenum ``4d''
levels. The quotation marks indicate that the levels are
admixtures rather than pure ones. This is consistent with the
photo-emission and electron energy loss experiments done on
blue bronze \cite{Wer85,Sin99}. The broad asymmetric band
around 1000 nm is due to interband "d-d"-transitions.
The actual zero crossing of $\epsilon_{1}$ occurs at 1150 nm (1.08 eV).
This, however, does not correspond to the plasma energy,
which occurs at 1.5 eV\cite{Sin99}. This is also demonstrated in
the inset of Fig.\ref{dielectric}, which shows the energy loss spectrum
calculated from the dielectric function. This spectrum shows two
peaks, one at the actual plasma frequency (where also a distinct minimum
in $\epsilon_1$ is observed), and one at 1.08 eV.
This latter energy corresponds to an interband plasmon involving
the Mo-d derived dispersionless conduction branch predicted by
tight-binding calculations of Travaglini and Wachter\cite{Tra85,Sin99}.
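The loss spectrum in the inset follows from the measured dielectric function through the standard relation $\mathrm{Im}(-1/\epsilon)=\epsilon_2/(\epsilon_1^2+\epsilon_2^2)$. A minimal sketch of this conversion (the $\epsilon$ values below are illustrative, not the measured data):

```python
def loss_function(eps1, eps2):
    """Energy loss function Im(-1/eps) from the complex dielectric function."""
    return eps2 / (eps1 ** 2 + eps2 ** 2)

# Illustrative values: a zero crossing of eps1 with small eps2 produces a loss peak,
# which is why the 1150 nm crossing shows up as a peak in the inset.
for e1, e2 in [(4.0, 2.0), (0.0, 0.5), (-2.0, 2.0)]:
    print(e1, e2, loss_function(e1, e2))
```

Applying this point-by-point to the $\epsilon_1$ and $\epsilon_2$ curves of the main panel yields the two-peak loss spectrum discussed above.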
\begin{figure}[ht]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig8.eps}}
\caption{\label{dRbyw}(Color online)
Variation of the transient response with the pump wavelength.}
\end{figure}
The transient response, and in particular the coherent amplitudon
response, depends strongly on the excitation wavelength, and is
found to be strongest for $\lambda_{Pump}\sim 1150$~nm and at small
wavelengths. This is demonstrated in Fig.\ref{dRbyw}, which shows
the transient response for a few selected pump wavelengths
($\lambda_{Pump}$).
\begin{figure}[ht]
\centerline{\includegraphics[width=7.5cm,clip=true]{fig9.eps}}
\caption{\label{amplitude}
Coherent amplitudon response as a function of the pump wavelength (symbols, the
drawn line is a guide to the eye).
The total absorbed pump energy 1-R, with R the normal incidence reflectivity,
is also plotted for comparison (solid line).
}
\end{figure}
Overall, the amplitude of the coherent amplitudon mode follows the
pump energy absorbed in the material. This can be seen from
Fig.\ref{amplitude}, which shows the amplitude of the coherent
amplitudon mode measured for various pump wavelengths at a constant
pump fluency of 1 mJ/cm$^{2}$. The same graph also shows the amount
of absorbed pump energy calculated from the optical data in
Fig.\ref{dielectric}. It is found that the quasi particle lifetime
does not vary strongly with the wavelength. This, together with the
fact that the efficiency of the coherent amplitudon generation
roughly follows the amount of absorbed energy, and hence the number
of photo-excited quasi particles, suggests once again the quasi
particle induced nature of the coherent excitation.
Although the coherent AM roughly follows the absorbed energy curve,
it does not actually scale exactly with it. In particular, the AM
response (as well as the quasi particle response) is markedly peaked
near 1150 nm, corresponding to the interband plasma wavelength
discussed above. This is surprising since one does not expect the
light to couple directly to the plasmon modes (as can also be seen
from Fig.\ref{dielectric}). Nevertheless, the experiments do
evidence an efficient coupling most likely through highly excited
quasi particles which relax by the emission of plasmon excitations,
or through a coupling to surface plasmons. Once excited, the
interband plasmons can relax either by emission of lower energy
quasi particles which subsequently can excite amplitudons, as the
enhanced quasi particle response seems to suggest, or possibly even
directly via decay into amplitudon modes.
\section{Conclusion}
In summary, we studied K$_{0.3}$MoO$_{3}$\ using time resolved spectroscopy,
ellipsometry, and polarized Raman spectroscopy. The transient
reflectivity experiments show, in addition to the coherent
amplitudon mode observed previously\cite{Dem99}, two coherent phonon
modes which are also observed in the low temperature Raman spectra.
They are assigned to zone-folding modes associated with the charge
density wave transition. The lifetime of the coherent amplitudon
mode is found to be relatively short, which is believed to be due to
the coupling of the AM to the high density of quasi particles.
The generation mechanism of coherent amplitudons in blue bronze does not seem
to be the usual DECP\cite{Zei92} mechanism, nor the transient stimulated
Raman mechanism\cite{Mer96}, since these are not consistent with the
observed nonlinear pump power dependence of the magnitude of the response.
It is found that the magnitude of the coherent amplitudon mode
depends on the dynamics of the quasi particle decay.
It is therefore believed that the coherent amplitudons are generated
through the decay of quasi particles over the Peierls gap. A simple model
based on this notion accounts well for the observed nonlinear
pump power dependence, as well as for the observed phase shift in
the coherent amplitudon response. Pump-wavelength-dependent experiments,
in which the coherent amplitudon response is found to be consistent with
the equilibrium absorption derived from ellipsometry measurements (and hence
with the quasi particle response), further support the proposed interpretation.
The mechanism proposed could be relevant in the generation mechanism
of coherent excitations in other highly absorbing materials as well. Further
investigations of different compounds are necessary to confirm this.
\section{Introduction}
\input{Intro}
\section{Method}
\input{Method}
\section{Results and discussion}
\input{Results}
\section{Conclusions}
\input{Summary}
\bibliographystyle{./prsty.bst}
\section{Acknowledgments}
This work was supported by NSF CHE-0807194 and the Welch Foundation
(C-0036). Calculations were performed in part on the Rice Terascale
Cluster funded by NSF under Grant EIA-0216467, Intel, and HP. O.H.
would like to thank the generous financial support of the Rothschild
and Fulbright foundations.
\section{Introduction}
The theory of branching random walk has been studied by
many authors. It plays
an important role, and is closely related to many problems arising
in a variety of applied probability settings, including
branching processes, multiplicative
cascades, infinite particle systems, Quicksort algorithms
and random fractals (see e.g. \cite{Liu98, Liu00}). For recent developments of the subject, see e.g. Hu and Shi \cite{HS09}, Shi \cite{Shi2012}, Hu \cite{Hu14}, Attia and Barral \cite{Barral14} and the references therein.
In the classical branching random walk, the point processes indexed
by the particles $u$, formulated by the number of their children and
their displacements, have a fixed constant distribution for all
particles $u$. In reality these distributions may vary from
generation to generation according to a random environment, just as
in the case of a branching process in random environment introduced
in \cite{SmithWilkinson69, AthreyaKarlin71BPRE1, AthreyaKarlin71BPRE2}. In other words, the
distributions themselves may be realizations of a stochastic
process, rather than being fixed. This property makes the model closer to reality than the classical branching random walk.
In this paper, we shall consider such a model, called \emph{a branching random walk with a random environment in time}.
Different kinds of branching random walks in random environments have been introduced and studied in the literature.
Baillon, Cl{\'e}ment, Greven and den Hollander \cite{BaillonClementGrevenHollander93,GrevenHollander92PTRF} considered the case where the
offspring distribution of a particle situated at $z\in \mathbb{Z}^d$
depends on a random environment indexed by the location $z$, while the moving
mechanism is controlled by a fixed deterministic law. Comets and Popov \cite{CometsPopov2007AOP, CometsPopov2007ALEA} studied the case where
both the offspring distributions and the moving laws depend on a random environment indexed by the location. In the model studied in \cite{BirknerGK05,HuYoshida09,Nakashima11,Yoshida08,CometsYoshida2011JTP}, the offspring distribution of a particle of generation $n$ situated at $z\in \mathbb{Z}^d (d\geq 1)$
depends on a random space-time environment indexed by $\{(z, n)\}$, while each particle performs a simple symmetric random walk on $d$-dimensional integer lattice $\mathbb{Z}^d (d\geq 1)$. The model that we study in this paper is different from those mentioned above.
It should also be mentioned that recently another different kind of branching random walks in time-inhomogeneous environments has been considered extensively, see e.g. Fang and Zeitouni (2012, \cite{FangZeitouni2012}), Zeitouni (2012, \cite{Zeitouni2012}) and Bovier and Hartung (2014, \cite{Bovier14}).
The readers may refer to these articles and references therein for more information.
Denote by $Z_n(\cdot)$ the counting measure which counts the number of particles of generation $n$ situated in a given set.
For the classical branching random walk, a central limit theorem on $Z_n(\cdot)$, first conjectured by Harris (1963, \cite{Harris63BP}), was shown by Asmussen and Kaplan (1976, \cite{AsmussenKaplan76BRW1,AsmussenKaplan76BRW2}), and then extended to a general case by Klebaner (1982, \cite{Klebaner82AAP}) and
Biggins (1990, \cite{Biggins90SPA}); for a branching Wiener process, R\'ev\'esz (1994,\cite{Revesz94}) studied the convergence rates in the central limit theorems and conjectured the exact convergence rates, which were confirmed by Chen (2001,\cite{Chen2001}). Kabluchko (2012,\cite{Kabluchko12}) gave an alternative proof of Chen's results under slightly stronger hypothesis. R\'ev\'esz, Rosen and Shi (2005,\cite{ReveszRosenShi2005}) obtained a large time asymptotic expansion in the local limit theorem for branching Wiener processes, generalizing Chen's result.
The first objective of our present paper is to extend Chen's results to the branching random walk under weaker moment conditions. In our results on the exact convergence rate in the central limit theorem and the local limit theorem, the rate functions that we find include some new terms which did not appear in Chen's paper \cite{Chen2001}.
In Chen's work, the second moment condition was assumed for the offspring distribution. Although the setting we consider now is much more general, in our results the second moment condition will be relaxed to a moment condition of the form
$\E X (\ln^+ X)^{1+\lambda} < \infty$. It has been well known that in branching random walks, such a relaxation is quite delicate. Another interesting aspect is that we do not assume the existence of exponential moments for the moving law, which holds automatically in the case of the branching Wiener process. The lack of the second moment condition (resp. the exponential moment condition) for the offspring distribution (resp. the moving law) makes the proof delicate. The difficulty will be overcome via a careful analysis of the convergence of some associated martingales using truncating arguments.
The second objective of our paper is to extend the results to the branching random walk with a random environment in time.
This model first appeared in Biggins and Kyprianou (2004, \cite[Section 6]{BigginsKyprianou04AAP}), where a criterion was given for the non-degeneration of the limit of the natural martingale; see also Kuhlbusch (2004, \cite{Ku04}) for the equivalent form of the criterion on weighted branching processes in random environment.
For $Z_n(\cdot)$ and related quantities on this model, Liu (2007,\cite{Liu07ICCM}) surveyed several limit theorems, including large deviations theorems and a law of large numbers on the rightmost particle. In \cite{GLW14}, Gao, Liu and Wang showed a central limit theorem on the counting measure $Z_n(\cdot)$ with appropriate norming. Here we study the convergence rate in the central limit theorem and a local limit theorem for $Z_n(\cdot)$. Compared with the classical branching random walk,
the approach is significantly more difficult due to the appearance of the random environment.
The article is organized as follows. In Section \ref{sec2}, we give a rigorous description of the model and introduce the basic assumptions and notation; we then formulate our main results as Theorems \ref{th1} and \ref{th2}. In Section \ref{sec6}, we introduce some notation and recall a theorem on Edgeworth expansions for sums of independent random variables used in our proofs. We give the proofs of the main theorems in Sections \ref{sec3} and \ref{sec4}, respectively, while Section \ref{sec5} is devoted to the proofs of the remainders.
\section{Description of the model and the main results}\label{sec2}
\subsection{Description of the model}
We describe the model as follows (\cite{Liu07ICCM,GLW14}).
\emph{A random environment in time} $\xi=(\xi_n)$ is formulated as a sequence of random variables independent and identically distributed with values in some measurable space $(\Theta,\mathcal{F})$. Each realization of $ \xi_n $ corresponds to two probability distributions: the offspring distribution $p(\xi_n)
= (p_0(\xi_n), p_1(\xi_n), \cdots ) $ on $\N = \{0,1, \cdots\}$, and the moving distribution $ G (\xi_n) $ on $\R$.
Without loss of generality, we can take $\xi_n $ as coordinate functions defined on the product space $(\Theta^{\mathbb{N}}, \mathcal{F}^{\otimes\mathbb{N}})$
equipped with the product law $\tau$ of some probability law $\tau_0$ on $(\Theta, \cal F)$,
which is invariant and ergodic under the usual shift transformation
$\theta$ on $\Theta^{\mathbb{N}}$: $\theta(\xi_0,\xi_1,\cdots)= (\xi_1,\xi_2,\cdots) $.
When the environment $\xi=(\xi_n)$ is given, the process can be described as follows.
It
begins at time $ 0$ with one initial particle $\varnothing$ of
generation $0 $ located at $S_{\varnothing} = 0 \in \mathbb{R}$; at time
$1$, it is replaced by $N = N_{\varnothing}$ new particles $ \varnothing i = i( 1\leq i\leq N) $ of generation
$1$, located at $S_i = L_{\varnothing i} (1\leq i\leq N),$ where
$N, L_1, L_2, \cdots $ are mutually independent, $N$ has the law $p(\xi_0)$, and each $ L_i$ has the law $G(\xi_0)$.
In general,
each particle $u= u_1\cdots u_n$ of generation $n$ is replaced at time $ n + 1 $ by $N_{u} $ new particles $ u i (1\leq i \leq N_u) $ of generation $n+1$,
with displacements $L_{u i} (1\leq i \leq N_u) $, so that the $i$-th
child $ u i $ is located at $$S_{u i}=S_{u}+ L_{u i},$$ where
$N_u, L_{u1}, L_{u2}, \cdots $ are mutually independent, $N_u$ has the law $ p(\xi_n)$, and each $L_{ui}$ has the same law $G(\xi_n)$. By definition,
given the environment $\xi$, the random variables $N_u$ and $L_u$, indexed by all the finite sequences $u$ of positive integers, are independent of each other.
For each realization $\xi \in \Theta^\N$ of the environment sequence,
let $(\Gamma, {\cal G}, \mathbb{P}_\xi)$ be the probability space under which the
process is defined (when the environment $\xi$ is fixed to the given realization). The probability
$\mathbb{P}_\xi$ is usually called the \emph{quenched law}.
The total probability space can be formulated as the product space
$( \Theta^{\mathbb{N}}\times\Gamma , {\cal F}^{\otimes\N} \otimes \cal G, \mathbb{P})$,
where $ \mathbb{P} = \E (\delta_\xi \otimes \mathbb{P}_{\xi}) $ with $\delta_\xi $ the Dirac measure at $\xi$ and $\E$ the expectation with respect to the random variable $\xi$, so that for all measurable and
positive $g$ defined on $\Theta^{\mathbb{N}}\times\Gamma$, we have
\[\int_{ \Theta^{\mathbb{N}}\times\Gamma } g (x,y) d\mathbb{P}(x,y) = \E \int_\Gamma g(\xi,y) d\mathbb{P}_{\xi}(y).\]
The total
probability $\P$ is usually called the \emph{annealed law}.
The quenched law $\P_\xi$ may be considered to be the conditional
probability of $\P$ given $\xi$. The expectation with respect to $\mathbb{P}$ will still be denoted by $\E$; this causes no confusion, as the two usages are consistent. The expectation with respect to
$\P_\xi$ will be denoted by $\E_\xi$.
Let $\mathbb{T}$ be the genealogical tree with $\{N_u\}$ as defining elements. By definition, we have:
(a) $\varnothing\in \mathbb{T}$; (b) $ui \in \mathbb{T}$ implies $u\in \mathbb{T}$; (c) if $ u\in \mathbb{T} $, then $ui\in \mathbb{T} $
if and only if $1\leq i\leq N_u $.
Let $$ \mathbb{T}_n =\{u\in
\mathbb{T} :|u|=n\} $$
be the set of particles of generation $n$, where $|u|$ denotes the length of the
sequence $u$, i.e.\ the index of the generation to which $u$ belongs.
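To make the quenched construction above concrete, it can be sketched in a few lines of Python. This is only an illustration of the sampling scheme, not part of any proof; the names \texttt{sample\_generation} and \texttt{run\_brw} are ours, and the environment is represented simply as a map from the generation index $n$ to a pair of samplers playing the roles of $p(\xi_n)$ and $G(\xi_n)$.

```python
import random

def sample_generation(positions, offspring_law, move_law, rng):
    """One step of the branching random walk: each particle at a position s
    in `positions` is replaced by N children (N drawn from `offspring_law`),
    the i-th child located at s + L_i, the L_i drawn independently from
    `move_law`."""
    children = []
    for s in positions:
        n = offspring_law(rng)
        children.extend(s + move_law(rng) for _ in range(n))
    return children

def run_brw(env, n_gens, rng):
    """Simulate n_gens generations, starting from one particle at 0.
    `env(k)` returns the pair (offspring_law, move_law) playing the role
    of (p(xi_k), G(xi_k)) for the k-th generation."""
    positions = [0.0]
    history = [positions]
    for k in range(n_gens):
        offspring_law, move_law = env(k)
        positions = sample_generation(positions, offspring_law, move_law, rng)
        history.append(positions)
    return history
```

As an elementary sanity check, with a degenerate environment in which every particle has exactly two children and all displacements vanish, one gets $Z_n(\mathbb{R})=2^n$ with all particles sitting at the origin.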
\subsection{Main results}
Let $Z_n(\cdot)$ be the counting measure of particles of generation $n$:
for $B \subset \mathbb{R}$,
$$Z_n(B)= \sum_{u\in \mathbb{T}_n} \mathbf{1}_{ B}(S_u).$$
Then $\{ Z_n(\mathbb{R})\}$ constitutes a branching process
in a random environment (see e.g. \cite{AthreyaKarlin71BPRE1,AthreyaKarlin71BPRE2,SmithWilkinson69}).
For $n\geq 0$, let $\widehat{N}_n$ (resp. $\widehat{L}_n$) be a random variable with distribution $p(\xi_n)$ (resp. $G(\xi_n)$) under the law $\P_\xi$, and define
\begin{equation*}
m_n= m(\xi_n)= \E_\xi \widehat{N}_n,\quad \Pi_n = m_0\cdots m_{n-1}, \quad \Pi_0=1.
\end{equation*}
It is well known that the normalized sequence $$W_n=\frac{1}{\Pi_n} Z_n(\mathbb{R}), \quad n\geq 1$$
constitutes a martingale with respect to the filtration
$(\mathscr{F}_n)$ defined by
$$ \mathscr{F}_0=\{\emptyset,\Omega \}, \quad \mathscr{F}_n =\sigma ( \xi, N_u:|u| < n), \mbox{ for }n\geq 1. $$
Throughout the paper, we shall always assume the following conditions:
\begin{equation}\label{cbrweq1}
\E \ln m_0>0 \quad { \mathrm{and}}\quad {\E}\left[\frac{1}{m_0}\widehat{N}_0 \Big(\ln ^+{\widehat{N}_0 }\Big)^{1+\lambda} \right]<\infty ,
\end{equation}
where the value of $\lambda >0$ is to be specified in the hypothesis of the theorems. Under these conditions, the underlying branching process $\{ Z_n(\mathbb{R})\}$ is \emph{supercritical}, $ Z_n(\mathbb{R}) \rightarrow \infty$ with positive probability,
and the limit
\begin{equation*}
W=\lim_n W_n
\end{equation*}
verifies $\E W =1$ and $W>0$ almost surely (a.s.) on the explosion event $\{Z_n(\mathbb{R}) \rightarrow \infty \}$ (cf. e.g. \cite{AthreyaKarlin71BPRE2,Tanny1988SPA}).
For $n\geq 0$, define
\begin{eqnarray*}
&& l_n = \E_\xi \widehat{L}_n,~~
\sigma_{n}^{(\nu)} =\E_\xi \big(\widehat{L}_n- l_n\big)^\nu, \mbox{ for } \nu\geq 2;\\
&&
\ell_n= \sum_{k=0}^{n-1} l_{k}, \quad s_n^{(\nu)} = \sum_{k=0}^{n-1} \sigma_{k}^{(\nu)} , \mbox{ for } \nu\geq 2, \quad
s_n = \big(s_n^{(2)}\big)^{\frac{1}{2} }.
\end{eqnarray*}
We will need the following conditions on the motion of particles:
\begin{equation}\label{cbrweq2-3}
\P \Big( \limsup_{|t|\rightarrow \infty }\big|\E_\xi e^{it\widehat{L}_0}\big| <1 \Big) >0 \quad \mbox{ and } \quad \E\big(| \widehat{L}_0 |^{\eta } \big) <\infty , \end{equation}
where the value of $\eta>1 $ is to be specified in the hypothesis of the theorems. The first hypothesis means that Cram\'er's condition about the characteristic function of $\widehat{L}_0$ holds with positive probability.
Let $\{N_{1,n}\}$ and $\{N_{2,n}\}$ be two sequences of random variables, defined respectively by
\begin{equation*}
N_{1,n} = \frac{1}{\Pi_n} \sum_{u\in \T_n} (S_u-\ell_n)\quad \mbox{and} \quad N_{2,n} = s_n^2 W_n- \frac{1}{\Pi_n} \sum_{u\in \T_n} (S_u-\ell_n)^2.
\end{equation*}
We shall prove that they are martingales with respect to
the filtration $ (\mathscr{D}_n ) $ defined by $$ \mathscr{D}_0=\{\emptyset,\Omega \}, \quad \mathscr{D}_n = \sigma ( \xi, N_u, L_{ui}: i\geq 1, |u| <
n), \mbox{ for $n\geq 1$}.$$
More precisely, we have the following propositions.
\begin{prop}\label{th1a} Assume \eqref{cbrweq1} and $\E \big(\ln^- m_0\big)^{1+\lambda}<\infty$ for some $ \lambda>1$, and
$ \E\big(| \widehat{L}_0 |^{\eta } \big) <\infty $ for some $ \eta>2$.
Then the sequence $\{(N_{1,n}, \mathscr{D}_n) \}$ is a martingale and converges a.s.:
$$V_1:= \displaystyle \lim_{n\rightarrow\infty}N_{1,n} \mbox{ exists a.s. in } \R.
$$
\end{prop}
\begin{prop}
\label{th2a}
Assume \eqref{cbrweq1} and $\E \big(\ln^- m_0\big)^{1+\lambda}<\infty$ for some $ \lambda>2$,
and $ \E\big(| \widehat{L}_0 |^{\eta } \big) <\infty $ for some $ \eta>4$.
Then the sequence $\{(N_{2,n}, \mathscr{D}_n) \}$ is a martingale and converges a.s.:
$$V_2 := \displaystyle \lim_{n\rightarrow\infty}N_{2,n} \mbox{ exists a.s. in } \R.
$$
\end{prop}
Our main results are the following two theorems. The first theorem concerns the exact convergence rate in the central limit theorem about the counting measure $Z_n$, while the second one is a local limit theorem. We shall use the notation
$$ Z_n(t)=Z_n((-\infty, t]), \;\; \phi(t)=\frac{1}{\sqrt{2\pi}}e^{-t^2/2}, \;\; \Phi(t) = \int_{-\infty}^{t}\phi(x) \mathrm{d}x, \quad t\in \R. $$
\begin{thm}\label{th1}
Assume \eqref{cbrweq1} for some $\lambda>8$, \eqref{cbrweq2-3} for some $ \eta>12$ and $\E m_0^{-\delta}<\infty$ for some $\delta>0.$ Then for all $ t\in \R$,
\begin{equation} \label{cbrweq4}
\sqrt{n}\Big[\frac{1}{\Pi_n} Z_n(\ell_n+s_n t) - \Phi(t) W\Big] \xrightarrow{n \rightarrow \infty } \mathcal{V}(t) \quad \mbox {a.s.},
\end{equation}
where
\begin{align*}
\mathcal{V} (t) =- \frac{ \phi(t)\; V_1 }{ (\E \sigma_0^{(2)})^{1/2} } +\frac{(\E \sigma_0^{(3)}) \, (1-t^2)\; \phi(t) \; W }{6 (\E \sigma_0^{(2)})^{3/2} } .
\end{align*}
\end{thm}
\begin{thm}\label{th2}
Assume \eqref{cbrweq1} for some $\lambda>16$, \eqref{cbrweq2-3} for some $ \eta >16$ and $\E m_0^{-\delta}<\infty$ for some $\delta>0.$ Then for any bounded measurable set $A\subset \R$ with Lebesgue measure $|A|>0$,
\begin{equation}\label{cbrweq4a}
{n}\Bigg[ \sqrt{2\pi}s_n\Pi_n^{-1} Z_n(A+\ell_n) - W\int_A e^{-\frac{x^2}{2s_n^2}} dx\Bigg] \xrightarrow{n \rightarrow \infty } \mu(A) \quad \mbox {a.s.},
\end{equation}
where
$$ \mu(A) = \frac{|A|} { 2 \E \sigma_0^{(2)} } \Big( V_2 + 2 \; \overline{x}_A V_1\Big) + \frac{ |A| \; c(A) }{8 (\E \sigma_0^{(2)})^{2}}
$$
with $\displaystyle \overline{x}_A= \frac{1} { |A| } \int_A x\,dx $\; and
$$ c(A) = W \; \E\big(\sigma_0^{(4)}-3\big(\sigma_0^{(2)}\big)^2 \big)
~+ 4 \; ({\E \sigma_0^{(3)} }) (V_1 -\overline{x}_AW ) - \frac{5 (\E\sigma_0^{(3)})^2 }{ 3 \; \E \sigma_0^{(2)} }W. $$
\end{thm}
\begin{rem}
For a branching Wiener process,
Theorems \ref{th1} and \ref{th2} improve Theorems 3.1 and 3.2 of Chen (2001, \cite{Chen2001}) by relaxing the second moment condition used by Chen to a
moment condition of the form $ \E X (\ln^+ X)^{1+\lambda} < \infty$ (cf. (\ref{cbrweq1})). For a branching random walk with a constant or random environment,
the second terms in $\mathcal{V}(\cdot)$ and $\mu(\cdot)$ are new: they did not appear
in Chen's results \cite{Chen2001} for a branching Wiener process; the reason is that in the case of a Brownian motion, we have $\sigma_0^{(3)}= \sigma_0^{(4)}-3\big(\sigma_0^{(2)}\big)^2=0$.
\end{rem}
\begin{rem} As will be seen in the proof,
if we assume an exponential moment condition for the motion law, then the moment condition on the underlying branching mechanism can be weakened: in that case, we only need to assume that $\lambda>3/2$ in Theorem \ref{th1} and $\lambda >4$ in Theorem \ref{th2}. In particular, for a branching Wiener process, Theorem \ref{th1} (resp. Theorem \ref{th2} ) is valid when (\ref{cbrweq1}) holds for some $\lambda>3/2$ (resp. $\lambda >4$).
\end{rem}
\begin{rem} \label{rem-lattice}
When the Cram\'er condition $ \P \Big( \limsup_{|t|\rightarrow \infty }\big|\E_\xi e^{it\widehat{L}_0}\big| <1 \Big) >0 $ fails,
the situation is different. In fact, while revising our manuscript we learned that a lattice version of Theorems \ref{th1} and \ref{th2} (for a branching random walk on $\mathbb{Z}$ in a constant environment, for which the preceding condition fails) has recently been established in \cite{GK2015}.
\end{rem}
For simplicity and without loss of generality, hereafter we always assume that $l_n=0$ (otherwise, we only need to replace $L_{ui}$ by $L_{ui}-l_n$) and hence $\ell_n=0$. In the following, we will write $K_\xi $ for a constant depending on the environment, whose value may vary from line to line.
\section{Notation and Preliminary results}\label{sec6}
In this section, we introduce some notation and important lemmas which will be used in the sequel.
\subsection{Notation}\label{sec3.1}
In addition to the $\sigma-$fields $\mathscr{F}_n$ and $\mathscr{D}_n$, the following $\sigma$-fields will also be used:
\begin{eqnarray*}
\mathscr{I}_0=\{\emptyset,\Omega \}, \quad \mathscr{I}_n &=& \sigma ( \xi_k, N_u, L_{ui}: k<n, i\geq 1, |u| <
n) \mbox{ for $n\geq 1$}.
\end{eqnarray*}
For conditional probabilities and expectations, we write:
\begin{eqnarray*}
&&\P_{\xi, n}(\cdot ) = \P_\xi(\cdot | \D_n), \quad \E_{\xi,n}(\cdot )= \E_\xi(\cdot | \D_n);\quad \P_{n}(\cdot )= \P(\cdot | \I_n), \quad \E_{n}(\cdot )= \E(\cdot | \I_n);
\\ && \P_{\xi, \mathscr{F}_n}(\cdot ) = \P_\xi(\cdot | \F_n), \quad \E_{\xi,\F_n}(\cdot )= \E_\xi(\cdot | \F_n) .
\end{eqnarray*}
As usual, we set $\N^* = \{1,2,3,\cdots \}$ and denote by
$$ U= \bigcup_{n=0}^{\infty} (\N^*)^n $$
the set of all finite sequences, where $(\N^*)^0=\{\varnothing \}$ contains the null sequence $ \varnothing$.
For all $u\in U$, let $\mathbb{T}(u)$ be the shifted tree of $\mathbb{T}$ at $u$ with defining elements $\{N_{uv}\}$: we have
1) $\varnothing \in \mathbb{T}(u)$, 2) $vi\in \mathbb{T}(u)\Rightarrow v\in \mathbb{T}(u)$ and 3) if $v\in \mathbb{T}(u)$, then $vi\in \mathbb{T}(u)$ if and only if $1\leq i\leq N_{uv} $. Define $\mathbb{T}_n(u)=\{v\in \mathbb{T}(u): |v|=n\}$. Then $\mathbb{T}=\mathbb{T}(\varnothing)$ and $\mathbb{T}_n=\mathbb{T}_n(\varnothing)$.
For every integer $m\geq 0$, let $H_m$ be the Chebyshev-Hermite polynomial of degree $m$ (\cite{Petrov75}):
\begin{equation}\label{eqCH}
H_{m}(x)=m! \sum_{k=0}^{\lfloor \frac{m}{2}\rfloor} \frac{(-1)^k x^{m-2k}}{ k!(m-2k)! 2^k}.
\end{equation}
The first few Chebyshev-Hermite polynomials relevant to us are:
\begin{eqnarray*}
& & H_0(x)=1, \\
& & H_1(x) =x,\\
& & H_2(x)=x^2-1,\\
&& H_3(x)= x^3-3x, \\
& & H_4(x)= x^4-6x^2+3, \\
&& H_5 (x)= x^5-10x^3+15x, \\
&& H_6(x)= x^6-15x^4+45x^2-15, \\
&& H_7(x)= x^7-21x^5+105x^3-105x, \\
&& H_8(x)= x^8-28x^6+210x^4-420x^2+105.
\end{eqnarray*}It is known (cf. \cite{Petrov75}) that for every integer $m\geq 0$,
\begin{equation*}
\Phi^{(m+1)}(x) = \frac{d^{m+1}}{dx^{m+1}}\Phi(x)= (-1)^m\phi(x)H_m(x).
\end{equation*}
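Both the explicit sum \eqref{eqCH} and the derivative identity above are easy to confirm numerically. The following sketch (ours, purely illustrative) evaluates $H_m$ from \eqref{eqCH}, compares it with the listed closed forms at sample points, and checks $\Phi^{(m+1)}(x)=(-1)^m\phi(x)H_m(x)$ for $m=1,2$ via finite differences of $\phi$ (recall $\Phi^{(m+1)}=\phi^{(m)}$):

```python
from math import exp, factorial, pi, sqrt

def phi(x):
    # standard normal density
    return exp(-x * x / 2) / sqrt(2 * pi)

def hermite(m, x):
    """Chebyshev-Hermite polynomial H_m(x) via the explicit sum (eqCH)."""
    return factorial(m) * sum(
        (-1) ** k * x ** (m - 2 * k)
        / (factorial(k) * factorial(m - 2 * k) * 2 ** k)
        for k in range(m // 2 + 1)
    )

# Spot-check against the listed closed forms, e.g. H_3(x) = x^3 - 3x
# and H_8(x) = x^8 - 28x^6 + 210x^4 - 420x^2 + 105.
assert abs(hermite(3, 2.0) - (2.0**3 - 3 * 2.0)) < 1e-9
assert abs(hermite(8, 1.0) - (1 - 28 + 210 - 420 + 105)) < 1e-6

# phi^{(m)}(x) = (-1)^m phi(x) H_m(x) for m = 1, 2, by central differences.
x, h = 0.7, 1e-4
d1 = (phi(x + h) - phi(x - h)) / (2 * h)
d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2
assert abs(d1 - (-1) * phi(x) * hermite(1, x)) < 1e-6
assert abs(d2 - phi(x) * hermite(2, x)) < 1e-4
```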
\subsection{Two preliminary lemmas}
We first give an elementary lemma which will be often used in Section \ref{sec5}.
\begin{lem}\label{lemma2-1}
\begin{enumerate}
\item[(a)] For $x,y \geq 0$,
\begin{equation}\label{cbrw2.1}
\ln^+(x+y) \leq 1+\ln^+x+ \ln^+y , \qquad \ln(1+x) \leq 1+\ln^+x.
\end{equation}
\item[(b)] For each $\lambda >0$, there exists a constant $K_\lambda>0$, such that \begin{equation}\label{cbrw3.4}
(\ln^+ x) ^{1+\lambda} \leq K_\lambda x, \ \ x>0,
\end{equation}
\item[(c)] For each $\lambda >0$, the function
\begin{equation}\label{cbrw2.3}
( \ln(e^\lambda+ x) )^{1+\lambda} \;\; \mbox{ is concave for } \; x>0.
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
Part (a) holds since $\ln^+(x+y) \leq \ln^+(2 \max\{x,y\}) \leq 1+\ln^+x+ \ln^+y$.
Parts (b) and (c) can be verified easily.
\end{proof}
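The three elementary bounds of Lemma \ref{lemma2-1} can also be confirmed numerically on a grid. The sketch below (ours, illustrative) takes $\lambda=1$, for which the constant $K_\lambda=1$ already works in (b), since $\sup_{x>0}(\ln^+ x)^2/x = 4/e^2 < 1$; part (c) is checked through midpoint concavity:

```python
from math import exp, log

def logplus(x):
    """ln^+ x = max(ln x, 0)."""
    return max(log(x), 0.0) if x > 0 else 0.0

lam = 1.0
f = lambda x: log(exp(lam) + x) ** (1 + lam)   # the function in part (c)
grid = [i / 7 for i in range(1, 150)]          # sample points in (0, ~21)

for x in grid:
    # (b) with K_1 = 1: (ln^+ x)^2 <= x, since max of (ln x)^2/x is 4/e^2 < 1
    assert logplus(x) ** 2 <= x + 1e-12
    # second inequality of (a)
    assert log(1 + x) <= 1 + logplus(x) + 1e-12
    for y in grid:
        # first inequality of (a)
        assert logplus(x + y) <= 1 + logplus(x) + logplus(y) + 1e-12
        # (c): midpoint concavity of f on the grid
        assert f((x + y) / 2) >= (f(x) + f(y)) / 2 - 1e-12
```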
We next present
the Edgeworth expansion for sums of independent random variables, which we shall need in Sections \ref{sec3} and \ref{sec4} to prove the main theorems. Let us recall the theorem used in this paper, obtained by Bai and Zhao (1986, \cite{BaiZhao1986}), which generalizes the i.i.d.\ case (cf. \cite[P.159, Theorem 1]{Petrov75}).
Let $\{X_j\}$ be independent random variables satisfying for each $j\geq 1$
\begin{equation}\label{cbrwa1}
\E X_j=0, \E |X_j|^{k} <\infty \mbox{ with some integer } k \geq 3.
\end{equation}
We write $B_n^2 = \sum_{j=1}^{n} \E X_j^2$ and only consider the nontrivial case $B_n>0$.
Let $\gamma_{\nu j}$ be the cumulant of order $\nu$ of $X_j$ for each $j\geq1$.
Write
\begin{align*}
& \lambda_{\nu,n }= n^{(\nu-2)/2} B_n^{-\nu} \sum_{j=1}^n \gamma_{\nu j}, \quad \nu=3,4,\cdots, k; \\
& Q_{\nu,n}(x)= \sum { }^{'}(-1)^{\nu+2s}\Phi^{(\nu+2s)}(x) \prod_{m=1}^{\nu} \frac{1}{k_m!} \bigg(\frac{\lambda_{m+2,n}}{(m+2)!}\bigg)^{k_m}
\\ & \qquad \quad =- \phi(x)\sum { }^{'} H_{\nu+2s-1}(x)\prod_{m=1}^{\nu} \frac{1}{k_m!} \bigg(\frac{\lambda_{m+2,n}}{(m+2)!}\bigg)^{k_m},
\end{align*}
where the summation $ \sum { }^{'}$ is carried out over all nonnegative integer solutions $(k_1, \dots, k_\nu )$ of the equations:
\begin{equation*}
k_1+\cdots+ k_\nu=s \quad \mbox{ and } \quad k_1+2k_2+\cdots +\nu k_{\nu}=\nu.
\end{equation*}
For $ 1\leq j\leq n$ and $x\in \R$, define
\begin{align*}
& F_n(x)= \P \Big ( {B_n}^{-1} \sum_{j=1}^n X_j \leq x\Big), \quad v_j(t) = \E e^{itX_j}; \\
&Y_{nj}= X_j \mathbf{1}_{\{ |X_j| \leq B_n\}}, \quad Z_{nj}^{(x)}= X_{j }\mathbf{1}_{ \{|X_j| \leq B_n(1+|x|)\}}, \quad W_{nj}^{(x)}= X_{j }\mathbf{1}_{ \{|X_j| > B_n(1+|x|)\}}.
\end{align*}
The Edgeworth expansion theorem can be stated as follows.
\vspace{2mm}
\begin{lem} [\cite{BaiZhao1986}] \label{lem-Edge-exp}
Let $n\geq 1$ and $X_1, \cdots, X_n$ be a sequence of independent random variables satisfying \eqref{cbrwa1} and $ B_n>0$. Then for the integer $k\geq 3$,
\begin{multline*}
| F_n(x) - \Phi(x)- \sum_{\nu=1}^{k-2} Q_{\nu,n}(x)n^{-\nu/2} | \leq C(k)\Bigg\{ (1+|x|)^{ -k} B_n^{-k} \sum_{j=1}^n \E |W_{nj}^{(x)}|^k + \\ (1+|x|)^{ -k-1} B_n^{-k-1} \sum_{j=1}^n\E |Z_{nj}^{(x)}|^{k+1} + (1+|x|)^{ -k-1} n^{k(k+1)/2}\Big( \sup_{|t|\geq \delta_n} \frac{1}{n} \sum_{j=1}^n |v_{j}(t)| +\frac{1}{2n} \Big)^n \Bigg\},
\end{multline*}
where $\displaystyle \delta_n = \frac{1}{12} {B_n^2}{ (\sum_{j=1}^n\E |Y_{nj}|^3)^{-1} }$ and $C(k)>0 $ is a constant depending only on $k$.
\end{lem}
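To illustrate the quality of the expansion, one may compare it with the exact distribution of a standardized sum of $n$ i.i.d.\ $\mathrm{Exp}(1)$ variables, whose CDF (the Erlang function) is available in closed form. The third cumulant of $\mathrm{Exp}(1)$ is $2$, so $\lambda_{3,n}=2$ and the one-term correction is $Q_{1,n}(t)n^{-1/2}=-\phi(t)H_2(t)\lambda_{3,n}/(6\sqrt n)$. The following sketch (ours, illustrative only) checks numerically that this corrected approximation beats the plain normal approximation:

```python
from math import erf, exp, pi, sqrt

def phi(t):
    return exp(-t * t / 2) / sqrt(2 * pi)

def Phi(t):
    return 0.5 * (1 + erf(t / sqrt(2)))

def erlang_cdf(n, x):
    """Exact CDF of a sum of n i.i.d. Exp(1): 1 - e^{-x} sum_{k<n} x^k/k!."""
    if x <= 0:
        return 0.0
    term, total = exp(-x), 0.0
    for k in range(n):
        total += term
        term *= x / (k + 1)
    return 1.0 - total

n = 50  # the sum has mean n, variance n, and lambda_{3,n} = 2

def F_n(t):            # CDF of the standardized sum
    return erlang_cdf(n, n + t * sqrt(n))

def edgeworth(t):      # Phi(t) + Q_{1,n}(t)/sqrt(n), with H_2(t) = t^2 - 1
    return Phi(t) - phi(t) * (t * t - 1) * 2 / (6 * sqrt(n))

grid = [i / 10 for i in range(-30, 31)]
err_clt = max(abs(F_n(t) - Phi(t)) for t in grid)
err_edge = max(abs(F_n(t) - edgeworth(t)) for t in grid)
assert err_edge < err_clt   # the one-term expansion is already sharper
```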
\section{Convergence of the martingales $\{(N_{1,n},\D_n)\}$ and $\{( N_{2,n}, \D_n )\}$}\label{sec5}
Now we can proceed to prove the convergence of the two martingales defined in Section \ref{sec2}.
\subsection{Convergence of the martingale $ \{ (N_{1,n},\D_n) \}$ }
The fact that $ \{ (N_{1,n},\D_n) \}$ is a martingale can be easily shown: it suffices to notice that
\begin{eqnarray*}
\E_{\xi, n} {N_{1,n+1} }&=&\E_{\xi, n} \bigg( \frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n+1}} S_u \bigg) = \frac{1}{\Pi_{n+1}} \E_{\xi, n} \bigg( \sum_{u\in \T_{n}} \sum_{i=1}^{N_u}( S_u+ L_{ui} ) \bigg) \\
&=&\frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n}} \E_{\xi, n} \Bigg( \sum_{i=1}^{N_u}( S_u+ L_{ui} ) \Bigg) \\
&=&\frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n}} m_n S_u= N_{1,n}.
\end{eqnarray*}
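The martingale computation above can be checked by exact enumeration in a toy example. The sketch below (ours, illustrative; all names are hypothetical) takes $N\in\{1,2\}$ with probability $1/2$ each, so $m_n\equiv 3/2$, and $L=\pm1$ with probability $1/2$ each, so $l_n\equiv 0$; exact rational arithmetic then confirms $\E_{\xi,1} N_{1,2}=N_{1,1}$ for every generation-one configuration.

```python
from fractions import Fraction
from itertools import product

HALF = Fraction(1, 2)
M = Fraction(3, 2)          # m_n = E N = (1 + 2)/2

def branch(pos):
    """All one-step outcomes for one particle at `pos`:
    yields (tuple of child positions, probability)."""
    for n in (1, 2):                              # N = n with probability 1/2
        for moves in product((-1, 1), repeat=n):  # each L = +-1, probability 1/2
            yield tuple(pos + d for d in moves), HALF * HALF ** n

def step_outcomes(config):
    """All joint one-step outcomes for a whole generation `config`."""
    for combo in product(*[list(branch(p)) for p in config]):
        kids = tuple(p for kids_i, _ in combo for p in kids_i)
        prob = Fraction(1)
        for _, q in combo:
            prob *= q
        yield kids, prob

def N1(config, gen):
    # N_{1,gen} = Pi_gen^{-1} * sum_u S_u   (here l_n = 0, so ell_gen = 0)
    return sum(config, Fraction(0)) / M ** gen

# Martingale property: E[N_{1,2} | generation-1 data] = N_{1,1}, exactly.
for cfg1, _ in branch(Fraction(0)):
    cond_exp = sum(p * N1(kids, 2) for kids, p in step_outcomes(cfg1))
    assert cond_exp == N1(cfg1, 1)
```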
We shall prove the convergence of the martingale by showing that the series
\begin{equation} \label{eqn-series1}
\sum_{n=1}^{\infty} I_n \;\; \mbox{ converges a.s., \; with } \;\; I_{n}= N_{1,n+1}-N_{1,n}.
\end{equation}
To this end, we first establish a lemma. For $n\geq 1$ and $|u|= n$, set
\begin{eqnarray}\label{eqn-Xu}
&& X_u=S_u \bigg(\frac{N_u}{m_{|u|}} -1\bigg) + \sum_{i=1}^{N_u}\frac{L_{ui}}{m_{|u|}},
\end{eqnarray}
and let $\widehat{X}_n $ be a generic version of $X_u$, i.e. $\widehat{X}_n $ has the same distribution as $X_u$ (for $|u|=n$). Recall
that $\widehat{N}_n $ has the same distribution as $N_u$, $|u|=n$.
We now prove the following lemma:
\begin{lem}\label{lem6} Under the conditions of Proposition \ref{th1a}, we have
\begin{equation}\label{cbrweq5-1}
\E_{\xi} {|\widehat{X}_n|} (\ln^+{{|\widehat{X}_n|}} )^{1+\lambda} \leq K_{\xi}n \left( (\ln n)^{1+\lambda} + \E_\xi \frac{\widehat{N}_n}{ m_n} (\ln^+ \widehat{N}_n)^{1+\lambda} + (\ln^- m_n)^{1+\lambda}\right),
\end{equation}
where $K_\xi$ is a constant.
\end{lem}
\begin{proof}
For $u\in \T_n$,
\begin{eqnarray*}
& &|X_u|\leq |S_u|\left(1+ \frac{N_u}{m_{n}}\right) + \frac{ \abs{\sum_{i=1}^{N_u} L_{ui}}}{m_{n}}, \\
& & \ln^+ |X_u|\leq 2+ \ln^+|S_u| + \ln(1+ N_u/m_n) + \ln^+ \abs{\frac{1}{m_{n}} \sum_{i=1}^{N_u} L_{ui} }, \\
& & 4^{-\lambda}(\ln^+ |X_u|)^{1+\lambda}\leq 2^{1+\lambda}+( \ln^+|S_u|)^{1+\lambda} + \bigg(\ln\left(1+ \frac{N_u}{m_{n}}\right)\bigg)^{1+\lambda} + \bigg(\ln^+ \abs{ \frac{1}{m_{n}} \sum_{i=1}^{N_u} L_{ui} }\bigg)^{1+\lambda}.
\end{eqnarray*}
Hence we get that
$$ 4^{-\lambda} |X_u| (\ln^+|X_u|)^{1+\lambda}\leq \sum_{i=1}^8 \mathbb{J}_i, $$
with
\begin{eqnarray*}
&& \mathbb{J}_1= 2^{1+\lambda} |S_u|\left(1+ \frac{N_u}{m_{n}}\right) ,\qquad \mathbb{J}_2= |S_u|( \ln^+|S_u|)^{1+\lambda} \left(1+ \frac{N_u}{m_{n}}\right), \\ && \mathbb{J}_3= |S_u|\left(1+ \frac{N_u}{m_{n}}\right)\bigg(\ln\left(1+ \frac{N_u}{m_{n}}\right)\bigg)^{1+\lambda}, \qquad \mathbb{J}_4= |S_u|\left(1+ \frac{N_u}{m_{n}}\right) \bigg(\ln^+ \abs{ \frac{1}{m_{n}}\sum_{i=1}^{N_u} L_{ui} }\bigg)^{1+\lambda}, \\
&&\mathbb{J}_5= \frac{2^{1+\lambda}}{m_{n}}\abs{\sum_{i=1}^{N_u} L_{ui}} ,\quad \mathbb{J}_6= \frac{( \ln^+|S_u|)^{1+\lambda}}{m_{n}} \abs{\sum_{i=1}^{N_u} L_{ui}},\quad \mathbb{J}_7=\bigg(\ln\left(1+ \frac{N_u}{m_{n}}\right)\bigg)^{1+\lambda} \abs{\frac{1}{m_{n}}\sum_{i=1}^{N_u} L_{ui}}, \\ && \mathbb{J}_8=\frac{1}{m_{n}}\abs{\sum_{i=1}^{N_u} L_{ui}}\bigg(\ln^+ \abs{ \frac{1}{m_{n}} \sum_{i=1}^{N_u} L_{ui} }\bigg)^{1+\lambda}.
\end{eqnarray*}
Since \begin{equation*}
\lim_{n\rightarrow \infty} \frac{1}{n} \sum_{j=1}^{n} \E_\xi|\widehat{L}_j|^{q} =\E |\widehat{L}_0|^q <\infty, \quad q=1,2,
\end{equation*}
there exists a constant $K_\xi < \infty $ depending only on $\xi$ such that for $n\geq 1$ and $|u|= n$,
\begin{equation}\label{cbrweq3.1}
\E_\xi |\widehat{L}_n|\leq K_\xi n, \quad \E_\xi |S_u|\leq \sum_{j=1}^{n} \E_\xi|\widehat{L}_j| \leq K_\xi n, \quad \E_\xi |S_u|^2=\sum_{j=1}^{n} \E_\xi|\widehat{L}_j|^2 \leq K_\xi n.
\end{equation}
By the definition of the model, $S_u$, $N_u$ and $L_{ui}$ are mutually independent under $\P_\xi$. On the basis of the above estimates, we have the following inequalities, where $K_\xi $ is a constant depending on $\xi$, whose value may differ from line to line: for $n\geq 1$ and $|u|= n$,
\begin{align*}
& \E_\xi \J_1 = 2^{1+\lambda} \E_\xi |S_u| \E_\xi \left(1+ \frac{N_u}{m_{|u|}}\right) \leq K_\xi n; \\
& \E_\xi \J_2 \leq K_{\lambda} \E_\xi( |S_u|^2 +|S_u|) \leq K_\xi n \quad (\mbox{by } \eqref{cbrw3.4}); \\
& \E_\xi \J_3 \leq \E_\xi|S_u| \E_\xi \left(1+ \frac{N_u}{m_{|u|}}\right)\bigg(\ln\left(1+ \frac{N_u}{m_{n}}\right)\bigg)^{1+\lambda} \\
&\qquad \leq K_\xi n \left(K_\xi+\E_\xi \frac{\widehat{N}_n}{ m_n} (\ln^+ \widehat{N}_n)^{1+\lambda} + \big(\ln^{-} m_n \big)^{1+\lambda} \right);
\\ &
\E_\xi \J_4 \leq \E_\xi |S_u| \; \E_\xi \left[ \left(1+ \frac{N_u}{m_{|u|}} \right) \bigg(\ln \Big(e^{\lambda}+ \frac{1}{m_{|u|}}\abs{ \sum_{i=1}^{N_u} L_{ui} } \Big)\bigg)^{1+\lambda} \right]
\\ & \qquad\leq (K_\xi n) \E_\xi \left[ \left(1+ \frac{N_u}{m_{|u|}} \right) \bigg(\ln \E_\xi \Big( e^{\lambda}+ \frac{1}{m_{|u|}}\sum_{i=1}^{N_u} \abs{ L_{ui} } \; \Big | \; N_u\Big) \bigg)^{1+\lambda} \right]
\\ & \qquad ~~~~ \mbox{(by Jensen's inequality under $\E_\xi ( \cdot | N_u)$ using the concavity of $(\ln (e^{\lambda}+x))^{1+\lambda}$)}
\\& \qquad = (K_\xi n) \E_\xi \left(1+ \frac{N_u}{m_{|u|}} \right) \bigg(\ln \Big( e^{\lambda}+ \frac{1}{m_{|u|}}\sum_{i=1}^{N_u}\E_\xi \abs{ L_{ui} }\Big)\bigg)^{1+\lambda}
\\ & \qquad\leq K_\xi n \left( K_\xi (\ln n)^{1+\lambda}+ \E_\xi \Big(\frac{1}{ m_{|u|} }N_u\big(\ln^+ { { N_u} } \big)^{1+\lambda}\Big) +2 \big(\ln^{-} m_n \big)^{1+\lambda}\right )
\\ &\qquad\leq K_\xi n ( \ln n )^{1+\lambda}+ K_\xi n \E_\xi \frac{1}{ m_n}\widehat{N}_n \big(\ln^+ {{ \widehat{N}_n }} \big) ^{1+\lambda} + K_\xi n\big(\ln^{-} m_n \big)^{1+\lambda} ;
\\ & \E_\xi \J_5 \leq 2^{1+\lambda} \E_\xi |\widehat{L}_n|\leq K_\xi n ;
\\ & \E_\xi \J_6 = \E_\xi (\ln^+ |S_u|)^{1+\lambda} \E_\xi \frac{1}{m_{|u|}} \abs{\sum_{i=1}^{N_u} L_{ui} }\leq \E_\xi (\ln (e^{\lambda}+ |S_u|))^{1+\lambda} \E_\xi \frac{1}{m_{|u|}} \abs{\sum_{i=1}^{N_u} L_{ui} }
\\ &\qquad \leq (\ln(e^{\lambda}+\E_\xi|S_u|) )^{1+\lambda} \E_\xi|\widehat{L}_n|\leq (\ln (K_\xi n) )^{1+\lambda} K_\xi n \leq K_\xi n (\ln n )^{1+\lambda};
\\ & \E_\xi \J_7 \leq \E_\xi \left[ \frac{1}{m_n} \sum_{i=1}^{N_u} (\E_\xi |L_{ui}|) \Big(\ln\big (1+\frac{N_u}{m_n}\big)\Big)^{1+\lambda} \right] \; \mbox{ (by the independence between $N_u$ and $L_{ui}$)}\\
& \qquad \leq K_\xi n \E_\xi\left[\frac{1}{m_n}N_u 3^{\lambda}
\Big( 1+ (\ln^+ N_u)^{1+\lambda} + (\ln^-m_n)^{1+\lambda}\Big)\right]
\\ & \qquad \leq K_\xi n + K_\xi n \E_\xi \frac{1}{ m_n}\widehat{N}_n \big(\ln^+ {{ \widehat{N}_n }} \big) ^{1+\lambda} + K_\xi n\big(\ln^{-} m_n \big)^{1+\lambda};
\\ & \E_\xi \J_8 \leq \E_\xi \left[\frac{1}{m_n} \Big|{\sum_{i=1}^{N_u} L_{ui}}\Big| \bigg( \ln^+ \Big|{\sum_{i=1}^{N_u} L_{ui}}\Big|+ \ln^- m_n \bigg)^{1+\lambda}\right]
\\ & \qquad \leq \E_\xi \left[\frac{1}{m_n} \Big|{\sum_{i=1}^{N_u} L_{ui}}\Big| 2^{\lambda} \bigg( \Big(\ln^+ \Big|{\sum_{i=1}^{N_u} L_{ui}}\Big|\Big)^{1+\lambda} + (\ln^- m_n)^{1+\lambda} \bigg) \right]
\\ & \qquad \leq K_\lambda \frac{1}{m_n}\E_\xi \Big|{\sum_{i=1}^{N_u} L_{ui}}\Big| ^2 + 2^{\lambda} (\ln^- m_n)^{1+\lambda} \frac{1}{m_n} \E_\xi \Big|{\sum_{i=1}^{N_u} L_{ui}}\Big| \quad (\mbox{ by } \eqref{cbrw3.4})
\\& \qquad \leq K_\lambda \frac{1}{m_n} \E_\xi \sum_{i=1}^{N_u} \E_\xi |L_{ui}|^2 + 2^{\lambda} (\ln^- m_n)^{1+\lambda} \frac{1}{m_n} \E_\xi {\sum_{i=1}^{N_u}\E_\xi \Big| L_{ui}}\Big|
\\&\qquad\leq K_\xi n + K_\xi n (\ln^- m_n)^{1+\lambda}.
\end{align*}
Hence we get that for $n\geq 1$ and $|u|= n$,
\begin{equation}\label{cbrweq3.4}
\E_{\xi} {|X_u|} (\ln^+{{|X_u|}} )^{1+\lambda}\leq K_{\xi}n \left( (\ln n )^{1+\lambda} + \E_\xi \frac{\widehat{N}_n}{ m_n} \Big(\ln^+ { \widehat{N}_n} \Big) ^{1+ \lambda} + (\ln^- m_n)^{1+\lambda}\right).
\end{equation}
This gives \eqref{cbrweq5-1}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{th1a}] We have already seen that $ \{ (N_{1,n},\D_n) \}$ is a martingale. We now prove its convergence by showing the a.s. convergence of $\sum I_n $ (cf. (\ref{eqn-series1})). Notice that
$$ I_{n}= N_{1,n+1}-N_{1,n} = \frac{1 }{\Pi_n} \sum_{u\in \T_n} X_u. $$
We shall use a truncation argument to prove the convergence. Let
$$
X_u'= X_u \mathbf{1}_{\{|X_u| \leq \Pi_{|u|}\}} \quad \mbox{ and }
I_{n}'= \frac{1 }{\Pi_n} \sum_{u\in \T_n} X'_u.
$$
The following decomposition will play an important role:
\begin{equation} \label{eqn-decomposition}
\sum_{n=0}^\infty I_n = \sum_{n=0}^\infty (I_n-I_n')+ \sum_{n=0}^\infty (I_n'- \E_{\xi,\F_n} I_n' ) + \sum_{n=0}^\infty \E_{\xi,\F_n} I_n'.
\end{equation}
We shall prove that each of the three series on the right hand side converges a.s.
To this end, let us first prove that
\begin{equation}\label{cbrweq3.2}
\sum_{n=1}^\infty\frac{1}{(\ln \Pi_n)^{1+\lambda}} \E_{\xi} {|\widehat{X}_n|} (\ln^+{{|\widehat{X}_n|}} )^{1+\lambda}<\infty \quad \mbox{ a.s. }
\end{equation}
Since $ \lim_{n\rightarrow\infty} {\ln \Pi_n}/{ n} =\E\ln m_0 >0$ a.s., for a given constant $0 <\delta_1< \E\ln m_0$ and for $n$ large enough,
\begin{equation*}
\ln \Pi_n > \delta_1 n,
\end{equation*}
so that, by Lemma \ref{lem6},
\begin{equation*}
\frac{1}{(\ln \Pi_n)^{1+\lambda}} \E_{\xi} {|\widehat{X}_n|} (\ln^+{{|\widehat{X}_n|}} )^{1+\lambda}\leq \frac{K_\xi}{\delta_1^{1+\lambda}} \frac{1}{n^{\lambda}} \left[(\ln n)^{1+\lambda}+ \E_\xi \frac{\widehat{N}_n }{m_n} (\ln ^+ {\widehat{N}_n} )^{1+ \lambda} + (\ln^- m_n)^{1+\lambda}\right].
\end{equation*}
Observe that for $ \lambda>1 $, \begin{align*}
& \E \sum_{n=1}^\infty \frac{1}{n^{\lambda} }\bigg[\E_\xi \frac{\widehat{N}_n }{m_n} (\ln ^+ {\widehat{N}_n} )^{1+ \lambda}+ (\ln^- m_n)^{1+\lambda}\bigg ] \\
= \ & \sum_{n=1}^\infty \frac{ 1}{n^{\lambda} } \bigg[ \E \frac{{\widehat{N}_0} }{m_0} (\ln^+ {\widehat{N}_0} )^{1+\lambda}+ \E (\ln^- m_0)^{1+\lambda} \bigg] <\infty,
\end{align*}
which implies that
\begin{equation*}
\sum_{n=1}^\infty \frac{1}{n^{\lambda} }\bigg[\E_\xi \frac{\widehat{N}_n }{m_n} (\ln ^+ {\widehat{N}_n} )^{1+ \lambda}+ (\ln^- m_n)^{1+\lambda}\bigg ] <\infty \mbox{~~ a.s.}
\end{equation*}
Therefore (\ref{cbrweq3.2}) holds.
For the first series $\sum_{n=0}^\infty (I_n-I_n')$ in (\ref{eqn-decomposition}), we observe that
\begin{eqnarray*}
\E_\xi |I_n-I_n'| &=& \E_\xi \abs{\frac{1}{\Pi_n} \sum_{u\in \T_n} X_u \ind{|X_u| >\Pi_n} } \\
&\leq & \E_\xi \left\{\frac{1}{\Pi_n} \sum_{u\in \T_n} \E_{\xi,\F_n} ({|X_u|}\ind{|X_u| >\Pi_n})\right\} \\
&=&\E_{\xi}\big( {|\widehat{X}_n|} \ind{\abs{\widehat{X}_n} >\Pi_n}\big)\\ & \leq &\frac{1}{(\ln \Pi_n)^{1+\lambda}}\E_{\xi} {|\widehat{X}_n|} (\ln^+{{|\widehat{X}_n|}} )^{1+\lambda} .
\end{eqnarray*}
From this and \eqref{cbrweq3.2}, \begin{equation*}
\E_{\xi}\sum_{n=0}^\infty \Big|I_n-I_n'\Big| \leq \sum_{n=0}^\infty \E_\xi |I_n-I_n'| <\infty,
\end{equation*}
whence $\sum_{n=0}^\infty (I_n-I_n')$ converges a.s.
For the third series $\sum_{n=0}^{\infty} \E_{\xi,\F_n} I_n'$, as $ \E_{\xi,\F_n} I_n=0 $, we have
\begin{eqnarray*}
&&\E_\xi\sum_{n=0}^{\infty}\abs{ \E_{\xi,\F_n} I_n' } = \E_\xi\sum_{n=0}^{\infty}\abs{ \E_{\xi,\F_n} (I_n-I_n') }
\leq \sum_{n=0}^{\infty}\E_{\xi} |I_n-I_n'| <\infty,
\end{eqnarray*}so that $\sum_{n=0}^{\infty} \E_{\xi,\F_n} I_n'$ converges a.s.
It remains to
prove that the second series
\begin{equation}\label{cbrweq3.3}
\sum_{n=0}^\infty (I_n'- \E_{\xi,\F_n}I_n') \mbox{ converges a.s. }
\end{equation}
By the a.s. convergence of an $L^2$ bounded martingale (see e.g. \cite[P. 251, Ex. 4.9]{Durrett96Proba}), we only need to show the convergence of the series
$ \sum_{n=0}^\infty \E_{\xi} (I_n'- \E_{\xi,\F_n}I_n')^2 .$
Notice that
\begin{align*}
\E_{\xi} (I_n'- \E_{\xi,\F_n}I_n')^2 &= \E_\xi \Bigg(\frac{1}{\Pi_n} \sum_{u\in\T_n } (X_u'- \E_{\xi,\F_n} X_u')\Bigg)^2
= \E_{\xi} \Bigg(\frac{1}{\Pi_n^2} \sum_{u\in\T_n } \E_{\xi,\F_n} (X_u'- \E_{\xi,\F_n} X_u')^2\Bigg) \\
&\leq \E_\xi \frac{1}{\Pi_n^2} \sum_{u\in \T_n} \E_{\xi,\F_n} X_u'^2
=\frac{1}{\Pi_n} \E_{\xi} (\widehat{X}_n^2\ind{ |\widehat{X}_n| \leq \Pi_n} ) \\
&= \frac{1}{\Pi_n} \E_{\xi} \Big( \widehat{X}_n^2\ind{ |\widehat{X}_n| \leq \Pi_n } \ind{ |\widehat{X}_n| \leq e^{2\lambda}} + \widehat{X}_n^2\ind{ |\widehat{X}_n| \leq \Pi_n } \ind{ |\widehat{X}_n| >e^{2\lambda}} \Big)\\
&\leq \frac{e^{4\lambda}}{\Pi_n}+ \frac{1}{\Pi_n} \E_{\xi}\frac{ \widehat{X}_n^2 \Pi_n (\ln \Pi_n)^{-(1+\lambda) } }{ | \widehat{X}_n| (\ln^+ |\widehat{X}_n|)^{-(1+\lambda)}}
\\&
\quad (\mbox{ because } x(\ln x)^{-1-\lambda} \mbox{ is increasing for } x > e^{2\lambda}) \\ &= \frac{e^{4\lambda}}{\Pi_n}+\frac{ 1 }{ (\ln \Pi_n)^{1+\lambda}}\E_{\xi}|\widehat{X}_n|(\ln^+ |\widehat{X}_n|)^{1+\lambda}.
\end{align*}
Therefore, by \eqref{cbrweq3.2}, we see that $\sum_{n=0}^\infty \E_{\xi} (I_n'- \E_{\xi,\F_n}I_n')^2 <\infty $ a.s. This implies \eqref{cbrweq3.3}.
\if
For convenience, we will use the following notation:
\begin{eqnarray*}
&& X_u= S_u \bigg(\frac{N_u}{m_{|u|}} -1\bigg) + \sum_{i=1}^{N_u}\frac{L_{ui}}{m_{|u|}},\quad X_u'= X_u \mathbf{1}_{\{|X_u| \leq \Pi_{|u|}, |L_{{ui} }|\leq |u|, 1\leq i\leq N_u\}}; \\
&& I_{n}= V_{n+1}-V_{n} = \frac{1 }{\Pi_n} \sum_{u\in \T_n} X_u, \quad I_{n+1}'= \frac{1 }{\Pi_n} \sum_{u\in \T_n} X'_u.
\end{eqnarray*}
Observe that,
\begin{equation*}
\sum_{n=1}^{\infty} I_n = \sum_{n=1}^{\infty} (I_n-I_n') + \sum_{n=1}^{\infty} (I_n'- \E_{\xi, \F_n} I_n') + \sum_{n=1}^{\infty} \E_{\xi, \F_n} I_n'
\end{equation*}
So we will prove that the three series in the right hand side are finite a.s.
By the definition of the notations, we see that
\begin{equation*}
\sum_{n=1}^{\infty} (I_n-I_n') = \sum_{n=1}^{\infty} \frac{1}{\Pi_{n-1}} \sum_{u\in \T_{n-1}} X_u \mathbf{1}_{\{|X_u| > \Pi_{n-1}, | L_{{ui} }|\leq n-1, 1\leq i\leq N_u\}}.
\end{equation*}
Then we have
\begin{eqnarray*}
\E_\xi |\sum_{n=1}^{\infty} (I_n-I_n')| & \leq &\sum_{n=1}^{\infty} \E_\xi |(I_n-I_n')| \\
&\leq & \sum_{n=1}^{\infty} \E_\xi \bigg( \frac{1}{\Pi_n} \sum_{u\in\T_n} \E_{\xi, \F_n} X_u \mathbf{1}_{\{|X_u| >c^n\}} \bigg).
\end{eqnarray*}
Observe that
\begin{multline}\label{eq5}
\P_\xi(I_n\neq I_n') \leq \E_\xi \bigg( \sum_{u\in \T_{n-1} } \sum_{i=1}^{N_u} \P_\xi( |L_{ui}|>n)\bigg ) + \\
\E_\xi\bigg ( \sum_{u\in \T_{n-1} } \P_{\xi, \I_n}( |X_u|>\Pi_{n-1}, |L_{ui}| \leq n-1, 1\leq i\leq N_u ) \bigg).
\end{multline}
First we notice that \begin{eqnarray*}
&& \E \Big(\sum_{n=1}^{\infty} \sum_{u\in \T_n} \P_\xi (|L_u|>n )\Big) \\
&\leq& \E \Big(\sum_{n=1}^{\infty} \E_\xi (\sum_{u\in \T_n} \P_\xi (|L_u|>n ) )\Big ) \\
&=& \E \Big(\sum_{n=1}^{\infty}\Pi_n \P_\xi (|H_n| >n) \Big) \quad \mbox{(using the independence of $N_u$ and $ L_u$ under $\P_\xi$)}\\
&\leq & \E \Big(\sum_{n=1}^{\infty}\Pi_n e^{-\delta_0 n} \E_\xi e^{\delta_0|H_n|} \Big) \\
&\leq & \KM \sum_{n=1}^{\infty}e^{-\delta_0 n} \E \Pi_n < \Big (1-e^{(\ln \E m_0-\delta_0 )} \Big)^{-1}\KM <\infty .
\end{eqnarray*}
Then the first term in the right hand side of \eqref{eq5} is summable.
Because $ \lim_{n\rightarrow +\infty} \ln \Pi_n /n =\E\ln m_0>0$, then we have that for some constant $c>1$, $n $ large enough,
\begin{equation*}
\Pi_n>c^n.
\end{equation*}
Let $\widehat{X}_n$ be the generic random variable of $X_u \mathbf{1}_{\{|L_{ui}| \leq n, 1\leq i\leq N_u\}} $ and $\widehat{N}_n $ be the generic random variable of $N_u (|u|=n )$.
Then for $n$ large,
\begin{align*}
& \E_\xi\bigg ( \sum_{u\in \T_{n-1} } \P_{\xi, \I_n}( |X_u|>\Pi_{n-1}, |L_{ui}| \leq n-1, 1\leq i\leq N_u ) \bigg) \\
\leq~ & \Pi_{n-1} \E_\xi\bigg ( \P_{\xi, \I_n}( |\widehat{X}_n|>\Pi_{n-1} ) \bigg)
\\ \leq~ & \frac{\E_\xi |\widehat{X}_n| (\ln^{+}|\widehat{X}_n|)^{1+\lambda}}{(\ln \Pi_{n-1} )^{1+\lambda}}\\
\leq~ & \frac{1 }{ (n-1)^{1+\lambda} (\ln c)^{1+\lambda} } (\E_\xi S_u^2 +2n) \E_\xi\frac{\widehat{N}_{n-1}}{m_{n-1}}(\ln^+|\frac{\widehat{N}_{n-1}}{m_{n-1}}|)^{1+\lambda} \\
\leq ~& \frac{(\KM+2)}{(n-1)^\lambda(\ln c)^{1+\lambda} }\E_\xi\frac{\widehat{N}_{n-1}}{m_{n-1}}(\ln^+|\frac{\widehat{N}_{n-1}}{m_{n-1}}|)^{1+\lambda}.
\end{align*}
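Here, the passage from the first to the second line uses the elementary observation that for any random variable $X$ and any constant $a>e$,
\begin{equation*}
a\,\P_\xi(|X|>a)\leq \E_\xi |X| \mathbf{1}_{\{|X|>a\}} \leq \frac{\E_\xi |X| (\ln^{+}|X|)^{1+\lambda}}{(\ln a )^{1+\lambda}},
\end{equation*}
applied with $a=\Pi_{n-1}$ (note that $\Pi_{n-1}>c^{n-1}>e$ for $n$ large); the second bound holds because $(\ln^{+}|x|)^{1+\lambda}\geq (\ln a)^{1+\lambda}$ on the event $\{|x|>a\}$.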
Note that \begin{align*}
&\E \sum_{n} \frac{(\KM+2)}{(n-1)^\lambda(\ln c)^{1+\lambda}}\E_\xi\frac{\widehat{N}_{n-1}}{m_{n-1}}(\ln^+|\frac{\widehat{N}_{n-1}}{m_{n-1}}|)^{1+\lambda} = \sum_{n}\frac{(\KM+2)}{n^\lambda(\ln c)^{1+\lambda}}\E\frac{N}{m_0}(\ln^+|\frac{N}{m_0}|)^{1+\lambda}< \infty
\end{align*}
implies that a.s.
\begin{equation*}
\sum_{n} \frac{(\KM+2)}{(n-1)^\lambda(\ln c)^{1+\lambda}}\E_\xi\frac{\widehat{N}_{n-1}}{m_{n-1}}(\ln^+|\frac{\widehat{N}_{n-1}}{m_{n-1}}|)^{1+\lambda}<\infty.
\end{equation*}
Therefore
\begin{equation*}
\sum_n \E_\xi\bigg ( \sum_{u\in \T_{n-1} } \P_{\xi, \I_n}( |X_u|>\Pi_{n-1}, |L_{ui}| \leq n-1, 1\leq i\leq N_u ) \bigg) <\infty.
\end{equation*}
Combining the above arguments with \eqref{eq5}, we obtain
\begin{equation*}
\sum_n \P_\xi(I_n\neq I_n')<\infty.
\end{equation*}
By the Borel--Cantelli lemma, we get that $ \P_\xi (I_n\neq I_n' \mbox{ infinitely often} ) =0$, and hence the series $ \sum_{n=1}^{\infty} (I_n-I_n')$ converges a.s.
Next we are going to prove that
\begin{equation*}
\sum_{n=1}^\infty (I_n' -\E_{\xi, \I_n} I_n')<\infty \quad a.s.
\end{equation*}
It will follow from the fact that $ \sum_{n=1}^{\infty} \E_\xi (I_n' -\E_{\xi, \I_n} I_n')^2 <\infty$.
Notice that
\begin{eqnarray*}
&& \E_\xi (I_n' -\E_{\xi, \I_n} I_n')^2 = \E_\xi \bigg( \E_{\xi, \I_n} \Big( \frac{1}{\Pi_{n-1}} \sum_{u\in \T_{n-1}} (X_u'-\E_{\xi, \I_n} X_u' ) \Big)^2 \bigg)
\\&=&
\E_{\xi} \bigg( \frac{1}{\Pi_{n-1}^2} \sum_{u\in\T_{n-1}} \E_{\xi, \I_n}(X'_u-\E_{\xi, \I_n} X_u' )^2\bigg)
\\ & \leq&
\frac{1}{\Pi_{n-1}^2} \E_{\xi} \bigg( \sum_{u\in\T_{n-1}} \E_{\xi, \I_n}\big(X_u^2\mathbf{1}_{\{|X_u| < \Pi_{n-1}, |L_{{ui} }|\leq n, 1\leq i\leq N_u\}} \big) \bigg)
\\ &=&
\frac{1}{\Pi_{n-1}} \E_{\xi} \bigg( \E_{\xi, \I_n}\big(\widehat{X}_n^2\mathbf{1}_{\{|\widehat{X}_n| < \Pi_{n-1}\}} \big) \bigg)
\\ &=&
\frac{1}{\Pi_{n-1}} \E_{\xi} \bigg( \E_{\xi, \I_n}\big(\widehat{X}_n^2\mathbf{1}_{\{|\widehat{X}_n| < \Pi_{n-1}\}} (\mathbf{1}_{\{|\widehat{X}_n| \leq c\}} + \mathbf{1}_{\{|\widehat{X}_n| > c\}})\big) \bigg)
\\ &\leq &
\frac{c^2}{ \Pi_{n-1} } + \frac{1}{\Pi_{n-1}} \E_\xi \bigg( \E_{ \xi, \I_n} \frac{\widehat{X}_n^2 \Pi_{n-1} (\ln^+ \Pi_{n-1})^{-(1+\lambda) } }{ |\widehat{X}_n| (\ln^+ \widehat{X}_n )^{-(1+\lambda) }} \bigg)
\\ & \leq & \frac{c^2}{ \Pi_{n-1} } +\frac{\E_\xi |\widehat{X}_n| (\ln^{+}|\widehat{X}_n|)^{1+\lambda}}{(\ln \Pi_{n-1} )^{1+\lambda}}.
\end{eqnarray*}
Then, using the properties of $\Pi_n$, we see that $ \sum_{n=1}^{\infty} \E_\xi (I_n' -\E_{\xi, \I_n} I_n')^2 <\infty$.
Now we turn to proving that $\sum_{n=1}^\infty \E_{\xi, \I_n} I_n' <\infty $.
Observe that (using the symmetry of $L_{ui}$),
\begin{eqnarray*}
\E_\xi \bigg|\sum_{n=1}^\infty \E_{\xi, \I_n} I_n' \bigg|&=& \E_\xi \Big|\sum_{n=1}^\infty \E_{\xi, \I_n} \Big( \frac{1}{\Pi_{n-1}} \sum_{u\in \T_{n-1}} X_u\mathbf{1}_{\{|X_u|\leq \Pi_{n-1}, |L_{ui}| \leq n, 1\leq i \leq N_u \} } \Big) \Big| \\
&=& \E_\xi \Big|\sum_{n=1}^\infty \E_{\xi, \I_n} \Big( \frac{1}{\Pi_{n-1}} \sum_{u\in \T_{n-1}} X_u\mathbf{1}_{\{|X_u|> \Pi_{n-1}, |L_{ui}| \leq n, 1\leq i \leq N_u \}} \Big) \Big| \\
&\leq & \sum_{n=1}^\infty \frac{\E_\xi |\widehat{X}_n| (\ln^{+}|\widehat{X}_n|)^{1+\lambda}}{(\ln \Pi_{n-1} )^{1+\lambda}}<\infty.
\end{eqnarray*}
Now the desired result follows.
\fi
Combining the above results, we see that the series $\sum I_n$ converges a.s., so that $N_{1,n}$ converges a.s. to $$V_1=\sum_{n=1}^\infty (N_{1,n+1} -N_{1,n})+ N_{1,1}.$$
\end{proof}
\subsection{Convergence of the martingale $ \{(N_{2,n},\D_n)\}$}
To see that $ \{(N_{2,n},\D_n)\}$ is a martingale, it suffices to notice that (recall that we have assumed $\ell_n=0$)
\begin{eqnarray*}
\E_{\xi, n} {N_{2,n+1} }&=&\E_{\xi, n}(s_{n+1}^2W_{n+1})- \E_{\xi, n} \bigg( \frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n+1}} S_u^2 \bigg)\\
& = & s_{n+1}^2W_{n}- \frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n}} \E_{\xi, n} \bigg( \sum_{i=1}^{N_u}(S_{u}+L_{ui})^2 \bigg) \\
&=&s_{n+1}^2W_{n}- \frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n}} \E_{\xi, n} \Bigg( \sum_{i=1}^{N_u}( S_u^2+ 2S_u L_{ui}+L_{ui}^2 ) \Bigg) \\
&=&s_{n+1}^2W_{n}- \frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n}} \E_{\xi, n} \Bigg( \sum_{i=1}^{N_u} \E_{\xi, n}\{ ( S_u^2+ 2S_u L_{ui}+L_{ui}^2 )|N_u \}\Bigg)\\
&=&s_{n+1}^2W_{n}- \frac{1}{\Pi_{n+1}} \sum_{u\in \T_{n}} m_n(S_u^2+\sigma_n^{(2)}) =s_n^2W_n- \frac{1}{\Pi_{n}} \sum_{u\in \T_{n}} S_u^2 =N_{2,n},
\end{eqnarray*}
where the last line uses $\Pi_{n+1}=m_n\Pi_n$ and $s_{n+1}^2=s_n^2+\sigma_n^{(2)}$.
As in the case of $ \{(N_{1,n},\D_n)\}$, we will prove the convergence of the martingale $ \{(N_{2,n},\D_n)\}$ by showing that
\begin{equation*}
\sum_{n=1}^{\infty} (N_{2,n+1} -N_{2,n}) \mbox{ converges a.s., }
\end{equation*}
following the same lines as before. For $n\geq 1$ and $|u|=n$,
we will still use the notation $X_u$ and $I_n$, but this time they are defined by:
\begin{align} \label{Xu2}
X_u&~= (S_u^2- s_n^2) (1-\frac{N_u}{m_n}) + \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)}-L_{ui}^2) - \frac{2}{m_n} S_u \sum_{i=1}^{N_u} L_{ui}, \\
\label{In2}
I_n&= N_{2,n+1}-N_{2,n} = \frac{1}{\Pi_{n}} \sum_{u\in \T_{n}} X_u.
\end{align}
Instead of
Lemma \ref{lem6}, we have:
\begin{lem}\label{lem7} For $n\geq 1$ and $|u|= n$, let $\widehat{X}_n $ be a random variable with the common distribution of $X_u$ defined by (\ref{Xu2}), under the law $\P_\xi$. If the conditions of Proposition \ref{th2a} hold, then
\begin{equation}\label{cbrweq5-2}
\E_{\xi} {|\widehat{X}_n|} (\ln^+{{|\widehat{X}_n|}} )^{1+\lambda}\leq K_{\xi} n^2 \bigg[ \E_\xi \frac{\widehat{N}_n}{ m_n} (\ln ^+ { \widehat{N}_n} )^{1+\lambda}+ (\ln^- m_n)^{1+\lambda}+1 \bigg].
\end{equation}
\end{lem}
\begin{proof}
Observe that for $|u|= n$,
\begin{align*}
&|X_u|~\leq |s_n^2-S_u^2|(1+\frac{N_u}{m_n})+ \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2)\right|+ |S_u| \left|\frac{2}{m_n}\sum_{i=1}^{N_u}L_{ui}\right|, \\
& \ln^+ |X_u|~ \leq 2+ \ln^+|s_n^2-S_u^2|+\ln (1+\frac{N_u}{m_n})+\ln^+ \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right| \\ &\qquad \qquad \qquad + \ln^+ \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| + \ln^+|S_u|,\\
& 6^{-\lambda} (\ln^+|X_u|)^{1+\lambda} \leq 2^ {1+\lambda}+ (\ln^+|s_n^2-S_u^2|)^{1+\lambda}+ (\ln(1+\frac{N_u}{m_n}))^{1+\lambda} \\ &\qquad \qquad \qquad+ \left(\ln^+ \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right|\right)^{1+\lambda} + \left(\ln^+ \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| \right )^{1+\lambda}+( \ln^+|S_u| )^{1+\lambda}.
\end{align*}
Therefore
$$ 6^{-\lambda}|X_u| (\ln^+|X_u|)^{1+\lambda} \leq \sum_{i=1}^8 \mathbb{K}_i $$
with
\begin{align*}
& \K_1 =|s_n^2-S_u^2| (1+\frac{N_u}{m_n})\Bigg[ 2^ {1+\lambda}+ \Big(\ln(1+\frac{N_u}{m_n})\Big)^{1+\lambda}+ \left(\ln^+ \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right|\right)^{1+\lambda}\\ & \hspace{7.8cm} + \left(\ln^+ \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| \right )^{1+\lambda} \Bigg],\\
& \K_2 =|s_n^2-S_u^2| (1+\frac{N_u}{m_n}) \bigg[ (\ln^+|s_n^2-S_u^2|)^{1+\lambda}+ ( \ln^+|S_u| )^{1+\lambda} \bigg],
\\ & \K_3 = \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right| \bigg[2^ {1+\lambda}+ (\ln^+|s_n^2-S_u^2|)^{1+\lambda}+ ( \ln^+|S_u| )^{1+\lambda} \bigg],\\ &
\K_4 = \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right|
\Big(\ln(1+\frac{N_u}{m_n})\Big)^{1+\lambda},
\\ & \K_5=\left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right|
\left[\left(\ln^+ \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| \right )^{1+\lambda}+ \left(\ln^+ \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right|\right)^{1+\lambda}\right],
\\ & \K_6 = \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| |S_u|
\bigg[ 2^ {1+\lambda}+ (\ln^+|s_n^2-S_u^2|)^{1+\lambda}+ ( \ln^+|S_u| )^{1+\lambda} \bigg],\\
& \K_7 = \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| |S_u|\Big(\ln(1+\frac{N_u}{m_n})\Big)^{1+\lambda},
\\ & \K_8= \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| |S_u|
\left[\left(\ln^+ \left|\frac{2}{m_n} \sum_{i=1}^{N_u} L_{ui} \right| \right )^{1+\lambda}+ \left(\ln^+ \left| \frac{1}{m_n} \sum_{i=1}^{N_u} (\sigma_n^{(2)} -L_{ui}^2) \right|\right)^{1+\lambda}\right].
\end{align*}
It is clear that \eqref{cbrweq3.1} remains valid here; similarly, we get
\begin{equation*}
\E_\xi |\sigma_n^{(2)} -L_{ui}^2| = \E_\xi |\sigma_n^{(2)} -\widehat{L}_{n}^2| \leq K_\xi n
\end{equation*}
(recall that $\widehat{L}_{n}$ is a random variable with the same distribution as $ L_{ui}$ for any $|u|=n $ and $i\geq 1$).
By the definition of the model, $S_u$, $N_u$ and $L_{ui}$ are mutually independent under $\P_\xi$. On the basis of the above estimates, we have the following inequalities: for $|u|= n$,
\begin{align*}
\E_\xi \K_1 & \leq \E_\xi| S_u^2+s_n^2| \E_\xi (1+\frac{N_u}{m_n})\Bigg[ 2^ {1+\lambda}+ \Big(\ln(1+\frac{N_u}{m_n})\Big)^{1+\lambda}+\left(\ln \Big(e^{\lambda}+ \frac{1}{m_n} \sum_{i=1}^{N_u} \E_\xi|\sigma_n^{(2)} -L_{ui}^2|\Big) \right)^{1+\lambda} \\ & \qquad + \left(\ln\Big(e^{\lambda}+ \frac{2}{m_n} \sum_{i=1}^{N_u} \E_\xi | L_{ui}| \Big)\right )^{1+\lambda} \Bigg] \quad (\mbox{by Jensen's inequality under } \E_\xi(\cdot | N_u) )
\\ & \leq K_\xi n\left[K_\xi + \E_\xi \frac{N_u}{m_n} \big( \ln^+{N_u} \big)^{1+\lambda}+ \big( \ln^-{m_n} \big)^{1+\lambda}+ (\ln n)^{1+\lambda} \right];
\\\E_\xi \K_2 & \leq 2(\E_\xi |S_u |^{2+\varepsilon} + |s_n|^{2+\varepsilon} )\leq K_\xi n^{2};
\\ \E_\xi \K_3 & \leq \E_\xi \Big(\frac{1}{m_n} \sum_{i=1}^{N_u} \E_\xi |\sigma_n^{(2)} -L_{ui}^2|\Big) \bigg(2^ {1+\lambda}+ (\ln(e^{\lambda}+\E_\xi|s_n^2-S_u^2|) )^{1+\lambda}+ ( \ln (e^{\lambda}+\E_\xi|S_u|) )^{1+\lambda} \bigg)\\ &
\leq K_\xi n (\ln n )^{1+\lambda};
\\ \E_\xi \K_4 & \leq K_\xi n+ K_\xi n \E_\xi \frac{N_u}{m_n} \Big(\ln(1+\frac{N_u}{m_n})\Big)^{1+\lambda};
\\ \E_\xi \K_5 &\leq 3^{\lambda} \E_\xi \frac{1}{m_n}\bigg| \sum_{i=1}^{N_u} ( \sigma^{(2)}_n - L_{ui}^2)\bigg| \Bigg[ \Big( \ln^+\Big | \sum_{i=1}^{N_u} L_{ui} \Big|\Big) ^{1+\lambda} + \Big( \ln^+\Big | \sum_{i=1}^{N_u} (\sigma^{(2)}_n - L_{ui}^2) \Big|\Big) ^{1+\lambda} + \\ & \qquad \qquad \qquad\qquad \qquad \qquad 2 (\ln^- m_n)^{1+\lambda} + 1 \Bigg]
\\ & \leq K_\lambda \frac{1}{m_n} \E_\xi \Bigg[\bigg| \sum_{i=1}^{N_u} ( \sigma^{(2)}_n - L_{ui}^2)\bigg| ^2+ \Big( \ln^+\Big | \sum_{i=1}^{N_u} L_{ui} \Big|\Big) ^{2+2\lambda} \Bigg]+
\\ &\qquad K_\lambda \E_\xi \frac{1}{m_n}\bigg| \sum_{i=1}^{N_u} ( \sigma^{(2)}_n - L_{ui}^2)\bigg|^2 + K_\xi n ( (\ln^- m_n)^{1+\lambda}+1) \qquad (\mbox{by \eqref{cbrw3.4} and } 2ab \leq a^2+b^2)
\\ & \leq_{\eqref{cbrw3.4}} K_\lambda \frac{1}{m_n} \E_\xi \Big[ \sum_{i=1}^{N_u} \E_\xi ( \sigma^{(2)}_n - L_{ui}^2)^2\Big] + K_\lambda \frac{1}{m_n} \E_\xi \Big[ \sum_{i=1}^{N_u} \E_\xi\Big |L_{ui} \Big| \Big]+K_\xi n ( (\ln^- m_n)^{1+\lambda}+1)
\\ & \leq K_\xi n \big ( (\ln^- m_n)^{1+\lambda}+1\big);
\\ \E_\xi \K_6 & \leq_{\eqref{cbrw2.1} } \E_\xi \bigg(\frac{2}{m_n} \sum_{i=1}^{N_u} \E_\xi |L_{ui}|\bigg) \E_\xi \Big[K_\lambda |S_u|\big(1+ (\ln^+|S_u|)^{1+\lambda}+ (\ln s_n^2)^{1+\lambda} \big)\Big]
\\ &\leq _{\eqref{cbrw3.4} } K_\xi n \E_\xi \Big[ |S_u|^2+ |S_u|+ s_n^2 \Big] \leq K_\xi n^2 ;
\\ \E_\xi \K_7 & \leq \E_\xi \left(\Big(\ln(1+\frac{N_u}{m_n})\Big)^{1+\lambda}\frac{2}{m_n} \sum_{i=1}^{N_u} \E_\xi |L_{ui}|\right) \E_\xi |S_u|
\\ & \leq K_\xi n^2 \Big[ \E_\xi \frac{N_u}{m_n} \Big(\ln ^+ {N_u} \Big)^{1+\lambda} + (\ln^- m_n)^{1+\lambda}\Big];
\\ \E_\xi \K_8 & \leq K_\xi n^2 \big ( (\ln^- m_n)^{1+\lambda}+1\big)
\qquad ( \mbox{by a similar argument as in the estimation of } \E_\xi \mathbb{K}_5 ) .
\end{align*}
Combining the above estimates, we get that
\begin{equation}\label{cbrweq4.1}
\E_{\xi} {|\widehat{X}_n|} (\ln^+{{|\widehat{X}_n|}} )^{1+\lambda} \leq K_\xi n^2 \bigg ( \E_\xi \frac{\widehat{N}_n}{ m_n}\Big (\ln^+ {\widehat{N}_n}\Big)^{1+\lambda}+ (\ln^- m_n)^{1+\lambda}+1\bigg).
\end{equation}
This ends the proof of Lemma \ref{lem7}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{th2a} ]
The proof is almost the same as that of Proposition \ref{th1a}: we still use the decomposition (\ref{eqn-decomposition}), but with $I_n$ and $X_u$ defined by
(\ref{In2}) and (\ref{Xu2}), and Lemma \ref{lem7} instead of
Lemma \ref{lem6}, to prove that the series $ \sum_{n=0}^\infty (N_{2,n+1}-N_{2,n}) $ converges a.s., yielding that
$\{N_{2,n}\}$ converges a.s. to $$V_2=\sum_{n=1}^\infty (N_{2,n+1}-N_{2,n})+ N_{2,1}.$$
\end{proof}
\section{Proof of Theorem \ref{th1} }\label{sec3}
\subsection{A key decomposition }
For $u\in (\N^*)^k$ $(k\geq 0)$ and $n\geq 1$, write, for $B\subset \R$,
\begin{equation*}
Z_{n}(u,B)= \sum_{v \in \mathbb{T}_n(u)}\mathbf{1}_B(S_{uv}-S_u).
\end{equation*}
It can be easily seen that the law of $Z_{n}(u,B)$ under ${\P}_{\xi}$ is the same as that of $Z_n(B)$ under ${P}_{\theta^k\xi}$. Define
\begin{eqnarray*}
&&W_{n}(u,B) =Z_{n}(u,B)/\Pi_n(\theta^k\xi), \quad W_n(u,t) = W_n(u,(-\infty,t]), \\
&& W_{n}(B) =Z_{n}( B)/ \Pi_n, \quad W_n(t) =W_n((-\infty,t]).
\end{eqnarray*}
By definition, we have $\Pi_{n}(\theta^k\xi)=m_k\cdots m_{k+n-1}$, $Z_n(B)= Z_n(\varnothing, B)$, $W_n(B)=W_n(\varnothing,B)$, $W_n= W_n(\mathbb{R})$.
The following decomposition will play a key role in our approach: for $k\leq n$,
\begin{equation}\label{cbrweq11}
Z_n(B)=\sum_{u\in \mathbb{T}_k} Z_{n-k}(u, B-S_u).
\end{equation}
Remark that by our definition, for $u\in \T_k $, $$Z_{n-k}(u,B-S_u)=\sum_{v_1\cdots v_{n-k} \in \mathbb{T}_{n-k}(u) } \mathbf{1}_B(S_{uv_1\cdots v_{n-k}})$$ represents the number of descendants of $u$ at time $n$ situated in $B$.
For each $n$, we choose an integer $k_n<n$ as follows. Let $\beta$ be a real number such that
$
\max{\{\frac{2}{\lambda}, \frac{3}{\eta}\}}<\beta<\frac{1}{4},
$
and set $k_n=\lfloor n^{\beta}\rfloor$, the integer part of $n^{\beta}$.
Then, on the basis of \eqref{cbrweq11}, we have the following decomposition:
\begin{equation}\label{cbrweq12}
\Pi_n^{-1} Z_n(s_n t) - \Phi(t) W=\mathds{A}_n+\mathds{B}_n+ \mathds{C}_n,
\end{equation}
where
\begin{eqnarray*}
\mathds{A}_n &=& \frac{1}{\Pi_{k_n }} \sum_{u\in \mathbb{T}_{k_n}}
\left[W_{n-k_n}(u,s_n t-S_u)- {\E}_{\xi,k_n} W_{n-k_n}(u,s_n t-S_u) \right],\\
\mathds{B}_n&=&\frac{1}{\Pi_{k_n }} \sum_{u\in \mathbb{T}_{k_n}}\left [{\E}_{\xi,k_n} W_{n-k_n}(u,s_n t-S_u)
-\Phi (t)\right],
\\ \mathds{C}_n&=& (W_{k_n}-W)\Phi(t).
\end{eqnarray*}
Here we recall that the random variables $W_{n-k_n}(u,s_n t-S_u)$, $u\in \T_{k_n}$, are independent of each other under the conditional probability $\P_{\xi, k_n}$.
\subsection{Proof of Theorem \ref{th1}}
First, observe that the condition $ \E m_0^{-\delta}<\infty$ implies that $\E \big(\ln^- m_0\big)^{ \kappa }<\infty$ for all $\kappa>0$.
So the hypotheses of Propositions \ref{th1a} and \ref{th2a} are satisfied under the conditions of Theorem \ref{th1}.
By virtue of the decomposition \eqref{cbrweq12}, we shall divide the proof into three lemmas.
\begin{lem}\label{lem1}
Under the hypothesis of Theorem \ref{th1},
\begin{equation}\label{eq6}
\sqrt{n}\mathds{A}_n \xrightarrow{n \rightarrow \infty } 0 \mbox{ a.s.}
\end{equation}
\end{lem}
\begin{lem}\label{lem2} Under the hypothesis of Theorem \ref{th1},
\begin{equation}\label{eq7}
\sqrt{n} \mathds{B}_n \xrightarrow{n \rightarrow \infty } \frac{1}{6}{\E \sigma_0^{(3)}}{(\E \sigma_0^{(2)})^{-\frac{3}{2}} }(1-t^2)\phi(t) W-(\E \sigma_0^{(2)})^{-\frac{1}{2}}\phi(t) \, V_1 \mbox{ a.s. }
\end{equation}
\end{lem}
\begin{lem}\label{lem3} Under the hypothesis of Theorem \ref{th1},
\begin{equation}\label{eq8}
\sqrt{n}\mathds{C}_n \xrightarrow{n \rightarrow \infty } 0 \mbox{ a.s. }
\end{equation}
\end{lem}
We now prove these lemmas in turn.
\begin{proof}[Proof of Lemma \ref{lem1}]
For ease of notation, we define for $|u|=k_n$,
\begin{align*}
& X_{n,u}= W_{n-k_n}(u,s_n t-S_u) - \E_{\xi, k_n}W_{n-k_n}(u,s_n t-S_u), ~~\bar{X}_{n,u} = X_{n,u} \mathbf{1}_{\{|X_{n,u}| <\Pi_{k_n}\}}, \\
& \bar{A}_{n} = \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n} } \bar{X}_{n,u}.
\end{align*}
Then we see that $ |X_{n,u}|\leq W_{n-k_n}(u)+1$.
To prove Lemma \ref{lem1}, we will use the extended Borel--Cantelli lemma. The required result will follow once we prove that for all $\varepsilon>0$,
\begin{equation}\label{eq10}
\sum_{n=1}^{\infty} \P_{k_n} (|\sqrt{n} A_n| >2 \varepsilon) <\infty.
\end{equation}
Notice that \begin{eqnarray*}
&& \P_{k_n}(|A_n| >2\frac{\varepsilon}{\sqrt{n}}) \\
&\leq& \P_{k_n} (A_n\neq \bar{A}_n ) +\P_{k_n} (|\bar{A}_n - \E_{\xi, k_n} \bar{A}_n| > \frac{\varepsilon}{\sqrt{n}} ) +\P_{k_n} ( |\E_{\xi, k_n} \bar{A}_n| >\frac{\varepsilon}{\sqrt{n}}).
\end{eqnarray*}
We will proceed in three steps.\\
{\bf Step 1.} We first prove that
\begin{equation}\label{cbrweq3-7}
\sum_{n=1}^{\infty}{\P}_{k_n} (A_n\neq \overline{A}_n) <\infty.
\end{equation}
To this end, define $$W^*=\sup_n W_n,$$
and we need the following result:
\begin{lem}(\cite[Th. 1.2]{LiangLiu10})\label{lem5}
Assume \eqref{cbrweq1} for some $\lambda>0$ and $\E m_0^{-\delta}<\infty $ for some $\delta>0$. Then
\begin{equation}\label{cbrweq17a}
{\E}(W^*+1)(\ln (W^*+1))^{\lambda} <\infty.
\end{equation}
\end{lem}
We observe that
\begin{align*}
\P_{k_n}(A_n\neq \overline{A}_n )&\leq \sum_{u\in \T_{k_n} } \P_{k_n}(X_{n,u}\neq \overline{X}_{n,u} ) = \sum_{u\in \T_{k_n} } \P_{k_n}(|X_{n,u}|\geq \Pi_{k_n}) \\&\leq \sum_{u\in \T_{k_n} } \P_{k_n} ( W_{n-k_n} (u)+1 \geq \Pi_{k_n})\\ &= W_{k_n} \Big[r_n \P( W_{n-k_n}+1 \geq r_n )\Big]_{r_n=\Pi_{k_n}}
\\&\leq W_{k_n} \Big[\E\big((W_{n-k_n}+1 ) \mathbf{1}_{ \{W_{n-k_n}+1 \geq r_n\}} \big)\Big]_{r_n=\Pi_{k_n}}
\\ &\leq W_{k_n} \Big[\E\big((W^*+1 ) \mathbf{1}_{ \{W^*+1 \geq r_n\}} \big)\Big]_{r_n=\Pi_{k_n}}
\\ & \leq W^*(\ln \Pi_{k_n})^{-\lambda}\E (W^*+1)(\ln (W^*+1))^{\lambda}
\\ &\leq K_\xi W^* n^{-\lambda \beta} \E (W^*+1)(\ln (W^*+1))^{\lambda},
\end{align*}
where the last inequality holds since
\begin{equation}\label{cbrweq4.9}
\frac{1}{n} \ln \Pi_{n} \rightarrow \E\ln m_0>0 \mbox{ a.s. },
\end{equation}
and $k_n\sim n^{\beta}$.
By the choice of $\beta$ and Lemma \ref{lem5}, we obtain \eqref{cbrweq3-7}.
\medskip
\noindent {\bf Step 2}. We next prove that $\forall \varepsilon>0$,
\begin{equation}\label{cbrweq3-8}
\sum_{n=1}^{\infty}{\P}_{k_n} ( |\overline{A}_n -{\E}_{\xi,k_n} \overline{A}_n|>\frac{ \varepsilon}{\sqrt{n}}) <\infty.
\end{equation}
Take a constant $b \in (1, e^{\E\ln m_0})$. Observe that $\forall u\in \T_{k_n}, n\geq 1$,
\begin{eqnarray*}
\E_{k_n} \bar{X}^2_{n,u} &=& \int_{0}^\infty 2x\P_{k_n} (|\bar{X}_{n,u}|>x ) dx
= 2\int_0^\infty x\P_{k_n} ( |{X}_{n,u} | \mathbf{1 }_{ \{|{X}_{n,u} |<\Pi_{k_n} \}} >x ) dx\\
&\leq & 2\int_0^{\Pi_{k_n}} x\P_{k_n} ( | W_{n-k_n}(u)+1 | >x) dx= 2\int_0^{\Pi_{k_n}} x\P ( | W_{n-k_n}+1 | >x) dx \\
&\leq& 2 \int_0^{\Pi_{k_n} } x\P( W^*+1 >x) dx \\
& \leq& 2 \int_e^{\Pi_{k_n} } (\ln x)^{-\lambda} \E (W^*+1)(\ln(W^*+1) )^{\lambda} dx + 9\\
&\leq & 2 \E (W^*+1)(\ln(W^*+1) )^{\lambda} \left(\int_e^{b^{k_n}} (\ln x)^{-\lambda} dx + \int_{b^{k_n}}^{\Pi_{k_n}} (\ln x)^{-\lambda} dx \right)+9\\
& \leq & 2 \E (W^*+1)(\ln(W^*+1) )^{\lambda}(b^{k_n} + (\Pi_{k_n}-b^{k_n}) (k_n\ln b )^{-\lambda} )+9.
\end{eqnarray*}
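In the last two inequalities above, we used the elementary bounds (valid for $n$ large, so that $b^{k_n}\geq e$) $(\ln x)^{-\lambda}\leq 1$ for $x\geq e$, and $(\ln x)^{-\lambda}\leq (k_n \ln b)^{-\lambda}$ for $x\geq b^{k_n}$, which yield
\begin{equation*}
\int_e^{b^{k_n}} (\ln x)^{-\lambda} dx \leq b^{k_n}, \qquad \int_{b^{k_n}}^{\Pi_{k_n}} (\ln x)^{-\lambda} dx \leq (\Pi_{k_n}-b^{k_n}) (k_n\ln b )^{-\lambda}.
\end{equation*}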
Then we have that
\begin{align*}
&~ ~~\sum_{n=1}^{\infty} {\P}_{k_n}(|\overline{A}_n-{\E}_{\xi,k_n} \overline{A}_n|>\frac{\varepsilon}{\sqrt{n}})\\ &=
\sum_{n=1}^{\infty} {\E}_{k_n}{\P}_{\xi,k_n} (|\overline{A}_n-{\E}_{\xi,k_n} \overline{A}_n|>\frac{\varepsilon}{\sqrt{n}}) \\
&\leq \varepsilon^{-2}\sum_{n=1}^{\infty} n{\E}_{k_n}\left( {\Pi_{k_n}^{-2}} \sum_{u\in \T_{k_n}}{\E}_{\xi,k_n} \overline{X}_{n,u}^2\right)= \varepsilon^{-2} \sum_{n=1}^{\infty}n \left( {\Pi_{k_n}^{-2} } \sum_{u\in \T_{k_n}}{\E}_{k_n} \overline{X}_{n,u}^2\right)\\
& \leq \varepsilon^{-2} \sum_{n=1}^{\infty} \frac{nW_{k_n}}{\Pi_{k_n}} \big [2 \E (W^*+1)(\ln(W^*+1))^{\lambda} (b^{k_n} + (\Pi_{k_n}-b^{k_n}) (k_n\ln b )^{-\lambda} )+9\big ]\\
& \leq 2\varepsilon^{-2}W^* \E (W^*+1)(\ln(W^*+1))^{\lambda} \bigg( \sum_{n=1}^{\infty} \frac{n}{\Pi_{k_n}}b^{k_n} + \sum_{n=1}^{\infty} n (k_n\ln b )^{-\lambda} \bigg) +9\varepsilon^{-2}W^* \sum_{n=1}^{\infty} \frac{n}{\Pi_{k_n}}.
\end{align*}
By \eqref{cbrweq4.9} and $\lambda \beta >2$, the three series in the last expression above converge under our hypothesis and hence \eqref{cbrweq3-8} is proved.
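More precisely, since $b<e^{\E\ln m_0}$ and $\frac{1}{k_n}\ln\Pi_{k_n}\rightarrow \E\ln m_0$ a.s. by \eqref{cbrweq4.9}, the terms $n b^{k_n}/\Pi_{k_n}$ and $n/\Pi_{k_n}$ decay faster than any power of $n$, while
\begin{equation*}
n (k_n\ln b )^{-\lambda} = O\big( n^{1-\lambda\beta}\big),
\end{equation*}
which is summable since $\lambda\beta>2$.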
\medskip
\noindent {\bf Step 3.} Finally, observe that
\begin{eqnarray*}
& & \P_{k_n} \Bigg(| \E_{\xi,k_n} \bar{A}_n | > \frac{\varepsilon}{\sqrt{n}} \Bigg ) \\
& \leq & \frac{\sqrt{n}}{\varepsilon} \E_{k_n} | \E_{\xi,k_n} \bar{A}_n |
= \frac{\sqrt{n}}{\varepsilon} \E_{k_n} \Big| \frac{1}{\Pi_{k_n} } \sum_{u\in \T_{k_n}} \E_{\xi,k_n} \bar{X}_{n,u} \Big|\\ & = &\frac{\sqrt{n}}{\varepsilon} \E_{k_n} \Big| \frac{1}{\Pi_{k_n} } \sum_{u\in \T_{k_n}} (- \E_{\xi,k_n } X_{n,u} \mathbf{1}_{ \{ |X_{n,u}| \geq \Pi_{k_n}\}} ) \Big|
\\ & \leq & \frac{\sqrt{n}}{\varepsilon} \frac{1}{\Pi_{k_n} } \sum_{u\in \T_{k_n}} \E_{k_n } ( W_{n-k_n} (u) +1) \mathbf{1}_{ \{ W_{n-k_n}(u)+1 \geq \Pi_{k_n}\}} \\ & = & \frac{\sqrt{n}W_{k_n}}{\varepsilon} \Big[ \E ( W_{n-k_n} +1) \mathbf{1}_{ \{ W_{n-k_n}+1 \geq r_n\}} \Big]_{r_n =\Pi_{k_n}}\\ &\leq& \frac{W^*}{\varepsilon}\sqrt{n} \Big[ \E ( W^* +1) \mathbf{1}_{ \{ W^*+1 \geq r_n\}} \Big]_{r_n =\Pi_{k_n}}\\
&\leq &\frac{W^*}{\varepsilon} \frac{\sqrt{n}}{(\ln\Pi_{k_n})^{\lambda} } \E (W^*+1) \ln ^{\lambda } ( W^*+1 )
\\ & \leq & \frac{W^*}{\varepsilon} K_\xi n^{\frac{1}{2} -\lambda\beta} \E (W^*+1) \ln ^{\lambda } ( W^*+1 ).
\end{eqnarray*}
Then by \eqref{cbrweq4.9} and $ \lambda \beta >2$, it follows that \[ \sum_{n=1}^\infty \P_{k_n} \Bigg(| \E_{\xi,k_n} \bar{A}_n | > \frac{\varepsilon}{\sqrt{n}} \Bigg )<\infty. \]
Combining Steps 1-3, we obtain \eqref{eq10}. Hence the lemma is proved.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem2} ]
For ease of notation, set \[ D_1(t)= (1-t^2) \phi(t), \qquad \kappa_{1,n}= \frac{s_n^{(3)} -s_{k_n}^{(3)} }{6 (s_n^2-s_{k_n}^2 )^{3/2}}.\]
Observe that
\begin{equation}\label{cbrweq3-16}
\mathds{B}_n=\mathds{B}_{n1}+\mathds{B}_{n2}+\mathds{B}_{n3}+\mathds{B}_{n4},
\end{equation}
where
\begin{align*}
\mathds{B}_{n1} &= \frac{1}{\Pi_{k_n}} \sum_{u\in\T_{k_n}} \Bigg( \E_{\xi,k_n} W_{n-k_n} (u,s_n t-
S_u)- \Phi\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg)-\kappa_{1,n} D_1\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg)\Bigg);
\\ \mathds{B}_{n2} & = \frac{1}{\Pi_{k_n}} \sum_{u\in\T_{k_n}}\left( \Phi\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg)-\Phi(t) \right);\\
\mathds{B}_{n3}&=\kappa_{1,n}\frac{1}{\Pi_{k_n}} \sum_{u\in\T_{k_n}} \left( D_1\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg)-D_1(t) \right);\\
\mathds{B}_{n4}&=\kappa_{1,n} D_1(t) W_{k_n}.
\end{align*}
Then the lemma will be proved once we show that
\begin{align}
\label{cbrweq3-17} & \sqrt{n}\mathds{B}_{n1} \xrightarrow{n\rightarrow\infty} 0; \\
\label{cbrweq3-18} & \sqrt{n}\mathds{B}_{n2} \xrightarrow{n\rightarrow\infty} -(\E \sigma_0^{(2)})^{-\frac{1}{2}}\phi(t)V_1 ; \\
\label{cbrweq3-19} & \sqrt{n}\mathds{B}_{n3} \xrightarrow{n\rightarrow\infty} 0; \\
\label{cbrweq3-20} & \sqrt{n}\mathds{B}_{n4} \xrightarrow{n\rightarrow\infty} \frac{1}{6}{\E \sigma_0^{(3)}}{(\E \sigma_0^{(2)})^{-\frac{3}{2}} }D_1(t) W.
\end{align}
We will prove these results subsequently.
We first prove \eqref{cbrweq3-17}. The proof is mainly based on the following result on the asymptotic expansion of the distribution of a sum of independent random variables:
\begin{prop} \label{prop4.5}
Under the hypothesis of Theorem \ref{th1}, for a.e. $\xi$,
\begin{equation*}
\epsilon_n=n^{1/2}\sup_{x\in \R}\Bigg|\P_{\xi } \bigg(\frac{\sum_{k=k_n}^{n-1}\widehat{L}_{k}}{( s_n^2-s_{k_n}^2)^{1/2}} \leq x \bigg)- \Phi (x) -\kappa_{1,n}D_1(x) \Bigg| \xrightarrow{n\rightarrow\infty} 0.
\end{equation*}
\end{prop}
\begin{proof}
Let $X_k=0$ for $0\leq k \leq k_n-1$ and $X_k= \widehat{L}_{k}$ for $k_n\leq k \leq n-1$. Then the random variables $\{X_k\}$ are independent under $\P_\xi$. Denote by $ v_k(\cdot)$ the characteristic function of $X_k$: $ v_k(t):= \E_\xi e^{it X_k} $. Using the Markov inequality and Lemma \ref{lem-Edge-exp}, we obtain the following result:
\begin{align*}
&\sup_{x\in \R} \Bigg|\P_{\xi } \bigg(\frac{\sum_{k=k_n}^{n-1}\widehat{L}_{k}}{( s_n^2-s_{k_n}^2)^{1/2}} \leq x \bigg)- \Phi (x) -\kappa_{1,n}D_1(x) \Bigg| \\
\leq & K_\xi \left\{ (s_n^2-s_{k_n}^2 )^{-2} \sum_{j=k_n }^{n-1} \E_\xi | \widehat{L}_{j}|^4+ n^6 \left(\sup_{|t| >T } \frac{1}{n} \bigg( k_n+ \sum_{j=k_n}^{n-1} |v_j(t)| \bigg)+ \frac{1}{2n}\right)^n\right\}.
\end{align*}
By our conditions on the environment, we know that
\begin{equation}\label{cbrweq3.18}
\lim_{n\rightarrow \infty} n {(s_n^2-s_{k_n}^2)^{-2}} \sum_{j=k_n}^{n-1} \E_\xi |\widehat{L}_j|^4 = \E |\widehat{L}_0|^4/ (\E \sigma_0^{(2)})^2.
\end{equation}
By \eqref{cbrweq2-3}, $\widehat{L}_n $
satisfies
\begin{equation*}
\P\Big( \limsup_{|t|\rightarrow\infty}|v_n(t)|<1 \Big) >0.
\end{equation*}
So there exists a constant $c_n \leq 1$ depending on $\xi_n$ such that
\begin{equation*}
\sup_{|t|> T} |v_n(t)| \leq c_n\quad \mbox{ and } \quad \P(c_n <1) >0.
\end{equation*}
Then $\E c_0 <1$. By the Birkhoff ergodic theorem, we have
\begin{align*}
\sup_{|t|> T} \bigg( \frac{1}{n}\sum_{j=k_n}^{n-1} |v_j(t)|\bigg) &\leq \frac{1}{n}\sum_{j=1}^{n-1} c_j \rightarrow \E c_0<1.
\end{align*}
Then for $n$ large enough,
\begin{equation}\label{cbrweq3-22a}
\left(\sup_{|t| >T } \frac{1}{n} \bigg( k_n+ \sum_{j=k_n}^{n-1} |v_j(t)| \bigg)+ \frac{1}{2n}\right)^n=o( n^{-m}), \quad \forall m >0.
\end{equation}
From \eqref{cbrweq3.18} and \eqref{cbrweq3-22a}, we get the conclusion of the proposition.
\end{proof}
From Proposition \ref{prop4.5}, it is easy to see that
\begin{equation*}
\sqrt{n} |\mathds{B}_{n1} | \leq W_{k_n} \epsilon_n \xrightarrow{n\rightarrow\infty} 0.
\end{equation*}
Hence \eqref{cbrweq3-17} is proved.
We next prove \eqref{cbrweq3-18}.
Observe that
\begin{align*}
&\mathds{B}_{n2} = \mathds{B}_{n21}+ \mathds{B}_{n22}+ \mathds{B}_{n23} + \mathds{B}_{n24}+\mathds{B}_{n25}, \\
\mbox{with}~~ &\mathds{B}_{n21} = \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} \left[\Phi \bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg) -\Phi(t) - \phi(t) \bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}- t\bigg)\right]\mathbf{1}_{\{|S_u|\leq k_n\}},\\
& \mathds{B}_{n22} = \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} \left[\Phi \bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg) -\Phi(t) \right]\mathbf{1}_{\{|S_u|>k_n\}},\\
& \mathds{B}_{n23}=- \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}}\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}- t\bigg)\phi(t)\mathbf{1}_{\{|S_u|>k_n\}},\\
& \mathds{B}_{n24}= \frac{1}{( s_n^2-s_{k_n}^2)^{1/2}}\big(s_n- ( s_n^2-s_{k_n}^2)^{1/2}\big) W_{k_n} \phi(t)t,\\
& \mathds{B}_{n25}= -\frac{1}{( s_n^2-s_{k_n}^2)^{1/2}} \phi(t) N_{1,k_n}.
\end{align*}
By Taylor's formula and the choice of $\beta$ and $k_n$, we get
\begin{align*}
\widetilde{\epsilon}_n= & \sqrt{n}\sup_{|y|\leq k_n} \left| \Phi \bigg( \frac{s_n t-y}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg) -\Phi(t) - \phi(t) \bigg( \frac{s_n t-y}{( s_n^2-s_{k_n}^2)^{1/2}}- t\bigg) \right| \\
& \leq \sqrt{n }\sup_{|y| \leq k_n}\bigg| \frac{s_n t-y}{( s_n^2-s_{k_n}^2)^{1/2}}- t\bigg| ^2 \xrightarrow{n\rightarrow\infty} 0.
\end{align*}
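The last convergence can be seen as follows: writing $\frac{s_n t-y}{( s_n^2-s_{k_n}^2)^{1/2}}- t = t\,\frac{s_{k_n}^2}{( s_n^2-s_{k_n}^2)^{1/2}\big(s_n+( s_n^2-s_{k_n}^2)^{1/2}\big)} - \frac{y}{( s_n^2-s_{k_n}^2)^{1/2}}$ and using that $s_n^2\sim n\,\E\sigma_0^{(2)}$ a.s. by the Birkhoff ergodic theorem, we get
\begin{equation*}
\sup_{|y|\leq k_n}\bigg| \frac{s_n t-y}{( s_n^2-s_{k_n}^2)^{1/2}}- t\bigg| = O\big( n^{\beta-\frac{1}{2}}\big) \quad \mbox{a.s.},
\end{equation*}
so that $\widetilde{\epsilon}_n = O\big( n^{\frac{1}{2}+2\beta-1}\big) = O\big( n^{2\beta-\frac{1}{2}}\big)\rightarrow 0$, since $\beta<\frac{1}{4}$.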
Thus
\begin{equation}\label{cbrweq3.20}
|\sqrt{n} \mathds{B}_{n21}| \leq W_{k_n} \widetilde{\epsilon}_n \xrightarrow{n\rightarrow\infty} 0.
\end{equation}
We continue to prove that
\begin{equation}\label{cbrweq3.21}
\sqrt{ n}\mathds{B}_{n22} \xrightarrow{n\rightarrow\infty} 0; \qquad \sqrt{n}\mathds{B}_{n23}\xrightarrow{n\rightarrow\infty} 0.
\end{equation}
This will follow from the facts:
\begin{equation}\label{cbrweq3.22}
\frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} |S_u|\mathbf{1}_{\{|S_u|>k_n\}} \xrightarrow{n\rightarrow \infty} 0 \mbox{~ a.s.};~~ \sqrt{n} \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|>k_n\}} \xrightarrow{n\rightarrow \infty} 0 ~ \mbox{a.s.}
\end{equation}
In order to prove \eqref{cbrweq3.22}, we first observe that
\begin{align*}
&\E \left( \sum_{n=1}^\infty \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} |S_u|\ind{|S_u|>k_n} \right)\\ = \ &\sum_{n=1}^\infty \E |\widehat{S}_{k_n}| \ind{|\widehat{S}_{k_n}|>k_n } \leq\ \sum_{n=1}^\infty k_n^{1-\eta} \E|\widehat{S}_{k_n}|^{\eta}
\leq\sum_{n=1}^\infty k_n^{-\frac{\eta}{2}} \sum_{j=0}^{k_n-1} \E |\widehat{L}_j|^{\eta}
=\sum_{n=1}^\infty k_n^{1-\frac{\eta}{2}}\E |\widehat{L}_0|^{\eta}, \\&
\E \left( \sum_{n=1}^\infty \sqrt{n} \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} \ind{|S_u|>k_n} \right)
\\ = \ &\sum_{n=1}^\infty \sqrt{n} \E \ind{|\widehat{S}_{k_n}|>k_n } \leq\ \sum_{n=1}^\infty \sqrt{n} k_n^{-\eta} \E|\widehat{S}_{k_n}|^{\eta}
\leq\sum_{n=1}^\infty \sqrt{n} k_n^{-\frac{\eta}{2}-1} \sum_{j=0}^{k_n-1} \E |\widehat{L}_j|^{\eta}
=\sum_{n=1}^\infty n^{\frac{1}{2}} k_n^{-\frac{\eta}{2}}\E |\widehat{L}_0|^{\eta}.
\end{align*}
The assumptions on $\beta$, $k_n$ and $\eta$ ensure that the series on the right-hand side of the above two expressions converge.
Hence $$ \sum_{n=1}^\infty \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} |S_u|\ind{|S_u|>k_n} <\infty, \quad \sum_{n=1}^\infty \sqrt{n} \frac{1}{\Pi_{k_n}} \sum_{u\in \T_{k_n}} \ind{|S_u|>k_n} <\infty\mbox{ ~ a.s.,} $$
which implies \eqref{cbrweq3.22}; consequently, \eqref{cbrweq3.21} is proved.
By the Birkhoff ergodic theorem, we have
\begin{equation}\label{cbrweq3.23}
\lim_{n\rightarrow \infty} \frac{s_n^2}{n} = \E \sigma_0^{(2)},
\end{equation}
whence by the choice of $\beta<1/4$ and the conditions on the environment,
\begin{equation}\label{cbrweq3.24}
\sqrt{n} \mathds{B}_{n24}= \frac{\sqrt{n}}{( s_n^2-s_{k_n}^2)^{1/2}}\frac{s_{k_n}^2}{s_n+ ( s_n^2-s_{k_n}^2)^{1/2}} W_{k_n} \phi(t)t\xrightarrow{n\rightarrow\infty} 0.
\end{equation}
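In fact, by the Birkhoff ergodic theorem, $s_{k_n}^2\sim k_n\,\E\sigma_0^{(2)}$ and $s_n^2-s_{k_n}^2\sim n\,\E\sigma_0^{(2)}$ a.s., so that the deterministic factor in front of $W_{k_n}\phi(t)t$ in \eqref{cbrweq3.24} is of order
\begin{equation*}
\frac{\sqrt{n}\, k_n}{\sqrt{n}\cdot \sqrt{n}} = O\big(n^{\beta-\frac{1}{2}}\big) \xrightarrow{n\rightarrow\infty} 0 .
\end{equation*}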
Due to Proposition \ref{th1a} and \eqref{cbrweq3.23}, we conclude that
\begin{equation}\label{cbrweq3.25}
\sqrt{n}\mathds{B}_{n25} \xrightarrow{n\rightarrow \infty}-(\E \sigma_0^{(2)})^{-\frac{1}{2}}\phi(t)V_1~~~ \mbox{a.s.}
\end{equation}
From \eqref{cbrweq3.20}, \eqref{cbrweq3.21}, \eqref{cbrweq3.24} and \eqref{cbrweq3.25}, we derive \eqref{cbrweq3-18}.
Now we turn to the proof of \eqref{cbrweq3-19}.
According to the hypothesis of Theorem \ref{th1}, it follows from the Birkhoff ergodic theorem that
\begin{equation}\label{cbrweq3-26}
\lim_{n\rightarrow \infty } \sqrt{n}\kappa_{1,n}= \frac{1}{6} (\E \sigma_0^{(2)})^{-3/2} \E \sigma_0^{(3)}.
\end{equation}
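Indeed, since $k_n/n\rightarrow 0$, the Birkhoff ergodic theorem gives $\frac{1}{n}\big(s_n^{(3)}-s_{k_n}^{(3)}\big)\rightarrow \E\sigma_0^{(3)}$ and $\frac{1}{n}\big(s_n^{2}-s_{k_n}^{2}\big)\rightarrow \E\sigma_0^{(2)}$ a.s., so that
\begin{equation*}
\sqrt{n}\,\kappa_{1,n}= \frac{ n^{-1}\big(s_n^{(3)} -s_{k_n}^{(3)}\big) }{6\, \big( n^{-1}(s_n^2-s_{k_n}^2) \big)^{3/2}} \xrightarrow{n\rightarrow\infty} \frac{\E \sigma_0^{(3)}}{6\,\big(\E \sigma_0^{(2)}\big)^{3/2}} \quad \mbox{a.s.}
\end{equation*}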
Notice that
\begin{eqnarray*}
&& \left|\frac{1}{\Pi_{k_n}} \sum_{u\in\T_{k_n}} \left( D_1\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg)-D_1(t) \right)\right| \\
&\leq & \frac{2}{\Pi_{k_n}} \sum_{u\in\T_{k_n}} \mathbf{1}_{\{|S_u|>k_n\}} + \frac{1}{\Pi_{k_n}} \sum_{u\in\T_{k_n}} \left| D_1\bigg( \frac{s_n t-S_u}{( s_n^2-s_{k_n}^2)^{1/2}}\bigg)-D_1(t) \right|\mathbf{1}_{\{|S_u|\leq k_n\}}.
\end{eqnarray*}
The first term in the last expression above tends to 0 a.s. by \eqref{cbrweq3.22}, and the second one tends to 0 a.s. because the martingale $\{W_n\}$ converges and
\begin{equation*}
\sup_{|y|\leq k_n} \left|D_1\Bigg( \frac{s_n t-y}{( s_n^2-s_{k_n}^2)^{1/2}}\Bigg)-D_1(t)\right | \xrightarrow{n\rightarrow \infty} 0 .
\end{equation*}
Combining the above results, we obtain \eqref{cbrweq3-19}.
It remains to prove \eqref{cbrweq3-20}, which is immediate from \eqref{cbrweq3-26} and the fact that $W_n\xrightarrow{n\rightarrow\infty} W$.
This completes the proof of Lemma \ref{lem2}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem3}]
This lemma follows from the following result given in \cite{HuangLiu}.
\begin{prop}[ \cite{HuangLiu} ]\label{pro3}
Assume the condition \eqref{cbrweq1}.
Then
\begin{equation*}
W-W_n=o(n^{-\lambda})\qquad a.s.
\end{equation*}
\end{prop}
By the choice of $\beta$ and $ k_n$, we see that
\begin{equation*}
\sqrt{n}(W-W_{k_n}) =o(n^{\frac{1}{2} -\lambda\beta}) \xrightarrow{n\rightarrow\infty} 0.
\end{equation*}
\end{proof}
Now Theorem \ref{th1} follows from the decomposition \eqref{cbrweq12} and Lemmas \ref{lem1} -- \ref{lem3}.
\section{Proof of Theorem \ref{th2}}\label{sec4}
We follow a procedure similar to that used in the proof of Theorem \ref{th1}.
Recall that $\lambda,\eta>16 $ in the current setting. Hereafter we choose $\max \{ \frac{4}{\lambda}, \frac{4}{\eta}\}<\beta <\frac{1}{4} $ and let $k_n= \lfloor n^{\beta}\rfloor $ (the integer part of $n^{\beta}$).
By \eqref{cbrweq11}, we have
\begin{equation}\label{cbrweq4-1}
\sqrt{2\pi} s_n \Pi_n^{-1} Z_n(A) -W \int_A \exp \{ -\frac{x^2}{2s_n^2}\} dx = \Lambda_{1,n} +\Lambda_{2,n }+\Lambda_{3,n} ,
\end{equation}
\begin{eqnarray*}
\mbox{~~ with ~~ } \Lambda_{1,n} &=& \sqrt{2\pi} s_n \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \bigg( W_{n-k_n} (u, A-S_u) - \E_{\xi,k_n} W_{n-k_n} (u, A-S_u) \bigg); \\
\Lambda_{2,n} &=& \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \bigg ( \sqrt{2\pi} s_n \E_{\xi,k_n} W_{n-k_n} (u, A-S_u) - \int_A \exp \{ -\frac{x^2}{2s_n^2}\} dx \bigg ); \\ \Lambda_{3,n} &=& ( W_{k_n} -W) \int_A \exp \{ -\frac{x^2}{2s_n^2}\} dx .
\end{eqnarray*}
On the basis of this decomposition, we divide the proof of Theorem \ref{th2} into the following lemmas.
\begin{lem}\label{lem4-1}
Under the hypothesis of Theorem \ref{th2}, a.s.
\begin{equation}\label{eq4-6}
{n}\Lambda_{1,n} \xrightarrow{n \rightarrow \infty } 0.
\end{equation}
\end{lem}
\begin{lem}\label{lem4-2} Under the hypothesis of Theorem \ref{th2}, a.s.
\begin{multline}\label{eq4-7}
{n} \Lambda_{2,n} \xrightarrow{n \rightarrow \infty } ( \E \sigma_0^{(2)})^{-1}(\frac{1}{2} V_2 +\overline{x}_A V_1) |A|+ \frac{1}{2}{\E \sigma_0^{(3)} } ( \E \sigma_0^{(2)})^{-2}(V_1-\overline{x}_A W )|A| \\
~+\frac{1}{8} (\E \sigma_0^{(2)} )^{-2}\E(\sigma_0^{(4)}-3(\sigma_0^{(2)} )^2 ) W |A| - \frac{5}{24}( \E \sigma_0^{(2)} )^{-3}(\E\sigma_0^{(3)})^2 W |A| .
\end{multline}
\end{lem}
\begin{lem}\label{lem4-3} Under the hypothesis of Theorem \ref{th2}, a.s.
\begin{equation}\label{eq4-8}
{n}\Lambda_{3,n} \xrightarrow{n \rightarrow \infty } 0.
\end{equation}
\end{lem}
We now prove these lemmas in turn.
\begin{proof}[Proof of Lemma \ref{lem4-1}]
The proof of Lemma \ref{lem4-1} follows the same procedure as that of Lemma \ref{lem1} with minor changes in scaling. We omit the details.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem4-2}]
We start the proof by introducing some notation: set
\begin{align*}
\kappa_{1, n} = &{\frac{1}{6} (s_n^2-s_{k_n}^2 )^{-3/2} }{(s_n^{(3)} - s_{k_n}^{(3)}) }, \quad \kappa_{2, n} = \frac{1}{72} (s_n^2-s_{k_n}^2 )^{-3} (s_n^{(3)} - s_{k_n}^{(3)})^2, \\
\kappa_{3, n} = &\frac{ 1}{24} (s_n^2-s_{k_n}^2 )^{-2} \sum_{j=k_n}^{n-1} \Big(\sigma_j^{(4)} -3\big(\sigma_j^{(2)}\big)^2 \Big).
\end{align*}
Define for $x\in \R$,
\begin{align*}
D_1(x)= &-H_2(x)\phi(x) , ~ D_2(x)= -H_5(x)\phi(x), ~
D_3(x)= -H_3(x) \phi(x),\\
R_n(x) = &-\frac{\Big(s_n^{(3)}-s_{k_n}^{(3)} \Big)^3}{1296 (s_n^2-s_{k_n}^2)^{9/2}} H_8(x) \phi(x) -\frac{\sum_{j=k_n}^{n-1} \Big(\sigma_j^{(5)} -10\sigma_j^{(3)}\sigma_j^{(2)} \Big)}{120 (s_n^2-s_{k_n}^2)^{5/2}}H_4(x) \phi(x)\\ & -\frac{ \Big(s_n^{(3)}-s_{k_n}^{(3)}\Big ) \sum_{j=k_n}^{n-1} \Big(\sigma_j^{(4)} -3\big(\sigma_j^{(2)}\big)^2 \Big)} { 144 (s_n^2-s_{k_n}^2)^{7/2}}H_6(x) \phi(x),
\end{align*}
where $H_m$ are Chebyshev-Hermite polynomials defined in \eqref{eqCH}.
We decompose $\Lambda_{2,n}$ into 7 terms:
\begin{align}\label{eq4-14}
\Lambda_{2,n}=&\Lambda_{2,n1} + \Lambda_{2,n2}+\Lambda_{2,n3}+\Lambda_{2,n4} +\Lambda_{2,n5}+\Lambda_{2,n6} + \Lambda_{2,n7},
\end{align}
where
\begin{align*}
\Lambda_{2,n1}=&~\sqrt{2\pi} s_n \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \Bigg[ \E_{\xi,k_n} W_{n-k_n} (u, A-S_u) - \int_A\Bigg( \phi\bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) \\
\nonumber & ~+ \sum_{\nu=1}^3\kappa_{\nu,n} D'_\nu \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) +R'_n\bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) \Bigg) \frac{ dx}{({s_n^2-s_{k_n}^2})^{1/2}} \Bigg] , \\
\nonumber \Lambda_{2,n2} =& ~ \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A \bigg[ \frac{s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \exp\{-\frac{(x-S_u)^2}{2({s_n^2-s_{k_n}^2})} \}- \exp\{-\frac{x^2}{2s_n^2}\} \bigg] dx , \\
\nonumber\Lambda_{2,n3} =&~\frac{ \sqrt{2\pi}\kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A D'_1 \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) dx,\\
\nonumber\Lambda_{2,n4} =& ~\frac{ \sqrt{2\pi}\kappa_{2,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}}\mathbf{1}_{\{|S_u|\leq k_n\}} \int_A D'_2 \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) dx,\\
\nonumber \Lambda_{2,n5} =& ~ \frac{ \sqrt{2\pi}\kappa_{3,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A D'_3 \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) dx,\\
\nonumber \Lambda_{2,n6} =& ~ \frac{ \sqrt{2\pi}s_n}{({s_n^2-s_{k_n}^2})^{1/2}}\Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A R'_n \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) dx, \\
\nonumber\Lambda_{2,n7}= &~ ~ \frac{ \sqrt{2\pi}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}}
\Bigg ( \int_A \bigg( \phi \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg)+R_n\bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) \\
\nonumber &~~+\sum_{\nu=1}^3\kappa_{\nu,n} D'_\nu \bigg( \frac{x-S_u}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg) - \Big({1- \frac{s^2_{k_n}}{s^2_n}}\Big)^{1/2} \phi(x/s_n) \bigg)dx \Bigg )\mathbf{1}_{\{|S_u|>k_n\}} .
\end{align*}
The lemma will follow once we prove that a.s.
\begin{align}
\label{eq4-15}
& n \Lambda_{2,n1} \xrightarrow{n \rightarrow \infty } 0, \\
\label{eq4-16} & n \Lambda_{2,n2} \xrightarrow{n \rightarrow \infty } ( \E \sigma_0^{(2)})^{-1}(\frac{1}{2} V_2 +\overline{x}_A V_1) |A|, \\
\label{eq4-17} & n \Lambda_{2,n3} \xrightarrow{n \rightarrow \infty } \frac{1}{2}{\E \sigma_0^{(3)} } ( \E \sigma_0^{(2)})^{-2}(V_1-\overline{x}_A W )|A|, \\
\label{eq4-18} & n \Lambda_{2,n4} \xrightarrow{n \rightarrow \infty } - \frac{5}{24}( \E \sigma_0^{(2)} )^{-3}(\E\sigma_0^{(3)})^2 W |A|, \\
\label{eq4-19} & n \Lambda_{2,n5} \xrightarrow{n \rightarrow \infty } \frac{1}{8} (\E \sigma_0^{(2)} )^{-2}\E(\sigma_0^{(4)}-3(\sigma_0^{(2)})^2 ) W |A|,\\
\label{eq4-20}
& n \Lambda_{2,n6} \xrightarrow{n \rightarrow \infty } 0,\\
\label{eq4-13}
& n \Lambda_{2,n7} \xrightarrow{n \rightarrow \infty } 0.
\end{align}
The proof of \eqref{eq4-15} is based on the following result on the asymptotic expansion of the distribution of the sum of independent random variables:
\begin{prop}\label{pro4} Under the hypothesis of Theorem \ref{th2}, for a.e. $\xi$,
\begin{equation*}
\epsilon_n=n^{3/2}\sup_{x\in \R}\Bigg|\P_{\xi} \Bigg(\frac{\sum_{k=k_n}^{n-1}\widehat{L}_{k}}{( s_n^2-s_{k_n}^2)^{1/2}} \leq x \Bigg)- \Phi (x) - \sum_{\nu=1}^{3} \kappa_{\nu,n}D_\nu(x) -R_n(x) \Bigg| \xrightarrow{n\rightarrow\infty} 0.
\end{equation*}
\end{prop}
\begin{proof}
Let $X_k=0$ for $0\leq k \leq k_n-1$ and $X_k= \widehat{L}_{k}$ for $k_n\leq k \leq n-1$. Then the random variables $\{X_k\}$ are independent under $P_\xi$. By Markov's inequality and Lemma \ref{lem-Edge-exp} we obtain the following result:
\begin{align*}
&\sup_{x\in \R} \Bigg|\P_{\xi} \bigg(\frac{\sum_{k=k_n}^{n-1}\widehat{L}_{k}}{{( s_n^2-s_{k_n}^2)^{1/2}}} \leq x \bigg)- \Phi (x) - \sum_{\nu=1}^{3} \kappa_{\nu,n}D_\nu(x) -R_n(x)\Bigg| \\
\leq & K_\xi \left\{ (s_n^2-s_{k_n}^2 )^{-3} \sum_{j=k_n }^{n-1} \E_\xi |\widehat{L}_{j}|^6+ n^{15} \left(\sup_{|t| >T } \frac{1}{n} \left( k_n+ \sum_{j=k_n}^{n-1} |v_j(t)| \right)+ \frac{1}{2n}\right)^n\right\}.
\end{align*}
By our conditions on the environment, we know that
\begin{equation}\label{cbrweq3-21}
\lim_{n\rightarrow \infty} n^{2} {(s_n^2-s_{k_n}^2)^{-3}} \sum_{j=k_n}^{n-1} \E_\xi |\widehat{L}_j|^6 = \E |\widehat{L}_0|^6/ (\E \sigma_0^{(2)})^{3}.
\end{equation}
The proposition now follows from \eqref{cbrweq3-21} and \eqref{cbrweq3-22a}.
\end{proof}
Using Proposition \ref{pro4}, we deduce that
\begin{align*}
|n\Lambda_{2,n1}| & \leq {\sqrt{2\pi} s_n}{n^{-\frac{1}{2}}} W_{k_n} \epsilon_n \xrightarrow{n \rightarrow \infty } 0,
\end{align*}
and \eqref{eq4-15} is proved.
Next we turn to the proof of \eqref{eq4-16}.
Using Taylor's expansion and the boundedness of the set $A$, together with the choice of $ \beta$ and $k_n$, we get that
\begin{equation*}
\frac{s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \exp\{-\frac{(x-y)^2}{2({s_n^2-s_{k_n}^2})} \}- \exp\{-\frac{x^2}{2s_n^2}\}= \frac{1}{2(s_n^2-s_{k_n}^2) } \big(s_{k_n}^2- y^2 +2 xy + o(1)\big ),
\end{equation*}
uniformly for all $|y| \leq k_n$ and $x\in A $ as $n\rightarrow \infty$.
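For the reader's convenience, here is a brief sketch of how this expansion arises (a supplementary remark with the local shorthand $s^2 := s_n^2 - s_{k_n}^2$, not part of the original argument):

```latex
% Supplementary sketch (shorthand $s^2 := s_n^2 - s_{k_n}^2$; all error
% terms are uniform in $|y|\leq k_n$ and $x \in A$, using $\beta < 1/4$):
\begin{align*}
\frac{s_n}{s} &= \Big( 1 + \frac{s_{k_n}^2}{s^2} \Big)^{1/2}
 = 1 + \frac{s_{k_n}^2}{2 s^2} + O\Big(\frac{s_{k_n}^4}{s^4}\Big), \\
\exp\Big\{ -\frac{(x-y)^2}{2 s^2} \Big\}
 &= \exp\Big\{ -\frac{x^2}{2 s_n^2} \Big\}
 \Big( 1 + \frac{2xy - y^2}{2 s^2} + o(s^{-2}) \Big).
\end{align*}
% Multiplying the two expansions, noting that
% $\exp\{-x^2/(2s_n^2)\} = 1 + O(s_n^{-2})$ on the bounded set $A$, and
% subtracting $\exp\{-x^2/(2s_n^2)\}$ gives
% $\frac{1}{2s^2}\big( s_{k_n}^2 - y^2 + 2xy + o(1) \big)$.
```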
By the same arguments as in the proof of \eqref{cbrweq3.22}, we can show that for $\eta>16$, with $ \beta, k_n$ chosen above,
\begin{equation}\label{cbrweq4.18}
n \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|>k_n\}} \xrightarrow{n\rightarrow \infty} 0 \quad \mbox{ and }\qquad \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}}S_u^2 \mathbf{1}_{\{|S_u|>k_n\}} \xrightarrow{n\rightarrow \infty} 0 \quad \mbox{ a.s.}
\end{equation}
Therefore as $n$ tends to infinity, we have a.s.
\begin{align*}
n\Lambda_{2,n2} = ~& n \frac{1}{2(s_n^2-s_{k_n}^2) } \bigg(|A| \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} (s_{k_n}^2- S_u^2 ) \mathbf{1}_{\{|S_u|\leq k_n\}} \\& + 2 \int_A x dx\Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} S_u \mathbf{1}_{\{|S_u|\leq k_n\}} +o(1) \bigg) \\
= ~ & \frac{n}{2(s_n^2-s_{k_n}^2) } \big( N_{2,k_n}|A| + 2|A|\overline{x}_A N_{1,k_n} +o(1) \big) \\
= ~ & (2\E \sigma_0^{(2)})^{-1} (V_2 +2 \overline{x}_A V_1) |A| +o(1),
\end{align*}
which proves \eqref{eq4-16}.
To prove \eqref{eq4-17}, we observe that
\begin{align*}
& ~\Lambda_{2,n3}= \frac{ \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A \bigg( \frac{(x-S_u)^{3}}{({s_n^2-s_{k_n}^2})^{3/2}}- \frac{3 (x-S_u)}{({s_n^2-s_{k_n}^2})^{1/2}} \bigg) e^{-\frac{(x-S_u)^2}{2({s_n^2-s_{k_n}^2})}} dx \\
& ~ ~~~~~~=\Lambda_{2,n31}+\Lambda_{2,n32}+\Lambda_{2,n33}+\Lambda_{2,n34},
\end{align*}
{with} \begin{align*} & \Lambda_{2,n31}=~ \frac{ \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A \frac{(x-S_u)^{3}}{({s_n^2-s_{k_n}^2})^{3/2}} e^{-\frac{(x-S_u)^2}{2({s_n^2-s_{k_n}^2})}} dx ; \\
&\Lambda_{2,n32}=~ \frac{ \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|\leq k_n\}} \int_A \frac{3 (x-S_u)}{({s_n^2-s_{k_n}^2})^{1/2}} \bigg (1- e^{-\frac{(x-S_u)^2}{2({s_n^2-s_{k_n}^2})}} \bigg) dx ;\\
&\Lambda_{2,n33}=~- \frac{ \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \int_A \frac{3 (x-S_u)}{({s_n^2-s_{k_n}^2})^{1/2}} dx;\\
&\Lambda_{2,n34}=~ \frac{ \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|> k_n\}} \int_A \frac{3 (x-S_u)}{({s_n^2-s_{k_n}^2})^{1/2}} dx.
\end{align*}
It is clear that \begin{align*}
& n|\Lambda_{2,n31}|\leq \frac{n \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{2}} \int_A (|x|+ k_n)^3 dx W_{k_n} \xrightarrow{n\rightarrow \infty} 0 \mbox{ a.s.}, \\
& n|\Lambda_{2,n32}|\leq \frac{n \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})^{2}} \int_A \frac{3}{2}(|x|+ k_n)^3 dx W_{k_n} \xrightarrow{n\rightarrow \infty} 0 \mbox{ a.s. } \quad ( 1- e^{-x} \leq x, \mbox{ for } x>0 ),\\
& n\/ \Lambda_{2,n33}~= \frac{n (s_n^{(3)}- s_{k_n}^{(3)})s_n}{6({s_n^2-s_{k_n}^2})^{5/2}}\cdot 3|A|(N_{1,k_n}- \overline{x}_A W_{k_n}) \\
&~~~~~~~~~~~\xrightarrow{n\rightarrow \infty} \frac{1}{2}{\E \sigma_0^{(3)} } ( \E \sigma_0^{(2)})^{-2}(V_1-\overline{x}_A W )|A| \mbox{ a.s.}, \\
& n|\Lambda_{2,n34}|\leq \frac{3n \kappa_{1,n}s_n}{({s_n^2-s_{k_n}^2})} \bigg( \int_A|x|dx \Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|> k_n\}} + |A|\Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} |S_u|\mathbf{1}_{\{|S_u|> k_n\}} \bigg) \\
& ~~~~~~~~~~~\xrightarrow{n\rightarrow \infty} 0 \mbox{ a.s. } (\mbox{ by } \eqref{cbrweq3.22}),
\end{align*}
whence \eqref{eq4-17} follows.
By the Birkhoff ergodic theorem, we see that
\begin{equation}\label{eq4-21}
\lim_{n\rightarrow \infty} \frac{ n \kappa_{2,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} = \frac{(\E \sigma_0^{(3)})^2}{72(\E \sigma_0^{(2)})^{3} }, \quad \lim_{n\rightarrow \infty} \frac{ n\kappa_{3,n}s_n}{({s_n^2-s_{k_n}^2})^{1/2}} = \frac{\E(\sigma_0^{(4)} -3(\sigma_0^{(2)})^2 )}{24(\E \sigma_0^{(2)})^{2} }.
\end{equation}
Elementary calculus shows that, uniformly for $|y|\leq k_n$,
\begin{align}\label{eq4-22}
\mbox{ if } \nu\geq 1, \quad & \int_A \bigg( \frac{x-y}{({s_n^2-s_{k_n}^2})^{1/2}}\bigg)^\nu \exp\bigg(-\frac{(x-y)^2}{2({s_n^2-s_{k_n}^2})}\bigg)dx \xrightarrow{n\rightarrow \infty} 0 \mbox{ a.s. }, \\
\label{eq4-23} \mbox{ and } ~~~~~~ & \int_A \exp\bigg(-\frac{(x-y)^2}{2({s_n^2-s_{k_n}^2})}\bigg)dx \xrightarrow{n\rightarrow \infty} 1 \mbox{ a.s. }
\end{align}
Combining \eqref{cbrweq4.18}, \eqref{eq4-21}, \eqref{eq4-22} and \eqref{eq4-23}, we deduce \eqref{eq4-18} and \eqref{eq4-19}.
By the Birkhoff ergodic theorem and the definition of $H_m(x)$ and $\phi(x)$, we see that
\begin{equation*}
\sup_{x\in \R} |R_n'(x) |= O(\frac{1}{n^{3/2}}),
\end{equation*}
whence \eqref{eq4-20} follows.
Finally, because $|\Lambda_{2,n7}| $ is bounded by $K_\xi\cdot\Pi_{k_n}^{-1} \sum_{u\in \T_{k_n}} \mathbf{1}_{\{|S_u|>k_n\}}$,
\eqref{cbrweq4.18} implies \eqref{eq4-13}.
So the required result \eqref{eq4-7} follows from \eqref{eq4-15} -- \eqref{eq4-13}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem4-3}]
By Proposition \ref{pro3}, under our assumptions, we have
\begin{equation*}
W-W_n=o(n^{-\lambda})\qquad a.s.
\end{equation*}
By the choice of $\beta$ and $ k_n$, we see that
\begin{equation*}
n^{ \frac{3}{2}}(W-W_{k_n}) =o(n^{\frac{3}{2} -\lambda\beta}) \xrightarrow{n\rightarrow\infty} 0.
\end{equation*}
\end{proof}
\section*{Acknowledgments}
The authors are very grateful to the reviewers for their valuable remarks and comments, which led to a significant improvement of the original manuscript.
The work has benefited from a visit of Q. Liu to the School of Mathematical Sciences, Beijing Normal University, and a visit of Z. Gao to Laboratoire de Math\'ematiques de Bretagne Atlantique,
Universit\'e de Bretagne-Sud. The hospitality and support of both universities
have been well appreciated.
\section{Introduction} \label{sec:intro}
Partial differential equations (PDEs) with uncertain inputs have provided engineers and scientists with enhanced
fidelity in the modelling of real-life phenomena, especially within the last decade.
Sparse grid stochastic collocation representations of parametric uncertainty, in combination with finite element
discretization of physical space, have emerged as an efficient alternative to Monte Carlo strategies over this period,
especially in the context of nonlinear PDE models or linear PDE problems that are nonlinear in the parameterization
of the uncertainty.
The combination of adaptive sparse grid methods with a hierarchy of spatial approximations is a relatively new
development; see, for example,~\cite{LangSS20,TeckentrupJantschWebsterGunzburger2012}. In our precursor
paper~\cite{bsx21} (part I), we extended the adaptive framework developed by Guignard \& Nobile~\cite{GuignardN18}
and presented a critical comparison of alternative strategies in the context of solving a model
problem that combines strong anisotropy in the parametric dependence with singular behavior in the physical space.
The numerical results presented in \partI\ demonstrate the effectivity and robustness
of our error estimation strategy as well as the utility of the error indicators guiding the adaptive refinement process.
The results in \partI\ also showed that optimality of convergence is difficult to achieve using a simple single-level approach
where a single finite element space is associated with all active collocation points.
The main aim of this contribution is to establish whether optimal convergence rates can be recovered using
a multilevel implementation of the algorithm outlined in \partI.
The convergence of a modified version of the adaptive algorithm in~\cite{GuignardN18} has been established by
Eigel et al.~\cite{eest20} and independently by Feischl \& Scaglioni~\cite{FeischlS21}. The authors of~\cite{FeischlS21}
note that the main difficulty in establishing convergence is ``the interplay of parametric refinement and finite element
refinement''. This interplay is the focus of this contribution.
The model problems that are of interest are stated in section~\ref{sec:problem}. The only difference
from the problem statement in \partI\ is that we also cover the case where the right-hand side function
\abrevx{has a parametric dependence}.
The adaptive solution algorithm from \partI\ is extended to cover the case of a
non-deterministic right-hand side function in section~\ref{sec:scfem}. The novel contribution of this work
primarily lies in section~\ref{sec:results}, where we compare numerical results obtained with our multilevel
algorithm with those generated using a single-level strategy
\abrevx{and with those computed using the multilevel stochastic Galerkin finite element method (SGFEM)}.
\section{\rbl{Parametric model problems}} \label{sec:problem}
Let $D \subset \mathbb{R}^2$ be a bounded Lipschitz domain with polygonal boundary $\partial D$.
Let $\Gamma := \Gamma_1 \times \Gamma_2 \times \cdots \times \Gamma_M$ denote the parameter domain in $\mathbb{R}^M$,
where $M \in \mathbb{N}$ and each $\Gamma_m$ ($m = 1,\ldots,M$) is a bounded interval in~$\mathbb{R}$.
We introduce a probability measure $\pi(\mathbf{y}} % or \bm{y) := \prod_{m=1}^M \pi_m(y_m)$ on $(\Gamma,\mathcal{B}(\Gamma))$;
here, $\pi_m$ denotes a
Borel probability measure on $\Gamma_m$ ($m = 1,\ldots,M$) and
$\mathcal{B}(\Gamma)$ is the Borel $\sigma$-algebra on $\Gamma$.
The first model problem is the parametric elliptic problem analyzed in \partI:
we seek $u \colon \overline D \times \Gamma \to \mathbb{R}$ satisfying
\begin{subequations}
\begin{align}
\label{eq:pde:strong}
\begin{aligned}
-\nabla \cdot (a(\cdot, \mathbf{y}} % or \bm{y)\nabla u(\cdot, \mathbf{y}} % or \bm{y))
&= f
&& \text{in $ D$},\\
u(\cdot, \mathbf{y}} % or \bm{y) &= 0 && \text{on $\partial D$,}
\end{aligned}
\\
\intertext{$\pi$-almost everywhere on $\Gamma$. The second model problem is
to find $u \colon \overline D \times \Gamma \to \mathbb{R}$ satisfying}
\label{eq:pde2:strong}
\begin{aligned}
-\nabla^2 u(\cdot, \mathbf{y}} % or \bm{y)
&= f(\cdot, \mathbf{y}} % or \bm{y)
&& \text{in $ D$},\\
u(\cdot, \mathbf{y}} % or \bm{y) &= 0 && \text{on $\partial D$,}
\end{aligned}
\end{align}
\end{subequations}
$\pi$-almost everywhere on $\Gamma$.
\rblx{In} the first model problem, the right-hand side function $f \in L^2(D)$ is deterministic and
the coefficient $a$ is a random field on $(\Gamma,\mathcal{B}(\Gamma),\pi)$ over $L^\infty(D)$.
In this case we will assume that there exist constants $a_{\min},\, a_{\max}$ such that
\begin{equation} \label{eq:amin:amax}
0 < a_{\min} \le \operatorname*{ess\;inf}_{x \in D} a(x,\mathbf{y}} % or \bm{y) \le \operatorname*{ess\;sup}_{x \in D} a(x,\mathbf{y}} % or \bm{y) \le a_{\max} < \infty \quad
\text{$\pi$-a.e. on $\Gamma$}.
\end{equation}
This assumption implies the following norm equivalence:
for any $v \in \mathbb{X} := H^1_0(D)$ there holds
\begin{equation} \label{eq:norm:equiv}
a_{\min}^{1/2} \|\nabla v\|_{L^2(D)} \le
\| a^{1/2}(\cdot,\mathbf{y}} % or \bm{y) \nabla v \|_{L^2(D)} \le
a_{\max}^{1/2} \|\nabla v\|_{L^2(D)}\quad
\text{$\pi$-a.e. on $\Gamma$}.
\end{equation}
The parametric problem~\eqref{eq:pde:strong} is understood in the weak sense:
given $f \in L^2(D)$, find $u : \Gamma \to \mathbb{X}$ such that
\begin{align} \label{eq:pde:weak}
\int_D a(x, \mathbf{y}} % or \bm{y) \nabla u(x,\mathbf{y}} % or \bm{y) \cdot \nabla v(x) \, \mathrm{d}x = \int_D f(x) v(x) \, \mathrm{d}x
\quad \forall v \in \mathbb{X},\ \text{$\pi$-a.e. on $\Gamma$}.
\end{align}
The above assumptions on $a$ and $f$ guarantee that the parametric problem~\eqref{eq:pde:strong}
admits a unique weak solution $u$ in the Bochner space \abrevx{$\mathbb{V} := L_\pi^p(\Gamma; \mathbb{X})$ for any $p \in [1, \infty]$};
see~\cite[Lemma~1.1]{BabuskaNT07} for details.
In the sequel, we \abrevx{restrict attention to $p=2$ and denote by $\norm{\cdot}{}$ the norm in $\mathbb{V} = L_\pi^2(\Gamma; \mathbb{X})$;
we also define $\norm{\cdot}{\mathbb{X}} := \norm{\nabla\cdot}{L^2(D)}$}.
The second parametric elliptic problem \eqref{eq:pde2:strong} combines \abrevx{uncertainty in the}
source term with an isotropic diffusion coefficient field.
In this case the right-hand side function $f$ simply needs to be a random field that is smooth enough to ensure that
\eqref{eq:pde2:strong} also admits a unique weak solution $u$
in the Bochner space $\mathbb{V}$.
\section{Multilevel stochastic collocation finite element method} \label{sec:scfem}
Full details of the construction of a multilevel stochastic collocation finite element approximation
of the first parametric elliptic problem can be found in \partI. The parametric approximation is associated with
a monotone (or, downward-closed) finite set $\indset_\bullet \subset \mathbb{N}^M$ of multi-indices, where
a monotone (or, downward-closed) finite set $\indset_\bullet \subset \mathbb{N}^M$ of multi-indices $\nnu = (\nu_1,\ldots,\nu_M)$,
where $\nu_m \in \mathbb{N}$ for all $m = 1,\ldots,M$ and $\#\indset_\bullet < \infty$.
\abrevx{Each component $\nu_m$ ($m = 1,\ldots,M$) of the multi-index $\nnu \in \indset_\bullet$ corresponds to}
a set of collocation points along the $m$th coordinate axis in $\mathbb{R}^M$, and the associated {\it sparse grid}
$\Colpts_\bullet = \Colpts_{\indset_\bullet}$
of collocation points on $\Gamma$ is given by\footnote{The notation is
identical to that in \partI. The reader is referred to this paper for any omitted details.}
\[
\Colpts_{\indset_\bullet} := \bigcup_{\nnu \in \indset_\bullet} \Colpts^{\,(\nnu)}
= \abrevx{\bigcup_{\nnu \in \indset_\bullet}}\,
\Colpts_1^{\mf(\nu_1)} \times \Colpts_2^{\mf(\nu_2)} \times \ldots \times \Colpts_M^{\mf(\nu_M)}.
\]
Each collocation point \abrevx{$\mathbf{z}} % or \bm{z \in \Colpts_{\indset_\bullet} \subset \Gamma$}
is associated with a piecewise linear finite element approximation space
\abrevx{$\mathbb{X}_{\bullet \mathbf{z}} % or \bm{z} = \SS^1_0(\mathcal{T}_{\bullet \mathbf{z}} % or \bm{z})$} defined on a mesh \abrevx{$\mathcal{T}_{\bullet \mathbf{z}} % or \bm{z}$} and
an enhanced space $\widehat\mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}$ \abrevx{defined on the mesh $\widehat\mathcal{T}_{\bullet \mathbf{z}} % or \bm{z}$}
obtained by {\it uniform refinement} of \abrev{$\mathcal{T}_{\bullet \mathbf{z}} % or \bm{z}$}.
The spatial detail space $\mathbb{Y}_{\bullet \mathbf{z}} % or \bm{z}$ is the approximation space associated with the newly introduced (mid-edge) nodes\abrevx{, i.e.,}
$\widehat \mathbb{X}_{\bullet \mathbf{z}} % or \bm{z} = \mathbb{X}_{\bullet \mathbf{z}} % or \bm{z} \oplus \mathbb{Y}_{\bullet \mathbf{z}} % or \bm{z}$.
\abrevx{We assume that any finite element mesh employed for the spatial discretization is obtained by (uniform or local)
refinement of a given (coarse) initial mesh~$\mathcal{T}_0$.}
The SC-FEM approximation of the solution $u$ to either of the parametric problems \eqref{eq:pde:strong} or
\eqref{eq:pde2:strong} is given by
\begin{equation} \label{eq:scfem:sol}
\scsol := \sum\limits_{\mathbf{z}} % or \bm{z \in \Colpts_\bullet} u_{\bullet \mathbf{z}} % or \bm{z}(x) \LagrBasis{\bullet \mathbf{z}} % or \bm{z}{}(\mathbf{y}} % or \bm{y),
\end{equation}
where $u_{\bullet \mathbf{z}} % or \bm{z} \in \mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}$ are Galerkin approximations satisfying~\eqref{eq:sample1:fem}
or~\eqref{eq:sample2:fem} for $\mathbf{z}} % or \bm{z \in \Colpts_\bullet$, and
$\{ \LagrBasis{\bullet \mathbf{z}} % or \bm{z}{}(\mathbf{y}} % or \bm{y) = \LagrBasis{\mathbf{z}} % or \bm{z}{\Colpts_\bullet}(\mathbf{y}} % or \bm{y) : \mathbf{z}} % or \bm{z \in \Colpts_\bullet \}$ is a set of
multivariable Lagrange basis functions associated with $\Colpts_\bullet$ and satisfying
$\LagrBasis{\bullet \mathbf{z}} % or \bm{z}{}(\mathbf{z}} % or \bm{z') = \delta_{\mathbf{z}} % or \bm{z\z'}$ for any $\mathbf{z}} % or \bm{z,\,\mathbf{z}} % or \bm{z' \in \Colpts_\bullet$.
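For orientation, the defining delta property of the Lagrange basis can be checked with a short one-dimensional sketch (illustrative Python with our own naming; the paper works with multivariable basis functions on sparse grids):

```python
def lagrange_basis(nodes, z, y):
    """Value at y of the Lagrange polynomial attached to node z,
    i.e. the polynomial that equals 1 at z and 0 at every other node."""
    val = 1.0
    for zp in nodes:
        if zp != z:
            val *= (y - zp) / (z - zp)
    return val

nodes = [-1.0, 0.0, 1.0]
# Delta property L_z(z') = delta_{z z'} at the nodes:
for z in nodes:
    for zp in nodes:
        target = 1.0 if z == zp else 0.0
        assert abs(lagrange_basis(nodes, z, zp) - target) < 1e-12
# The interpolant sum_z u(z) L_z(y) reproduces u(y) = y**2 exactly,
# since u is a polynomial of degree 2 on three nodes:
interp = sum(z**2 * lagrange_basis(nodes, z, 0.5) for z in nodes)
assert abs(interp - 0.25) < 1e-12
```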
The enhancement of the parametric component of the SC-FEM approximation~\eqref{eq:scfem:sol}
is done by enriching the index set~$\indset_\bullet$
\abrevx{with multi-indices selected} from the {\it reduced margin} set $ \rmarg_{\bullet} = \rmarg({\indset_\bullet})$\abrevx{; this
corresponds to adding \rbl{some} collocation points from the set $\widehat\Colpts_\bullet \setminus \Colpts_\bullet$,
where $\widehat\Colpts_\bullet := \Colpts_{\indset_\bullet \cup \rmarg(\indset_\bullet)}$}.
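To fix ideas, the index-set bookkeeping can be sketched in a few lines of Python (illustrative code with our own naming, not the implementation used for the experiments; multi-indices are tuples with entries $\geq 1$, as in the text):

```python
def is_monotone(I):
    """I is monotone (downward-closed) if decreasing any component of any
    multi-index by one (while staying >= 1) never leaves the set."""
    I = set(I)
    return all(
        nu[:m] + (nu[m] - 1,) + nu[m + 1:] in I
        for nu in I for m in range(len(nu)) if nu[m] > 1
    )

def reduced_margin(I):
    """Multi-indices outside I all of whose backward neighbours lie in I."""
    I = set(I)
    M = len(next(iter(I)))
    margin = set()
    for nu in I:
        for m in range(M):
            cand = nu[:m] + (nu[m] + 1,) + nu[m + 1:]
            if cand not in I and all(
                cand[j] == 1
                or cand[:j] + (cand[j] - 1,) + cand[j + 1:] in I
                for j in range(M)
            ):
                margin.add(cand)
    return margin

# A monotone set in M = 2 dimensions and its reduced margin:
I = {(1, 1), (2, 1), (1, 2)}
assert is_monotone(I)
print(sorted(reduced_margin(I)))  # [(1, 3), (2, 2), (3, 1)]
```

Enriching $\indset_\bullet$ by a reduced-margin index keeps the enlarged set monotone, which is the property the adaptive loop relies on.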
To keep the discussion concise we simply summarize the components of the adaptive refinement strategy.
The three components are:
\begin{itemize}
\item
solution of a deterministic finite element problem at each sparse grid collocation point.
That is, the computation of $u_{\bullet \mathbf{z}} % or \bm{z} \in \mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}$ satisfying either
\begin{subequations}
\begin{align} \label{eq:sample1:fem}
\int_D a(x, \mathbf{z}} % or \bm{z) \nabla u_{\bullet \mathbf{z}} % or \bm{z}(x) \cdot \nabla v(x) \, \mathrm{d}x = \int_D f(x) v(x)\, \mathrm{d}x
\quad \forall v \in \mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}
\\
\intertext{in the case of the first parametric problem \eqref{eq:pde:strong}, or}
\label{eq:sample2:fem}
\int_D \nabla u_{\bullet \mathbf{z}} % or \bm{z}(x) \cdot \nabla v(x) \, \mathrm{d}x = \int_D f(x, \mathbf{z}} % or \bm{z) v(x)\, \mathrm{d}x
\quad \forall v \in \mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}
\end{align}
\end{subequations}
in the case of the second parametric problem \eqref{eq:pde2:strong}.
The {\it enhanced} Galerkin solution satisfying~\eqref{eq:sample1:fem} or~\eqref{eq:sample2:fem}
for all $v \in \widehat\mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}$
is denoted by $\widehat u_{\bullet \mathbf{z}} % or \bm{z} \in \widehat\mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}$.
\item
computation of the spatial hierarchical error \abrevx{indicators}.
For each $\mathbf{z}} % or \bm{z \in \Colpts_\bullet$, we define $\mu_{\bullet \mathbf{z}} % or \bm{z} := \norm{e_{\bullet \mathbf{z}} % or \bm{z}}{\mathbb{X}}$,
where $e_{\bullet \mathbf{z}} % or \bm{z} \in \mathbb{Y}_{\bullet \mathbf{z}} % or \bm{z}$ satisfies
\begin{subequations}
\begin{align} \label{eq:hierar1:estimator}
\begin{split}
\int_D \nabla e_{\bullet \mathbf{z}} % or \bm{z}(x) \cdot \nabla v(x) \,\mathrm{d}x &=
\int_D f(x) v(x) \,\mathrm{d}x
\\
&\quad -
\int_D a(x, \mathbf{z}} % or \bm{z) \nabla u_{\bullet \mathbf{z}} % or \bm{z}(x) \cdot \nabla v(x) \,\mathrm{d}x \quad \forall v \in \mathbb{Y}_{\bullet \mathbf{z}} % or \bm{z}
\end{split}
\\
\intertext{in the case of the first parametric problem \eqref{eq:pde:strong}, or satisfies}
\label{eq:hierar2:estimator}
\begin{split}
\int_D \nabla e_{\bullet \mathbf{z}} % or \bm{z}(x) \cdot \nabla v(x) \,\mathrm{d}x &=
\int_D f(x, \mathbf{z}} % or \bm{z) v(x) \,\mathrm{d}x
\\
&\quad -
\int_D \nabla u_{\bullet \mathbf{z}} % or \bm{z}(x) \cdot \nabla v(x) \,\mathrm{d}x \quad \forall v \in \mathbb{Y}_{\bullet \mathbf{z}} % or \bm{z}
\end{split}
\end{align}
\end{subequations}
in the case of the second parametric problem \eqref{eq:pde2:strong}\abrevx{; the corresponding \emph{local}
error indicators $\mu_{\bullet \mathbf{z}} % or \bm{z}(\xi)$ associated with interior edge midpoints $\xi \in \mathcal{N}_{\bullet \mathbf{z}} % or \bm{z}^{+}$
are given by components of the solution vector to the linear system stemming from
the discrete formulation~\eqref{eq:hierar1:estimator} or~\eqref{eq:hierar2:estimator}.}
\item
computation of \abrevx{the
parametric} error indicators\footnote{This construction assumes that
the enriched index set $\widehat\indset_\bullet$ is obtained using the reduced margin of $\indset_\bullet$, see
Remark~2 in \partI.}
\begin{align} \label{eq:param:indicators:1}
\widetilde\tau_{\bullet \nnu} =
\sum\limits_{\mathbf{z}} % or \bm{z' \in \widetilde\Colpts_{\bullet \nnu}}
\abrevx{\norm[\bigg]{u_{0 \mathbf{z}} % or \bm{z'} - \sum\limits_{\mathbf{z}} % or \bm{z \in \Colpts_\bullet} u_{0 \mathbf{z}} % or \bm{z} \LagrBasis{\bullet \mathbf{z}} % or \bm{z}{}(\mathbf{z}} % or \bm{z')}{\mathbb{X}}} \,
\norm{\LagrBasisHat{\bullet \mathbf{z}} % or \bm{z'}{}}{L_\pi^{\abrevx{2}}(\Gamma)}\quad
\forall \nnu \in \rbl{\rmarg({\indset_\bullet})}\abrevx{,}
\end{align}
where
\abrevx{$\widetilde\Colpts_{\bullet \nnu} \subset \widehat\Colpts_{\bullet} \setminus \Colpts_{\bullet}$
are the collocation points `generated' by the multi-index $\nnu \in \rmarg({\indset_\bullet})$,
the functions $u_{0 \mathbf{z}} % or \bm{z'} \in \mathbb{X}_{0 \mathbf{z}} % or \bm{z'}$ for $\mathbf{z}} % or \bm{z' \in \widetilde\Colpts_{\bullet \nnu}$
and $u_{0 \mathbf{z}} % or \bm{z} \in \mathbb{X}_{0 \mathbf{z}} % or \bm{z}$ for $\mathbf{z}} % or \bm{z \in \Colpts_{\bullet}$ are
Galerkin approximations on some meshes $\mathcal{T}_{0 \mathbf{z}} % or \bm{z'}$ and $\mathcal{T}_{0 \mathbf{z}} % or \bm{z}$, respectively, that are to be specified
(e.g., $u_{0 \mathbf{z}} % or \bm{z}$ satisfies~\eqref{eq:sample1:fem} or~\eqref{eq:sample2:fem} with $\mathbb{X}_{\bullet \mathbf{z}} % or \bm{z}$ replaced by
$\mathbb{X}_{0 \mathbf{z}} % or \bm{z}$), and}
$\LagrBasisHat{\bullet \mathbf{z}} % or \bm{z'}{}(\mathbf{y}} % or \bm{y) = \LagrBasis{\mathbf{z}} % or \bm{z'}{\widehat\Colpts_\bullet}(\mathbf{y}} % or \bm{y)$ denotes
the Lagrange polynomial basis function associated with the point $\mathbf{z}} % or \bm{z' \in \widehat\Colpts_\bullet$ satisfying
$\LagrBasisHat{\bullet \mathbf{z}} % or \bm{z'}{}(\mathbf{z}} % or \bm{z'') = \delta_{\mathbf{z}} % or \bm{z'\mathbf{z}} % or \bm{z''}$ for any $\mathbf{z}} % or \bm{z',\,\mathbf{z}} % or \bm{z'' \in \widehat\Colpts_\bullet$.
\end{itemize}
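As an illustration of the second component, the following hedged one-dimensional sketch computes the hierarchical indicator for $-u''=1$ on $(0,1)$ with linear finite elements on a uniform mesh (our own simplified code; the computations reported later are two-dimensional). In one dimension the detail functions, hat functions at element midpoints, have disjoint supports, so the detail-space system is diagonal.

```python
import numpy as np

def solve_fem(n):
    """Interior nodal values of the linear FEM solution on n uniform elements."""
    h = 1.0 / n
    # Tridiagonal stiffness matrix for -u'' with homogeneous Dirichlet BCs
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    b = h * np.ones(n - 1)          # exact load vector for f = 1
    return np.linalg.solve(A, b), h

def hierarchical_estimator(n):
    """mu = ||e||_X, with e solved in the detail space of midpoint hats."""
    h = 1.0 / n
    mu2 = 0.0
    for _ in range(n):              # one detail function per element
        r = h / 2.0                 # int f*phi; the u_h term vanishes as int phi' = 0
        a_kk = 4.0 / h              # int (phi')^2 over one element
        e = r / a_kk                # diagonal detail-space solve
        mu2 += a_kk * e * e
    return np.sqrt(mu2)

u, h = solve_fem(32)
x = np.arange(1, 32) * h
assert np.allclose(u, x * (1 - x) / 2)  # Galerkin = nodal interpolant here
mu = hierarchical_estimator(32)
err = h / np.sqrt(12.0)             # true energy error: u(x) = x(1 - x)/2
print(round(mu / err, 3))           # effectivity ~ 0.866 = sqrt(3)/2
```

The estimator reproduces the true energy error up to a mesh-independent effectivity constant, which is the behaviour exploited by the spatial indicators above.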
\abrevx{
We emphasize that the computation of parametric error indicators according to~\eqref{eq:param:indicators:1}
is in line with the hierarchical a posteriori error estimation strategy developed in~\partI\ (see section~4 therein).
In the standard \emph{single-level} SC-FEM setting discussed in~\cite[section~5]{bsx21},
the meshes $\mathcal{T}_{0 \mathbf{z}} % or \bm{z'}$ and $\mathcal{T}_{0 \mathbf{z}} % or \bm{z}$ underlying the Galerkin approximations $u_{0 \mathbf{z}} % or \bm{z'}$ and $u_{0 \mathbf{z}} % or \bm{z}$ in~\eqref{eq:param:indicators:1}
are all selected to be identical to the (single) finite element mesh $\mathcal{T}_{\bullet \mathbf{z}} % or \bm{z} = \mathcal{T}_{\bullet}$ that underlies
the current SC-FEM solution $\scsol$ in~\eqref{eq:scfem:sol}.
In this case, the indicators in~\eqref{eq:param:indicators:1} are written as
\begin{align*}
\widetilde\tau_{\bullet \nnu} =
\sum\limits_{\mathbf{z}' \in \widetilde\Colpts_{\bullet \nnu}}
\norm{u_{\bullet \mathbf{z}'} - \scsol(\cdot,\mathbf{z}')}{\mathbb{X}} \, \norm{\LagrBasisHat{\bullet \mathbf{z}'}{}}{L_\pi^{\abrevx{2}}(\Gamma)}\quad
\forall \nnu \in \rmarg({\indset_\bullet}),
\end{align*}
where $u_{\bullet \mathbf{z}'} \in \mathbb{X}_{\bullet \mathbf{z}'} = \SS^1_0(\mathcal{T}_\bullet)$ for all $\mathbf{z}' \in \widetilde\Colpts_{\bullet \nnu}$ and
for all $\nnu \in \rmarg({\indset_\bullet})$.
In the multilevel SC-FEM setting presented in the adaptive algorithm below,
the meshes underlying Galerkin approximations for different collocation points might be different.
In this case, when computing the parametric error indicators in~\eqref{eq:param:indicators:1},
the meshes $\mathcal{T}_{0 \mathbf{z}'}$~($\mathbf{z}' \in \widetilde\Colpts_{\bullet \nnu}$) and $\mathcal{T}_{0 \mathbf{z}}$~($\mathbf{z} \in \Colpts_{\bullet}$)
are all selected to be identical to the \emph{coarsest} finite element mesh~$\mathcal{T}_0$.
}
With \abrevx{the above} ingredients in place, the solution to the problems in~section~\ref{sec:problem} can be
generated using the iterative strategy described in Algorithm~\ref{algorithmx} together with
the marking strategy in Algorithm~\ref{algorithmm}.
\begin{algorithm} \label{algorithmx}
{\bfseries Input:}
$\indset_0 = \{ \1 \}$;
\abrevx{$\mathcal{T}_{0 \mathbf{z}} := \mathcal{T}_0$} for all $\mathbf{z} \in \widehat\Colpts_0 = \Colpts_{\indset_0 \cup \rmarg(\indset_0)}$;
marking criterion.\\
Set the iteration counter $\ell := 0$, and choose the output frequency $k$ and the \rbl{error tolerance}.
\begin{itemize}
\item[\rm(i)]
Compute Galerkin approximations $\big\{ u_{\ell \mathbf{z}} \in \mathbb{X}_{\ell \mathbf{z}} : \mathbf{z} \in \widehat\Colpts_\ell \big\}$ by solving~\eqref{eq:sample1:fem} or~\eqref{eq:sample2:fem}.
\item[\rm(ii)]
Compute spatial error indicators \abrevx{$\big\{ \mu_{\ell\mathbf{z}} = \norm{e_{\ell \mathbf{z}}}{\mathbb{X}} : \mathbf{z} \in \Colpts_{\ell} \big\}$}
by solving \eqref{eq:hierar1:estimator} or \eqref{eq:hierar2:estimator}.
\item[\rm(iii)]
Compute the parametric error indicators
$\big\{ \widetilde\tau_{\ell \nnu} : \nnu \in \rbl{\rmarg({\indset_\ell)}} \big\}$
given by~\eqref{eq:param:indicators:1}.
\item[\rm(iv)]
Use a marking criterion
to determine $\mathcal{M}_{\ell \mathbf{z}} \subseteq \mathcal{N}_{\ell \mathbf{z}}^+$ for all $\mathbf{z} \in \Colpts_\ell$ and
$\markindset_\ell \subseteq \rbl{\rmarg({\indset_\ell)}}$.
\item[\rm(v)] For all $\mathbf{z} \in \Colpts_\ell$, set $\mathcal{T}_{(\ell+1) \mathbf{z}} := \refine(\mathcal{T}_{\ell \mathbf{z}},\mathcal{M}_{\ell \mathbf{z}})$.
\item[\rm(vi)] Set $\indset_{\ell+1} := \indset_\ell \cup \markindset_\ell$,
\abrevx{run Algorithm~{\rm \ref{meshalgorithm}} for each
$\mathbf{z}' \in \mathop{\cup}\limits_{\nnu \in \markindset_\ell} \widetilde\Colpts_{\ell \nnu}$
to \rbl{construct meshes} $\mathcal{T}_{(\ell+1) \mathbf{z}'}$ and
\rbl{initialize} $\mathcal{T}_{(\ell+1) \mathbf{z}} := \mathcal{T}_{0 \mathbf{z}} = \mathcal{T}_0$ for all $\mathbf{z} \in \widehat\Colpts_{\ell+1} \setminus \Colpts_{\ell+1}$}.
\item[\rm(vii)]
If $\ell = j k$, $j\in \mathbb{N}$, compute the spatial and parametric {error estimates} $\mu_\ell$ and $\tau_\ell$
\abrevx{given by~\eqref{eq:estimate:3} and~\eqref{eq:estimate:8}, respectively,}
and exit if $\mu_\ell + \tau_\ell < {\tt error tolerance}$.
\item[\rm(viii)] Increase the counter $\ell \mapsto \ell+1$ and goto {\rm(i)}.
\end{itemize}
{\bfseries Output:} For some specific $\ell_*=jk \in \mathbb{N}$,
the algorithm returns the multilevel SC-FEM approximation~$u_{\ell_*}^{\rm SC}$
computed via~\eqref{eq:scfem:sol} from Galerkin approximations
$\big\{ u_{{\ell_*} \mathbf{z}} \in \mathbb{X}_{{\ell_*} \mathbf{z}} : \mathbf{z} \in \Colpts_{\ell_*} \big\}$
together with a corresponding \rbl{error estimate} $\mu_{\ell_*} + \tau_{\ell_*}$.
\end{algorithm}
A general marking strategy for step~(iv) of Algorithm~\ref{algorithmx} is specified next. We will
adopt this strategy in the numerical experiments discussed in the next section.
\begin{algorithm} \label{algorithmm}
\textbf{Input:}
\abrevx{error indicators}
$\{ \mu_{\ell \mathbf{z}}
: \mathbf{z} \in \Colpts_\ell \}$,
$\{ \mu_{\ell \mathbf{z}}(\xi) : \mathbf{z} \in \Colpts_\ell,\, \xi \in \mathcal{N}_{\ell \mathbf{z}}^+ \}$,
and
$\{ \abrevx{\widetilde\tau_{\ell \nnu}} : \nnu \in \rbl{\rmarg({\indset_\ell)}} \}$;
marking parameters $0 < \theta_\mathbb{X}, \theta_\Colpts \le 1$ and $\vartheta > 0$.
\begin{itemize}
\item[$\bullet$]
If \
$\sum_{\mathbf{z} \in \Colpts_\ell} \mu_{\ell \mathbf{z}} \norm{L_{\ell \mathbf{z}}}{L^{\abrevx{2}}_{\pi}(\Gamma)}
\ge \vartheta \sum_{\nnu \in \rbl{\rmarg({\indset_\ell)}} } \abrevx{\widetilde\tau_{\ell \nnu}}$,
then proceed as follows:
\begin{itemize}
\item[$\circ$]
set $\abrev{\markindset_\ell} := \emptyset$
\item[$\circ$]
for each $\mathbf{z} \in \Colpts_\ell$,
determine $\mathcal{M}_{\ell \mathbf{z}} \subseteq \mathcal{N}_{\ell \mathbf{z}}^+$
\abrevx{such that}
\abrevx{
\begin{equation} \label{eq:doerfler:separate1}
\theta_\mathbb{X} \, \sum_{\mathbf{z} \in \Colpts_\ell} \sum_{\xi \in \mathcal{N}_{\ell \mathbf{z}}^+} \mu_{\ell \mathbf{z}}(\xi) \norm{L_{\ell \mathbf{z}}}{L^{\abrevx{2}}_{\pi}(\Gamma)} \le
\sum_{\mathbf{z} \in \Colpts_\ell} \sum_{\xi \in \mathcal{M}_{\ell \mathbf{z}}} \mu_{\ell \mathbf{z}}(\xi) \norm{L_{\ell \mathbf{z}}}{L^{\abrevx{2}}_{\pi}(\Gamma)}
\end{equation}
\rbl{with a cumulative cardinality $\sum_{\mathbf{z} \in \Colpts_\ell} \#\mathcal{M}_{\ell \mathbf{z}}$ that is minimized
over all the sets that satisfy~\eqref{eq:doerfler:separate1}}.
}
\end{itemize}
\item[$\bullet$]
Otherwise, if \
$\sum_{\mathbf{z} \in \Colpts_\ell} \mu_{\ell \mathbf{z}} \norm{L_{\ell \mathbf{z}}}{L^{\abrevx{2}}_{\pi}(\Gamma)}
< \vartheta \sum_{\nnu \in \rbl{\rmarg({\indset_\ell)}} } \abrevx{\widetilde\tau_{\ell \nnu}}$,
then proceed as follows:
\begin{itemize}
\item[$\circ$]
set $\mathcal{M}_{\ell \mathbf{z}} := \emptyset$ for all $\mathbf{z} \in \Colpts_\ell$
\item[$\circ$]
determine $\markindset_\ell \subseteq \rbl{\rmarg({\indset_\ell)}} $
of minimal cardinality such that
\begin{equation} \label{eq:doerfler:separate2}
\theta_\Colpts \, \sum_{\nnu \in \rbl{\rmarg({\indset_\ell)}} } \abrevx{\widetilde\tau_{\ell \nnu}} \le
\sum_{\abrev{\nnu \in \markindset_\ell}} \abrevx{\widetilde\tau_{\ell \nnu}}.
\end{equation}
\end{itemize}
\end{itemize}
\textbf{Output:}
$\mathcal{M}_{\ell \mathbf{z}} \subseteq \mathcal{N}_{\ell \mathbf{z}}^+$ for all $\mathbf{z} \in \Colpts_\ell$ and
$\markindset_\ell \subseteq \rbl{\rmarg({\indset_\ell)}}$.
\end{algorithm}
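Both \eqref{eq:doerfler:separate1} and \eqref{eq:doerfler:separate2} ask for a set of minimal cardinality achieving a D{\"o}rfler-type bulk criterion; such a set is obtained greedily by sorting the (weighted) indicators in decreasing order. A minimal Python sketch (ours, for illustration only):

```python
import numpy as np

def doerfler_mark(indicators, theta):
    """Greedy Doerfler marking: return indices of a minimal-cardinality set M
    with theta * sum(indicators) <= sum(indicators over M).
    Taking the largest indicators first makes the greedy prefix minimal."""
    indicators = np.asarray(indicators, dtype=float)
    order = np.argsort(indicators)[::-1]           # largest first
    cumulative = np.cumsum(indicators[order])
    target = theta * indicators.sum()
    n_marked = int(np.searchsorted(cumulative, target)) + 1
    return order[:n_marked].tolist()

marked = doerfler_mark([4.0, 3.0, 2.0, 1.0], theta=0.5)   # -> [0, 1]
```

For \eqref{eq:doerfler:separate1} the same routine applies after pooling the pairs $(\mathbf{z},\xi)$ into a single list of weighted indicators, since the cumulative cardinality over all collocation points is minimized.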
As discussed in section~4 of \partI, the
computation of the {\it error estimates} in step~(vii) of Algorithm~\ref{algorithmx} is best done
periodically because of the significant computational overhead.
Specifically, the spatial error estimate
\begin{align} \label{eq:estimate:3}
\mu_\bullet & :=
\norm[\bigg]{\sum\limits_{\mathbf{z} \in \Colpts_\bullet} (\widehat u_{\bullet \mathbf{z}} - u_{\bullet \mathbf{z}})\, \LagrBasis{\bullet \mathbf{z}}{}}{}
\end{align}
requires computation of the enhanced Galerkin approximation \abrevx{$\widehat u_{\bullet \mathbf{z}} \in \widehat\mathbb{X}_{\bullet \mathbf{z}}$}
and thus requires the solution of the PDE on
\abrevx{the mesh $\widehat\mathcal{T}_{\bullet \mathbf{z}}$---a uniform refinement of $\mathcal{T}_{\bullet \mathbf{z}}$---for each collocation point}
generated by the current index set.
The parametric error estimate \abrev{(cf.~\eqref{eq:param:indicators:1})}
\begin{align}
\label{eq:estimate:8}
\tau_\bullet & : =
\norm[\bigg]
{\sum\limits_{\mathbf{z}' \in \widehat\Colpts_\bullet \setminus \Colpts_\bullet}
\abrevx{ \Big( u_{0 \mathbf{z}'} - \sum\limits_{\mathbf{z} \in \Colpts_\bullet} u_{0 \mathbf{z}} \LagrBasis{\bullet \mathbf{z}}{}(\mathbf{z}') \Big) }
\LagrBasisHat{\bullet \mathbf{z}'}{}}{}
\end{align}
requires\abrevx{, as discussed above,} additional PDE solves \abrev{on the coarsest mesh~$\mathcal{T}_{0 \mathbf{z}'} := \mathcal{T}_0$ for all}
margin collocation points $\mathbf{z}' \in \widehat\Colpts_\bullet \setminus \Colpts_\bullet$
\abrevx{(the coarsest-mesh Galerkin approximations $u_{0 \mathbf{z}}$ in~\eqref{eq:estimate:8}
for the current collocation points $\mathbf{z} \in \Colpts_\bullet$ will have been computed in preceding iterations
and, thus, can be \rbl{reused})}.
The key point here is that computation of the \abrevx{error} estimates is only needed
to give reliable termination of the adaptive process (and to provide reassurance
that the SC-FEM error is decreasing at an acceptable rate).
\abrevx{Regarding the implementation aspects of computing the above error estimates, we
note that the sum in~\eqref{eq:estimate:3} involves Galerkin approximations over different finite element meshes.
In our implementation, the computation of this sum is effected by
interpolating piecewise linear functions $u_{\bullet \mathbf{z}}$ and $\widehat u_{\bullet \mathbf{z}}$
at the nodes of the mesh
$\bigoplus_{\mathbf{z} \in \Colpts_\bullet} \widehat\mathcal{T}_{\bullet \mathbf{z}}$---the overlay (or, the coarsest common refinement) of the meshes
$\widehat\mathcal{T}_{\bullet \mathbf{z}}$, $\mathbf{z} \in \Colpts_\bullet$---and by subtracting/summing the obtained coefficient vectors
representing these piecewise linear functions over the same mesh
$\bigoplus_{\mathbf{z} \in \Colpts_\bullet} \widehat\mathcal{T}_{\bullet \mathbf{z}}$.
In this respect, the implementation of the parametric error estimate in~\eqref{eq:estimate:8} is rather straightforward,
as the involved Galerkin approximations $u_{0 \mathbf{z}}$ and $u_{0 \mathbf{z}'}$ are all computed on the same coarsest finite element mesh~$\mathcal{T}_0$.
}
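In one spatial dimension the overlay construction reduces to taking the union of the nodes, and the coefficient-vector manipulation can be illustrated with a few lines of Python (a toy analogue with invented data; nodal interpolation is exact here because each function's breakpoints are contained in the overlay):

```python
import numpy as np

# Two piecewise linear functions given on different 1-D meshes.
mesh_a = np.array([0.0, 0.5, 1.0]);  vals_a = np.array([0.0, 1.0, 0.0])
mesh_b = np.array([0.0, 0.25, 1.0]); vals_b = np.array([0.0, 0.5, 1.0])

# Overlay = coarsest common refinement: in 1-D, the sorted union of the nodes.
overlay = np.union1d(mesh_a, mesh_b)

# Re-express both functions as coefficient vectors over the overlay mesh;
# nodal interpolation is exact for piecewise linear functions whose
# breakpoints are contained in the overlay.
coeff_a = np.interp(overlay, mesh_a, vals_a)
coeff_b = np.interp(overlay, mesh_b, vals_b)

# Coefficients of the difference (u_a - u_b) on the common overlay mesh.
diff = coeff_a - coeff_b
```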
The other detail that is missing in the statement of Algorithm~\ref{algorithmx}
is the identification of a strategy for defining suitable meshes \abrevx{$\mathcal{T}_{(\ell+1) \mathbf{z}'}$}
corresponding to the \abrevx{newly `activated'} collocation points
in step~(vi).
This specification of sample-specific {\it initial meshes} turns out to be
crucial if \rbl{optimal rates of convergence are to be realized in practice}.
If an initial mesh \abrevx{associated with a collocation point} is too coarse, then
\abrevx{`activating' this} collocation point will introduce a large spatial error at the next iteration step.
Conversely, if the initial mesh is too fine, as in the case
of a single-level implementation of the algorithm, then the growth in the \abrevx{number of}
degrees of freedom is not matched by the resulting error reduction. Indeed, the conclusion
reached in~\cite{FeischlS21} on this point
is that ``while the theoretical results are strongest for the fully adaptive algorithm ... the
single mesh algorithm seems to be more efficient''.
A mesh initialization strategy that attempts to balance the conflicting requirements is
given in Algorithm~\ref{meshalgorithm}.
\abrevx{Specifically, for a given (newly `activated') collocation point $\mathbf{z}' \not\in \Colpts_{\bullet}$,
we start with the coarsest mesh $\mathcal{T}_0$
and iterate the standard SOLVE $\to$ ESTIMATE $\to$ MARK $\to$ REFINE loop until the resolution of the mesh is such that
the estimated error in the corresponding Galerkin solution $u_{\bullet \mathbf{z}'}$ is on par with the error estimates for Galerkin solutions
associated with other (already `active') collocation points $\mathbf{z} \in \Colpts_{\bullet}$.
This is ensured by the choice of stopping tolerance ${\tt tol}$ in Algorithm~\ref{meshalgorithm}.
We note that in the multilevel SGFEM, such a mesh initialization procedure is not needed.
Instead, for every newly `activated' multi-index,
the associated finite element mesh is set to the coarsest mesh $\mathcal{T}_0$; see~\cite{bpr2020+}.
Due to the inherent orthogonality of the parametric components of SGFEM approximations associated with different multi-indices,
this initialization by the coarsest mesh does not affect optimal convergence properties of the multilevel SGFEM; see~\cite{bpr2021+}.
}
\begin{algorithm} \label{meshalgorithm}
\textbf{Input:}
spatial error indicators \abrevx{$\big\{ \mu_{\ell\mathbf{z}}: \mathbf{z} \in \Colpts_{\ell} \big\}$};
\rblx{the set of collocation points $\Colpts_{\ell+1} = \Colpts_{\indset_{\ell+1}}$;}
the collocation point~\rblx{$\mathbf{z}' \in \Colpts_{\ell+1} \setminus \Colpts_{\ell}$};
marking parameter $\theta$.\\
\abrevx{
Set the tolerance
${\tt tol} := (\# \Colpts_{\ell})^{-1} \sum_{\mathbf{z} \in \Colpts_{\ell}} \mu_{\ell \mathbf{z}} \norm{L_{\rblx{(\ell+1)} \mathbf{z}}}{L^{2}_{\pi}(\Gamma)}$
and the iteration counter $n := 0$;
initialize the mesh $\mathcal{T}_{0 \mathbf{z}'} := \mathcal{T}_0$.
\begin{itemize}
\item[\rm(i)]
Compute the Galerkin approximation $u_{n \mathbf{z}'} \in \mathbb{X}_{n \mathbf{z}'}$ by solving~\eqref{eq:sample1:fem} or~\eqref{eq:sample2:fem}.
\item[\rm(ii)]
Compute the error estimate $\mu_{n \mathbf{z}'} = \norm{e_{n \mathbf{z}'}}{\mathbb{X}}$
by solving \eqref{eq:hierar1:estimator} or \eqref{eq:hierar2:estimator} and compute the corresponding
local error indicators $\big\{ \mu_{n \mathbf{z}'}(\xi): \xi \in \mathcal{N}^{+}_{n \mathbf{z}'} \big\}$.
\item[\rm(iii)]
If $\mu_{n \mathbf{z}'} \norm{L_{\rblx{(\ell+1)} \mathbf{z}'}}{L^{2}_{\pi}(\Gamma)} < {\tt tol}$, set $\mathcal{T}_{(\ell+1) \mathbf{z}'} := \mathcal{T}_{n \mathbf{z}'}$ and exit.
\item[\rm(iv)]
Determine $\mathcal{M}_{n \mathbf{z}'} \subseteq \mathcal{N}_{n \mathbf{z}'}^+$ of minimal cardinality such that
\begin{equation*}
\theta \, \sum_{\xi \in \mathcal{N}_{n \mathbf{z}'}^+} \mu_{n \mathbf{z}'}(\xi)^2 \le
\sum_{\xi \in \mathcal{M}_{n \mathbf{z}'}} \mu_{n \mathbf{z}'}(\xi)^2.
\end{equation*}
\item[\rm(v)] Set $\mathcal{T}_{(n+1) \mathbf{z}'} := \refine(\mathcal{T}_{n \mathbf{z}'},\mathcal{M}_{n \mathbf{z}'})$.
\item[\rm(vi)] Increase the counter $n \mapsto n+1$ and goto {\rm(i)}.
\end{itemize}
}
\textbf{Output:}
\rbl{The mesh} $\mathcal{T}_{(\ell+1) \mathbf{z}'}$
\abrevx{associated with} the collocation point $\mathbf{z}'$.
\end{algorithm}
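The control flow of Algorithm~\ref{meshalgorithm} is summarized by the following Python skeleton, in which \texttt{solve}, \texttt{estimate} and \texttt{refine} are placeholders for the corresponding finite element routines (the toy model below simply halves the error estimate with each refinement):

```python
# Skeleton of the mesh-initialization loop: iterate SOLVE -> ESTIMATE ->
# MARK/REFINE until the weighted error estimate drops below `tol`.
def initialize_mesh(tol, solve, estimate, refine, mesh0, max_iter=50):
    mesh = mesh0
    for _ in range(max_iter):
        u = solve(mesh)            # step (i)
        mu = estimate(mesh, u)     # step (ii)
        if mu < tol:               # step (iii): stopping criterion
            return mesh
        mesh = refine(mesh)        # steps (iv)-(v), marking suppressed here
    return mesh

# Toy model: 'mesh' is a refinement level; the estimate halves per level.
mesh = initialize_mesh(
    tol=0.1,
    solve=lambda m: None,
    estimate=lambda m, u: 1.0 * 0.5 ** m,
    refine=lambda m: m + 1,
    mesh0=0,
)
```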
Results presented in the next section will
show that a well-designed multilevel strategy can give significant efficiency gains
compared to a \rblx{single-level SC-FEM} algorithm if the parameterized problem has local features
that \abrevx{vary} in spatial location across the parameter~space.
\section{Numerical experiments}\label{sec:results}
Results for three test cases are discussed in this section of the paper.
The performance of our adaptive SC multilevel algorithm will be directly compared with that of the
single-level algorithm discussed in \partI\ to see if any gains in efficiency
can be realized.
The first two test cases are identical to those discussed in \S5 of \partI. The third test case is a refinement of
the {\it one peak} test problem that was introduced by Kornhuber \& Youett~\cite{ky18} in order to assess
the efficiency of adaptive Monte Carlo methods.
The single-level refinement strategy that is the basis for comparison is the obvious and
natural simplification of the multilevel strategy described in \S\ref{sec:scfem}.
Thus, at each step $\ell$ of the process,
we compute the error indicators associated with the Galerkin approximations $u_{\ell\mathbf{z}}$
(steps (ii)--(iii) of Algorithm~\ref{algorithmx}).
The marking criterion in Algorithm~\ref{algorithmm} then
identifies the refinement type by comparing the (global) spatial error estimate
$\bar\mu_{\ell} := \| \mu_{\ell \mathbf{z}} \norm{L_{\ell \mathbf{z}}}{L^2_{\pi}(\Gamma)} \|_{\ell_1}$ with
the parametric error estimate $\bar\tau_{\ell} := \| \abrevx{\widetilde\tau_{\ell \nnu}} \|_{\ell_1}$.
To effect a spatial refinement in the single-level case,
we use \abrevx{a D{\" o}rfler-type} marking with \rbl{threshold} $\theta_\mathbb{X}$ to produce sets of marked elements
from the (single) grid $\mathcal{T}_\ell$.
A refined triangulation $\mathcal{T}_{\ell+1}$ can then be constructed by refining the elements in the {\it union}
of these individual sets $\mathcal{M}_{\ell \mathbf{z}}$ ($\mathbf{z} \in \Colpts_\ell$) of marked elements.
\subsection{Test case I: affine coefficient data}\label{sec:affineresults}
We set $f = 1$ and look to solve the first model problem on the square-shaped
domain $D = (0, 1)^2$ with random field coefficient given by
\begin{align} \label{kl}
a(x, \mathbf{y}) = a_0(x) + \sum_{m = 1}^M a_m(x) \, y_m,\quad
x \in D,\ \mathbf{y} \in \Gamma.
\end{align}
The specific problem we consider is taken from~\cite{bs16}.
The parameters $y_m$ in \eqref{kl} are the images of uniformly
distributed independent mean-zero random variables, so that $\pi_m = \pi_m(y_m)$ is
the associated probability measure on $\Gamma_m = [-1,1]$.
The expansion coefficients $a_m$, $m \in \mathbb{N}_0$, are chosen
to represent planar Fourier modes of increasing total order.
Thus, we fix $a_0(x) := 1$ and set
\begin{equation}
\label{diff_coeff_Fourier}
a_m(x) := \alpha_m \cos(2\pi\beta_1(m)\,x_1) \cos(2\pi\beta_2(m)\,x_2),\ x=(x_1,x_2)
\in (0,1) \times (0,1).
\end{equation}
The modes are ordered so that for any $m \in \mathbb{N}$,
\begin{equation}
\beta_1(m) = m - k(m)(k(m)+1)/2\ \ \hbox{and}\ \ \beta_2(m) =k(m) - \beta_1(m)
\end{equation}
with $k(m) = \lfloor -1/2 + \sqrt{1/4+2m}\rfloor$ and the amplitude coefficients are
constructed so that
$\alpha_m = \bar\alpha m^{-2}$ with $ \bar\alpha = 0.547$.
This is referred to as the {\it slow decay case} in~\cite{bs16}.
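The mode ordering and amplitudes are easily tabulated; the short Python routine below (illustrative only) reproduces $k(m)$, $\beta_1(m)$, $\beta_2(m)$ and $\alpha_m$ for the first few modes:

```python
import math

def mode_indices(m):
    """Fourier mode orders (beta1(m), beta2(m)); their sum is the total order k(m)."""
    k = math.floor(-0.5 + math.sqrt(0.25 + 2 * m))
    beta1 = m - k * (k + 1) // 2
    return beta1, k - beta1

def amplitude(m, abar=0.547):
    """Amplitude coefficients alpha_m for the slow decay case."""
    return abar * m ** -2

# first few modes, ordered by increasing total order beta1 + beta2
modes = [mode_indices(m) for m in range(1, 6)]
```

Running this confirms the intended ordering by increasing total order: the first five modes are $(0,1)$, $(1,0)$, $(0,2)$, $(1,1)$, $(2,0)$.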
\begin{figure}[!pth]
\centering
\includegraphics[width = 0.45\textwidth]{{point1.eps}}
\includegraphics[width = 0.45\textwidth]{{grid1.eps}}
\includegraphics[width = 0.45\textwidth]{{point10.eps}}
\includegraphics[width = 0.45\textwidth]{{grid10.eps}}
\includegraphics[width = 0.45\textwidth]{{point12.eps}}
\includegraphics[width = 0.45\textwidth]{{grid12.eps}}
\caption{Selected collocation point (left) and corresponding spatial mesh (right) that is generated by the
multilevel adaptive strategy for test case I.}
\label{fig:sc4.1meshes}
\end{figure}
A reference solution to this problem with $M$ set to 4 is illustrated in Fig.~\rblx{1} in~\partI.
This solution was generated by running the {\it single-level} algorithm with
the {\tt error tolerance} set to {\tt 6e-3}, starting
from a uniform initial mesh with 81 vertices and a sparse grid
consisting of a single collocation point. The threshold parameter $\vartheta$
was set to {\tt 1}, and the marking parameters $\theta_\mathbb{X}$ and $\theta_\Colpts$ were both set to {\tt 0.3}.
The error tolerance was satisfied after 25 iterations comprising
{20} spatial refinement steps and {5} parametric refinement steps.
There were {13} Clenshaw--Curtis sparse grid collocation points when
the iteration terminated. These points are visualized in Fig.~\ref{fig:sc4.1meshes}.
The associated sparse grid indices are listed in Table~1 in \partI.
The final spatial mesh is shown in Fig.~2 in~\partI.
The number of vertices in this mesh is {\tt 16,473} so the total number of
degrees of freedom when the error tolerance was satisfied when running the
single-level algorithm was {\tt 214,149}.
\begin{figure}[!thp]
\centering
\includegraphics[width = 0.65\textwidth]{{sc5.1errors.eps}}
\includegraphics[width = 0.65\textwidth]{{sc4.1errors.eps}}
\caption{Evolution of the single-level error estimates (top) and the
multilevel error estimates (bottom)
for test case I with error tolerance set to {\tt 6e-3}.}
\label{fig:sc4.1errors}
\end{figure}
The first test of the {\it multilevel} algorithm is to repeat the above experiment;
that is, starting from the same point with identical marking parameters
$\vartheta=1$, $\theta_\mathbb{X} = \theta_\Colpts = 0.3$
\rblx{(we also set the marking parameter $\theta$ in Algorithm~\ref{meshalgorithm}
to the same value as $\theta_\mathbb{X}$ in all our experiments)}.
Specifying the same error tolerance {\tt 6e-3} led to
the same 13 collocation points being activated, in this case after 26 rather
than 25 iterations.
A comparison of the single-level and multilevel error \rblx{estimates}
is given in Fig.~\ref{fig:sc4.1errors}. While the final number of degrees of freedom
is reduced from {\tt 214,149} to {\tt 137,943} in the multilevel case, the
{\it rate of convergence} is still far from optimal (close to $O({\rm dof}^{-1/3})$).
The degree of refinement of the final meshes associated with some specific
collocation points is illustrated in Fig.~\ref{fig:sc4.1meshes}. The two finest meshes
had over {\tt 32,000} vertices and are associated with the pair of collocation points
that are activated by the sparse grid index {\tt 3 1 1 1} that is introduced at the
final iteration (one of \rblx{these collocation points and the corresponding mesh are} shown in the bottom plot).
The two coarsest meshes had close to {\tt 3,600} vertices; one of \rblx{these} is shown in the middle plot.
The \abrevx{mesh} that is associated with the mean field $a_0=1$ has {\tt 11,157} vertices
and is shown in the topmost plot. As might be anticipated, the level of refinement
of this \abrevx{mesh} is less than that of the final \abrevx{mesh} that is generated by the
single-level~strategy.
\abrevx{It is worth pointing out that in our extensive experimentations with other choices of marking parameters
the adaptive multilevel SC-FEM algorithm did not exhibit a faster convergence rate compared to that of the single-level algorithm
for the respective choice of marking parameters.
This is in contrast to SGFEM, where multilevel adaptivity always results in a faster convergence rate than that of the single-level counterpart
for problems with affine-parametric coefficients including the test case considered here; see~\cite{egsz14, cpb18+, bpr2020+, bpr2021+}.
Furthermore, for this class of problems, the analysis in~\cite{bpr2021+} has shown that,
under an appropriate saturation assumption,
the adaptive multilevel SGFEM algorithm driven by a two-level a posteriori error estimator and employing a D{\" o}rfler-type marking
on the joint set of spatial and parametric indicators yields optimal convergence rates with respect to the number of degrees of freedom
in the underlying multilevel approximation space.
}
\subsection{Test case II: nonaffine coefficient data}\label{sec:nonaffineresults}
In this case, we set $f = 1$ and look to solve the first model problem
on the L-shaped domain $D = (-1, 1)^2\backslash (-1, 0]^2$ with
coefficient $a(x, \mathbf{y}) = \exp(h(x, \mathbf{y}))$,
where the exponent field $h(x, \mathbf{y})$ has affine dependence on
parameters $y_m$ that are images of uniformly
distributed independent mean-zero random variables,
\begin{align} \label{kll}
h(x, \mathbf{y}) = h_0(x) + \sum_{m = 1}^{\rbl{4}} h_m(x) \, y_m,\quad
x \in D,\ \mathbf{y} \in \Gamma.
\end{align}
We further specify $h_0(x) \,{=}\, 1$ and
$h_m(x) = \sqrt{\lambda_m} \varphi_m(x)$
($m = 1,\ldots, \rbl{4}$). Here $\{(\lambda_m, \varphi_m)\}_{m=1}^\infty$
are the eigenpairs of the integral operator
$\int_{\abrevx{ D \cup (-1,0]^2} } \hbox{\rm Cov}[\rblx{h}](x, x') \varphi(x')\, \hbox{d} x' $
with a synthetic covariance function given by
\begin{align} \label{cov}
\hbox{\rm Cov}[\rblx{h}](x, x')
= \sigma^2 \exp
\left( -| x_1 - x_1' | - | x_2 - x_2' | \right).
\end{align}
The standard deviation $\sigma$ is set to 1.5 in order
to mirror the most challenging test case in \S5.2 of \partI.
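Since the covariance \eqref{cov} is separable, its eigenvalues have the form $\sigma^2 \lambda_i \lambda_j$, where $\{\lambda_i\}$ are the eigenvalues of the one-dimensional kernel $\exp(-|x - x'|)$ on $[-1,1]$. These are easily approximated; the following Python fragment (a simple Nystr{\"o}m-type discretization, independent of the implementation used here) computes them:

```python
import numpy as np

n = 400
x = -1.0 + 2.0 * (np.arange(n) + 0.5) / n      # midpoint-rule nodes on [-1, 1]
w = 2.0 / n                                     # uniform quadrature weight
K = np.exp(-np.abs(x[:, None] - x[None, :]))    # 1-D exponential kernel
lam = np.sort(np.linalg.eigvalsh(w * K))[::-1]  # approximate 1-D eigenvalues

sigma = 1.5
lam2d = sigma**2 * np.outer(lam, lam)           # eigenvalues of the 2-D covariance
```

A useful sanity check is the trace identity: the one-dimensional eigenvalues sum to $\int_{-1}^{1} 1 \,\mathrm{d}x = 2$, so the two-dimensional ones sum to $4\sigma^2$.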
The convergence of the multilevel adaptive algorithm, starting with one collocation point and
with the initial grid shown in Fig.~7 of \partI\ is compared with the single-level result
in Fig.~\ref{fig:sc4.2errors}. The multilevel algorithm is again run
using the marking parameters $\theta_\mathbb{X} = \theta_\Colpts = 0.3$
specified in \partI\ and the same error tolerance, that is {\tt 6e-3}.
\begin{figure}[!thp]
\centering
\includegraphics[width = 0.65\textwidth]{{sc5.2cerrors.eps}}
\includegraphics[width = 0.65\textwidth]{{sc4.2cerrors.eps}}
\caption{Evolution of the single-level error estimates (top) and the
multilevel error estimates (bottom)
for test case II with error tolerance set to {\tt 6e-3}.}
\label{fig:sc4.2errors}
\end{figure}
These results reinforce the view that performance gains from the multilevel
strategy are difficult to realize. While the number of active collocation points
is smaller in the multilevel case (51 vs 57; the sparse grid index {\tt 2 1 2 2}
added at the final single-level iteration is not included), the total number
of degrees of freedom when the tolerance is reached is almost identical
({\tt 2,212,393} vs {\tt 2,190,847}). The issue here is that meshes associated with
mixed indices with multiple active dimensions have multiple features that
require resolution. Thus, the most refined grid associated with the index that is
introduced in the final parametric enhancement has {\tt 428,972} vertices.
This is significantly more refined than the final grid that is generated
in the single-level implementation, which had {\tt 37,133} vertices. This fact,
together with the increase in the number of adaptive steps taken (37 vs 31) means
that the overall computation time is significantly increased when the
multilevel strategy is adopted.
\abrevx{The plots in Fig.~\ref{fig:sc4.2errors} also show that the use of the \emph{coarsest-mesh} approximations
for computing the parametric error estimates $\tau_{\ell}$ in~\eqref{eq:estimate:8} does not affect
the overall effectivity of the error estimation in the multilevel algorithm.
Indeed, in the single-level algorithm (where parametric error estimates employ the (single) \emph{refined mesh}
underlying the current SC-FEM solution~$u_{\ell}^{\rm SC}$),
the effectivity indices $\Theta_\ell$ computed\footnote{The effectivity indices are computed using a reference solution
as explained in~\cite{bsx21}, see equation~(42)~therein.}
at each iteration range between 1.047 and 1.296,
whereas for the multilevel algorithm they stay between 0.930 and~1.257.
}
\subsection{Test case III: one peak problem}\label{sec:onepeakresults}
We are looking to solve the Poisson equation $-\nabla^2 u = f $ on the square domain
$D=(-4,4)\times(-4,4)$ with Dirichlet boundary data $u=g$.
The source term $f$ and boundary data are {\it uncertain} and are
parameterized by $\mathbf{y} = (y_1,y_2)$, representing the image of a pair
of independent random variables with $y_j \sim {U}[-1,1]$.
In the vanilla case discussed in~\cite{ky18}, the same test problem is \rblx{posed}
on the unit domain $I=(-1,1)\times(-1,1)$ with $y_j \sim {U}[-1/4,1/4]$.
The source term $f$ and the boundary data $g$ are chosen so that the problem
has a specific pathwise solution given by
\begin{align} \label{peaksolv}
u(x, \mathbf{y}) & = \exp ( - \beta \{ (x_1 -y_1)^2 + (x_2 -y_2)^2 \} ),
\end{align}
where a scaling factor $\beta=50$ is chosen to generate
a highly localized Gaussian profile centered at the uncertain spatial location $(y_1,y_2)$.
\begin{figure}[!thp]
\centering
\includegraphics[width = 0.8\textwidth]{{onepeaksols.eps}}
\caption{One peak problem solutions on the unit domain: $\alpha=1.54$ (top), $\alpha=9.46$ (bottom).}
\label{fig:sc4.3sols}
\end{figure}
In the paper~\cite{LangSS20}, the one peak test problem defined on the unit domain
is made {\it anisotropic} by scaling the solution in the first coordinate
direction by a linear function $\alpha(y_1)= 18 y_1 + 11/2$ so that
$\alpha$ takes values in the interval $[\rblx{1},10]$. The corresponding pathwise solution
is then given by
\begin{align} \label{peaksol}
u(x,\mathbf{y}) &= \exp ( - \rblx{50} \{ \alpha(y_1) (x_1 -y_1)^2 + (x_2 -y_2)^2 \} ).
\end{align}
\rblx{The solution \eqref{peaksol} is generated by specifying an uncertain forcing function
\begin{subequations}
\begin{align} \label{eq:dscaled}
f(x,\mathbf{y}) &= d(x_1,x_2,y_1,y_2) \cdot \exp ( - \beta \{ \alpha(y_1) (x_1 -y_1)^2 + (x_2 -y_2)^2 \} )
\\
\intertext{with}
\label{eq:fscaled}
d(x_1,x_2,y_1,y_2) &= -4\beta^2 \left \{ \alpha^2(y_1) (x_1 -y_1)^2 + (x_2 -y_2)^2 \right \}
+ 2\beta (\alpha(y_1)+1) .
\end{align}
\end{subequations}
Realisations of the reference solution~\eqref{peaksol} are shown
at two distinct sample points in Fig.~\ref{fig:sc4.3sols}.}
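The consistency of the data \eqref{eq:dscaled}--\eqref{eq:fscaled} with the pathwise solution \eqref{peaksol} can be verified symbolically; the following Python check (ours, not taken from~\cite{LangSS20}) confirms that $-\nabla^2 u = d\,u$ on the unit domain:

```python
import sympy as sp

x1, x2, y1, y2, beta = sp.symbols('x1 x2 y1 y2 beta')
alpha = 18 * y1 + sp.Rational(11, 2)             # alpha(y1) on the unit domain
u = sp.exp(-beta * (alpha * (x1 - y1)**2 + (x2 - y2)**2))

# f = -Laplacian(u) should equal d * u with d as in the paper
f = -sp.diff(u, x1, 2) - sp.diff(u, x2, 2)
d = (-4 * beta**2 * (alpha**2 * (x1 - y1)**2 + (x2 - y2)**2)
     + 2 * beta * (alpha + 1))
residual = sp.simplify(f - d * u)                # identically zero
```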
The anisotropy introduced by the \rblx{scaling with $\alpha$} is a clear feature.
Our specific goal is to compute the following quantity of interest (QoI)
\begin{align} \label{QoIref}
\Bbb{E}\left[\phi_{\rblx{I}}(u)\right] &= \int_{\rblx{\left[-\frac 14, \frac 14\right]^2}} \int_I u^2(x,\mathbf{y}) \, \mathrm{d}x \, \rblx{\mathrm{d} \pi(\mathbf{y})} ,
\end{align}
where $\phi_{\rblx{I}}(u)= \int_I u^2(x,\cdot) \, \mathrm{d}x$.
The choice $\beta=50$ is then helpful for two reasons:
\begin{itemize}
\item
The Dirichlet boundary condition ($u$ satisfying \eqref{peaksol} on \rblx{$\partial I$}) may be replaced
without significant loss of accuracy by the numerical approximation
$ u_{\bullet \mathbf{z}}=0$ on \rblx{$\partial I$}.
\item
A reference value (accurate to more than 10 digits)
\begin{align} \label{QoIref:10:digits}
\Bbb{E}\left[\phi_{\rblx{I}}(u)\right]\approx Q := {1\over 9} \cdot (\sqrt{10 }-1)\cdot {\pi\over \beta} = 0.015 095 545 \ldots
\end{align}
may be readily computed\rblx{; see~\cite[Appendix]{LangSS20}} for details.
\end{itemize}
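The reference value \eqref{QoIref:10:digits} can be reproduced independently: for $u$ in \eqref{peaksol}, $\int_{\mathbb{R}^2} u^2 \,\mathrm{d}x = \pi/(2\beta\sqrt{\alpha})$ (the truncation to $I$ is exponentially small for $\beta=50$), and since $\alpha$ is uniformly distributed on $[1,10]$, the average of $1/\sqrt{\alpha}$ is $\frac{2}{9}(\sqrt{10}-1)$. A short Python sanity check (ours, not from~\cite{LangSS20}):

```python
import numpy as np

beta = 50.0
Q = (np.sqrt(10.0) - 1.0) / 9.0 * np.pi / beta   # closed-form reference value

# independent check: E[1/sqrt(alpha)] by Gauss-Legendre quadrature, alpha ~ U[1, 10]
nodes, weights = np.polynomial.legendre.leggauss(30)
a = 5.5 + 4.5 * nodes                            # map [-1, 1] -> [1, 10]
E_inv_sqrt = np.sum(weights * 0.5 / np.sqrt(a))  # E[alpha^(-1/2)]
Q_quad = np.pi / (2.0 * beta) * E_inv_sqrt
```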
\begin{figure}[!thp]
\centering
\includegraphics[width = 0.8\textwidth]{{sc4.3refsol.eps}}
\caption{Reference solution for test case III.}
\label{fig:sc4.3refsol}
\end{figure}
\noindent
\rblx{We compute estimates of the QoI by solving the problem~\eqref{eq:pde2:strong}
using the coordinate transformations $x_j\leftarrow 4 x_j$ and $y_j\leftarrow 4 y_j$ ($j = 1,2$).
In this case, the pathwise solution on the scaled domain
$D \times \Gamma$ is given by~\eqref{peaksol} by specifying $\beta = 50/16$
and $\alpha(y_1)= (9 y_1 + 11)/2$.
Moreover, the QoI in~\eqref{QoIref} (and its reference value given in~\eqref{QoIref:10:digits})}
can be estimated within Algorithm~\ref{algorithmx} by computing \rblx{the following~quantity:}
\begin{align*}
\rblx{\frac {1}{16}}\, \Bbb{E}\left[\phi_{\rblx{D}}(\rblx{u_{\ell}^{\rm SC}})\right] &=
{1\over 16}\, \int_\Gamma \int_D \rblx{\big( u_{\ell}^{\rm SC}(x,\mathbf{y}) \big)^2}\, \mathrm{d}x \, \rblx{\mathrm{d} \pi(\mathbf{y})} .
\end{align*}
A reference solution to the scaled problem is shown in Fig.~\ref{fig:sc4.3refsol}.
\begin{figure}[!thp]
\centering
\includegraphics[width = 0.8\textwidth]{{SLvsMLx.eps}}
\caption{Evolution of the \rblx{single-level} and multilevel error estimates
for the one peak test problem with error tolerance set to {\tt 1e-1}.}
\label{fig:sc4.3errors}
\end{figure}
A comparison of the \rblx{single-level} and multilevel \rblx{SC-FEM} algorithms when
applied to the one peak test problem is given by the \rblx{evolution of error estimates} in Fig.~\ref{fig:sc4.3errors}.
The single-level algorithm reached the tolerance in 37 steps with 169 active collocation points and the final
approximation had {\tt 42,961,659} degrees of freedom. The multilevel algorithm proved to be
much more efficient. The same tolerance was reached in 34 steps with 153 collocation points
in the final approximation space\rblx{. Crucially,} each \rblx{collocation point is} associated with a mesh
that is locally refined
in the vicinity of \rblx{the respective point in~$D$} (as illustrated in Fig.~\ref{fig:sc4.3meshes}).
\rblx{In contrast, the final mesh generated by the adaptive single-level SC-FEM}
has refinement everywhere in \rblx{a larger region corresponding to the union of supports of all sampled solutions}.
When the error tolerance \rblx{was} reached, both algorithms gave estimates of the QoI that agreed with the
reference value \rblx{to five} decimal places (0.015092 for the single-level case vs 0.015087 for the multilevel case).
\begin{figure}[!pth]
\centering
\includegraphics[width = 0.32\textwidth]{{sc4.3slmesh.eps}}
\includegraphics[width = 0.32\textwidth]{{sc4.3mlmesh1.eps}}
\includegraphics[width = 0.32\textwidth]{{sc4.3mlmesh21.eps}}
\caption{Single-level mesh (left) and meshes associated with the central
collocation point (middle) and top right corner point (right)
when the tolerance is reached for test case III.}
\label{fig:sc4.3meshes}
\end{figure}
The upshot of the effective use of tailored refinement is an order of magnitude decrease in the
overall computation time. The total number of degrees of freedom in the multilevel case
was \rblx{{\tt 2,620,343}---a} factor of \rblx{16} reduction overall. Looking at the
associated rates of convergence we see that the optimal rate $O({\rm dof}^{-1/2})$
is recovered in the multilevel case. We anticipate that similar performance gains
will be realized whenever a problem has local features that can be effectively
resolved using sample-dependent meshes.
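A quick arithmetic check of the two totals of degrees of freedom reported above confirms the quoted reduction factor:

```python
sl_dof = 42_961_659  # single-level total degrees of freedom, as reported
ml_dof = 2_620_343   # multilevel total degrees of freedom, as reported
print(f"reduction factor: {sl_dof / ml_dof:.1f}")  # about 16.4
```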
We have also solved the one peak test problem using an efficient adaptive stochastic Galerkin approximation strategy.
While the linear algebra associated with the
Galerkin formulation is decoupled in \abrevx{this} case, the computational overhead of evaluating
the right-hand side vector is a significant limiting factor in terms of the relative efficiency.
The overall CPU time taken to compute 4 digits in the QoI using adaptive \abrevx{stochastic Galerkin FEM} is
comparable to the CPU time taken to compute 5 digits using the multilevel \abrevx{SC-FEM}~strategy.
\section{Conclusions}\label{sec:conclusions}
Adaptive methods hold the key to efficient approximation of solutions to linear elliptic partial differential \rblx{equations}
with random data. The numerical results presented in this series of two papers demonstrate the effectiveness
and the robustness of our novel SC-FEM error estimation strategy, as well as the utility of the error indicators guiding
the adaptive refinement process. Our results also suggest that optimal rates of convergence are more
difficult to achieve in a sparse grid collocation framework than in a
multilevel stochastic Galerkin framework. It is demonstrated herein that
the overhead of generating specially tailored sample-dependent meshes can be worthwhile
and optimal convergence rates can be recovered when the solutions to the sampled problems have
local features in space. The single-level strategy discussed in part~I of this work is, however, likely to be more
efficient (certainly in terms of overall CPU time) when a single adaptively refined grid can
adequately resolve spatial features associated with solutions to a range of individually sampled problems.
\bibliographystyle{siam}
\section{Introduction}
In formal language theory, the idea of homomorphic characterization of a language family refers
to the definition of all and only the languages in that family, starting from languages of simpler families and applying an alphabetic transformation.
Such idea has been applied to many different language families, from the regular to the recursively enumerable ones,
and also to non-textual languages, such as the two-dimensional picture languages~\cite{GiammRestivo1997}.
Our focus here is on context-free (\emph{CF}) languages,
but a short reference to the earlier homomorphic characterization of regular languages, known as Medvedev theorem~\cite{Medvedev1964} (also in~\cite{Eilenberg74}), is useful to set the ground.
The regular language family coincides with the family obtained by applying an alphabetic letter-to-letter homomorphism to the simpler family
at that time named \emph{definite events} and presently known as {\em local}, and also referred to as \emph{strictly locally testable} languages of width 2, shortened to 2-\emph{SLT} (e.g.,~\cite{Caron2000}).
\par
Then, Chomsky and Sch{\"u}tzenberger~\cite{ChomskySchutz1963} stated the theorem, referred to as \emph{CST},
saying that the CF family coincides with the family obtained by the following two steps. First, we intersect a Dyck language $D$ over an alphabet consisting of brackets, and a 2-SLT language $R$. Second, we apply to the result an alphabetic homomorphism $h$, in formula $h(D \cap R)$, which maps some brackets to terminal letters and erases some others.
Therefore, a word $w\in D \cap R$ may be longer than its image $h(w)$.
\par
The original proof of CST considers a grammar in Chomsky Normal Form (CNF) and uses a Dyck alphabet made
by a distinct pair of brackets for each grammar rule, which makes the Dyck alphabet typically much larger than the terminal alphabet and dependent on the grammar size.
\par
In the almost contemporary variant by Stanley~\cite{Stanley1965} (also in Ginsburg~\cite{Ginsburg1966}), the Dyck alphabet is grammar-independent: it consists of the terminal alphabet, a marked copy thereof,
and four extra letters, two of them used as delimiters (i.e., brackets), the other two as unary codes.
In this variant, the homomorphism has to erase many more symbols than in the original version of CST.
The regular language is not 2-SLT, but it is immediate to prove that it is strictly locally testable,
by using a width parameter greater than two, depending linearly on the number of grammar rules.
\par
Then Berstel~\cite{Berstel79} (his Exercise 3.8) found that fewer symbols than in the original CST need to be erased by the homomorphism, if the grammar is in Greibach Normal Form. In that case, there exists a constant $k>0$ such that, for every word $w\in D \cap R$, the ratio of the lengths of $w$ and $h(w)$ does not exceed $k$. Later, Berstel and Boasson~\cite{DBLP:journals/fuin/BerstelB96}, and independently Okhotin~\cite{Okhotin2012}, proved a non-erasing variant of CST by using grammars in
Double Greibach Normal Form, \emph{DGNF} (see e.g.~\cite{Engelfriet1992}).
\par
In the statements of~\cite{Berstel79}, \cite{DBLP:journals/fuin/BerstelB96} and~\cite{Okhotin2012},
however, the Dyck alphabet depends on the grammar size.
Most formal language books include statements and proofs of CST essentially similar to the early ones.
\par
To sum up, we may classify the existing versions of CST with respect to two primary parameters:
the property of being erasing versus nonerasing, and the grammar-dependence versus grammar-independence of the Dyck alphabet, as shown in the following table:
\begin{center}
\begin{tabular}{|l||p{4.1cm}|p{3.5cm}|}\hline
& \multicolumn{2}{|c|}{Dyck Alphabet}\\ \hline\hline
Homomorphism & \emph{grammar-dependent} & \emph{grammar-independent } \\\hline
\emph{erasing } & Chomsky and Sch{\"u}tzenberger~\cite{ChomskySchutz1963}, & Stanley~\cite{Stanley1965}\\
&Berstel~\cite{Berstel79}&\\
\hline
\emph{nonerasing} & Berstel and Boasson~\cite{DBLP:journals/fuin/BerstelB96}, Okhotin~\cite{Okhotin2012} &
\\\hline
\end{tabular}
\end{center}
This paper fills the empty case of the table. It presents a new non-erasing version of CST that uses a Dyck alphabet polynomially dependent on the terminal alphabet size and independent from the grammar size. Besides the two parameters of the table, a third aspect may be considered: whether the regular language is strictly locally testable or not and, in the former case, whether its width is two or greater. Actually, this aspect is correlated with the alphabet choice, because, if the alphabet is grammar-independent, the grammar complexity, which cannot be encoded inside the Dyck alphabet, must affect the size of the regular language, in particular its SLT width.
We show that the width parameter is logarithmically related to grammar complexity, both in the erasing and the non-erasing cases.
\par
In our previous communication~\cite{DBLP:conf/lata/Crespi-Reghizzi16} we proved by means of standard constructions for pushdown automata, grammars and sequential transductions
(without any optimization effort) that the Dyck alphabet needed by our version of CST is polynomially related to the original alphabet. However, we could not give a precise upper bound.
Here we develop some new grammar transformations (in particular a new normal form that we call \emph{quotiented}) and analyze their complexity, to obtain a precise, but still pessimistic, upper bound on the exponent of the polynomial dependence between the two alphabets.
As a side result, we improve the known transformation from CNF to the \emph{generalized} DGNF~\cite{DBLP:journals/ipl/Yoshinaka09,DBLP:journals/tcs/BlattnerG82} in the relevant case here, namely when the two parameters of the DGNF are equal, i.e., when the terminal prefix and suffix of every production right-hand side have the same length.
\par
The Dyck alphabet we use, though independent from the grammar size, is much larger than the original alphabet. At the end, we show that a substantial reduction of alphabet size is easy in the case of the linear grammars in DGNF. For that we exploit the recent extension~\cite{DBLP:journals/ijfcs/Crespi-ReghizziP12} of Medvedev homomorphic characterization of regular languages, which reduces the alphabet size at the expense of the SLT width.
\par
The enduring popularity of CST can be ascribed to the elegant combination of two structural aspects of CF languages, namely the free well-nesting of brackets, and a simple finite-state control on the adjacency of brackets.
Taking inspiration from CST, many homomorphic characterizations for other language families have been proposed. A commented historical bibliography is in \cite{ScLegacyMPSchutz:Crespi-Reghizzi16}; we mention one example, the case of the slender CF languages \cite{DBLP:journals/actaC/DomosiO01}.
\par
Paper organization: Sect.~\ref{SectionPreliminaries} lists the basic definitions, recalls some relevant CST formulations, and proves a trade-off between the Dyck alphabet size and the regular language size.
Sect.~\ref{SectHomCharSuitableLengthLang} proves CST using a grammar-independent alphabet and a non-erasing homomorphism; it first introduces and studies the size of the grammar normal forms needed, then it develops the main proof, and at last presents an example. Sect.~\ref{SectHomCharBasedOnMedvedev} states the Dyck alphabet size for CF grammars in the general case, and shows that a much smaller Dyck alphabet suffices for the linear CF grammars in DGNF. The conclusion mentions directions for further research.
\section{Preliminaries and basic properties}\label{SectionPreliminaries}
For brevity, we omit most classical definitions (for which we refer primarily to~\cite{Harrison1978} and~\cite{Berstel79}) and just list our notation.
Let $\Sigma$ denote a finite terminal alphabet and $\varepsilon$ the empty word. For a word $x$, $|x|$ denotes the length of $x$;
the $i$-th letter of $x$ is $x(i)$, $1\le i\le |x|$, i.e., $x = x(1)x(2) \dots x(|x|)$.
For every integer $r>0$, the language $\Sigma^{<r}$ is defined as $\{x \in \Sigma^* \mid |x| < r\}$, and similarly for $\Sigma^{\leq r}$ and $\Sigma^r$.
Notice that $|\Sigma^{<r}| \in O(|\Sigma|^r)$. The reversal of a word $x$ is denoted by $x^R= x(|x|)\dots x(2)x(1) $. The right quotient of a language $L\subseteq \Sigma^*$ by a word $w \in \Sigma^*$ is denoted by $L_{/w}= \{x \mid xw \in L \}$.
\par
For finite alphabets $\Delta, \Gamma$, an \emph{alphabetic homomorphism} is a mapping $h: \Delta \to \Gamma^*$; if, for some $d\in \Delta$, $h(d)=\varepsilon$, then $h$ is called \emph{erasing}, while it is called strict or \emph{letter-to-letter} if, for every $d\in \Delta$, $h(d)$ is in $\Gamma$.
\par
Given a nondeterministic finite automaton (NFA) $A$, the language recognized by $A$ is denoted by $L(A)$.
The \emph{size} of a regular language $R=L(A)$, $\textit{size}(R)$, is the number of states of a minimal NFA that recognizes the language.
\par
A \emph{context-free} (CF) \emph{grammar} is a 4-tuple $G=(\Sigma, N, P, S)$ where $N$ is the nonterminal alphabet, $P\subseteq N \times (\Sigma\cup N)^*$ is the rule set, and $S\in N$ is the axiom.
Since we only deal with context-free grammars and languages, we often drop the word ``context-free''. For simplicity, in this paper we define the size of $G$ to be the number $|N|$ of the nonterminals of $G$.
The language generated by $G$ starting from a nonterminal $X \in N$ is $L(G,X)$; we shorten $L(G,S)$ into $L(G)$.
A word in $L(G)$ is also called a \emph{sentence}.
\par
A grammar is \emph{linear} if the right side of each rule contains at most one nonterminal symbol.
\par
A grammar is in \emph{Chomsky normal form} (CNF) if the right side of each rule is in $\Sigma$ or in $NN$.
\par
A grammar $G=(\Sigma, N, P, S)$ is in {\em Double Greibach normal form} (DGNF) if the right side of each rule is in $\Sigma$ or in $\Sigma N^* \Sigma$.
It is in {\em cubic} DGNF if it is in DGNF and there are at most three nonterminals in the right side of every rule.
A generalization of DGNF is the $(m,n)$-GNF (see, e.g.,
\cite{DBLP:journals/ipl/Yoshinaka09,DBLP:journals/tcs/BlattnerG82}) where the right-hand side of each rule is in $\Sigma^m N^* \Sigma^n$ or in $\Sigma^{< m+n}$, for $m,n\ge 1$.
\par
The family SLT of {\em strictly locally testable} languages~\cite{McPa71} is defined next; we deal only with $\varepsilon$-free languages.
For every word $w\in \Sigma^+$, for all $k\ge 2$, let $i_k(w)$ and
$t_k(w)$ denote the prefix and, resp., the suffix of $w$ of
length $k$ if $|w|\ge k$, or $w$ itself if $|w|<k$. For $k\le |w|$, let $f_k(w)$
denote the set of factors of $w$ of length $k$.
Extend $i_k, t_k, f_k$ to languages as usual.
\begin{definition}\label{defk-SLT}
Let $k\geq 2$. A language $L$ is $k$-{\em strictly locally testable} ($k$-\emph{SLT}),
if there exist finite sets
$W \subseteq\Sigma \cup \Sigma^2 \cup \dots \cup \Sigma ^{k-1}$,
$I_{k-1},T_{k-1}\subseteq \Sigma ^{k-1}$, and
$F_{k}\subseteq \Sigma ^{k}$ such that for every $x\in \Sigma^+$,
$x\in L$ if, and only if,
\[
x \in W \;\lor \;
\big(i_{k-1}(x)\in I_{k-1}\, \wedge \,
t_{k-1}(x)\in T_{k-1} \,\wedge\,
f_{k}(x)\subseteq F_{k}\big).
\]
A language is {\em strictly locally testable (SLT)} if it is $k$-SLT for
some $k$, called its {\em width}.
\end{definition}
Value $k=2$ yields the well-known family of {\em local
languages}. The SLT family is
strictly included in the family of \emph{regular languages} and forms a hierarchy with respect to the width. The size of a $k$-SLT language over $\Sigma$ is in $O(|\Sigma|^k)$.
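To make Definition~\ref{defk-SLT} concrete, membership can be tested directly from the sets $W$, $I_{k-1}$, $T_{k-1}$, $F_k$; the following Python sketch is purely illustrative (the example sets below, describing the language $a^+b$, are a hypothetical instance, not taken from the text):

```python
def is_in_slt(x, k, W, I_prev, T_prev, F):
    """k-SLT membership test (cf. the definition above): accept x if it
    is an exceptional short word in W, or if its prefix/suffix of length
    k-1 lie in I_{k-1}/T_{k-1} and every length-k factor lies in F_k."""
    if x in W:
        return True
    if len(x) < k - 1:
        return False
    factors = {x[i:i + k] for i in range(len(x) - k + 1)}
    return x[:k - 1] in I_prev and x[-(k - 1):] in T_prev and factors <= F

# Hypothetical 2-SLT instance for the language a+b:
W, I1, T1, F2 = set(), {"a"}, {"b"}, {"aa", "ab"}
print(is_in_slt("aaab", 2, W, I1, T1, F2))  # True
print(is_in_slt("ba", 2, W, I1, T1, F2))    # False
```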
\subsection{Past statements of CST}\label{subsubsectPastCSTs}
The following notation for Dyck alphabets and languages is from~\cite{Okhotin2012}.
For any finite set $X$, the \emph{Dyck alphabet} is the set, denoted by $\Omega_X$,
of brackets labeled with elements of $X$:
\[
\Omega_X = \left\{ \, [_x \, \mid x \in X \right\} \,\cup\, \left\{ \, ]_x \, \mid x \in X \right\}.
\]
The \emph{Dyck language} $D_X \subset{} \Omega^*_X$ is generated by the following grammar:
\begin{equation}
S \to [_x \, S\, ]_x \text{ for each } x \in X , \quad S \to SS, \quad S \to \varepsilon
\label{eqDyckGra}
\end{equation}
The notation $\Omega_X$ for the Dyck alphabet should not be confused with the asymptotic lower bound notation $\varOmega$, which is also used later in the paper.
\par
Let $k= |X|$. Clearly, each Dyck language $D_X$ is isomorphic to $D_{ \{1,\dots,k\}}$. For brevity we write $\Omega_k$ and $D_k$ instead of $\Omega_{\{1,\dots,k\}}$ and $D_{ \{1,\dots,k\} }$, respectively.
\par
Since it is obviously impossible for an odd-length sentence to be the image of a Dyck sentence under a letter-to-letter homomorphism,
the CST variant by Okhotin (Th. 3 in~\cite{Okhotin2012}) modifies the Dyck language by adding \emph{neutral} symbols to its alphabet, and we do the same here.
\begin{definition}\label{defAlphabetDyckWithNeutral}
Let $q, l \geq 1$. We denote by $\Omega_{q,l}$ an alphabet containing $q$ pairs of brackets and $l$ distinct symbols, called \emph{neutral}~\cite{Okhotin2012}.
The \emph{Dyck language with neutral symbols} over alphabet $\Omega_{q,l}$, denoted by $D_{q,l}$,
is the language generated by the grammar in Eq. \eqref{eqDyckGra}, enriched with the rules $S \to c$, for each neutral symbol $c$ in $\Omega_{q,l}$.
\end{definition}
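Membership in a Dyck language with neutral symbols (Definition~\ref{defAlphabetDyckWithNeutral}) reduces to the familiar stack test; the sketch below (Python, with single-character brackets for readability) is only illustrative:

```python
def in_dyck(word, pairs, neutral=frozenset()):
    """Stack-based membership test for a Dyck language with neutral
    symbols: `pairs` maps each opening bracket to its closing bracket,
    `neutral` collects the neutral symbols."""
    closers = {close: open_ for open_, close in pairs.items()}
    stack = []
    for s in word:
        if s in pairs:                  # opening bracket: push
            stack.append(s)
        elif s in closers:              # closing bracket: must match the top
            if not stack or stack.pop() != closers[s]:
                return False
        elif s not in neutral:          # symbol outside the alphabet
            return False
    return not stack                    # every bracket must be closed

print(in_dyck("([])c()", {"(": ")", "[": "]"}, {"c"}))  # True
print(in_dyck("([)]", {"(": ")", "[": "]"}))            # False
```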
\par
We need two of the known statements of CST, the non-erasing version by Berstel and Boasson~\cite{DBLP:journals/fuin/BerstelB96}, which we present following Okhotin~\cite{Okhotin2012}, and the fixed alphabet version by Stanley~\cite{Stanley1965}: they are respectively reproduced as Th.~\ref{th-1-Okhotin} and Th.~\ref{ThStanleyNostro}.
Moreover, we prove in Th.~\ref{th-okhotinNostro} a simple statement about the exact number of brackets needed in Okhotin's construction and a slight generalization of his theorem, which will be useful in later proofs.
\begin{theorem}\label{th-1-Okhotin}
(Th. 1 of Okhotin~\cite{Okhotin2012})
A language $L\subseteq \left(\Sigma^2 \right)^*$ is context-free if, and only if, there exist an integer $k>0$, a regular language $R\subseteq \Omega_k^*$ and a letter-to-letter homomorphism $h : \Omega_k \to \Sigma$ such that $L = h\left( D_k \cap R \right)$.
\end{theorem}
\par
Following Okhotin's Lemma 1~\cite{Okhotin2012}, since $L\subseteq \left(\Sigma^2 \right)^*$, we can assume that a grammar for $L$ is in \emph{even}-DGNF, that is the grammar form such that the right side of each rule is in $\Sigma N^* \Sigma$.
\begin{theorem}\label{th-okhotinNostro}(Derived from the proof of Th. 1 of~\cite{Okhotin2012})
\begin{enumerate}\item Let $L\subseteq \left(\Sigma^2 \right)^*$ be the language defined by a grammar $G=(\Sigma, N, P, S)$ in even-DGNF, and let $k= |P|^2+|P|$. Then, there exist a regular language $R\subseteq \Omega_k^*$ and a letter-to-letter homomorphism $h : \Omega_k \to \Sigma$ such that $L = h\left( D_k \cap R \right)$.
\item Let $G=(\Sigma, N, P, S)$ be an even-DGNF grammar and let $q = |P|^2+|N|\cdot |P|$. Then, there exists a letter-to-letter homomorphism $h : \Omega_q \to \Sigma$ such that,
for all $X \in N$, there is a regular language $R_X$ satisfying the equality
$L(G,X) = h\left( D_q \cap R_X \right)$.
\end{enumerate}
\end{theorem}
\begin{proof}
To prove part (1), we revisit the proof in~\cite{Okhotin2012}.
We assume that $L$ is generated by a CNF grammar and we convert it into an even-DGNF grammar $G=(\Sigma,N,P,S)$.
Each rule has thus the form $A \to b C_1 \ldots C_n d$, where one can further assume that nonterminals $C_1, \dots, C_n$ are pairwise distinct.
The leftmost terminal $b$ and the rightmost terminal $d$ in this rule are replaced, respectively, by an open and a closed bracket.
Each bracket is labeled with a pair of rules of $P$ of the form
$\langle X \to \xi_1 A \xi_2, A \to b C_1 \ldots C_n d\rangle$,
the first component being the ``previous'' rule $X \to \xi_1 A \xi_2$ (where, as assumed, $\xi_1 \xi_2$ has no occurrence of $A$), and the second one the ``current'' rule
$A \to b C_1 \ldots C_n d$ itself. The idea is to represent the derivation step $X \Longrightarrow \xi_1 A \xi_2 \Longrightarrow \xi_1 b C_1 \ldots C_n d \xi_2$.
Since there is no rule deriving the axiom of the grammar, we need also a distinguished label, which can just be the axiom itself, in the first component of the leftmost open bracket, e.g.,
the label of the first opening bracket can be a pair of the form $\langle S, S \to a B_1 \dots B_n c\rangle$.
Therefore, the value $k$ of the Dyck alphabet $\Omega_k$ is at most $|P|^2+|P|$, i.e., in $O\left( |P|^2\right)$.
Incidentally, an example of a Dyck sentence corresponding to a derivation is shown in Sect. \ref{sect:example}, Eq. \eqref{eqExampleOkhotinHomo}, while
Eq.~\eqref{eq-ex-h-hom} illustrates the definition of homomorphism $h$.
\par
Part (2) can be proved by applying, for every $X \in N$, the preceding proof to the grammar, denoted by $G_X= (\Sigma, N, P, X)$, which is obtained from $G$ by selecting $X$ as the axiom.
It follows that there exist an integer $k=|P|^2+|P|$, a letter-to-letter homomorphism $h_X: \Omega_k\to \Sigma$, and a regular language $R_X\subseteq \Omega_k^+$, such that $L(G,X) = h_X(D_k\cap R_X)$.
The Dyck alphabet of $L(G,X)$ is thus composed of $|P|^2 + |P|$ pairs of brackets; however, for every $X$, each language $L(G,X)$ is defined by essentially the same grammar, except for the axiom $X$.
Therefore, the Dyck language for $L(G,X)$ is defined by the same set of $|P|^2$ bracket pairs, each labeled with a pair of productions of $P$, already defined for the Dyck set of $L(G)$; and by
$|P|$ bracket pairs whose first component is labeled $X$.
We can now define the Dyck alphabet $\Omega_q$ as the union of all the above Dyck alphabets; therefore, the total number of bracket pairs is $q = |P|^2+|N|\cdot |P|$.
\par\noindent Notice that the language $R_X$ differs from $R_Y$, for $X\neq Y$, only in that it must start and end with a bracket whose label has as first component $X$ rather than $Y$.
\par\noindent
At last, it is immediate to define one
letter-to-letter homomorphism $h$ which is valid for the CST of each language $L(G,X)$.
\qed
\end{proof}
\par
Furthermore, the regular language $R$ produced in Okhotin's proof is a 2-SLT language: $R$ simply checks that every pair of adjacent brackets corresponds to the correct consecutive application of two rules in a leftmost derivation, and that the leftmost (open) bracket and the rightmost (closed) bracket are labeled with the axiom.
\par
Next we state Stanley's CST, as presented in~\cite{Ginsburg1966}, and add an immediate consequence.
\begin{theorem}\label{ThStanleyNostro}(Derived from Th. 3.7.1 of Ginsburg~\cite{Ginsburg1966})
Given an alphabet $\Sigma$, there exist a Dyck alphabet $\Omega$ and an alphabetic erasing homomorphism $h : \Omega^* \to \Sigma^*$ which satisfy the following properties:
\begin{enumerate}
\item for each language $L \subseteq \Sigma^*$, $L$ is context-free if, and only if, there exists a regular language $R \subseteq \Omega^*$ such that $L= h(D \cap R)$;
\item if $L=L(G)$, with $G=(\Sigma, N, P, S)$ in CNF, then there exists a constant $k$ with $k \in O(|P|)$ such that $R$ is a $k$-SLT language.
\end{enumerate}
\end{theorem}
\begin{proof}
We only need to prove item (2), which is not considered in~\cite{Ginsburg1966}. The Dyck alphabet in~\cite{Ginsburg1966} is
$
\Omega = \Sigma\, \cup\, \Sigma' \, \cup \, \{c, c',d, d'\}
$,
where $\Sigma'$ is a primed copy of $\Sigma$; thus $|\Omega| = 2 |\Sigma| + 4$. Homomorphism $h$ erases any letter in $\{c, c',d, d'\} \cup \Sigma'$
and maps the other letters on the corresponding terminal letter. Ginsburg lists a right-linear grammar for $R$ that has rules of the following types:
\begin{equation}
\begin{array}{l}
X \to aa', \text{ if } X\to a \in P
\\
X\to aa' d' c'^i d' B , \; \text{ if } X \to a \in P\; \text{ and }i\text{ is the label of a rule } E\to AB
\\
E \to d c^i d A, \;\text{ if } i\text{ is the label of a rule } E\to AB
\end{array}
\label{eqGinsburghRLgramm}
\end{equation}
Notice that $dc^id$ and $d'c'^id'$, $1\le i \le |P|$, represent the integer $i$, i.e., a grammar rule label, as a unary code.
Clearly, any sentence generated by the right-linear grammar satisfies a locally testable constraint, namely that any two adjacent codes are compatible with the above rules.
Since the code length is at most $|P|+ 2$, a sliding window of width $2|P|+ 2$ suffices to test the constraint.
\qed
\end{proof}
\par
A new straightforward improvement on Stanley theorem can be obtained using a binary code instead of a unary one, to represent grammar rule labels in base two.
This allows for a sliding window of size logarithmic in the number of productions.
\begin{corollary}\label{CorollaryStanley}
Under the assumptions in Th.~\ref{ThStanleyNostro},
there exists a constant $k$, with $k \in \mathcal{O}(\log|P|)$, such that $R$ is a $k$-SLT language.
\end{corollary}
\begin{proof}
A sketch of the proof suffices. Suppose that the original grammar has $n>0$ rules, and let $h = \lceil \log (n)\rceil$,
where $\log$ is the base 2 logarithm, and assume that symbols $0,1$ are not in $\Sigma$.
Given $i$, $0\le i\le n$,
let $\llbracket i \rrbracket_2$ be the representation of number $i$ in base two using $h$ bits, which is a word over alphabet $\{0,1\}$.
Modify grammar \eqref{eqGinsburghRLgramm} for the regular language $R$ as follows: replace every rule of the form
$ X\to aa' d' c'^i d' B$ with $X\to aa' d' \llbracket i \rrbracket_2 d' B$, and replace every rule of the form $E \to d c^i d A $
with $E \to d \llbracket i \rrbracket^R_2 d A$, where $ \llbracket i \rrbracket^R_2$ is the mirror image of $\llbracket i \rrbracket_2$.
By taking $k=2h+2$ it is immediate to see that the regular language defined by this grammar is $k$-SLT.\qed
\end{proof}
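The binary rule labels of Corollary~\ref{CorollaryStanley} are straightforward to compute; a minimal Python sketch follows (names are illustrative, and labels are assumed to fit in $h$ bits):

```python
import math

def rule_code(i, n):
    """Base-2 label of rule i among n rules, written with
    h = ceil(log2(n)) bits (assumes n >= 2 and i < 2**h)."""
    h = math.ceil(math.log2(n))
    return format(i, f"0{h}b")

# With n = 6 rules: h = 3 bits per label, hence SLT width k = 2*h + 2 = 8.
print(rule_code(5, 6))  # 101
```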
Encoding grammar rules by positional numbers is also the key idea applied in Sect.~\ref{SectHomCharSuitableLengthLang}, but, since the homomorphism is not allowed to erase such numbers,
a much more sophisticated representation will be needed.
\subsection{Trade-off between Dyck alphabet and regular language sizes}
It is worth contrasting the two versions of CST reproduced as Th.~\ref{th-1-Okhotin} and Cor.~\ref{CorollaryStanley}: the former uses a larger Dyck alphabet and a simpler regular language, while the latter has a smaller Dyck alphabet and a more complex regular language. With a little thought, it is possible to formulate a precise relation of general validity between the Dyck alphabet size, the complexity of the regular language, and the number of nonterminal symbols of the CF grammar.
\par
We recall the language family $\{M^{(m)}\}$, $m>0$, defined for each $m$ as the language:
\begin{equation}
M^{(m)} = (ab)^* \cup (aab)^* \cup \dots \cup (a^mb)^*
\nonumber
\end{equation}
By a classical result of Gruska~\cite{Gruska67}, every CF grammar generating $M^{(m)}$ must have at least $m$ nonterminal symbols.
Although $M^{(m)}$ is regular, it is easy to transform it into a non-regular CF language $L^{(m)}$ having the same property, e.g.,
$L^{(m)} = \{ w w^R \mid w \in M^{(m)} \}$. It is obvious that every grammar for $L^{(m)}$ needs at least $m$ nonterminal symbols.
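For concreteness, membership in $M^{(m)}$ is trivial to test, as the Python sketch below shows; this contrasts with Gruska's lower bound of $m$ nonterminals for any grammar generating the language:

```python
import re

def in_M(w, m):
    """Membership in M^(m) = (ab)* U (aab)* U ... U (a^m b)*."""
    return any(re.fullmatch(f"(a{{{i}}}b)*", w) is not None
               for i in range(1, m + 1))

print(in_M("aabaab", 3))  # True: a word in (aab)*
print(in_M("abaab", 3))   # False: mixed block lengths
```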
\par
The following proposition gives a lower bound on the size of the Dyck alphabet and on the size of the minimal NFA accepting $R$.
\begin{proposition}\label{propos:TradeOff}
For every finite alphabet $\Sigma$ with $|\Sigma|>1$, for every $m>0$ there exists a language $L\subseteq \Sigma^*$ such that
every context-free grammar for $L$ has at least $m$ nonterminals and,
for every homomorphic characterization as
$L=h\left(D\cap R\right)$ (with $D, R \subseteq \Omega^*$ for some Dyck alphabet $\Omega$), the following relation holds:
\[
\left| \Omega \right| \cdot \textit{size}^2(R) \ge m .
\]
\end{proposition}
\proof
It suffices to outline the proof.
For every $m>0$, let $\Omega^{(m)},\, h^{(m)}:\Omega^{(m)}\to\Sigma ,\, R^{(m)}$ be, respectively, a Dyck alphabet, a homomorphism and a regular language such that:
$L^{(m)} = h^{(m)}\left(D^{(m)}\cap R^{(m)}\right)$,
where $L^{(m)}$ is a CF language whose grammar requires at least $m$ nonterminal symbols.
\par\noindent
First, we construct a grammar $G$ for language $D^{(m)}\cap R^{(m)}$ by means of the classical construction in \cite{BarHillel61} (Th. 8.1),
which assumes that
each right part of a rule in the grammar is either a terminal character or a nonterminal word. A straightforward grammar in this form
for the Dyck language $D^{(m)}$ has exactly $\left|\Omega^{(m)}\right|+1$ nonterminals.
Then, the number of nonterminals of grammar $G$ is at most $\left|\Omega^{(m)}\right|\cdot \textit{size}^2(R)$.
At last, by a straightforward transformation of $G$, we
obtain a grammar $G^{(m)}$ defining language $h(L(G))=L^{(m)}$ and having the same number of nonterminals as $G$. \qed
\par
It is worth observing that, if a CST characterization of $L$ is such that the alphabet size $|\Omega|$ depends on the alphabet size $|\Sigma|$ but does not depend on the number $m$ of nonterminals,
then it follows that
a minimal NFA for the regular language $R$ must have a number of states dependent on the number of nonterminals, i.e., $R$ must reflect the size of a grammar for $L$.
In this case, a simple asymptotic lower bound on the number of states of a NFA for $R$ is clearly $\varOmega(\sqrt{m})$,
i.e., the square root of the number of nonterminals of a minimal grammar for $L$.
Obviously, in general this lower bound may be too small and $\textit{size}(R)$ may actually be considerably larger: for instance, in the case of Th.~\ref{ThStanleyNostro}, the regular language $R$ has an NFA recognizer with $O(m^2)$ states.
\section{New homomorphic characterization}\label{SectHomCharSuitableLengthLang}
The section starts with the grammar normal forms to be later used in the proof of the CST, and examines their descriptive complexity, with the aim of obtaining at least an estimation of the size of the Dyck alphabet (which in~\cite{DBLP:conf/lata/Crespi-Reghizzi16} was just proved to be polynomial in the terminal alphabet size).
Then the section continues with the main theorem and its proof, and terminates with an example illustrating the central idea of the proof.
\subsection{Preliminaries on grammar normal forms}\label{subsectNormalForms}
We revisit the classic construction of DGNF starting from a CNF grammar, to establish a numerical relation between the size of the two grammars. Then we introduce a new normal form, called quotiented, and combine it with the DGNF form.
\par
The following lemma supplements a well-known result about DGNF (e.g.,~\cite{AutebertBerstelBoasson1997}) with an explicit upper bound, which is lacking in the literature, on the size of the equivalent DGNF grammar in terms of the original CNF grammar size.
\begin{lemma}\label{lm-DGNF}
Given a CNF grammar $G=(\Delta, N,P,S)$, over a finite alphabet $\Delta$, there exists an equivalent grammar $\widetilde{G}=(\Delta, \widetilde{N},\widetilde{P},\widetilde{S})$ in cubic DGNF (the statement of Theor. 3.4 of~\cite{AutebertBerstelBoasson1997}).
Grammar $\widetilde{G}$ is such that
$\left|\widetilde{N}\right| \in O\left(\left|\Delta\right| \cdot \left|N\right|^2\right)$ and
$\left|\widetilde{P}\right| \in O\left(|\Delta|^6\cdot |N|^8 \right) $.
\end{lemma}
\begin{proof}
Starting from the construction of $\widetilde{G}$ in~\cite{AutebertBerstelBoasson1997}, we estimate its size.
The construction involves four steps, but we only need the first three, since the last step computes a quadratic form not needed here.
Leftmost and rightmost derivations are respectively denoted by $\Rightarrow_L$ and $\Rightarrow_R$. We compute the sizes as we proceed.
\par
\textbf{Step 1} defines $\widetilde{N}$ as the union of $N$ and the finite set $\mathcal{H}$ next defined.
\begin{gather*}
\forall a\in\Delta, X\in N, \text{ let } L(a,X)= \left\{m \in N^* \mid X \stackrel * \Rightarrow_L \alpha \Rightarrow a m \text{ where } \alpha \in N^*\right\};
\\
\forall a\in\Delta, X\in N, \text{ let } R(X,a)= \left\{m \in N^* \mid X \stackrel * \Rightarrow_R \alpha \Rightarrow m a \text{ where } \alpha \in N^*\right\};
\\
\mathcal{L}= \left\{L(a,X) \mid a\in\Delta, X\in N \right\}
\text{ hence }\left|\mathcal{L}\right| \leq |\Delta| \cdot |N| ;
\\
\mathcal{R}= \left\{R(X,a) \mid a\in\Delta, X\in N \right\}
\text{ hence }\left|\mathcal{R}\right| \leq |\Delta| \cdot |N| ;
\\
\text{ hence }\left|\mathcal{L}\cup \mathcal{R}\right| \leq 2|\Delta| \cdot |N|;
\\
\mathcal{H} = \text{ closure of }\mathcal{L}\cup \mathcal{R} \text{ under the right and left quotients by a letter of }N ;
\\
\text{ it follows that } \left|\mathcal{H}\right| \leq 4 |\Delta| \cdot |N|^2.
\end{gather*}
Therefore, it holds:
\begin{equation}
\left|\widetilde{N}\right| \in O\left(|\Delta| \cdot |N|^2\right).
\label{eq:SizeWidetildeN}
\end{equation}
\par
Then \textbf{Step 2} and \textbf{Step 3} construct a cubic DGNF grammar $\widetilde{G}$, over the terminal alphabet $\Delta$ and the nonterminal alphabet $\widetilde{N}$,
i.e., the rules $\widetilde{P}$ are in $\widetilde{N} \times \Delta \widetilde{N}^{\le 3}\Delta$.
A rough calculation based on Eq. \eqref{eq:SizeWidetildeN}, supposing that all nonterminals can be combined in all possible ways, yields the (pessimistic) estimate:
\[
\left|\widetilde{P}\right| \in O\left(|\Delta|^2\left|\widetilde{N}\right|^4\right) =
O\left(|\Delta|^2\cdot |\Delta|^4 |N|^8 \right) =
O\left(|\Delta|^6\cdot |N|^8 \right).
\]
\qed
\end{proof}
We now want to generalize Lm.~\ref{lm-DGNF} to an $(m,m)$-GNF, for values of $m$ larger than 3, since this form is convenient for proving our CST. Unfortunately, the grammar transformation algorithms known to us are not adequate here (as explained next), and we have to introduce
a new normal form for grammars, called \emph{quotiented}, and then prove an intermediate lemma.
\par
We observe that, exploiting
known results on GNF (see, e.g.,~\cite{DBLP:journals/ipl/Yoshinaka09,DBLP:journals/tcs/BlattnerG82}), it is fairly obvious that every CNF grammar can be transformed into an $(m,n)$-GNF whose size is polynomially related to the size of the original grammar.
For instance, the very simple construction provided in~\cite{DBLP:journals/ipl/Yoshinaka09} shows that, if
$G=(\Sigma, N, P, S)$ is in CNF, then an equivalent $(m,n)$-GNF grammar can be built such that the nonterminal alphabet is in $O(|N|^2)$ and the number of rules is in $O\left(|\Sigma|^{m+n+2} \cdot |N|^{2m+2n+4}\right)$.
Unfortunately, although the latter relation is polynomial in the size of the grammar, both terms, featuring the base $|\Sigma|$ or the base $|N|$, exhibit an undesirable exponential dependence on $m+n$.
\par
In contrast, anticipating our Lm. \ref{lm-partitionedCNF}, under the assumption $m=n$, i.e., for $(m,m)$-GNF grammars, we obtain that the number of rules is in $|\Sigma|^{O(m)}\cdot O(|N|^8)$:
the term with base $|\Sigma|$ still exhibits an exponential dependence on the value $m$, but the term with base $|N|$ has a polynomial dependence, independent of $m$.
\par
We define the new normal form.
\begin{definition}\label{def-quotientedNF}
A grammar $G=(\Sigma, N, P, S)$ is \textbf{quotiented of order} $r\ge 1$ if the axiom $S$ does not occur in any right-hand side and the set $P$ of rules is partitioned into two sets, $P_q, P_r$, such that:
\[
P_q \subseteq \{S\} \times N \cdot \Sigma^{<r}
\quad \quad \text{ and }\quad P_r \subseteq \left(N-\{S\} \right)\times \left(N\;\cup\; \Sigma^r \right)^*.
\]
The two sets include, respectively, the rules for the axiom, and the rules for the other nonterminals.
If $G$ is quotiented of order $r$, then it is said to be, for the same order $r$:
\begin{description}
\item[quotiented CNF ] (\emph{Q-CNF}), if
$P_r\subseteq \left(N \times N^2 \right)$
\; $\cup\; \left( N\times \left(\Sigma^r \cup \{\varepsilon\}\right)\right)$;
\item[quotiented DGNF ](\emph{Q-DGNF}), if
$P_r\subseteq \left(N \times \Sigma^r N^*\Sigma^r\right)$
\, $\cup\, \left( N\times \left(\Sigma^r \cup \{\varepsilon\}\right)\right)$.
\end{description}
\end{definition}
\begin{example}
We show three equivalent quotiented forms with $r=3$.
\[
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{p{2cm}|c |c }
& $P_r$ & $P_q$
\\\hline
quotiented & $X \to aaa X bbb X \mid \varepsilon,\quad
Y \to abb Y \mid abb$ &
\\\cline{1-2}
Q-CNF & $X \to X_1 X_2,\, X_1\to X_3 X,\, X_2\to X_4 X \mid \varepsilon$ &
\\ &$X_3 \to aaa, \, X_4 \to bbb$ & $S \to X aa \mid Yb$
\\ & $Y \to X_5 Y \mid abb, \, X_5 \to abb$ &
\\\cline{1-2}
Q-DGNF & $X \to a^3 XZX b^3\mid a^3 b^3\mid \varepsilon$, $Z \to b^3 a^3$ &
\\& $Y \to abb Y abb \mid abb \mid \varepsilon $ &
\end{tabular}
\]
\end{example}
For a quotiented grammar, if the rule $S \to X w$, where $X \in N$ and $w\in \Sigma^{<r}$, is in $P$, then the language generated starting from $X$ is included in $L(G)_{/ w}$, which is the right quotient of $L(G)$ by $w$.
\par
The next lemma studies the complexity of the Q-CNF and Q-DGNF normal forms. Since its proof operates on an alphabet made by tuples of letters, we need the following definition.
\begin{definition}[Tuple alphabet and homomorphism]\label{DefTupleAlphabet}
For an alphabet $\Sigma$, let $\Delta_r= \left\{ \langle a_1, \dots, a_r\rangle \mid a_1, \dots, a_r \in \Sigma \right\}$ for all $r\geq 2$.
An element of the alphabet $\Delta_r$ is called an $r$-\emph{tuple} or simply
a tuple.
\par\noindent
The \emph{tuple homomorphism} $\pi_{r}: \Delta_{r} \to \Sigma^+$ is defined by
\[
\pi_{r}\left( \langle a_1, \dots, a_r \rangle \right) = a_1 \dots a_r,\;\text{ for } a_1, \dots, a_r\in \Sigma.
\]
\end{definition}
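Operationally, the tuple homomorphism and its inverse are simple factorization maps. The following Python fragment (ours, purely illustrative; $\Sigma$ is modeled as a set of characters and tuples as Python tuples) mirrors Def.~\ref{DefTupleAlphabet}:

```python
def pi(tup):
    """Tuple homomorphism pi_r: maps an r-tuple <a1,...,ar> to the word a1...ar."""
    return "".join(tup)

def pi_word(tuples):
    """Extension of pi_r to a word of r-tuples."""
    return "".join(pi(t) for t in tuples)

def pi_inverse(word, r):
    """Inverse morphism pi_r^{-1}: factors a word of (Sigma^r)^+ into r-tuples.
    Defined only when the word is nonempty and its length is a multiple of r."""
    assert len(word) > 0 and len(word) % r == 0
    return [tuple(word[i:i + r]) for i in range(0, len(word), r)]
```

For instance, \verb|pi_inverse("aaabbb", 3)| returns the two 3-tuples $\langle a,a,a\rangle\,\langle b,b,b\rangle$, and applying \verb|pi_word| to the result recovers the original word.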
\par\noindent
The inverse morphism $\pi^{-1}_{r}$ transforms a language included in $\left(\Sigma^r\right)^+$ into a language of $r$-tuples; it
will be applied for constructing an $(r,r)$-GNF grammar.
\par
Historical remark. In our earlier paper~\cite{DBLP:conf/lata/Crespi-Reghizzi16} we already proved that, for every CNF grammar, there exists an equivalent Q-CNF grammar $G'$. The proof applied standard transformations back and forth from grammars to pushdown automata, and a suitable finite-state transduction.
That approach has two drawbacks: first, the resulting complexity of the grammar,
although polynomial in $|N|\cdot |\Sigma|^{r}$, is very high and difficult to compute with precision.
Second, the overly general constructions employed in that proof barred any significant improvement in the complexity.
To overcome such limitations, we present a new direct construction of the Q-CNF and of the Q-DGNF grammars, which allows us to prove the better (but still very pessimistic) upper bounds of
Eq. \eqref{L4}, \eqref{L5} and \eqref{L6},
and may open the way for further improvements.
\begin{lemma}\label{lm-partitionedCNF}
For every grammar $G=(\Sigma, N,P,S)$ in CNF, for every $r\ge 1$, there exist an equivalent Q-CNF grammar $G'=(\Sigma, N', P', S')$
and an equivalent Q-DGNF grammar $G''=(\Sigma, N'', P'', S'')$, both of order $r$, such that:
\begin{gather}
\label{L4} |N'| \in O( |N|\cdot |\Sigma|^{2r})
\\
\label{L5} |P'| \in O(|P|\cdot |\Sigma|^{3r});
\\
\label{L6} |P''| \in O\left(|\Sigma|^{6r}\cdot |N'|^8\right) = O\left(|\Sigma|^{22r}\cdot |N|^8\right).
\end{gather}
\end{lemma}
\begin{proof}
\noindent{\em Construction of $G'$.} Let $\dashv$ be a new symbol not in $\Sigma$.
The set $N'$ of nonterminal symbols is composed of $S'$,
and of the set of 3-tuples:
\[
N \times \Sigma^{<r}\times \Sigma^{<r}\quad \cup\quad N \times \Sigma^{<r}\times \Sigma^{<r}\dashv.
\]
The tuples have the following intuitive meaning: a nonterminal of $N'$ of the form $\langle A, u, \, w\rangle$ generates a word that entirely stays inside a word of $L(G)_{/w}$, while
a nonterminal of the form $\langle A, u, \, w\dashv\rangle$ generates a word that protrudes into a suffix $w\in \Sigma^{<r}$.
\par
Thus, $|N'|$ is in $O(|N|\cdot |\Sigma^{<r}|^2)$, i.e., since $|\Sigma^{<r}|\in O(|\Sigma|^r)$, it holds:
\begin{equation}\label{N'}
|N'| \in O(|N|\cdot |\Sigma|^{2r})
\end{equation}
\par
The grammar rules are next defined.
First, for every $w \in \Sigma^{<r}$, the rule $S' \to \langle S,\epsilon,w\dashv\rangle\, w $ is in $P'$ if $L(G)_{/w} \neq \emptyset $.
\\
Second, the remaining rules of $P'$ are defined, for all $A, B, C \in N\setminus \{S\}$, for all $a\in \Sigma$, for all $t,u,v,w \in \Sigma^{<r}$, by the following clauses:
\begin{gather}
\nonumber
\langle S,\epsilon,w\dashv\rangle \to
\langle A, \varepsilon, \, t\rangle
\langle B, t, \, w\dashv \rangle
\text{ if } S\to AB \in P
\\
\nonumber
\langle A, u, \, v\rangle \to
\langle B, u, \, t \rangle
\langle C, t , v\rangle
\text{ if } A\to BC \in P
\\
\nonumber
\langle A, u, \, w\dashv\rangle \to
\langle B, u, \, t \rangle
\langle C, t , w\dashv\rangle
\text{ if } A\to BC \in P
\\
\nonumber
\langle A, \varepsilon, \, w\dashv\rangle \to \varepsilon
\\ \label{R2}
\langle A, u, \, \varepsilon \rangle \to ua
\text{ if } A\to a \in P \text{ and } |ua| = r
\\ \label{R3}
\langle A, u,ua\, \rangle \to \varepsilon
\text{ if } A\to a \in P \text{ and } |ua|<r
\end{gather}
We assume that any rule containing unreachable or undefined nonterminals is removed from $P'$.
\par
For all $A\in N$,
for all $x, y \in \left(\Sigma^r\right)^* $, for all $t,u,v\in\Sigma^{<r}$, and for $z$ such that $uz \in \Sigma^{<r}$, we claim that
grammar $G'$ has the derivation
\begin{equation}
S' \Longrightarrow \langle S,\epsilon,w\dashv\rangle w
\stackrel * \Longrightarrow x\, \,\langle A, u, v \rangle \, v y w
\stackrel * \Longrightarrow x\, uz \, v y w
\label{eq:quotientedDerivation}
\end{equation}
if, and only if, grammar $G$ has the derivation
\begin{equation}
S \stackrel * \Longrightarrow_G x\, u A y\, w
\stackrel * \Longrightarrow_G x\, u \,z\,v\,y\,w
\label{eq:originalDerivation}
\end{equation}
The two derivations are schematized in Fig.~\ref{figSchemaDerivazioniQuozientate}.
\par
\begin{figure}[tbh!]
\begin{center}
\begin{tikzpicture}[xscale=2.8 ]
\draw[|-|] (0,0)-- (0.4,0);
\draw[|-|] (0.4,0)--(0.8,0);
\draw[dotted] (0.8,0)--(1.2,0);
\draw[|-|] (1.2,0)--(1.6,0);
\draw[dotted] (1.6,0)--(2.4,0);
\draw[|-|] (2.4,0)--(2.8,0);
\draw[|-|,dotted] (2.8,0)--(3.2,0);
\draw[|-|] (3.2,0)--(3.6,0);
\draw[|-|] (3.6,0)--(4.0,0);
\draw[|-|] (1.6,0.1)-- (1.8,0.1);
\draw[<->] (1.8,-0.8)--(2.25,-0.8);
\draw[|-|] (2.25,0.1)-- (2.4,0.1);
\draw[<->] (0,-0.8)--(1.6,-0.8);
\draw[<->] (2.4,-0.8)--(4.0,-0.8);
\draw[|-|] (4.0,0)-- (4.3,0);
\draw[thick] (0.8,-0.3)node[below] {$x\in \left(\Sigma^r\right)^*$};
\draw[thick] (1.7,0.3)node {$u$};
\draw[thick] (2.04,-0.3)node[below] {$z$};
\draw[thick] (2.3,0.3)node {$v$};
\draw[thick] (3.4,-0.3)node[below] {$y\in \left(\Sigma^r\right)^*$};
\draw[thick] (4.25,0.3)node {$w\in\Sigma^{<r}$};
\draw[-] (2.1,2.0)-- (1.8,0.1);\draw[-] (2.1,2.0)-- (2.4,0.1);
\draw[thick] (2.1,2.2)node[above] {$A$};
\draw[-] (2.1,3.3)-- (0.0,0.1);\draw[-] (2.1,3.3)-- (4.3,0.1);
\draw[thick] (2.1,3.3)node[above] {$S$};
\end{tikzpicture}
\vspace{10mm}
\begin{tikzpicture}[xscale=2.8 ]
\draw[|-|] (0,0)-- (0.4,0);
\draw[|-|] (0.4,0)--(0.8,0);
\draw[dotted] (0.8,0)--(1.2,0);
\draw[|-|] (1.2,0)--(1.6,0);
\draw[dotted] (1.6,0)--(2.4,0);
\draw[|-|] (2.4,0)--(2.8,0);
\draw[|-|,dotted] (2.8,0)--(3.2,0);
\draw[|-|] (3.2,0)--(3.6,0);
\draw[|-|] (3.6,0)--(4.0,0);
\draw[|-|] (1.6,0.1)-- (1.8,0.1);
\draw[|-|] (2.25,0.1)-- (2.4,0.1);
\draw[<->] (0,-0.8)--(1.6,-0.8);
\draw[<->] (1.8,-0.8)--(2.25,-0.8);
\draw[<->] (2.4,-0.8)--(4.0,-0.8);
\draw[|-|] (4.0,0)-- (4.3,0);
\draw[thick] (0.8,-0.3)node[below] {$x\in \left(\Sigma^r\right)^*$};
\draw[thick] (1.7,0.3)node {$u$};
\draw[thick] (2.04,-0.3)node[below] {$z$};
\draw[thick] (2.3,0.3)node {$v$};
\draw[thick] (3.4,-0.3)node[below] {$y\in \left(\Sigma^r\right)^*$};
\draw[thick] (4.25,0.3)node {$w\in\Sigma^{<r}$};
\draw[-] (2.1,2.0)-- (1.6,0.1);\draw[-] (2.1,2.0)-- (2.25,0.1);
\draw[thick] (2.1,2.2)node[above] {$\langle A, u, v \rangle$};
\draw[-] (2.1,3.6)-- (0.0,0.1);
\draw[-] (2.1,3.6)-- (4.0,0.1);
\draw[thick] (2.1,3.6)node[above] {$\langle S,\epsilon,w\dashv\rangle $};
\draw[-] (2.1,5.0)-- (2.1,3.9);\draw[-] (2.1,5.0)-- (4.3,0.1);
\draw[thick] (2.1,5.0)node[above] {$S'$};
\end{tikzpicture}
\end{center}
\caption{Scheme of the original grammar derivation (top) and the corresponding quotiented grammar derivation (bottom), respectively described in Eq. \eqref{eq:originalDerivation} and Eq. \eqref{eq:quotientedDerivation} of the proof of Lm. \ref{lm-partitionedCNF}.}\label{figSchemaDerivazioniQuozientate}
\end{figure}
\par
Notice that the empty rules \eqref{R3} check that the presence of a letter
$a$ is appropriate in a specific position of the word.
Since all the terminal letters are generated by rules of type \eqref{R2}, every sentence of
$G'$ has a length multiple of $r$.
At last, the sizes of $N'$ and $P'$, respectively in \eqref{L4} and \eqref{L5}, immediately follow from the form of the nonterminals and of the rules.
\par{\em Construction of $G''$.} To prove part \eqref{L6} of the statement, we apply Lm.~\ref{lm-DGNF}
by first modifying $G'$ into an intermediate grammar, denoted $\widehat{G} = (\Delta, \widehat{N},\widehat{P},\widehat{S})$, on the tuple alphabet $\Delta = \Delta_1 \cup \Delta_2 \dots \cup \Delta_r$, as follows.
\par
The nonterminal alphabet $\widehat{N}$ is composed of $N'$ and of a new nonterminal $X_w$ for each rule of $G'$ of the form $S \to X w $.
\par
The rule set $\widehat{P}$ is obtained from $P'$ by the following steps:
\begin{enumerate}
\item Each rule of $G'$ of the form $S \to X w $ is replaced in $\widehat{P}$ by two rules
\[
S \to X X_w \; \text{ and } \; X_w \to \pi^{-1}_{|w|}(w)
\]
(in other words, $w$ is replaced by the corresponding tuple symbol $\pi^{-1}_{|w|}(w)$ in $\Delta_{|w|})$.
\item
Each rule of $G'$ of the form $A \to x$, where $x \in \Sigma^r$, is replaced by the rule $A \to \pi_r^{-1}(x)$.
\item Every other rule of $P'$ is in $\widehat{P}$; no other rule is in $\widehat{P}$.
\end{enumerate}
The resulting grammar $\widehat{G}$ is in CNF. The nonterminal alphabet size is $|\widehat{N}| \in O\left(|N'|+ |\Sigma|^r\right)$, which is in $O\left(|N|\cdot |\Sigma|^{2r}\right)$.
The rule set has cardinality $|\widehat{P}| \in O\left(|P'|\right)$.
\par
Then, we apply Lm.~\ref{lm-DGNF} to $\widehat{G}$, obtaining a DGNF grammar, denoted $\widetilde{G}$, with a number of rules
$\left|\widetilde{P}\right| \in O\left(|\Delta|^6\cdot |\widehat{N}|^8 \right)$, which is $O\left(|\Sigma|^{6r}\cdot |N'|^8\right)$.
Since, by~\eqref{N'}, $|N'|$ is in $O(|N|\cdot |\Sigma|^{2r})$, it immediately follows that:
\begin{equation}\label{eq-widehatSize}
\left|\widetilde{P}\right| \in O\left(|\Sigma|^{22r}\cdot |N|^8\right).
\end{equation}
\par\noindent
At last, it is immediate to transform grammar $\widetilde{G}$ back into a Q-DGNF grammar $G''$ of order $r$ over the alphabet $\Sigma$, with the same number of rules as $\widetilde{G}$.
\qed
\end{proof}
\paragraph{Digression: a useful construction of the $(m,m)$-GNF of a CNF grammar $G$}
Incidentally, a bonus of Lm.~\ref{lm-partitionedCNF} is the direct construction of an $(m,m)$-GNF grammar,
whose size may in general be smaller than the size produced by the standard constructions of an $(m,n)$-GNF grammar
(e.g., the one of~\cite{DBLP:journals/ipl/Yoshinaka09}), in the special but relevant case when $m=n$. We compare
the size of the $(m,m)$-GNF grammar obtained through the two approaches:
\[
\renewcommand{\arraystretch}{1.2}
\begin{array} {c|c}
\text{Case $m=n$ of~\cite{DBLP:journals/ipl/Yoshinaka09}} & \text{Construction of Lm.~\ref{lm-partitionedCNF}}
\\\hline
O\left(|\Sigma|^{2m+2}\cdot |N|^{4m+4} \right) &
O\left(|\Sigma|^{22m}\cdot |N|^{8} \right)
\end{array}
\]
For a fixed alphabet $\Sigma$ and a fixed value $m\ge 2$, when considering larger and larger grammars $G$,
the size of our equivalent $(m,m)$-GNF grammar will eventually be smaller than the size in~\cite{DBLP:journals/ipl/Yoshinaka09}.
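The crossover point can also be checked numerically. The Python sketch below (our own illustration; the constants hidden by the $O$-notation are set to 1) evaluates both bounds for a binary alphabet and $m=2$: for small $|N|$ the bound derived from~\cite{DBLP:journals/ipl/Yoshinaka09} is smaller, but for large $|N|$ our bound wins, since it is only polynomial in $|N|$.

```python
def rules_yoshinaka(sigma, n, m):
    """Rule bound |Sigma|^(2m+2) * |N|^(4m+4), case m = n of the
    standard construction (O-constants dropped)."""
    return sigma ** (2 * m + 2) * n ** (4 * m + 4)

def rules_quotiented(sigma, n, m):
    """Rule bound |Sigma|^(22m) * |N|^8 of the new construction
    (O-constants dropped)."""
    return sigma ** (22 * m) * n ** 8

sigma, m = 2, 2
# Exponential vs polynomial dependence on |N|: the new bound wins eventually.
crossover = min(n for n in range(2, 2000)
                if rules_quotiented(sigma, n, m) < rules_yoshinaka(sigma, n, m))
```

With these toy parameters the crossover happens for $|N|$ in the hundreds; the exact threshold depends, of course, on the hidden constants.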
\subsection{Main result and proof}
We are going to prove that, given any terminal alphabet $\Sigma$,
there exist a Dyck alphabet $\Omega_{q,l}$, with $l=|\Sigma|$ neutral symbols
and the number $q$ of brackets being polynomial in $|\Sigma|$,
and a letter-to-letter homomorphism from the Dyck alphabet to $\Sigma$, such that every CF language $L$ over $\Sigma$ has a CST characterization in terms of the Dyck language $D_{q,l}$.
We stress that the Dyck alphabet size only depends on the size of the terminal alphabet; an upper bound on the dependence is formulated later as Corollary~\ref{corollSizeDyckAlph}.
\par
Moreover, the regular language used in CST can be chosen to be strictly locally testable.
\begin{theorem}\label{theorGeneralHomomCharacterization}
For every finite alphabet $\Sigma$, there exist a number $q>0$ polynomial in $|\Sigma|$ and a letter-to-letter homomorphism $\rho:\, \Omega_{q,|\Sigma|} \to \Sigma $,
such that, for every context-free language $L\subseteq \Sigma^*$,
there exists
a regular language $T\subseteq \left(\Omega_{q,|\Sigma|}\right)^*$
satisfying $L = \rho\left(D_{q,|\Sigma|} \cap T \right)$.
\end{theorem}
The proof involves several transformations of alphabets, grammars and languages, and relies on the constructions and lemmas presented in Sect.~\ref{subsectNormalForms}.
To improve readability, we have divided the proof into two parts.
First, we formulate in Th.~\ref{th-CST-even} a case less general than Th.~\ref{theorGeneralHomomCharacterization}, which excludes odd-length sentences from the language,
yet it already involves the essential ideas and difficulties. A step of the proof requires some arithmetic analysis, to construct the encoding that represents the Dyck brackets using fixed-length codes over a smaller alphabet. In the proof, such analysis has been encapsulated into Proposition~\ref{prop-encoding}.
\par
Second, in Sect.~\ref{SubSubSectArbitraryLength} we introduce into the Dyck language the neutral symbols that are needed for handling odd-length sentences, and we easily conclude the proof of Th.~\ref{theorGeneralHomomCharacterization}.
\subsubsection{The case of even length}\label{subsubsectCaseEvenLength}
The next theorem applies to languages of even-length words.
Starting from the original CNF grammar $G$, we convert it to a Q-DGNF grammar of order $m$ (as in Lm.~\ref{lm-partitionedCNF}). We deal with the axiomatic rules $S \to X w$ one at a time, by considering the subgrammar
having $X$ for axiom, which is in $(m,m)$-DGNF and defines the language $L(G,X)$.
\par
We apply the inverse tuple homomorphism $\pi_m^{-1}$ (Def.~\ref{DefTupleAlphabet}), thus
condensing all terminal factors of length $m$ occurring in each rule into one symbol of $\Delta_m$. The result is an almost identical grammar, here called $\widetilde{G}$,
over the tuple alphabet $\Delta_m$ rather than $\Sigma$.
Since $\widetilde{G}$ is an even-DGNF and satisfies the hypothesis of Th.~\ref{th-okhotinNostro},
there exists a non-erasing CST characterization of the tuple language generated by $\widetilde{G}$. The corresponding Dyck alphabet $\Omega_k$ has however a size $k$ dependent on the size
of $\widetilde{G}$, hence also on the size of $G$.
\par
Now, the crucial idea comes into play. We represent each one of the $k$ open brackets in $\Omega_k$ by an $m$-digit integer in a base $j\ge 2$, such that only $m$ depends on the size of $G$:
we show in Proposition~\ref{prop-encoding} that, if $m$ is at least logarithmic in the size of the grammar, then there exists a suitable value of $j$, independent of the grammar, such that the open brackets are represented by codes of length $m$.
\par
To make room for such codes, we transform back each $m$-tuple symbol of grammar $\widetilde{G}$ into a word of length $m$ (using the homomorphism $\pi_m$),
obtaining again an $(m,m)$-GNF grammar, over a new Dyck alphabet $\Omega_n$. In such alphabet each symbol is a 4-tuple composed of:
\begin{itemize}[-]
\item
a symbol specifying whether the bracket is open/closed;
\item
the letter of $\Sigma$ that is represented by the symbol;
\item
the letter of $\Sigma$ that is represented by the matching closed bracket;
\item
a digit of the code in base $j$.
\end{itemize}
Notice that a closed bracket $\omega'\in \Omega_k$
is encoded as the reversal of the code that represents the matching open bracket $\omega$; in this way,
the string of the $m$ open brackets encoding $\omega$ is matched exactly by the $m$ closed brackets encoding $\omega'$.
The size of the new Dyck alphabet does not depend on the size of $G$ and is polynomially related to the size of $\Sigma$.
\par
We then define a regular language to check whether two codes may or may not be adjacent.
Another letter-to-letter homomorphism (denoted by $\rho$) is then used to map each 4-tuple into a letter of $\Sigma$, so that we obtain a CST characterization of $L(G,X)$ of the intended type.
\par
Next, we have to deal with the axiomatic rule $S \to X w$ of the Q-DGNF grammar.
Since by hypothesis the value $|w|<m$ is an even number, it is immediate to obtain word $w$ as the homomorphic image of a Dyck language over another alphabet that
does not depend on the size of the original grammar $G$. It is a simple matter to combine this new part with the preceding CST characterization of $L(G,X)$, thus obtaining the CST characterization of each language $L(G,X)w$.
At last, it suffices to unite the CST characterizations for each word $w$ and for each nonterminal $X$ such that a rule $S \to Xw$ is present in the original Q-DGNF grammar.
\begin{theorem}\label{th-CST-even}
For every finite alphabet $\Sigma$,
there exist a number $n>0$, polynomial in $|\Sigma|$, and a letter-to-letter homomorphism $\rho$
such that for every context-free language $L\subseteq \left(\Sigma^2\right)^*$
there exists
a regular language $T\subseteq \Omega_n^+$,
such that $L = \rho\left(D_n \cap T \right)$, where $D_n$ is the Dyck language over the Dyck alphabet $\Omega_n$.
\end{theorem}
\begin{proof}
Let $L\subseteq \left(\Sigma^2\right)^*$ be a CF language.
Let $m\ge 2$ be an even number.
$L$ can be generated by a grammar $G = (\Sigma, N, P,S)$ in Q-DGNF of order $m$, as in Lm.~\ref{lm-partitionedCNF}.
\par
Let $w \in\Sigma^{<m}$ and let $X \in N$ be such that $S \to Xw$ is in $P$. We deal with the language $L(G,X)$ first.
The language $\pi_m^{-1}\left(L(G,X)\right)$ can be considered as the language generated by a grammar,
called $\widetilde{G}$, in DGNF over the alphabet $\Delta_m$, where the rules are obtained from those of $G$ as follows:
\begin{itemize}[-]
\item
ignore
all rules whose left-hand side is a nonterminal unreachable from $X$;
\item
replace in the right-hand part of every other rule of $G$, every occurrence of every word $x\in\Sigma^{m}$ with the tuple symbol $\pi^{-1}_m(x)$.
\end{itemize}
The language $L(\widetilde{G})$ over the tuple alphabet can be characterized using Th.~\ref{th-okhotinNostro}, part 2, as $h\left( D_q \cap R_X \right)$,
where $q= |\widetilde{P}|^2 +|N|\cdot |\widetilde{P}|$ is the size of the Dyck alphabet,
$h: \Omega_q \to \Delta_m$ is the letter-to-letter homomorphism of Th.~\ref{th-1-Okhotin},
and $D_q, R_X\subseteq {\Omega_q}^*$ are respectively a Dyck language and a regular language (which is dependent on $X$).
\par\noindent
The reason for choosing the slightly more general version in part 2 of Th.~\ref{th-okhotinNostro} is that we later need to extend the CST characterization from
a single language $L(G,X)$ to the union of all languages $L(G,X)$ for $X \in N$, i.e., to $L$.
\par\noindent
Hence, $L(G,X)=\pi_m(L(\widetilde{G}))$ and
\begin{equation}
L(G,X)=\pi_m\left(h\left( D_q \cap R_X \right) \right).
\label{eq:OkhotinCSTapplication}
\end{equation}
Formula \eqref{eq:OkhotinCSTapplication} is already a CST characterization for $L(G,X)$, but the value $q$
is in $O\left(|\widetilde{P}|^2 \right)$, and $|\widetilde{P}|\in O\left(|P|\right)$;
hence $q$ still depends on grammar $G$.
\paragraph{Positional encoding of brackets}\label{subsubsectPositionalEncoding}
Each element $\omega \in \Omega_q$ is identified by an integer number $\iota$, with $1\le \iota \le q$.
We want to represent each of the $q$ values $\iota$ using at most $m$ digits in a base $j$:
it is enough to satisfy the inequality $\log_j{q}\le m$.
Denoting with $\log$ the base 2 logarithm,
this requires that $j$ satisfies $\frac{\log {q}}{\log j}\le m$, i.e., $j$ and $m$ satisfy the inequality
\begin{equation}
\log j \ge \frac{\log q}{m}.
\label{eqBaseInequality}
\end{equation}
If \eqref{eqBaseInequality} is satisfied, every open bracket $\omega\in \Omega_q$ can be encoded in base $j$ by a (distinct) string with $m$ digits, to be
denoted in the following as $\left\llbracket\omega \right\rrbracket_j$. The closed bracket $\omega'$ matching $\omega$ has no encoding of its own, but it is
just represented by the reversal of the encoding
of $\omega$, i.e., $\left(\left\llbracket\omega \right\rrbracket_ j\right)^R$; we will see that no confusion can arise.
\par
Although an arbitrarily large value of $j$ would satisfy \eqref{eqBaseInequality}, we prefer to choose a value as small as possible.
Let $p$ be the number of nonterminals of a CNF grammar defining $L$. By Lm.~\ref{lm-partitionedCNF}, in the worst case $|P| \in
O\left(|\Sigma|^{22m}\cdot p^{8}\right)$.
Since $q \in O(|P|^2)$, it follows that $q \in O(|\Sigma|^{44m}\cdot p^{16})$.
\par
We can abstract the expression for value $q$ as $O\left(\sigma^{m} \nu\right)$, for suitable values $\sigma=|\Sigma|^{44} , \nu= p^{16}$.
The next proposition shows the correct numerical relation that eliminates the dependence of $j$ from $m$ and from the number of rules of the grammar.
\begin{proposition}\label{prop-encoding}
Given numbers $\sigma, m,\nu >0$, if
$m$ is in $\varOmega(\log \nu)$, then there exists $j \in O(\sigma)$ such that every
symbol in a set of cardinality $1 <q < \sigma^{m} \nu$ can
be represented in base $j$ by a distinct string of $m$ digits.
\end{proposition}
\begin{subproof}
We have:
\[
\renewcommand{\arraystretch}{1.2}
\begin{array}{l}
q^{1/m}<\sigma \nu^{1/m}
\\
\log{q^{1/m}}<\log{\left(\sigma \nu^\frac{1}{m}\right)}
\\
\frac{\log q}{m}< \log{\sigma} + \frac{\log \nu}{m}, \;\text{ and, if } m>\log\nu, \text{ then}
\\
\frac{\log q}{m}< \log{\sigma} +1= \log{\left(2\sigma\right)}.
\end{array}
\]
Hence, the condition $\log j \ge \frac{\log q}{m}$ can be satisfied, when $m$ is in $\varOmega(\log \nu)$,
by choosing $j$ such that
$\log{j}\ge \log{\left(2\sigma\right)}$, i.e., $j\ge 2\sigma$.
Thus, it suffices to choose a suitable $j$ in $O(\sigma)$.
\qedsymbol
\end{subproof}
From Proposition~\ref{prop-encoding} it follows that each one of the $q$ open brackets in $\Omega_q$ can be encoded with a distinct string composed of $m$ digits in base $j\ge 2$,
with
\begin{equation}\label{eq-jvalue}
j\in O\left(|\Sigma|^{44}\right) \text{ when } m \in \varOmega\left(\log p^{16}\right)= \varOmega(\log p).
\end{equation}
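Proposition~\ref{prop-encoding} can be exercised on small numbers. The Python sketch below (ours; names are illustrative) encodes every value $\iota < \sigma^m\nu$ as a string of $m$ base-$j$ digits with $j = 2\sigma$, and checks that the codes are pairwise distinct, under the assumption $m \ge \log_2\nu$:

```python
import math

def encode(iota, j, m):
    """Return the m-digit base-j representation of iota (0 <= iota < j**m),
    most significant digit first."""
    assert 0 <= iota < j ** m
    digits = []
    for _ in range(m):
        digits.append(iota % j)
        iota //= j
    return tuple(reversed(digits))

sigma, nu = 3, 4                      # toy stand-ins for |Sigma|^44 and p^16
m = max(1, math.ceil(math.log2(nu)))  # m in Omega(log nu), as required
j = 2 * sigma                         # j in O(sigma) suffices
q = sigma ** m * nu                   # bound on the number of symbols to encode
codes = {encode(i, j, m) for i in range(q)}
```

Here $j^m \ge q$, so all $q$ codes exist and are distinct, as the proposition guarantees.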
\paragraph{The Dyck alphabet $\Omega_n$}
Given the values $j,m$ computed above,
let $n=j\cdot |\Sigma|^2$, hence $n \in O\left(|\Sigma|^{46}\right)$, and define the new Dyck alphabet $\Omega_n$, to be isomorphic to the set:
\begin{equation}
\label{eqDomainOmegan}
\left\{\text{`[' , `]' }\right\} \times \Sigma \times \Sigma \times \left\{ 0,\dots, j-1\right\}
\end{equation}
Let the matching open/closed elements $\zeta, \zeta'$ in $\Omega_n$ be:
\begin{equation}\label{eqDefMatchingPairs}
\zeta= \left\langle \text{`['}, a, b, o \right\rangle \text{ matching }\zeta'= \left\langle \text{`]'}, b, a, o \right\rangle
\end{equation}
Note that in $\zeta$ and $\zeta'$ the second and third components are interchanged, and the component $o$ is in $\{0,\dots, j-1\}$.
\par\noindent
We sum up the structure and information contained in the Dyck alphabet $\Omega_n$. Each matching open and closed bracket, $\zeta$ and $\zeta'$, is
represented by a 4-tuple carrying the following information:
\begin{itemize}[-]
\item whether the element is an open or closed bracket;
\item the letter of $\Sigma$ to which $\zeta$ will be mapped by homomorphism $\rho$;
\item the letter of $\Sigma$ to which $\zeta'$ will be mapped by homomorphism $\rho$;
\item a digit $i$ in the given base $j$.
In any two matching elements $\zeta, \zeta'$, the digit $i$ is the same.
\end{itemize}
Let $D_n$ be the Dyck language over $\Omega_n$.
\paragraph{Definition and properties of homomorphism $\tau$}
We define a new homomorphism $\tau: \Omega_q \to \Omega_n^+$
such that the image of $D_q$ by $\tau$ is a subset of the Dyck language $D_n$, i.e., $\tau(D_q) \subset D_n$.
Such subset $\tau(D_q)$ is next obtained by means of the regular language $\tau(R_X)$, as $\tau(D_q)=D_n \cap \tau(R_X)$.
\par\noindent
To define $\tau$,
we first need the partial mapping, called \emph{combinator}:
\[
\otimes : (\Sigma_1)^+ \times (\Sigma_2)^+ \times (\Sigma_3)^+ \times (\Sigma_4)^+ \; \to \;\left(\Sigma_1 \times \Sigma_2 \times \Sigma_3 \times \Sigma_4\right)^+
\]
where each $\Sigma_i$ is a finite alphabet; the mapping combines four words of identical length into one word of the same length over the alphabet of 4-tuples.
More precisely, the \emph{combinator} $\otimes$
is defined for all $\textit{l}\geq 1$, $x_i \in (\Sigma_i)^\textit{l}$ and $1\leq i \leq 4$ as:
\[
\otimes\left(x_1 , x_2 , x_3 , x_4\right) = \left\langle x_1(1),x_2(1),x_3(1),x_4(1) \right \rangle\, \dots \, \left\langle x_1(\textit{l}),x_2(\textit{l}),x_3(\textit{l}),x_4(\textit{l}) \right\rangle.
\]
For instance, let $x_1=ab, x_2=cd, x_3=ef, x_4= ca$; then $\otimes\left( x_1, x_2 , x_3 , x_4\right) = \left\langle a,c,e,c \right\rangle \, \left\langle b,d,f,a \right\rangle$.
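Operationally, the combinator is a positional zip of four equal-length words; a minimal Python sketch (ours) reproduces the example above:

```python
def combinator(x1, x2, x3, x4):
    """The combinator 'otimes': combine four words of identical length l
    into one word of length l over the alphabet of 4-tuples."""
    assert len(x1) == len(x2) == len(x3) == len(x4) >= 1
    return [tuple(letters) for letters in zip(x1, x2, x3, x4)]
```

On the example in the text, \verb|combinator("ab", "cd", "ef", "ca")| yields the two 4-tuples $\langle a,c,e,c\rangle\,\langle b,d,f,a\rangle$.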
\par\noindent
Recall now the letter-to-letter homomorphism $h: \Omega_q\to\Delta_m$, defined in the CST characterization of Eq. \eqref{eq:OkhotinCSTapplication}.
Since $L(\widetilde{G})$ is a subset of $(\Delta_m\Delta_m)^*$,
the image $h(\omega)$ of a bracket $\omega\in\Omega_q$ is in $\Delta_m$.
\par\noindent
The definition of $\tau$ is :
\begin{gather}
\label{EqHomotau1}
\begin{array}{l}
\tau(\omega) = \otimes \left(\text{`['}^m \, , \pi_m \left(h(\omega) \right) \, , \left(\pi_m \left(h(\omega') \right)\right)^R
\, , \left\llbracket\omega \right\rrbracket_ j \right)
\\
\tau(\omega') =\otimes \left( \text{`]'}^m \, ,\pi_m \left(h(\omega') \right) \, , \left(\pi_m \left(h(\omega) \right)\right)^R
\, , \left(\left\llbracket\omega \right\rrbracket_j\right)^R
\right)
\end{array}
\end{gather}
All four arguments of $\otimes$
are words of length $m$, therefore the combinator $\otimes$ returns a word of length $m$ over the alphabet of 4-tuples.
\par\noindent
For instance, if $h(\omega)= \langle a_1, \dots, a_m \rangle \in \Delta_m$,
$h(\omega')= \langle b_m, \dots, b_1 \rangle \in \Delta_m$,
and $\left\llbracket\omega \right\rrbracket_j = o_1 o_2 \dots o_m$,
with $o_1, \dots, o_m \in \{0, \dots, j-1\},$ then $\left(\left\llbracket\omega \right\rrbracket_ j\right)^R=o_m o_{m-1} \dots o_1$ and:
\[
\begin{array}{llcccl}
\tau(\omega)=& \left\langle \text{`['}, a_1, b_1, o_1 \right\rangle &
\left\langle \text{`['}, a_2, b_2, o_2 \right\rangle &
\dots & \dots & \langle \text{`['}, a_m, b_m, o_m\rangle
\\
\tau(\omega')=&\left\langle \text{`]'}, b_m, a_m, o_m \right\rangle & \dots & \dots & \left\langle\text{`]'}, b_{2}, a_{2}, o_{2}\right\rangle
& \langle \text{`]'}, b_1 , a_1, o_1 \rangle
\end{array}
\]
An example of a complete definition of $\tau$ is given in Sec.~\ref{sect:example}, Eq.~\eqref{eqExampleTau(m=2,j=2)}.
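To make the interplay of the four components concrete, the following Python sketch (ours; $h(\omega)$, $h(\omega')$ and the code are passed in as plain sequences, in the orientation used by Eq.~\eqref{EqHomotau1}) computes $\tau(\omega)$ and $\tau(\omega')$ and exhibits the matching property stated next in Claim~\ref{claimMatchingPairs}:

```python
def tau_open(hw, hwp, code):
    """tau(omega): combine '['*m, pi_m(h(omega)), the reversal of
    pi_m(h(omega')), and the m-digit code of omega."""
    m = len(hw)
    return [("[", hw[i], hwp[m - 1 - i], code[i]) for i in range(m)]

def tau_closed(hw, hwp, code):
    """tau(omega'): same data, with the roles of h(omega) and h(omega')
    swapped and the code reversed."""
    m = len(hw)
    return [("]", hwp[i], hw[m - 1 - i], code[m - 1 - i]) for i in range(m)]

def matching(zeta, zeta_prime):
    """Matching open/closed elements of Omega_n: second and third
    components interchanged, same digit."""
    return ((zeta[0], zeta_prime[0]) == ("[", "]")
            and zeta[1] == zeta_prime[2]
            and zeta[2] == zeta_prime[1]
            and zeta[3] == zeta_prime[3])
```

For any bracket pair, the $i$-th element of $\tau(\omega)$ matches the $(m-i+1)$-th element of $\tau(\omega')$, since the second word is reversed and the code is read backwards.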
\begin{claim}\label{claimMatchingPairs}
The following two facts hold:
\begin{enumerate}[a)]
\item
Let $\omega, \omega'\in \Omega_q$ be a matching pair. Then
$\tau(\omega)= \zeta_1 \dots \zeta_m$ and
$\tau(\omega')=\zeta'_m \dots \zeta'_1$,
where for all $i$ the pairs $\zeta_i,\zeta'_i$ are matching in $\Omega_n$.
\item
$\tau(D_q) \subseteq D_n$.
\end{enumerate}
\end{claim}
\begin{subproof}
Part a). The fact that $\zeta_i,\zeta'_i$ match according to formula \eqref{eqDefMatchingPairs}, follows immediately from the definition of $\tau$.
\par\noindent
Part b). Since, for every $w \in \Omega_q^+$, $\tau(w)$ preserves
the parenthesization of $w$, if $w \in D_q$, then $\tau(w)\in D_n$.
\qedsymbol
\end{subproof}
\par
We show that the mapping $\tau$ is one-to-one:
\begin{claim}\label{Claim1to1}
For all $ w, w' \in (\Omega_q)^+$, if $\tau(w)= \tau(w')$, then $w=w'$.
\end{claim}
\begin{subproof}
Let $\omega_1, \omega_2 \in \Omega_q$; if $\omega_1 \neq \omega_2$, then
$\llbracket \omega_1 \rrbracket_j \neq \llbracket \omega_2 \rrbracket_j$ by definition of $\llbracket \dots \rrbracket_j$. Therefore the inequality
$\tau(\omega_1) \neq \tau(\omega_2)$ holds, because at least one position differs. Since $\tau$ maps every letter of $\Omega_q$ to a word of the same length $m$, injectivity on the letters extends to injectivity on words.
\qedsymbol
\end{subproof}
\paragraph{Definition and properties of the homomorphism $\rho$ used in CST}
We now define a letter-to-letter homomorphism $\rho: \Omega_n \to \Sigma$,
in order to prove later that
$
\rho\left( D_n \cap \tau(R_X) \right)
$ is exactly
$L(G,X)$.
\par\noindent
The homomorphism $\rho$, which does not depend on the grammar $G$ but only on $\Omega_n$, is simply defined as
the projection on the second component of each 4-tuple:
$
\rho\left(\left\langle x_1,x_2,x_3,x_4 \right\rangle \right) = x_2$\, (where $x_2 \in \Sigma$).
\begin{claim}\label{ClaimCommutation}
For all $ w \in (\Omega_q)^+$, the equality $\rho(\tau(w))= \pi_m (h(w))$ holds, where $\tau$ is defined in Eq.~\eqref{EqHomotau1} and $h$ in Eq.~\eqref{eq:OkhotinCSTapplication}.
\end{claim}
\begin{subproof}
By the definitions of $\tau$ and $\rho$, for every letter $\chi \in \Omega_q$ the equality $\rho\left( \tau(\chi)\right)= \pi_m (h(\chi))$ holds; since all the mappings involved are extended letterwise as homomorphisms, the equality carries over to every word $w \in (\Omega_q)^+$.
\qedsymbol
\end{subproof}
\begin{claim}\label{ClaimTau}
$\tau^{-1}(D_n)\subseteq D_q$.
\end{claim}
\begin{subproof}
Although $\tau^{-1}$ is not defined for every word in $D_n$, the mapping $\tau$ is defined so that, if a word $w\not\in D_q$,
then $\tau(w) \not\in D_n$; hence if $\tau(w)\in D_n$, then also $w \in D_q$.\qedsymbol
\end{subproof}
\paragraph{CST characterization of $L(G,X)$}
To complete this part of the proof, it is enough to prove the following identity
\begin{equation}
\rho\left( D_n \cap \tau(R_X) \right) = \pi_m \left(h \left( D_q \cap R_X \right) \right)
\end{equation}
since $L(G,X) = \pi_m \left(h \left( D_q \cap R_X \right) \right)$.
\par\noindent
By Claim~\ref{Claim1to1}, since $\tau$ is one-to-one, $\tau(D_q) \cap \tau(R_X)= \tau(D_q \cap R_X)$;
hence, by Claim~\ref{claimMatchingPairs}, part (b),
\[
\rho\left( \tau(D_q \cap R_X) \right) \; = \;\rho\left( \tau(D_q) \cap \tau(R_X) \right) \subseteq \rho\left( D_n \cap \tau(R_X) \right).
\]
The inclusion
\[
\pi_m \left(h \left( D_q \cap R_X \right) \right)\subseteq \rho\left( \tau(D_q \cap R_X) \right)
\]
then follows: if $z\in \pi_m (h \left( D_q \cap R_X \right)) $, then there exists a word $w \in D_q \cap R_X$
such that $\pi_m (h(w))=z$, hence
$z= \rho(\tau(w))$ by Claim~\ref{ClaimCommutation}.
Since $w\in D_q \cap R_X$, then $\tau(w) \in \tau(D_q \cap R_X)$, hence
\[
z \in
\rho\left( \tau(D_q \cap R_X) \right)\subseteq \rho\left( D_n \cap \tau(R_X) \right).
\]
The opposite inclusion
$
\rho\left( D_n \cap \tau(R_X) \right) \subseteq \pi_m \left(h \left( D_q \cap R_X \right) \right)
$
also follows: if $z\in \rho\left( D_n \cap \tau(R_X) \right)$, then
there exists $w\in R_X$ such that $\tau(w)\in D_n$ and $\rho(\tau(w))=z$. By Claim~\ref{ClaimTau}, if $\tau(w)\in D_n$, then also $w \in D_q$.
Since $z = \pi_m \left(h(w)\right)$ by Claim~\ref{ClaimCommutation}, it follows that $z \in \pi_m \left(h \left( D_q \cap R_X \right) \right)$.
\par\noindent
It then follows that $L(G,X) =\rho\left( D_n \cap \tau(R_X) \right)$, where $\tau(R_X)$ is a regular language depending on the grammar $G$, while both the
homomorphism $\rho$ and the Dyck language $D_n$ do not depend on grammar $G$, but only on $\Sigma$.
\paragraph{Extending the CST characterization}
It remains to extend the CST characterization first to $L(G,X)\cdot w$ and then to
$\displaystyle{L = \bigcup_{X\in N, w \in \Sigma^{<m}} L(G,X)\cdot w}$.
\par
First, we notice that the ``short'' word $w$, of even length, can be immediately associated with a suitable Dyck set.
Let $\sigma$ be the set $\left\{\,\text{`['}\,\right\}\times \Sigma \times \Sigma \times \{0\}$ and let $\sigma'$ be the set $\left\{\,\text{`]'}\,\right\}\times \Sigma \times \Sigma \times \{0\}$.
Let $R_w$ be the regular language composed only of the words $\alpha \in (\sigma\sigma')^* \cap D_q$ such that $\rho(\alpha) = w$, and
let $T_{X,w}= R_X \cdot R_w$. Therefore, $L(G,X)\cdot w = \rho(D_{n} \cap T_{X,w})$.
\par
The original language $L$ is the union of all $L(G,X)\cdot w$, for $X \in N, w \in \Sigma^{<m}$.
Set $\Omega_{n}$ was defined in Eq.~\eqref{eqDomainOmegan}, and by selecting the width $m$ and the base $j$ as in Eq.~\eqref{eq-jvalue},
$\Omega_{n}$ is
large enough to encode every bracket of the Dyck alphabet $\Omega_q$ with a distinct string in $\left(\Omega_n\right)^m$.
\par\noindent
Hence, it is immediate to define a regular language $T$ as the union of all regular languages $T_{X,w}$, for every $Xw$ such that $S \to X w \in P$.
Therefore, $L= \rho\left(D_{n} \cap T\right)$.
\qed
\end{proof}
\begin{corollary}\label{corollSizeDyckAlphEven}
The cardinality $n$ of the Dyck alphabet of Th.~\ref{th-CST-even} is $O(|\Sigma|^{46})$.
\end{corollary}
\paragraph{Using an SLT language}
\par
We observe that the regular language $\tau(R_X)$ in the proof of Th.~\ref{th-CST-even}
is not strictly locally testable (Def.~\ref{defk-SLT}).
Yet, it would be straightforward to modify our construction to obtain an SLT language having width in $O(\log p)$: it suffices to modify the homomorphism $\tau$ of Eq.~\eqref{EqHomotau1} so that the first bracket of each $\tau(\omega)$ and the last bracket of each $\tau(\omega')$ are made typographically different (e.g., by using a bold font) from the remaining $m-1$ brackets
of $\tau(\omega)$ and of $\tau(\omega')$.
\par\noindent
For instance, if $h(\omega)= \langle a_1, \dots, a_m \rangle \in \Delta_m$, $h(\omega')= \langle b_m, \dots, b_1 \rangle \in \Delta_m$, and $\left\llbracket\omega \right\rrbracket_j =o_1 o_2 \dots o_m$,
then
\[
\tau(\omega)=\langle \text{`\textbf{(}'}, a_1, b_1, o_1\rangle \,\langle\text{`['}, a_2, b_2, o_2\rangle
\dots \langle\text{`['}, a_m, b_m, o_m\rangle
\]
and
\[\tau(\omega')=\langle \text{`]'}, b_m, a_m, o_m\rangle \dots \langle\text{`]'}, b_2 , a_2, o_2\rangle \langle\text{`\textbf{)}'}, b_1 , a_1, o_1\rangle.
\]
Therefore, we can state:
\begin{corollary}\label{cor-SLT}
In the CST characterization of Th.~\ref{th-CST-even}, the regular language $R$ may be assumed to be strictly locally testable.
\end{corollary}
\subsubsection{An example}\label{sect:example}
The example illustrates the crucial part of our constructions, namely the homomorphism $\tau$ defined by Eq.~\eqref{EqHomotau1}.
Consider the language and grammar
\[
L= \{a^{2n+4} b^{6n} \mid n\ge 0\}, \quad \{S\to aaSb^6 \mid a^4\}
\]
This grammar, as a quotiented normal form of order 2, would be written as:
\[
S\to S_{/\varepsilon},\, S_{/\varepsilon}\to aa S_{/\varepsilon}b^6 \mid a^4
\]
We choose the value $m=2$ for the equivalent $(m,m)$-GNF and, accordingly, the substrings of length two occurring in the language are mapped onto the 2-tuples $\langle a,a\rangle, \langle a,b\rangle, \langle b,b\rangle$, shortened as $\langle aa\rangle$, etc.
\par\noindent
The following grammar in DGNF, though constructed by hand, takes the place of grammar $G''$ of Lm.~\ref{lm-partitionedCNF}:
\begin{equation}
G'' \;= \; \Big\{
1: S \to \langle aa\rangle \, S \,B\, \langle bb\rangle , \;
2: S\to \langle aa\rangle \,\langle aa\rangle , \;
3:B \to \langle bb\rangle \, \langle bb\rangle
\Big\}.
\label{eqExampleGrammOver2ples}
\end{equation}
The sentence $a^8 b^{12} \in L$ becomes $ \langle aa\rangle^4 \langle bb\rangle^6 \in L(G'')$, with the syntax tree in Fig.~\ref{fig:exampleTrees}.
\begin{figure}[h]
\begin{center}
\scalebox{0.7}{
\begin{tikzpicture}[auto, level 1/.style={sibling distance=60mm},
level 2/.style={sibling distance=24mm},
level 3/.style={sibling distance=12mm},xscale=0.9]
\node{$1:S$}
child { node{$\langle aa\rangle$}}
child {
node{$1:S$}
child { node{$\langle aa\rangle$}}
child {
node {$2:S$}
child{ node {$\langle aa\rangle$}} child{ node {$\langle aa\rangle$}}
}
child {
node {$3:B$}
child{ node {$\langle bb\rangle$}} child{ node {$\langle bb\rangle$}}
}
child { node {$\langle bb\rangle$}}
}
child {
node {$3:B$}
child{ node {$\langle bb\rangle$}} child{ node {$\langle bb\rangle$}}
}
child { node {$\langle bb\rangle$}}
;
\end{tikzpicture}
}
\end{center}
\caption{Syntax tree of the sentence $a^8 b^{12} \in L$ after its transformation to $ \langle aa\rangle^4 \langle bb\rangle^6 \in L(G'')$.}
\label{fig:exampleTrees}
\end{figure}
\par\noindent
By Okhotin's Th.~1~\cite{Okhotin2012}, this sentence is the image under the homomorphism $h$ of the following sequence
\begin{equation}
\gamma=
(^{-}_{1} \quad (^{1}_{1} \quad (^{1}_{2}\quad
)^{1}_{2} \quad (^{1}_{3}\quad )^{1}_{3} \quad )^{1}_{1} \quad (^{1}_{3} \quad )^{1}_{3}\quad )^-_{1}
\label{eqExampleOkhotinHomo}
\end{equation}
of labeled parentheses, where the numbers identify the rules and the dash (as in~\cite{Okhotin2012}) means the root of the tree. The homomorphism is specified by the table:
\setlength{\arraycolsep}{0.8cm}
\begin{equation}
\begin{array}{c|c | c|c}
\omega & \omega' & h(\omega) & h(\omega')\\\hline
(^{-}_{1} & )^-_{1} & \langle aa \rangle & \langle bb \rangle
\\
(^{1}_{1} & )^1_{1} & \langle aa \rangle & \langle bb \rangle
\\ (^{1}_{2} & )^1_{2} & \langle aa \rangle& \langle aa \rangle
\\
(^{1}_{3} & )^1_{3} & \langle bb \rangle & \langle bb \rangle
\end{array}\label{eq-ex-h-hom}
\end{equation}
\par\noindent
Applying Proposition~\ref{prop-encoding}, we choose to represent each such labeled parenthesis with a sequence of $m=2$ digits in base $j=2$.
Therefore the homomorphism $\tau$ resulting from Eq.~\eqref{EqHomotau1} defines the following Dyck alphabet:
\setlength{\arraycolsep}{0.8cm}
\begin{equation}
\begin{array}{c|c | c|c}
\omega & \omega' & \tau(\omega) & \tau(\omega')\\\hline
(^{-}_{1} & )^-_{1} & [_{a,b,0} \; [_{a,b,0} & ]_{b,a,0} \; ]_{b,a,0}
\\
(^{1}_{1} & )^1_{1} & [_{a,b,0} \; [_{a,b,1} & ]_{b,a,1} \; ]_{b,a,0}
\\ (^{1}_{2} & )^1_{2} & [_{a,a,1} \; [_{a,a,0} & ]_{a,a,0} \; ]_{a,a,1}
\\
(^{1}_{3} & )^1_{3} & [_{b,b,1} \; [_{b,b,1} & ]_{b,b,1} \; ]_{b,b,1}
\end{array}
\label{eqExampleTau(m=2,j=2)}
\end{equation}
To finish, we show the value of $\tau(\gamma)$:
\setlength{\arraycolsep}{0.7cm}
\begin{equation}
\begin{array}{l}
\overbrace{[_{a,b,0}\; [_{a,b,0}}^{(^{-}_{1}}\;
\overbrace{[_{a,b,0} \; [_{a,b,1}}^{(^{1}_{1}}
\overbrace{[_{a,a,1} \; [_{a,a,0}}^{(^{1}_{2}} \;
\overbrace{]_{a,a,0} \; ]_{a,a,1}}^{)^{1}_{2}} \;
\overbrace{[_{b,b,1} \; [_{b,b,1}}^{(^{1}_{3}}
\overbrace{]_{b,b,1} \; ]_{b,b,1}}^{)^{1}_{3}}
\\
\overbrace{]_{b,a,1} \; ]_{b,a,0}}^{)^{1}_{1}}
\overbrace{[_{b,b,1} \; [_{b,b,1}}^{(^{1}_{3}}
\overbrace{]_{b,b,1} \; ]_{b,b,1}}^{)^{1}_{3}}
\overbrace{]_{b,a,0}\; ]_{b,a,0}}^{)^-_{1}}
\end{array}
\label{exampleTauPiH}
\end{equation}
Notice that the 2-SLT language of the classical CST (applied to language $L$) is now replaced by an SLT language of higher width.
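As a sanity check on the example, the following Python fragment (ours, purely illustrative, with brackets written as 4-tuples) verifies that the word of Eq.~\eqref{exampleTauPiH} is well parenthesized -- with the opening bracket $\langle\text{`['},x,y,o\rangle$ matching the closing bracket $\langle\text{`]'},y,x,o\rangle$ -- and that $\rho$ projects it onto $a^8b^{12}$.

```python
def is_dyck(word):
    """Membership in D_n over the 4-tuple Dyck alphabet: the opening
    bracket ('[', x, y, o) matches the closing bracket (']', y, x, o)."""
    stack = []
    for b, x, y, o in word:
        if b == '[':
            stack.append((x, y, o))
        elif not stack or stack.pop() != (y, x, o):
            return False
    return not stack

def rho(word):
    """The grammar-independent homomorphism rho: second-component projection."""
    return ''.join(x for _, x, _, _ in word)

# tau(gamma), read off Eq. (exampleTauPiH); comments name the parentheses of gamma
tau_gamma = [
    ('[', 'a', 'b', 0), ('[', 'a', 'b', 0),   # (^-_1
    ('[', 'a', 'b', 0), ('[', 'a', 'b', 1),   # (^1_1
    ('[', 'a', 'a', 1), ('[', 'a', 'a', 0),   # (^1_2
    (']', 'a', 'a', 0), (']', 'a', 'a', 1),   # )^1_2
    ('[', 'b', 'b', 1), ('[', 'b', 'b', 1),   # (^1_3
    (']', 'b', 'b', 1), (']', 'b', 'b', 1),   # )^1_3
    (']', 'b', 'a', 1), (']', 'b', 'a', 0),   # )^1_1
    ('[', 'b', 'b', 1), ('[', 'b', 'b', 1),   # (^1_3
    (']', 'b', 'b', 1), (']', 'b', 'b', 1),   # )^1_3
    (']', 'b', 'a', 0), (']', 'b', 'a', 0),   # )^-_1
]
```

Both checks succeed: the twenty brackets balance, and the projection is the original sentence $a^8 b^{12}$.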
\subsection{Homomorphic characterization for languages of words of arbitrary length}\label{SubSubSectArbitraryLength}
At last, we drop the restriction to even-length sentences, thus obtaining the homomorphic characterization stated in Th.~\ref{theorGeneralHomomCharacterization} that holds for any language.
\par
As defined in Def.~\ref{defAlphabetDyckWithNeutral},
let $\Omega_{q,l}$ be an alphabet with $q$ pairs of brackets and $l\ge 1$ neutral symbols, and $D_{q,l}$ be the corresponding Dyck language with neutral symbols.
\par
In our treatment, there are exactly $l = |\Sigma|$ neutral symbols that we represent as 4-tuples of the form $\langle -, a, a, 0\rangle$ where ``$-$'' is a new symbol. Then
$\Omega_{q,l}= \Omega_q \cup \left\{\langle -, a, a, 0\rangle \mid a \in \Sigma\right\}$.
\par
Suppose that $L$ has also words of odd length. We still can apply Lm.~\ref{lm-partitionedCNF} to convert its grammar into a Q-DGNF grammar $G$ of (even) order $m$.
Let $x \in L$ have odd length. Since its length is not a multiple of $m$, $x$ is derived from the axiom using a rule of $G$ of the form
$S \to X w$, for some $X \in N$, $|w|<m$ -- recall that $L(G,X)$ generates a language of words whose length is a multiple of $m$.
The word $w$ can be factored as $w = w' a$, with $w'\in(\Sigma^2)^*$, $a\in\Sigma$.
Therefore, the same construction of the proof of Th.~\ref{th-CST-even} may be applied, by finding a CST characterization
for $L(G,X)$ and then extending it also to $L(G,X)\cdot w'$.
Hence, there exists a word $s$ over the Dyck alphabet $\Omega_{q}$
such that $w' = \rho(s)$. Just concatenate $\langle -, a, a, 0\rangle$ to the right of $s$,
and extend the definition of $\rho$ by setting $\rho\left(\langle -, a,a,0\rangle\right)=a$ for all $a\in \Sigma$.
\par\noindent
This completes the proof of Th.~\ref{theorGeneralHomomCharacterization}.
\qed
\begin{example}\label{exHomDyckNeutral}
To illustrate the case of odd length sentences, we modify the language and grammar in the example of Sect. \ref{sect:example} as follows.
\begin{equation}
\begin{array}{l}
L= \{a^{2n+4} b^{6n} \mid n\ge 0\}\cdot c
\\
\text{A grammar for $L$ (axiom $A$): } \; A\to Sc \, , \, S\to aaSb^6 \mid a^4
\\
\end{array}
\end{equation}
Then the quotiented normal form of order 2 is the grammar
\[
A\to A_{/c}\,c\, ,\, A_{/c}\to aa S_{/\varepsilon}b^6 \mid a^4 \, ,\, S_{/\varepsilon}\to aa S_{/\varepsilon}b^6 \mid a^4
\]
Only a few changes are needed with respect to the previous example.
The Dyck alphabet and homomorphism $\tau$ of Eq. \eqref{eqExampleTau(m=2,j=2)} are extended with the neutral symbol $\langle -, c, c, 0\rangle$;
e.g., the Dyck sentence of Eq.~\eqref{exampleTauPiH} needs to be concatenated with $\langle -, c, c, 0\rangle$.
The
homomorphism $\rho$ in the statement of Th.~\ref{theorGeneralHomomCharacterization} is defined by extending $\rho$
of Th.~\ref{th-CST-even} with
$\rho\left(\langle -, c, c, 0\rangle \right) = c$.
\end{example}
\section{Complexity of Dyck alphabet and relation with Medvedev theorem}\label{SectHomCharBasedOnMedvedev}
We have already given in Corollary~\ref{corollSizeDyckAlphEven} the size of the Dyck alphabet used by Th.~\ref{th-CST-even}. Since the number of neutral symbols introduced in Sect. \ref{SubSubSectArbitraryLength} only linearly depends on $|\Sigma|$, we have:
\begin{corollary}\label{corollSizeDyckAlph}
The cardinality of the Dyck alphabet $\Omega_{q, |\Sigma|}$ of Th.~\ref{theorGeneralHomomCharacterization} is $O(|\Sigma|^{46})$.
\end{corollary}
The value $q$ is thus polynomial in the cardinality of the alphabet $\Sigma$.
The current bound is related to our constructions of grammars in the generalized DGNF of some order $m$.
A trivial lower bound is $\varOmega(|\Sigma|)$, but it is open whether one can always use a significantly smaller alphabet than the one computed above.
\vspace{5mm}
\par
On the other hand, it is easy to see that in the case of some linear grammars, the bound of Corollary~\ref{corollSizeDyckAlph} is largely overestimated.
In particular, suppose that a grammar is both linear and in DGNF, i.e., its rules are in
$
\left(N \times \Sigma (N\cup\{\epsilon\}) \Sigma \right) \cup \left(N \times \Sigma \right)
$.
Such grammars generate only a subset of the linear languages, but they are still an interesting case.
\par
We now proceed to characterize the languages generated by linear grammars in DGNF through a CST using a different approach, based on Medvedev's homomorphic characterization of regular languages.
In~\cite{DBLP:journals/ijfcs/Crespi-ReghizziP12} we extended the historical Medvedev theorem~\cite{Medvedev1964,Eilenberg74},
which states that every regular language $R$ can be represented as a letter-to-letter homomorphism of a 2-SLT language over a larger alphabet. Moving beyond width two, we proved the following relation between
the alphabet sizes, the complexity of language $R$ (measured by the number of states of its NFA), and the SLT width parameter.
\begin{theorem}\label{theoMedvedevEsteso}\emph{\cite{DBLP:journals/ijfcs/Crespi-ReghizziP12}}
Given a finite alphabet $\Delta$, if a regular language $R \subseteq \Delta^*$ is accepted by an NFA with $|Q|$ states, then
there exist a letter-to-letter homomorphism $f$ and an $s$-SLT language $T$ over an alphabet $\Lambda$ of size $2 |\Delta|$, such that
$R = f(T)$, with the width parameter $s\in \Theta(\log{ |Q|})$.
\end{theorem}
Our work~\cite{DBLP:journals/ijfcs/Crespi-ReghizziP12} also exhibits a language $R\subseteq\Delta^*$ such that, for any SLT language $T$ over an alphabet of size $<2 |\Delta|$, a letter-to-letter homomorphism $f$ satisfying $R = f(T)$ does not exist.
\par
Next, we apply Th.~\ref{theoMedvedevEsteso} to languages generated by linear grammars in DGNF.
\begin{proposition}\label{propo:CSTforSimpleSymmLinGram}
Let $L=L(G)\subseteq \Sigma^+$, where $G=(\Sigma, N, P, S)$ is a linear grammar in DGNF.
Then there exist a Dyck alphabet (with neutral symbols) $\Omega_{n,l}$, a letter-to-letter homomorphism $g: \Omega_{n,l} \to \Sigma$ and an SLT language $U$ over $\Omega_{n,l}$ such that:
\begin{enumerate}
\item $ L=g\left(D_{n,l} \cap U \right)$,
\item $n = 2\cdot |\Sigma|^2$ and $l = |\Sigma|$,
\item $U$ is an $s$-SLT language with $s \in \Theta(\log |N|)$.
\end{enumerate}
\end{proposition}
\begin{proof}
\newcommand{\Op} {{\mathbf o}}
\newcommand{\Cl} {{\mathbf c}}
\par
For brevity, we prove the case when $L$ has only words of even length, hence neutral symbols are not needed in the Dyck alphabet. The extension to the general case is immediate.
\par\noindent
Let $\Sigma_1, \Sigma_2$ be alphabets; for all pairs $\langle a,b\rangle \in \Sigma_1 \times \Sigma_2$,
let
$|_1,\, |_2$ be the projections respectively on the first and the second component, i.e., $\langle a,b\rangle|_1 = a,\,
\langle a,b\rangle|_2 = b$.
\par\noindent
Let $\Delta = \Sigma \times \Sigma$.
From the structure of linear grammars in DGNF, it is obvious that there exists a regular language $W$ over the alphabet $\Delta$, such that:
\begin{equation}
L= \left\{w|_1 \cdot w|_2^R \,\mid \,w\in W \right\}
\label{eq:simpleEvenLinLanguage}
\end{equation}
where $w|_2^R$ (equivalent to $w^R|_2$) is the mirror image of the projection $ w|_2$.
Moreover, $W$ can be easily defined by means of an NFA having $
|N| + 1$ states.
\par\noindent
By Th.~\ref{theoMedvedevEsteso}, there exist an alphabet $\Lambda$ of size $
n=2\cdot|\Delta|= 2\cdot|\Sigma|^2$, a homomorphism $f: \Lambda\to\Delta$, a value $s\in \Theta(\log{ |N|})$, and an $s$-SLT language $T\subseteq \Lambda^*$ such that
$W=f(T)$.
\par\noindent
Let $\Op \circledast \Lambda = \{\,\Op \,\} \times \Lambda$ and $\Cl \circledast \Lambda = \{\,\Cl\,\} \times \Lambda$ be
two sets of opening and closing brackets, stipulating that, for every $\lambda \in \Lambda$,
bracket $\langle \Op , \lambda\rangle$ matches bracket $\langle \Cl, \lambda\rangle$.
Thus $\Omega_n=\left(\Op \circledast \Lambda\right) \cup \left(\Cl \circledast \Lambda \right)$ is a Dyck alphabet
and we denote the corresponding Dyck language by $D_n$.
\par\noindent
We also define, for every
$\lambda_1 \dots \lambda_m \in \Lambda^+$, $m \ge 1$, the word
$\Op \circledast (\lambda_1 \dots \lambda_m) = \langle \Op , \lambda_1\rangle \dots \langle \Op , \lambda_m\rangle$.
The notation is further extended to a language as usual, e.g., $\Op \circledast X$ for a language $X\subseteq \Lambda^+$ is the set of words
$\{\Op \circledast x \mid x \in X\}$.
The similar notations $\Cl \circledast x$ and $\Cl \circledast X$ have the obvious meaning,
e.g., $\Cl \circledast (\lambda_1 \dots \lambda_m)$, $m \ge 1$,
is the word $\langle \Cl , \lambda_1\rangle \dots \langle \Cl , \lambda_m\rangle$.
\par\noindent
We define the regular language $U$ over the alphabet $\Omega_n$ as:
\begin{equation*}
U = \left( \Op \circledast T \right) \cdot \left( \Cl \circledast \Lambda^+ \right)
\end{equation*}
Since $T$ is $s$-SLT, it is obvious that also $\Op \circledast T$ and $U$ are $s$-SLT.
\par\noindent
It is easy to see that, for all $t= \lambda_1 \dots \lambda_m \in T$,
the set $\left(\Op \circledast t \right) \cdot \left(\Cl \circledast \Lambda^+ \right) \; \cap D_n\, \subseteq U$
is a singleton, whose unique element is the word $\left(\Op \circledast t \right) u\in D_n$, where
$u|_1\, = \Cl^{|t|}$ and $u|_2 = t^R$, i.e., $u= \Cl \circledast (\lambda_m \dots \lambda_1)$.
We can then write $u$ as the mirror image $(\Cl \circledast t)^R$ of $\Cl \circledast t$.
\par\noindent
Denote by $U(t)$ the word $(\Op \circledast t)\cdot (\Cl \circledast t)^R$ for every $t \in T$: we have that
$U \cap D_n = \bigcup_{t \in T} U(t)$.
\par\noindent
Define the homomorphism $g: (\Cl \circledast \Lambda) \cup (\Op \circledast \Lambda)\to \Sigma$ as:
\[
\begin{cases}
g(z) = \left( f(z|_2) \right)|_1 , & \text{if }
z \in \Op \circledast \Lambda
\\
g(z) = \left(f(z|_2)\right)|_2 , & \text{if } z \in \Cl \circledast \Lambda
\end{cases}
\]
where $f$ is the homomorphism of Th. \ref{theoMedvedevEsteso} defined above.
This definition is exemplified by
\[
\begin{cases}
g\left(\langle \Op ,\lambda\rangle \right) = f(\lambda)|_1
\\
g\left(\langle \Cl,\lambda\rangle \right) = f(\lambda)|_2
\end{cases}
\]
By definition, for every $t \in T$, it holds:
\[
\begin{aligned}
g\left( U(t)\right) &= g(\Op \circledast t)\cdot \left(g(\Cl \circledast t)\right)^R
\\
&= \left(f\left((\Op \circledast t)|_2\right) \right)|_1 \cdot \Big( \left(f\left((\Cl \circledast t)|_2\right)\right)|_2 \Big)^R
\\
&= f(t)|_1 \cdot \left( f(t)|_2 \right)^R.
\end{aligned}
\]
We now show that $L= g(D_n \cap U)$.
If $x \in L$, then by Eq.~\eqref{eq:simpleEvenLinLanguage} $x = w|_1\,w|_2^R$ for some $w \in W$.
Since $W= f(T)$, there is $t \in T$ such that $w=f(t)$ and, by definition of $U$, it holds $U(t) \in U$.
Hence, $g\left(U(t)\right)= f(t)|_1 \left(f(t)|_2\right)^R = w|_1 w|_2^R = x$.
\par
The converse proof is similar.
If $x \in g(D_n \cap U)$, then there exists $t \in T$ such that $x=g(U(t))$.
Let $w = f(t)$, hence, $w \in W$.
By definition of $g$,
$x=g(U(t))= f(t)|_1 \left(f(t)|_2\right)^R = w|_1 w|_2^R$, hence $x \in L$.
\qed
\end{proof}
\section{Conclusion}\label{SectConclusion}
The main contribution of this paper is the homomorphic characterization of context-free languages using a grammar-independent Dyck alphabet and a non-erasing homomorphism. It substantially departs from previous characterizations which either used a grammar-dependent alphabet, or had to erase an unbounded number of brackets.
\par
Our result says that, given a terminal alphabet, any language over the same alphabet can be homomorphically characterized using the \emph{same} Dyck language and the \emph{same} homomorphism together with a language-specific regular language. In other terms, the idiosyncratic properties of each context-free language are completely represented in the words of the regular language, which moreover have the same length as the original sentences. In this way, for each source alphabet size, a one-to-one correspondence between context-free grammars and regular (more precisely strictly locally testable) languages is established. In accordance with the trade-off between the complexity of the language, the Dyck alphabet size and the regular language complexity (Proposition~\ref{propos:TradeOff}), the more complex the source language, the higher the width of the sliding window used by the regular language.
We hope that further study of this correspondence between the two fundamental context-free and regular language families may lead to new insights.
\par
A technical question is open to further investigation. The Dyck alphabet size that we have proved to be sufficient is a rather high power of the source alphabet size. It may be possible to obtain substantial size reductions, for the general case and, more likely, for some subfamilies of context-free languages, as we have shown for the linear languages in double Greibach normal form.
\paragraph{Acknowledgment} We gratefully thank the anonymous reviewers for their careful and valuable suggestions.
\bibliographystyle{elsarticle-num}
\section{Introduction}\label{intro}
Let $\mathcal I=\{I_1,\dots,I_n\}$
be a set of $n$ intervals of unit length, numbered
from left to right. $U(\mathcal I)$ is the poset on the elements
1 to $n$ which is defined by $i\prec j$ if and only if interval
$I_i$ is strictly to the left of interval $I_j$.
The posets that can be described in this way are called \emph{unit interval orders}. We write $\mathcal U_n$ for the collection of unit interval orders on
$\{1,\dots,n\}$.
A \emph{Dyck path} of length $n$ is a path which
proceeds by steps of $(1,0)$ and $(0,1)$ from $(0,0)$ to $(n,n)$, while
not passing below the diagonal line from $(0,0)$ to $(n,n)$. Dyck paths are
counted by the well-known Catalan numbers. We write $\mathcal D_n$ for the collection of Dyck paths of length $n$.
A Dyck path can be specified by identifying the collection of complete
$1\times 1$ boxes with vertices at lattice points which lie between the Dyck path and the diagonal line. We refer to the collection of these boxes as the
\emph{area set} of the Dyck path.
The boxes which may appear
in the area set of a Dyck path in $\mathcal D_n$ can be indexed by pairs $(i,j)$ with
$1\leq i <j \leq n$, where the pair $(i,j)$ corresponds to the box $i-1\leq x \leq i$, $j-1\leq y\leq j$.
The \emph{area sequence} of a Dyck path $D\in\mathcal D_n$ is the
sequence $(a_1,\dots,a_n)$ consisting of the number
of boxes
in the area set on each horizontal line, from bottom to top. The possible area sequences of
Dyck paths are characterized by the fact that $a_1=0$ and
$a_i\leq a_{i-1}+1$.
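This characterization is easy to test mechanically; the following Python fragment (ours, purely illustrative, $0$-based indexing) checks whether a sequence of integers is the area sequence of some Dyck path.

```python
def is_area_sequence(a):
    """Area sequences of Dyck paths in D_n: a_1 = 0 and, for i > 1,
    0 <= a_i <= a_{i-1} + 1 (indices are 0-based below)."""
    return (len(a) > 0 and a[0] == 0
            and all(0 <= a[i] <= a[i - 1] + 1 for i in range(1, len(a))))
```

For instance $(0,1,2,1)$ and $(0,1,0,1)$ are area sequences, while $(0,2)$ and $(1,0)$ are not.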
For $U$ a unit interval order, let $a(U)$ be the Dyck path whose area set
is given by the boxes $(i,j)$ such that $i<j$ and $i\not\prec j$.
In Section \ref{Area}, we will recall the proof that this
defines a bijection from $\mathcal U_n$ to $\mathcal D_n$.
We now turn to the definition of a second map from unit interval orders
to Dyck paths, as described in Section \ref{PL}.
Let $w=(w_1,\dots,w_n)$ be a sequence of non-negative integers.
Associated to $w$ is a poset $P(w)$ defined as follows \cite[Section 2]{GP}.
For $1\leq i,j\leq n$, we set $i\prec j$ if either of the following is satisfied: \begin{itemize}
\item $w_j -w_i\geq 2$,
\item or $w_j-w_i =1$ and $i<j$.
\end{itemize}
For any $w$, the poset $P(w)$ is isomorphic to a unique unit interval order.
The word $w$ is referred to as a \emph{part listing} for $P(w)$.
Note that the labelling of the elements of $P(w)$ and the labelling of
the elements of the isomorphic unit interval order may not be the same.
For a unit interval order $U\in \mathcal U_n$, there is a unique part listing $w$
such that $P(w)$ is isomorphic to $U$ and $w$ is the area sequence of a
Dyck path. Define $\tilde p(U)$ to be this part listing.
Define $p(U)$ to be the Dyck path whose area sequence is $\tilde p(U)$.
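The relations of $P(w)$ can be computed directly from the two conditions above; a small Python sketch (function name ours, $0$-based positions) returns the set of strict relations.

```python
def part_listing_poset(w):
    """Strict relations of the poset P(w) attached to a part listing w:
    position i is below position j iff w[j] - w[i] >= 2,
    or w[j] - w[i] == 1 and i < j."""
    n = len(w)
    return {(i, j) for i in range(n) for j in range(n)
            if w[j] - w[i] >= 2 or (w[j] - w[i] == 1 and i < j)}
```

For instance, $P((0,1,0))$ has the single relation between positions $1$ and $2$, while $P((0,1,2))$ is a three-element chain.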
We have now recalled two ways to
associate a Dyck path to a unit interval
order, namely the maps $a$ and $p$. Considering these two maps,
Matherne, Morales and Selover raised the question of how they are
related. (In fact, the definition of $p$ used in \cite{MMS} is slightly different from what we have taken as the definition, and seems to include a small inaccuracy. The authors of \cite{MMS} have confirmed to us that the map $p$ which we consider is the one which they intended.)
On the basis of computer evidence, Matherne, Morales, and Selover made the following conjecture:
\begin{conjecture}\cite{MMS} \label{con}
For $U\in \mathcal U_n$, we have that
$a(U)=\zeta(p(U))$.\end{conjecture}
The map $\zeta$ which relates $a(U)$ and $p(U)$ in the conjecture of Matherne,
Morales, and Selover is the famous zeta map of Haglund \cite{H}.
It is a bijection from $\mathcal D_n$ to $\mathcal D_n$ which plays
an important rôle in $(q,t)$-Catalan combinatorics. We recall its
definition in Section \ref{Zeta} below. This
unexpected connection between $\zeta$ and unit interval orders seems worthy of further investigation.
The main result of this paper is to prove Conjecture \ref{con} (Theorem \ref{theoremConj}) which we carry
out in Section \ref{proof}.
Additionally, in Section \ref{grevlex}, we establish that the part listing $\tilde p(U)$ can be characterized among those part listings $w$ with
$P(w)$ isomorphic to
$U$ as the one which is minimal with respect to graded reverse lexicographic
order. This is not needed for our argument, but provides an interesting alternative description of the map $\tilde p$ (and thus also of $p$).
\section{The map $a$}\label{Area}
Let us give an equivalent definition of the unit interval orders.
\begin{lemma}
\label{defequivalent}
An order $\prec$ on $\{1,\dots,n\}$ is a unit interval order if and only if for all $x,y\in \{1,\dots,n\}$, we have the two properties
\begin{itemize}
\item $x\prec y$ implies $x<y$,
\item $x\prec y$ implies $\forall x'\leq x$ and $\forall y'\geq y$, we have $x'\prec y'$.
\end{itemize}
\end{lemma}
\begin{proof}
It is obvious from the definition that a unit interval order satisfies the two properties. We prove the converse direction by contradiction.
The claim is clearly true for $n=1$. Suppose that $\prec$ is an order on $\{1,\dots,n\}$, with $n\geq 2$ minimal, that satisfies the two properties but is not a unit interval order. Both properties are preserved when $\prec$ is restricted to $\{1,\dots,n-1\}$; by minimality of $n$, this restriction is a unit interval order. Let $\{I_1,\dots,I_{n-1}\}$ be a set of intervals of unit length realizing it.
If the integer $n$ is not comparable to any other element with respect to $\prec$, then the second property implies that there are no relations for $\prec$ at all: any relation $x\prec y$ with $y\leq n-1$ would force $x\prec n$. The poset $\prec$ can therefore be realized as the unit interval order corresponding to $n$ intervals all of which overlap, contradicting our assumption.
Otherwise let $k$ be the greatest integer such that $k\prec n$. By the second property, this means that
the intervals $I_i$ for $k<i<n$ (if any) intersect. We can therefore choose $I_n$ so that it intersects exactly these intervals and no others.
Using the second property, the unit interval order defined by $\{I_1,\dots,I_n\}$ is precisely $\prec$, which is a contradiction and finishes the proof of the lemma.
\end{proof}
Given $U\in \mathcal U_n$, define
$$\tilde a(U) = \{(x,y)\mid x\not\prec y,\ 1\leq x<y \leq n\}.$$
\begin{lemma} \label{area} For $U\in \mathcal U_n$, we have that
$\tilde a(U)$ is the area set of a Dyck path in $\mathcal D_n$. \end{lemma}
\begin{proof}
Since $(x,y)\in\tilde a(U)$ implies $x<y$, we only have boxes above the diagonal.
Let $(x,y)\in\tilde a(U)$. Then for any $x\leq x'<y'\leq y$, we have
that $(x',y')\in\tilde a(U)$. Indeed, since the intervals $I_x$ and $I_y$ overlap,
so do all the intervals indexed by numbers between $x$ and $y$.
We just proved that if there is a box $(x,y)$ in $\tilde a(U)$, then all the boxes above the diagonal and weakly southeast of $(x,y)$ are also in $\tilde a(U)$. This implies that $\tilde a(U)$ is the area set of a Dyck path.
\end{proof}
Thanks to Lemma \ref{area}, for $U$ a unit interval order, we can define
$a(U)$ to be the Dyck path whose area set is given by $\tilde a(U)$.
\begin{lemma} The map $a$ is a bijection from $\mathcal U_n$ to $\mathcal D_n$. \end{lemma}
\begin{proof}
We give the inverse map. Let $D\in \mathcal D_n$ be a Dyck path with area set $A$. Let $\prec$ be the order on $\{1,\dots,n\}$ defined by $x\prec y$ if $x<y$ and $(x,y)\not\in A$. Proving that $\prec$ is a unit interval order would finish the proof, as the map sending $D$ to $\prec$ would be the inverse of $a$. Since $A$ is the area set of a Dyck path we know that if $(x,y)\not\in A$, then $\forall x'\leq x$ and $\forall y'\geq y$, we have that $(x',y')\not\in A$. The order $\prec$ is therefore a unit interval order by Lemma \ref{defequivalent}. \end{proof}
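The bijection and its inverse amount to complementing the relation above the diagonal; the following Python sketch is ours and purely illustrative, with a poset represented by its set of strict relations on $\{1,\dots,n\}$.

```python
def area_set(relations, n):
    """Area set of a(U): the boxes (x, y) with x < y and x not below y in U."""
    return {(x, y) for x in range(1, n + 1) for y in range(x + 1, n + 1)
            if (x, y) not in relations}

def poset_from_area(area, n):
    """Inverse map: x below y iff x < y and the box (x, y) is NOT in the area set."""
    return {(x, y) for x in range(1, n + 1) for y in range(x + 1, n + 1)
            if (x, y) not in area}
```

For example, the unit interval order on $\{1,2,3\}$ with the single relation $1\prec 3$ has area set $\{(1,2),(2,3)\}$, and applying the inverse map recovers the relation $1\prec 3$.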
\section{Part listings} \label{PL}
In this section, we study the way that unit interval orders can be
defined via part listings, as already described in
Section \ref{intro}. We begin by giving an algorithm which, starting from
a unit interval order $U\in\mathcal U_n$, gives the part listing $\tilde p(U)$ defined in the introduction as the unique part listing whose associated poset is isomorphic to $U$ and such that it is the area sequence of a Dyck path.
Given $U$ a unit interval order in $\mathcal U_n$, we inductively define a function $\operatorname{\ell}$ from $\{1,\dots,n\}$ to $\mathbb Z_{\geq 0}$, as follows. We fix that
$\operatorname{\ell}(1)=0$. We suppose that $\operatorname{\ell}(i)$ has been defined for all
$1\leq i \leq j-1$. Now define $\operatorname{\ell}(j)$ to be $\max_{i\prec j} \operatorname{\ell}(i)+1$.
If $j$ is a minimal element of $U$, so that the set over which we are taking
the maximum is empty, we define $\operatorname{\ell}(j)=0$. We call $\ell(i)$ the level of $i$ (or of the interval $I_i$).
\begin{Algorithm} \label{algo}
Let $U\in \mathcal U_n$. We will successively define words $q_1$, $q_2$, \dots,
$q_n$. The word $q_i$ is of length $i$, and is obtained by inserting
a copy of $\ell(i)$ into $q_{i-1}$.
\begin{itemize}
\item We begin by defining $q_1=0$. Now suppose that $q_{i-1}$ has
already been constructed.
\item Let $C_i$ be the number of elements of level $\ell(i)-1$ comparable to $i$. (Note that they are necessarily to the left of $i$.)
The letter $\ell(i)$ is added into $q_{i-1}$ directly after the occurrences of the letter $\ell(i)$ (if any) immediately following the $C_i$-th letter $\ell(i)-1$.
\end{itemize}
Finally, define $q(U)=q_n$.
\end{Algorithm}
See Figure \ref{fig:algo} for an example of this algorithm.
\input{Figure/figalgo}
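Algorithm \ref{algo} is straightforward to implement. The sketch below is ours: $U$ is again represented by sorted left endpoints, with $i\prec j$ iff $x_i+1<x_j$, and for the level-$0$ case we read the ``$C_i$-th letter $\ell(i)-1$'' as the start of the word.

```python
def precedes(x, i, j):
    # I_i strictly to the left of I_j, for unit intervals [x_k, x_k + 1]
    return x[i] + 1 < x[j]

def q_listing(x):
    """Part listing q(U) produced by the insertion algorithm."""
    n = len(x)
    ell = [0] * n                       # levels, computed inductively
    for j in range(n):
        ell[j] = max([ell[i] + 1 for i in range(j) if precedes(x, i, j)],
                     default=0)
    q = []
    for i in range(n):
        L = ell[i]
        C = sum(1 for k in range(i) if ell[k] == L - 1 and precedes(x, k, i))
        pos = 0                         # for L == 0: start of the word
        if L > 0:
            seen = 0
            for p, letter in enumerate(q):
                if letter == L - 1:
                    seen += 1
                    if seen == C:       # just past the C-th letter L - 1
                        pos = p + 1
                        break
        while pos < len(q) and q[pos] == L:
            pos += 1                    # skip copies of L already in place
        q.insert(pos, L)
    return q
```

For instance, left endpoints $(0, 0.9, 1.5)$ give levels $(0,0,1)$ and the part listing $(0,1,0)$, and the output is always an area sequence of a Dyck path, as Lemma \ref{asequ} asserts.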
\begin{lemma} \label{asequ} For $U\in \mathcal U_n$, we have that $q(U)$ is a part listing corresponding to the area sequence of a Dyck path in $\mathcal D_n$. \end{lemma}
\begin{proof}
Recall that the area sequences of Dyck paths in $\mathcal D_n$ are the sequences of non-negative integers $(a_1,\dots,a_n)$ characterized by the two properties that $a_1=0$ and $a_i \leq a_{i-1}+1$ for $2\leq i\leq n$.
Let $U$ be a unit interval order and $q(U)$ be the sequence constructed by the above algorithm. By definition of the algorithm, $0$ is the first element added to the list and will remain at the first position in $q(U)$ since every level $\ell(i)$ will be added after this $0$. Thus, we have that $q(U)$ starts with a $0$.
Also by construction, we have that $\ell(i)$ is inserted directly after a letter which is either $\ell(i)-1$ or $\ell(i)$. Thus, when $\ell(i)$ is inserted, it satisfies the condition that its value is at most one more than the value of the preceding letter. And since at each step we add a letter greater than or equal to the maximal letter of the previous word, the condition remains true as we carry out
all subsequent insertions.
\end{proof}
Given a part listing $w$ of length $n$, we define a partial order on
$\{1,\dots,n\}$ as described in the introduction. We denote it by $P(w)$, and we write
$\prec_{P(w)}$ for its order relation.
\begin{definition}
Given a unit interval order $U\in \mathcal U_n$, and its corresponding part listing
$w=q(U)$, we define a permutation
$f_U:\{1,\dots,n\}\rightarrow \{1,\dots,n\}$ as follows. We first
consider the positions $i$ in the part listing with $w_i=0$, and we
number them starting with 1 from left to right. We then number the positions $i$ with $w_i=1$ from left to right, and continue in the same way. The
number that is eventually assigned to position $i$ is $f_U(i)$; we write $f$ for $f_U$ when $U$ is clear from context.
\end{definition}
We now define a new order $\prec_f$ on $\{1,\dots,n\}$ by saying that
$i\prec_f j$ if and only if $f^{-1}(i) \prec_{P(w)} f^{-1}(j)$. By definition,
this poset is isomorphic to $P(w)$.
\begin{proposition}\label{iso} For $U$ a unit interval order, $w=q(U)$ the corresponding part
listing, and $f$ the bijection defined above, $\prec_f$ agrees with the
original order on $U$. \end{proposition}
\begin{proof} Note that $\ell$ is a weakly increasing function on $\{1,\dots,n\}$.
Let $1\leq i\leq n$. Because the copies of $\ell$ in $w$ are
inserted from left to right, $f^{-1}(i)$ is the position in $w$ where
$\ell(i)$ wound up, that is to say, $w_{f^{-1}(i)}=\ell(i)$.
Let $1\leq i<j\leq n$. Suppose $I_i$ is strictly to the left of $I_j$ in $U$.
Thus, $\ell(j)>\ell(i)$. Suppose first that $\ell(j) \geq \ell(i)+2$.
In this case, $w_{f^{-1}(j)} \geq w_{f^{-1}(i)}+2$, so $j \succ_f i$, as desired.
Suppose now that $\ell(j)=\ell(i)+1$. Suppose that $I_j$ is strictly to
the right of $C$ intervals of level $\ell(i)$. Note that $I_i$ is one of
them by assumption. Thus, when $\ell(j)$ was inserted into $w$, it was inserted to the
right of the letter $\ell(i)$; this persists as other letters are inserted.
It follows that also in this case, $j\succ_f i$, as desired.
Suppose now that $I_i$ and $I_j$ overlap. In this case, $\ell(j)\leq \ell(i)+1$, because the $I_k$ strictly to the left of $I_j$ are weakly to the left of
$I_i$, so have level at most $\ell(i)$. We must therefore consider two cases,
when $\ell(j)=\ell(i)$ and when $\ell(j)=\ell(i)+1$.
Consider first the case where $\ell(j)=\ell(i)$. In this case,
$w_{f^{-1}(j)}=w_{f^{-1}(i)}$, so $i$ and $j$ are incomparable with respect to
$\prec_f$, as desired.
The case where $\ell(j)=\ell(i)+1$ is handled similarly to the corresponding
case where $I_i$ and $I_j$ do not overlap; here, however,
$f^{-1}(j)<f^{-1}(i)$, with the result that
$i$ and $j$ are incomparable with respect to $\prec_f$, as desired.
This completes the proof.
\end{proof}
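Proposition \ref{iso} can be checked mechanically on examples. The sketch below assumes the introduction's convention for $P(w)$, which is not reproduced in this excerpt; our reading, consistent with the proof above, is that position $p$ lies below position $q$ when $w_q\geq w_p+2$, or when $w_q=w_p+1$ and $p$ is to the left of $q$. All helper names are ours.

```python
def precedes(x, i, j):
    # I_i strictly to the left of I_j, for unit intervals [x_k, x_k + 1]
    return x[i] + 1 < x[j]

def q_listing(x):
    # the insertion algorithm producing the part listing q(U)
    n = len(x)
    ell = [0] * n
    for j in range(n):
        ell[j] = max([ell[i] + 1 for i in range(j) if precedes(x, i, j)],
                     default=0)
    q = []
    for i in range(n):
        L, pos = ell[i], 0
        C = sum(1 for k in range(i) if ell[k] == L - 1 and precedes(x, k, i))
        if L > 0:
            seen = 0
            for p, letter in enumerate(q):
                if letter == L - 1:
                    seen += 1
                    if seen == C:
                        pos = p + 1
                        break
        while pos < len(q) and q[pos] == L:
            pos += 1
        q.insert(pos, L)
    return q

def f_perm(w):
    # number the positions of each letter value from left to right (0-indexed)
    order = sorted(range(len(w)), key=lambda p: (w[p], p))
    f = [0] * len(w)
    for rank, p in enumerate(order):
        f[p] = rank
    return f

def below(w, p, q):
    # our reading of p below q in P(w): a letter lies below any letter at
    # least two larger, and below a letter one larger sitting to its right
    return w[q] >= w[p] + 2 or (w[q] == w[p] + 1 and p < q)

def prec_f(x):
    # the order i <_f j of the text, with elements 0-indexed
    w = q_listing(x)
    f = f_perm(w)
    finv = {f[p]: p for p in range(len(w))}
    n = len(x)
    return {(i, j) for i in range(n) for j in range(n)
            if below(w, finv[i], finv[j])}
```

With this convention, $\prec_f$ agrees with the interval order on every example we tried, as the proposition guarantees.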
In the introduction, for $U$ a unit interval order, we defined
$\tilde p(U)$ to be the unique part listing $w$ such that $P(w)$ is isomorphic to $U$ and $w$ is also an area sequence of a Dyck path. We are
now in a position to establish that
the map $\tilde p$ is well-defined.
\begin{proposition}
For $U$ a unit interval order, we have $\tilde p(U)$ is well-defined and $\tilde p(U)=q(U)$.
\end{proposition}
\begin{proof}
For each unit interval order $U\in \mathcal U_n$, the part listing $q(U)$ is
the area sequence of a Dyck path
by Lemma \ref{asequ}.
By Proposition \ref{iso}, the poset $P(q(U))$ is isomorphic to $U$. This shows in particular that the map $q$ must be injective. Since there are the same number of unit interval orders in $\mathcal U_n$ as of Dyck paths in $\mathcal D_n$, $q$ must be a bijection. Thus, Proposition \ref{iso} tells us that if $w$ and $w'$ are two different area sequences of Dyck paths, then $P(w)$ and $P(w')$ cannot be isomorphic. It follows that, for any $U$, there is exactly one Dyck path $w$ such that $P(w)$ is isomorphic to $U$, namely, $q(U)$. Thus, $\tilde p(U)$ is well-defined and equals $q(U)$.
\end{proof}
\section{The zeta map} \label{Zeta}
We now describe the map $\zeta:\mathcal D_n\rightarrow \mathcal D_n$.
Start with $D\in \mathcal D_n$. We begin by labelling
the lattice points that make up the path $D$ (except the very first):
we label the top end-point of an up step with the letter $a$, and we label
the right endpoint of a right step with the letter $b$.
We then read the labels: first on the line $y=x$, from bottom left to top right,
then on the line $y=x+1$, again in the same direction, then on the line
$y=x+2$, etc. Interpret $b$ as designating an up step, and $a$ as
designating a right step. This defines a lattice path from $(0,0)$ to
$(n,n)$. Define this to be $\zeta(D)$. See Figure \ref{fig:zeta} for an example of this map.
\begin{lemma}\label{app} Starting from $D\in\mathcal D_n$, the path $\zeta(D)$ is a
Dyck path. \end{lemma}
\input{Figure/figzeta}
\begin{proof}
Let $D\in\mathcal D_n$.
We define a matching between the up steps and the right steps of $D$ as follows. For all non-negative integers $t$, look at the part of $D$ between the lines $y=x+t$ and $y=x+(t+1)$. This necessarily consists of an alternating sequence of the same number of up steps and right steps. We define our pairing by matching the $i$-{th} up step with the $i$-{th} right step. By definition of $\zeta$, these two matched edges contribute an up step and a right step to $\zeta(D)$, and the up step comes before the right step. Thus $\zeta(D)$ will always stay above the diagonal.
\end{proof}
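The map $\zeta$ admits a compact implementation. The following sketch (ours, not the authors') encodes a Dyck path as a word over `U`/`R`; each step endpoint is recorded together with the height $y-x$ of its diagonal, the records are sorted, and the labels are reinterpreted as in the definition above.

```python
def zeta(D):
    """Zeta map on a Dyck path given as a word over {'U', 'R'} (up / right)."""
    pts, x, y = [], 0, 0
    for step in D:
        if step == 'U':
            y += 1
        else:
            x += 1
        # record the endpoint's diagonal height y - x, its x-coordinate,
        # and its label: 'a' for an up step, 'b' for a right step
        pts.append((y - x, x, 'a' if step == 'U' else 'b'))
    pts.sort()      # read diagonals bottom-up, each from left to right
    # reinterpret: 'b' designates an up step, 'a' a right step
    return ''.join('U' if lab == 'b' else 'R' for _, _, lab in pts)
```

For $n=2$ the map exchanges the two Dyck paths, and on larger examples one can check directly that $\zeta(D)$ is again a Dyck path, as guaranteed by Lemma \ref{app}.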
\section{Proof of the conjecture}\label{proof}
The proof of the conjecture (Theorem \ref{theoremConj}) proceeds by induction. We suppose that, for a unit interval order $U$, the paths $\zeta(p(U))$ and $a(U)$ coincide. We then consider what happens when we add a new rightmost interval to $U$. By proving that this changes the result of applying each of the two maps in the same way, we conclude that the maps also coincide on the larger poset, which establishes the conjecture by induction.
\begin{definition}
A peak in a Dyck path consists of an up step followed by a
right step. In the Dyck word, this amounts to an occurrence of the
consecutive pair of letters `$ab$', and in the corresponding area sequence $(a_1,\dots,a_n)$ it amounts to having $a_{i}\leq a_{i-1}$ for the given $i$.
By adding a peak to a Dyck path, we mean the insertion,
at some position, of `$ab$' into the Dyck path. The result of adding a peak is
again a Dyck path. Note that adding a peak does not necessarily
increase the number of peaks: if the peak is added after a letter `$a$' and
before a letter `$b$', the number of peaks does not change.
We say that a peak of a Dyck path is a maximal peak if the top of the up step lies on the highest line of slope $1$ that touches the Dyck path. We will refer to the last peak of the Dyck path as the final peak, and to the last maximal peak as the final maximal peak.
We say that a peak is of height $i$ if the top of the up step is on the line
$y=x+i$.
\end{definition}
In the sequel, let $U\in \mathcal U_n$ and $U'$ be obtained from
$U$ by adding an $(n+1)$-st interval of unit length to the right of those
of $U$. We denote by $\ell$ the level of this added interval.
For an illustration of the following lemmas, see Figure \ref{fig:expan}.
\begin{lemma}\label{add} Let $U$ be a unit interval order, and let $U'$ be obtained from it by adding an interval to the right of the intervals of $U$.
The Dyck path $p(U')$ is obtained from $p(U)$ by adding a final maximal peak.
\end{lemma}
\begin{proof}
By Algorithm \ref{algo}, in going from $p(U)$ to $p(U')$, we insert a letter $\ell$ into $p(U)$ such that the letter before is either $\ell$ or $\ell-1$. The letters before are at most $\ell$ and the letters
after are at most $\ell-1$. Adding a letter $\ell$ in this way
adds a final maximal peak, since we add the rightmost maximal letter $\ell$ to the area sequence $p(U)$.
\end{proof}
\input{Figure/figexpan}
\begin{lemma}\label{deb}
Let $U\in \mathcal U_n$, and let $U'$ be obtained from it by adding an interval to the right of the intervals of $U$.
The Dyck path
$\zeta(p(U'))$ is obtained from $\zeta(p(U))$ by adding a final peak in position $(n-r,n+1)$, where $r$ is the sum of the number
of occurrences of the letter $\ell$ in $p(U)$ and of the number of occurrences of the letter $\ell-1$ appearing after the position of the added letter $\ell$ in $p(U')$.
\end{lemma}
\begin{proof}
By Lemma \ref{add}, we know that $p(U')$ is obtained from $p(U)$ by adding a final maximal peak. The height of this peak is $\ell+1$ since we added $\ell$ in the area sequence $p(U)$ to obtain this peak. The fact that this is the final maximal peak means that the peaks after are of heights smaller than $\ell+1$.
Let us recall that the map $\zeta$ builds a Dyck path by scanning along the lines of slope $1$ from bottom left to top right, by order of increasing height. Adding the peak of height $\ell+1$ does not change what
we read on any height below $\ell$, nor what we read on height $\ell$ before
we reach the right endpoint of the right step of the added peak. When we reach this right endpoint, in $\zeta(p(U'))$ we put an up step `$a$'. Then on the same height $\ell$ we read top endpoints of peaks of height $\ell$ appearing after the added peak (if any). Such peaks correspond to the letters $\ell-1$ appearing after the position of the added letter $\ell$ in $p(U')$, which will give right steps `$b$' in $\zeta(p(U'))$. Finally, we read the line of height $\ell+1$, putting a right step `$b$' in $\zeta(p(U'))$ for all maximal peaks of $p(U')$, which correspond to the occurrences of the letter $\ell$ in $p(U')$. Thus after the last up step `$a$' of $\zeta(p(U'))$ we have exactly $r+1$ right steps `$b$', so the added peak is in position $(n-r,n+1)$.
\end{proof}
\begin{lemma}
\label{lemmea}
The Dyck path $a(U')$ is obtained from $a(U)$ by adding a final peak in position $ (n-s,n+1)$, where $s$ is the number of intervals in $U'$ not comparable to
the rightmost interval $I_{n+1}$.
\end{lemma}
\begin{proof}
Let us recall that by definition $a(U)$ is the Dyck path whose area set is given by the boxes $(i,j)$ such that $i<j$ and $i\not\prec j$. We then add $I_{n+1}$, which is not comparable to the last $s$ intervals in $U$ (those of level $\ell$ together with a subset of those of level $\ell-1$). Thus in $U'$ the only new incomparabilities are $i\not\prec n+1$ for $i\in\{n-s+1,\dots,n\}$, giving the corresponding boxes $(i,n+1)$ in the area set. This proves the lemma.
\end{proof}
We can now prove the main result of the paper.
\begin{Theorem}
\label{theoremConj}
The maps $\zeta \circ p$ and $a$ coincide.
\end{Theorem}
\begin{proof}
We proceed by induction. We know that $a(U)=\zeta \circ p(U)$ holds for the unique $U\in \mathcal U_1$; thus the base case is verified.
Let $n\geq 1$. Suppose that $a(U)=\zeta \circ p(U)$ is true for all $U\in \mathcal U_n$. Let $U'\in \mathcal U_{n+1}$. There exists a unit interval order poset $U\in \mathcal U_n$ such that $U'$ is obtained from $U$ by adding an $(n+1)$-st interval $I_{n+1}$ of level $\ell$ to the right of those
of $U$.
We now establish that the number $r$ in Lemma \ref{deb} is equal to the number $s$ in Lemma \ref{lemmea}. Indeed, in $U'$ the intervals not comparable to $I_{n+1}$ are all the intervals of level $\ell$ in $U$ and a subset of those of level $\ell-1$. The latter are exactly those which correspond to the occurrences of the letter $\ell-1$ appearing after the position of the added $\ell$ in $p(U')$.
Then using Lemma \ref{deb} and Lemma \ref{lemmea}, and the induction hypothesis that gives $a(U) = \zeta(p(U))$, we obtain that $a(U') = \zeta(p(U'))$, thereby finishing the induction and the proof of the theorem.
\end{proof}
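Theorem \ref{theoremConj} also lends itself to a computational sanity check. The sketch below is entirely our own scaffolding: $U$ is encoded by sorted left endpoints (so $i\prec j$ iff $x_i+1<x_j$), $p(U)$ is produced by the insertion algorithm, and the area sequence of $a(U)$ is read off row by row, its $j$-th entry counting the $i<j$ with $I_i$ and $I_j$ overlapping.

```python
def precedes(x, i, j):                  # I_i strictly to the left of I_j
    return x[i] + 1 < x[j]

def q_listing(x):                       # insertion algorithm; area sequence of p(U)
    n = len(x)
    ell = [0] * n
    for j in range(n):
        ell[j] = max([ell[i] + 1 for i in range(j) if precedes(x, i, j)],
                     default=0)
    q = []
    for i in range(n):
        L, pos = ell[i], 0
        C = sum(1 for k in range(i) if ell[k] == L - 1 and precedes(x, k, i))
        if L > 0:
            seen = 0
            for p, letter in enumerate(q):
                if letter == L - 1:
                    seen += 1
                    if seen == C:
                        pos = p + 1
                        break
        while pos < len(q) and q[pos] == L:
            pos += 1
        q.insert(pos, L)
    return q

def word_from_area(a):                  # Dyck word over {'U','R'} with area sequence a
    w, x = [], 0
    for i, ai in enumerate(a):
        r = i - ai                      # right steps preceding the (i+1)-st up step
        w += ['R'] * (r - x) + ['U']
        x = r
    return ''.join(w + ['R'] * (len(a) - x))

def area_from_word(w):                  # inverse of word_from_area
    a, ups, rights = [], 0, 0
    for s in w:
        if s == 'U':
            a.append(ups - rights)
            ups += 1
        else:
            rights += 1
    return a

def zeta(D):                            # the zeta map of the previous section
    pts, x, y = [], 0, 0
    for s in D:
        if s == 'U':
            y += 1
        else:
            x += 1
        pts.append((y - x, x, 'a' if s == 'U' else 'b'))
    pts.sort()                          # diagonals by height, left to right
    return ''.join('U' if lab == 'b' else 'R' for _, _, lab in pts)

def theorem_holds(x):                   # area sequence of zeta(p(U)) vs that of a(U)
    lhs = area_from_word(zeta(word_from_area(q_listing(x))))
    rhs = [sum(1 for i in range(j) if not precedes(x, i, j))
           for j in range(len(x))]
    return lhs == rhs
```

For instance, with left endpoints $(0, 0.5, 1.6, 1.7, 3.0)$ both sides give the area sequence $(0,1,0,1,0)$.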
\section{Graded reverse lexicographic minimality of $\tilde p(U)$} \label{grevlex}
We define the graded reverse lexicographic order on finite sequences of
$n$ non-negative integers as follows. We say that $(a_1,\dots,a_n)<(b_1,\dots,b_n)$ if $\sum_i a_i<\sum_i b_i$, or in the case that the sums are equal if $a_j>b_j$, where $j$ is the index of the first position where the two strings differ.
(Note that the inequality $a_j>b_j$ is reversed from what one might expect!
The expected inequality, $a_j<b_j$, defines graded lexicographic order.)
\begin{lemma} \label{grevlex-lemma} Let $U$ be a unit interval order.
The graded reverse lexicographically minimal part listing for $U$ is the area
sequence
of a Dyck path. \end{lemma}
\begin{proof} Let $(a_1,\dots, a_n)$ be the graded reverse lexicographically minimal part
listing for $U$. Suppose, seeking a contradiction, that $a_i>a_{i-1}+1$.
Consider the part listing in which $a_i$ and $a_{i-1}$ have swapped
positions. This part listing defines an isomorphic poset, and the new
part listing is lower in graded reverse lexicographic order, so we would
have preferred it. This is a contradiction.
Now suppose that $a_1>0$. Consider the part listing in which $a_1$ is
removed and $a_1-1$ is inserted at the end. This part listing produces
an isomorphic poset, and since its sum is lower, it is lower in graded
reverse lexicographic order, so we would have preferred it. This, too,
is a contradiction.
It follows that the graded reverse lexicographically minimal part listing
for $U$ is the area sequence of a Dyck path.
\end{proof}
\begin{corollary} For $U$ a unit interval order, the part listing $\tilde p(U)$ is the graded reverse lexicographically minimal part listing for $U$.
\end{corollary}
\begin{proof} By Lemma \ref{grevlex-lemma}, the graded reverse lexicographically
minimal part listing for $U$ is the area sequence of some Dyck path.
We know that $\tilde p$ defines a bijection from unit interval orders to area
sequences of Dyck paths. Thus, there is at most one area sequence of a Dyck
path which, when interpreted as a part listing, yields a poset isomorphic
to $U$. The
graded reverse lexicographically minimal part listing for $U$ is
therefore $\tilde p(U)$.
\end{proof}
\subsection*{Acknowledgements}
The authors would like to thank Mathieu Guay-Paquet, Alejandro Morales,
and Viviane Pons
for helpful discussions.
The authors benefitted from the support of the NSERC Discovery Grants program and the Canada Research Chairs program. F.G. was supported by an NSERC USRA.
This paper is an attempt to begin the construction of an abstract theory of sensor management, in the hope that it will help to provide both a theoretical underpinning for the solution of practical problems and insights for future work. A key component of sensor management is the amount of information a sensor in a given configuration can gain from a measurement, and how that information gain changes as the configuration does. In this vein, it is interesting to observe how information theoretic \emph{surrogates} have been used in a range of applications as objective functions for sensor management; see, for instance, \cite{bell1,donoho1,sameh1}. Our aim here is to abstract from various papers including these, those by Kershaw and Evans \cite{kershaw} as well as others, the mathematical principles required for this theory. Our approach is set within the mathematical context of differential geometry.
The problem of estimation is expressed in terms of a likelihood; that is, a probability density $p(x|\theta)$, where $x$ denotes the measurement and $\theta$ the parameter to be estimated. It is well known \citep{coverthomas} that the Fisher Information associated with this likelihood provides a measure of the information gained from the measurement.
While the concepts discussed here are, in some sense, generic and can be applied to any sensor system that has the capability to modify its characteristics, for simplicity and to keep in mind a motivating example, we focus on a particular problem: that of localization of a \emph{target}. Of course, a sensor system is itself just a (more complex) aggregate sensor, but it will be convenient, for the particular problems we will discuss, to assume a discrete collection of disparate (at least in terms of location) sensors that together provide measurements of aspects of the location of the target. This distributed collection of sensors, each drawing measurements that provide partial information about the location of a target, using known likelihoods, defines a particular \emph{sensor configuration state}. As the individual sensors move, they change their sensing characteristics and thereby the collective Fisher Information associated with estimation of target location. The Fisher information matrix defines a metric, the Fisher-Rao metric, over the physical space where the target resides \citep{fisher1,fisher2}, or, more generally, a metric over the parameter space in which the estimation process takes place, and this metric is a function of the location of the sensors. This observation permits analysis of the information content of the system, as a function of sensor parameters, in the framework of differential geometry (``information geometry'') \citep{amari1,amari2}. A considerable literature is dedicated to the problem of optimizing the configuration so as to maximize information retrieval (see Sec. II. A) \citep{moran121,moran122,optimpaper,bell1}. The mathematical machinery of information geometry has led to advances in several signal processing problems, such as blind source separation \cite{blind}, gravitational wave parameter estimation \citep{app2}, and dimensionality reduction for image retrieval \citep{app3} or shape analysis \citep{app4}.
In sensor management/adaptivity applications, the performance of the sensor configuration (in terms of some function of the Fisher Information) becomes a cost associated with finding the optimal sensor configuration, and tuning the metric by changing the configuration is important. Literally hundreds of papers, going back to the seminal work of \cite{kershaw} and perhaps beyond, use the Fisher Information as a measure of sensor performance. In this context, parametrized families of Fisher-Rao metrics arise (\emph{e.g.} \cite{fishfam1,fishfam2}). The sensor management problem then
becomes that of choosing an optimal metric (based on the value of the estimated parameter), from among a family of such, to permit the acquisition of the maximum amount of information about that parameter.
As we have stated, the focal and, we hope, clarifying example of this paper is that of estimating the location of a target using measurements from mobile sensors (\emph{cf.} \cite{famdec1,famdec2}). The information content of the system depends both on the location of the target and on the spatial locations of the sensors, because the covariance of measurements is sensitive to the distances and angles between the sensors and the target. As the sensors move in space, the associated likelihoods vary, as do the resulting Fisher matrices describing the information content of the system for every possible sensor arrangement. It is this interaction between sensors and target that this paper sets out to elucidate in the context of information geometry.
The collection of all Riemannian metrics on a Riemannian manifold itself admits the structure of an infinite-dimensional Riemannian manifold \citep{gil1,gil2}. Of interest to us is only the subset of Riemannian metrics corresponding to Fisher informations of sensor configurations, and this allows us to restrict attention to a finite-dimensional sub-manifold of the manifold of metrics, called the \emph{sensor manifold} \citep{moran121,moran122}. In particular, a continuous adjustment of the sensor configuration, say by moving one of the individual sensors, results in a continuous change in the associated Fisher metric and so a movement in the sensor manifold.
Though computationally difficult, the idea of regarding the Fisher metric as a measure of performance of a given sensor configuration, and then understanding variation in sensor configuration in terms of the manifold of such metrics, is powerful. It permits the study of optimal target trajectories, as discussed here, which minimize the information passed to the sensors, and, as will be discussed in a subsequent paper, of optimal sensor trajectories, which maximize the information gleaned about the target. In particular, we remark that the metric on the space of Riemannian metrics that appears naturally in a mathematical context in \cite{gil1,gil2} also has a natural interpretation in a statistical context.
Our aims here are to further develop the information geometry view of sensor configuration begun in \cite{moran121,moran122}. While the systems discussed are simple and narrow in focus, already they point to concepts of information collection that appear to be new. Specifically, we set up the target location problem in an information geometric context and we show that the optimal (in a manner to be made precise in Sec. II) sensor trajectories, in a physical sense, are determined by solving the geodesic equations on the sensor manifold (Sec. III). Various properties of geodesics on this space are derived, and the mathematical machinery is demonstrated using concrete physical examples (Sec. IV).
\section{The Information in Sensor Measurements}
\label{sec:inform-cont-sens}
\begin{figure*}[h]
\begin{center}
\includegraphics[width=0.8\columnwidth]{sensorconfig-1}
\end{center}
\caption{Diagrammatic representation of the sensor model; sensors are at $\lambda_i\in M$ taking measurements of a target at $\boldsymbol{\theta}\in M$. A measure of distance between different sensor configurations, physically corresponding to a change in information content, is obtained through a suitable restriction of the metric $G_g$ \eqref{eq:gilmed} to the configuration manifold $\mathcal{M}(\Gamma) \subset \mathcal{M}$, where $\mathcal{M}$ is the space of all Riemannian metrics on $M$. $\mathcal{M}$ is almost certainly not topologically spherical; it is merely drawn here as such for simplicity. \label{geomsetup}}
\end{figure*}
In general, sensor measurements, as considered in this paper, can be formulated as follows. Suppose we have, in a fixed manifold $M$, a collection of $N$ sensors located at $\lambda_i$ $(i = 1, \ldots , N )$. For instance, the manifold may be $\mathbb R^3$ and location may just mean that in the usual sense in Euclidean space. The measurements from these sensors are used to estimate the location of a target $\boldsymbol{\theta}$ also in $M$ (see the left of Figure~\ref{geomsetup}). Each sensor draws measurements ${x}_i$ from a distribution with probability density function (PDF) $p_i({x}_i|\boldsymbol{\theta})$. A measurement $\mathbf{x}=\{x_i\}_{i=1}^N$ is the collected set of individual measurements from each of the sensors with likelihood
\begin{equation}
\label{eq:loglik}
p(\mathbf{x}|\boldsymbol{\theta}) = \prod_{i=1}^N p_i({x}_i|\boldsymbol{\theta}).
\end{equation}
Measurements here are assumed independent\footnote{While this assumption is probably not necessary, it allows one to define the aggregate likelihood \eqref{eq:loglik} as a simple product over the individual likelihoods, which renders the problem computationally more tractable.} between sensors and over time.
Given a measurement $\mathbf{x}$ of a target at $\boldsymbol{\theta}$, the likelihood that the same measurement could be obtained from a target at $\boldsymbol{\theta}'$, $L(\boldsymbol{\theta},\boldsymbol{\theta}')$, is given by the log odds expression
\[L(\boldsymbol{\theta},\boldsymbol{\theta}') = \log\frac{p(\mathbf{x}|\boldsymbol{\theta})}{p(\mathbf{x}|\boldsymbol{\theta}')},\]
and the average over all measurements is, by definition, the Kullback-Leibler divergence \citep{kl}, $D(\boldsymbol{\theta}||\boldsymbol{\theta}')$:
\begin{equation}
\label{eq:6}
D(\boldsymbol{\theta}||\boldsymbol{\theta}') = \mathbb{E}_{\mathbf{x}}\left[ L(\boldsymbol{\theta},\boldsymbol{\theta}')\right] = \int p(\mathbf{x}|\boldsymbol{\theta}) \log\frac{p(\mathbf{x}|\boldsymbol{\theta})}{p(\mathbf{x}|\boldsymbol{\theta}')} \,d\mathbf{x}.
\end{equation}
This would, ostensibly, be a good measure of the information in the sensor measurements, as it is non-negative and $D(\boldsymbol{\theta}||\boldsymbol{\theta})=0$, but it lacks desirable features of a metric: it is not symmetric and does not satisfy the triangle inequality. We recall that the Kullback-Leibler divergence is related to mutual information, and refer the reader to Section 17.1 of \cite{coverthomas} for a discussion of this connection. It is widely used as a measure of the difference in information available about the target between the locations $\boldsymbol{\theta}$ and $\boldsymbol{\theta}'$.
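For concreteness, the asymmetry of \eqref{eq:6} is already visible for one-dimensional Gaussian likelihoods with unequal variances, using the standard closed form for the divergence between two normal densities (this example is ours, not the paper's):

```python
from math import log

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL divergence D( N(m0, s0^2) || N(m1, s1^2) )."""
    return log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5
```

Here `kl_gauss(0, 1, 0, 2)` $\approx 0.318$ while `kl_gauss(0, 2, 0, 1)` $\approx 0.807$, so $D(\boldsymbol{\theta}||\boldsymbol{\theta}')\neq D(\boldsymbol{\theta}'||\boldsymbol{\theta})$ in general, even though both vanish when the densities coincide.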
In the limit as $\boldsymbol{\theta}'\to\boldsymbol{\theta}$, the first non-zero term in the series expansion of $D$ is of second order, viz.
\begin{equation}
\label{eq:7}
D(\boldsymbol{\theta}||\boldsymbol{\theta}') = \frac{1}{2}(\boldsymbol{\theta}-\boldsymbol{\theta}')^T g\, (\boldsymbol{\theta}-\boldsymbol{\theta}') + O\left(\|\boldsymbol{\theta}-\boldsymbol{\theta}'\|^3\right),
\end{equation}
where $g$ is an $n\times n$ symmetric matrix, and $n$ is the dimension of the manifold $M$.
This location-dependent matrix defines a metric over $M$, the \emph{Fisher Information Metric} \citep{fisher1,fisher2}. It can also be calculated, under mild conditions, as the expectation of the tensor product of the gradients of the log-likelihood $\ell = \log p(\mathbf{x}|\boldsymbol{\theta})$ as
\begin{equation} \label{eq:fishdef}
g = \mathbb{E}_{\mathbf{x}|\boldsymbol{\theta}} \left[ d_{\boldsymbol{\theta}} \ell \otimes d_{\boldsymbol{\theta}} \ell \right].
\end{equation}
Since Fisher Information is additive over independent measurements, the Fisher Information Metric provides a measure of the instantaneous change in information the sensors can obtain about the target. In this paper, we adopt a relatively simplistic view that the continuous case of measurements is a limit of measurements discretized over time. Because sensor measurements depend on the relative locations of the sensors and target, this incremental change depends on the direction the target is moving; the Fisher metric \eqref{eq:fishdef} can naturally be expressed in coordinates that represent the sensor locations (see Sec. IV), but also depends on parameters that represent target location, which may be functions of time in a dynamical situation. Once the Fisher metric \eqref{eq:fishdef} has been evaluated, one can proceed to optimize it in an appropriate manner.
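As a concrete sketch (our model choice, not a prescription of the paper), consider range-only sensors with additive Gaussian noise, $x_i\sim N(\|\boldsymbol{\theta}-\lambda_i\|,\sigma^2)$. Each sensor then contributes $\sigma^{-2}u_iu_i^T$ to \eqref{eq:fishdef}, where $u_i$ is the unit vector from sensor $i$ to the target, and by independence the contributions add:

```python
import numpy as np

def fisher_metric(theta, sensors, sigma=1.0):
    """Fisher information metric at target location theta for independent
    range-only sensors with Gaussian noise of standard deviation sigma."""
    theta = np.asarray(theta, float)
    g = np.zeros((len(theta), len(theta)))
    for lam in sensors:
        diff = theta - np.asarray(lam, float)
        u = diff / np.linalg.norm(diff)      # unit vector sensor -> target
        g += np.outer(u, u) / sigma**2       # information adds over sensors
    return g
```

Two sensors viewing the target along perpendicular bearings yield $g=\sigma^{-2}I$, the best-conditioned two-sensor geometry for this model.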
\subsection{D-Optimality}
Because the information of the sensor system described in Section~\ref{sec:inform-cont-sens} is a matrix-valued function, it is not obvious what it means to maximize the `total information' with respect to the sensor parameters. We require a definition of `optimal' in an information theoretic context. Several different optimization criteria exist (\emph{e.g.} \cite{dopt1,dopt2}), defined by constructing scalar invariants from the matrix entries of $g$, and maximizing those functions in the usual way.
We adopt the notion of \emph{D-optimality} in this paper; we consider the maximization of the determinant of \eqref{eq:fishdef}. Equivalently, D-optimality maximizes the differential Shannon entropy of the system with respect to the sensor parameters \citep{seb97}, and minimizes the volume of the elliptical confidence regions for the sensors' estimate of the location of the target $\boldsymbol{\theta}$ \citep{dopt3}.
A complication in applying D-optimality (or any other) criterion to this problem is that the sensor locations and distributions are not fixed. Conventionally, measurements are drawn from sensors with \emph{fixed} properties, with a view to estimating a parameter $\boldsymbol{\theta}$. Permitting sensors to move throughout $M$ produces an infinite family of sensor configurations, and hence Fisher-Rao metrics \eqref{eq:fishdef}, parametrized by the locations of the sensors. One aim of this structure is to move sensors to locations that serve to maximize information content, given some prior distribution for a particular $\boldsymbol{\theta} \in M$. This necessitates a tool to measure the difference between information provided by members of a family of Fisher-Rao metrics; this is explored in Section~\ref{sec:conf-manif}.
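To illustrate D-optimality, consider (an assumption of ours) two range-only sensors with Gaussian noise whose bearings to the target differ by an angle $\phi$. Each contributes a rank-one term $uu^T/\sigma^2$ to the Fisher matrix, giving $\det g=\sin^2\phi/\sigma^4$, which is maximized at perpendicular bearings:

```python
import numpy as np

def dopt_two_sensors(phi, sigma=1.0):
    """det of the Fisher matrix for two range-only bearings separated by phi."""
    u1 = np.array([1.0, 0.0])
    u2 = np.array([np.cos(phi), np.sin(phi)])
    g = (np.outer(u1, u1) + np.outer(u2, u2)) / sigma**2
    return float(np.linalg.det(g))
```

A quick scan over $\phi$ confirms the D-optimal separation is $\pi/2$; collinear bearings ($\phi=0$ or $\pi$) give a singular $g$ and no localization in the transverse direction.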
\subsection{Geodesics on the Sensor Manifold}
\label{sec:geod-sens-mani}
We now consider the case where the target is in motion, so that $\boldsymbol{\theta}$ varies along a path $\gamma(t)\subset M$. The instantaneous information gain by the sensor(s) at time $t$ is then $ g\left(\gamma'(t),\gamma'(t)\right)$, where $g$ is the Fisher Information Metric \eqref{eq:fishdef}. This observation is based on the assumption that the measurements are all independent. The total information $I$ gained along $\gamma$ is
\begin{equation}
\label{eq:3}
I(T) = \int_0^T g(\gamma'(t),\gamma'(t)) \,dt,
\end{equation}
which is the equivalent of the energy functional in differential geometry [e.g. Chapter 9 of \cite{docarmo}], and this has the same extremal paths as $l_{g}(\gamma)$, the arc-length of the path $\gamma$,
\begin{equation}
\label{eq:2}
l_{g}(\gamma) = \int_0^T \sqrt{g\left(\gamma'(t),\gamma'(t)\right)}\,dt.
\end{equation}
Paths with extremal values of this length are \emph{geodesics} and these can be interpreted as the evasive action that can be taken by the target to minimize amount of information it gives to the sensors.
\subsection{Kinematic Conditions on Information}
\label{sec:kinconst}
While the curves that are extrema of the Information functional and of arc-length are the same as sets, a geodesic only minimizes the information functional if traversed at speed
\begin{equation}
\label{eq:speed}
dl_{g}/dt = +\sqrt{g(\gamma'(t),\gamma'(t))}.
\end{equation}
In differential geometric terms, this is equivalent to requiring the arc-length parametrization of the geodesic to fulfill the energy condition.
In order to minimize information about its
location, the target should move along a geodesic of $g$ at exactly the speed
$dl_{g}/dt$ \eqref{eq:speed}. This direct kinematic condition on information is unusual and difficult to reconcile with our current view of information theory.
While aspects of this speed constraint are still unclear to us, an analogy that may be useful is to regard the target as moving through a ``tensorial information fluid''. Moving more slowly than the fluid causes ``pressure'' to build up behind the target, and information (energy) must be expended to maintain the slower speed; moving faster also requires more information, to push through the slower-moving fluid.
In fluid dynamics, the energy expended in moving through a fluid is proportional to the square of the difference in speed between the fluid and the object. Here, the local energy is proportional to the difference $\delta \mathbf v$ between the actual velocity and the velocity prescribed by the geodesic, that is, the velocity that minimizes the energy functional, with the metric mediating the relationship between energy and relative velocity:
\begin{equation*}
E\propto g(\delta \mathbf v,\delta \mathbf v)
\end{equation*}
In particular, the scalar curvature, which depends on $G$, influences the energy and hence the information flow. We will explore this issue further in a future publication.
\section{The Information of Sensor Configurations}
\label{sec:conf-manif}
A sensor configuration is a set $\Gamma=\{\lambda_i\}_{i=1}^N$ of sensor parameters. The Fisher-Rao metric $g$ can be viewed as a function of $\Gamma$ as well as $\theta$, the location of the target. To calculate the likelihood that a measurement came from one configuration $\Gamma_0$ over another $\Gamma_1$ requires the calculation of $p(x|\Gamma_0,\theta)/p(x|\Gamma_1,\theta)$, which is difficult as the value of $\theta$ is not known exactly. Measurements can be used to construct an estimate $\hat\theta$, however, the distribution of this estimate is hard to quantify and even harder to calculate. Instead, here, the maximum entropy distribution is used. This is normally distributed with mean $\hat\theta$, and covariance $g^{-1}(\hat\theta)$, the inverse of the Fisher information metric at the estimated target location.
The information gain due to the sensor configuration $\Gamma$ is now $D(p(x|\Gamma,\hat{\theta})||1)$ because there was no prior information about the location (the uniform distribution\footnote{Note that the uniform distribution is, in general, an improper prior in the Bayesian sense unless the manifold $M$ is of finite volume. It may be necessary, therefore, to restrict attention to some (compact) submanifold $\Omega \subset M$ for $D(p(x|\Gamma,\hat{\theta})||1)$ to be well-defined; see also the discussion below equation \eqref{eq:8}.}) before the sensors were configured compared with the maximum entropy distribution $p$ after. Evaluating this gives
\begin{equation}
\label{eq:1}
D(p||1) = \log\left((2\pi e)^n\det g^{-1}(\Gamma,\hat{\theta})\right)
\end{equation}
The Fisher Information metric $G$ for this divergence can be calculated from
\begin{equation}
\label{eq:8}
G(h,k) = {\mathbb E}\left[d_g^2D(p||1)\right] = \int_M tr(g^{-1}hg^{-1}k)\operatorname{vol}(g)\,d\mu
\end{equation}
where $h$ and $k$ are tangent vectors to the space of metrics. The integral defining \eqref{eq:8} may not converge for non-compact $M$, so restriction to a compact submanifold $\Omega$ of $M$ is assumed throughout as necessary (\emph{c.f.} Figure~\ref{geomsetup}).
\subsection{The Manifold of Riemannian Metrics}
\label{sec:manif-riem-metr}
The set of all Riemannian metrics over a manifold $M$ can itself be imbued with the structure of an infinite-dimensional Riemannian manifold \citep{gil1,gil2}, which we call $\mathcal{M}$. Points of $\mathcal{M}$ are Riemannian metrics on $M$; i.e. each point $G \in \mathcal{M}$ bijectively corresponds to a positive-definite, symmetric $(0,2)$-tensor in the space $S^{2}_{+}T^{\star}M$. Under reasonable assumptions, an $L^{2}$ metric on $\mathcal{M}$ \citep{clarkephd,gil1} may be defined as:
\begin{equation} \label{eq:gilmed}
G(h,k) = \int_M tr(g^{-1}hg^{-1}k)\operatorname{vol}(g)\,d\mu,
\end{equation}
which should be compared to \eqref{eq:8}.
It should be noted that the points of the manifold $\mathcal{M}$ comprise \emph{all} of the metrics that can be put on $M$, most of which are irrelevant for our physical sensor management problem. We restrict consideration to a sub-manifold of $\mathcal{M}$ consisting only of those Riemannian metrics that are members of the family of Fisher information matrices \eqref{eq:fishdef} corresponding to feasible sensor configurations. This particular sub-manifold is called the `sensor' or `configuration' manifold \citep{moran121,moran122} and is denoted by $\mathcal{M}({\Gamma})$; the objects $h$ and $k$ are now elements of the finite-dimensional tangent space $T \mathcal{M}({\Gamma})$. The dimension of $\mathcal{M}({\Gamma})$ is $N \times \dim(M)$, since each point of $\mathcal{M}({\Gamma})$ is uniquely described by the locations of the $N$ sensors, each of which requires $\dim(M)$ numbers to denote its coordinates. A visual description of these spaces is given in Figure~\ref{geomsetup}. For all cases considered in this paper, the integral defined in \eqref{eq:gilmed} is well-defined and converges (see however the discussion in \cite{famdec1}).
For the purposes of computation, it is convenient to have an expression for the metric tensor components of \eqref{eq:gilmed} in some local coordinate system. In particular, in a given coordinate basis $z^{i}$ over $\mathcal{M}({\Gamma})$ (not to be confused with the coordinates on $\Omega$; see Sec. IV), the metric \eqref{eq:gilmed} reads
\begin{equation} \label{eq:metten}
G(h,k) = \int_{\Omega} g^{np} g^{\ell m} h_{mn} k_{\ell p} \operatorname{vol}(g),
\end{equation}
where $h$ and $k$ are tangent vectors in $T \mathcal{M}({\Gamma})$, which is spanned in coordinates by
\begin{equation}
\label{eq:9}
T \mathcal{M}(\Gamma) = \text{span}\left\{ \frac{\partial}{\partial z^i} g_{mn}\right\}_{i=1}^{\dim \mathcal{M}({\Gamma})}.
\end{equation}
From the explicit construction \eqref{eq:metten}, all curvature quantities of $\mathcal{M}({\Gamma})$, such as the Riemann tensor and Christoffel symbols, can be computed.
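For computations it can also be helpful to evaluate \eqref{eq:metten} numerically. The sketch below is our illustration only (the interface and names are ours, not part of the construction above): it approximates the components $G_{ab}$ for a metric family supplied as a function of the configuration coordinates $z$, using central finite differences for $\partial_a g$ and a midpoint rule over $\Omega$, with $\operatorname{vol}(g)=\sqrt{\det g}\,d\theta$.

```python
import math

def config_metric(metric_of_z, z, omega=(0.0, 1.0, 0.0, 1.0), n=20, h=1e-5):
    """Approximate G_ab = int_Omega tr(g^{-1} d_a g g^{-1} d_b g) vol(g) dtheta.

    metric_of_z(z) returns a function theta -> 2x2 metric g(theta) (nested lists).
    Derivatives d_a g are central finite differences in the configuration
    coordinates z; vol(g) = sqrt(det g) times the cell area.
    """
    dim = len(z)
    x0, x1, y0, y1 = omega
    hx, hy = (x1 - x0) / n, (y1 - y0) / n

    def dg(a, theta):  # d g / d z^a at fixed theta, by central differences
        zp, zm = list(z), list(z)
        zp[a] += h
        zm[a] -= h
        gp, gm = metric_of_z(zp)(theta), metric_of_z(zm)(theta)
        return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    G = [[0.0] * dim for _ in range(dim)]
    for i in range(n):
        for j in range(n):
            theta = (x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)  # midpoint rule
            g = metric_of_z(z)(theta)
            det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
            ginv = [[g[1][1] / det, -g[0][1] / det],
                    [-g[1][0] / det, g[0][0] / det]]
            vol = math.sqrt(det) * hx * hy
            for a in range(dim):
                for b in range(dim):
                    M = mat_mul(mat_mul(ginv, dg(a, theta)),
                                mat_mul(ginv, dg(b, theta)))
                    G[a][b] += (M[0][0] + M[1][1]) * vol  # trace term
    return G
```

As a sanity check, for the toy one-parameter family $g(z)(\theta) = e^{z_1}\operatorname{Id}$ on $\Omega=[0,1]^2$ one has $g^{-1}\partial_1 g = \operatorname{Id}$, so $G_{11} = 2e^{z_1}$.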
\subsection{D-Optimal Configurations}
D-optimality in the context of the sensor manifold described above is discussed in this section. Suppose that the sensors are arranged in some arbitrary configuration $\Gamma_{0}$. The sensors now move in anticipation of target behaviour; a prior distribution is adopted to localize a target position $\boldsymbol{\theta}$. The sensors move continuously to a new configuration $\Gamma_{1}$, where $\Gamma_{1}$ is determined by maximizing the determinant of $G$, i.e. $\Gamma_{1}$ corresponds to the sensor locations for which $\det({G})$, computed from \eqref{eq:metten}, is maximized. The physical situation is depicted graphically in Figure \ref{evolvingsensor}. This process can naturally be extended to the case where real measurement data is used. In particular, as measurements are drawn, a (continuously updated) posterior distribution for $\boldsymbol{\theta}$ becomes available, and this can be used to update the Fisher metric (and hence the metric $G$) to define a sequence $\Gamma_{t}$ of optimal configurations; see Sec. V.
\begin{figure*}[h]
\includegraphics[width=\columnwidth]{sensorconfig-2}
\justifying
\caption{Graphical illustration of D-optimal sensor dynamics; a sensor configuration $\Gamma_0$ evolves to a new configuration $\Gamma_1$ by moving the $N$ sensors in $\Omega$-space to new positions that are determined by maximizing the determinant of the metric ${G}$, given by equation \eqref{eq:metten}, on the sensor manifold $\mathcal{M}({\Gamma})$. Each sensor $\lambda^{i}$ traverses a path $\gamma_{i}$ through $\Omega$-space to end up in the appropriate positions constituting $\Gamma_1$. As shown in Section \ref{sec:geod-conf-manif}, the paths $\gamma_{i}$ are entropy-minimizing if they are geodesic on the sensor manifold $\mathcal{M}({\Gamma})$. Note that the target is shown as stationary in this particular illustration. \label{evolvingsensor}
}
\end{figure*}
\subsection{Geodesics for the Configuration Manifold}
\label{sec:geod-conf-manif}
While D-optimality allows us to determine \emph{where} the sensors should move given some prior, it provides us with no guidance on which \emph{path} the sensors should traverse, as they move through $\Omega$, to reach their new, optimized positions.
A path $\Upsilon(t)$ from one configuration $\Gamma_0$ to another $\Gamma_1$ is a set of paths $\Gamma(t) = \left\{\gamma_i(t)\right\}_{i=1}^N$ for each sensor $S^i$ from location $\lambda_i^0$ to $\lambda_i^1$. Varying the sensor locations is equivalent to varying the metric $g(t) = g(\Gamma(t),\hat{\theta}(t))$ and the estimate of the target location $\hat{\theta}$. The information gain along $\Upsilon$ is then
\begin{equation}
\label{eq:10}
\int_\Upsilon G_{g(t)}\left(g'(t),g'(t)\right)\, dt,
\end{equation}
and the extremal paths are again the geodesics of the metric $G$. The speed constraint observed earlier in Section \ref{sec:inform-cont-sens}.\ref{sec:kinconst} applies here as well and is given by
\begin{equation}
\label{eq:11}
\frac{dl_G}{dt} = \sqrt{G_{g(t)}\left(g'(t),g'(t)\right)}.
\end{equation}
Again, this leads to the conclusion that there are kinematic constraints on the rate of change of sensor parameters that lead to the collection of the maximum amount of information.
\section{Configuration for Bearings-only Sensors}
To illustrate the mathematical machinery developed in the previous sections consider first the configuration metric for two bearings-only sensors. The physical space $\Omega$, where the sensors and the target reside, is chosen to be the square $[-10,10] \times [-10,10] \subset \mathbb{R}^{2}$. Effectively, we assume an uninformative (uniform) prior over the square $\Omega$.
The goal is to estimate a position $\boldsymbol{\theta}$ from bearings-only measurements taken by the sensors, as in previous work \citep{moran121,moran122}. We assume that measurements are drawn from a Von Mises distribution,
\begin{equation} \label{eq:mises}
M_n \sim p_n(\cdot|\theta)={\frac {e^{\kappa \cos\left[\cdot -\arg\left(\theta-{\lambda}_{n} \right) \right]}}{2\pi I_{0}(\kappa )}},
\end{equation}
where $\kappa$ is the concentration parameter, $I_r$ is the $r$th modified Bessel function of the first kind \citep{misesvar}, and ${\lambda}_{n} = (x_{n},y_{n})$ is the location of the $n$-th sensor in Cartesian coordinates. Note that, in reality, the parameter $\kappa$ will depend on location, since the signal-to-noise ratio will decrease as the sensors move farther away from the target. This is beyond the scope of this paper and will be addressed in future work.
For the choice \eqref{eq:mises}, the Fisher metric \eqref{eq:fishdef} can be computed easily, and has components
\begin{multline} \label{eq:bearingsfish}
{g} = \kappa \left(1-\frac{I_2(\kappa)}{2I_0(\kappa)}\right)\\ \sum_{i=1}^N \frac1{(x-x_i)^2+{(y-y_i)}^2}
\begin{pmatrix}
(y-y_i)^2 & -(x-x_i)(y-y_i)\\
-(x-x_i)(y-y_i) & (x-x_i)^2
\end{pmatrix}.
\end{multline}
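For the computationally inclined reader, the metric \eqref{eq:bearingsfish} is straightforward to evaluate numerically. The following sketch is our illustration only: the function names are ours, the Bessel prefactor is taken verbatim from the expression above, and $I_0$, $I_2$ are obtained from the standard integral representation $I_r(\kappa)=\frac1\pi\int_0^\pi e^{\kappa\cos t}\cos(rt)\,dt$.

```python
import math

def bessel_i(r, kappa, steps=2000):
    """I_r(kappa) = (1/pi) * int_0^pi exp(kappa cos t) cos(r t) dt (trapezoid rule)."""
    h = math.pi / steps
    s = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0
        s += w * math.exp(kappa * math.cos(t)) * math.cos(r * t)
    return s * h / math.pi

def fisher_metric(theta, sensors, kappa=1.0):
    """Fisher-Rao metric for N bearings-only sensors, per the displayed formula.

    theta: target location (x, y); sensors: list of (x_i, y_i) sensor locations.
    Returns a 2x2 nested list.
    """
    x, y = theta
    pref = kappa * (1.0 - bessel_i(2, kappa) / (2.0 * bessel_i(0, kappa)))
    g = [[0.0, 0.0], [0.0, 0.0]]
    for (xi, yi) in sensors:
        dx, dy = x - xi, y - yi
        r2 = dx * dx + dy * dy
        g[0][0] += dy * dy / r2
        g[0][1] += -dx * dy / r2
        g[1][0] += -dx * dy / r2
        g[1][1] += dx * dx / r2
    return [[pref * v for v in row] for row in g]
```

With the target at $(-1,-3)$ and sensors at $(-7,-6)$ and $(0,1)$ (the setup of Section~\ref{sec:visu-targ-geod}), the resulting matrix is symmetric with positive determinant; a single sensor yields a rank-one (degenerate) metric, reflecting the fact that one bearing alone cannot localize the target.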
\subsection{Target Geodesics}
\label{sec:visu-targ-geod}
A geodesic, $\boldsymbol{\gamma}(t)$, starting at $\mathbf{p}$ with initial direction $\mathbf{v}$ for a manifold with metric $g$ is the solution of the coupled second-order initial value problem for the components $\gamma^i(t)$:
\begin{equation}
\label{eq:geoivp}
\frac{d^2\gamma^i}{dt^2} = -\Gamma^i_{jk}\frac{d\gamma^j}{dt}\frac{d\gamma^k}{dt},\quad \boldsymbol{\gamma}(0)=\mathbf{p},\
\boldsymbol{\gamma}'(0)=\mathbf{v},
\end{equation}
where $\Gamma^i_{jk}$ are the Christoffel symbols for the metric $g$.
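A minimal numerical sketch of the initial value problem \eqref{eq:geoivp} is given below, under the standard sign convention $\ddot\gamma^i = -\Gamma^i_{jk}\dot\gamma^j\dot\gamma^k$. The metric is supplied as a function, the Christoffel symbols are formed from $\Gamma^i_{jk} = \tfrac12 g^{il}(\partial_j g_{lk} + \partial_k g_{lj} - \partial_l g_{jk})$ with central finite differences, and the integration is explicit Euler; the function names are ours and the sketch is illustrative rather than production-quality.

```python
def christoffel(metric, p, h=1e-5):
    """Christoffel symbols Gamma^i_{jk} of a 2-D metric by central differences.

    metric(p) returns the 2x2 metric (nested lists) at the point p.
    """
    def d(m, n, k):  # partial_k g_{mn} at p
        pp, pm = list(p), list(p)
        pp[k] += h
        pm[k] -= h
        return (metric(pp)[m][n] - metric(pm)[m][n]) / (2 * h)

    g = metric(p)
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                s = 0.0
                for l in range(2):
                    s += ginv[i][l] * (d(l, j, k) + d(l, k, j) - d(j, k, l))
                G[i][j][k] = 0.5 * s
    return G

def geodesic(metric, p, v, steps=200, dt=0.01):
    """Integrate d^2 gamma^i/dt^2 = -Gamma^i_{jk} gamma'^j gamma'^k (explicit Euler)."""
    p, v = list(p), list(v)
    path = [tuple(p)]
    for _ in range(steps):
        G = christoffel(metric, p)
        a = [-sum(G[i][j][k] * v[j] * v[k] for j in range(2) for k in range(2))
             for i in range(2)]
        for i in range(2):
            p[i] += dt * v[i]
            v[i] += dt * a[i]
        path.append(tuple(p))
    return path
```

For a constant (flat) metric all Christoffel symbols vanish and the integrator returns straight lines, which is a convenient check before plugging in the Fisher-Rao metric \eqref{eq:bearingsfish}.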
Figure \ref{fig:geodesic_eqn} shows directly integrated solutions to the geodesic equation \eqref{eq:geoivp} for a target at $(-1,-3)$ and sensors at $(-7,-6)$ and $(0,1)$. The differing paths correspond to the initial direction vector $(\cos\phi,\sin\phi)$ of the target, as $\phi$ varies from 0 to $2\pi$ radians in steps of $0.25$ radians.
An alternative way to numerically compute the geodesics connecting two points on the manifold $M$ is using the Fast Marching Method~(FMM) \cite{sethian1996fast, sethian1999fast,tsitsiklis1995efficient}. Since the Fisher-Rao Information Metric is a Riemannian metric on the Riemannian manifold $M$,
one can show that the geodesic distance map $u$, the geodesic distance from the initial point $\mathbf{p}$ to a point $\boldsymbol{\theta}$, satisfies the Eikonal equation
\begin{equation}\label{eq: eikonal}
|| \nabla u(\boldsymbol{\theta}) ||_{g_{\boldsymbol{\theta}}^{-1}} = 1
\end{equation}
with initial condition $u(\mathbf{p}) = 0$. By using a mesh over the parameter space, the (isotropic or weakly anisotropic) Eikonal equation (\ref{eq: eikonal}), can be solved numerically by Fast Marching. The geodesic is then extracted by integrating numerically the ordinary differential equation
\begin{equation}
\frac{d \boldsymbol{\gamma}(t)}{dt} = -\eta_t g_{\boldsymbol{\theta}(t)}^{-1} \nabla u(\boldsymbol{\theta}(t)),
\end{equation}
where $\eta_t > 0$ is the step size.
The computational complexity of FMM is $O(N \log N)$, where $N$ is the total number of mesh grid points. For Eikonal equations with strong anisotropy, a generalized version of FMM, Ordered Upwind Method~(OUM)~ \cite{sethian2003ordered} is preferred.
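A full Fast Marching implementation is beyond the scope of this excerpt, but the geodesic distance map can be approximated in a self-contained way by a Dijkstra-style sweep on an $8$-connected grid, with edge weights $\sqrt{g(v,v)}$ for each displacement $v$. This is a coarse stand-in for, not an implementation of, the FMM/OUM schemes cited above, and the names below are ours.

```python
import heapq
import math

def geodesic_distance_map(metric, origin, n=41, lo=-10.0, hi=10.0):
    """Dijkstra approximation to the distance map u solving ||grad u||_{g^{-1}} = 1.

    origin: a grid index (i, j) with u(origin) = 0;
    metric(point): the 2x2 metric g at a point of Omega (nested lists).
    Returns a dict mapping grid indices to approximate geodesic distances.
    """
    hstep = (hi - lo) / (n - 1)

    def pt(i, j):  # grid (possibly half-integer) indices -> coordinates
        return (lo + i * hstep, lo + j * hstep)

    dist = {origin: 0.0}
    pq = [(0.0, origin)]
    moves = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist.get((i, j), float("inf")):
            continue  # stale queue entry
        for di, dj in moves:
            ni, nj = i + di, j + dj
            if not (0 <= ni < n and 0 <= nj < n):
                continue
            vx, vy = di * hstep, dj * hstep              # displacement vector
            g = metric(pt(i + 0.5 * di, j + 0.5 * dj))   # metric at edge midpoint
            w = math.sqrt(g[0][0] * vx * vx + 2.0 * g[0][1] * vx * vy
                          + g[1][1] * vy * vy)           # edge length sqrt(g(v, v))
            nd = d + w
            if nd < dist.get((ni, nj), float("inf")):
                dist[(ni, nj)] = nd
                heapq.heappush(pq, (nd, (ni, nj)))
    return dist
```

Unlike true Fast Marching, this graph approximation overestimates distances in directions not aligned with the grid stencil, but it reproduces the qualitative shape of Figures~\ref{fig:geodesic_fm} and \ref{fig:cgeodesic_fm} when the Fisher metric is plugged in.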
Compare Figure~\ref{fig:geodesic_eqn} with Figure~\ref{fig:geodesic_fm} which uses a Fast Marching algorithm to calculate the geodesic distance from the same point.
Figure~\ref{fig:geodesic_speed} shows the speed required along a geodesic travelling in the direction of the vector $(1,-1)$. In the case of these bearings-only sensors, the trade-off over speed is between time-on-target and the rate of change of the relative angle between the sensors and the target. Travelling slowly gives the sensors more time to integrate measurements, even though the bearing changes only slowly, and results in more accurate position estimates. Conversely, moving faster than the geodesic speed produces a larger change in the measured angles but allows less time for measurements, again resulting in more accurate estimates. Only at the geodesic speed is the balance reached and the minimum information criterion achieved.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{target_geodesics}
\caption{Solutions to the geodesic equation on $\Omega = [-10,10]\times [-10,10]$ for a target starting at $(-1,-3)$ and sensors at $(-7,-6)$ and $(0,1)$. The differing paths correspond to the initial direction vector $(\cos\phi,\sin\phi)$ of the target, as $\phi$ varies from 0 to $2\pi$ radians in steps of 0.25 radians.}
\label{fig:geodesic_eqn}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{BOSgeodist}
\caption{Geodesic distance on $\Omega = [-10,10]\times [-10,10]$ from the point $(-1,-3)$ with sensors at $(-7,-6)$ and $(0,1)$. The distance was calculated using a Fast Marching formulation of the geodesic equation. Since geodesics follow the gradient of this distance, the figure can be compared with Figure~\ref{fig:geodesic_eqn}.}
\label{fig:geodesic_fm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{BOSsedet}
\caption{Geodesic speed at each point of $\Omega = [-10,10]\times [-10,10]$ for targets departing in the direction of the vector $(1,-1)$.}
\label{fig:geodesic_speed}
\end{figure}
\subsection{Configuration Metric Calculations}
\label{sec:conf-metr-calul}
The coordinates $\boldsymbol{z}$ on $\mathcal{M}(\Gamma)$ are $\{x_{1},y_{1},x_{2},y_{2}\}$. The sensor management problem amounts to, given some initial configuration, identifying a choice of $\{x_{1},y_{1},x_{2},y_{2}\}$ for which the determinant of \eqref{eq:metten} is maximized, and traversing geodesics $\gamma_{i}$ in $\mathcal{M}(\Gamma)$, starting at the initial locations and ending at the D-optimal locations; see Figure~\ref{evolvingsensor}. We assume that the target location is given by an uninformative prior distribution; that is, $\mathbb{P}(\boldsymbol{\theta} \in A) = \text{vol}(A)/\text{vol}(\Omega)$ for all $A \subset \Omega$.
To make the problem tractable, we consider a simple case where one of the sensor trajectories is fixed; that is, we consider a 2-dimensional submanifold of $\mathcal{M}(\Gamma)$ parametrized by the coordinates $(x_{2},y_{2})$ of $S_{2}$ only. Figure \ref{detg1} shows a contour plot of $\det({G})$, as a function of $(x_{2},y_{2})$, for the case where $S_{1}$ moves from $(0,1)$ (yellow dot) to $(2,3)$ (black dot). The second sensor $S_{2}$ begins at the point $(-6,-7)$ (red dot). Figure~\ref{detg1} demonstrates that, provided $S_{1}$ moves from $(0,1)$ to $(2,3)$, $\det({G})$ is maximized at the point $(-1,-5.5)$, implying that moving $S_{2}$ to $(-1,-5.5)$ is the D-optimal choice. The geodesic path $\gamma_{1}$ beginning at $(0,1)$ and ending at $(2,3)$ is shown by the dashed yellow curve; this is the information-maximizing path of $S_{1}$ through $\Omega$. Similarly, the geodesic path $\gamma_{2}$ for $S_{2}$, beginning at $(-6,-7)$ and ending at $(-1,-5.5)$, is shown by the dotted red curve.
\begin{figure}[h]
\includegraphics[width=0.9\columnwidth]{GeodesicCase1.png}
\justifying
\caption{Contour plot of $\det(\boldsymbol{G})$, for $\boldsymbol{G}$ given by \eqref{eq:metten}, for the case where $S_{1}$ moves from $(0,1)$ (yellow dot) to $(2,3)$ (black dot). The second sensor $S_{2}$ begins at the point $(-6,-7)$ (red dot). Brighter shades indicate a greater value of $\det(\boldsymbol{G})$; $\det(\boldsymbol{G})$ is maximized at $(-1,-5.5)$. The geodesic path linking the initial and final positions of $S_{1}$ is shown by the dashed yellow curve, while the dotted red curve shows the geodesic path linking $(-6,-7)$, the initial position of $S_{2}$, to the D-optimal location $(-1,-5.5)$. \label{detg1}
}
\end{figure}
\subsection{Visualizing Configuration Geodesics}
\label{sec:visu-config-geod}
Figure \ref{fig:cgeodesic_eqn} shows solutions to the geodesic equation on $\Omega = [-10,10]\times [-10,10]$ for a sensor starting at $(0,1)$ with the other sensor stationary at $(-7,-6)$ and the target at $(-1,-3)$. The differing paths correspond to the initial direction vector $(\cos\phi,\sin\phi)$ of the moving sensor, as $\phi$ varies through $[0,2\pi]$ in steps of $0.25$ radians. It is interesting to note the concentration in direction of the geodesics, in spite of the even distribution of initial directions. Both groups improve information, in a way that is well understood for bearings-only sensors, by increasing the rate of change of bearing: one group achieves this by closing with the target, the other by moving at right angles to it. This should be compared with Figure~\ref{fig:cgeodesic_fm}, which uses a Fast Marching algorithm \citep{SethianJournal1998,peyre2010geodesic} to calculate the geodesic distance from the same point.
Figure~\ref{fig:cgeodesic_speed} shows the speed required along a geodesic travelling through each point in the direction of the vector $(1,-1)$.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{config_geodesics}
\caption{Solutions to the geodesic equation on $\Omega = [-10,10]\times [-10,10]$ for a sensor starting at $(0,1)$ with the other sensor stationary at $(-7,-6)$ and the target at $(-1,-3)$. The differing paths correspond to the initial direction vector $(\cos\phi,\sin\phi)$ of the moving sensor, as $\phi$ varies from 0 to $2\pi$ radians in steps of 0.25 radians.}
\label{fig:cgeodesic_eqn}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{BOCgeodist}
\caption{Geodesic distance on $\Omega = [-10,10]\times [-10,10]$ for a sensor starting at $(0,1)$, with the other sensor stationary at $(-7,-6)$ and an uninformative prior for the target. The distance was calculated using a Fast Marching formulation of the geodesic equation. Since geodesics follow the gradient of this distance, the figure can be compared with Figure~\ref{fig:cgeodesic_eqn}.}
\label{fig:cgeodesic_fm}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{BOCsedet}
\caption{Geodesic speed for a sensor moving along a geodesic in direction $(1,-1)$ at each point in $\Omega = [-10,10]\times [-10,10]$. The other sensor is stationary at $(-7,-6)$, and an uninformative prior is used for the target.}
\label{fig:cgeodesic_speed}
\end{figure}
\section{Discussion}
We consider the problem of sensor management from an information-geometric standpoint \citep{amari1,amari2}. A physical space houses a target, with an unknown position, and a collection of mobile sensors, each of which takes measurements with the aim of gaining information about target location \citep{moran121,moran122,optimpaper}. The measurement process is parametrized by the relative positions of the sensors. For example, if one considers an array of sensors that take bearings-only measurements to the target \eqref{eq:mises}, the amount of information that can be extracted regarding target location clearly depends on the angles between the sensors. In general, we illustrate that in order to optimize the amount of information the sensors can obtain about the target, the sensors should move to positions which maximize the norm of the volume form (`D-optimality') on a particular manifold imbued with a metric \eqref{eq:metten} which measures the distance (information content difference) between Fisher matrices \citep{seb97,dopt3}. We also show that, if the sensors move along geodesics [with respect to \eqref{eq:metten}] to reach the optimal configuration, the amount of information that they \emph{give away} to the target is minimized. This paves the way for (future) discussions about game-theoretic scenarios where both the target and the sensors are competitively trying to acquire information about one another from stochastic measurements; see e.g. \cite{game1,game2} for a discussion on such games. Differential games along these lines will be addressed in forthcoming work.
We hope that this work may eventually have realistic applications to signal processing problems involving parameter estimation using sensors. We have demonstrated that there is a theoretical way of choosing sensor positions, velocities, and possibly other parameters in an optimal manner so that the maximum amount of useful data can be harvested from a sequence of measurements taken by the sensors. For example, with sensors that take continuous or discrete measurements, this potentially allows one to design a system that minimizes the expected amount of time taken to localize (with some given precision) the position of a target. If the sensors move along paths that are geodesic with respect to \eqref{eq:metten}, then the target, in some sense, learns the least about its trackers. This allows the sensors to prevent either intentional or unintentional evasive manoeuvres; a unique aspect of information-geometric considerations. Ultimately, these ideas may lead to improvements on search or tracking strategies available in the literature [e.g. \cite{track1,track2}]. Though we have only considered simple sensor models in this paper, the machinery can, in principle, be adopted to systems of arbitrary complexity. It would certainly be worth testing the theoretical ideas presented in this paper experimentally using various sensor setups.
\section*{Acknowledgements}
This work was supported in part by the US Air Force Office of Scientific Research (AFOSR) under Grant No. FA9550-12-1-0418.
Simon Williams acknowledges an Outside Studies Program grant from Flinders University.
All the authors declare that they have no further conflict of interest.
\bibliographystyle{spbasic}
We work in the category $\Schk$ of separated finite type schemes over a field $k$ of characteristic zero. Recall that a proper morphism of schemes $\pi:\tilde{X}\to X$ is an envelope if for every subvariety $V\subset X$, there exists a subvariety $\tilde{V}\subset \tilde{X}$, such that $\pi$ maps $\tilde{V}$ birationally onto $V$.
Gillet in \cite{GilletSeq} proved that if $\pi:\tilde{X} \to X$ is an envelope, with $\pi$ projective, then the following sequence is exact:
\begin{equation} \label{eq-gillet}
A_*(\tilde{X}\times_X\tilde{X}) \xrightarrow{{p_1}_*-{p_2}_*} A_*(\tilde{X}) \xrightarrow{\pi_*} A_*(X) \xrightarrow{} 0.
\end{equation}
Here $A_*$ is either the Chow theory or the $K$-theory of coherent sheaves, $p_i: \tilde{X}\times_X\tilde{X} \to \tilde{X}$ are the two projections and ${p_i}_*, \pi_*$ are the push-forward maps in either theory.
The goal of this article is to prove the exactness of the analogous sequence in the cobordism theory $\Omega_*$ defined by Levine and Morel \cite{Levine-Morel}.
\begin{theorem} \label{thm-seq1} Let $\pi: \tilde{X}\to X$ be an envelope, with $\pi$ projective. Then the sequence
\[ \Omega_*(\tilde{X}\times_X\tilde{X}) \xrightarrow{{p_1}_*-{p_2}_*} \Omega_*(\tilde{X}) \xrightarrow{\pi_*} \Omega_*(X) \xrightarrow{} 0\]
is exact.
\end{theorem}
This theorem has several applications. We mention here two of them. The first application is in the relationship between algebraic cobordism $\Omega_*(X)$ and algebraic K-theory $G_0(X)$ (the Grothendieck group of the category of coherent sheaves on $X$). Levine and Morel in \cite{Levine-Morel} constructed a natural morphism
\[ \Omega_*(X)\otimes_\LL \ZZ[\beta,\beta^{-1}] \longrightarrow G_0(X)[\beta,\beta^{-1}],\]
and proved it to be an isomorphism for smooth $X$. Dai in \cite{DaiPaper} extended this isomorphism to all schemes $X$ that can be embedded in a smooth scheme; this includes all quasiprojective schemes $X$. In \cite{ktheory} we build on Dai's work to prove this isomorphism for all schemes $X$ in $\Schk$. Since every scheme $X$ admits a quasiprojective envelope, Theorem~\ref{thm-seq1} implies that the cobordism theory $\Omega_*$ as a functor on $\Schk$ is determined by its restriction to the full subcategory of quasiprojective schemes. A similar statement for $G_0$ proved by Gillet \cite{GilletSeq} then reduces the problem to the case proved by Dai.
The second application of Theorem~\ref{thm-seq1} is in the study of operational bivariant theories of Fulton and MacPherson \cite{Fulton-MacPherson}. Kimura in \cite{Kimura} used the exact sequence (\ref{eq-gillet}) to give an inductive method for finding the operational Chow groups of singular varieties from the Chow groups of smooth varieties. His proof can be generalized to algebraic cobordism and other homology theories. In \cite{bivariant} we study the operational bivariant theories associated to certain homology theories that have the exact descent sequence. As a special case, we describe the operational equivariant cobordism theory of toric varieties. This result is based on previous work by Payne \cite{Payne} and Krishna and Uma \cite{Krishna-Uma}.
\section{An Overview of Algebraic Cobordism Theory} \label{section.overview.cobordism}
Algebraic cobordism theory was defined by Levine and Morel in \cite{Levine-Morel}. Later, Levine and Pandharipande \cite{Levine-Pandharipande} found a simpler presentation of the cobordism groups. We will use the construction of Levine-Pandharipande as the definition, but refer to Levine-Morel for its properties.
Let $\Smk$ be the full subcategory of $\Schk$ whose objects are smooth quasiprojective schemes over $\Spec k$. By a smooth morphism we always mean a smooth and quasiprojective morphism.
For $X$ in $\Schk$, let $\cM(X)$ be the set of isomorphism classes of projective morphisms $f: Y\to X$ for $Y\in \Smk$. This set is a monoid under disjoint union of the domains; let $\Mp(X)$ be its group completion. The elements of $\Mp(X)$ are called cycles. The class of $f: Y\to X$ in $\Mp(X)$ is denoted by $[f: Y\to X]$. The group $\Mp(X)$ is free abelian, generated by the cycles $[f: Y\to X]$ where $Y$ is irreducible.
A double point degeneration is a morphism $\pi: Y\to \PP^1$, with $Y \in \Smk$ of pure dimension, such that $Y_\infty = \pi^{-1}(\infty)$ is a smooth divisor on $Y$ and $Y_0=\pi^{-1}(0)$ is a union $A\cup B$ of smooth divisors intersecting transversely along $D=A\cap B$. Define $\PP_D = \PP(\cO_D(A)\oplus \cO_D)$, where $\cO_D(A)$ stands for $\cO_Y(A)|_D$. (Notice that $\PP(\cO_D(A)\oplus \cO_D) \isom \PP(\cO_D(B) \oplus \cO_D)$ because $\cO_D(A+B)\isom \cO_D$.)
Let $X\in \Schk$ and let $Y\in \Smk$ have pure dimension. Let $p_1, p_2$ be the two projections of $X\times \PP^1$. A double point relation is defined by a projective morphism $\pi: Y\to X\times\PP^1$, such that $p_2\circ \pi: Y\to \PP^1$ is a double point degeneration. Let
\[ [Y_\infty \to X], \quad [A\to X],\quad [B\to X], \quad [\PP_D \to X] \]
be the cycles obtained by composing with $p_1$. The double point relation is
\[ [Y_\infty \to X] -[A\to X] - [B\to X] + [\PP_D\to X] \in \Mp(X). \]
Let $\Rel(X)$ be the subgroup of $\Mp(X)$ generated by all the double point relations. The cobordism group of $X$ is defined to be
\[ \Omega_*(X) = \Mp(X)/\Rel(X).\]
The group $\Mp(X)$ is graded so that $[f: Y\to X]$ lies in degree $\dim Y$ when $Y$ has pure dimension. Since double point relations are homogeneous, this grading gives a grading on $\Omega_*(X)$. We write $\Omega_n(X)$ for the degree $n$ part of $\Omega_*(X)$.
There is a functorial push-forward homomorphism $f_*: \Omega_*(X)\to \Omega_*(Z)$ for $f: X\to Z$ projective, and a functorial pull-back homomorphism $g^*: \Omega_*(Z)\to \Omega_{*+d}(X)$ for $g: X\to Z$ a
smooth morphism of relative dimension $d$. These homomorphisms are both defined on the cycle level. Levine and Morel also construct pull-backs along l.c.i. morphisms and, more generally, refined l.c.i. pullbacks. We will not need these pullbacks below.
The cobordism theory has exterior products
\[ \Omega_*(X)\times \Omega_*(W) \longrightarrow \Omega_*(X\times W),\]
defined on the cycle level:
\[ [Y\to X] \times [Z\to W] = [Y\times Z \to X\times W].\]
These exterior products turn $\Omega_*(\Spec k)$ into a graded ring and $\Omega_*(X)$ into a graded module over $\Omega_*(\Spec k)$. When $X$ is in $\Smk$, we denote by $1_X$ the class $[id_X: X\to X]$.
\subsection{First Chern Class Operators}
Algebraic cobordism is endowed with first Chern class operators
\[ \ch(L): \Omega_*(X)\to \Omega_{*-1}(X),\]
associated to any line bundle $L$ on $X$.
This operator is also denoted by $\ch(\mathcal{L})$, where $\mathcal{L}$ is the invertible sheaf of sections of $L$.
We recall some properties of these operators that are needed below.
A formal group law on a commutative ring $R$ is a power series $F_R(u,v)\in R\llbracket u,v\rrbracket $ satisfying
\begin{enumerate}[(a)]
\item $F_R(u,0) = F_R(0,u) = u$,
\item $F_R(u,v)= F_R(v,u)$,
\item $F_R(F_R(u,v),w) = F_R(u,F_R(v,w))$.
\end{enumerate}
Thus
\[ F_R(u,v) = u+v +\sum_{i,j>0} a_{i,j} u^i v^j,\]
where $a_{i,j}\in R$ satisfy $a_{i,j}=a_{j,i}$ and some additional relations coming from property (c). We think of $F_R$ as giving a formal addition
\[ u+_{F_R} v = F_R(u,v).\]
There exists a unique power series $\chi(u) \in R\llbracket u\rrbracket $ such that $F_R(u,\chi(u)) = 0$. Denote $[-1]_{F_R} u = \chi(u)$. Composing $F_R$ and $\chi$, we can form linear combinations
\[ [n_1]_{F_R} u_1 +_{F_R} [n_2]_{F_R} u_2 +_{F_R} \cdots +_{F_R} [n_r]_{F_R} u_r \in R\llbracket u_1,\ldots,u_r\rrbracket \]
for $n_i\in \ZZ$ and $u_i$ variables.
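Two classical examples over an arbitrary ring $R$: for the additive law $F_a(u,v)=u+v$ one has $\chi(u)=-u$ and $[n]_{F_a}u=nu$, while for the multiplicative law with parameter $\beta\in R$,
\[ F_m(u,v)=u+v-\beta uv, \qquad \chi(u)=\frac{-u}{1-\beta u}=-u-\beta u^2-\beta^2 u^3-\cdots, \qquad [2]_{F_m}u = 2u-\beta u^2. \]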
There exists a universal formal group law $F_\LL$, and its coefficient ring $\LL$ is called the \emph{Lazard ring}. This ring can be constructed as the quotient of the polynomial ring $\ZZ[A_{i,j}]_{i,j>0}$ by the relations imposed by the three axioms above. The images of the variables $A_{i,j}$ in the quotient ring are the coefficients $a_{i,j}$ of the formal group law $F_\LL$. The ring $\LL$ is graded, with $A_{i,j}$ having degree $i+j-1$. The power series $F_\LL(u,v)$ is then homogeneous of degree $-1$ if $u$ and $v$ both have degree $-1$.
It is shown in \cite{Levine-Morel} that the graded group $\Omega_*(\Spec k)$ is isomorphic to $\LL$. The formal group law on $\LL$ describes the first Chern class operators of tensor products of line bundles (property (FGL) below).
We list three properties satisfied by the first Chern class operators $\ch(L): \Omega_*(Y) \to \Omega_{*-1}(Y)$ for $Y \in \Smk$ and $L$ a line bundle on $Y$:
\begin{itemize}
\item[(Dim)] For $L_1,\ldots,L_r$ line bundles on $Y$, $r>\dim Y$,
\[\ch(L_1)\circ\cdots\circ \ch(L_r) (1_Y) = 0.\]
\item[(Sect)] If $L$ is a line bundle on $Y$ and $s\in H^0(Y,L)$ is a section such that the zero subscheme $i:Z\hookrightarrow Y$ of $s$ is smooth, then
\[ \ch(L)(1_Y) = i_*(1_Z).\]
\item[(FGL)] For two line bundles $L$ and $M$ on $Y$,
\[ \ch(L\otimes M)(1_Y) = F_\LL(\ch(L), \ch(M))(1_Y).\]
\end{itemize}
In the terminology of \cite{Levine-Morel, Levine-Pandharipande} the three properties imply that $\Omega_*$ is an oriented Borel-Moore functor of geometric type.
The first Chern class operators of two line bundles commute: $\ch(L)\circ\ch(M) = \ch(M)\circ \ch(L)$, and they are compatible with smooth (l.c.i.) pull-backs, projective push-forwards and exterior products. The property (Sect) above implies that if $L$ is a trivial line bundle on $X$, then the first Chern class operator of $L$ is zero.
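As an illustration of these properties, take $Y=\PP^n$ and $L=\cO(1)$. Iterating (Sect) with generic hyperplane sections, together with the compatibility with projective push-forwards, gives
\[ \ch(\cO(1))^{k}(1_{\PP^n}) = [\PP^{n-k}\hookrightarrow \PP^n], \qquad 0\le k\le n,\]
and hence, by (FGL),
\[ \ch(\cO(2))(1_{\PP^n}) = 2\,[\PP^{n-1}\hookrightarrow \PP^n] + \sum_{i,j>0} a_{i,j}\,[\PP^{n-i-j}\hookrightarrow \PP^n],\]
where the terms with $i+j>n$ vanish by (Dim).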
\subsection{Divisor Classes} \label{sec-div-cl}
Recall that a divisor $D$ on a smooth scheme $Y \in \Schk$ has strict normal crossings (s.n.c.) if at every point $p\in Y$ there exists a system of regular parameters $y_1,\ldots,y_n$, such that $D$ is defined by the equation $y_1^{m_1}\cdots y_n^{m_n}=0$ near $p$ for some integers $m_1, \ldots, m_n$.
Let $D = \sum_{i=1}^r n_i D_i$ be a nonzero s.n.c. divisor on a scheme $Y \in \Smk$, with $D_i$ irreducible. Let us recall the construction by Levine and Morel \cite{Levine-Morel} of the class $[D\to |D|] \in \Omega_*(|D|)$.
Let
\[ F^{n_1,\ldots,n_r}(u_1,\ldots,u_r) = [n_1]_{F_\LL} u_1 +_{F_\LL} [n_2]_{F_\LL} u_2 +_{F_\LL} \cdots +_{F_\LL} [n_r]_{F_\LL} u_r \in \LL\llbracket u_1,\ldots,u_r\rrbracket .\]
We decompose this power series as
\[ F^{n_1,\ldots,n_r}(u_1,\ldots,u_r) = \sum_J F_J^{n_1,\ldots,n_r}(u_1,\ldots,u_r) \prod_{i\in J} u_i,\]
where the sum runs over nonempty subsets $J\subset \{1,\ldots, r\}$. The power series $F_J^{n_1,\ldots,n_r}$ are such that $u_i$ does not divide any nonzero term in $F_J^{n_1,\ldots,n_r}$ if $i\notin J$.
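For example, for $r=2$ and $n_1=n_2=1$ the decomposition reads
\[ F^{1,1}(u_1,u_2) = u_1+u_2+\sum_{i,j>0} a_{i,j} u_1^i u_2^j = 1\cdot u_1 + 1\cdot u_2 + F^{1,1}_{\{1,2\}}(u_1,u_2)\, u_1 u_2,\]
so $F^{1,1}_{\{1\}} = F^{1,1}_{\{2\}} = 1$ and $F^{1,1}_{\{1,2\}}(u_1,u_2) = \sum_{i,j>0} a_{i,j} u_1^{i-1} u_2^{j-1}$.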
For $i=1,\ldots,r$, let $L_i=\cO_Y(D_i)$. If $J\subset \{1,\ldots,r\}$, let $i^J: D^J=\cap_{i\in J} D_i \hookrightarrow |D|$, and $L_i^J = L_i|_{D^J}$. The class $[D\to|D|]$ is defined in \cite{Levine-Morel} as
\begin{equation} \label{eq2}
[D\to|D|] = \sum_J i_*^J F_J^{n_1,\ldots,n_r} (L_1^J, \ldots, L_r^J) (1_{D^J}),
\end{equation}
where the sum runs over nonempty subsets $J\subset \{1,\ldots,r\}$ and
$F_J^{n_1,\ldots,n_r} (L_1^J, \ldots, L_r^J)$ is the power series $F_J^{n_1,\ldots,n_r}$ evaluated on the first Chern classes of $L_1^J, \ldots, L_r^J$.
When pushed forward to $Y$, the class $[D\to|D|]$ becomes equal to $\ch(\cO(D))(1_Y)$. To see this, compute,
\begin{alignat*}{2}
\ch(\cO(D))(1_Y) &= F^{n_1,\ldots,n_r} (L_1, \ldots, L_r) (1_Y)\\
&= \sum_J F_J^{n_1,\ldots,n_r} (L_1, \ldots, L_r) \prod_{i\in J} \ch(L_i) (1_Y).
\end{alignat*}
Applying the property (Sect) repeatedly, we get
\[ \prod_{i\in J} \ch(L_i) (1_Y) = [ D^J \hookrightarrow Y].\]
Compatibility of first Chern class operators with pull-backs of line bundles then gives the desired divisor class formula pushed forward to $Y$.
We note that in the definition of divisor classes it is not necessary to assume that $D_i$ are irreducible. We may let them be smooth but possibly reducible divisors and then the same formula holds.
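For example, if $D=nD_1$ with $D_1$ smooth and $n>0$, then $F^{n}(u)=[n]_{F_\LL}u = nu+\sum_{k\geq 2}c_k u^k$ for some $c_k\in\LL$, so $F^{n}_{\{1\}}(u) = n+\sum_{k\geq 2}c_k u^{k-1}$ and the formula reduces to
\[ [nD_1\to D_1] = n\cdot 1_{D_1}+\sum_{k\geq 2} c_k\,\ch(\cO_{D_1}(D_1))^{k-1}(1_{D_1});\]
in particular $[D_1\to D_1]=1_{D_1}$.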
\subsection{A Pair of Smooth Divisors.}
Let $A$ and $B$ be smooth divisors on $Y\in \Smk$ that intersect transversely along $D=A\cap B$. Consider
\[ [A+B \to |A\cup B|] = [A\to A\cup B]+ [B\to A \cup B]+ i_* F^{1,1}_{\{1,2\}}(\cO_D(A), \cO_D(B)) (1_D),\]
where $i: D\hookrightarrow |A\cup B|$ and $\cO_D(A), \cO_D(B)$ stand for $\cO_Y(A)|_D, \cO_Y(B)|_D$. Let
\[ \PP_D = \PP(\cO_D(A)\oplus \cO_D).\]
\begin{lemma} \label{lem-pair}
With notation as above, let $\cO_Y(A+B+C)|_D = \cO_D$ for some divisor $C$ on $Y$. Then
\[ F^{1,1}_{\{1,2\}}(\cO_D(A), \cO_D(B)) (1_D) = -[\PP_D\to D] + \beta,\]
where
\[ \beta = \sum_{i,j\geq 0, l> 0} b_{ijl} \ch(\cO_D(A))^i \ch(\cO_D(B))^j \ch(\cO_D(C))^l (1_D),\]
for some universally defined $b_{ijl}\in \LL$ that do not depend on $Y,A,B,C$.
\end{lemma}
\begin{proof} It is shown in the proof of \cite[Lemma~9]{Levine-Pandharipande} that
\[ -[\PP_D\to D] = F^{1,1}_{\{1,2\}}(\cO_D(A), \cO_D(-A)) (1_D).\]
Substituting $\cO_D(B+C) = \cO_D(-A)$, we get
\begin{alignat*}{2} -[\PP_D\to D] &= F^{1,1}_{\{1,2\}}(\cO_D(A), \cO_D(B+C)) (1_D) \\
&= \sum_{i,j>0} a_{ij} \ch(\cO_D(A))^{i-1} \ch(\cO_D(B+C))^{j-1}(1_D),
\end{alignat*}
with $a_{ij}$ the coefficients in the formal group law of $\LL$.
To compute $\ch(\cO_D(B+C))$, we apply the formal group law again:
\[ \ch(\cO_D(B+C)) = \ch(\cO_D(B))+\ch(\cO_D(C))+ \sum_{i,j> 0} a_{ij} \ch(\cO_D(B))^i \ch(\cO_D(C))^j.\]
Now substituting this into the expression of $-[\PP_D\to D]$, the terms that involve only $\ch(\cO_D(A))$ and $\ch(\cO_D(B))$ give $F^{1,1}_{\{1,2\}}(\cO_D(A), \cO_D(B))(1_D)$, the remaining terms give the class $-\beta$.
\end{proof}
\section{Proof of the main theorem} \label{section.gillet.cobordism}
We will now prove Theorem~\ref{thm-seq1}. Let us start with some notation. Recall that
\[ \Omega_*(X) = \Mp(X)/\Rel(X),\]
where $\Mp(X)$ is the group of cycles and $\Rel(X)$ is the subgroup of relations, generated by double point relations. We identify $\Omega_*(\Spec k)$ with the Lazard ring $\LL$. Then $\Omega_*(X)$ is an $\LL$-module.
Composition with $\pi$ gives the push-forward map $\pi_*: \Mp(\tilde{X})\to \Mp(X)$, taking $\Rel(\tilde{X})$ to $\Rel(X)$. The induced map $\Omega_*(\tilde{X}) \to \Omega_*(X)$ is also denoted $\pi_*$. Define
\[ \Omega_*(X)_\pi = \pi_* \Mp(\tilde{X}) / \pi_* \Rel(\tilde{X}).\]
If $K_\pi = \Ker(\pi_*: \Mp(\tilde{X})\to \Mp(X))$, then
\begin{equation} \label{eq1}
\Omega_*(X)_\pi \isom \Mp(\tilde{X}) / (\Rel(\tilde{X})+K_\pi).
\end{equation}
There are natural maps of $\LL$-modules that factor $\pi_*$ as
\[ \Omega_*(\tilde{X}) \stackrel{\phi}{\longrightarrow} \Omega_*(X)_\pi \stackrel{\psi}{\longrightarrow} \Omega_*(X).\]
The map $\phi$ is surjective by (\ref{eq1}). We claim that $\psi$ is also surjective. For every subvariety $Y\subset X$, choose a resolution of singularities given by a projective birational morphism $\tilde{Y}\to Y$. Then the cycles $[\tilde{Y}\to X]$ generate $\Omega_*(X)$ as an $\LL$-module \cite{Levine-Morel}. We can choose the resolutions so that $\tilde{Y}\to X$ factor through $\pi:\tilde{X}\to X$, hence these classes lie in $\pi_* \Mp(\tilde{X})$.
The following is the main ingredient for proving Theorem~\ref{thm-seq1}.
\begin{proposition} \label{prop-isom}
The map $\psi: \Omega_*(X)_\pi \to \Omega_*(X)$ is an isomorphism.
\end{proposition}
{\em Proof of Theorem~\ref{thm-seq1}}. Let us assume Proposition~\ref{prop-isom} and prove Theorem~\ref{thm-seq1}. Clearly the sequence is a complex by functoriality of push-forward. Moreover, $\pi_* = \psi\circ\phi$ is surjective. We only need to prove exactness in the middle: $\Ker \pi_* \subset \Img ( {p_1}_*-{p_2}_* )$. By Proposition~\ref{prop-isom}, $\Ker \pi_* = \Ker \phi$, which by (\ref{eq1}) is the image of $K_\pi$ in $\Omega_*(\tilde{X})$. Now $K_\pi$ is generated by cycles
\[ [f:Y\to \tilde{X}] - [g:Y\to \tilde{X}],\]
where $\pi\circ f= \pi\circ g$. These generators lift to cycles
\[
\pushQED{\qed}
[(f,g): Y\to \tilde{X}\times_X\tilde{X}] \in \Omega_*(\tilde{X}\times_X\tilde{X}). \qedhere
\popQED
\]
To prove Proposition~\ref{prop-isom}, we follow the argument in \cite[Chapter 6]{Levine-Morel} showing that $\Omega_*(X)_D \to \Omega_*(X)$ is an isomorphism. Here $\Omega_*(X)_D$ is the group defined by cycles and relations transverse to a divisor $D$. The proof has two steps:
\begin{enumerate}
\item Define a distinguished lifting $\Mp(X) \xrightarrow{d} \Omega_*(X)_\pi$, such that the composition
\[ \Mp(\tilde{X}) \stackrel{\pi_*}{\longrightarrow} \Mp(X) \stackrel{d}{\longrightarrow} \Omega_*(X)_\pi \]
is the canonical homomorphism.
\item Show that $d$ maps $\Rel(X)$ to zero, hence it descends to $d: \Omega_*(X) \to \Omega_*(X)_\pi$, providing a left inverse to $\psi$ and proving that $\psi$ is injective.
\end{enumerate}
\subsection{Elimination of Indeterminacies}
We will need to eliminate the indeterminacies of rational maps $Y\dra
\tilde{X}$. We use the following well-known theorem.
\begin{theorem} (Hironaka \cite{Hironaka}) \label{thm-resolution} Let
$Y$ be a smooth variety, $D\subset Y$ a divisor with strict normal
crossings. Let $f: \tilde{Y}\to Y$ be a projective birational morphism,
$U\subset Y$ a nonempty open set such that $f: f^{-1}(U)\to U$ is an
isomorphism. Then there exists a sequence of morphisms
\[ Y=Y_1 \stackrel{g_1}{\longleftarrow} Y_2
\stackrel{g_2}{\longleftarrow} Y_3 \longleftarrow \ldots
\stackrel{g_{m-1}}{\longleftarrow} Y_m,\]
such that
\begin{enumerate}
\item For each $i$ the map $g_i$ is the blowup of $Y_{i}$ along a
smooth center $C_i$, where $C_i$ lies over $Y\setmin U$ and intersects
the union of the pull-back of $D$ and the exceptional locus of $Y_i\to Y$
normally.
\item The rational map $Y_m\to Y\dra \tilde{Y}$ extends to a morphism
$Y_m\to \tilde{Y}$.
\end{enumerate}
\end{theorem}
Recall that if $Y$ is smooth and $D$ is an s.n.c. divisor on $Y$, then a
smooth subscheme $C\subset Y$ is said to intersect $D$ normally if at
every point $p\in Y$ we can choose a regular system of parameters
$y_1,\ldots, y_r$ so that $D$ is defined by $y_1^{n_1}\cdots
y_r^{n_r}=0$ for some $n_1,\ldots,n_r\in \ZZ$ and $C$ is defined by
vanishing of $y_{i_1},\ldots,y_{i_j}$ for some $i_1,\ldots,i_j$. If $D$
is an s.n.c. divisor and $C$ intersects it normally, then the blowup of
$Y$ along $C$ is smooth and the pull-back of $D$ together with the
exceptional divisor is again an s.n.c. divisor.
The proof of the above theorem is reduced to the problem of principalizing an
ideal sheaf as follows. One may assume that $f: \tilde{Y}\to Y$ is the
blowup of a coherent sheaf of ideals $I$ on $Y$ with co-support in
$Y\setmin U$. If the sequence of blowups $g: Y_m\to Y$ is such that
$g^*(I)$ is principal, then the birational map $Y_m\to \tilde{Y}$ is a
morphism.
We will apply Theorem~\ref{thm-resolution} in the following situation.
\begin{corollary}\label{cor-resolution}
Let $Y$ be a smooth variety and $D$ an s.n.c. divisor on $Y$. Let $\phi:
Y\to X$ be a proper morphism. Since $\pi:\tilde{X}\to X$ is an envelope,
there exists a subvariety $Z\subset \tilde{X}$ mapping birationally onto
$\phi(Y)\subset X$. Let $V\subset \phi(Y)$ be a nonempty open subset
such that $\pi|_{Z}^{-1}(V) \to V$ is an isomorphism, and let $U =
\phi^{-1}(V)\subset Y$.
Then there exists a sequence of blowups $g:Y_m\to Y$ of smooth centers
that lie over $Y\setmin U$ and intersect the inverse image of $D$
together with the exceptional locus normally, such that the composition
$Y_m\to Y\to X$ factors through $\tilde{X}$.
\end{corollary}
\begin{proof}
Let $\tilde{Y}$ be the component of $Y \times_X Z$ that dominates $Y$.
Since $Z \to \phi(Y)$ is projective and birational, the projection $f:
\tilde{Y}\to Y$ is also projective and birational. Moreover, $f$ is an
isomorphism over $U$. We may now apply Theorem~\ref{thm-resolution}.
\end{proof}
The following result can be viewed as an embedded elimination of
indeterminacies.
\begin{corollary} \label{cor-res} Let $W$ be a smooth variety, $D, E$ be
effective divisors on $W$ such that $D+E$ has s.n.c., and $\phi:W\to X$ be a
proper morphism. Then there exists a birational morphism $g:\tilde{W}\to
W$, obtained by a sequence of blowups of smooth centers that lie over
$|D|$ and intersect the pull-back of $D+E$ together with the exceptional
locus normally, such that for every component $\tilde{D}_i$ of the
pull-back $\tilde{D}=g^*(D)$, the composition $\tilde{D}_i\to \tilde{W}
\to W\to X$ factors through $\tilde{X}\to X$.
\end{corollary}
\begin{proof}
Let $D_i$ be a component of $D$. Let $D'$ be the divisor
$D'=(D+E-qD_i)|_{D_i}$, where $q$ is the coefficient of $D_i$ in $D+E$.
We apply Corollary~\ref{cor-resolution} to the map $D_i \to X$ and the
s.n.c. divisor $D'$ on $D_i$. The result is a sequence of blowups
$\tilde{D}_i\to D_i$ so that the composition $\tilde{D}_i\to D_i\to X$
factors through $\tilde{X}$. The centers of the blowups all lie over
$D_i\setmin U$, where $U=\phi^{-1}(V)$ for some nonempty open $V\subset
\phi(D_i)$.
Let us now perform the same sequence of blowups on $W$ (blow up $W$
along the same centers lying in $D_i$ and in the strict transforms of
$D_i$), to get $g: \tilde{W}\to W$. Then $\tilde{D}_i$ is isomorphic to
the strict transform of $D_i$ in $\tilde{W}$. Such blowups introduce new
components to the divisor $g^*(D)$. However, we claim that all these new
exceptional components have image in $X$ of smaller dimension than the
image of $D_i$. Indeed, by the choice of $U$, the centers of blowups lie
over the closed set $\phi(D_i)\setmin V \subset X$, and so do the
exceptional divisors.
Thus, by induction on the dimension of the
image in $X$, we can resolve the indeterminacies of all components of
(the pull-back of) $D$.
\end{proof}
\subsection{Distinguished Liftings}
Let $[Y\to X]$ be an element of $\cM(X)$, with $Y$ irreducible. We construct a lifting of this cycle to $\Omega_*(X)_\pi$.
Let $W=Y\times \PP^1$, $D=Y\times\{0\} \subset W$, and $f:W\to X$ be the composition of the projection to $Y$ and $Y\to X$. We apply Corollary~\ref{cor-res} to this situation (with $E=0$) to find a blowup $g:\tilde{W}\to W$. Let $\tilde{D} = g^*(D)$ be the pull-back of $D$, a possibly nonreduced s.n.c. divisor.
Consider the class $[\tilde{D}\to |\tilde{D}|] \in \Omega_*(|\tilde{D}|)$.
Note that any map $Z\to |\tilde{D}|$ from an irreducible variety $Z$ has image in a component $\tilde{D}_i$ of $\tilde{D}$. Since $\tilde{D}_i\to X$ factors through $\tilde{X}\to X$, the composition $Z\to \tilde{D}_i \to X$ also factors. Similarly, an irreducible double point degeneration in $|\tilde{D}|$, when pushed forward to $X$, factors through $\tilde{X}$. It follows that the push-forward map $\Omega_*(|\tilde{D}|) \to \Omega_*(X)$ factors through $\Omega_*(X)_\pi$. We define a distinguished lifting of $[Y\to X]$ to be the image of $[\tilde{D}\to |\tilde{D}|]$ in $\Omega_*(X)_\pi$. Note that a distinguished lifting depends on the choice of the blowup $\tilde{W}\to W$, but not on the liftings $\tilde{D}_i\to \tilde{X}$ of $\tilde{D}_i\to X$.
\subsection{Product of Divisor Classes}
Let $D$ and $E$ be effective divisors on a scheme $W \in \Smk$, such that $D+E$ has s.n.c. We define the class
\[ [D\bullet E \to |D|\cap|E|] \in \Omega_*( |D|\cap|E|)\]
with the property that, when pushed forward to $W$, it becomes equal to
\[ \ch(\cO_W(D))\circ \ch(\cO_W(E)) (1_W).\]
Let $D=\sum_i n_i D_i$ and $E=\sum_i p_i D_i$, where $D_i$, $i=1,\ldots,r$ are irreducible divisors. For $i=1,\ldots,r$, let $L_i=\cO_W(D_i)$. If $J\subset \{1,\ldots,r\}$ is such that $n_j\neq 0$ and $p_i\neq 0$ for some $i,j\in J$, let $i^J: D^J=\cap_{i\in J} D_i \hookrightarrow |D|\cap |E|$, and $L_i^J = L_i|_{D^J}$.
Let the class $[D\bullet E \to |D|\cap|E|]$ be defined by the formula
\[ \sum_{I, J} i_*^{I\cup J} F_J^{n_1,\ldots,n_r} (L_1^{I\cup J}, \ldots, L_r^{I\cup J}) F_I^{p_1,\ldots,p_r} (L_1^{I\cup J}, \ldots, L_r^{I\cup J}) \prod_{i\in I\cap J} \ch(L_i^{I\cup J}) (1_{D^{I\cup J}}).\]
Here the sum runs over pairs of nonempty subsets $I, J\subset \{1,\ldots,r\}$, such that $n_j \neq 0$ and $p_i\neq 0$ for all $j\in J$ and $i\in I$.
As in the case of divisor classes, it is enough to assume that the divisors $D_i$ are smooth but not necessarily irreducible.
We claim that $[D\bullet E \to |D|\cap|E|]$, when pushed forward to $|D|$, becomes equal to $\ch(\cO_W(E)|_{|D|}) [D\to |D|]$. To see this, we apply $\ch(\cO_W(E)|_{|D|})$ (which we shorten to $\ch(\cO(E))$) to the definition of $[D\to |D|]$:
\[ \ch(\cO(E)) [D\to |D|] = \sum_J i_*^J F_J^{n_1,\ldots,n_r} (L_1^J, \ldots, L_r^J) \ch(\cO(E)) (1_{D^J}).\]
We can now use the same divisor class formula to compute $\ch(\cO(E)) (1_{D^J})$:
\begin{alignat*}{2}
\ch(\cO(E)) (1_{D^J}) &= F^{p_1,\ldots,p_r} (L_1^J, \ldots, L_r^J) (1_{D^J})\\
&= \sum_I F_I^{p_1,\ldots,p_r} (L_1^J, \ldots, L_r^J) \prod_{i\in I} \ch(L_i^J)(1_{D^J}).
\end{alignat*}
Applying the property (Sect), we get
\[ \prod_{i\in I} \ch(L_i^J)(1_{D^J}) = \prod_{i\in I\cap J} \ch(L_i^J) ([D^{I\cup J}\to D^J]).\]
Now putting the formulas back together and using the compatibility of the first Chern class operators with pull-backs proves the claim.
Note that the above definition is symmetric in $E$ and $D$,
\[ [D\bullet E \to |D|\cap|E|] = [E\bullet D \to |D|\cap|E|].\]
This implies that, when pushed forward to $|D|\cup|E|$, the classes $\ch(\cO(E)) [D\to |D|]$ and $\ch(\cO(D)) [E\to |E|]$ become equal.
When $D$ is a smooth divisor that does not have common components with $E$, then $E'=E|_{|D|}$ is an s.n.c. divisor on $|D|$ and one has that
\[ [D\bullet E \to |D|\cap|E|] = [E' \to |E'|].\]
To prove this, let $D=D_1$, $E=\sum_{i>1} p_i D_i$, and let the sum in the definition run over nonempty subsets $J\subset\{1\}$ and $I\subset\{2,\ldots,r\}$. Since $F_J^{1,0,\ldots,0} = 1$ for $J=\{1\}$, the sum simplifies to the expression defining the divisor class $[E' \to |E'|]$.
\subsection{Double Cobordisms}
Let $W$ be a scheme in $\Smk$, $f: W\to \PP^1\times \PP^1$ a morphism, $D=f^{*}(\PP^1\times\{0\})$, $E=f^{*}(\{0\}\times \PP^1)$. Assume that $D+E$ is an s.n.c. divisor on $W$. Write
\begin{alignat*}{2}
D &= \sum_i a_i D_i +\sum_i \alpha_i F_i,\\
E &= \sum_i b_i E_i +\sum_i \beta_i F_i,
\end{alignat*}
where $F_i$ are the components of $D+E$ lying over $(0,0)\in\PP^1\times \PP^1$ and $D_i, E_i$ are the other components of $D$ and $E$. We may assume that $\alpha_i, \beta_i >0$.
\begin{lemma} \label{lem-double}
With notation as above, let $D' = \sum_i a_i D_i$. Then the classes
\[ [E\bullet D \to |E|\cap|D|]\qquad\text{and}\qquad [E\bullet D' \to |E|\cap|D'|] \]
become equal when pushed forward to $|E|$.
\end{lemma}
\begin{proof}
Pushed forward to $|E|$, the class $[E\bullet D \to |E|\cap|D|]$ becomes equal to
\begin{gather*}
\ch(\cO(D))[E\to|E|] = \ch(\cO(D')\otimes \cO(\sum_i \alpha_i F_i) )[E\to|E|] \\
= \ch(\cO(D'))[E\to|E|] + \ch(\cO(\sum_i \alpha_i F_i) )[E\to|E|] \\
\qquad + \sum_{j,l\geq 1} a_{j,l} \ch(\cO(D'))^j \ch(\cO(\sum_i \alpha_i F_i) )^l [E\to|E|],
\end{gather*}
where $a_{j,l}\in\LL$ are the coefficients of the formal group law. The first term in the sum gives $[E\bullet D' \to |E|\cap|D'|]$ pushed forward to $|E|$. It suffices to prove that the second term vanishes, because in that case the third term also vanishes. Now $\ch(\cO(\sum_i \alpha_i F_i) )[E\to|E|]$ is the push-forward to $|E|$ of the class $[(\sum_i \alpha_i F_i) \bullet E \to |\sum_i \alpha_i F_i| \cap |E|]$. If we instead push this class forward to $|\sum_i \alpha_i F_i|$, we get $\ch(\cO(E)) [\sum_i \alpha_i F_i \to |\sum_i \alpha_i F_i| ] = 0$ because $\cO(E)$ is trivial on $|\sum_i \alpha_i F_i|$. Since $\beta_i>0$ for all $i$, the inclusion maps factor:
\[ |\sum_i \alpha_i F_i| \cap |E| \hookrightarrow |\sum_i \alpha_i F_i| \hookrightarrow |E|,\]
and then the push-forward maps also factor.
\end{proof}
\subsection{Uniqueness of Distinguished Liftings}
\begin{lemma}
Let $[Y\to X]$ be a cycle in $\cM(X)$, with $Y$ irreducible, and consider two distinguished liftings of it defined by the images of $[\tilde{D}_1\to |\tilde{D}_1|]$ and $[\tilde{D}_2\to |\tilde{D}_2|]$ in $\Omega_*(X)_\pi$, where
\begin{gather*}
g_1: \tilde{W}_1 \to Y\times \PP^1, \quad \tilde{D}_1 = g_1^*(Y\times\{0\}),\\
g_2: \tilde{W}_2 \to Y\times \PP^1, \quad \tilde{D}_2 = g_2^*(Y\times\{0\}).
\end{gather*}
Then the two distinguished liftings are equal in $\Omega_*(X)_\pi$.
\end{lemma}
\begin{proof} We may assume that the birational map $ \tilde{W}_1 \dra \tilde{W}_2$ is a morphism. (Otherwise find a third variety $\tilde{W}_3$ that maps to both of them.) It suffices to prove that the class $[\tilde{D}_1\to |\tilde{D}_1|]$, when pushed forward to $|\tilde{D}_2|$, becomes equal to $[\tilde{D}_2\to |\tilde{D}_2|]$.
Since the morphism $\tilde{W}_1 \to \tilde{W}_2$ is proper and both varieties are quasi-projective, the morphism is projective. By the weak factorization theorem \cite{Wlodarczyk, AKMW}, we can factor the birational morphism $ \tilde{W}_1 \to \tilde{W}_2$ into a sequence of blowups and blowdowns along smooth centers. Moreover, the factorization can be chosen so that if $Z_{i+1}\to Z_{i}$ is one blowup of $C\subset Z_i$ in this factorization, then the birational map $g_i: Z_i \dra \tilde{W}_2$ is a projective morphism, $g_i^*(\tilde{D}_2)$ is an s.n.c. divisor on $Z_i$, the center $C$ lies in the support of the divisor $g_i^*(\tilde{D}_2)$ and intersects it normally.
We may thus assume that $ \tilde{W}_1 \to \tilde{W}_2$ is the blowup of $\tilde{W}_2$ along a smooth center $C\subset \tilde{W}_2$ that lies in the support of $\tilde{D}_2$ and intersects it normally.
Let $\tilde{V}_2 = \tilde{W}_2\times\PP^1$. Let $\tilde{V}_1$ be the blowup of $\tilde{V}_2$ along $C\times\{0\} \subset \tilde{V}_2$. Let $f: \tilde{V}_1 \to \PP^1\times\PP^1$ be the projection. Consider the divisors $D=f^*(\PP^1\times\{0\})$ and $E=f^*(\{0\}\times \PP^1)$. Then $D+E$ is an s.n.c. divisor. Moreover, $D=D'+F$, where $F$ is the exceptional divisor of the blowup, lying over $(0,0)\in\PP^1\times\PP^1$, and $D'\isom \tilde{W}_1$. Since $D'$ is smooth, having no common component with $E$, and $E|_{D'} = \tilde{D}_1$, we get
\[ [D'\bullet E\to |D'|\cap |E|] = [\tilde{D}_1\to |\tilde{D}_1|]. \]
Lemma~\ref{lem-double} implies that, when pushed forward to $|E|$, this class becomes $\ch(\cO(D))[E\to |E|]$, which in turn equals $[\tilde{D}_2\to |\tilde{D}_2|]$ pushed forward to $|E|$ by a section $s: |\tilde{D}_2|\to |E|$ of the projection $|E|\to |\tilde{D}_2|$. Hence the two classes become equal when pushed forward to $|\tilde{D}_2|$.
\end{proof}
The previous lemma proves that distinguished liftings are unique.
We extend the liftings of generators $[Y\to X]$ linearly to a group homomorphism $d: \Mp(X)\to \Omega_*(X)_\pi$. The distinguished lifting of the class $\pi_*[Y\to \tilde{X}] \in \Mp(X)$ is the class $\pi_*[Y\to \tilde{X}] \in \Omega_*(X)_\pi$, hence the composition
\[ \Mp(\tilde{X}) \stackrel{\pi_*}{\longrightarrow} \Mp(X) \stackrel{d}{\longrightarrow} \Omega_*(X)_\pi \]
is the canonical projection.
\begin{remark} \label{rem-dist-lift} The proof of the lemma shows that, to define the distinguished lifting of $[Y\to X]$, it is not necessary to require that $\tilde{W}$ is the blowup of $Y\times\PP^1$; we can allow both blowups and blowdowns along centers lying over $Y\times \{0\}$. More precisely, we need a proper birational map $g: \tilde{W}\dra Y\times \PP^1$ from a smooth quasi-projective variety $\tilde{W}$, satisfying
\begin{enumerate}
\item The map $g$ is a regular isomorphism over $Y\times (\PP^1\setmin\{0\})$.
\item The composition with the second projection $ \tilde{W}\dra Y\times \PP^1 \to \PP^1$ is a morphism whose fiber over $0$ is a divisor $\tilde{D}$ with s.n.c. on $ \tilde{W}$.
\item The rational map $\tilde{W}\dra Y\times \PP^1 \to X\times \PP^1$ extends to a projective morphism.
\item The morphism $\tilde{D}\to X$ factors through $\pi: \tilde{X}\to X$ on each component of $\tilde{D}$.
\end{enumerate}
Then the image of the class $[\tilde{D}\to |\tilde{D}|]$ in $\Omega_*(X)_\pi$ defines the distinguished lifting of $[Y\to X]$.
\end{remark}
\subsection{Completion of the Proof of Proposition~\ref{prop-isom}}
It remains to prove that the distinguished lifting $d: \Mp(X)\to \Omega_*(X)_\pi$ maps $\Rel(X)$ to zero. Consider a double point degeneration $f:W\to \PP^1$, $W\to X$. Let $W_\infty = f^{-1}(\infty)$ be a smooth fiber and $W_0 = f^{-1}(0) = A \cup B$. Recall that the double point relation is
\[ [W_\infty\to X] - [A\to X] - [B\to X] +[\PP_{A\cap B}\to X],\]
where $\PP_{A\cap B}= \PP(\cO_{A\cap B}(A) \oplus \cO_{A\cap B})$. Since $\Rel(X)$ is generated by the double point relations, it suffices to prove that
\[ d[W_\infty\to X] - d[A\to X] -d [B\to X] +d[\PP_{A\cap B}\to X] = 0 .\]
Let $V=W\times \PP^1$. We blow up $V$ along smooth centers lying over $W\times \{0\}$ that intersect the pull-back of $W_0\times \PP^1+ W_\infty\times \PP^1+ W\times\{0\}$ normally. Let the result be $\tilde{V}$, such that the map from the inverse image of $W\times \{0\}$ to $X$ lifts to $\tilde{X}$ on every irreducible component. Let
\[ g: \tilde{V} \to W\times\PP^1 \to \PP^1\times\PP^1.\]
Define
\[ E=g^*(\PP^1\times \{0\}), \quad D_0=g^*(\{0\}\times \PP^1), \quad D_\infty=g^*(\{\infty\}\times \PP^1).\]
Let $D_0'$, $D_\infty'$ be the sums of components in $D_0$, $D_\infty$ that do not map to $(0,0)$ or $(\infty,0)$ in $\PP^1\times\PP^1$. Then $D_\infty'$ is the blowup of $W_\infty\times\PP^1$ along centers lying over $W_\infty\times \{0\}$. Since $D'_\infty$ is smooth and has no component in common with $E$, it follows that $[D_\infty'\bullet E \to |D_\infty'|\cap |E|]$ is the divisor class of $E|_{D'_\infty}$, which gives the distinguished lifting of $[W_\infty\to X]$.
Similarly, $D_0'$ is the blowup of $W_0\times\PP^1 = (A\cup B)\times \PP^1$ along centers lying over $W_0\times \{0\}$. The divisor $D_0'$ is a union $A'\cup B'$ of two smooth divisors, the blowups of $A\times \PP^1$ and $B\times \PP^1$. The intersection of these divisors is a blowup of $(A\cap B)\times\PP^1$. We claim that when pushed forward, $[D_0'\bullet E \to |D_0'|\cap |E|]$ gives the class $d[A\to X] +d [B\to X] -d[\PP_{A\cap B}\to X]$.
Let $A'=D_1$, $B'=D_2$ and $E=\sum_{i>2} p_i D_i$. Then in the formula defining $[D_0'\bullet E \to |D_0'|\cap |E|]$ we may take the sum over nonempty subsets $J\subset\{1,2\}$ and $I\subset\{3,\ldots,r\}$. We divide the formula into three pieces corresponding to $J=\{1\}, J=\{2\}, J=\{1,2\}$:
\begin{alignat}{2} \label{3sums}
[D_0'\bullet E \to |D_0'|\cap |E|]
& = \sum_{I} i_*^{I\cup \{1\}} F_I^{p_3,\ldots,p_r} (L_3^{I\cup \{1\}}, \ldots, L_r^{I\cup \{1\}}) (1_{D^{I\cup \{1\}}}) \notag \\
& + \sum_{I} i_*^{I\cup \{2\}} F_I^{p_3,\ldots,p_r} (L_3^{I\cup \{2\}}, \ldots, L_r^{I\cup \{2\}}) (1_{D^{I\cup \{2\}}})\\
& + \sum_{I} i_*^{I\cup \{1,2\}} F_{\{1,2\}}^{1,1} (L_1^{I\cup \{1,2\}}, L_2^{I\cup \{1,2\}}) F_I^{p_3,\ldots,p_r} (L_3^{I\cup \{1,2\}}, \ldots, L_r^{I\cup \{1,2\}}) (1_{D^{I\cup \{1,2\}}}).\notag
\end{alignat}
Here we used that $F^{1,1}_J = 1$ if $|J|=1$. In this triple sum, the first sum is the push-forward of the class $[E|_{A'}\to |E|\cap A']$, hence it gives the distinguished lifting of $[A\to X]$. Similarly, the second sum gives the distinguished lifting of $[B\to X]$.
In the remainder of the proof we show that the third sum in Equation~(\ref{3sums}) gives the distinguished lifting of $-[\PP_{A\cap B}\to X]$. This is sufficient to finish the proof.
Indeed, when pushed forward to $|E|$, by Lemma~\ref{lem-double} the classes $[D_\infty'\bullet E \to |D_\infty'|\cap |E|]$ and $[D_0'\bullet E \to |D_0'|\cap |E|]$ are equal to $\ch(\cO_{|E|}(D_\infty))[E\to |E|] = \ch(\cO_{|E|}(D_0))[E\to |E|]$. Since in $|E|\to X$ every component lifts to $\tilde{X}$, the two classes are equal in $\Omega_*(X)_\pi$.
Consider for each subset $I\subset \{3,\ldots,r\}$ the smooth divisors $A'|_{D^I}$ and $B'|_{D^I}$ on $D^I$, intersecting transversely along $D^{I\cup\{1,2\}}$. Define
\[ \PP_{D^{I\cup\{1,2\}}} = \PP (\cO_{D^{I\cup\{1,2\}}}(A')\oplus \cO_{D^{I\cup\{1,2\}}}).\]
Using the same formula for $I=\emptyset$ defines $\PP_{D^{\{1,2\}}} = \PP_{A'\cap B'}$. Then clearly
\[ \PP_{D^{I\cup\{1,2\}}} = \PP_{A'\cap B'} \times_{A'\cap B'} D^{I\cup\{1,2\}}.\]
Let $D_0 = A' + B' + C'$, where $C'=\sum_{i>2} c_i D_i$ is a divisor lying over $(0,0)\in\PP^1\times\PP^1$. Then by Lemma~\ref{lem-pair},
\begin{equation} \label{eq-pp}
F_{\{1,2\}}^{1,1} (L_1^{I\cup \{1,2\}}, L_2^{I\cup \{1,2\}}) (1_{D^{I\cup\{1,2\}}}) = -[\PP_{D^{I\cup\{1,2\}}} \to D^{I\cup\{1,2\}}] + \beta_I,
\end{equation}
where
\[ \beta_I = \sum_{i,j\geq 0, l> 0} b_{ijl} \ch(\cO_{D^{I\cup\{1,2\}}}(A'))^i \ch(\cO_{D^{I\cup\{1,2\}}}(B'))^j \ch(\cO_{D^{I\cup\{1,2\}}}(C'))^l (1_{D^{I\cup\{1,2\}}}),\]
and the coefficients $b_{ijl}\in\LL$ are independent of $I$.
Consider the morphism $\PP_{A'\cap B'} \to A'\cap B' \to \PP^1$. The projective bundles $\PP_{A'\cap B'}$ and $\PP_{A\cap B}\times \PP^1$ are isomorphic over $\PP^1\setmin\{0\}$. Moreover,
the induced birational map $\PP_{A'\cap B'} \to \PP_{A\cap B}\times \PP^1$ satisfies the conditions of Remark~\ref{rem-dist-lift}, hence $\PP_{A'\cap B'}$ can be used to define the distinguished lifting of $[\PP_{A\cap B}\to X]$. This distinguished lifting (with minus sign) is obtained by substituting (\ref{eq-pp}) into the third sum of Equation~(\ref{3sums}) and setting all $\beta_I=0$:
\[ \sum_{I} i_*^{I\cup \{1,2\}} F_I^{p_3,\ldots,p_r} (L_3^{I\cup \{1,2\}}, \ldots, L_r^{I\cup \{1,2\}}) (-[\PP_{D^{I\cup\{1,2\}}} \to D^{I\cup\{1,2\}}]).\]
Now it suffices to prove that the sum of the terms in Equation~(\ref{3sums}) involving $\beta_I$ vanishes when pushed forward to $|E|$. Then it also vanishes in $\Omega_*(X)_\pi$.
We claim that there exists a class $\alpha\in\Omega_*(|E|)$, such that for each $I$, the class $\beta_I$ pushed forward to $|E|$ is equal to
\[ \left(\prod_{i\in I} \ch(\cO_{|E|}(D_i))\right)(\alpha).\]
For this note that the first Chern class operators on $\Omega_*(|E|)$ commute in the following sense. For each $i,j>2$, when pushed forward to $|E|$, the classes
\[ \ch(\cO_{D_j}(D_i) )(1_{D_j}) \ \ \textnormal{and} \ \ \ch(\cO_{D_i}(D_j)) (1_{D_i})\]
become equal. Now consider $\ch(\cO_{D^{I\cup\{1,2\}}}(C')) (1_{D^{I\cup\{1,2\}}})$. Using the formal group law, we can express $\ch(\cO_{D^{I\cup\{1,2\}}}(C'))$ using $\ch(\cO_{D^{I\cup\{1,2\}}}(D_i))$ for $i>2$ and by the commutativity property, $\ch(\cO_{D^{I\cup\{1,2\}}}(C')) (1_{D^{I\cup\{1,2\}}})$ when pushed forward to $|E|$ becomes equal to
\[ \left(\prod_{i\in I} \ch(\cO_{|E|}(D_i))\right)(\alpha')\]
for some $\alpha'\in \Omega_*(|E|)$, which is independent of $I$. Now set
\[ \alpha = \sum_{i,j\geq 0, l> 0} b_{ijl} \ch(\cO_{|E|}(A'))^i \ch(\cO_{|E|}(B'))^j \ch(\cO_{|E|}(C'))^{l-1} (\alpha').\]
Then clearly this $\alpha$ satisfies the required property.
The sum of terms in Equation~(\ref{3sums}) involving $\beta_I$, when pushed forward to $|E|$, becomes equal to
\[ F^{p_3,\ldots,p_r} ( \cO_{|E|}(D_3), \ldots, \cO_{|E|}(D_r)) (\alpha) = \ch(\cO_{|E|}(E))(\alpha).\]
Since $\cO_{|E|}(E)$ is trivial, this class vanishes.
\qed
\bibliographystyle{plain}
\section{Introduction} \label{intro}
The finite volume weighted essentially non$-$oscillatory (WENO) reconstruction scheme represents the state of the art among numerical methods for one$-$ and two$-$dimensional hyperbolic conservation laws \cite{titarev2005weno,Mignone-2014,Titarev-2004,Jiang-1996,Liu-1994,balsara2000monotonicity}. Finite volume methods evolve the volume averages, which change only when there is an imbalance of the fluxes across the control volume \cite{Mignone-2014}. Flux evaluation at an interface therefore requires the important task of reconstructing the cell averaged value at the interface \cite{Mignone-2014}. High order reconstruction is preferred for complex flow phenomena including discontinuous flows \cite{dumbser2013efficient,shu2009high}, smooth flows with turbulence \cite{dumbser2013high,Shu-2003}, aeroacoustics \cite{Shu-2003}, sediment transport \cite{vcrnjaric2004extension}, and magnetohydrodynamics (MHD) \cite{Jiang-1999,balsara2009divergence,balsara2009efficient}. Among the plethora of reconstruction techniques, including the $p^{th}$ order accurate essentially non$-$oscillatory (ENO) scheme \cite{Casper-1993}, second order total variation diminishing (TVD) methods \cite{Mignone-2014}, discontinuous Galerkin methods \cite{Shu-2003}, and the modified piecewise parabolic method (PPM) \cite{Mignone-2014,colella1984piecewise,colella2008limiter,mccorquodale2011high}, WENO stands out by virtue of attaining a convexly combined $(2p-1)^{th}$ order of convergence for smooth flows, aided by a novel ENO strategy that maintains high order accuracy even for discontinuous flows \cite{Mignone-2014,Casper-1993}.
The conventional WENO scheme is specifically designed for reconstruction in Cartesian coordinates on uniform grids \cite{Jiang-1996, Liu-1994}. For an arbitrary curvilinear mesh, a Jacobian is employed in order to map the general curvilinear mesh to a uniform Cartesian mesh \cite{Casper-1993}. However, the employment of a Cartesian$-$based reconstruction scheme on a curvilinear grid suffers from a number of drawbacks. For example, in the original PPM paper \cite{colella1984piecewise}, reconstruction was performed in volume coordinates (rather than the linear ones) so that the algorithm for a Cartesian mesh could be used on a cylindrical/spherical mesh; the resulting interface states, however, became first order accurate even for smooth flows \cite{colella1984piecewise}. Another example is the assignment of the volume average to the geometrical cell center of the finite volume rather than to the centroid \cite{monchmeyer1989conservative, doi:10.1093/mnras/250.3.581, ziegler2011semi}. Reconstruction in general coordinates can be performed with the aid of two techniques: genuine multi$-$dimensional reconstruction and dimension$-$by$-$dimension reconstruction \cite{Casper-1993}. Genuine multi$-$dimensional reconstruction is computationally expensive and highly complicated, since it considers all of the finite volumes while constructing the polynomial \cite{Casper-1993}. A better approach is to perform a dimension$-$by$-$dimension reconstruction, since it consists of less expensive one$-$dimensional sweeps in every dimension; moreover, most problems of engineering interest are posed in orthogonally$-$curvilinear coordinates like Cartesian, cylindrical, and spherical coordinates with regularly$-$spaced and irregularly$-$spaced grids. A breakthrough in the field of high order reconstruction in these coordinates is the application of Vandermonde$-$like linear systems of equations with spatially varying coefficients \cite{Mignone-2014}. 
That approach is reintroduced in the present work to build a basis for the derivation of the high order WENO schemes. Mignone \cite{Mignone-2014} restricted his work to the third order WENO approach with the weight functions provided by Yamaleev and Carpenter \cite{Yamaleev-2009} and did not extend it to multiple dimensions (2D and 3D). In Mignone's paper \cite{Mignone-2014}, the modified piecewise parabolic method (PPM$_5$) of order $\sim 2-3$ gave better results than the modified third order WENO; the latter reconstruction scheme, however, gave consistent values for all the numerical tests performed. Also, there is a drop of accuracy in the modified third order WENO scheme for discontinuous flow cases \cite{Mignone-2014} when the standard weights derived by Jiang and Shu \cite{Jiang-1996} are used, as they are specifically restricted to Cartesian grids.
The motivation for the present work is to develop a fifth order finite volume WENO$-$C reconstruction scheme in orthogonally$-$curvilinear coordinates for regularly$-$spaced and irregularly$-$spaced grids. It is based on the concept of linear weights by Mignone \cite{Mignone-2014} and on the optimal weights and smoothness indicators by Jiang and Shu \cite{Jiang-1996}. The present work also provides a computationally efficient extension of this scheme to multiple dimensions and deals with the source terms in a straightforward manner.
The present work is divided into four sections. Section \ref{WENO} includes the fifth order finite volume WENO$-$C reconstruction procedure for a regularly$-$/irregularly$-$spaced grid in orthogonally$-$curvilinear coordinates. It is followed by Section \ref{tests}, in which 1D and 2D numerical benchmark tests involving smooth and discontinuous flows in cylindrical and spherical coordinates are presented. Finally, Section \ref{conclusions} concludes the paper. The Appendix at the end is divided into two sections: the first includes the analytical values of the weights required for the WENO$-$C reconstruction and the flux/source term integration on standard uniform grids, whereas the second includes the linear stability analysis of the proposed scheme.
\section{Fifth order finite volume WENO$-$C reconstruction} \label{WENO}
\subsection{Finite volume discretization in curvilinear coordinates}
The scalar conservation law in an orthogonal system of coordinates $(x_1,x_2,x_3)$ having the scale factors $h_1,h_2,h_3$ and unit vectors $(\bf{\hat{e}_1,\hat{e}_2,\hat{e}_3})$ in the respective directions, is given in Eq. (\ref{eq:1}).
\begin{equation} \label{eq:1}
\frac{\partial{Q}}{\partial{t}}+\nabla{\bf{.F}} =S
\end{equation}
where $Q$ is the conserved quantity of the fluid, ${\bf{F}}=(F_1,F_2,F_3)$ is the corresponding flux vector, and $S$ is the source term. The divergence operator is further expressed in the form of Eq. (\ref{eq:2}).
\begin{equation} \label{eq:2}
\nabla{\bf{.F}}=\frac{1}{h_1h_2h_3}\bigg[{\frac{\partial}{\partial{x_1}}(h_2h_3F_1)+\frac{\partial}{\partial{x_2}}(h_1h_3F_2)+\frac{\partial}{\partial{x_3}}(h_1h_2F_3)}\bigg]
\end{equation}
Eq. (\ref{eq:1}) is discretized over a computational domain comprising $N_1 \times N_2 \times N_3$ cells in the corresponding directions with the grid sizes given in Eq. (\ref{eq:3}).
\begin{equation} \label{eq:3}
\Delta{x_{1,i}}=x_{1,i+\frac{1}{2}}-x_{1,i-\frac{1}{2}},\quad \Delta{x_{2,j}}=x_{2,j+\frac{1}{2}}-x_{2,j-\frac{1}{2}},\quad \Delta{x_{3,k}}=x_{3,k+\frac{1}{2}}-x_{3,k-\frac{1}{2}}
\end{equation}
For the sake of simplicity, the index triple $(i,j,k)$ is denoted by $\bf{i}$, where $\bf{i} \in \mathbb{Z}^3$ is the vector of coordinate indices in the computational domain with $1\le i \le N_1$, $1\le j \le N_2$, and $1\le k \le N_3$. Also, a cell interface orthogonal to the direction $\bf{\hat{e}_d}$ is denoted by $\bf{i}\pm\frac{1}{2}\bf{\hat{e}_d}$; for example, $\bf{i}\pm\frac{1}{2}\bf{\hat{e}_1}$ refers to the ${i\pm\frac{1}{2}}$ interfaces of the cell $\bf i$ in the $\bf{\hat{e}_1}$ direction. The cell volume is given in Eq. (\ref{eq:4}).
\begin{equation}\label{eq:4}
\Delta{\mathcal{V}_{i,j,k}=\int_{x_{3,k-\frac{1}{2}}}^{x_{3,k+\frac{1}{2}}}\int_{x_{2,j-\frac{1}{2}}}^{x_{2,j+\frac{1}{2}}}\int_{x_{1,i-\frac{1}{2}}}^{x_{1,i+\frac{1}{2}}}h_1h_2h_3dx_1dx_2dx_3}
\end{equation}
The flux component $F_1$ is averaged over the surface area $A_1$ of the interface $\bf{i}+\frac{1}{2}\bf{\hat{e}_1}$, as given in Eq. (\ref{eq:5}).
\begin{equation}\label{eq:5}
\tilde{F}_{1,{\bf{i}+\frac{1}{2}\bf{\hat{e}_1}}}=\frac{1}{A_{1,{\bf{i}+\frac{1}{2}\bf{\hat{e}_1}}}}\int_{x_{3,k-\frac{1}{2}}}^{x_{3,k+\frac{1}{2}}}\int_{x_{2,j-\frac{1}{2}}}^{x_{2,j+\frac{1}{2}}}F_1 h_2 h_3 dx_2 dx_3
\end{equation}
where the cross$-$sectional area ${A_{1,{\bf{i}+\frac{1}{2}\bf{\hat{e}_1}}}}$ is provided in Eq. (\ref{eq:6}). Here the scale factors $h_2,h_3$ are functions of the position at the interface ${\bf{i}+\frac{1}{2}\bf{\hat{e}_1}}$.
\begin{equation}\label{eq:6}
A_{1,{\bf{i}+\frac{1}{2}\bf{\hat{e}_1}}}=\int_{x_{3,k-\frac{1}{2}}}^{x_{3,k+\frac{1}{2}}}\int_{x_{2,j-\frac{1}{2}}}^{x_{2,j+\frac{1}{2}}}h_2 h_3 dx_2 dx_3
\end{equation}
Similarly, the expressions for the other directions ($d=2,3$) can be obtained by cyclic permutations. The final form of the discretized conservation law can be derived by integrating Eq. (\ref{eq:1}) over the cell volume and applying the Gauss theorem to the flux term yielding Eq. (\ref{eq:7}), where $\bar{Q}_{\bf{i}}$ and $\bar{S}_{\bf{i}}$ are respectively the conservative variable and the source term averaged over the finite volume $\bf{i}$.
\begin{equation}\label{eq:7}
\frac{\partial}{\partial{t}} \bar{Q}_{\textbf{i}}+ \frac{1}{\Delta \mathcal{V}_\textbf{i}}\sum\limits_{d}\bigg[(A_d\tilde{F}_d)_{\textbf{i}+\frac{1}{2}\bf{\hat{e}_d}}-(A_d\tilde{F}_d)_{\textbf{i}-\frac{1}{2}\bf{\hat{e}_d}}\bigg]=\bar{S}_{\textbf{i}}
\end{equation}
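As a minimal illustration (not part of the scheme itself; the function name and argument layout are ours), the flux$-$balance structure of Eq. (\ref{eq:7}) can be sketched as:

```python
def fv_rhs(A_minus, A_plus, F_minus, F_plus, dV, S):
    # Eq. (7): dQbar/dt = Sbar - (1/dV) * sum_d [(A F)_{i+1/2,d} - (A F)_{i-1/2,d}]
    # A_*: face areas per direction, F_*: surface-averaged fluxes per direction
    balance = sum(Ap * Fp - Am * Fm
                  for Am, Ap, Fm, Fp in zip(A_minus, A_plus, F_minus, F_plus))
    return S - balance / dV
```

For a cell with equal inflow and outflow of $A\tilde{F}$ in every direction, the balance vanishes and $\partial_t\bar{Q}_{\bf{i}}=\bar{S}_{\bf{i}}$.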
In cylindrical coordinates, ($x_1,x_2,x_3$)$\equiv$($R,\theta,z$), ($h_1,h_2,h_3$)$\equiv$($1,R,1$), and Eq. (\ref{eq:7}) transforms into Eq. (\ref{eq:8}).
\begin{equation}\label{eq:8}
\begin{split}
\frac{\partial}{\partial{t}}\bar{Q}_{\bf{i}}=-{\frac{(\tilde{F}_RR)_{\bf{i}+\frac{1}{2}\bf{\hat{e}_r}}-(\tilde{F}_RR)_{\bf{i}-\frac{1}{2}\bf{\hat{e}_r}}}{\Delta{\mathcal{V}_{R,i}}}}-\frac{(\tilde{F}_\theta)_{\bf{i}+\frac{1}{2}\bf{\hat{e}_{\theta}}}-(\tilde{F}_\theta)_{\bf{i}-\frac{1}{2}\bf{\hat{e}_{\theta}}}}{R_i\Delta{\theta_j}} \\
-\frac{(\tilde{F}_z)_{\bf{i}+\frac{1}{2}\bf{\hat{e}_z}}-(\tilde{F}_z)_{\bf{i}-\frac{1}{2}\bf{\hat{e}_z}}}{\Delta{z_k}}+\bar{S}_{\bf{i}}
\end{split}
\end{equation}
where ($\tilde{F}_R,\tilde{F_{\theta}},\tilde{F}_z$) are the surface averaged flux vector ($\bf{F}$) components in ($R,\theta,z$) directions and $\Delta{\mathcal{V}_{R,i}}=(R_{i+\frac{1}{2}}^2-R_{i-\frac{1}{2}}^2)/2$ is the cell radial volume.
In spherical coordinates, ($x_1,x_2,x_3$)$\equiv$($r,\theta,\phi$), ($h_1,h_2,h_3$)$\equiv$($1,r,r\sin\theta$), and Eq. (\ref{eq:7}) transforms into Eq. (\ref{eq:9}).
\begin{equation}\label{eq:9}
\begin{split}
\frac{\partial}{\partial{t}}\bar{Q}_{\bf{i}}=-{\frac{(\tilde{F}_rr^2)_{\bf{i}+\frac{1}{2}\bf{\hat{e}_r}}-(\tilde{F}_rr^2)_{\bf{i}-\frac{1}{2}\bf{\hat{e}_r}}}{\Delta{\mathcal{V}_{r,i}}}}-\frac{(\tilde{F}_{\theta}\sin\theta)_{\bf{i}+\frac{1}{2}\bf{\hat{e}_{\theta}}}-(\tilde{F}_{\theta}\sin\theta)_{\bf{i}-\frac{1}{2}\bf{\hat{e}_{\theta}}}}{\tilde{r}_i\Delta{\mu_j}} \\
-{\frac{\Delta{\theta_j}}{\Delta{\mu_j}}}\frac{(\tilde{F}_\phi)_{\bf{i}+\frac{1}{2}\bf{\hat{e}_{\phi}}}-(\tilde{F}_\phi)_{\bf{i}-\frac{1}{2}\bf{\hat{e}_{\phi}}}}{\tilde{r}_i\Delta{\phi_k}}+\bar{S}_{\bf{i}}
\end{split}
\end{equation}
where ($\tilde{F}_r,\tilde{F_{\theta}},\tilde{F}_{\phi}$) are the surface averaged flux vector components in ($r,\theta,\phi$) directions and the remaining geometrical factors are provided in Eq. (\ref{eq:10}).
\begin{equation} \label{eq:10}
\Delta{\mathcal{V}_{r,i}}=\frac{(r_{i+\frac{1}{2}}^3-r_{i-\frac{1}{2}}^3)}{3};\quad \tilde{r}_i=\frac{2}{3}\frac{(r_{i+\frac{1}{2}}^3-r_{i-\frac{1}{2}}^3)}{(r_{i+\frac{1}{2}}^2-r_{i-\frac{1}{2}}^2)}; \quad \Delta{\mu_j}=\cos{\theta_{j-\frac{1}{2}}}-\cos{\theta_{j+\frac{1}{2}}}
\end{equation}
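For illustration, the geometric factors of Eqs. (\ref{eq:8})$-$(\ref{eq:10}) follow directly from the face coordinates; the sketch below (function names are ours) evaluates the radial volumes, the effective radius $\tilde{r}_i$, and the meridional factor $\Delta\mu_j$:

```python
import math

def cyl_radial_volume(R_minus, R_plus):
    # Cylindrical cell radial volume of Eq. (8): (R_{i+1/2}^2 - R_{i-1/2}^2)/2
    return 0.5 * (R_plus**2 - R_minus**2)

def sph_factors(r_minus, r_plus, th_minus, th_plus):
    # Spherical geometric factors of Eq. (10)
    dV_r = (r_plus**3 - r_minus**3) / 3.0                 # radial volume
    r_tilde = (2.0 / 3.0) * (r_plus**3 - r_minus**3) \
              / (r_plus**2 - r_minus**2)                  # effective radius
    dmu = math.cos(th_minus) - math.cos(th_plus)          # meridional factor
    return dV_r, r_tilde, dmu
```

For the spherical cell $r\in[1,2]$, $\theta\in[0,\pi/2]$ this gives $\Delta\mathcal{V}_r=7/3$, $\tilde{r}=14/9$, and $\Delta\mu=1$.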
\subsection{Evaluation of the linear weights} \label{linearweights}
A non$-$uniform grid spacing with zone width $\Delta{\xi}_{i}={\xi}_{i+\frac{1}{2}}-{\xi}_{i-\frac{1}{2}}$ is considered, where $\xi \in (x_1,x_2,x_3)$ is the coordinate along the reconstruction direction and ${\xi}_{i+\frac{1}{2}}$ denotes the location of the cell interface between zones $i$ and $i+1$. Let $\bar{Q}_{i}$ be the cell average of the conserved quantity $Q$ inside zone $i$ at some given time, which can be expressed in the form of Eq. (\ref{eq:11}).
\begin{equation} \label{eq:11}
\bar{Q}_{i} = \frac{1}{{\Delta\mathcal{V}_{i}}}{\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}Q_i(\xi)\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi}
\end{equation}
where the local cell volume $\Delta{\mathcal{V}}_i$ of the $i^{th}$ cell in the direction of reconstruction is given in Eq. (\ref{eq:12})
\begin{equation} \label{eq:12}
\Delta{\mathcal{V}}_i={\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi}
\end{equation}
$\frac{\partial{\mathcal{V}}}{\partial\xi}$ is a one$-$dimensional Jacobian whose values for volumetric operations are summarized in Table \ref{tab:1} for structured grids in standard coordinates.
\begin{table}[h!]
\centering
\caption{One$-$dimensional Jacobian $\big(\frac{\partial{\mathcal{V}}}{\partial\xi}\big)$ values for the regularly$-$spaced grids for volumetric operations}
\begin{tabular}{ |c | c | c |}
\hline
\mbox{$Coordinates$} & {$Direction(s)$} & {$\frac{\partial{\mathcal{V}}}{\partial\xi}$}\\
\hline
Cartesian & $x, y, z$ & $\xi^0$\\
\hline
\multirow{2}{*}{Cylindrical} & $R$ & $\xi^1$\\
\cline{2-3}
& $\theta,z$ & $\xi^0$\\
\hline
\multirow{3}{*}{Spherical} & $r$ & $\xi^2$\\
\cline{2-3}
& $\theta$ & $\sin\xi$\\
\cline{2-3}
& $\phi$ & $\xi^0$\\
\hline
\end{tabular}
\label{tab:1}
\end{table}
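The entries of Table \ref{tab:1} translate directly into code. The sketch below (names are ours) pairs them with a composite Simpson quadrature of Eq. (\ref{eq:12}), a generic fallback that happens to be exact for the power$-$law Jacobians:

```python
import math

# One-dimensional Jacobians dV/dxi of Table 1
JACOBIAN = {
    "cartesian":       lambda xi: 1.0,
    "cylindrical_R":   lambda xi: xi,
    "spherical_r":     lambda xi: xi * xi,
    "spherical_theta": lambda xi: math.sin(xi),
}

def cell_volume(jac, xi_minus, xi_plus, n=200):
    # Eq. (12) by composite Simpson quadrature (n must be even)
    h = (xi_plus - xi_minus) / n
    s = jac(xi_minus) + jac(xi_plus)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * jac(xi_minus + k * h)
    return s * h / 3.0
```

For example, the spherical$-$radial cell $[1,2]$ has volume $7/3$ and the meridional cell $[0,\pi/2]$ has volume $\Delta\mu=1$.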
Now, our aim is to find a $p^{th}$ order accurate approximation to the actual solution by constructing a $(p-1)^{th}$ order polynomial distribution, as given in Eq. (\ref{eq:13}).
\begin{equation} \label{eq:13}
Q_i(\xi) = a_{i,0} +a_{i,1}({\xi}-{\xi_i^c})+a_{i,2}({\xi}-{\xi_i^c})^2 +...+a_{i,p-1}({\xi}-{\xi_i^c})^{p-1}
\end{equation}
where ${a_{i,n}}$ is the vector of coefficients to be determined and ${\xi_i^c}$ can be taken as the cell centroid. However, the final values at the interface are independent of the particular choice of ${\xi_i^c}$, and one may as well set ${\xi_i^c}=0$ \cite{Mignone-2014}. Unlike the cell center, the centroid is not equidistant from the cell interfaces in the case of curvilinear coordinates, and the cell averaged values are assigned at the centroid \cite{Mignone-2014}. Further, the method has to be locally conservative, i.e., the polynomial $Q_i(\xi)$ must fit the neighboring cell averages, satisfying Eq. (\ref{eq:14}).
\begin{equation} \label{eq:14}
{\int_{{\xi}_{i+s-\frac{1}{2}}}^{{\xi}_{i+s+\frac{1}{2}}}Q_i(\xi)\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi} = {{\Delta\mathcal{V}_{i+s}}}\bar{Q}_{i+s}\quad\quad\textrm{for}\quad-i_L\le s \le i_R
\end{equation}
where the stencil includes $i_L$ cells to the left and $i_R$ cells to the right of the $i^{th}$ zone such that $i_L+i_R+1 = p$. Substituting Eqs. (\ref{eq:12}) and (\ref{eq:13}) into Eq. (\ref{eq:14}) and simplifying leads to the $p\times p$ linear system (\ref{eq:15}) in the coefficients \{${a_{i,n}}$\}.
\begin{equation} \label{eq:15}
\begin{pmatrix}
\beta_{i-i_L,0} & \dots & \beta_{i-i_L,p-1} \\
\vdots & \ddots & \vdots \\
\beta_{i+i_R,0} & \dots & \beta_{i+i_R,p-1}
\end{pmatrix}
\begin{pmatrix}
a_{i,0} \\
\vdots \\
a_{i,p-1}
\end{pmatrix}
=
\begin{pmatrix}
\bar{Q}_{i-i_L} \\
\vdots \\
\bar{Q}_{i+i_R}
\end{pmatrix}
\end{equation}
where
\begin{equation} \label{eq:16}
\beta_{i+s,n}=\frac{1}{\Delta{\mathcal{V}}_{i+s}}{\int_{{\xi}_{i+s-\frac{1}{2}}}^{{\xi}_{i+s+\frac{1}{2}}}({\xi-\xi_i^c})^{n}\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi}
\end{equation}
Eq. (\ref{eq:15}) can be written in the short notation using a $p\times p$ matrix $\bf{B}$ with rows indexed by $s=-i_L,...,i_R$ and columns by $n=0,...,p-1$.
\begin{equation} \label{eq:17}
\sum \limits_{n=0}^{p-1}{\bf{B}}_{sn}a_{i,n}=\bar{Q}_{i+s}
\end{equation}
However, the evaluation of the coefficients $a_{i,n}$ in Eqs. (\ref{eq:15}) and (\ref{eq:17}) requires the zone averaged values $\bar{Q}_{i}$ and must therefore be repeated at every time step, which increases the computational cost of the whole process. The coefficients $\{a_{i,n}\}$ extracted from Eq. (\ref{eq:15}) then yield the interface values through condition (\ref{eq:18}).
\begin{equation} \label{eq:18}
q_i^+=\lim_{\xi \to \xi_{i+\frac{1}{2}}^{(-)}}Q_i(\xi)=\sum \limits_{n=0}^{p-1}a_{i,n}(\xi_{i+\frac{1}{2}}-\xi_i^c)^n; \quad q_i^-=\lim_{\xi \to \xi_{i-\frac{1}{2}}^{(+)}}Q_i(\xi)=\sum \limits_{n=0}^{p-1}a_{i,n}(\xi_{i-\frac{1}{2}}-\xi_i^c)^n
\end{equation}
A more efficient approach for evaluating the left and right interface values is to use a linear combination of the adjacent cell averaged values \cite{Mignone-2014}, as given in Eq. (\ref{eq:19}).
\begin{equation} \label{eq:19}
q_i^{\pm} = \sum\limits_{s=-i_L}^{i_R}{w_{i,s}^{\pm}}\bar{Q}_{i+s}
\end{equation}
From Eq. (\ref{eq:17}), after inverting the matrix ${\bf{B}}$, we get relation (\ref{eq:20}).
\begin{equation} \label{eq:20}
a_{i,n}=\sum \limits_{s=-i_L}^{i_R}{\bf{C}}_{ns}\bar{Q}_{i+s}
\end{equation}
where ${\bf{C}}={\bf{B}}^{-1}$ denotes the inverse of the matrix ${\bf{B}}$, which exists only if ${\bf{B}}$ is nonsingular.
After combining Eqs. (\ref{eq:18}) and (\ref{eq:20}), we get
\begin{equation} \label{eq:21}
q_i^\pm=\sum \limits_{n=0}^{p-1}\bigg(\sum \limits_{s=-i_L}^{i_R}{\bf{C}}_{ns}\bar{Q}_{i+s} \bigg)(\xi_{i\pm \frac{1}{2}}-\xi_i^c)^n=\sum \limits_{s=-i_L}^{i_R}\bar{Q}_{i+s}\bigg(\sum \limits_{n=0}^{p-1}{\bf{C}}_{ns}(\xi_{i\pm \frac{1}{2}}-\xi_i^c)^n\bigg)
\end{equation}
By comparing Eqs. (\ref{eq:19}) and (\ref{eq:21}), we can extract the matrix of weights $w_{i,s}^\pm$.
\begin{equation} \label{eq:22}
w_{i,s}^\pm=\sum \limits_{n=0}^{p-1}{\bf{C}}_{ns}(\xi_{i\pm \frac{1}{2}}-\xi_i^c)^n
\end{equation}
Since ${\bf{C}}_{ns}={(\bf{C}^T)_{sn}}=((\bf{B}^{T})^{-1})_{sn}$, Eq. (\ref{eq:22}) can finally be written in the form of Eq. (\ref{eq:23}).
\begin{equation} \label{eq:23}
{\sum \limits_{s=-i_L}^{i_R}{{(\bf{B}}^{T})_{ns}w_{i,s}^\pm=(\xi_{i\pm \frac{1}{2}}-\xi_i^c)^n}}
\end{equation}
Therefore, the weights $w_{i,s}^{\pm}$ satisfy Eq. (\ref{eq:24}) \cite{Mignone-2014}, which is the fundamental equation for reconstruction in orthogonally$-$curvilinear coordinates.
\begin{equation} \label{eq:24}
\begin{pmatrix}
\beta_{i-i_L,0} & \dots & \beta_{i-i_L,p-1} \\
\vdots & \ddots & \vdots \\
\beta_{i+i_R,0} & \dots & \beta_{i+i_R,p-1}
\end{pmatrix}
^T
\begin{pmatrix}
w_{i,-i_L}^{\pm} \\
\vdots \\
w_{i,i_R}^{\pm}
\end{pmatrix}
=
\begin{pmatrix}
1 \\
\vdots \\
(\xi_{i\pm\frac{1}{2}}-\xi_i^c)^{p-1}
\end{pmatrix}
\end{equation}
Also, the grid dependent linear weights ($w_{i,s}^\pm$) satisfy the normalization condition (\ref{eq:25}) \cite{Mignone-2014}.
\begin{equation} \label{eq:25}
{\sum \limits_{s=-i_L}^{i_R}w_{i,s}^\pm=1}
\end{equation}
Some important remarks on the linear weights in the proposed scheme are as follows:
\begin{enumerate}
\item Eq. (\ref{eq:24}) is capable of evaluating the grid generated linear weights for any regularly$-$/irregularly$-$spaced mesh in orthogonally$-$curvilinear coordinates.
It is observed that these weights are independent of the mesh size for standard regularly$-$spaced grid cases, but depend on the grid type. Also, they can be evaluated and stored (at a nominal cost) independently before the actual computation, after the grid type is finalized.
\item For fifth order WENO, three sets of third order ($p=3$) stencils ($S_l$) are chosen, namely
\begin{itemize}
\item $S_0 :: (i-2,i-1,i)$, i.e., $i_L=2$, $i_R=0$ \item $S_1 :: (i-1,i,i+1)$, i.e., $i_L=1$, $i_R=1$ \item $S_2 :: (i,i+1,i+2)$, i.e., $i_L=0$, $i_R=2$.
\end{itemize}
In addition to this, a symmetric five$-$point stencil $S_5$ :: $(i-2,i-1,i,i+1,i+2)$ is used to extract the values of the optimal weights in subsection \ref{optimalweights}.
\item The final interface values (\ref{eq:19}) and the linear weights depend only on the order of the reconstruction polynomial and not on $\xi_i^c$, which can be set to zero \cite{Mignone-2014}.
\item The values simplify when the Jacobian is a simple power of $\xi$, i.e., $\frac{\partial{\mathcal{V}}}{\partial\xi}=\xi^m$. Then, $\beta_{i+s,n}$ of Eq. (\ref{eq:16}) can be written in the simplified form (\ref{eq:26}).
\begin{equation} \label{eq:26}
\beta_{i+s,n}=\frac{m+1}{n+m+1}\frac{\xi_{i+s+\frac{1}{2}}^{n+m+1}-\xi_{i+s-\frac{1}{2}}^{n+m+1}}{\xi_{i+s+\frac{1}{2}}^{m+1}-\xi_{i+s-\frac{1}{2}}^{m+1}}
\end{equation}
\item For the spherical$-$meridional coordinate, $\beta_{i+s,n}$ of Eq. (\ref{eq:16}) becomes highly complex since $\frac{\partial{\mathcal{V}}}{\partial\xi}=\sin\xi$. The value of $\beta_{i+s,n}$ can be computed from Eq. (\ref{eq:27}), and the resulting system (\ref{eq:24}) then needs to be solved numerically, e.g., by the LU decomposition method.
\begin{equation} \label{eq:27}
\beta_{i+s,n}=\frac{1}{cos\xi_{i_{s-}}-cos\xi_{i_{s+}}}\sum_{k=0}^{n}k!\begin{pmatrix}
n\\
k
\end{pmatrix}
\bigg[\xi_{i_{s-}}^{n-k}\cos\bigg(\xi_{i_{s-}}+\frac{k\pi}{2}\bigg)- \xi_{i_{s+}}^{n-k}\cos\bigg(\xi_{i_{s+}}+\frac{k\pi}{2}\bigg) \bigg]
\end{equation}
where $i_{s\pm}$ refers to $i+s\pm \frac{1}{2}$.
\item For non$-$standard grids, $\frac{\partial{\mathcal{V}}}{\partial{\xi}}$ is not a simple function, which makes the direct integration highly complex and time consuming. Therefore, such cases are tackled by numerical integration of Eq. (\ref{eq:16}) followed by matrix inversion in Eq. (\ref{eq:24}).
\item Eq. (\ref{eq:24}) can also be used to compute point$-$values of $Q(\xi)$ at points other than the interfaces, e.g., the cell center ($q_i^M$). The value at the cell center is obtained by setting the right hand side of the system (\ref{eq:24}) to $(1,0,0,...,0)^T$ with $\xi_i^c=0$, which is important in the case of nonlinear systems of equations, where the reconstruction of the primitive variables is done instead of the conserved variables \cite{Mignone-2014}.
\item The linear positive ($w_i^+$), middle ($w_i^M$), and negative ($w_i^-$) weights for the WENO reconstruction on the standard regularly$-$spaced grids in Cartesian, cylindrical, and spherical coordinates are summarized in \ref{Cartesianlinearweights}, \ref{cylindricallinearweights}, and \ref{sphericallinearweights}, respectively. The analytical solutions for the spherical$-$meridional coordinate $(\theta)$ are highly intricate and those for irregularly$-$spaced grids are case$-$specific; thus, they are not listed in this paper, as they need to be dealt with numerically. \end{enumerate}
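The procedure of this subsection can be sketched compactly for a power$-$law Jacobian $\partial\mathcal{V}/\partial\xi=\xi^m$ and $\xi_i^c=0$: build $\beta_{i+s,n}$ from Eq. (\ref{eq:26}), then solve the transposed system (\ref{eq:24}) for the linear weights. The small Gaussian elimination helper stands in for any linear solver (e.g., the LU decomposition mentioned above); all function names are ours.

```python
def beta(m, n, xi_m, xi_p):
    # Eq. (26): n-th moment of the cell [xi_m, xi_p] for dV/dxi = xi^m, xi_c = 0
    return (m + 1.0) / (n + m + 1.0) \
        * (xi_p**(n + m + 1) - xi_m**(n + m + 1)) / (xi_p**(m + 1) - xi_m**(m + 1))

def solve(A, b):
    # Gaussian elimination with partial pivoting (generic helper, not from the paper)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def linear_weights(m, faces, i, i_L, i_R, side):
    # Solve Eq. (24): B^T w = (1, d, d^2, ...) with d = xi_{i±1/2} (xi_c = 0);
    # faces[c] is the left interface of cell c
    p = i_L + i_R + 1
    cells = range(i - i_L, i + i_R + 1)
    BT = [[beta(m, n, faces[c], faces[c + 1]) for c in cells] for n in range(p)]
    d = faces[i + 1] if side == '+' else faces[i]
    return solve(BT, [d**n for n in range(p)])
```

For a uniform Cartesian grid ($m=0$) and the central stencil $S_1$, the routine reproduces the classical right$-$interface weights $(-1/6,\,5/6,\,1/3)$, and in every case the weights satisfy the normalization (\ref{eq:25}).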
The weights and the stencils are denoted by $w_{i,l,k}^{p\pm}$ and $S_{l}^{p\pm}$ respectively, where $k$ is the offset of the weighted cell with respect to the cell considered for reconstruction $(i)$, $p$ is the order of reconstruction ($p=i_L+i_R+1$), $l$ is the stencil number, and `$\pm$' denotes the positive and negative weights, i.e., the weights for reconstructing the right ($+$) and left ($-$) interface values, respectively. The derivation of the middle (mid$-$value) weights ($w_{i,l,k}^{pM}$) follows the same procedure.
The reconstructed value ${q}_{i,l}^{p\pm}$ represents the ${p^{th}}-$order reconstructed value at the right ($+$) or left ($-$) interface of the $i^{th}$ cell on stencil $l$. The interpolated values at the interface for the WENO reconstruction are given by the linear combination (\ref{eq:28}), where $i_L$ and $i_R$ depend on the stencil $l$.
\begin{equation} \label{eq:28}
q_{i,l}^{p\pm}=\sum\limits_{s=-i_L}^{i_R}w_{i,l,s}^{p\pm}\bar{Q}_{i+s}
\end{equation}
\subsection{Optimal weights} \label{optimalweights}
The weights which combine the lower order interpolated variables into a higher order accurate variable are known as the optimal weights \cite{Jiang-1996,Liu-1994}. For the fifth order WENO interpolation, the third order interpolated variables are optimally weighted in order to achieve fifth order accurate interpolated values, as given in Eq. (\ref{eq:29}) for the case of $p=3$.
\begin{equation} \label{eq:29}
q_{i,0}^{(2p-1)\pm}=\sum\limits_{l=0}^{p-1}C_{i,l}^\pm q_{i,l}^{p\pm}
\end{equation}
where $C_{i,l}^\pm$ is the optimal weight for the positive/negative case on the $i^{th}$ finite volume. The mid$-$value weights $C_{i,l}^M$ follow the same procedure.
So, Eqs. (\ref{eq:24}) and (\ref{eq:26}) are used again to evaluate the weights for the fifth order ($2p-1=5$) interpolation ($i_L=2,i_R=2$). The fifth order interpolated variable at the interface is equated with the sum of the optimally weighted third order interpolated variables, as given in Eq. (\ref{eq:29}). The optimal weights $C_{i,l}^\pm$ are evaluated by equating the coefficients of $\bar{Q}$, resulting in ($2p-1$) equations with $p$ unknowns. For the fifth order WENO$-$C reconstruction, the case simplifies to the system of linear equations given in Eq. (\ref{eq:30}), obtained by selecting the $\bar{Q}_{i-2}$, $\bar{Q}_{i}$, and $\bar{Q}_{i+2}$ coefficients to reduce the computational cost.
\begin{equation} \label{eq:30}
C_{i,0}^\pm=\frac{w_{i,0,-2}^{5\pm}}{{w_{i,0,-2}^{3\pm}}};\quad C_{i,2}^\pm=\frac{w_{i,0,+2}^{5\pm}}{{w_{i,2,+2}^{3\pm}}};\quad C_{i,1}^\pm=\frac{w_{i,0,0}^{5\pm}-C_{i,0}^\pm w_{i,0,0}^{3\pm}-C_{i,2}^\pm w_{i,2,0}^{3\pm}}{w_{i,1,0}^{3\pm}}
\end{equation}
Some remarks regarding the optimal weights are given below:
\begin{enumerate}
\item The optimal weights always sum to unity, and their values are independent of which coefficients of $\bar{Q}$ are equated in Eq. (\ref{eq:29}).
\item Since the weights are independent of the conserved variables, the optimal weights are also constants for a selected orthogonally$-$curvilinear mesh and can be computed in advance at a small storage cost.
\item The analytical values in the Cartesian, cylindrical$-$radial, and spherical$-$radial coordinates for a regularly$-$spaced grid are provided in \ref{Cartesianoptimalweights}, \ref{cylindricaloptimalweights}, and \ref{sphericaloptimalweights} respectively.
\item The only case where the optimal weights are mirror$-$symmetric is that of the regularly$-$spaced grid in Cartesian coordinates. There the optimal weights coincide with those of the conventional fifth order WENO reconstruction \cite{Titarev-2004,Jiang-1996}; the same values are also recovered as $i \to \infty$ (the limit of vanishing curvature) for regularly$-$spaced grids in the cylindrical$-$radial and spherical$-$radial coordinates.
\item The weights for the spherical$-$radial coordinate are much more complex. For spherical coordinates, it is therefore advised either to evaluate the optimal weights from the fifth order and the linear weights, or to compute them numerically after mesh generation, since the analytical values of the optimal weights contain high order ($i^{16}$) terms. Moreover, the concept of optimal weights can be removed entirely with the aid of a WENO$-$AO type modification by Balsara et al. \cite{balsara2016efficient} to the present work. However, the present work remains general and provides the backbone for such construction techniques.
\end{enumerate}
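Eq. (\ref{eq:30}) is plain arithmetic once the linear weights are available. A minimal sketch, assuming the classical uniform$-$Cartesian linear weights for the right interface \cite{Jiang-1996} (function and variable names are ours):

```python
def optimal_weights(w5, w3_S0, w3_S1, w3_S2):
    # Eq. (30): match the coefficients of Qbar_{i-2}, Qbar_i and Qbar_{i+2};
    # w5 spans offsets (-2,...,+2), each w3_* spans its own 3-point stencil
    C0 = w5[0] / w3_S0[0]
    C2 = w5[4] / w3_S2[2]
    C1 = (w5[2] - C0 * w3_S0[2] - C2 * w3_S2[0]) / w3_S1[1]
    return C0, C1, C2

# Classical uniform-Cartesian linear weights, right (+) interface
w5 = [2/60, -13/60, 47/60, 27/60, -3/60]   # five-point stencil S5
w3 = {"S0": [1/3, -7/6, 11/6],             # S0 :: (i-2, i-1, i)
      "S1": [-1/6, 5/6, 1/3],              # S1 :: (i-1, i, i+1)
      "S2": [1/3, 5/6, -1/6]}              # S2 :: (i, i+1, i+2)
C = optimal_weights(w5, w3["S0"], w3["S1"], w3["S2"])
```

The routine recovers the well known optimal weights $(1/10,\,3/5,\,3/10)$, which sum to unity as noted above.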
\subsection{Smoothness indicators and the nonlinear weights} \label{smoothnesslimiter}
The smoothness indicators are the nonlinear tools employed to differentiate between smooth and discontinuous flows \cite{Jiang-1996,Liu-1994} on a stencil. They are employed in order to discard the discontinuous stencils and maintain a high order of accuracy even for discontinuous flows. The present analysis follows the original idea of Jiang and Shu \cite{Jiang-1996}, who proposed a novel technique for evaluating the smoothness indicators; since they vary with the grid index $i$ on a regularly$-$/irregularly$-$spaced grid, they are denoted by $IS_{i,l}$ in this paper. The idea involves minimization of the $L_2-$norm of the derivatives of the reconstruction polynomial, thus emulating the idea of minimizing the total variation of the approximation. The mathematical definition of the smoothness indicator is given in Eq. (\ref{eq:31}) \cite{Titarev-2004,Jiang-1996}.
\begin{equation} \label{eq:31}
IS_{i,l}=\sum \limits_{m=1}^{p-1}\int_{\xi_{i-\frac{1}{2}}}^{\xi_{i+\frac{1}{2}}}\bigg(\frac{d^m}{d\xi^m}Q_{i,l}(\xi)\bigg)^2\Delta{\xi_i}^{2m-1}d\xi, \quad l=0,...,p-1
\end{equation}
To evaluate $IS_{i,l}$, a third order polynomial interpolation on the $i^{th}$ cell is constructed using the positive and negative reconstructed values from stencil $S_l$, as given in Eq. (\ref{eq:32}).
\begin{equation} \label{eq:32}
Q_{i,l}(\xi)=a_{i,l,0}+a_{i,l,1}(\xi-\xi_i^c)+a_{i,l,2}(\xi-\xi_i^c)^2
\end{equation}
Let $\xi_{i+1/2}-\xi_i^c=\xi_i^+$, $\xi_{i-1/2}-\xi_i^c=-\xi_i^-$, and $\xi_i^++\xi_i^-=\Delta{\xi_i}$. The polynomial will satisfy the constraints (\ref{eq:33}) for all kinds of finite volumes.
\begin{equation} \label{eq:33}
\frac{1}{\Delta{\xi_i}}\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}Q_{i,l}(\xi)d\xi=\bar{Q}_i\quad,\quad
q_{i,l}^\pm=Q_{i,l}(\xi_{i\pm\frac{1}{2}})
\end{equation}
Finally, we get the values of the $a_{i,l,0}, a_{i,l,1},$ and $a_{i,l,2}$.
\begin{equation} \label{eq:34}
\centering
\begin{split}
a_{i,l,0}=\frac{6 \bar{Q}_i \xi_i^- \xi_i^++q_{i,l}^+ \xi_i^-(\xi_i^--2 \xi_i^+)+q_{i,l}^- \xi_i^+(\xi_i^+ - 2 \xi_i^-)}{(\xi_i^++\xi_i^-)^2}\\
a_{i,l,1}=\frac{2 q_{i,l}^-(\xi_i^- - 2 \xi_i^+)-6 \bar{Q}_i (\xi_i^- - \xi_i^+) - 2 q_{i,l}^+ (\xi_i^+ - 2\xi_i^-)}{(\xi_i^++\xi_i^-)^2}\\
a_{i,l,2}=3\frac{(q_{i,l}^--2 \bar{Q}_i+q_{i,l}^+)}{(\xi_i^++\xi_i^-)^2}
\end{split}
\end{equation}
For the regularly$-$spaced grids, the values of $\xi^+$ and $\xi^-$ admit the closed forms given below for the standard coordinates (note that in the radial directions they depend on the cell index $i$).
\begin{itemize}
\item Cartesian coordinates: \newline ($x,y,z$) direction:\quad$\xi^+=\xi^-=\frac{\Delta{\xi}}{2}$
\item Cylindrical coordinates: \newline Radial ($R$) direction:\quad$\xi^+=\Delta{R}\bigg(\frac{1}{2}-\frac{1}{12i-6}\bigg)$, $\xi^-=\Delta{R}\bigg(\frac{1}{2}+\frac{1}{12i-6}\bigg)$ \newline where $i=R_{i+1/2}/\Delta R$ \newline ($\theta,z$) direction:\quad$\xi^+=\xi^-=\frac{\Delta{\xi}}{2}$
\item Spherical coordinates: \newline Radial ($r$) direction:\quad$\xi^+=\Delta{r}\bigg(\frac{1}{2}-\frac{2i-1}{4(3i^2-3i+1)}\bigg)$, $\xi^-=\Delta{r}\bigg(\frac{1}{2}+\frac{2i-1}{4(3i^2-3i+1)}\bigg)$ \newline where $i=r_{i+1/2}/\Delta r$ \newline Meridional ($\theta$) direction:\quad$\xi^+=\theta_{i+\frac{1}{2}}-\theta_i^c$, $\xi^-=\theta_i^c-\theta_{i-\frac{1}{2}}$ \newline where $\theta_i^c=\frac{\theta_{i-\frac{1}{2}}\cos\theta_{i-\frac{1}{2}}-\sin\theta_{i-\frac{1}{2}}-\theta_{i+\frac{1}{2}}\cos\theta_{i+\frac{1}{2}}+\sin\theta_{i+\frac{1}{2}}}{\cos\theta_{i-\frac{1}{2}}-\cos\theta_{i+\frac{1}{2}}}$ \newline ($\phi$) direction:\quad$\xi^+=\xi^-=\frac{\Delta{\phi}}{2}$
\end{itemize}
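These closed forms can be cross$-$checked against the centroid definition; a sketch assuming a regular radial grid with $R_{i+1/2}=i\,\Delta R$ (respectively $r_{i+1/2}=i\,\Delta r$), with names of our choosing:

```python
def xi_pm_cylindrical(i, dR):
    # Cylindrical-radial xi^+/xi^- on a regular grid with R_{i+1/2} = i*dR
    return dR * (0.5 - 1.0 / (12*i - 6)), dR * (0.5 + 1.0 / (12*i - 6))

def xi_pm_spherical(i, dr):
    # Spherical-radial xi^+/xi^- on a regular grid with r_{i+1/2} = i*dr
    c = (2*i - 1) / (4.0 * (3*i*i - 3*i + 1))
    return dr * (0.5 - c), dr * (0.5 + c)

def xi_pm_from_centroid(centroid, xi_m, xi_p):
    # Direct definition: xi^+ = xi_{i+1/2} - xi_c, xi^- = xi_c - xi_{i-1/2}
    return xi_p - centroid, centroid - xi_m
```

For the cylindrical cell $[2,3]$ ($i=3$, $\Delta R=1$), the centroid is $R_c=\frac{2}{3}\frac{R_+^3-R_-^3}{R_+^2-R_-^2}=38/15$ and both routes give $\xi^+=7/15$; in every case $\xi^++\xi^-=\Delta\xi$.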
On a regularly$-$spaced grid in Cartesian coordinates ($\xi^+=\xi^-=\frac{\Delta{\xi}}{2}$), these values transform relation (\ref{eq:31}) into the one given in \cite{Jiang-1996,Luo-2013}.
Now, substituting the values of $a_{i,l,0}, a_{i,l,1},$ and $a_{i,l,2}$ obtained from Eq. (\ref{eq:34}) into Eq. (\ref{eq:32}) and then evaluating the smoothness indicator from Eq. (\ref{eq:31}) yields the fundamental relation (\ref{eq:35}) for evaluating the smoothness indicators in the proposed scheme.
\begin{equation} \label{eq:35}
IS_{i,l}=4(39\bar{Q}_i^2-39\bar{Q}_i(q_{i,l}^-+q_{i,l}^+)+10((q_{i,l}^-)^2+(q_{i,l}^+)^2)+19q_{i,l}^-q_{i,l}^+)
\end{equation}
Some remarks regarding the smoothness indicators are as follows:
\begin{itemize}
\item Eq. (\ref{eq:35}) is a general relation for every standard grid and depends only on the third order reconstructed variables at the interfaces ($q_{i,l}^\pm$).
\item $q_{i,l}^\pm$ are the third order reconstructed variables obtained from Eq. (\ref{eq:28}) after using the suitable grid dependent linear weights.
\item For a regularly$-$spaced grid in Cartesian coordinates, the formulation of the fifth order WENO$-$C is the same as that of WENO$-$JS \cite{Titarev-2004,Jiang-1996,Luo-2013} once the linear weights are substituted.
\end{itemize}
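As an illustration, relation (\ref{eq:35}) can be evaluated directly in code. The sketch below (hypothetical Python, not part of any released implementation) confirms two properties implied by the formula: the indicator vanishes identically for constant data ($q_{i,l}^-=q_{i,l}^+=\bar{Q}_i$), and it grows when the stencil contains a jump.

```python
def smoothness_indicator(Q_bar, q_minus, q_plus):
    """Smoothness indicator IS of relation (35): a function of the cell
    average Q_bar and the two third-order interface reconstructions."""
    return 4.0 * (39.0 * Q_bar**2
                  - 39.0 * Q_bar * (q_minus + q_plus)
                  + 10.0 * (q_minus**2 + q_plus**2)
                  + 19.0 * q_minus * q_plus)

# Constant data: both interface reconstructions equal the cell average,
# and the indicator vanishes identically (39 - 78 + 20 + 19 = 0).
IS_const = smoothness_indicator(1.0, 1.0, 1.0)

# Linear data q^-/+ = Q_bar -/+ s: the relation collapses to 4*s^2.
IS_linear = smoothness_indicator(1.0, 0.5, 1.5)   # s = 0.5 -> 4*0.25 = 1.0

# Asymmetric data (a jump inside the stencil) gives a much larger value.
IS_jump = smoothness_indicator(1.0, 1.0, 3.0)
```

The $4s^2$ behavior for linear data is consistent with the classical Jiang--Shu indicators, which are likewise quadratic in the local slope.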
The nonlinear weight ($\omega_{i,l}^\pm$) for the WENO$-$C interpolation is defined as follows \cite{Titarev-2004,Jiang-1996}.
\begin{equation} \label{eq:36}
\omega_{i,l}^\pm=\frac{\alpha_{i,l}^\pm}{\sum_{s=0}^{p-1}\alpha_{i,s}^\pm} \quad \quad l=0,1,2
\end{equation}
where
\begin{equation} \label{eq:37}
\alpha_{i,l}^\pm=\frac{C_{i,l}^\pm}{(\epsilon+IS_{i,l})^2} \quad \quad l=0,1,2
\end{equation}
where $\epsilon$ is a small positive number used to prevent the denominator from vanishing \cite{shu2009high}. Its value is a small percentage of the typical size of the reconstructed variable $\bar{Q}_i$, chosen so that Eq. (\ref{eq:37}) remains scale invariant \cite{shu2009high}. Typically, its value is chosen to be $10^{-6}$ \cite{Jiang-1996,Luo-2013,shu2009high}. The choice of nonlinear weights is not unique: an alternative formulation proposed in \cite{henrick2005mapped,borges2008improved}, built on the same smoothness indicator definitions, can enhance the accuracy at smooth points, especially at smooth extrema \cite{shu2009high,henrick2005mapped,borges2008improved}. The final interpolated interface values are evaluated from Eq. (\ref{eq:38}).
\begin{equation} \label{eq:38}
q_i^{(2p-1)\pm}={\sum_{l=0}^{p-1}\omega_{i,l}^{p\pm}q_{i,l}^{p\pm}}
\end{equation}
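A minimal sketch of Eqs. (\ref{eq:36})--(\ref{eq:37}) follows (hypothetical Python; the linear weights used are illustrative placeholders, not the grid$-$specific values of Section \ref{optimalweights}). It shows the two limiting behaviors: in smooth regions the nonlinear weights recover the linear ones, while a stencil contaminated by a discontinuity receives a vanishingly small weight.

```python
def weno_weights(C, IS, eps=1e-6):
    """Nonlinear weights of Eqs. (36)-(37): un-normalized alphas from the
    linear weights C and smoothness indicators IS, then normalization."""
    alpha = [c / (eps + s)**2 for c, s in zip(C, IS)]
    total = sum(alpha)
    return [a / total for a in alpha]

C = [0.1, 0.6, 0.3]                      # illustrative linear weights only

# Smooth data: all indicators are comparable, so the nonlinear weights
# reduce to the linear ones and the convex combination is preserved.
w_smooth = weno_weights(C, [1e-8, 1e-8, 1e-8])

# Stencil 0 crosses a discontinuity (large IS): its weight collapses,
# which is what suppresses the oscillatory candidate polynomial.
w_shock = weno_weights(C, [4.0, 1e-8, 1e-8])
```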
\subsection{Extension to multi-dimensions} \label{extensiontomultid}
The interface values calculated after the initial application are point values only when the domain is 1D. For 2D and 3D domains, the reconstructed variables are line and area averaged values respectively \cite{Mignone-2014,zhang2011order,buchmuller2014improved}. If these values are used directly to evaluate the flux, the scheme drops to second order of accuracy \cite{Mignone-2014,zhang2011order,buchmuller2014improved}. Buchmuller and Helzel \cite{buchmuller2014improved} proposed a simple and effective way of recovering the original order of accuracy, using just one point at each boundary. In this section, their approach is extended from Cartesian grids to general grids in orthogonally$-$curvilinear coordinates.
For the sake of simplicity, a 2D grid in orthogonally$-$curvilinear coordinates having unit vectors {\bf{$\bf{{\hat{e}_1}}$}} and {\bf{$\bf{{\hat{e}_2}}$}} in the corresponding orthogonal directions is considered, as shown in Fig. \ref{fig:1}. After reconstructing the left and the right interface averaged values in the first WENO sweep, a second sweep is performed to yield the point values. In the 3D case, line averaged values are obtained at this stage, and another reconstruction of the line averaged values, in the direction orthogonal to the previous reconstructions, is required to obtain the point values. The Jacobian values for the conversion from volume averaged values to point values are summarized in Table \ref{tab:1}. Since the principle is the same as described in Sections \ref{linearweights} and \ref{optimalweights}, the theory and derivation are not repeated. This time, however, the line averaged values are converted to the point values at the mid$-$point of the interface with the aid of the adjacent interfaces' line averaged values. Also, since the quantities have already been reconstructed using the WENO scheme in the first face$-$normal sweep (blue$-$colored left face in the {\bf{$\bf{{\hat{e}_2}}$}} direction), as shown in Fig. \ref{fig:1} (left), the second sweep along the interface in the tangential direction {\bf{$\bf{{\hat{e}_1}}$}} does not require the WENO procedure, because the data already contain the required smoothness information. Thus, the fifth order accurate weights required for the mid$-$point value evaluation can be calculated directly by considering $\xi$ in the {\bf{$\bf{{\hat{e}_1}}$}} direction with the same fifth order centered stencil, $\xi_i^c=0$, and substituting $\xi_{i}$ in the place of $\xi_{i\pm\frac{1}{2}}$ in Eq. (\ref{eq:24}). The resulting weights are the fifth order weights in the corresponding direction, as evaluated earlier in Section \ref{optimalweights}.
Then, the fluxes can be evaluated from the left and the right hand side conserved variables at the interface by solving the Riemann problem \cite{toro2013riemann}. In the future, the method will be extended to the gas$-$kinetic scheme (GKS) \cite{xu2001gas}.
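The accuracy loss that motivates this extra sweep, i.e., that a face averaged value differs from the mid$-$point value at $O(h^2)$, is easy to verify numerically. The following sketch (hypothetical Python) compares the line average of a smooth profile over a face of width $h$ with its mid$-$point value; halving $h$ reduces the mismatch by a factor of $\approx 4$.

```python
import math

def face_average(f, y0, y1, n=2000):
    """Line average of f over the face [y0, y1] (midpoint quadrature)."""
    h = (y1 - y0) / n
    return sum(f(y0 + (k + 0.5) * h) for k in range(n)) * h / (y1 - y0)

f = math.sin          # any smooth profile along the face
errs = []
for h in (0.2, 0.1):
    avg = face_average(f, 1.0 - h / 2, 1.0 + h / 2)
    errs.append(abs(avg - f(1.0)))    # face average vs mid-point value

ratio = errs[0] / errs[1]   # ~4 on halving h, i.e. an O(h^2) mismatch
```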
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{fig1a.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{fig1b.pdf}
\end{subfigure}
\caption{High order interface flux evaluation procedure. Left: Mid$-$point value reconstruction at each interface inside a cell using adjacent interface averaged values.
Right: Line averaged flux evaluation by solving the Riemann problem at each mid$-$point and averaging using five adjacent points.}
\label{fig:1}
\end{figure}
The evaluated fluxes at the mid$-$points of the interfaces are averaged using polynomial interpolation, as shown in Fig. \ref{fig:1}. The one$-$dimensional Jacobians for flux integration are coordinate specific. Since the final integrated value is a surface averaged value, it is inherently related only to the two dimensions of that surface. For example, while integrating in the spherical ($r-\theta$) plane, the one$-$dimensional Jacobians are $\xi$ (not $\xi^2$) and unity (not $\sin\xi$) in the $r$ and $\theta$ directions respectively. This is because the averaging procedure is independent of the third dimension $\phi$, which would add the $r\,d\phi$ term to the integration. The altered one$-$dimensional Jacobians for 2D planar averaging are summarized in Table \ref{tab:2}.
\begin{table}[h!]
\centering
\caption{One$-$dimensional Jacobian $\big(\frac{\partial{\mathcal{V}}}{\partial\xi}\big)$ values for interface flux reconstruction for the regularly$-$spaced 3D grids }
\begin{tabular}{ |c | c | c | c |}
\hline
\mbox{Grid type}& {Face coordinates ($i-j$)} & {$\frac{\partial{\mathcal{V}_i}}{\partial\xi_i}$}& {$\frac{\partial{\mathcal{V}_j}}{\partial\xi_j}$}\\
\hline
Cartesian & ($x-y$),($y-z$),($x-z$) & 1 & 1\\
\hline
\multirow{2}{*}{Cylindrical} & ($r-\theta$) & $\xi$ & $1$\\
\cline{2-4}
& ($r-z$),($\theta-z$) & $1$ & $1$\\
\hline
\multirow{2}{*}{Spherical} & ($r-\theta$),($r-\phi$) & $\xi$ & $1$\\
\cline{2-4}
 & ($\theta-\phi$) & $\sin\xi$ & $1$\\
\hline
\end{tabular}
\label{tab:2}
\end{table}
Consider a $p^{th}$ order accurate polynomial of any variable, say the flux $Q$ in this case, joining $p$ consecutive points, e.g., the mid$-$points of the interfaces represented in Fig. \ref{fig:1} (right). It can be expressed in the same form as provided in Eq. (\ref{eq:13}), which takes the matrix form given in Eq. (\ref{eq:39}).
\begin{equation} \label{eq:39}
Q_{i}(\xi)=
\begin{pmatrix}
1 & (\xi-\xi_i^c) & \dots & (\xi-\xi_i^c)^{p-1} \\
\end{pmatrix}
\begin{pmatrix}
a_{i,0} \\
a_{i,1} \\
\vdots \\
a_{i,p-1}
\end{pmatrix}
\end{equation}
But this time, instead of calculating the point values from the line averaged values, the inverse operation is performed. Eq. (\ref{eq:13}) is valid for the values from $i-i_L$ (leftmost value) to $i+i_R$ (rightmost value), where $i_L+i_R+1=p$. A system of $p$ equations is obtained after substituting the values at each considered point, the matrix form of which is given in Eq. (\ref{eq:40}).
\begin{equation} \label{eq:40}
\begin{pmatrix}
Q_{i,-i_L} \\
Q_{i,-i_L+1} \\
\vdots \\
Q_{i,i_R}
\end{pmatrix}
=
\begin{pmatrix}
1 & (\xi_{i-i_L}-\xi_i^c) & \dots & (\xi_{i-i_L}-\xi_i^c)^{p-1} \\
1 & (\xi_{i-i_L+1}-\xi_i^c) & \dots & (\xi_{i-i_L+1}-\xi_i^c)^{p-1} \\
\vdots & \dots & \ddots & \vdots \\
1 & (\xi_{i+i_R}-\xi_i^c) & \dots & (\xi_{i+i_R}-\xi_i^c)^{p-1} \\
\end{pmatrix}
\begin{pmatrix}
a_{i,0} \\
a_{i,1} \\
\vdots \\
a_{i,p-1}
\end{pmatrix}
\end{equation}
where $Q$ is an arbitrary variable which needs to be averaged over $\big[\xi_{i-\frac{1}{2}},\xi_{i+\frac{1}{2}}\big]$. It can be written in the much simpler matrix form given in Eq. (\ref{eq:41}).
\begin{equation}\label{eq:41}
[\bf{Q}]=[\bf{XI}][\bf{A}]
\end{equation}
where $[{\bf{Q}}]=[Q_{i,-i_L},Q_{i,-i_L+1},...,Q_{i,i_R}]^T$, $[{\bf{XI}}]=\begin{pmatrix}
1 & (\xi_{i-i_L}-\xi_i^c) & \dots & (\xi_{i-i_L}-\xi_i^c)^{p-1} \\
1 & (\xi_{i-i_L+1}-\xi_i^c) & \dots & (\xi_{i-i_L+1}-\xi_i^c)^{p-1} \\
\vdots & \dots & \ddots & \vdots \\
1 & (\xi_{i+i_R}-\xi_i^c) & \dots & (\xi_{i+i_R}-\xi_i^c)^{p-1} \\
\end{pmatrix}$, and $[{\bf{A}}]=[a_{i,0},a_{i,1},...,a_{i,p-1}]^T$
Using the same procedure as described in Sections \ref{linearweights} and \ref{optimalweights} and performing the average of the polynomial as given in Eq. (\ref{eq:39}) similar to Eq. (\ref{eq:11}) over the domain $[\xi_{i-1/2},\xi_{i+1/2}]$, Eq. (\ref{eq:42}) is obtained.
\begin{equation} \label{eq:42}
{\bar{Q}_i}=[\bf{\widetilde{XI}}][\bf{A}]
\end{equation}
where $[{\bf{\widetilde{XI}}}]=\bigg[\frac{1}{\Delta{\mathcal{V}}_{i}}{\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}({\xi-\xi_i^c})^{0}\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi}, \frac{1}{\Delta{\mathcal{V}}_{i}}{\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}({\xi-\xi_i^c})^{1}\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi},...,\frac{1}{\Delta{\mathcal{V}}_{i}}{\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}({\xi-\xi_i^c})^{p-1}\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi}\bigg]$
From Eqs. (\ref{eq:41}) and (\ref{eq:42}), a general form of equation for integration from a lower dimension to a higher dimension can be derived, as given by Eq. (\ref{eq:43}).
\begin{equation} \label{eq:43}
{\bar{Q}_i}=\{[\bf{\widetilde{XI}}][\bf{XI}]^{-1}\}[\bf{Q}]
\end{equation}
The term $\{[\bf{\widetilde{XI}}][\bf{XI}]^{-1}\}$ includes the weights essential for converting the mid$-$point interface flux values to the line averaged interface flux values, as shown in Fig. \ref{fig:1} (right). The next integration sweep in the transverse direction yields the area$-$averaged flux values at the interface.
The weights for integration in the corresponding directions are provided in \ref{cart_int_wt}, \ref{cyl_int_wt}, and \ref{sph_int_wt} for the standard cases. Integration is preferably performed in the exact reverse order of the reconstruction from the surface averages.
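The weight row $\{[\bf{\widetilde{XI}}][\bf{XI}]^{-1}\}$ of Eq. (\ref{eq:43}) can also be generated numerically for any stencil. The sketch below (hypothetical Python; unit Jacobian, i.e., a uniform Cartesian face) builds $[\bf{XI}]$ and $[\bf{\widetilde{XI}}]$ for a five$-$point stencil of interface mid$-$points and verifies that the resulting weights reproduce the exact line average of an arbitrary quartic.

```python
def integration_weights(pts, xi_c, xl, xr, p):
    """Row vector {tilde(XI) XI^{-1}} of Eq. (43) for unit Jacobian:
    weights turning p point values at 'pts' into the exact average
    over [xl, xr] for polynomials up to degree p-1."""
    # XI: Vandermonde matrix of the point-value constraints, Eq. (40)
    XI = [[(x - xi_c)**k for k in range(p)] for x in pts]
    # tilde(XI): exact moments of (xi - xi_c)^k over [xl, xr], Eq. (42)
    mom = [((xr - xi_c)**(k + 1) - (xl - xi_c)**(k + 1))
           / ((k + 1) * (xr - xl)) for k in range(p)]
    # Solve XI^T w = mom by Gauss-Jordan elimination with pivoting.
    A = [[XI[j][k] for j in range(p)] + [mom[k]] for k in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[k][p] / A[k][k] for k in range(p)]

# Five interface mid-points around cell i; average over [-1/2, 1/2]:
pts = [-2.0, -1.0, 0.0, 1.0, 2.0]
w = integration_weights(pts, 0.0, -0.5, 0.5, 5)

# The weights reproduce the exact line average of an arbitrary quartic:
q = lambda x: 1 + 2*x - x**2 + 0.5*x**3 + 3*x**4
avg_w = sum(wk * q(x) for wk, x in zip(w, pts))
exact = 1 - 1/12 + 3/80      # odd powers average to zero on [-1/2, 1/2]
```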
\subsection{Source term integration} \label{sourcetermavg}
The source terms need to be handled with extreme care, since any contamination in them can deteriorate the high order accuracy. The source term integration is performed based on the work of Mignone \cite{Mignone-2014}. For 1D test cases, it is preferable to reconstruct the mid$-$point of each cell using the WENO procedure, the weights for which are provided in \ref{cart_source_wt}, \ref{cyl_source_wt}, and \ref{sph_source_wt}. Reconstructing at four Gauss$-$Lobatto points (fifth order) instead of the mid$-$point and performing quadrature yields the same results (not shown in the paper); therefore, mid$-$point reconstruction with three$-$point Simpson quadrature is advised.
The present work is a significant extension of \cite{Mignone-2014}, since point values are considered for the source term evaluation, unlike the constant radius averages of \cite{Mignone-2014}, which can only achieve second order of accuracy in multi$-$dimensional problems \cite{zhang2011order,buchmuller2014improved}. The theory for deriving the weights for the source term integration is exactly the same as that for the flux integration given in Section \ref{extensiontomultid}. However, reconstruction for the source$-$term integration is performed in every dimension, so the original one$-$dimensional Jacobians given in Table \ref{tab:1} can be used for the integration. If non$-$radial integration is performed first, the `${1/R}$' factor in the tangential terms would yield an infinite value at $R=0$; therefore, only the numerators are integrated with the original weights. Moreover, since the source terms contain the `$1/R$' factor, the radial integration weights need to be regularized \cite{Mignone-2014}, by reconsidering the integration of Eq. (\ref{eq:41}) with a regularized factor of the source term in Eq. (\ref{eq:14}), i.e. $ {\int_{{\xi}_{i-\frac{1}{2}}}^{{\xi}_{i+\frac{1}{2}}}\frac{\hat{Q}_i(\xi)}{\xi}\frac{\partial{\mathcal{V}}}{\partial\xi}d\xi} = {{\Delta\mathcal{V}_{i}}}\bar{Q}_{i}$, where $Q$ represents the original source term (e.g. if $Q_i=(p_i/R_i)$, then $\hat{Q}_i=p_i$) in this context.
First, integration tangential to the surface is performed in one direction, involving five points, to calculate the line averaged value of the source term. In the next step, five line averaged values are integrated in the direction transverse to the first sweep, tangential to the interface, as shown in Fig. \ref{fig:2} (left). Finally, a face$-$normal interpolation is performed by utilizing the face averaged source terms of six faces, i.e. the $(i-5/2)^+,(i-3/2)^+,(i-1/2)^+,(i+1/2)^-,(i+3/2)^-,(i+5/2)^-$ faces, as illustrated in Fig. \ref{fig:2} (right). The weights for the source term integration are provided for the standard cases in \ref{cart_source_wt}, \ref{cyl_source_wt}, and \ref{sph_source_wt}.\newline
In addition to the approach discussed above, interior points can also be used to evaluate the source terms. For 1D tests, it is feasible to utilize the mid$-$point values and perform Simpson quadrature to achieve fifth order accuracy using the weights given in the appendix. However, evaluation at the interior points becomes very expensive in multi$-$dimensions.
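The effect of the `$1/R$' regularization can be illustrated on a cylindrical shell (hypothetical Python sketch; the paper's actual weights follow from Eq. (\ref{eq:41}), whereas here the regularized integral is simply evaluated by quadrature). The key point is that the $1/R$ of the source cancels against the radial Jacobian $R$, so the volume average stays finite even in the first cell touching $R=0$.

```python
def regularized_source_average(p_hat, R_l, R_r, n=4000):
    """Volume average of p_hat(R)/R over a cylindrical shell [R_l, R_r]:
    <p_hat/R> = (int p_hat/R * R dR)/(int R dR) = (int p_hat dR)/(int R dR).
    The 1/R of the source cancels the radial Jacobian R, so the
    integrand stays finite even when the cell touches R = 0."""
    h = (R_r - R_l) / n
    num = sum(p_hat(R_l + (k + 0.5) * h) for k in range(n)) * h
    den = 0.5 * (R_r**2 - R_l**2)
    return num / den

# First radial cell [0, 0.1], raw source p/R with p_hat(R) = R^2:
avg = regularized_source_average(lambda R: R**2, 0.0, 0.1)
# exact value: (1e-3/3) / 5e-3 = 1/15, finite despite the 1/R factor
```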
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{fig2a.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{fig2b.pdf}
\end{subfigure}
\caption{Fifth order source term integration procedure. Left: Fifth order using middle values. Right: Sixth order integration using face values}
\label{fig:2}
\end{figure}
\subsection{WENO$-$C final algorithm} \label{finalalgo}
The final algorithm for WENO$-$C reconstruction is as follows:
\begin{itemize}
\item After mesh$-$generation, calculate the values of linear and optimal weights, fifth order middle (mid$-$value) interpolation weights, weights for interface flux and source term integration in every dimension. For standard uniform grids, weights are provided in the appendix.
\item Convert the volume averaged conservative variables into the interface averaged values by one$-$dimensional WENO sweeps in the {{$\bf{{\hat{e}_1}}$}}, {{$\bf{{\hat{e}_2}}$}}, and {{$\bf{{\hat{e}_3}}$}} directions using the evaluated weights and the smoothness indicator given in Eq. (\ref{eq:35}). Refer to Sections \ref{linearweights}, \ref{optimalweights}, and \ref{smoothnesslimiter}.
\item Perform reconstruction of the interface averaged variables to mid$-$line averages values in the plane of the interface. Perform another reconstruction of the mid$-$line values in the orthogonal direction to the previous reconstruction in the plane of the interface, to achieve the point value at the mid$-$point of the interface. Refer to Section \ref{extensiontomultid}.
\item Calculate flux at the mid$-$point of each interface by solving the Riemann problem \cite{toro2013riemann}.
\item Perform volume and surface averaging of the source and flux terms respectively using the dimension$-$by$-$dimension approach with the weights provided in the appendix. \textit{Key tip}: If all of the source terms contain the `$1/R$' factor, it is advised not to involve the radius ($1/{R}$) term in the tangential averaging, if it is performed before the radial averaging. During radial averaging, regularized relations are preferred if the considered points contain $R=0$ terms. Refer to Sections \ref{extensiontomultid} and \ref{sourcetermavg}.
\end{itemize}
\section{Numerical tests} \label{tests}
In this section, several tests on scalar and nonlinear systems of equations are performed to analyze the performance of the WENO$-$C reconstruction scheme. The test cases include scalar advection (1D) on regularly$-$/irregularly$-$spaced grids, and smooth (1D) and discontinuous inviscid flows (1D/2D) governed by a nonlinear system of equations (the Euler equations) on regularly$-$spaced grids in cylindrical and spherical coordinates. For the sake of comparison solely on the grounds of the high order reconstruction, time marching in all WENO reconstructed 1D test cases is achieved by the explicit third order TVD Runge$-$Kutta scheme \cite{gottlieb1998total,Mignone-2014}. For the 2D test cases, the explicit fifth order Runge$-$Kutta scheme \cite{buchmuller2014improved} is employed to reduce the computation time. Since high order spatial reconstruction with a lower order time marching requires a lower effective value of the CFL number (or time step) to prevent temporal errors from dominating the spatial errors, an empirical formula to evaluate the time step is given in Eq. (\ref{eq:44}).
\begin{equation} \label{eq:44}
\Delta t=C_a \Bigg[\max\limits_{\textbf{i}}^{} \Bigg(\frac{1}{D}\Bigg)\sum\limits_{d}^{}\frac{\lambda_{d,\textbf{i}}}{(\Delta l_{d,\textbf{i}})^{(ss/tt)}}\Bigg]^{-1}
\end{equation}
where $C_a$ is the CFL number, $D$ is the number of spatial dimensions $d$, while $\Delta l_d$ and $\lambda_d$ are the grid length and maximum signal speed inside zone $\textbf{i}$ in the direction {\bf{$\bf{{\hat{e}_d}}$}}. $ss$ and $tt$ are the spatial and temporal orders of convergence respectively.
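For reference, Eq. (\ref{eq:44}) for a single zone reads as follows (hypothetical Python; the maximum over zones $\textbf{i}$ is omitted for brevity). For $ss=tt$ it reduces to the usual CFL condition $\Delta t = C_a\,\Delta l/\lambda$ in 1D, while $ss>tt$ yields a smaller time step on fine grids, as intended.

```python
def time_step(Ca, lam, dl, ss, tt):
    """Eq. (44) for a single zone: lam[d] and dl[d] are the maximum
    signal speed and grid length in direction d; ss and tt are the
    spatial and temporal orders (the max over zones i is omitted)."""
    D = len(lam)
    denom = sum(l / h**(ss / tt) for l, h in zip(lam, dl)) / D
    return Ca / denom

dt_cfl = time_step(0.9, [2.0], [0.1], 5, 5)   # ss = tt: plain CFL, 0.045
dt_mix = time_step(0.9, [2.0], [0.1], 5, 3)   # ss > tt: stricter step
```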
For all tests performed in this paper, the initial condition on the conserved variables is averaged over the corresponding finite volumes $\Delta \mathcal{V}_{\textbf{i}}$ using seven$-$point Gaussian quadrature in a dimension$-$by$-$dimension fashion. Numerical benchmark test cases for the scalar conservation laws are reported in Section \ref{scalaradvactiontests}, while the verification tests for nonlinear systems are presented in Section \ref{eulertest}.
Errors $\epsilon_1$ are computed using the $L_1$ discrete norm defined in Eq. (\ref{eq:45}). In the case of a linear system, $Q$ is a generic flow quantity, while in the case of a nonlinear system of equations, the error in the density $\rho$ is considered.
\begin{equation} \label{eq:45}
\epsilon_1(Q)=\frac{\sum\limits_{\textbf{i}}^{} |\bar{Q}_{\textbf{i}}-\bar{Q}_{\textbf{i}}^{ref} | \Delta{\mathcal{V}_{\textbf{i}} }}{\sum\limits_{\textbf{i}}^{}\Delta{\mathcal{V}_{\textbf{i}} }}
\end{equation}
where summation is performed on all finite volumes $ \Delta \mathcal{V}_{\textbf{i}} $ with $\bar{Q}_{\textbf{i}}^{ref}$ to be the volume average of the reference (or exact) solution. Finally, the experimental order of convergence ($EOC$) is computed from Eq. (\ref{eq:46}).
\begin{equation} \label{eq:46}
EOC=\frac{\log \Bigg(\frac{\epsilon_1^c(Q)}{\epsilon_1^f(Q)}\Bigg)}{\log \Bigg(\frac{\prod \limits_{d=1}^{D}N^f_d}{\prod \limits_{d=1}^{D}N^c_d}\Bigg)}
\end{equation}
where the superscripts $c$ and $f$ refer to the coarse and fine mesh respectively, and $N_d$ is the number of finite volumes in the {\bf{$\hat{e}_d$}} direction.
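Eqs. (\ref{eq:45})--(\ref{eq:46}) translate directly into code (hypothetical Python sketch, written for the 1D case, where the products over $d$ reduce to $N$):

```python
import math

def l1_error(Q, Q_ref, dV):
    """Volume-weighted L1 error norm of Eq. (45)."""
    return sum(abs(q - r) * v for q, r, v in zip(Q, Q_ref, dV)) / sum(dV)

def eoc(err_coarse, err_fine, N_coarse, N_fine):
    """Experimental order of convergence, Eq. (46), in 1D
    (the products over dimensions reduce to N)."""
    return math.log(err_coarse / err_fine) / math.log(N_fine / N_coarse)

# Halving the mesh while the error drops by a factor 32 signals fifth order:
order = eoc(1e-3, 1e-3 / 32, 64, 128)
```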
\subsection{Scalar advection tests} \label{scalaradvactiontests}
As a first benchmark, the 1D scalar advection equation Eq. (\ref{eq:48}) in cylindrical$-$radial and spherical$-$radial coordinates, and Eq. (\ref{eq:52}) in spherical$-$meridional coordinates, are solved. Two different tests (tests A and B) are performed on a regularly$-$spaced grid, while test A is also performed on an irregularly$-$spaced grid. Test A involves a monotonic profile, while test B is a more stringent test involving a non$-$monotonic profile.
For the irregularly$-$spaced grid, the grid spacing increases linearly with the radial distance. The sum of all zone lengths, i.e., the length of the computational domain, and the number of cells $N$ are fixed. A parameter $Ratio$ is introduced in Eq. (\ref{eq:47}), which indicates the level of non$-$uniformity of the computational domain.
\begin{equation} \label{eq:47}
Ratio=\frac{\text{Grid spacing of any cell in an N$-$cell uniform grid}}{\text{Grid spacing of the first cell (or the smallest cell) in an N$-$cell nonuniform grid}}
\end{equation}
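One possible construction of such a grid (assumed here for illustration; the definition of $Ratio$ in Eq. (\ref{eq:47}) does not prescribe a unique one) fixes the domain length $L$, the cell count $N$, and $Ratio$, and lets the cell widths grow linearly from the origin:

```python
def linear_grid(L, N, ratio):
    """Cell widths growing linearly from the origin, with total length L
    and first (smallest) cell 'ratio' times smaller than the uniform
    spacing L/N, matching the definition of Ratio in Eq. (47)."""
    h1 = (L / N) / ratio
    d = 2.0 * (L - N * h1) / (N * (N - 1))   # slope fixing the total length
    return [h1 + k * d for k in range(N)]

widths = linear_grid(2.0, 16, 4.0)   # domain [0, 2], N = 16, Ratio = 4
```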
\subsubsection{Advection equation in cylindrical$-$radial and spherical$-$radial coordinates} \label{scalaradvactiontestscylsph}
The governing 1D scalar advection equation in cylindrical$-$radial and spherical$-$radial coordinates is formulated in Eq. (\ref{eq:48}).
\begin{equation} \label{eq:48}
\frac{\partial{Q}}{\partial{t}}+\frac{1}{\xi^m}\frac{\partial{}}{\partial{\xi}}(\xi^m Q v) =0
\end{equation}
where $\xi^m$ is the one$-$dimensional Jacobian; $m=1$ and $m=2$ correspond to cylindrical$-$radial and spherical$-$radial coordinates respectively. The velocity $v$ varies linearly with the radial coordinate $\xi$, i.e. $v=\alpha \xi$ with $\alpha=1$. Eq. (\ref{eq:48}) admits an exact solution, given in Eq. (\ref{eq:49}).
\begin{equation} \label{eq:49}
Q^{ref}(\xi,t)=e^{-(m+1) \alpha t}Q(\xi e^{-\alpha t},0)
\end{equation}
where $Q(\xi e^{-\alpha t},0)$ is the initial condition. For the present case, a Gaussian profile, given in Eq. (\ref{eq:50}), is employed.
\begin{equation} \label{eq:50}
Q(\xi,0)=e^{-a^2(\xi-b)^2}
\end{equation}
where $a$ and $b$ are constants. For the two test cases, $\{a=10, b=0\}$ is employed for test A, which yields a monotonically decreasing profile, and $\{a=16, b=1/2\}$ is employed for test B, which corresponds to a more stringent non$-$monotonic profile having a maximum at $\xi=1/2$. The computational domain extends from $\xi=0$ to $\xi=2$ and consists of $N$ zones; the boundary conditions include symmetry at the origin ($\xi=0$) and zero$-$gradient at $\xi=2$. Computations are performed until $t=1$ with a CFL number of $0.9$, and the interface flux is computed using Eq. (\ref{eq:51}).
\begin{equation} \label{eq:51}
\tilde{F}_{i+\frac{1}{2}}=\frac{1}{2}\Bigg[v_{i+\frac{1}{2}}(Q_{i+1}^-+Q_i^+)-|v_{i+\frac{1}{2}}|(Q_{i+1}^--Q_i^+)\Bigg]
\end{equation}
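This is the standard upwind flux: the dissipation term acts on the jump between the right state $Q_{i+1}^-$ and the left state $Q_i^+$, so the flux selects the upwind reconstruction. A minimal sketch (hypothetical Python):

```python
def upwind_flux(v, Q_L, Q_R):
    """Upwind interface flux: Q_L = Q_i^+ (left state), Q_R = Q_{i+1}^-
    (right state); the |v| term acts on the jump Q_R - Q_L."""
    return 0.5 * (v * (Q_R + Q_L) - abs(v) * (Q_R - Q_L))

F_pos = upwind_flux(2.0, 3.0, 7.0)    # v > 0: picks v * Q_L
F_neg = upwind_flux(-2.0, 3.0, 7.0)   # v < 0: picks v * Q_R
```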
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig3a.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig3b.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig3c.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig3d.pdf}
\end{minipage}
\caption{Spatial profiles at $t=1$ for the radial advection problem in cylindrical$-$radial (top) and spherical$-$radial (bottom) coordinates. Left and right figures correspond to test A \{$a=10,b=0$\} and test B \{$a=16,b=1/2$\} respectively.}
\label{fig:3}
\end{figure}
Fig. \ref{fig:3} shows the spatial variation of $Q$ with the radial distance ($\xi=R$) for the two test cases (tests A and B) on a uniform grid in cylindrical$-$radial (top) and spherical$-$radial (bottom) coordinates. For the monotonically decreasing profile (test A), even $N \ge 64$ gives accurate results in both coordinate systems. However, for test B, $N=64$ yields slightly lower peaks than the exact solution.
When compared with Fig. 2 of Mignone \cite{Mignone-2014}, a slightly higher peak is observed for test A, since it is a less severe test case. The differences are much more prominent for test B: the peaks for $N=64$ in Fig. \ref{fig:3} are significantly higher than in the earlier published results \cite{Mignone-2014}.
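An independent consistency check on the reference solution (\ref{eq:49}) is that it conserves the radial ``mass'' $\int Q\,\xi^m\,d\xi$, as can be seen by substituting $u=\xi e^{-\alpha t}$. The sketch below (hypothetical Python; the narrow Gaussian profile is chosen only so that its support remains inside the domain up to $t=1$) verifies this by quadrature for the spherical$-$radial case $m=2$.

```python
import math

def Q_exact(profile, xi, t, m, alpha=1.0):
    """Reference solution (49) built from the initial profile Q(., 0)."""
    return math.exp(-(m + 1) * alpha * t) * profile(xi * math.exp(-alpha * t))

def radial_mass(profile, t, m, xi_max=2.0, n=20000):
    """int_0^{xi_max} Q(xi, t) xi^m dxi by the midpoint rule."""
    h = xi_max / n
    return sum(Q_exact(profile, (k + 0.5) * h, t, m) * ((k + 0.5) * h)**m
               for k in range(n)) * h

# Narrow Gaussian whose support stays inside [0, 2] up to t = 1:
profile = lambda x: math.exp(-400.0 * (x - 0.5)**2)
M0 = radial_mass(profile, 0.0, m=2)   # spherical-radial case, m = 2
M1 = radial_mass(profile, 1.0, m=2)   # conserved up to quadrature error
```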
\begin{table}[]
\centering
\caption{$L_1$ norm errors and experimental order of convergence ($EOC$) for radial advection test in cylindrical$-$radial and spherical$-$radial coordinates at $t=1$ for test A \{$a=10,b=0$\} and test B \{$a=16,b=1/2$\}.}
\begin{tabular}{c|cc|cc|cc|cc}
\hline
& \multicolumn{4}{c}{Cylindrical} & \multicolumn{4}{|c}{Spherical}\\\cline{2-5} \cline{6-9}
& \multicolumn{2}{c|}{Test A} &\multicolumn{2}{c|}{Test B} & \multicolumn{2}{c|}{Test A} & \multicolumn{2}{c}{Test B} \\\cline{2-3} \cline{4-5} \cline{6-7} \cline{8-9}
$N$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ \\
\hline
\hline
32 & 9.22E-05 & $-$ & 1.07E-02 & $-$ & 1.19E-05 & $-$ & 3.94E-03 & $-$ \\
64 & 1.14E-05 & 3.016 & 2.10E-03 & 2.356 & 1.28E-06 & 3.208 & 7.94E-04 & 2.312 \\
128 & 4.91E-07 & 4.537 & 1.95E-04 & 3.425 & 5.28E-08 & 4.602 & 7.44E-05 & 3.415 \\
256 & 1.94E-08 & 4.663 & 9.39E-06 & 4.378 & 2.16E-09 & 4.610 & 3.58E-06 & 4.378 \\
512 & 6.20E-10 & 4.965 & 3.14E-07 & 4.900 & 6.34E-11 & 5.093 & 1.19E-07 & 4.906 \\
1024 & 5.81E-11 & 3.415 & 1.02E-08 & 4.941 & 4.53E-12 & 3.806 & 3.88E-09 & 4.942
\end{tabular} \label{tab:3}
\end{table}
From the experimental order of convergence ($EOC$) in Table \ref{tab:3}, it is clear that WENO$-$C approaches the desired fifth order of convergence. The same tests performed in Cartesian coordinates using the conventional WENO and the present WENO$-$C (the two are equivalent there) showed the same errors and orders of convergence (not shown here), and similar behavior to the cylindrical and spherical grid cases. When compared with Table 1 in \cite{Mignone-2014}, the present results indicate superior performance in terms of accuracy and order of convergence. The modified piecewise parabolic method (PPM$_5$) approaches fifth order convergence for test A; however, its order drops to $\sim 2.4$ for test B \cite{Mignone-2014}.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig4a.pdf}
\label{fig:sub1.1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig4b.pdf}
\label{fig:sub1.2}
\end{subfigure}
\caption{Spatial profiles at $t=1$ for the radial advection problem (test A: \{$a=10,b=0$\}) using $N=16$ with different values of $Ratio$ (degree of non$-$uniformity) in cylindrical$-$radial (left) and spherical$-$radial (right) coordinates}
\label{fig:4}
\end{figure}
Fig. \ref{fig:4} illustrates the spatial variation of the conserved variable $Q$ on a non$-$uniform grid ($N=16$) for test A. The plot clearly shows that the numerical results approach the exact solution as $Ratio$ (defined in Eq. (\ref{eq:47})) increases, i.e., as the grid is biased towards the origin. Table \ref{tab:4} shows a considerable reduction in errors, along with a rapid approach of the $EOC$ to the desired fifth order, when the grid spacing is biased towards the origin.
\begin{table}[]
\centering
\caption{$L_1$ norm errors and experimental order of convergence ($EOC$) for the radial advection problem (test A: \{$a=10,b=0$\}) with different values of $Ratio$ (degree of non$-$uniformity) in cylindrical$-$radial and spherical$-$radial coordinates}
\begin{tabular}{c|cc|cc|cc|cc}
\hline
& \multicolumn{2}{c|}{$Ratio=1$} &\multicolumn{2}{c|}{$Ratio=2$} & \multicolumn{2}{c|}{$Ratio=4$} & \multicolumn{2}{|c}{$Ratio=8$} \\\cline{2-3} \cline{4-5} \cline{6-7} \cline{8-9}
$N$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ \\
\hline
\hline
& \multicolumn{8}{c}{Cylindrical}\\
\hline
16 & 5.54E-04 & $-$ & 1.85E-04 & $-$ & 1.70E-04 & $-$ & 1.80E-04 & $-$ \\
32 & 9.22E-05 & 2.587 & 3.44E-05 & 2.429 & 2.78E-05 & 2.607 & 3.03E-05 & 2.573 \\
64 & 1.14E-05 & 3.016 & 1.81E-06 & 4.247 & 1.26E-06 & 4.468 & 1.39E-06 & 4.440 \\
128 & 4.91E-07 & 4.537 & 7.89E-08 & 4.519 & 5.47E-08 & 4.523 & 5.96E-08 & 4.548 \\
\hline
& \multicolumn{8}{c}{Spherical}\\
\hline
16 & 5.32E-05 & $-$ & 2.40E-05 & $-$ & 2.19E-05 & $-$ & 2.47E-05 & $-$ \\
32 & 1.19E-05 & 2.167 & 4.48E-06 & 2.420 & 3.81E-06 & 2.523 & 4.20E-06 & 2.557 \\
64 & 1.28E-06 & 3.208 & 2.33E-07 & 4.267 & 1.72E-07 & 4.475 & 1.92E-07 & 4.449 \\
128 & 5.28E-08 & 4.602 & 9.64E-09 & 4.594 & 6.90E-09 & 4.635 & 7.57E-09 & 4.669
\end{tabular} \label{tab:4}
\end{table}
\subsubsection{Advection equation in spherical$-$meridional coordinates} \label{scalaradvactiontestssphmer}
The governing 1D scalar advection equation in spherical$-$meridional coordinates is given in Eq. (\ref{eq:52}).
\begin{equation} \label{eq:52}
\frac{\partial{Q}}{\partial{t}}+\frac{1}{\sin \theta}\frac{\partial{}}{\partial{\theta}}(\sin \theta\, Q v) =0
\end{equation}
where the velocity $v$ varies linearly with the $\theta$ coordinate, i.e. $v=\alpha \theta$ with $\alpha=1$. Eq. (\ref{eq:52}) admits an exact solution, given in Eq. (\ref{eq:53}).
\begin{equation} \label{eq:53}
Q^{ref}(\theta,t)=e^{-\alpha t}\frac{\sin\big(e^{-\alpha t}\theta\big)}{\sin \theta}\, Q\big(e^{-\alpha t}\theta,0\big)
\end{equation}
A 1D computational grid spanning the interval $\theta \in [0,\pi/2]$ is divided into $N$ zones. The initial condition ($t=0$) for the problem is given in Eq. (\ref{eq:54}).
\begin{equation} \label{eq:54}
Q(\theta,0) =
\begin{cases}
\text{$\Bigg[\frac{1+\cos(a(\theta- b))}{2}\Bigg]^2$} & \text{$|\theta - b|<\frac{\pi}{a}$}\\
0 & \text{otherwise}\\
\end{cases}
\end{equation}
where $a$ and $b$ are constants. Two different tests are performed: test A with $\{a=10, b=0\}$, yielding a monotonically decreasing profile, and a more stringent test B with $\{a=16, b=\pi/a\}$, resulting in a non$-$monotonic profile having a maximum at $\theta=\pi/a$. The computational domain extends from $\theta=0$ to $\theta=\pi/2$, where the boundary conditions include symmetry at the origin ($\theta=0$) and zero$-$gradient at $\theta=\pi/2$. Computations are performed until $t=1$ with a CFL number of $0.9$, and the interface flux is computed using Eq. (\ref{eq:51}).
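Analogously to the radial case, the exact solution obtained from the characteristics admits the conserved quantity $\int_0^{\pi/2} Q\sin\theta\,d\theta$ (substitute $u=\theta e^{-\alpha t}$), valid as long as the profile's support remains inside the domain. The sketch below (hypothetical Python, test A parameters) verifies this by quadrature.

```python
import math

def Q0(theta, a=10.0, b=0.0):
    """Initial profile of Eq. (54), test A parameters."""
    if abs(theta - b) < math.pi / a:
        return ((1.0 + math.cos(a * (theta - b))) / 2.0)**2
    return 0.0

def Q_exact(theta, t, alpha=1.0):
    """Exact solution along the characteristics theta -> theta*exp(alpha*t):
    Q = exp(-alpha t) * sin(theta exp(-alpha t))/sin(theta) * Q0."""
    u = theta * math.exp(-alpha * t)
    return math.exp(-alpha * t) * math.sin(u) / math.sin(theta) * Q0(u)

def meridional_mass(t, n=20000):
    """int_0^{pi/2} Q(theta, t) sin(theta) dtheta, midpoint rule."""
    h = (math.pi / 2) / n
    return sum(Q_exact((k + 0.5) * h, t) * math.sin((k + 0.5) * h)
               for k in range(n)) * h

M0 = meridional_mass(0.0)
M1 = meridional_mass(1.0)   # support ~ e*pi/10 < pi/2, so mass is conserved
```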
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig5a.pdf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig5b.pdf}
\end{subfigure}
\caption{Spatial profiles at $t=1$ for the scalar advection problem in spherical$-$meridional coordinates with different mesh points. Left and right subfigures refer to test A \{$a=10, b=0$\} and test B \{$a=16,b=\pi/a$\} respectively.}
\label{fig:5}
\end{figure}
Fig. \ref{fig:5} shows the variation of the conserved variable $Q$ with the angle $\theta$ for both tests. For test A, even $N=16$ gives accurate results, while for test B, $N \ge 32$ provides a good approximation of the exact solution. Table \ref{tab:5} illustrates the achievement of the desired fifth order of convergence for both test cases. When the results obtained by the present scheme are compared with the previously proposed schemes (Table 2 of \cite{Mignone-2014}), WENO$-$C shows superior performance. For the non$-$uniform mesh case, the fifth order of convergence is still preserved and is attained rapidly, as summarized in Table \ref{tab:6}. Moreover, Fig. \ref{fig:6} shows that mesh biasing leads to a significant reduction in the errors when compared with a uniform mesh of the same number of cells.
\begin{table}[h!]
\centering
\caption{$L_1$ norm errors and experimental order of convergence ($EOC$) for the scalar advection test in spherical$-$meridional coordinates at $t=1$ for test A \{$a=10,b=0$\} and test B \{$a=16,b=\pi/a$\}.}
\begin{tabular}{c|cc|cc}
\hline
&\multicolumn{2}{c|}{Test A} &\multicolumn{2}{c}{Test B} \\\cline{2-3} \cline{4-5}
$N$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$\\
\hline
\hline
32 & 1.71E-04 & $-$ & 1.57E-03 & $-$ \\
64 & 1.99E-05 & 3.103 & 2.11E-04 & 2.894 \\
128 & 7.10E-07 & 4.808 & 1.62E-05 & 3.699 \\
256 & 2.25E-08 & 4.978 & 4.81E-07 & 5.078
\end{tabular} \label{tab:5}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig6.pdf}
\caption{Spatial profiles at $t=1$ for the scalar advection problem (test A: \{$a=10,b=0$\}) using $N=16$ with different values of $Ratio$ (degree of non$-$uniformity) in spherical$-$meridional coordinates.}
\label{fig:6}
\end{figure}
\begin{table}[]
\centering
\caption{$L_1$ norm errors and experimental order of convergence ($EOC$) for the scalar advection problem (test A: \{$a=10,b=0$\}) in spherical$-$meridional coordinates with different values of $Ratio$ (degree of non$-$uniformity)}
\begin{tabular}{c|cc|cc|cc|cc}
\hline
& \multicolumn{2}{c|}{$Ratio=1$} &\multicolumn{2}{c|}{$Ratio=2$} & \multicolumn{2}{c|}{$Ratio=4$} & \multicolumn{2}{|c}{$Ratio=8$} \\\cline{2-3} \cline{4-5} \cline{6-7} \cline{8-9}
$N$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$ \\
\hline
\hline
16 & 7.43E-04 & $-$ & 4.27E-04 & $-$ & 4.61E-04 & $-$ & 5.05E-04 & $-$ \\
32 & 1.71E-04 & 2.120 & 9.18E-05 & 2.217 & 1.01E-04 & 2.195 & 1.16E-04 & 2.128 \\
64 & 1.99E-05 & 3.103 & 8.33E-06 & 3.463 & 9.13E-06 & 3.465 & 1.07E-05 & 3.438 \\
128 & 7.10E-07 & 4.808 & 2.45E-07 & 5.085 & 2.72E-07 & 5.069 & 3.24E-07 & 5.040
\end{tabular} \label{tab:6}
\end{table}
\subsection{Euler equations based tests} \label{eulertest}
The present reconstruction scheme is now tested on more challenging cases involving a nonlinear system of equations, namely the Euler equations. Although primitive variable reconstruction has been preferred in the past owing to its well$-$behaved results, in curvilinear coordinates the higher order derivatives involved in extracting the primitive variables cause spurious oscillations \cite{Mignone-2014}. Therefore, we restrict this work to reconstruction of the conserved variables instead of the computationally expensive and intricate primitive variable reconstruction. The maximum characteristic speed is employed to evaluate the time step from Eq. (\ref{eq:44}). Several tests are performed in cylindrical and spherical coordinates to investigate the essentially non$-$oscillatory property of WENO$-$C for discontinuous flows and the convex combination property for smooth flows.
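For illustration, the time-step evaluation from the maximum characteristic speed can be sketched as follows (a minimal Python fragment with hypothetical helper names, not the actual implementation; the adiabatic sound speed $c=\sqrt{\gamma p/\rho}$ is assumed):

```python
import numpy as np

def cfl_time_step(rho, v, p, dx, gamma=1.4, cfl=0.3):
    """Time step restricted by the maximum characteristic speed |v| + c,
    with the adiabatic sound speed c = sqrt(gamma * p / rho)."""
    c = np.sqrt(gamma * p / rho)
    max_speed = np.max(np.abs(v) + c)
    return cfl * np.min(dx) / max_speed
```

On a uniform grid the restriction reduces to $\Delta t = {\rm CFL}\,\Delta x/\max(|v|+c)$, evaluated over all zones.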
\subsubsection{Isothermal radial wind problem}
The isothermal 1D radial wind problem is performed to analyze the deviations of spatial reconstruction schemes near the origin in curvilinear coordinates \cite{Mignone-2014}.
The general form of the Euler equations in 1D Cartesian / cylindrical$-$radial / spherical$-$radial coordinates can be written as Eq. (\ref{eq:55}).
\begin{equation} \label{eq:55}
\frac{\partial{}}{\partial{t}}
\begin{pmatrix}
\rho \\
\rho v \\
E
\end{pmatrix}
+\frac{1}{\xi^m}\frac{\partial{}}{\partial{\xi}}
\begin{pmatrix}
\rho v \xi^m \\
(\rho v^2+p) \xi^m \\
(E+p)v \xi^m
\end{pmatrix}
=
\begin{pmatrix}
0\\
mp/\xi\\
0
\end{pmatrix}
\end{equation}
where $\rho$ is the mass density, $v$ is the radial velocity, $p$ is the pressure, $E$ is the total energy, and $m=0,1,$ and $2$ for Cartesian, cylindrical$-$radial ($\xi=R$), and spherical$-$radial ($\xi=r$) coordinates, respectively. For an isothermal flow, the energy equation is discarded; otherwise, Eq. (\ref{eq:56}) serves as the adiabatic equation of state (EOS).
\begin{equation} \label{eq:56}
E=\frac{p}{\gamma - 1}+\frac{1}{2} \rho v^2
\end{equation}
where $\gamma=5/3$ is assumed for this case. At $\xi=0$, axisymmetric boundary conditions apply, while at the outer edge the density, pressure, and scaled velocity (${v}/\bar{\xi}$) have zero gradients. The initial conditions are provided in Eq. (\ref{eq:57}), and the interface flux is evaluated with the Lax$-$Friedrichs scheme with a local speed estimate \cite{rusanov1962calculation}.
\begin{equation} \label{eq:57}
\rho(\xi,0)=1;\quad\quad v(\xi,0)=100\xi;\quad\quad p(\xi,0)=1/\gamma
\end{equation}
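The Rusanov (Lax$-$Friedrichs with local speed estimate) interface flux used here can be sketched as follows (an illustrative Python fragment, not the actual code of this work, assuming $\gamma=5/3$ and conserved variables $(\rho,\rho v,E)$):

```python
import numpy as np

GAMMA = 5.0 / 3.0

def phys_flux(U):
    """Physical flux of the 1D Euler system U = (rho, rho*v, E)."""
    rho, mom, E = U
    v = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * v**2)
    return np.array([mom, mom * v + p, (E + p) * v])

def rusanov_flux(UL, UR):
    """Rusanov (local Lax-Friedrichs) flux with a local speed estimate."""
    def max_speed(U):
        rho, mom, E = U
        v = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * v**2)
        return abs(v) + np.sqrt(GAMMA * p / rho)
    s = max(max_speed(UL), max_speed(UR))
    return 0.5 * (phys_flux(UL) + phys_flux(UR)) - 0.5 * s * (UR - UL)
```

The consistency property $F(U,U)=f(U)$ ensures that the dissipative term vanishes for uniform left and right states.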
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig7a.pdf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig7b.pdf}
\end{subfigure}
\caption{Spatial profiles of density $\rho$ (left) and scaled radial velocity $v/\bar{\xi}$ (right) for the isothermal radial wind problem \cite{blondin1993piecewise, Mignone-2014} with constant density after one timestep in cylindrical$-$radial (orange, diamonds) and spherical$-$radial (blue, circles) coordinates. Only the region close to the origin is shown.}
\label{fig:7}
\end{figure}
The computational domain spanning $0\le \xi \le 2$ is divided into $N=100$ zones. The spatial profiles of density ($\rho$; left) and scaled velocity ($v/\bar{\xi}$; right) after one integration step $\Delta t=7 \times 10^{-5}$ are plotted in Fig. \ref{fig:7} for the cylindrical and spherical grids. Here, $\bar{\xi}$ denotes the location of the centroid, as discussed in section \ref{smoothnesslimiter}. Comparison with previously published results \cite{Mignone-2014,blondin1993piecewise} shows that the density and the scaled velocity remain linear and no deviations are observed near the origin.
\subsubsection{Acoustic wave propagation}
A smooth problem involving the nonlinear system of 1D gas dynamical equations is solved to verify the fifth order accuracy. The original problem, introduced by Johnsen and Colonius \cite{johnsen2006implementation}, is adapted to cylindrical and spherical coordinates \cite{wang2017high}. The governing equations and the initial conditions for this test are provided in Eqs. (\ref{eq:55}), (\ref{eq:56}), and (\ref{eq:58}), respectively.
\begin{equation} \label{eq:58}
\rho(r,0)=1+\varepsilon f(r), \quad
u(r,0) = 0,\quad
p(r,0)=1/\gamma+\varepsilon f(r)
\end{equation}
with the perturbation,
\begin{equation} \label{eq:59}
f(r) =
\begin{cases}
\text{$\frac{\sin^4(5\pi r)}{r}$} & \text{if $0.4\le r \le0.6$}\\
0 & \text{otherwise}\\
\end{cases}
\end{equation}
where $\gamma =1.4$. A sufficiently small perturbation amplitude ($\varepsilon=10^{-4}$) yields a smooth solution. The interface flux is evaluated using the Lax$-$Friedrichs scheme with a local speed estimate \cite{rusanov1962calculation}, with a CFL number of $0.3$.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig8a.pdf}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig8b.pdf}
\end{subfigure}
\caption{Spatial profiles of density ($\rho$) for the acoustic wave propagation problem \cite{johnsen2006implementation,wang2017high} at time $t=0.3$ in cylindrical$-$radial (left) and spherical$-$radial (right) coordinates.}
\label{fig:8}
\end{figure}
\begin{table}[h!]
\centering
\caption{$L_1$ norm errors and experimental order of convergence ($EOC$) for the acoustic wave propagation test in cylindrical$-$radial and spherical$-$radial coordinates at $t=0.3$.}
\begin{tabular}{c|cc|cc}
\hline
& \multicolumn{2}{c|}{Cylindrical} &\multicolumn{2}{c}{Spherical} \\\cline{2-3} \cline{4-5}
$N$ & $\epsilon_1(Q)$ & $O_{L_1}$ & $\epsilon_1(Q)$ & $O_{L_1}$\\
\hline
\hline
16 & 1.01E-05 & $-$ & 7.98E-06 & $-$ \\
32 & 4.91E-06 & 1.036 & 3.90E-06 & 1.033 \\
64 & 6.74E-07 & 2.865 & 5.40E-07 & 2.852 \\
128 & 3.24E-08 & 4.380 & 2.59E-08 & 4.383 \\
256 & 1.27E-09 & 4.670 & 1.01E-09 & 4.675
\end{tabular}\label{tab:7}
\end{table}
The initial perturbation splits into two acoustic waves traveling in opposite directions. The final time ($t=0.3$) is chosen such that the waves remain inside the domain and the problem is free from boundary effects. The computational domain of unit length is uniformly divided into $N$ zones, with $N=16, 32, 64, 128, 256$. Although an exact solution is known up to O($\varepsilon^2$), the solution on the finest mesh ($N=1024$) is taken as the reference. The error in density is evaluated from Eq. (\ref{eq:45}). Fig. \ref{fig:8} illustrates the spatial variation of density at $t=0.3$ in cylindrical$-$radial (left) and spherical$-$radial (right) coordinates. The locations of the peaks are the same, but the heights of the peaks differ due to the different one$-$dimensional Jacobians of the two coordinate systems. From Table \ref{tab:7}, it is clear that the scheme approaches the desired fifth order of convergence ($EOC$) in both cases.
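For reference, the tabulated experimental orders of convergence follow from $O_{L_1}=\log_2\left[\epsilon_1(N/2)/\epsilon_1(N)\right]$; the following minimal Python check reproduces the cylindrical column of Table \ref{tab:7} up to the rounding of the tabulated errors:

```python
import math

def eoc(errors):
    """Experimental order of convergence between successive mesh doublings:
    O_{L1} = log2(eps(N/2) / eps(N))."""
    return [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]

# L1 density errors from the cylindrical-radial column of Table 7 (N = 16..256)
cyl_errors = [1.01e-05, 4.91e-06, 6.74e-07, 3.24e-08, 1.27e-09]
orders = eoc(cyl_errors)
```

Small discrepancies in the last digit arise because the tabulated errors are rounded to three significant figures.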
\subsubsection{Sedov explosion test}
The Sedov explosion test is performed to investigate the code's ability to deal with strong shocks and non$-$planar symmetry \cite{fryxell2000flash}. The problem involves the self$-$similar evolution of a cylindrical/spherical blast wave from a localized (delta$-$function) initial pressure perturbation in an otherwise homogeneous medium. The governing equations are the same as in Eq. (\ref{eq:55}). For the code initialization, a dimensionless energy $\epsilon$ ($\epsilon=1$) is deposited into a small region of radius $\delta r$, equal to three cell sizes, at the center. Inside this region, the dimensionless pressure $P^{'}_0$ is given by Eq. (\ref{eq:60}).
\begin{equation}\label{eq:60}
P^{'}_0=\frac{3(\gamma-1)\epsilon}{(m+2)\pi \delta r^{(m+1)}}
\end{equation}
where $\gamma=1.4$ and $m=1,2$ for cylindrical and spherical geometries, respectively. A reflecting boundary condition is employed at the center ($r=0$), whereas no boundary condition is required at $r=1$ for this problem. The initial velocity and density inside the domain are 0 and 1, respectively, and the initial pressure everywhere except the kernel is $10^{-5}$. Due to the reflecting boundary condition at the center, the high$-$pressure region (kernel) consists of 6 cells, i.e., 3 ghost cells and 3 interior cells. As the source term is very stiff, the CFL number is set to $0.1$. The final time is $t=0.05$. For the self$-$similar blast wave that develops, analytical results are available in the literature \cite{fryxell2000flash,kamm2007efficient}.
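Eq. (\ref{eq:60}) is straightforward to evaluate numerically; a minimal Python sketch is given below (the kernel radius $\delta r=0.03$ used in the note afterwards is an illustrative value, corresponding to three cells of a 100-zone unit domain):

```python
import math

def sedov_kernel_pressure(eps, delta_r, m, gamma=1.4):
    """Dimensionless kernel pressure of Eq. (60):
    P0' = 3 (gamma - 1) eps / ((m + 2) pi delta_r**(m + 1)),
    with m = 1 (cylindrical) or m = 2 (spherical)."""
    return 3.0 * (gamma - 1.0) * eps / ((m + 2) * math.pi * delta_r ** (m + 1))
```

For example, with $\epsilon=1$ and $\delta r=0.03$ this gives $P^{'}_0\approx141.5$ in cylindrical geometry and $P^{'}_0\approx3537$ in spherical geometry.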
\begin{figure}[]
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig9a.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig9b.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig9c.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig9d.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig9e.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig9f.pdf}
\end{minipage}
\caption{Variation of density (first row), velocity (second row), and pressure (third row) with the radius for cylindrical$-$radial (left column) and spherical$-$radial (right column) coordinates for the Sedov explosion test \cite{fryxell2000flash,wang2017high}. The domain is restricted to $R\le0.4$ for the sake of clarity.}
\label{fig:9}
\end{figure}
Fig. \ref{fig:9} shows the variations of density, velocity, and pressure with radius on uniform grids ($N=100,200$) in 1D cylindrical$-$radial and spherical$-$radial coordinates along with the analytical values \cite{kamm2007efficient}. The peak values of pressure, velocity, and density show behavior similar to that reported in \cite{wang2017high}, but the locations of the shocks differ due to different values of $\epsilon$ and of the final time.
\subsubsection{Sod test}
The Sod test \cite{sod1978survey} is considered in 1D cylindrical$-$radial, 1D spherical$-$radial, and 2D cylindrical ($r-\theta$) coordinates. For the 1D radial cases, the governing equations are given in Eq. (\ref{eq:55}), while the governing equations for cylindrical ($r-\theta$) coordinates are given in Eq. (\ref{eq:61}).
\begin{equation} \label{eq:61}
\frac{\partial{}}{\partial{t}}
\begin{pmatrix}
\rho \\
\rho v_r \\
\rho v_{\theta} \\
\rho e
\end{pmatrix}
+
\frac{1}{r}
\frac{\partial{}}{\partial{r}}
\begin{pmatrix}
\rho v_{r}r\\
(\rho v_{r}^2+p)r \\
\rho v_r v_{\theta} r \\
(\rho e+p)v_r r
\end{pmatrix}
+\frac{1}{r}\frac{\partial{}}{\partial{\theta}}
\begin{pmatrix}
\rho v_{\theta}\\
\rho v_r v_{\theta} \\
\rho v_{\theta}^2 + p \\
(\rho e+p)v_{\theta}
\end{pmatrix}
=
\begin{pmatrix}
0 \\
(p+\rho v_{\theta}^2)/r \\
-\rho v_r v_{\theta}/r \\
0
\end{pmatrix}
\end{equation}
where the terms $(\rho v_{\theta}^2)/r$ and $(\rho v_r v_{\theta})/r$ are related to the centrifugal and Coriolis forces, respectively. In this problem, the interface flux is evaluated with the HLL Riemann solver \cite{harten1997upstream}. The initial condition consists of two regions (left and right states) separated by a diaphragm at $r=0.5$, as provided in Eq. (\ref{eq:62}).
\begin{equation} \label{eq:62}
\begin{pmatrix}
\rho\\
v_{r} \\
v_{\theta} \\
p
\end{pmatrix}_L
=
\begin{pmatrix}
1 \\
0 \\
0 \\
1
\end{pmatrix};
\quad \quad
\begin{pmatrix}
\rho\\
v_{r} \\
v_{\theta} \\
p
\end{pmatrix}_R
=
\begin{pmatrix}
0.125 \\
0 \\
0 \\
0.1
\end{pmatrix}
\end{equation}
The computational domain ($0\le r \le 1$) for the 1D tests is uniformly divided into $N$ zones ($N=100, 500$), while for the 2D test the computational domain ($0\le r \le 1$, $0\le \theta \le \pi/2$) is uniformly divided into $100 \times 100$ zones in the corresponding directions. No boundary conditions are required for the 1D cases; for the 2D case, the conserved variables are symmetric at $r=0$ (except the radial velocity, which is antisymmetric), and outflow boundary conditions are applied at all other boundaries ($r=1$, $\theta=0$, and $\theta=\pi/2$). The computation is performed until $t=0.2$ with a CFL number of $0.3$. For the first order and second order (MUSCL \cite{van1979towards}) spatial reconstructions, Euler time marching and the MacCormack (predictor$-$corrector) scheme \cite{maccormack1982numerical} are respectively employed.
\begin{figure}[]
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig10a.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig10b.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig10c.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig10d.pdf}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig10e.pdf}
\end{minipage}
\hfill
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{fig10f.pdf}
\end{minipage}
\caption{Variation of density (first row), velocity (second row), and pressure (third row) with the radius at $t=0.2$ for cylindrical (left column) and spherical$-$radial (right column) coordinates for the modified Sod test \cite{sod1978survey,wang2017high}.}
\label{fig:10}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig11.png}
\caption{Variation of density with the radius at $t=0.2$ for cylindrical ($r-\theta$) coordinates in the Cartesian plane for the modified Sod test \cite{sod1978survey,wang2017high}.}
\label{fig:11}
\end{figure}
Fig. \ref{fig:10} shows the spatial profiles of density, velocity, and pressure for the Sod test in 1D/2D cylindrical coordinates (left) and 1D spherical$-$radial coordinates (right). WENO$-$C performs better than the first order and second order (MUSCL \cite{van1979towards}) reconstruction techniques. The 2D results exactly overlap the 1D results in cylindrical coordinates. Fig. \ref{fig:11} shows the spatial variation of the density in the 2D Cartesian plane at $t=0.2$. Compared with the results obtained with a fifth order finite difference WENO scheme \cite{wang2017high}, WENO$-$C yields similar but less oscillatory results.
\subsubsection{Modified 2D Riemann problem in cylindrical (R$-$z) coordinates}
The final test for the present scheme involves a modified 2D Riemann problem in cylindrical ($R-z$) coordinates, as illustrated in Fig. \ref{fig:12}. The problem corresponds to configuration 12 of \cite{lax1998solution}, with two contact discontinuities and two shocks as the initial condition, resulting in the formation of a self$-$similar structure propagating towards the low$-$density, low$-$pressure region (region 3). To make the problem symmetric about the origin, the original problem \cite{lax1998solution} is rotated by 45 degrees in the clockwise direction. The governing equations are provided in Eq. (\ref{eq:63}).
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig12.pdf}
\caption{A schematic of the modified 2D Riemann problem in cylindrical ($R-z$) coordinates.}
\label{fig:12}
\end{figure}
\begin{equation} \label{eq:63}
\frac{\partial{}}{\partial{t}}
\begin{pmatrix}
\rho \\
\rho v_R \\
\rho v_z \\
\rho e
\end{pmatrix}
+
\frac{1}{R}
\frac{\partial{}}{\partial{R}}
\begin{pmatrix}
\rho v_{R} R\\
(\rho v_{R}^2+p)R \\
\rho v_R v_{z}R \\
(\rho e+p)v_R R
\end{pmatrix}
+\frac{\partial{}}{\partial{z}}
\begin{pmatrix}
\rho v_{z}\\
\rho v_R v_{z} \\
\rho v_{z}^2 + p \\
(\rho e+p)v_{z}
\end{pmatrix}
=
\begin{pmatrix}
0 \\
p/R \\
0 \\
0
\end{pmatrix}
\end{equation}
\begin{figure}[]
\centering
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{fig13a.pdf}
\end{minipage}
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{fig13b.pdf}
\end{minipage}
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{fig13c.pdf}
\end{minipage}
\hfill
\caption{Density contours with different reconstruction techniques (first order (top), second order MUSCL \cite{van1979towards} (middle), and WENO$-$C (bottom)) at $t=0.2$ for the modified Riemann problem in cylindrical ($R-z$) coordinates.}
\label{fig:13}
\end{figure}
The computations are performed until $t=0.2$ with a CFL number of $0.5$ on a domain ($R,z$)$=$[0,1]$\times$[0,1] divided into 500$\times$500 zones. The boundary conditions include symmetry at the center (except for the antisymmetric radial velocity) and outflow elsewhere. For the first order and second order (MUSCL \cite{van1979towards}) spatial reconstructions, Euler time marching and the MacCormack (predictor$-$corrector) scheme \cite{maccormack1982numerical} are respectively employed. Compared with the first order and second order MUSCL reconstructions, rich small$-$scale structures in the contact$-$contact region (region 1) can be observed in Fig. \ref{fig:13} for the WENO$-$C reconstruction; the structures are heavily smeared in the first order case.
\section{Conclusions} \label{conclusions}
The fifth order finite volume WENO$-$C reconstruction scheme provides a general framework in orthogonally$-$curvilinear coordinates to achieve high order spatial accuracy with minimal computational cost. Analytical values of the linear weights, optimal weights, weights for mid$-$point interpolation, and flux/source term integration are derived for the standard grids. The proposed reconstruction scheme can be applied to both regularly$-$spaced and irregularly$-$spaced grids. A grid$-$independent smoothness indicator is derived from the basic definition. For uniform grids, the analytical values in Cartesian, cylindrical$-$radial, and spherical$-$radial coordinates conform to WENO$-$JS as $R\to \infty$. A simple and computationally efficient extension to multiple dimensions is employed. 1D scalar advection tests are performed in curvilinear coordinates on regularly$-$spaced and irregularly$-$spaced grids, followed by several smooth and discontinuous flow test cases in 1D spherical coordinates and 1D/2D cylindrical coordinates, which confirm the fifth order accuracy and the ENO property of the scheme. For the multi$-$dimensional test case, only the interface values are used to integrate the source term, while for the 1D test cases mid$-$point values are also used. As a final note, we emphasize that the present scheme can be extended to arbitrary order of accuracy and to different multi$-$dimensional reconstruction techniques.
\section{Acknowledgement}
The current research is supported by the Hong Kong Research Grants Council (16207715, 16206617) and the National Natural Science Foundation of China (11772281, 91530319).
\section{Introduction}
The violation of charge-parity (CP) symmetry is a crucial concept in particle physics and serves as an indispensable ingredient to dynamically generate the matter-antimatter asymmetry in our universe~\cite{Sakharov:1967dj,Bodeker:2020ghk}. In the Standard Model (SM), CP violation has been observed in the quark sector~\cite{Christenson:1964fg,KTeV:1999kad,BaBar:2001pki}. Moreover, since the neutrino oscillation experiments have firmly established that neutrinos are massive and lepton flavors are significantly mixed~\cite{Xing:2020ijf}, CP violation is also expected in the leptonic sector~\cite{Branco:2011zb}, which is the primary goal of the future long-baseline accelerator neutrino oscillation experiments~\cite{Hyper-KamiokandeProto-:2015xww,DUNE:2015lol,Hyper-Kamiokande:2016srs,T2K:2018rhz}.
From the theoretical point of view, the violation of CP symmetry in the fermionic sector in a specific model comes from the complex couplings in the Lagrangian.\footnote{For the bosonic sources of CP violation, such as the $\theta$-term in quantum chromodynamics (QCD), the condition for CP conservation is trivially the vanishing of all couplings of CP-violating terms. In this paper, we only consider the fermionic sources of CP violation.} However, one should keep in mind that the couplings are \emph{not} invariant under the basis transformation in the flavor space. Thus the sufficient and necessary condition for CP conservation should be: \emph{It is possible to find a specific flavor basis such that in this basis every coupling parameter in the Lagrangian is real.} This criterion suffers from the flavor-basis dependence that one has to change the values of parameters from one flavor basis to another. Therefore, it is well motivated to introduce some quantities composed of the coupling parameters in the Lagrangian that are invariant under the flavor-basis transformations and thus one only needs to calculate these basis-independent quantities to judge whether there is CP violation in a given model. This is exactly the reason why flavor invariants are physically interesting. Furthermore, since any physical observables calculated from the parameters in the Lagrangian must be independent of the flavor basis, it will be helpful to express them as some functions of flavor invariants.
The first flavor invariant was constructed by Jarlskog~\cite{Jarlskog:1985ht,Jarlskog:1985cw,Jarlskog:1986mm} to characterize the CP violation in the quark sector. As is well known, the CP-violating phase in the Cabibbo-Kobayashi-Maskawa (CKM) matrix~\cite{Kobayashi:1973fv} appears in the quark charged-current interaction, leading to the CP violation in the neutral meson systems. Although the Yukawa coupling matrices $Y_{\rm u}^{}$ and $Y_{\rm d}^{}$ of up- and down-type quarks are not invariant under the flavor-basis transformations, it is possible to define the following basis-independent quantity~\cite{Jarlskog:1985ht, Jarlskog:1985cw, Jarlskog:1986mm}
\begin{eqnarray}
\label{eq:Jarlskog}
J\equiv {\rm Det}\left(\left[Y_{\rm u}^{}Y_{\rm u}^\dagger,Y_{\rm d}^{}Y_{\rm d}^\dagger\right]\right)=\frac{1}{3}{\rm Tr}\left(\left[Y_{\rm u}^{}Y_{\rm u}^\dagger,Y_{\rm d}^{}Y_{\rm d}^\dagger\right]_{}^3\right) \;.
\end{eqnarray}
It can be checked in general that any CP-violating observable in the quark sector is proportional to $J$, so the vanishing of $J$ is equivalent to the absence of CP violation in the quark sector.
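The equality of the two expressions in Eq.~(\ref{eq:Jarlskog}) follows from the Cayley-Hamilton theorem applied to the traceless $3\times 3$ commutator. A quick numerical check (illustrative only, with random Hermitian matrices standing in for $Y_{\rm u}^{}Y_{\rm u}^\dagger$ and $Y_{\rm d}^{}Y_{\rm d}^\dagger$):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_hermitian(n=3):
    # random complex matrix, Hermitized
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

# Stand-ins for Y_u Y_u^dagger and Y_d Y_d^dagger (any Hermitian matrices work)
Hu, Hd = random_hermitian(), random_hermitian()
C = Hu @ Hd - Hd @ Hu  # commutator: traceless and anti-Hermitian
J_det = np.linalg.det(C)
J_tr = np.trace(C @ C @ C) / 3.0
```

Since $C$ is anti-Hermitian and traceless, both expressions are purely imaginary, so $J$ as usually quoted is the imaginary part of this quantity up to a convention-dependent factor.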
The application of flavor invariants to studying CP violation was later generalized to arbitrary generations of quarks~\cite{Branco:1986quark} and to the leptonic sector~\cite{Branco:1986lepton}. The situation becomes more complicated in the leptonic sector if neutrinos are Majorana particles~\cite{Majorana:1937vz,Racah:1937qq}, because there are two extra Majorana-type CP phases entering the lepton flavor mixing matrix, i.e., the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix~\cite{Pontecorvo:1957cp, Maki:1962mu}. In this case, the minimal sufficient and necessary conditions for CP conservation in the leptonic sector are the vanishing of three flavor invariants~\cite{Dreiner:2007yz, Yu:2019ihs, Yu:2020xyy}, which are analogous to $J$ in Eq.~(\ref{eq:Jarlskog}). Moreover, if neutrino masses were degenerate, there would be redundant degrees of freedom and the number of conditions to guarantee CP conservation would accordingly be reduced~\cite{Branco:1986lepton,Branco:1998bw,Mei:2003gu,Yu:2020gre}.
It is always possible to construct an infinite number of invariants in the flavor space though not all of them are independent. In fact, since the addition or multiplication of any two flavor invariants is also a flavor invariant, all of them form a \emph{ring} in the flavor space in the sense of algebraic structure. It was noticed in Refs.~\cite{Manohar:2009dy,Manohar:2010vu} that the Hilbert series (HS) in the invariant theory~\cite{sturmfels2008algorithms,derksen2015computational} provides a powerful tool in investigating the algebraic structure of the invariants in the flavor space and establishing the relations between flavor invariants and physical parameters.\footnote{The HS has also been widely applied to counting the number of independent gauge- and Lorentz-invariant effective operators of a certain mass dimension in the effective theories~\cite{Lehman:2015via, Henning:2015daa, Lehman:2015coa, Henning:2015alf, Henning:2017fpj, Graf:2020yxt}. In addition, the construction of the flavor invariants in the scalar sector of multi-Higgs-doublet models were thoroughly studied and the corresponding HS were calculated in the literature~\cite{Trautner:2018ipq, Trautner:2020qqo, Bento:2020jei, Bento:2021hyo}. The explicit and spontaneous breaking of a flavor group to its subgroups were also studied using the HS, see Ref.~\cite{Merle:2011vy}.} The maximum number of the algebraically-independent invariants in the flavor space, namely, the \emph{primary} invariants, is equal to the number of independent physical parameters in the theory~\cite{Manohar:2009dy,Manohar:2010vu}. Furthermore, as long as the symmetry group of the flavor space is reductive, the ring of invariants can be finitely generated~\cite{sturmfels2008algorithms, derksen2015computational}, which means there exist a finite number of \emph{basic} invariants such that any invariant in the flavor space can be expressed as the polynomial of the basic ones. 
The plethystic program~\cite{Hanany:2006qr} provides a convenient method to find out all the basic invariants as well as the polynomial identities (known as syzygies) among them. In the type-I seesaw model~\cite{Minkowski:1977sc, Yanagida:1979as, Gell-Mann:1979vob, Glashow:1979nm, Mohapatra:1979ia} and its low-energy effective theory with only the dimension-five Weinberg operator~\cite{Weinberg:1979sa}, the HS in the flavor space have been calculated and the basic flavor invariants are explicitly constructed~\cite{Manohar:2009dy, Manohar:2010vu, Wang:2021wdq, Yu:2021cco}. Moreover, the renormalization-group equations of all the basic flavor invariants in the effective theory have been derived as well~\cite{Wang:2021wdq}.
Recently, the number of independent CP-violating phases in the Standard Model effective field theory (SMEFT) with operators of mass dimension $d\leqslant6$~\cite{Buchmuller:1985jz,Grzadkowski:2010es,Trott:2017vri} has been systematically counted in Ref.~\cite{Bonnefoy:2021tbt}. Furthermore, an equal number of CP-odd flavor invariants are explicitly constructed, and the vanishing of these CP-odd invariants guarantees CP conservation in the SMEFT up to dimension six. However, the $d=5$ Weinberg operator~\cite{Weinberg:1979sa} is not included, so the flavor mixing and CP violation in the leptonic sector are ignored in Ref.~\cite{Bonnefoy:2021tbt}. In this paper, we aim to explore the leptonic CP violation using the language of invariant theory in the framework of seesaw effective field theory (SEFT) at the tree-level matching,\footnote{The full one-loop matching of the type-I seesaw model onto the SMEFT has been accomplished recently~\cite{Zhang:2021jdf, Ohlsson:2022hfl, Du:2022vso}, where 31 dimension-six operators in the Warsaw basis of the SMEFT are involved. However, as we shall demonstrate later, the Wilson coefficients of ${\cal O}_5^{\alpha\beta}$ and ${\cal O}_6^{\alpha\beta}$ are already adequate to incorporate all physical information about the full seesaw model. Therefore, we will restrict ourselves within the SEFT at the tree-level matching.} which includes the $d=5$ Weinberg operator ${\cal O}^{\alpha \beta}_5 = \overline{\ell^{}_{\alpha \rm L}} \widetilde{H} \widetilde{H}^{\rm T} \ell^{\rm C}_{\beta \rm L}$ and one $d=6$ operator ${\cal O}^{\alpha \beta}_6 =\left(\overline{\ell^{}_{\alpha \rm L}} \widetilde{H}\right)i\slashed{\partial}\left(\widetilde{H}^\dagger \ell^{}_{\beta \rm L}\right)$~\cite{Yu:2022nxj}, where $\alpha, \beta = e, \mu, \tau$ are lepton flavor indices. The main motivation for such an exploration is three-fold.
First, as has been mentioned before, the flavor mixing and CP violation in the leptonic sector are switched off in Ref.~\cite{Bonnefoy:2021tbt}. However, neutrino oscillation experiments have revealed that neutrinos are indeed massive. In the spirit of the SMEFT, the most natural way to explain the nonzero neutrino masses is to introduce the $d=5$ Weinberg operator ${\cal O}_5^{}$~\cite{Weinberg:1979sa}, which accounts for Majorana neutrino masses after the spontaneous gauge symmetry breaking. Once ${\cal O}_5^{}$ is included, there will be extra sources of CP violation from the leptonic sector, which are observable in the low-energy oscillation experiments~\cite{Xing:2013ty, Xing:2013woa, Wang:2021rsi}. Therefore, it is \emph{necessary} to take into account the source of CP violation from ${\cal O}_5^{}$ apart from the $d=6$ operators.
Second, it has been shown that there are totally 699+1+1+6 independent CP phases in the SMEFT with operators of $d\leqslant6$~\cite{Bonnefoy:2021tbt}, the number of which is too large to receive restrictive constraints from low-energy experiments. If the Weinberg operator is included and the sources of CP violation from the leptonic sector are considered, the number of CP-violating phases will be further increased. Instead, we take in this paper the ultraviolet (UV) theory to be the type-I seesaw model, which is one of the most natural models to simultaneously explain nonzero neutrino masses and generate the cosmological matter-antimatter asymmetry via leptogenesis~\cite{Fukugita:1986hr}, and explore the sources of CP violation at the low-energy scale.
Finally, it is well known that there are six independent CP phases in the type-I seesaw model in the three-generation case, which appear in the CP asymmetries in the decays of right-handed (RH) neutrinos. The inclusion of ${\cal O}_5^{}$ at the low-energy scale introduces only three CP phases in the PMNS matrix, which is obviously not enough to incorporate all the CP-violating sources in the full theory. It was first noted in Refs.~\cite{Broncano:2002rw,Broncano:2003fq} that the simultaneous inclusion of ${\cal O}_5^{}$ and ${\cal O}_6^{}$ reproduces the same number of independent physical parameters as in the full seesaw model if the number of RH neutrinos matches that of active neutrinos. This observation indicates that both ${\cal O}_5^{}$ and ${\cal O}_6^{}$ are already \emph{adequate} to cover all physical information about the full theory. In this paper, we will show that this is indeed the case through the language of invariant theory. In particular, we shall establish the intimate connection between the invariant ring of the flavor space in the effective theory and that in the full theory. The matching between the flavor invariants in the SEFT and those in the full seesaw model will be accomplished by a proper procedure. Through the matching of the flavor invariants, one can directly link the CP asymmetries in leptogenesis to those in neutrino-neutrino and neutrino-antineutrino oscillations at low energies in a basis-independent way.
The remaining part of this paper is structured as follows. In Sec.~\ref{sec:framework}, we first define the flavor-basis transformation in the SEFT and set up our formalism and notations. In Sec.~\ref{sec:construction2g} and Sec.~\ref{sec:construction3g} we systematically study the algebraic structure of the invariant ring in the SEFT and construct the flavor invariants using the tool of invariant theory for the two- and three-generation case, respectively. Moreover, we will explain how to extract all physical parameters in the SEFT from the primary flavor invariants. The minimal sufficient and necessary conditions for CP conservation are also given in terms of CP-odd flavor invariants. Phenomenological applications of the flavor invariants are discussed in Sec.~\ref{sec:observables}, where we illustrate how to express a general CP-violating observable as the linear combination of CP-odd flavor invariants. In Sec.~\ref{sec:matching} we explore the connection between the SEFT and the full seesaw model through the matching between flavor invariants at low- and high-energy scales. The intimate relationship between two sets of basic flavor invariants will be revealed. Our main conclusions are summarized in Sec.~\ref{sec:summary}. Finally, some indispensable details are presented in two appendices. In Appendix~\ref{app:HS} we calculate the HS in the SEFT while in Appendix~\ref{app:matching} the matching procedure between flavor invariants in the SEFT and those in the full theory is given.
\section{Flavor-basis transformation in the SEFT}
\label{sec:framework}
The type-I seesaw model extends the SM by introducing three RH neutrinos $N_{\rm R}^{}$, which are singlets under the SM gauge symmetry. The Lagrangian of the type-I seesaw reads
\begin{eqnarray}
\label{eq:full lagrangian}
{\cal L}_{\rm seesaw}^{}={\cal L}_{\rm SM}^{}+\overline{N_{\rm R}^{}}{\rm i}\slashed{\partial}N_{\rm R}^{}-\left[\overline{\ell_{\rm L}^{}}Y_\nu^{}\widetilde{H}N_{\rm R}^{}+\frac{1}{2}\overline{N_{\rm R}^{\rm C}}M_{\rm R}^{}N_{\rm R}^{}+{\rm h.c.}\right]\;,
\end{eqnarray}
where ${\cal L}_{\rm SM}$ is the SM Lagrangian, ${\ell }_{\rm L}^{}\equiv \left(\nu_{\rm L}^{},l_{\rm L}^{}\right)_{}^{\rm T}$ and $\widetilde{H}\equiv {\rm i}\sigma^2 H_{}^{*}$ stand for the left-handed lepton doublet and the Higgs doublet, respectively. In addition, $Y_\nu^{}$ denotes the Dirac neutrino Yukawa coupling matrix and $M_{\rm R}^{}$ is the Majorana mass matrix of RH neutrinos. Note that $N_{\rm R}^{\rm C}\equiv {\cal C} \overline{N_{\rm R}^{}}^{\rm T}$ has been defined with ${\cal C}\equiv {\rm i}\gamma_{}^2\gamma_{}^0$ being the charge-conjugation matrix.
If the mass scale of RH neutrinos $\Lambda={\cal O}\left(M_{\rm R}^{}\right)$ is much higher than the electroweak scale characterized by the vacuum expectation value $v \approx 246\, {\rm GeV}$ of the Higgs field, one can integrate $N_{\rm R}^{}$ out at the tree level to obtain the low-energy effective theory. The effective Lagrangian to the order of ${\cal O}\left(1/\Lambda^2\right)$ turns out to be
\begin{eqnarray}
\label{eq:effective lagrangian}
{\cal L}_{\rm SEFT}^{}={\cal L}_{\rm SM}^{}-\left(\frac{C_5^{}}{2\Lambda} {\cal O}_5^{}+{\rm h.c.}\right)+\frac{C_6^{}}{\Lambda^2}{\cal O}_6^{}\;,
\end{eqnarray}
with
\begin{eqnarray}
{\cal O}_5^{}=\overline{\ell_{\rm L}^{}}\widetilde{H}\widetilde{H}_{}^{\rm T}\ell_{\rm L}^{\rm C}\;,\quad
{\cal O}_6^{}=\left(\overline{\ell_{\rm L}^{}}\widetilde{H}\right){\rm i}\slashed{\partial}\left(\widetilde{H}_{}^\dagger\ell_{\rm L}^{}\right)\;.
\end{eqnarray}
Note that the lepton flavor indices have been suppressed. At the tree-level matching, the Wilson coefficients are related to the Yukawa couplings in the full theory as
\begin{eqnarray}
\label{eq:wilson coe}
C_5^{}=-Y_\nu^{}Y_{\rm R}^{-1}Y_\nu^{\rm T}\;, \quad
C_6^{}=Y_\nu^{} \left(Y_{\rm R}^{\dagger}Y_{\rm R}^{}\right)_{}^{-1}Y_\nu^\dagger\;,
\end{eqnarray}
where we have defined the dimensionless quantity $Y_{\rm R}^{}\equiv M_{\rm R}^{}/\Lambda$. Given the full Lagrangian in Eq.~(\ref{eq:full lagrangian}), one can perform a general unitary transformation in the flavor space of the leptonic sector
\begin{eqnarray}
\label{eq:field trans}
\ell_{\rm L}^{}\to \ell_{\rm L}^{\prime}=U_{\rm L}^{}\ell_{\rm L}^{}\;,\quad
l_{\rm R}^{}\to l_{\rm R}^{\prime}=V_{\rm R}l_{\rm R}^{}\;,\quad
N_{\rm R}^{}\to N_{\rm R}^{\prime}=U_{\rm R}^{}N_{\rm R}^{}\;,
\end{eqnarray}
where $l_{\rm R}^{}$ is the RH charged-lepton field, and $U_{\rm L}^{}, V_{\rm R}^{} \in {\rm U}(m)$ and $U_{\rm R}^{} \in {\rm U}(n)$ (for $m$ generations of lepton doublets and $n$ generations of RH neutrinos) are three arbitrary unitary matrices. Then Eq.~(\ref{eq:full lagrangian}) is unchanged if we treat the Yukawa coupling matrices as spurions, namely, as spurion fields that transform as
\begin{eqnarray}
\label{eq:Yukawa trans}
Y_l^{} \to Y_l^\prime=U_{\rm L}^{}Y_l^{}V_{\rm R}^\dagger\;,\quad
Y_\nu^{} \to Y_\nu^\prime=U_{\rm L}^{}Y_\nu^{}U_{\rm R}^\dagger\;,\quad
Y_{\rm R}^{} \to Y_{\rm R}^\prime=U_{\rm R}^* Y_{\rm R}^{}U_{\rm R}^\dagger\;,
\end{eqnarray}
where $Y_l^{}$ denotes the charged-lepton Yukawa coupling matrix. At the matching scale, the transformation in Eq.~(\ref{eq:Yukawa trans}) together with Eq.~(\ref{eq:wilson coe}) induces the transformation of Wilson coefficients in the flavor space of the SEFT, i.e.,
\begin{eqnarray}
\label{eq:wilson coe trans}
C_5^{}\to C_5^\prime=U_{\rm L}^{}C_5^{}U_{\rm L}^{\rm T}\;,\quad
C_6^{}\to C_6^\prime=U_{\rm L}^{}C_6^{}U_{\rm L}^\dagger\;.
\end{eqnarray}
It is easy to verify that the SEFT Lagrangian in Eq.~(\ref{eq:effective lagrangian}) is also unchanged under the transformation in Eqs.~(\ref{eq:field trans})-(\ref{eq:wilson coe trans}), as it should be.
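This consistency can also be checked numerically from the tree-level matching relations in Eq.~(\ref{eq:wilson coe}); a minimal sketch with random $3\times3$ matrices (illustrative, using numpy):

```python
import numpy as np

rng = np.random.default_rng(42)

def rand_complex(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def rand_unitary(n):
    # QR decomposition of a random complex matrix gives a random unitary
    Q, _ = np.linalg.qr(rand_complex(n))
    return Q

n = 3
Ynu = rand_complex(n)                    # Dirac neutrino Yukawa matrix
A = rand_complex(n)
YR = A + A.T                             # complex symmetric Majorana block

def wilson(Ynu, YR):
    # Tree-level matching: C5 = -Ynu YR^{-1} Ynu^T, C6 = Ynu (YR^dag YR)^{-1} Ynu^dag
    C5 = -Ynu @ np.linalg.inv(YR) @ Ynu.T
    C6 = Ynu @ np.linalg.inv(YR.conj().T @ YR) @ Ynu.conj().T
    return C5, C6

UL, UR = rand_unitary(n), rand_unitary(n)
C5, C6 = wilson(Ynu, YR)
# Transformed couplings: Ynu -> UL Ynu UR^dag, YR -> UR^* YR UR^dag
C5p, C6p = wilson(UL @ Ynu @ UR.conj().T, UR.conj() @ YR @ UR.conj().T)

assert np.allclose(C5p, UL @ C5 @ UL.T)          # C5 -> UL C5 UL^T
assert np.allclose(C6p, UL @ C6 @ UL.conj().T)   # C6 -> UL C6 UL^dag
print("covariance verified")
```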
Now it is obvious that the building blocks for the construction of flavor invariants in the SEFT are $\left\{X_l^{}\equiv Y_l^{}Y_l^\dagger,C_5^{},C_6^{}\right\}$ with the symmetry group ${\rm U}(m)$, while the building blocks in the full seesaw model are $\left\{Y_l^{},Y_\nu^{},Y_{\rm R}^{}\right\}$ with the symmetry group ${\rm U}(m)\otimes{\rm U}(n)$.\footnote{Notice that the flavor transformation of RH charged-lepton fields [i.e., $V_{\rm R}^{}$ in Eq.~(\ref{eq:field trans})] is unphysical. Therefore one may use $Y_l^{}Y_l^\dagger$ as a building block instead of $Y_l^{}$ when constructing flavor invariants and calculating the HS.} Note that by flavor invariants, we refer to the \emph{polynomial} matrix invariants constructed from the building blocks that are unchanged under the unitary transformation in the flavor space~\cite{procesi1976invariant,procesi2017invariant}.
\underline{\emph{Notations for flavor invariants}}: Throughout this paper, we use ${\cal I}_{abc}^{}$ (or ${\cal J}_{abc}$) to label the flavor invariant with the degrees $\left\{a,b,c\right\}$ of the building blocks $\left\{X_l^{},C_5^{},C_6^{}\right\}$ for the two- (or three-) generation case in the SEFT. On the other hand, $I_{abc}^{}$ (or $J_{abc}^{}$) will be used to label the flavor invariant with the degrees $\left\{a,b,c\right\}$ of the building blocks $\left\{Y_l^{},Y_\nu^{},Y_{\rm R}^{}\right\}$ for the two- (or three-) generation case in the full seesaw model. Here $a,b,c$ are all non-negative integers.
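As an illustration, any trace built from these building blocks is indeed unchanged under the flavor-basis transformation in Eq.~(\ref{eq:wilson coe trans}); a quick numerical sketch for ${\rm Tr}\left(X_l^{}X_5^{}C_6^{}\right)$ with $X_5^{}\equiv C_5^{}C_5^\dagger$ and random $3\times3$ matrices (illustrative, using numpy):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_complex(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

n = 3
A = rand_complex(n); Xl = A + A.conj().T      # Hermitian X_l
B = rand_complex(n); C5 = B + B.T             # complex symmetric C5
C = rand_complex(n); C6 = C + C.conj().T      # Hermitian C6
U, _ = np.linalg.qr(rand_complex(n))          # random unitary flavor rotation

def inv121(Xl, C5, C6):
    X5 = C5 @ C5.conj().T
    return np.trace(Xl @ X5 @ C6)

I = inv121(Xl, C5, C6)
# Transform the building blocks: Xl -> U Xl U^dag, C5 -> U C5 U^T, C6 -> U C6 U^dag
Ip = inv121(U @ Xl @ U.conj().T, U @ C5 @ U.T, U @ C6 @ U.conj().T)
assert np.isclose(I, Ip)    # real and imaginary parts both invariant
print("invariant under flavor rotation")
```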
\section{Algebraic structure of the SEFT flavor invariants: Two-generation case}
\label{sec:construction2g}
Let us start with the case of only two-generation leptons, which is unrealistic but very instructive for the study of the three-generation case.
The first step is to compute the HS in the flavor space, which encodes all information about the ring of invariants. Given the transformation rules of the building blocks in Eq.~(\ref{eq:wilson coe trans}) with $m=2$, one can calculate the HS using the Molien-Weyl (MW) formula~\cite{molien1897invarianten,weyl1926darstellungstheorie} (see Appendix~\ref{app:HS} for details)
\begin{eqnarray}
\label{eq:HS eff 2g main}
{\mathscr H}_{\rm SEFT}^{(2\rm g)}(q)=\frac{1+3q^4+2q^5+3q^6+q^{10}}{\left(1-q\right)^2\left(1-q^2\right)^4\left(1-q^3\right)^2\left(1-q^4\right)^2}\;.
\end{eqnarray}
The numerator of the HS in Eq.~(\ref{eq:HS eff 2g main}) exhibits the expected palindromic structure, while the denominator contains 10 factors, which implies that the maximum number of algebraically-independent invariants (i.e., primary invariants) in the flavor space is 10. This number, as a highly nontrivial result, is also equal to the number of independent physical parameters in the two-generation SEFT.
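This counting can be reproduced mechanically from Eq.~(\ref{eq:HS eff 2g main}) by expanding the plethystic logarithm ${\rm PL}\left[{\mathscr H}\right](q)=\sum_{k\geq 1}\mu(k)\ln{\mathscr H}(q^k)/k$ (with $\mu$ the M\"obius function) in powers of $q$; a minimal sketch using sympy:

```python
import sympy as sp

q = sp.symbols('q')

def mobius(n):
    # Moebius function via the prime factorization of n
    f = sp.factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

# Hilbert series of the two-generation SEFT invariant ring
N = 1 + 3*q**4 + 2*q**5 + 3*q**6 + q**10
D = (1 - q)**2 * (1 - q**2)**4 * (1 - q**3)**2 * (1 - q**4)**2
H = N / D

order = 8
# Plethystic logarithm: PL[H](q) = sum_k mu(k)/k * log H(q^k)
PL = sum(sp.Rational(mobius(k), k) * sp.log(H.subs(q, q**k))
         for k in range(1, order + 1))
ser = sp.expand(sp.series(PL, q, 0, order + 1).removeO())
coeffs = [ser.coeff(q, i) for i in range(1, order + 1)]
print(coeffs)  # [2, 4, 2, 5, 2, 3, 0, -6]
```

The printed coefficients reproduce the leading terms of the PL series quoted in Eq.~(\ref{eq:PL eff 2g main}).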
In order to find out the generators of the invariant ring, one can substitute Eq.~(\ref{eq:HS eff 2g main}) into Eq.~(\ref{eq:PL def}) to calculate the plethystic logarithm (PL) function,
\begin{eqnarray}
\label{eq:PL eff 2g main}
{\rm PL}\left[{\mathscr H}_{\rm SEFT}^{(2\rm g)}(q)\right]=2q+4q^2+2q^3+5q^4+2q^5+3q^6-6q^8-{\cal O}\left(q^9\right)\;,
\end{eqnarray}
whose leading positive terms correspond to the number and degrees of the basic invariants~\cite{Hanany:2006qr}. More explicitly, there are in total 18 basic invariants in the ring: two of degree 1, four of degree 2, two of degree 3, five of degree 4, two of degree 5 and three of degree 6. With the help of Eq.~(\ref{eq:PL eff 2g main}) we explicitly construct all the basic flavor invariants in the SEFT, as summarized in Table~\ref{table:2g eff}. The CP parities listed in the last column of Table~\ref{table:2g eff} describe the behaviors of the flavor invariants under the CP transformation: CP-even invariants are unchanged under the CP-conjugate transformation, while CP-odd invariants acquire an extra minus sign. The 18 basic invariants (12 CP-even and 6 CP-odd) in Table~\ref{table:2g eff} serve as the generators of the invariant ring, in the sense that any flavor invariant can be written as a polynomial in them. For example, the CP-even counterparts of the 6 CP-odd basic invariants can be decomposed into polynomials of the CP-even basic invariants in Table~\ref{table:2g eff} as follows:\footnote{See Appendix C of Ref.~\cite{Wang:2021wdq} for a general algorithm to decompose an arbitrary invariant into a polynomial of the basic ones and to find out all the syzygies among the basic invariants at a certain degree.}
\begin{eqnarray}
&&{\cal I}_{121}^{(+)}\equiv {\rm Re}\, {\rm Tr}\left(X_l^{}X_5^{}C_6^{}\right)=\frac{1}{2}\left[{\cal I}_{020}^{}\left({\cal I}_{101}^{}-{\cal I}_{100}^{}{\cal I}_{001}^{}\right)+{\cal I}_{100}^{}{\cal I}_{021}^{}+{\cal I}_{001}^{}{\cal I}_{120}^{}\right]\;,\\
&&{\cal I}_{141}^{(+)}\equiv {\rm Re}\, {\rm Tr}\left(X_5^{}C_6^{}G_{l5}^{}\right)=\frac{1}{4}\left[{\cal I}_{100}^{}{\cal I}_{001}^{}\left({\cal I}_{040}^{}-{\cal I}_{020}^2\right)+2\left({\cal I}_{020}^{}{\cal I}_{121}^{(1)}+{\cal I}_{120}^{}{\cal I}_{021}^{}\right)\right]\;,\\
&&{\cal I}_{221}^{(+)}\equiv {\rm Re}\, {\rm Tr}\left(X_l^{}G_{l5}^{}C_6^{}\right)=\frac{1}{2}\left[{\cal I}_{101}^{}{\cal I}_{120}^{}+{\cal I}_{001}^{}{\cal I}_{220}^{}+{\cal I}_{100}^{}\left({\cal I}_{121}^{(1)}-{\cal I}_{001}^{}{\cal I}_{120}^{}\right)\right]\;,\\
&&{\cal I}_{122}^{(+)}\equiv {\rm Re}\, {\rm Tr}\left(C_6^{}G_{56}^{}X_l^{}\right)=\frac{1}{2}\left[{\cal I}_{101}^{}{\cal I}_{021}^{}+{\cal I}_{100}^{}{\cal I}_{022}^{}+{\cal I}_{001}^{}\left({\cal I}_{121}^{(1)}-{\cal I}_{100}^{}{\cal I}_{021}^{}\right)\right]\;,\\
&&{\cal I}_{240}^{(+)}\equiv {\rm Re}\, {\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)=\frac{1}{4}\left[{\cal I}_{100}^2\left({\cal I}_{040}^{}-{\cal I}_{020}^2\right)+2\left({\cal I}_{120}^2+{\cal I}_{020}^{}{\cal I}_{220}^{}\right)\right]\;,\\
&&{\cal I}_{042}^{(+)}\equiv {\rm Re}\, {\rm Tr}\left(C_6^{}X_5^{}G_{56}^{}\right)=\frac{1}{4}\left[{\cal I}_{001}^2\left({\cal I}_{040}^{}-{\cal I}_{020}^2\right)+2\left({\cal I}_{021}^2+{\cal I}_{020}^{}{\cal I}_{022}^{}\right)\right]\;.
\end{eqnarray}
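The first of these decompositions, for instance, follows from the polarized Cayley--Hamilton identity for $2\times2$ matrices and can be checked numerically; a minimal sketch with random matrices (illustrative, using numpy):

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_hermitian(n=2):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A + A.conj().T

Xl, C6 = rand_hermitian(), rand_hermitian()
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C5 = A + A.T                      # complex symmetric C5
X5 = C5 @ C5.conj().T

tr = np.trace
I100, I001 = tr(Xl).real, tr(C6).real
I020 = tr(X5).real
I101 = tr(Xl @ C6).real
I120, I021 = tr(Xl @ X5).real, tr(C6 @ X5).real

lhs = tr(Xl @ X5 @ C6).real       # Re Tr(Xl X5 C6)
rhs = (I020 * (I101 - I100 * I001) + I100 * I021 + I001 * I120) / 2
assert np.isclose(lhs, rhs)
print("decomposition identity holds")
```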
Although there are 18 basic invariants in the SEFT, not all of them are algebraically independent. There exist nontrivial polynomial identities among them that are identically equal to zero, known as syzygies in invariant theory.\footnote{Note, however, that none of the basic invariants in the ring can be written as a \emph{polynomial} of the other 17 basic ones. This is different from the case of a vector space, where the statement that a set of vectors is not linearly independent means that some vector of the set can be written as a linear combination of the others.} For example, the six lowest-degree syzygies first appear at degree 8 (in the sense that the total degree of each term in the syzygies is 8):
\begin{eqnarray}
\label{eq:syzygy1}
&{\cal I}_{121}^{(2)}&\left(2{\cal I}_{220}^{}-{\cal I}_{100}^{}{\cal I}_{120}^{}\right)+{\cal I}_{221}^{}\left({\cal I}_{100}^{}{\cal I}_{020}^{}-2{\cal I}_{120}^{}\right)+{\cal I}_{240}^{}\left({\cal I}_{001}^{}{\cal I}_{100}^{}-2{\cal I}_{101}^{}\right)\nonumber\\
&&+{\cal I}_{141}^{}\left({\cal I}_{100}^2-2{\cal I}_{200}^{}\right)=0\;,\\
\label{eq:syzygy2}
&{\cal I}_{121}^{(2)}&\left(2{\cal I}_{022}^{}-{\cal I}_{001}^{}{\cal I}_{021}^{}\right)-{\cal I}_{122}^{}\left({\cal I}_{001}^{}{\cal I}_{020}^{}-2{\cal I}_{021}^{}\right)-{\cal I}_{042}^{}\left({\cal I}_{001}^{}{\cal I}_{100}^{}-2{\cal I}_{101}^{}\right)\nonumber\\
&&-{\cal I}_{141}^{}\left({\cal I}_{001}^2-2{\cal I}_{002}^{}\right)= 0\;,\\
\label{eq:syzygy3}
&{\cal I}_{121}^{(2)}&\left(2{\cal I}_{121}^{(1)}-{\cal I}_{001}^{}{\cal I}_{120}^{}\right)+{\cal I}_{221}^{}\left({\cal I}_{001}^{}{\cal I}_{020}^{}-2{\cal I}_{021}^{}\right)+{\cal I}_{240}^{}\left({\cal I}_{001}^2-2{\cal I}_{002}^{}\right)\nonumber\\
&&+{\cal I}_{141}^{}\left({\cal I}_{001}^{}{\cal I}_{100}^{}-2{\cal I}_{101}^{}\right) = 0\;,\\
\label{eq:syzygy4}
&{\cal I}_{121}^{(2)}&\left(2{\cal I}_{121}^{(1)} -{\cal I}_{100}^{}{\cal I}_{021}^{}\right)-{\cal I}_{122}^{}\left({\cal I}_{100}^{}{\cal I}_{020}^{}-2{\cal I}_{120}^{}\right)-{\cal I}_{042}^{}\left({\cal I}_{100}^2-2{\cal I}_{200}^{}\right)\nonumber\\
&&-{\cal I}_{141}^{}\left({\cal I}_{001}^{}{\cal I}_{100}^{}-2{\cal I}_{101}^{}\right) = 0\;,
\end{eqnarray}
together with another two syzygies involving only CP-even invariants. These six syzygies correspond to the first negative term $-6q_{}^8$ in Eq.~(\ref{eq:PL eff 2g main}). Eqs.~(\ref{eq:syzygy1})-(\ref{eq:syzygy4}) establish 4 linear relations among the 6 CP-odd basic invariants, which is consistent with the fact that there are only $6-4=2$ independent phases in the SEFT for the two-generation case.
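Syzygies of this kind are straightforward to verify numerically; a minimal sketch checking Eq.~(\ref{eq:syzygy1}) with random two-generation building blocks (illustrative, using numpy):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_hermitian(n=2):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A + A.conj().T

Xl, C6 = rand_hermitian(), rand_hermitian()
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C5 = A + A.T                      # complex symmetric C5
X5 = C5 @ C5.conj().T
Gl5 = C5 @ Xl.conj() @ C5.conj().T

tr = np.trace
I100, I001 = tr(Xl).real, tr(C6).real
I200, I020 = tr(Xl @ Xl).real, tr(X5).real
I101 = tr(Xl @ C6).real
I120 = tr(Xl @ X5).real
I220 = tr(Xl @ Gl5).real
I121_2 = tr(Xl @ X5 @ C6).imag    # CP-odd invariants of Table 2g eff
I221 = tr(Xl @ Gl5 @ C6).imag
I240 = tr(Xl @ X5 @ Gl5).imag
I141 = tr(X5 @ C6 @ Gl5).imag

# The degree-8 syzygy (first of the six)
syz = (I121_2 * (2*I220 - I100*I120) + I221 * (I100*I020 - 2*I120)
       + I240 * (I001*I100 - 2*I101) + I141 * (I100**2 - 2*I200))
assert abs(syz) < 1e-6
print("syzygy satisfied")
```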
Among the 18 basic invariants in Table~\ref{table:2g eff}, the 10
primary ones that are algebraically independent are labeled with ``$(*)$" in the first column. Note that the choice of primary invariants is by no means unique. Later on we will show that from the 10 primary flavor invariants one can explicitly extract all the physical parameters in the two-generation SEFT. In this sense, the set of primary invariants is \emph{equivalent to} the set of independent physical parameters in the theory.
\subsection{Physical parameters in terms of primary invariants}
\label{subsec:extract2g}
\renewcommand\arraystretch{1.2}
\begin{table}[t!]
\centering
\begin{tabular}{l|c|c}
\hline \hline
flavor invariants & degree & CP parity \\
\hline \hline
${\cal I}_{100}^{}\equiv {\rm Tr}\left(X_l^{}\right)\quad (*)$ & 1 & $+$ \\
\hline
${\cal I}_{001}^{}\equiv {\rm Tr}\left(C_6^{}\right)\quad (*)$ & 1 & $+$\\
\hline
${\cal I}_{200}^{}\equiv {\rm Tr}\left(X_l^2\right)\quad (*)$ & 2 & $+$\\
\hline
${\cal I}_{101}^{}\equiv {\rm Tr}\left(X_l^{}C_6^{}\right)$ & 2 & $+$\\
\hline
${\cal I}_{020}^{}\equiv {\rm Tr}\left(X_5^{}\right)\quad (*)$ & 2 & $+$\\
\hline
${\cal I}_{002}^{}\equiv {\rm Tr}\left(C_6^2\right)\quad (*)$ & 2 & $+$\\
\hline
${\cal I}_{120}^{}\equiv {\rm Tr}\left(X_l^{}X_5^{}\right)\quad (*)$ & 3 & $+$\\
\hline
${\cal I}_{021}^{}\equiv {\rm Tr}\left(C_6^{}X_5^{}\right)\quad (*)$ & 3 & $+$\\
\hline
${\cal I}_{220}^{}\equiv {\rm Tr}\left(X_l^{}G_{l5}^{}\right)\quad (*)$ & 4 & $+$\\
\hline
${\cal I}_{121}^{(1)}\equiv {\rm Tr}\left(G_{l5}^{}C_6^{}\right)$ & 4 & $+$\\
\hline
${\cal I}_{121}^{(2)}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}C_6^{}\right)$ & 4 & $-$\\
\hline
${\cal I}_{040}^{}\equiv {\rm Tr}\left(X_5^{2}\right)\quad (*)$ & 4 & $+$\\
\hline
${\cal I}_{022}^{}\equiv {\rm Tr}\left(C_6^{}G_{56}^{}\right)\quad (*)$ & 4 & $+$\\
\hline
${\cal I}_{221}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}G_{l5}^{}C_6^{}\right)$ & 5 & $-$\\
\hline
${\cal I}_{122}^{}\equiv {\rm Im}\,{\rm Tr}\left(C_6^{}G_{56}^{}X_l^{}\right)$ & 5 & $-$\\
\hline
${\cal I}_{240}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)$ & 6 & $-$\\
\hline
${\cal I}_{141}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_5^{}C_6^{}G_{l5}^{}\right)$ & 6 & $-$\\
\hline
${\cal I}_{042}^{}\equiv {\rm Im}\,{\rm Tr}\left(C_6^{}X_5^{}G_{56}^{}\right)$ & 6 & $-$\\
\hline
\hline
\end{tabular}
\vspace{0.5cm}
\caption{\label{table:2g eff}Summary of the basic flavor invariants along with their degrees and CP parities in the SEFT for two-generation leptons, where the subscripts of the invariants denote the degrees of $X_l^{}\equiv Y_l^{}Y_l^\dagger$, $C_5^{}$ and $C_6^{}$, respectively. Note that we have defined $X_5^{}\equiv C_5^{}C_5^\dagger$, $G_{l5}^{}\equiv C_5^{}X_l^*C_5^\dagger$ and $G_{56}^{}\equiv C_5^{}C_6^*C_5^\dagger$ that transform adjointly under the flavor-basis transformation. There are in total 12 CP-even and 6 CP-odd basic invariants in the invariant ring of the flavor space. The 10 primary invariants that are algebraically independent have been labeled with ``$(*)$" in the first column.}
\end{table}
\renewcommand\arraystretch{1}
Without loss of generality, we start with the flavor basis where $C_5^{}$ is diagonal with real and non-negative eigenvalues, namely, $C_5^{}={\rm Diag}\{c_1^{},c_2^{}\}$, while $X_l^{}\equiv Y_l^{} Y_l^\dagger$ and $C_6^{}$ are two general $2\times 2$ Hermitian matrices
\begin{eqnarray}
\label{eq:parametrization of C6 2g}
X_l^{}=\left(
\begin{matrix}
a_{11}^{}&a_{12}^{}e_{}^{{\rm i}\alpha}\\
a_{12}^{}e_{}^{-{\rm i}\alpha}&a_{22}^{}
\end{matrix}
\right)\;,\quad
C_6^{}=\left(
\begin{matrix}
b_{11}^{}&b_{12}^{}e_{}^{{\rm i}\beta}\\
b_{12}^{}e_{}^{-{\rm i}\beta}&b_{22}^{}
\end{matrix}
\right)\;,
\end{eqnarray}
where $a_{ij}^{}$ and $b_{ij}^{}$ are real numbers while $\alpha$ and $\beta$ are two phases. In this basis, the 10 independent physical parameters are $\{c_1^{}, c_2^{}, a_{11}^{}, a_{12}^{}, a_{22}^{}, b_{11}^{}, b_{12}^{}, b_{22}^{}, \alpha, \beta\}$.\footnote{Here we assume that these parameters are nondegenerate and nonvanishing, and that $c_1^{}<c_2^{}$, which holds in general.} First, the eigenvalues of $C_5^{}$ can be extracted from ${\cal I}_{020}^{}\equiv {\rm Tr}\left(X_5^{}\right)=c_1^2+c_2^2$ and ${\cal I}_{040}^{}\equiv {\rm Tr}\left(X_5^2\right)=c_1^4+c_2^4$ with $X_5^{}\equiv C_5^{} C_5^\dagger$, i.e.,
\begin{eqnarray}
c_{1,2}^{}=\frac{1}{\sqrt{2}}\sqrt{{\cal I}_{020}^{}\mp\sqrt{2{\cal I}_{040}^{}-{\cal I}_{020}^2}}\;.
\end{eqnarray}
Second, from ${\cal I}_{100}^{}\equiv {\rm Tr}\left(X_l^{}\right)=a_{11}^{}+a_{22}^{}$ and ${\cal I}_{120}^{}\equiv {\rm Tr}\left(X_l^{}X_5^{}\right)=c_1^2a_{11}^{}+c_2^2a_{22}^{}$ one can extract $a_{11}^{}$ and $a_{22}^{}$ as follows
\begin{eqnarray}
a_{11,22}^{}=\frac{1}{2}\left({\cal I}_{100}^{}\pm\frac{{\cal I}_{100}{\cal I}_{020}-2{\cal I}_{120}}{\sqrt{2{\cal I}_{040}-{\cal I}_{020}^2}}\right)\;.
\end{eqnarray}
Then from ${\cal I}_{200}^{}\equiv {\rm Tr}\left(X_l^2\right)=a_{11}^2+2a_{12}^2+a_{22}^2$, we obtain\footnote{It is always possible to choose the phase $\alpha$ to make $a_{12}^{}$ positive, and likewise for $b_{12}^{}$ via the phase $\beta$.}
\begin{eqnarray}
a_{12}^{}=\frac{1}{\sqrt{2}}\sqrt{\frac{{\cal I}_{100}\left({\cal I}_{100}{\cal I}_{040}-2{\cal I}_{020}{\cal I}_{120}\right)+{\cal I}_{200}\left({\cal I}_{020}^2-2{\cal I}_{040}\right)+2{\cal I}_{120}^2}{{\cal I}_{020}^2-2{\cal I}_{040}}}\;.
\end{eqnarray}
Finally, using ${\cal I}_{220}^{}\equiv {\rm Tr}\left(X_l^{}G_{l5}^{}\right)=c_1^2 a_{11}^2+c_2^2 a_{22}^2+2a_{12}^2c_1^{}c_2^{}\cos2\alpha$, one can get
\begin{eqnarray}
\cos2\alpha=\frac{\left({\cal I}_{100}^2{\cal I}_{020}-4{\cal I}_{100}{\cal I}_{120}+2{\cal I}_{220}\right)\left({\cal I}_{020}^2-{\cal I}_{040}\right)+2\left({\cal I}_{020}{\cal I}_{120}^2-{\cal I}_{040}{\cal I}_{220}\right)}{\sqrt{2}\sqrt{{\cal I}_{020}^2-{\cal I}_{040}}\left[{\cal I}_{200}^{}\left({\cal I}_{020}^2-{\cal I}_{040}\right)+{\cal I}_{040}\left({\cal I}_{100}^2-{\cal I}_{200}\right)-2{\cal I}_{120}\left({\cal I}_{100}{\cal I}_{020}-{\cal I}_{120}\right)\right]}\;. \quad
\end{eqnarray}
Similarly, the parameters in $C_6^{}$ can be extracted in a parallel manner. The final results are given by
\begin{eqnarray}
\label{eq:extract C6 2g 1}
b_{11,22}^{}&=&\frac{1}{2}\left({\cal I}_{001}^{}\pm\frac{{\cal I}_{001}{\cal I}_{020}-2{\cal I}_{021}}{\sqrt{2{\cal I}_{040}-{\cal I}_{020}^2}}\right)\;,\\
\label{eq:extract C6 2g 2}
b_{12}^{}&=&\frac{1}{\sqrt{2}}\sqrt{\frac{{\cal I}_{001}\left({\cal I}_{001}{\cal I}_{040}-2{\cal I}_{020}{\cal I}_{021}\right)+{\cal I}_{002}\left({\cal I}_{020}^2-2{\cal I}_{040}\right)+2{\cal I}_{021}^2}{{\cal I}_{020}^2-2{\cal I}_{040}}}\;,\\
\label{eq:extract C6 2g 3}
\cos2\beta&=&\frac{\left({\cal I}_{001}^2{\cal I}_{020}-4{\cal I}_{001}{\cal I}_{021}+2{\cal I}_{022}\right)\left({\cal I}_{020}^2-{\cal I}_{040}\right)+2\left({\cal I}_{020}{\cal I}_{021}^2-{\cal I}_{040}{\cal I}_{022}\right)}{\sqrt{2}\sqrt{{\cal I}_{020}^2-{\cal I}_{040}}\left[{\cal I}_{002}^{}\left({\cal I}_{020}^2-{\cal I}_{040}\right)+{\cal I}_{040}\left({\cal I}_{001}^2-{\cal I}_{002}\right)-2{\cal I}_{021}\left({\cal I}_{001}{\cal I}_{020}-{\cal I}_{021}\right)\right]}\;.
\end{eqnarray}
When calculating the observables in the SEFT, one usually turns to the basis where the Higgs field acquires its vacuum expectation value and the electroweak gauge symmetry is spontaneously broken. In this case, the two neutrino masses are related to the eigenvalues of $C_5^{}$ via
\begin{eqnarray}
\label{eq:extract neutrino mass 2g}
m_{1,2}^{}=\frac{v^2}{2\Lambda}c_{1,2}^{}=\frac{v^2}{2\sqrt{2}\Lambda}\sqrt{{\cal I}_{020}^{}\mp\sqrt{2{\cal I}_{040}^{}-{\cal I}_{020}^2}}\;.
\end{eqnarray}
On the other hand, the masses of charged leptons can be obtained via the diagonalization of $X_l^{}$: $2\,{\rm Diag}\left\{m_e^2,m_\mu^2\right\}/v^2=V_2^{}X_l^{}V_2^\dagger$,
where
\begin{eqnarray}
\label{eq:parametrization of V 2g}
V_2^{}=\left(
\begin{matrix}
\cos\theta&\sin\theta\\
-\sin\theta&\cos\theta
\end{matrix}
\right)\cdot
\left(
\begin{matrix}
e_{}^{{\rm i}\phi}&0\\
0&1
\end{matrix}
\right)
\end{eqnarray}
is the flavor mixing matrix in the two-generation case, with $\theta$ the flavor mixing angle and $\phi$ the Majorana-type CP phase. Thus the charged-lepton masses, flavor mixing angle and CP phase can be related to the elements in $X_l^{}$ by
\begin{eqnarray}
m_{e,\mu}^{}=\frac{v}{2}\sqrt{a_{11}^{}+a_{22}^{}\pm\frac{2a_{12}^{}}{\sin2\theta}}\;,\quad
\tan2\theta=\frac{2a_{12}}{a_{11}-a_{22}}\;,\quad
\phi=-\alpha\;.
\end{eqnarray}
More explicitly, we express these physical parameters in terms of the flavor invariants as follows
\begin{eqnarray}
\label{eq:extract chargd-lepton mass 2g}
m_{e,\mu}&=&\frac{v}{2}\sqrt{{\cal I}_{100}^{}\mp\sqrt{2{\cal I}_{200}^{}-{\cal I}_{100}^2}}\;,\\
\label{eq:extract theta}
\cos2\theta&=&\frac{2{\cal I}_{120}-{\cal I}_{100}{\cal I}_{020}}{\sqrt{2{\cal I}_{040}-{\cal I}_{020}^2}\sqrt{2{\cal I}_{200}-{\cal I}_{100}^2}}\;,\\
\label{eq:extract phi}
\cos2\phi&=&\frac{\left({\cal I}_{100}^2{\cal I}_{020}-4{\cal I}_{100}{\cal I}_{120}+2{\cal I}_{220}\right)\left({\cal I}_{020}^2-{\cal I}_{040}\right)+2\left({\cal I}_{020}{\cal I}_{120}^2-{\cal I}_{040}{\cal I}_{220}\right)}{\sqrt{2}\sqrt{{\cal I}_{020}^2-{\cal I}_{040}}\left[{\cal I}_{200}^{}\left({\cal I}_{020}^2-{\cal I}_{040}\right)+{\cal I}_{040}\left({\cal I}_{100}^2-{\cal I}_{200}\right)-2{\cal I}_{120}\left({\cal I}_{100}{\cal I}_{020}-{\cal I}_{120}\right)\right]}\;. \quad
\end{eqnarray}
To summarize, we have shown that the 10 physical parameters after the gauge symmetry breaking
$$\left\{m_1^{},m_2^{},m_e^{},m_\mu^{},\theta,\phi,b_{11}^{},b_{12}^{},b_{22}^{},\beta\right\}$$
can be extracted from the 10 primary flavor invariants
$$\left\{{\cal I}_{020}^{},{\cal I}_{040}^{}, {\cal I}_{100}^{}, {\cal I}_{001}^{}, {\cal I}_{200}, {\cal I}_{002}^{}, {\cal I}_{120}^{}, {\cal I}_{021}^{}, {\cal I}_{220}^{}, {\cal I}_{022}^{} \right\}$$
by Eqs.~(\ref{eq:extract C6 2g 1})-(\ref{eq:extract C6 2g 3}), Eq.~(\ref{eq:extract neutrino mass 2g}) and Eqs.~(\ref{eq:extract chargd-lepton mass 2g})-(\ref{eq:extract phi}).
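The extraction above can be validated by a numerical round trip: choose parameters, build the matrices, compute the invariants and invert them. A minimal sketch for the $X_l^{}$ sector (illustrative, using numpy; for brevity, $a_{12}^{}$ and $\cos2\alpha$ are recovered through the intermediate relations ${\cal I}_{200}^{}=a_{11}^2+2a_{12}^2+a_{22}^2$ and the trace formula for ${\cal I}_{220}^{}$, which are equivalent to the fully expanded expressions above):

```python
import numpy as np

# "True" parameters in the C5-diagonal basis (c1 < c2)
c1, c2 = 0.3, 1.1
a11, a22, a12, alpha = 0.7, 1.9, 0.5, 0.8

C5 = np.diag([c1, c2])
Xl = np.array([[a11, a12*np.exp(1j*alpha)],
               [a12*np.exp(-1j*alpha), a22]])
X5 = C5 @ C5.conj().T
Gl5 = C5 @ Xl.conj() @ C5.conj().T

tr = lambda M: np.trace(M).real
I100, I200 = tr(Xl), tr(Xl @ Xl)
I020, I040 = tr(X5), tr(X5 @ X5)
I120, I220 = tr(Xl @ X5), tr(Xl @ Gl5)

# Invert the invariants back to the parameters
disc = np.sqrt(2*I040 - I020**2)              # = c2^2 - c1^2
c1e = np.sqrt((I020 - disc)/2)
c2e = np.sqrt((I020 + disc)/2)
a11e = (I100 + (I100*I020 - 2*I120)/disc)/2
a22e = (I100 - (I100*I020 - 2*I120)/disc)/2
a12e = np.sqrt((I200 - a11e**2 - a22e**2)/2)
cos2ae = (I220 - c1e**2*a11e**2 - c2e**2*a22e**2)/(2*c1e*c2e*a12e**2)

assert np.allclose([c1e, c2e, a11e, a22e, a12e],
                   [c1, c2, a11, a22, a12])
assert np.isclose(cos2ae, np.cos(2*alpha))
print("round trip ok")
```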
\subsection{Conditions for CP conservation}
\label{subsec:conditions2g}
Although there are 6 CP-odd basic invariants in the ring, they are not algebraically independent but related to each other by the syzygies in Eqs.~(\ref{eq:syzygy1})-(\ref{eq:syzygy4}). On the other hand, there are only 2 independent physical phases in the leptonic sector (i.e., $\phi$ and $\beta$ in the symmetry-breaking basis). Therefore, the \emph{minimal} conditions to guarantee CP conservation in the leptonic sector are the vanishing of only 2 CP-odd invariants. In the symmetry-breaking basis chosen above, it is straightforward to explicitly calculate the following two CP-odd flavor invariants
\begin{eqnarray}
\label{eq:i1212}
{\cal I}_{121}^{(2)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}C_6^{}\right)=\frac{1}{v^2}\left(m_\mu^2-m_e^2\right)\left(c_2^2-c_1^2\right)b_{12}^{}\sin2\theta\sin\left(\beta+\phi\right)\;,\\
\label{eq:i240}
{\cal I}_{240}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)=-\frac{1}{v^4}\left(m_\mu^2-m_e^2\right)_{}^2\left(c_2^2-c_1^2\right)c_1^{}c_2^{}\sin_{}^22\theta\sin2\phi\;.
\end{eqnarray}
We have assumed that there is neither degeneracy in the lepton masses nor any texture zero in the matrix elements of $C_6^{}$ and $X_l^{}$, which holds in general. Thus the vanishing of ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$ forces the phases to take only trivial values
\begin{eqnarray}
\beta, \; \phi=k\pi\qquad {\rm or} \qquad
\beta, \; \phi=\frac{2k+1}{2}\pi\;,
\end{eqnarray}
where $k$ is an arbitrary integer. In this basis, one can verify that all the CP-violating observables are proportional to $\sin2\phi$, $\sin2\beta$ or $\sin\left(\beta\pm\phi\right)$, so ${\cal I}_{121}^{(2)}={\cal I}_{240}^{}=0$ serves as the minimal sufficient and necessary condition for CP conservation in the leptonic sector. As will be shown in Sec.~\ref{sec:observables}, ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$ are responsible for the CP violation in neutrino-neutrino and neutrino-antineutrino oscillations, respectively. Moreover, any CP-violating observable in the two-generation SEFT can be expressed as a linear combination of ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$, with the combination coefficients being functions of only CP-even invariants. As a consequence, the vanishing of ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$ implies the vanishing of any CP-violating observable.
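The closed forms in Eqs.~(\ref{eq:i1212})-(\ref{eq:i240}) can be checked numerically by building $X_l^{}$, $C_5^{}$ and $C_6^{}$ from the physical parameters in the basis chosen above; a minimal sketch (illustrative, non-physical numbers chosen for numerical clarity, using numpy):

```python
import numpy as np

v = 2.0                              # illustrative (non-physical) values
me, mmu = 0.5, 1.5
theta, phi = 0.6, 0.9                # mixing angle and Majorana phase
c1, c2 = 0.2, 0.8                    # eigenvalues of C5
b11, b12, b22, beta = 1.3, 0.4, 2.1, 0.5

# Flavor mixing matrix V2 and the corresponding Xl = V2^dag D V2
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
P = np.diag([np.exp(1j*phi), 1.0])
V2 = R @ P
D = np.diag([2*me**2/v**2, 2*mmu**2/v**2])
Xl = V2.conj().T @ D @ V2

C5 = np.diag([c1, c2])
C6 = np.array([[b11, b12*np.exp(1j*beta)],
               [b12*np.exp(-1j*beta), b22]])
X5 = C5 @ C5.conj().T
Gl5 = C5 @ Xl.conj() @ C5.conj().T

I121_2 = np.trace(Xl @ X5 @ C6).imag
I240 = np.trace(Xl @ X5 @ Gl5).imag

pred1 = (mmu**2 - me**2)*(c2**2 - c1**2)*b12*np.sin(2*theta)*np.sin(beta + phi)/v**2
pred2 = -(mmu**2 - me**2)**2*(c2**2 - c1**2)*c1*c2*np.sin(2*theta)**2*np.sin(2*phi)/v**4
assert np.isclose(I121_2, pred1)
assert np.isclose(I240, pred2)
print("closed forms verified")
```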
\section{Algebraic structure of the SEFT flavor invariants: Three-generation case}
\label{sec:construction3g}
Now we proceed with the realistic case of three-generation leptons. As we shall see, the algebraic structure of the invariant ring in the three-generation SEFT is much more complicated than that in the two-generation case.
From the transformation behaviors of the building blocks under the ${\rm U}(3)$ group in Eq.~(\ref{eq:wilson coe trans}), one can calculate the HS using the MW formula (see Appendix~\ref{app:HS} for more details). Though the calculation is very tedious, the result can be recast into the standard form
\begin{eqnarray}
\label{eq:HS eff 3g main}
{\mathscr H}_{\rm SEFT}^{(3\rm g)}(q)=\frac{{\mathscr N}_{\rm SEFT}^{(3\rm g)}(q)}{{\mathscr D}_{\rm SEFT}^{(3\rm g)}(q)}\;,
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:numerator eff 3g main}
{\mathscr N}_{\rm SEFT}^{(3\rm g)}(q)&=&q^{65}+2 q^{64}+4 q^{63}+11 q^{62}+23 q^{61}+48 q^{60}+120 q^{59}+269 q^{58}+587 q^{57}+1258 q^{56}\nonumber\\
&&+2543 q^{55}+4895 q^{54}+9124 q^{53}+16281 q^{52}+27963 q^{51}+46490 q^{50}+74644 q^{49}\nonumber\\
&&+115871q^{48}+174433 q^{47}+254494 q^{46}+360055 q^{45}+494873 q^{44}+660820 q^{43}\nonumber\\
&&+857677 q^{42}+1083226 q^{41}+1331628 q^{40}+1593650 q^{39}+1858178 q^{38}+2111158 q^{37}\nonumber\\
&&+2337226 q^{36}+2522435
q^{35}+2654026 q^{34}+2721987 q^{33}+2721987 q^{32}+2654026q^{31}\nonumber\\
&&+2522435 q^{30}+2337226 q^{29}+2111158 q^{28}+1858178 q^{27}+1593650 q^{26}+1331628 q^{25}\nonumber\\
&&+1083226 q^{24}+857677 q^{23}+660820
q^{22}+494873 q^{21}+360055 q^{20}+254494 q^{19}\nonumber\\
&&+174433 q^{18}+115871 q^{17}+74644 q^{16}+46490 q^{15}+27963 q^{14}+16281 q^{13}+9124 q^{12}\nonumber\\
&&+4895 q^{11}+2543 q^{10}+1258 q^9+587 q^8+269 q^7+120
q^6+48 q^5+23 q^4+11 q^3+4 q^2\nonumber\\
&&+2 q+1\;,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:denominator eff 3g main}
{\mathscr D}_{\rm SEFT}^{(3\rm g)}(q)=\left(1-q^2\right)^3 \left(1-q^3\right) \left(1-q^4\right)^5 \left(1-q^5\right)^6 \left(1-q^6\right)^6\;.
\end{eqnarray}
As a nontrivial cross-check, the denominator of the HS in Eq.~(\ref{eq:denominator eff 3g main}) does have 21 factors, exactly matching the number of independent physical parameters in the three-generation SEFT. Moreover, the remarkably large coefficients in the numerator of the HS in Eq.~(\ref{eq:numerator eff 3g main}) indicate that the richness of the flavor structure and the complexity of the invariant ring grow very quickly with the number of lepton generations when there exist Majorana-type building blocks.\footnote{This is very different from the case in the quark sector, where all the fermions are Dirac particles and all the building blocks transform adjointly under the unitary group. The Majorana nature of $C_5^{}$ forces it to transform as the symmetric rank-2 tensor representation, thus leading to a much more complicated algebraic structure of the invariant ring in the leptonic sector.} The complexity of the algebraic structure of the invariant ring for the three-generation case can be better understood by substituting Eq.~(\ref{eq:HS eff 3g main}) into Eq.~(\ref{eq:PL def}) to calculate the PL function of the HS, which is an infinite series in $q$ and encodes all information about the basic invariants and the polynomial relations among them (i.e., syzygies)
\begin{eqnarray}
\label{eq:PL eff 3g main}
{\rm PL}\left[{\mathscr H}_{\rm SEFT}^{(3\rm g)}(q)\right]&=&2q+4q^2+6q^3+9q^4+14q^5+33q^6+44q^7+72q^8+74q^9+21q^{10}\nonumber\\
&&-208q^{11}-708q^{12}-1904q^{13}-3806q^{14}-{\cal O}\left(q_{}^{15}\right)\;.
\end{eqnarray}
From Eq.~(\ref{eq:PL eff 3g main}) one can observe that there are nearly 300 basic invariants in the ring, more than 200 syzygies at degree 11, more than 700 syzygies at degree 12, and so on. Therefore, it is very difficult to explicitly construct all the basic invariants due to the complexity of the invariant ring. However, we know that there are only 21 independent physical parameters in the theory, which is also the maximum number of algebraically-independent invariants (i.e., primary invariants) in the ring. Hence, we shall construct only the primary flavor invariants for the three-generation case, with which one can extract all the physical parameters so that any physical observable can be written as a function of flavor invariants.
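The palindromic structure of ${\mathscr N}_{\rm SEFT}^{(3\rm g)}(q)$ and the parameter counting from ${\mathscr D}_{\rm SEFT}^{(3\rm g)}(q)$ can be checked mechanically; a minimal sketch:

```python
# Coefficients of N(q), transcribed from the highest degree q^65 down to the
# constant term, exactly as printed in the Hilbert series
coeffs = [1, 2, 4, 11, 23, 48, 120, 269, 587, 1258, 2543, 4895, 9124,
          16281, 27963, 46490, 74644, 115871, 174433, 254494, 360055,
          494873, 660820, 857677, 1083226, 1331628, 1593650, 1858178,
          2111158, 2337226, 2522435, 2654026, 2721987, 2721987, 2654026,
          2522435, 2337226, 2111158, 1858178, 1593650, 1331628, 1083226,
          857677, 660820, 494873, 360055, 254494, 174433, 115871, 74644,
          46490, 27963, 16281, 9124, 4895, 2543, 1258, 587, 269, 120,
          48, 23, 11, 4, 2, 1]

assert len(coeffs) == 66              # a degree-65 polynomial
assert coeffs == coeffs[::-1]         # palindromic: N(q) = q^65 N(1/q)

# Denominator factors (1 - q^d)^m: exponent d -> multiplicity m
den = {2: 3, 3: 1, 4: 5, 5: 6, 6: 6}
assert sum(den.values()) == 21        # number of primary invariants
print("HS checks passed")
```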
\subsection{Physical parameters in terms of primary invariants}
\label{subsec:extract3g}
\renewcommand\arraystretch{1.2}
\begin{table}[t!]
\centering
\begin{tabular}{l|c|c}
\hline \hline
flavor invariants & degree & CP parity \\
\hline \hline
${\cal J}_{100}^{}\equiv {\rm Tr}\left(X_l^{}\right) $ & 1 & $+$ \\
\hline
${\cal J}_{001}^{}\equiv {\rm Tr}\left(C_6^{}\right) $ & 1 & $+$\\
\hline
${\cal J}_{200}^{}\equiv {\rm Tr}\left(X_l^2\right)$ & 2 & $+$\\
\hline
${\cal J}_{020}^{}\equiv {\rm Tr}\left(X_5^{}\right) $ & 2 & $+$\\
\hline
${\cal J}_{002}^{}\equiv {\rm Tr}\left(C_6^2\right)$ & 2 & $+$\\
\hline
${\cal J}_{120}^{}\equiv {\rm Tr}\left(X_l^{}X_5^{}\right) $ & 3 & $+$\\
\hline
${\cal J}_{021}^{}\equiv {\rm Tr}\left(C_6^{}X_5^{}\right) $ & 3 & $+$\\
\hline
${\cal J}_{220}^{}\equiv {\rm Tr}\left(X_l^{}G_{l5}^{}\right) $ & 4 & $+$\\
\hline
${\cal J}_{040}^{}\equiv {\rm Tr}\left(X_5^{2}\right) $ & 4 & $+$\\
\hline
${\cal J}_{022}^{}\equiv {\rm Tr}\left(C_6^{}G_{56}^{}\right) $ & 4 & $+$\\
\hline
${\cal J}_{140}^{}\equiv {\rm Tr}\left(X_l^{}X_5^{2}\right)$ & 5 & $+$\\
\hline
${\cal J}_{041}^{}\equiv {\rm Tr}\left(C_6^{}X_5^{2}\right)$ & 5 & $+$\\
\hline
${\cal J}_{240}^{}\equiv {\rm Tr}\left(X_l^{2}X_5^{2}\right)$ & 6 & $+$\\
\hline
${\cal J}_{240}^{(2)}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)$ & 6 & $-$\\
\hline
${\cal J}_{060}^{}\equiv {\rm Tr}\left(X_5^{3}\right)$ & 6 & $+$\\
\hline
${\cal J}_{042}^{}\equiv {\rm Tr}\left(C_6^{2}X_5^{2}\right)$ & 6 & $+$\\
\hline
${\cal J}_{042}^{(2)}\equiv {\rm Im}\,{\rm Tr}\left(C_6^{}X_5^{}G_{56}^{}\right)$ & 6 & $-$\\
\hline
${\cal J}_{260}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{2}G_{l5}^{}\right)$ & 8 & $-$\\
\hline
${\cal J}_{062}^{}\equiv {\rm Im}\,{\rm Tr}\left(C_6^{}X_5^{2}G_{56}^{}\right)$ & 8 & $-$\\
\hline
${\cal J}_{280}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{2}G_{l5}^{}X_5^{}\right)$ & 10 & $-$\\
\hline
${\cal J}_{082}^{}\equiv {\rm Im}\,{\rm Tr}\left(C_6^{}X_5^{2}G_{56}^{}X_5^{}\right)$ & 10 & $-$\\
\hline
\hline
\end{tabular}
\vspace{0.5cm}
\caption{\label{table:3g eff} Summary of the primary flavor invariants along with their degrees and CP parities in the SEFT for three-generation leptons, where the subscripts of the invariants denote the degrees of $X_l^{}\equiv Y_l^{}Y_l^\dagger$, $C_5^{}$ and $C_6^{}$, respectively. We have also defined $X_5^{}\equiv C_5^{}C_5^\dagger$, $G_{l5}^{}\equiv C_5^{}X_l^*C_5^\dagger$ and $G_{56}^{}\equiv C_5^{}C_6^*C_5^\dagger$. There are in total 21 primary invariants in the invariant ring of the flavor space and 6 of them are CP-odd, corresponding to the 6 independent phases in the three-generation SEFT.}
\end{table}
\renewcommand\arraystretch{1}
The 21 primary invariants are explicitly constructed in Table~\ref{table:3g eff}. We now show how to extract all physical parameters in the SEFT from these primary invariants.
For convenience, we again start with the basis where $C_5^{}$ is diagonal with real and non-negative eigenvalues, i.e., $C_5^{}={\rm Diag}\left\{c_1^{},c_2^{},c_3^{}\right\}$, while $X_l^{}$ and $C_6^{}$ are two arbitrary $3\times 3$ Hermitian matrices\footnote{As in the two-generation case, we assume that $c_1^{}$, $c_2^{}$ and $c_3^{}$ are mutually distinct and that there are in general no texture zeros in the matrix elements of $X_l^{}$ and $C_6^{}$.}
\begin{eqnarray}
X_l^{}=\left(
\begin{matrix}
a_{11}^{}&a_{12}^{}e_{}^{{\rm i}\alpha_{12}}&a_{31}e_{}^{-{\rm i}\alpha_{31}}\\
a_{12}e_{}^{-{\rm i}\alpha_{12}}&a_{22}^{}&a_{23}^{}e_{}^{{\rm i}\alpha_{23}}\\
a_{31}e_{}^{{\rm i}\alpha_{31}}&a_{23}^{}e_{}^{-{\rm i}\alpha_{23}}&a_{33}^{}
\end{matrix}
\right)\;,\quad
C_6^{}=\left(
\begin{matrix}
b_{11}^{}&b_{12}^{}e_{}^{{\rm i}\beta_{12}}&b_{31}e_{}^{-{\rm i}\beta_{31}}\\
b_{12}e_{}^{-{\rm i}\beta_{12}}&b_{22}^{}&b_{23}^{}e_{}^{{\rm i}\beta_{23}}\\
b_{31}e_{}^{{\rm i}\beta_{31}}&b_{23}^{}e_{}^{-{\rm i}\beta_{23}}&b_{33}^{}
\end{matrix}
\right)\;,
\end{eqnarray}
where $a_{ij}^{}$ and $b_{ij}^{}$ are real numbers while $\alpha_{ij}^{}$ and $\beta_{ij}^{}$ are phases. First, the eigenvalues of $C_5^{}$, corresponding to the masses of light Majorana neutrinos, can be extracted from ${\cal J}_{020}^{}$, ${\cal J}_{040}^{}$ and ${\cal J}_{060}^{}$ as follows
\begin{eqnarray}
\label{eq:extract C5 3g 1}
{\cal J}_{020}^{}&\equiv& {\rm Tr}\left(X_5^{}\right)=c_1^2+c_2^2+c_3^2\;,\\
\label{eq:extract C5 3g 2}
{\cal J}_{040}^{}&\equiv& {\rm Tr}\left(X_5^{2}\right)=c_1^4+c_2^4+c_3^4\;,\\
\label{eq:extract C5 3g 3}
{\cal J}_{060}^{}&\equiv& {\rm Tr}\left(X_5^{3}\right)=c_1^6+c_2^6+c_3^6\;.
\end{eqnarray}
In principle $c_1^{}$, $c_2^{}$ and $c_3^{}$ can be solved from Eqs.~(\ref{eq:extract C5 3g 1})-(\ref{eq:extract C5 3g 3}) in terms of ${\cal J}_{020}^{}$, ${\cal J}_{040}^{}$ and ${\cal J}_{060}^{}$. However, the most general solution is too lengthy to be listed here. For illustration, we only consider two hierarchical scenarios. For the normal hierarchical mass ordering with $0 \leqslant c_1^{}<c_2^{}\ll c_3^{}$, we have
\begin{eqnarray}
c_{1,2}^{}=\sqrt{\frac{{\cal J}_{020}^2-{\cal J}_{040}}{4{\cal J}_{060}^{1/3}}\mp\sqrt{\left(\frac{{\cal J}_{020}^2-{\cal J}_{040}}{4{\cal J}_{060}^{1/3}}\right)_{}^2-\frac{{\cal J}_{020}^3-3{\cal J}_{020}{\cal J}_{040}+2{\cal J}_{060}}{6{\cal J}_{060}^{1/3}}}}\;,\quad
c_3^{}={\cal J}_{060}^{1/6}\;;
\end{eqnarray}
while for the inverted hierarchical mass ordering with $0 \leqslant c_3^{}\ll c_1^{}<c_2^{}$, we get
\begin{eqnarray}
c_{1,2}^{}=\frac{1}{\sqrt{2}}\sqrt{{\cal J}_{020}^{}-c_3^2\mp\sqrt{2{\cal J}_{040}^{}-{\cal J}_{020}^2+2{\cal J}_{020}^{}c_3^2}}\;,\quad
c_3^{}=\sqrt{\frac{{\cal J}_{020}^3-3{\cal J}_{020}{\cal J}_{040}+2{\cal J}_{060}}{3\left({\cal J}_{020}^2-{\cal J}_{040}\right)}}\;.
\end{eqnarray}
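As an illustrative numerical sanity check (not part of the derivation), the normal-hierarchy inversion formulas above can be tested with arbitrary toy values $c_1^{}=0.1$, $c_2^{}=0.3$, $c_3^{}=5$ (assumptions for demonstration only, not fitted to neutrino data):

```python
import numpy as np

# Toy check of the normal-hierarchy extraction (c1 < c2 << c3).
c1, c2, c3 = 0.1, 0.3, 5.0

# Power-sum invariants J_020 = Tr(X5), J_040 = Tr(X5^2), J_060 = Tr(X5^3)
J020 = c1**2 + c2**2 + c3**2
J040 = c1**4 + c2**4 + c3**4
J060 = c1**6 + c2**6 + c3**6

# Approximate inversion, valid for a hierarchical spectrum
A = (J020**2 - J040) / (4.0 * J060**(1.0 / 3.0))
B = (J020**3 - 3.0 * J020 * J040 + 2.0 * J060) / (6.0 * J060**(1.0 / 3.0))
c1_rec = np.sqrt(A - np.sqrt(A**2 - B))
c2_rec = np.sqrt(A + np.sqrt(A**2 - B))
c3_rec = J060**(1.0 / 6.0)
```

For such a hierarchy the recovered eigenvalues agree with the inputs at the sub-permille level, since the corrections scale with powers of $c_{1,2}^2/c_3^2$.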
Then the invariants ${\cal J}_{100}^{}$, ${\cal J}_{120}^{}$ and ${\cal J}_{140}$ can be used to extract the diagonal elements of $X_l^{}$, i.e.,
\begin{eqnarray*}
{\cal J}_{100}^{}&\equiv&{\rm Tr}\left(X_l^{}\right)=a_{11}^{}+a_{22}^{}+a_{33}^{}\;,\\
{\cal J}_{120}^{}&\equiv&{\rm Tr}\left(X_l^{}X_5^{}\right)=c_1^2a_{11}^{}+c_2^2a_{22}^{}+c_3^2a_{33}^{}\;,\\
{\cal J}_{140}^{}&\equiv&{\rm Tr}\left(X_l^{}X_5^{2}\right)=c_1^4a_{11}^{}+c_2^4a_{22}^{}+c_3^4a_{33}^{}\;,
\end{eqnarray*}
from which we obtain
\begin{eqnarray}
\label{eq:extract aii}
a_{ii}^{}=\frac{\left(c_j^2+c_k^2\right){\cal J}_{120}-c_j^2c_k^2{\cal J}_{100}-{\cal J}_{140}}{\left(c_k^2-c_i^2\right)\left(c_i^2-c_j^2\right)}\;,
\end{eqnarray}
where $\left(i,j,k\right)=\left(1,2,3\right)$ or $\left(2,3,1\right)$ or $\left(3,1,2\right)$. As for the off-diagonal elements of $X_l^{}$, we can use
\begin{eqnarray*}
{\cal J}_{200}^{}&\equiv& {\rm Tr}\left(X_l^2\right)=a_{11}^2+a_{22}^2+a_{33}^2+2\left(a_{12}^{2}+a_{23}^{2}+a_{31}^{2}\right)\;,\\
{\cal J}_{220}^{}&\equiv& {\rm Tr}\left(X_l^2X_5^{}\right)=c_1^2a_{11}^2+c_2^2a_{22}^2+c_3^2a_{33}^2+\left(c_1^2+c_2^2\right)a_{12}^2+\left(c_2^2+c_3^2\right)a_{23}^2+\left(c_3^2+c_1^2\right)a_{31}^2\;,\\
{\cal J}_{240}^{}&\equiv& {\rm Tr}\left(X_l^2X_5^{2}\right)=c_1^4a_{11}^2+c_2^4a_{22}^2+c_3^4a_{33}^2+\left(c_1^4+c_2^4\right)a_{12}^2+\left(c_2^4+c_3^4\right)a_{23}^2+\left(c_3^4+c_1^4\right)a_{31}^2\;,
\end{eqnarray*}
to derive\footnote{Without loss of generality, the phases $\alpha_{ij}^{}$ and $\beta_{ij}^{}$ can be chosen to ensure $a_{ij}^{}>0$ and $b_{ij}^{} > 0$ for $i\neq j$.}
\begin{eqnarray}
\label{eq:extract aij}
a_{ij}^{}=\sqrt{\frac{\left(c_1^2c_2^2+c_2^2c_3^2+c_3^2c_1^2-c_k^4\right){\cal J}_{200}^{\prime}-\left(c_i^2+c_j^2\right){\cal J}_{220}^{\prime}+{\cal J}_{240}^{\prime}}{\left(c_j^2-c_k^2\right)\left(c_k^2-c_i^2\right)}}\;,
\end{eqnarray}
where $\left(i,j,k\right)=\left(1,2,3\right)$ or $\left(2,3,1\right)$ or $\left(3,1,2\right)$ and
\begin{eqnarray*}
{\cal J}_{200}^\prime &\equiv& \frac{1}{2}\left({\cal J}_{200}^{}-a_{11}^2-a_{22}^2-a_{33}^2\right)\;,\\
{\cal J}_{220}^\prime &\equiv& {\cal J}_{220}^{}-\left(c_1^2a_{11}^2+c_2^2a_{22}^2+c_3^2a_{33}^2\right)\;,\\
{\cal J}_{240}^\prime &\equiv& {\cal J}_{240}^{}-\left(c_1^4a_{11}^2+c_2^4a_{22}^2+c_3^4a_{33}^2\right)\;.
\end{eqnarray*}
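The extraction formulas in Eqs.~(\ref{eq:extract aii}) and (\ref{eq:extract aij}) are exact identities and can be verified numerically. The sketch below uses assumed toy eigenvalues $c_i^{}$ and a random Hermitian $X_l^{}$ for illustration only:

```python
import numpy as np

# Illustrative check of Eqs. (extract aii) and (extract aij): recover the
# diagonal elements and off-diagonal moduli of a generic Hermitian X_l.
c = np.array([0.5, 1.2, 2.3])            # eigenvalues of C_5 (toy values)
X5 = np.diag(c**2)

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Xl = M + M.conj().T                      # a generic Hermitian X_l

# Unprimed invariants J_100, J_120, J_140
J100 = np.trace(Xl).real
J120 = np.trace(Xl @ X5).real
J140 = np.trace(Xl @ X5 @ X5).real

def a_diag(i, j, k):
    """a_ii from Eq. (extract aii), with (i,j,k) a cyclic permutation."""
    ci2, cj2, ck2 = c[i]**2, c[j]**2, c[k]**2
    return (((cj2 + ck2) * J120 - cj2 * ck2 * J100 - J140)
            / ((ck2 - ci2) * (ci2 - cj2)))

a = np.array([a_diag(0, 1, 2), a_diag(1, 2, 0), a_diag(2, 0, 1)])

# Primed invariants, with the diagonal contributions subtracted
J200p = 0.5 * (np.trace(Xl @ Xl).real - np.sum(a**2))
J220p = np.trace(Xl @ Xl @ X5).real - np.sum(c**2 * a**2)
J240p = np.trace(Xl @ Xl @ X5 @ X5).real - np.sum(c**4 * a**2)
e2 = (c[0] * c[1])**2 + (c[1] * c[2])**2 + (c[2] * c[0])**2

def a_off(i, j, k):
    """|(X_l)_{ij}| from Eq. (extract aij), with (i,j,k) cyclic."""
    ci2, cj2, ck2 = c[i]**2, c[j]**2, c[k]**2
    num = (e2 - ck2**2) * J200p - (ci2 + cj2) * J220p + J240p
    return np.sqrt(num / ((cj2 - ck2) * (ck2 - ci2)))
```

The recovered $a_{ii}^{}$ and $a_{ij}^{}$ match the input matrix to machine precision, since no approximation is involved.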
Finally, the phases in $X_l^{}$ can be conveniently determined by using CP-odd invariants
\begin{eqnarray*}
{\cal J}_{240}^{(2)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)\\
&=&-a_{12}^2c_1^{}c_2^{}\left(c_1^2-c_2^2\right)\sin2\alpha_{12}^{}-a_{23}^2c_2^{}c_3^{}\left(c_2^2-c_3^2\right)\sin2\alpha_{23}^{}-a_{31}^2c_3^{}c_1^{}\left(c_3^2-c_1^2\right)\sin2\alpha_{31}^{}\;,\\
{\cal J}_{260}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{2}G_{l5}^{}\right)\\
&=&-a_{12}^2c_1^{}c_2^{}\left(c_1^4-c_2^4\right)\sin2\alpha_{12}^{}-a_{23}^2c_2^{}c_3^{}\left(c_2^4-c_3^4\right)\sin2\alpha_{23}^{}-a_{31}^2c_3^{}c_1^{}\left(c_3^4-c_1^4\right)\sin2\alpha_{31}^{}\;,\\
{\cal J}_{280}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{2}G_{l5}^{}X_5^{}\right)\\
&=&-a_{12}^2c_1^{3}c_2^{3}\left(c_1^2-c_2^2\right)\sin2\alpha_{12}^{}-a_{23}^2c_2^{3}c_3^{3}\left(c_2^2-c_3^2\right)\sin2\alpha_{23}^{}-a_{31}^2c_3^{3}c_1^{3}\left(c_3^2-c_1^2\right)\sin2\alpha_{31}^{}\;,
\end{eqnarray*}
from which we have
\begin{eqnarray}
\label{eq:extract alphaij}
\sin2\alpha_{ij}^{}=\frac{c_k^4{\cal J}_{240}^{(2)}-c_k^2{\cal J}_{260}+{\cal J}_{280}}{a_{ij}^{2}c_i c_j\left(c_1^2-c_2^2\right)\left(c_2^2-c_3^2\right)\left(c_3^2-c_1^2\right)}\;,
\end{eqnarray}
where $\left(i,j,k\right)=\left(1,2,3\right)$ or $\left(2,3,1\right)$ or $\left(3,1,2\right)$. Similarly, the elements in $C_6^{}$ can be extracted in a parallel manner
\begin{eqnarray}
\label{eq:extract bii}
b_{ii}^{}&=&\frac{\left(c_j^2+c_k^2\right){\cal J}_{021}-c_j^2c_k^2{\cal J}_{001}-{\cal J}_{041}}{\left(c_k^2-c_i^2\right)\left(c_i^2-c_j^2\right)}\;,\\
\label{eq:extract bij}
b_{ij}^{}&=&\sqrt{\frac{\left(c_1^2c_2^2+c_2^2c_3^2+c_3^2c_1^2-c_k^4\right){\cal J}_{002}^{\prime}-\left(c_i^2+c_j^2\right){\cal J}_{022}^{\prime}+{\cal J}_{042}^{\prime}}{\left(c_j^2-c_k^2\right)\left(c_k^2-c_i^2\right)}}\;,\\
\label{eq:extract betaij}
\sin2\beta_{ij}^{}&=&\frac{c_k^4{\cal J}_{042}^{(2)}-c_k^2{\cal J}_{062}+{\cal J}_{082}}{b_{ij}^{2}c_i c_j\left(c_1^2-c_2^2\right)\left(c_2^2-c_3^2\right)\left(c_3^2-c_1^2\right)}\;,
\end{eqnarray}
where $\left(i,j,k\right)=\left(1,2,3\right)$ or $\left(2,3,1\right)$ or $\left(3,1,2\right)$ and
\begin{eqnarray*}
{\cal J}_{002}^\prime &\equiv& \frac{1}{2}\left({\cal J}_{002}^{}-b_{11}^2-b_{22}^2-b_{33}^2\right)\;,\\
{\cal J}_{022}^\prime &\equiv& {\cal J}_{022}^{}-\left(c_1^2b_{11}^2+c_2^2b_{22}^2+c_3^2b_{33}^2\right)\;,\\
{\cal J}_{042}^\prime &\equiv& {\cal J}_{042}^{}-\left(c_1^4b_{11}^2+c_2^4b_{22}^2+c_3^4b_{33}^2\right)\;.
\end{eqnarray*}
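The phase-extraction formula in Eq.~(\ref{eq:extract alphaij}) can likewise be cross-checked numerically. The sketch below builds $X_l^{}$ from assumed toy moduli $a_{ij}^{}$ and phases $\alpha_{ij}^{}$ (illustrative values only), computes the CP-odd invariants directly as traces with $G_{l5}^{}=C_5^{}X_l^*C_5^\dagger$, and recovers $\sin2\alpha_{ij}^{}$:

```python
import numpy as np

# Illustrative check of Eq. (extract alphaij) with toy inputs.
c = np.array([0.5, 1.2, 2.3])                       # eigenvalues of C_5
pairs = [(0, 1), (1, 2), (2, 0)]                    # index pairs 12, 23, 31
a = {(0, 1): 0.7, (1, 2): 0.4, (2, 0): 0.9}         # moduli a_ij
alpha = {(0, 1): 0.3, (1, 2): -0.8, (2, 0): 1.1}    # phases alpha_ij

Xl = np.diag([1.0, 2.0, 3.0]).astype(complex)
for (i, j) in pairs:
    Xl[i, j] = a[(i, j)] * np.exp(1j * alpha[(i, j)])
    Xl[j, i] = np.conj(Xl[i, j])

C5 = np.diag(c)
X5 = C5 @ C5
Gl5 = C5 @ Xl.conj() @ C5        # G_l5 = C_5 X_l^* C_5^dagger in this basis

J240_2 = np.trace(Xl @ X5 @ Gl5).imag
J260 = np.trace(Xl @ X5 @ X5 @ Gl5).imag
J280 = np.trace(Xl @ X5 @ X5 @ Gl5 @ X5).imag

den = (c[0]**2 - c[1]**2) * (c[1]**2 - c[2]**2) * (c[2]**2 - c[0]**2)

def sin2alpha(i, j, k):
    """sin(2*alpha_ij) from Eq. (extract alphaij), (i,j,k) cyclic."""
    num = c[k]**4 * J240_2 - c[k]**2 * J260 + J280
    return num / (a[(i, j)]**2 * c[i] * c[j] * den)
```

The recovered values coincide with $\sin2\alpha_{ij}^{}$ of the input phases; the formulas for $b_{ij}^{}$ and $\sin2\beta_{ij}^{}$ can be checked in the same way.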
In summary, in the basis where $C_5^{}$ is real and diagonal, the 21 physical parameters in the leptonic sector for the three-generation case
$$\left\{c_1^{},c_2^{},c_3^{},a_{11}^{},a_{22}^{},a_{33}^{},a_{12}^{},a_{23}^{},a_{31}^{},\alpha_{12}^{},\alpha_{23}^{},\alpha_{31}^{},b_{11}^{},b_{22}^{},b_{33}^{},b_{12}^{},b_{23}^{},b_{31}^{},\beta_{12}^{},\beta_{23}^{},\beta_{31}^{}\right\}$$
can be extracted from the 21 primary invariants in Table~\ref{table:3g eff} by Eqs.~(\ref{eq:extract C5 3g 1})-(\ref{eq:extract C5 3g 3}) and Eqs.~(\ref{eq:extract aii})-(\ref{eq:extract betaij}). Note that among the primary invariants we choose 6 of them to be CP-odd and the others to be CP-even, in accordance with the fact that there are 6 independent CP-violating phases in the three-generation SEFT.
After the spontaneous breaking of the gauge symmetry, the masses of neutrinos are given by
\begin{eqnarray}
m_i^{}=\frac{v^2}{2\Lambda}c_i^{}\;,\qquad
i=1,2,3\;,
\end{eqnarray}
while the masses of charged leptons are determined by the eigenvalues of $X_l^{}$
\begin{eqnarray}
m_\alpha^{}=\frac{v}{\sqrt{2}}\sqrt{l_\alpha^{}}\;,\qquad
\alpha=e,\mu,\tau\;,
\end{eqnarray}
with $l_\alpha^{}$ the eigenvalues of $X_l^{}$. Furthermore, the flavor mixing matrix $V_3^{}$ is related to $X_l^{}$ by $X_l^{}=V_3^\dagger{\rm Diag}\left\{l_e^{},l_\mu^{},l_\tau^{}\right\}V_3^{}$ and $X_l^{2}=V_3^{\dagger}{\rm Diag}\left\{l_e^{2},l_\mu^{2},l_\tau^{2}\right\}V_3^{}$, from which one can extract the matrix elements of $V_3^{}$ as below
\begin{eqnarray}
\label{eq:extract Vii}
\left|V_{\alpha i}\right|^2&=&\frac{\left(X_l\right)_{ii}\left(l_\beta+l_\gamma\right)-\left(X_l^2\right)_{ii}-l_\beta l_\gamma}{\left(l_\gamma-l_\alpha\right)\left(l_\alpha-l_\beta\right)}\;,\qquad
i=1,2,3\;,\\
\label{eq:extract Vij}
V_{\alpha i}^*V_{\alpha j}^{}&=&\frac{\left(X_l^{}\right)_{ij}\left(l_\beta+l_\gamma\right)-\left(X_l^2\right)_{ij}}{\left(l_\gamma-l_\alpha\right)\left(l_\alpha-l_\beta\right)}\;,\qquad
i,j=1,2,3\; (i\neq j)\;,
\end{eqnarray}
with $\left(\alpha,\beta,\gamma\right)=\left(e,\mu,\tau\right)$ or $\left(\mu,\tau,e\right)$ or $\left(\tau,e,\mu\right)$, and $V_{\alpha i}^{}$ denoting the $\left(\alpha,i\right)$-element of $V_3^{}$. In the standard parametrization of the PMNS matrix~\cite{ParticleDataGroup:2020ssz}, $V_3^{}$ is written as
\begin{eqnarray}
\label{eq:standard para}
V_3^{}=\left( \begin{matrix} c^{}_{13} c^{}_{12} & c^{}_{13} s^{}_{12} & s^{}_{13} e_{}^{-{\rm i}\delta} \cr -s_{12}^{} c_{23}^{} - c_{12}^{} s_{13}^{} s_{23}^{} e^{ {\rm i}\delta}_{} & + c_{12}^{} c_{23}^{} - s_{12}^{} s_{13}^{} s_{23}^{} e^{{\rm i}\delta}_{} & c_{13}^{} s_{23}^{} \cr + s_{12}^{} s_{23}^{} - c_{12}^{} s_{13}^{} c_{23}^{} e^{ {\rm i}\delta}_{} & - c_{12}^{} s_{23}^{} - s_{12}^{} s_{13}^{} c_{23}^{} e^{{\rm i}\delta}_{} & c_{13}^{} c_{23}^{} \end{matrix} \right) \cdot \left(\begin{matrix} e^{{\rm i}\rho}_{} & 0 & 0 \\ 0 & e^{{\rm i}\sigma}_{} & 0 \\ 0 & 0 & 1\end{matrix}\right) \; ,
\end{eqnarray}
where $c_{ij}^{}\equiv\cos\theta_{ij}^{}$ and $s_{ij}^{}\equiv\sin\theta_{ij}^{}$ (for $ij=12,13,23$). Therefore, the flavor mixing angles $\left\{\theta_{12}^{},\theta_{13}^{},\theta_{23}^{}\right\}$, the Dirac-type phase $\delta$ and two Majorana-type phases $\left\{\rho,\sigma\right\}$ can also be extracted from primary flavor invariants through
\begin{eqnarray}
&&s_{13}^{} = |V_{e3}|\;,\quad s_{12}^{} = \frac{|V_{e2}|}{\sqrt{1-|V_{e3}|^2}}\;,\quad s_{23}^{} = \frac{|V_{\mu 3}|}{\sqrt{1-|V_{e3}|^2}}\;,\quad
\sin \delta =\frac{{\rm Im}\left(V_{e2}^{}V_{e3}^{*}V_{\mu 2}^{*}V_{\mu 3}\right)}{s_{12}^{} c_{12}^{} s_{23}^{} c_{23}^{} s_{13}^{} c_{13}^2}\;,\nonumber\\
&&\rho = - \delta - {\rm Arg}\left(\frac{V_{e1}^* V_{e3}^{}}{c_{12}^{} c_{13}^{} s_{13}^{}}\right)\;,\quad
\sigma = - \delta - {\rm Arg}\left(\frac{V_{e2}^* V_{e3}^{}}{s_{12}^{} c_{13}^{} s_{13}^{}}\right)\;,
\end{eqnarray}
and Eqs.~(\ref{eq:extract Vii})-(\ref{eq:extract Vij}).
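The reconstruction of the mixing matrix elements in Eqs.~(\ref{eq:extract Vii})-(\ref{eq:extract Vij}) is again an exact identity and can be verified numerically. In the sketch below a random unitary matrix stands in for $V_3^{}$ and the eigenvalues $l_\alpha^{}$ are arbitrary toy values (assumptions for demonstration only):

```python
import numpy as np

# Illustrative check of Eqs. (extract Vii)-(extract Vij).
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
V3 = Q                                        # a random unitary "V_3"
l = np.array([0.3, 1.0, 2.5])                 # eigenvalues l_e, l_mu, l_tau

Xl = V3.conj().T @ np.diag(l) @ V3            # X_l = V_3^dag Diag{l} V_3
Xl2 = Xl @ Xl

def V_abs2(alpha, i):
    """|V_{alpha i}|^2 from Eq. (extract Vii)."""
    beta, gamma = [x for x in range(3) if x != alpha]
    num = (Xl[i, i].real * (l[beta] + l[gamma]) - Xl2[i, i].real
           - l[beta] * l[gamma])
    return num / ((l[gamma] - l[alpha]) * (l[alpha] - l[beta]))

def V_pair(alpha, i, j):
    """V*_{alpha i} V_{alpha j} from Eq. (extract Vij), i != j."""
    beta, gamma = [x for x in range(3) if x != alpha]
    num = Xl[i, j] * (l[beta] + l[gamma]) - Xl2[i, j]
    return num / ((l[gamma] - l[alpha]) * (l[alpha] - l[beta]))
```

Note that the right-hand sides are symmetric under $\beta\leftrightarrow\gamma$, so any assignment of the two remaining flavor indices gives the same result.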
\subsection{Conditions for CP conservation}
\label{subsec:conditions3g}
In this subsection we investigate the conditions for CP conservation in the three-generation SEFT. As in the two-generation case, one would expect that the minimal conditions guaranteeing CP conservation in the leptonic sector consist in the vanishing of 6 CP-odd invariants, which is also the number of independent phases in the theory. An immediate choice is the 6 CP-odd primary invariants in Table~\ref{table:3g eff}. However, it can be shown that the vanishing of all the 6 CP-odd invariants in Table~\ref{table:3g eff} is \emph{not} sufficient to guarantee CP conservation in the leptonic sector. An explicit counterexample is $\alpha_{ij}^{}=\beta_{ij}^{}=\pi/2$ (for $ij=12,23,31$), which is of course a solution to ${\cal J}_{240}^{(2)}={\cal J}_{042}^{(2)}={\cal J}_{260}^{}={\cal J}_{062}^{}={\cal J}_{280}^{}={\cal J}_{082}^{}=0$. But, in this case, the following Jarlskog-like flavor invariant
\begin{eqnarray}
\label{eq:j360}
{\cal J}_{360}^{} &\equiv& {\rm Im}\,{\rm Tr}\left(X_l^2X_5^2X_l^{}X_5^{}\right)=\frac{1}{2{\rm i}}{\rm Det}\left(\left[X_l^{},X_5^{}\right]\right)\nonumber\\
&=&-a_{12}^{}a_{23}^{}a_{31}^{}\left(c_1^2-c_2^2\right)\left(c_2^2-c_3^2\right)\left(c_3^2-c_1^2\right)\sin\left(\alpha_{12}^{}+\alpha_{23}^{}+\alpha_{31}^{}\right)\;,
\end{eqnarray}
is nonzero if the neutrino masses are not degenerate and there are no texture zeros in $X_l^{}$. The invariant ${\cal J}_{360}^{}$ will appear in the CP asymmetries of neutrino oscillations and cause CP violation in the leptonic sector. In Ref.~\cite{Yu:2019ihs} we have actually proved that in the presence of only the charged-lepton mass matrix and the Majorana neutrino mass matrix [i.e., the effective theory up to ${\cal O}\left(1/\Lambda\right)$], the minimal sufficient and necessary conditions for CP conservation in the leptonic sector are given by
\begin{eqnarray}
\label{eq:cp conservation condition 1}
{\cal J}_{360}^{}&\equiv& {\rm Im}\,{\rm Tr}\left(X_l^2X_5^2X_l^{}X_5^{}\right)=0\;,\\
\label{eq:cp conservation condition 2}
{\cal J}_{240}^{(2)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)=0\;,\\
\label{eq:cp conservation condition 3}
{\cal J}_{260}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{2}G_{l5}^{}\right)=0\;.
\end{eqnarray}
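As a side check (with random toy matrices, not inputs from the paper), the determinant identity quoted in Eq.~(\ref{eq:j360}) holds for arbitrary $3\times 3$ Hermitian matrices and can be verified numerically:

```python
import numpy as np

# Numerical check of Im Tr(Xl^2 X5^2 Xl X5) = Det([Xl, X5]) / (2i)
# for generic 3x3 Hermitian matrices (random toy inputs).
rng = np.random.default_rng(3)

def rand_herm(n=3):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

Xl, X5 = rand_herm(), rand_herm()
lhs = np.trace(Xl @ Xl @ X5 @ X5 @ Xl @ X5).imag
rhs = np.linalg.det(Xl @ X5 - X5 @ Xl) / 2j   # commutator is anti-Hermitian
```

Since the commutator of two Hermitian matrices is traceless and anti-Hermitian, its determinant is purely imaginary, so the right-hand side is real up to numerical noise.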
Eqs.~(\ref{eq:cp conservation condition 1})-(\ref{eq:cp conservation condition 3}) enforce the three CP phases in the flavor mixing matrix to take only trivial values:
\begin{eqnarray*}
&&({\rm i})\; \delta=\rho=\sigma=0 \iff \alpha_{12}^{}, \; \alpha_{23}^{}, \; \alpha_{31}^{}=k\pi\\
&&({\rm ii})\; \delta=\rho=0\,,\;\sigma=\pi/2 \iff \alpha_{12}^{}, \; \alpha_{23}^{}=\frac{2k+1}{2}\pi\,,\; \alpha_{31}^{}=k\pi\\
&&({\rm iii})\; \delta=\sigma=0\,,\;\rho=\pi/2 \iff \alpha_{12}^{}, \; \alpha_{31}^{}=\frac{2k+1}{2}\pi\,,\; \alpha_{23}^{}=k\pi\\
&&({\rm iv})\; \delta=0\,,\;\rho=\sigma=\pi/2 \iff \alpha_{23}^{}, \; \alpha_{31}^{}=\frac{2k+1}{2}\pi\,,\; \alpha_{12}^{}=k\pi
\end{eqnarray*}
where $k$ is an arbitrary integer. It is easy to check that in any of the four scenarios (i)-(iv), CP is conserved in the leptonic sector up to ${\cal O}\left(1/\Lambda\right)$. This is because in the standard parametrization in Eq.~(\ref{eq:standard para}), any CP-violating observable is proportional to $\sin\left(l\delta+2m\rho+2n\sigma\right)$ with $l,m,n$ being arbitrary integers and would vanish in any of the scenarios (i)-(iv). In particular, the Jarlskog-like invariant ${\cal J}_{360}^{}$ in Eq.~(\ref{eq:j360}) also vanishes. When the dimension-six operator ${\cal O}^{}_6$ is included, three additional phases $\beta_{ij}^{}$ appear. It is evident that in the basis where $C_5^{}$ is real and diagonal, $\alpha_{ij}^{}$ and $\beta_{ij}^{}$ play parallel roles in describing CP violation. This inspires us to construct the following three CP-odd invariants
\begin{eqnarray}
{\cal J}_{121}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}C_6^{}\right)\;,\\
{\cal J}_{141}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{2}C_6^{}\right)\;,\\
{\cal J}_{161}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(X_5^{}X_l^{}X_5^{2}C_6^{}\right)\;,
\end{eqnarray}
and the vanishing of them gives three homogeneous linear equations of $\sin\left(\alpha_{ij}^{}-\beta_{ij}^{}\right)$, namely,
\begin{eqnarray}
\label{eq:j121}
{\cal J}_{121}^{}&=&-a_{12}^{}b_{12}^{}\left(c_1^2-c_2^2\right)\sin\left(\alpha_{12}^{}-\beta_{12}^{}\right)-a_{23}^{}b_{23}^{}\left(c_2^2-c_3^2\right)\sin\left(\alpha_{23}^{}-\beta_{23}^{}\right)\nonumber\\
&&-a_{31}^{}b_{31}^{}\left(c_3^2-c_1^2\right)\sin\left(\alpha_{31}^{}-\beta_{31}^{}\right)=0\;,\\
\label{eq:j141}
{\cal J}_{141}^{}&=&-a_{12}^{}b_{12}^{}\left(c_1^4-c_2^4\right)\sin\left(\alpha_{12}^{}-\beta_{12}^{}\right)-a_{23}^{}b_{23}^{}\left(c_2^4-c_3^4\right)\sin\left(\alpha_{23}^{}-\beta_{23}^{}\right)\nonumber\\
&&-a_{31}^{}b_{31}^{}\left(c_3^4-c_1^4\right)\sin\left(\alpha_{31}^{}-\beta_{31}^{}\right)=0\;,\\
\label{eq:j161}
{\cal J}_{161}^{}&=&-a_{12}^{}b_{12}^{}c_1^2c_2^2\left(c_1^2-c_2^2\right)\sin\left(\alpha_{12}^{}-\beta_{12}^{}\right)-a_{23}^{}b_{23}^{}c_2^2c_3^2\left(c_2^2-c_3^2\right)\sin\left(\alpha_{23}^{}-\beta_{23}^{}\right)\nonumber\\
&&-c_3^2c_1^2a_{31}^{}b_{31}^{}\left(c_3^2-c_1^2\right)\sin\left(\alpha_{31}^{}-\beta_{31}^{}\right)=0\;.
\end{eqnarray}
The determinant of the coefficient matrix in Eqs.~(\ref{eq:j121})-(\ref{eq:j161}) is simply
$$a_{12}^{}a_{23}^{}a_{31}^{}b_{12}^{}b_{23}^{}b_{31}^{}\left(c_1^2-c_2^2\right)_{}^2\left(c_2^2-c_3^2\right)_{}^2\left(c_3^2-c_1^2\right)_{}^2\neq 0\;,$$
thus Eqs.~(\ref{eq:j121})-(\ref{eq:j161}) lead to
\begin{eqnarray}
\label{eq:integer condition}
\alpha_{ij}^{}-\beta_{ij}^{}=k\pi\;,\quad
k\; {\rm integer}
\end{eqnarray}
for $ij=12,23,31$. One can verify that if the phases in the SEFT satisfy Eq.~(\ref{eq:integer condition}) along with the conditions in any one of the four scenarios (i)-(iv), then all the CP-violating observables vanish. Therefore, the vanishing of $\left\{{\cal J}_{360}^{},{\cal J}_{240}^{(2)},{\cal J}_{260}^{},{\cal J}_{121}^{},{\cal J}_{141}^{},{\cal J}_{161}^{}\right\}$ serves as the minimal sufficient and necessary conditions for leptonic CP conservation in the three-generation SEFT.
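The nonvanishing determinant of the coefficient matrix in Eqs.~(\ref{eq:j121})-(\ref{eq:j161}) can also be checked numerically, with assumed toy values for the products $a_{ij}^{}b_{ij}^{}$ and the eigenvalues $c_i^{}$ (illustrative inputs only):

```python
import numpy as np

# Check that the coefficient matrix of the linear system for
# sin(alpha_ij - beta_ij) has determinant
# a12 b12 a23 b23 a31 b31 (c1^2-c2^2)^2 (c2^2-c3^2)^2 (c3^2-c1^2)^2.
c2_ = np.array([0.5, 1.2, 2.3])**2              # c_i^2
ab = np.array([0.14, 0.24, 0.27])               # products a_ij * b_ij
d = np.array([c2_[0] - c2_[1], c2_[1] - c2_[2], c2_[2] - c2_[0]])
s = np.array([c2_[0] + c2_[1], c2_[1] + c2_[2], c2_[2] + c2_[0]])
p = np.array([c2_[0] * c2_[1], c2_[1] * c2_[2], c2_[2] * c2_[0]])

# Rows: coefficients in J_121, J_141, J_161 (overall minus signs included);
# note c_i^4 - c_j^4 = (c_i^2 + c_j^2)(c_i^2 - c_j^2).
M = np.vstack([-ab * d, -ab * s * d, -ab * p * d])
det_direct = np.linalg.det(M)
det_formula = np.prod(ab) * (d[0] * d[1] * d[2])**2
```

Since the determinant is a perfect square in the mass-squared differences, it is strictly positive for non-degenerate masses and nonvanishing $a_{ij}^{}b_{ij}^{}$, so the only solution of the homogeneous system is the trivial one.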
We close this section with some comments on the equivalence between the set of physical observables and the set of primary invariants. Strictly speaking, there are indeed discrete leftover degeneracies when extracting physical parameters from primary invariants. This is also the origin of the counterexample at the beginning of Sec.~\ref{subsec:conditions3g}: It is $\sin(2\alpha_{ij}^{})$ rather than $\sin\alpha_{ij}^{}$ that has been extracted from the primary invariants $\left\{{\cal J}_{240}^{(2)}, {\cal J}_{260}^{}, {\cal J}_{280}^{}\right\}$,
so ${\cal J}_{360}^{}$ cannot be expanded in terms of ${\cal J}_{240}^{(2)}$, ${\cal J}_{260}^{}$, ${\cal J}_{280}^{}$, and ${\cal J}_{240}^{(2)}{\cal J}_{260}^{}{\cal J}_{280}^{}$ when $\alpha_{ij}^{}=\pi/2$. Therefore, it is more precise to state that the set of physical parameters is equivalent to the set of primary invariants in the ring \emph{up to some discrete degeneracies}. The existence of such degeneracies can be ascribed to the complicated structure of the ring: There are high-degree primary invariants composed of high powers of building blocks, implying that they are functions of high multiples of CP-violating phases. The degeneracies can be eliminated by including more basic invariants other than the primary ones. For example, in Sec.~\ref{subsec:extract2g}, the 10 primary invariants can only determine the values of $\cos2\alpha$ and $\cos2\beta$, which remain unchanged under $\alpha\to -\alpha$ and $\beta\to -\beta$. Then one can introduce two more CP-odd invariants ${\cal I}_{240}^{}\propto \sin2\alpha$ and ${\cal I}_{042}^{}\propto \sin2\beta$, whose signs can be used to eliminate the $Z_2^{}$ degeneracy. However, one should keep in mind that ${\cal I}_{240}^{}$ and ${\cal I}_{042}^{}$ are by no means algebraically independent of the primary invariants, because their squares can be decomposed into polynomials of the primary ones due to the syzygies at degree 12.
\section{CP-violating observables and flavor invariants}
\label{sec:observables}
In the last two sections we have shown that all the physical parameters can be extracted from the primary flavor invariants. Therefore, any physical observable can be completely written as a function of a finite number of invariants, which is explicitly independent of the parametrization scheme and the flavor basis that has been chosen. In particular, for any CP-violating process, one will be able to define a working observable ${\cal A}_{\rm CP}^{}$ that changes its sign under the CP-conjugate transformation and can be expressed as\footnote{Here it is not claimed that any CP-violating observable can be written into the form of Eq.~(\ref{eq:observable decomposition}). But for any CP-violating process, there exists a working observable that measures the CP asymmetry of the process and can be written into the form of Eq.~(\ref{eq:observable decomposition}).}
\begin{eqnarray}
\label{eq:observable decomposition}
{\cal A}_{\rm CP}^{}=\sum_{j=1}^{j_{\rm max}^{}}{\cal F}_j^{}\left[{\cal I}_k^{\rm even}\right]{\cal I}_j^{\rm odd}\;,
\end{eqnarray}
where ${\cal I}_{j}^{\rm odd}$ and ${\cal F}_j^{}\left[{\cal I}_k^{\rm even}\right]$ (for $j=1,2,...,j_{\rm max}^{}$) are respectively CP-odd basic flavor invariants and functions of only CP-even basic flavor invariants,\footnote{If all the primary invariants happen to be CP-even, then ${\cal I}_k^{\rm even}$ can be restricted to be only primary invariants.} with $j_{\rm max}^{}$ being some positive integer.
The statement in Eq.~(\ref{eq:observable decomposition}) can be proved as follows. Suppose that there are in total $n$ independent phases coming from the complex couplings in the theory, denoted by $\left\{\alpha_1^{},...,\alpha_n^{}\right\}$. First, noticing that both ${\cal A}_{\rm CP}^{}$ and CP-odd flavor invariants change sign under the CP-conjugate transformation while CP-even flavor invariants remain unchanged, we can generally write ${\cal A}_{\rm CP}^{}$ as
\begin{eqnarray}
{\cal A}_{\rm CP}^{}=\sum_{j}^{}{\cal G}_j^{}\left(\vec{y}\right)\cos\left(\sum_{i=1}^{n}z_j^i\alpha_i\right)\sin\left(\sum_{i'=1}^{n}\tilde{z}_j^{i'}\alpha_{i'}^{}\right)\;,
\end{eqnarray}
where $z_j^i$ and $\tilde{z}_j^{i'}$ are integers and ${\cal G}_j^{}\left(\vec{y}\right)$ are functions of a set of parameters, denoted by $\vec{y}$, other than the phases. A key observation is that one can convert all the trigonometric functions into rational functions of $x_i^{}$ by setting $x_i^{}\equiv \tan\left(\alpha_i^{}/2\right)$ (for $i=1,2,...,n$):
\begin{eqnarray}
\label{eq:tangent half-angle substitution}
&&\sin\alpha_i^{}=\frac{2x_i}{1+x_i^2}\;,\quad
\cos\alpha_i^{}=\frac{1-x_i^2}{1+x_i^2}\;,\quad\nonumber\\ &&\sin\left(\alpha_i^{}+\alpha_j^{}\right)=\frac{2x_i\left(1-x_j^2\right)+2x_j\left(1-x_i^2\right)}{\left(1+x_i^2\right)\left(1+x_j^2\right)}\;,\quad
\cos\left(\alpha_i^{}+\alpha_j^{}\right)=\frac{\left(1-x_i^2\right)\left(1-x_j^2\right)-4x_ix_j}{\left(1+x_i^2\right)\left(1+x_j^2\right)}\;,
\nonumber\\
&&\sin\left(\alpha_i^{}+\alpha_j^{}+\alpha_k^{}\right)\nonumber\\
&&=\frac{2x_i\left(1-x_j^2\right)\left(1-x_k^2\right)+2x_j\left(1-x_k^2\right)\left(1-x_i^2\right)+2x_k\left(1-x_i^2\right)\left(1-x_j^2\right)-8x_ix_jx_k}{\left(1+x_i^2\right)\left(1+x_j^2\right)\left(1+x_k^2\right)}\;,
\end{eqnarray}
and so on. Using Eq.~(\ref{eq:tangent half-angle substitution}), ${\cal A}_{\rm CP}^{}$ can be rewritten as\footnote{This is because under the CP-conjugate transformation, only the odd powers of $x_i^{}$ change signs, but both $\vec{y}$ and even powers of $x_i^{}$ remain unchanged.}
\begin{eqnarray}
\label{eq:observable decomposition2}
{\cal A}_{\rm CP}^{}&=&\sum_{i=1}^{n}{\cal G}_i^{}\left(\vec{y}\right){\cal R}_i^{}\left(x_1^2,...,x_n^2\right)x_i^{}+\sum_{1\leqslant i_1<i_2<i_3\leqslant n}{\cal G}_{i_1i_2i_3}^{}\left(\vec{y}\right){\cal R}_{i_1i_2i_3}\left(x_1^2,...,x_n^2\right)x_{i_1}^{}x_{i_2}^{}x_{i_3}^{}\nonumber\\
&&+\cdots+\sum_{1\leqslant i_1<\cdots<i_{\rm max}\leqslant n}{\cal G}_{i_1\cdots i_{\rm max}}^{}\left(\vec{y}\right){\cal R}_{i_1\cdots i_{\rm max}}\left(x_1^2,...,x_n^2\right)x_{i_1}^{}\cdots x_{i_{\rm max}}^{}\nonumber\\
&=&\sum_{i=1}^{n}{\cal H}_{i}^{}\left[{\cal I}_k^{\rm even}\right]x_i^{}+\sum_{1\leqslant i_1<i_2<i_3\leqslant n}{\cal H}_{i_1i_2i_3}^{}\left[{\cal I}_k^{\rm even}\right]x_{i_1}^{}x_{i_2}^{}x_{i_3}^{}\nonumber\\
&&+\cdots+\sum_{1\leqslant i_1<\cdots<i_{\rm max}\leqslant n}{\cal H}_{i_1\cdots i_{\rm max}}^{}\left[{\cal I}_k^{\rm even}\right]x_{i_1}^{}\cdots x_{i_{\rm max}}^{}\;,
\end{eqnarray}
where $i_{\rm max}^{}=n$ (or $n-1$) if $n$ is odd (or even), ${\cal G}$ are functions of $\vec{y}$, ${\cal R}$ are rational functions of $\left\{x_1^2,\cdots,x_n^2\right\}$ and ${\cal H}\left[{\cal I}_k^{\rm even}\right]$ are functions of CP-even basic invariants. The second equality in Eq.~(\ref{eq:observable decomposition2}) follows from the fact that $\vec{y}$, the physical parameters other than phases, as well as $\left\{x_1^2,\cdots,x_n^2\right\}$, can all be extracted from only CP-even basic invariants. Note that in Eq.~(\ref{eq:observable decomposition2}) there are in total
\begin{eqnarray*}
\binom{n}{1}+\binom{n}{3}+\binom{n}{5}+\cdots+\binom{n}{i_{\rm max}^{}}=2_{}^{n-1}
\end{eqnarray*}
linearly-independent odd-power monomials of $\left\{x_1^{},\cdots,x_n^{}\right\}$. Therefore, Eq.~(\ref{eq:observable decomposition2}) can be written as a linear combination of these monomials
\begin{eqnarray}
\label{eq:observable decomposition3}
{\cal A}_{\rm CP}^{}=\sum_{i=1}^{2^{n-1}}{\cal H}_i^{}\left[{\cal I}_k^{\rm even}\right]{\cal M}_i^{}\;,
\end{eqnarray}
where ${\cal M}_i^{}$ (for $i=1,2,...,2_{}^{n-1}$) range over all the odd-power monomials of $\left\{x_1^{},\cdots,x_n^{}\right\}$.
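Two ingredients of the argument above can be spot-checked numerically (an illustrative sketch with random toy angles): the tangent half-angle substitutions in Eq.~(\ref{eq:tangent half-angle substitution}), and the count of odd-power monomials, $\sum_{k\;{\rm odd}}\binom{n}{k}=2^{n-1}$:

```python
import numpy as np
from math import comb

# (i) tangent half-angle substitutions at random angles
rng = np.random.default_rng(5)
ai, aj, ak = rng.uniform(-1.0, 1.0, size=3)
xi, xj, xk = np.tan(ai / 2), np.tan(aj / 2), np.tan(ak / 2)

sin_ij = (2 * xi * (1 - xj**2) + 2 * xj * (1 - xi**2)) \
    / ((1 + xi**2) * (1 + xj**2))
cos_ij = ((1 - xi**2) * (1 - xj**2) - 4 * xi * xj) \
    / ((1 + xi**2) * (1 + xj**2))
sin_ijk = (2 * xi * (1 - xj**2) * (1 - xk**2)
           + 2 * xj * (1 - xk**2) * (1 - xi**2)
           + 2 * xk * (1 - xi**2) * (1 - xj**2)
           - 8 * xi * xj * xk) \
    / ((1 + xi**2) * (1 + xj**2) * (1 + xk**2))

# (ii) number of odd-power monomials in n variables
def odd_binomial_sum(n):
    """Sum of C(n, k) over odd k <= n."""
    return sum(comb(n, k) for k in range(1, n + 1, 2))
```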
Since CP-odd flavor invariants change their signs under the CP-conjugate transformation as ${\cal A}_{\rm CP}^{}$ does, they can also be decomposed into the form of Eq.~(\ref{eq:observable decomposition3}). In order to \emph{linearly} extract all the ${\cal M}_i^{}$, one must utilize $2_{}^{n-1}$ linearly-independent CP-odd flavor invariants. If the total number of the CP-odd basic invariants in the ring is no smaller than $2^{n-1}_{}$, then it is possible to choose ${\cal I}_j^{\rm odd}$ (for $j=1,2,...,2_{}^{n-1}$) to linearly extract all the odd-power monomials
\begin{eqnarray}
\label{eq:linear combination}
{\cal M}_i^{}=\sum_{j=1}^{2^{n-1}}\widetilde{\cal H}_{ij}^{}\left[{\cal I}_k^{\rm even}\right]{\cal I}_j^{\rm odd}\;,\qquad
i=1,2,...,2^{n-1}_{}\;,
\end{eqnarray}
where $\widetilde{\cal H}_{ij}^{}\left[{\cal I}_k^{\rm even}\right]$ are functions of only CP-even basic invariants. Substituting Eq.~(\ref{eq:linear combination}) back into Eq.~(\ref{eq:observable decomposition3}) one obtains
\begin{eqnarray}
{\cal A}_{\rm CP}^{}=\sum_{i=1}^{2^{n-1}}\sum_{j=1}^{2^{n-1}}{\cal H}_i^{}\left[{\cal I}_k^{\rm even}\right]\widetilde{\cal H}_{ij}^{}\left[{\cal I}_k^{\rm even}\right]{\cal I}_j^{\rm odd}=\sum_{j=1}^{2^{n-1}}{\cal F}_j^{}\left[{\cal I}_k^{\rm even}\right]{\cal I}_j^{\rm odd}\;,
\end{eqnarray}
where ${\cal F}_j^{}\left[{\cal I}_k^{\rm even}\right]\equiv \sum_{i=1}^{2^{n-1}}{\cal H}_i^{}\left[{\cal I}_k^{\rm even}\right]\widetilde{\cal H}_{ij}^{}\left[{\cal I}_k^{\rm even}\right]$. This completes the proof of Eq.~(\ref{eq:observable decomposition}) with $j_{\rm max}^{}=2_{}^{n-1}$. However, if the total number of the CP-odd basic invariants in the ring is smaller than $2_{}^{n-1}$, then it is in general \emph{impossible} to express an arbitrary CP-violating observable as the \emph{linear} combination of CP-odd basic flavor invariants as in Eq.~(\ref{eq:observable decomposition}).
It should be noted that the number of CP-odd basic invariants needed to linearly expand any CP-violating observable need not match the minimal number of CP-odd invariants whose vanishing guarantees CP conservation in the theory. For example, for a theory with $n=6$ independent phases (such as the SEFT for the three-generation case), one needs $2^{6-1}=32$ linearly-independent CP-odd invariants to linearly expand any CP-violating observable in the most general case. However, as we have shown in Sec.~\ref{subsec:conditions3g}, the vanishing of only 6 CP-odd invariants is sufficient to guarantee the absence of CP violation, which is equivalent to the vanishing of all CP-violating observables. The point is that the vanishing of \emph{some} CP-odd flavor invariants can reduce the number of independent phases. In our case, there are 6 independent phases $\left\{\alpha_{12}^{}, \alpha_{23}^{}, \alpha_{31}^{}, \beta_{12}^{}, \beta_{23}^{}, \beta_{31}^{}\right\}$ at the beginning. The vanishing of $\left\{{\cal J}_{121}^{}, {\cal J}_{141}^{}, {\cal J}_{161}^{}\right\}$ in Eqs.~(\ref{eq:j121})-(\ref{eq:j161}) enforces $\alpha_{ij}^{}=\beta_{ij}^{}+k\pi$ and thus reduces the number of independent phases to 3. In addition, the vanishing of ${\cal J}_{360}^{}$ in Eq.~(\ref{eq:j360}) leads to $\alpha_{12}^{}+\alpha_{23}^{}+\alpha_{31}^{}=k\pi$ and eliminates one more phase. Therefore, if ${\cal J}_{121}^{}={\cal J}_{141}^{}={\cal J}_{161}^{}={\cal J}_{360}^{}=0$ is satisfied, there remain only 2 independent phases in the theory. Under this condition, any nonzero CP-violating observable can be written as a linear combination of ${\cal J}_{240}^{(2)}$ and ${\cal J}_{260}^{}$, and the further vanishing of ${\cal J}_{240}^{(2)}$ and ${\cal J}_{260}^{}$ will lead to the vanishing of all CP-violating observables in the theory.
This explains from another point of view why the vanishing of $2^{6-1-4}_{}+4=6$ CP-odd flavor invariants can serve as the minimal sufficient and necessary conditions for leptonic CP conservation in the SEFT for the three-generation case.
Below we will discuss some concrete CP-violating processes and explain how to write the CP-violating observables into the form of Eq.~(\ref{eq:observable decomposition}).
\subsection{Neutrino-neutrino oscillations}
After the SM gauge symmetry is spontaneously broken, the Wilson coefficient matrix $C_5^{}$ gives the Majorana mass term of light neutrinos while the Wilson coefficient matrix $C_6^{}$ contributes to the unitarity violation of the flavor mixing matrix in the leptonic sector. The effective Lagrangian governing the lepton mass spectra, flavor mixing, and charged-current interaction together with the kinetic term of neutrinos after the gauge symmetry breaking reads
\begin{eqnarray}
\label{eq:Lagrangian eff after SSB}
{\cal L}_{\rm eff}^{}=\overline{\nu_{\rm L}^{}}\,{\rm i}\slashed{\partial}\,{\cal K}\nu_{\rm L}^{}- \left[\frac{1}{2} \overline{\nu_{\rm L}^{}}M_\nu^{}\nu_{\rm L}^{\rm C} + \overline{l_{\rm L}^{}}M_l^{}l_{\rm R}^{} - \frac{g}{\sqrt{2}}\overline{l_{\rm L}^{}}\gamma_{}^\mu\nu_{\rm L}^{}W_\mu^{-}+{\rm h.c.} \right]\;,
\end{eqnarray}
where ${\cal K}=1+v_{}^2 C_6^{}/\left(2\Lambda_{}^2\right)$, $M_\nu^{}=v_{}^2 C_5^{}/\left(2\Lambda\right)$, $M_l^{}=vY_l^{}/\sqrt{2}$ and $g$ is the gauge coupling constant of the ${\rm SU}(2)_{\rm L}^{}$ group. Recall that we work in the basis where $C_5^{}$ is real and diagonal, so $M_\nu^{}=\widehat{M}_\nu^{}={\rm Diag}\left\{m_1^{},m_2^{},m_3^{}\right\}$. In order to normalize the kinetic term of neutrinos, one should rescale the neutrino fields as
$\nu_{\rm L}\to \nu_{\rm L}^{\prime}={\cal K}_{}^{1/2}\nu_{\rm L}^{}$, which will modify the neutrino mass matrix as
\begin{eqnarray}
M_\nu^{}\to M_\nu^\prime={\cal K}_{}^{-1/2}\widehat{M}_\nu^{}\left({\cal K}_{}^{\rm T}\right)_{}^{-1/2}=\widehat{M}_\nu^{}-\frac{v^2}{4\Lambda^2}\left(C_6^{}\widehat{M}_\nu^{}+\widehat{M}_\nu^{}C_6^{\rm T}\right)\;.
\end{eqnarray}
However, since $\widehat{M}_\nu^{}$ itself is of order ${\cal O}\left(1/\Lambda\right)$, the difference between $M_\nu^\prime$ and $\widehat{M}_\nu^{}$ is of order ${\cal O}\left(1/\Lambda_{}^3\right)$ and can thus be neglected when working to ${\cal O}\left(1/\Lambda^2_{}\right)$. Hence we have $M_\nu^\prime=\widehat{M}_\nu^{}$. The next step is to diagonalize the charged-lepton mass matrix via $V M_l^{}V_{}^{\prime \dagger}=\widehat{M}_l^{}={\rm Diag}\left\{m_e^{},m_\mu^{},m_\tau^{}\right\}$ with $V$ and $V_{}^\prime$ being unitary matrices and change the basis of charged-lepton fields $l_{\rm L}^{}\to l_{\rm L}^\prime=Vl_{\rm L}^{}$, $l_{\rm R}^{}\to l_{\rm R}^\prime=V_{}^{\prime}l_{\rm R}^{}$; then the Lagrangian in the mass basis is given by
\begin{eqnarray}
\label{eq:Lagrangian eff after SSB2}
{\cal L}_{\rm eff}^{}=\overline{\nu_{\rm L}^{\prime}}\,{\rm i}\slashed{\partial}\nu_{\rm L}^{\prime} - \left[ \frac{1}{2}\overline{\nu_{\rm L}^{\prime}}\widehat{M}_\nu^{}\nu_{\rm L}^{\prime \rm C} + \overline{l_{\rm L}^{\prime}}\widehat{M}_l^{}l_{\rm R}^{\prime} - \frac{g}{\sqrt{2}}\overline{l_{\rm L}^{\prime}}\gamma_{}^\mu V {\cal K}_{}^{-1/2}\nu_{\rm L}^{\prime}W_\mu^{-}+{\rm h.c.} \right]\;.
\end{eqnarray}
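The first-order expansion ${\cal K}_{}^{-1/2}\simeq 1-v^2C_6^{}/(4\Lambda^2)$ underlying the rescaling above can be checked numerically. Below is a minimal sketch, where the $3\times 3$ size, the random Hermitian stand-in for $C_6^{}$, and the small parameter $\epsilon$ (playing the role of $v^2/(2\Lambda^2)$) are illustrative assumptions:

```python
import numpy as np

# Verify (1 + eps*C)^(-1/2) = 1 - (eps/2)*C + O(eps^2) for Hermitian C,
# i.e. K^{-1/2} ~ 1 - v^2 C6/(4 Lambda^2) with eps standing in for v^2/(2 Lambda^2).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
C = (A + A.conj().T)/2                      # Hermitian stand-in for C6
eps = 1e-4
K = np.eye(3) + eps*C                       # K is Hermitian and positive definite

w, U = np.linalg.eigh(K)                    # exact K^{-1/2} via eigendecomposition
K_inv_half = U @ np.diag(w**-0.5) @ U.conj().T
approx = np.eye(3) - 0.5*eps*C              # first-order expansion

err = np.linalg.norm(K_inv_half - approx)   # residual is O(eps^2)
print(err)
```

The residual is of order $\epsilon^2$, i.e., ${\cal O}(1/\Lambda^4)$, consistent with neglecting such terms in the text.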
From Eq.~(\ref{eq:Lagrangian eff after SSB2}) we obtain the non-unitary flavor mixing matrix
\begin{eqnarray}
V_{\rm eff}^{}=V {\cal K}_{}^{-1/2}=V\left(1-\frac{v^2}{4\Lambda^2}C_6^{}\right)\;,
\end{eqnarray}
which violates the unitarity to the order of ${\cal O}\left(1/\Lambda_{}^2\right)$. The unitarity violation will contribute to the CP asymmetries in neutrino-neutrino oscillations
\begin{eqnarray}
\label{eq:CP asymmetry in neutrino oscillation def}
{\cal A}_{\nu\nu}^{\alpha\beta}\equiv \frac{{\rm P}\left(\nu_\alpha\to\nu_\beta\right)-{\rm P}\left(\bar{\nu}_\alpha\to\bar{\nu}_\beta\right)}{{\rm P}\left(\nu_\alpha\to\nu_\beta\right)+{\rm P}\left(\bar{\nu}_\alpha\to\bar{\nu}_\beta\right)}\;,
\end{eqnarray}
where ${\rm P}(\nu_\alpha^{}\to\nu_\beta^{})$ denotes the transition probability from $\nu_\alpha^{}$ to $\nu_\beta^{}$ while ${\rm P}(\bar{\nu}_\alpha^{}\to\bar{\nu}_\beta^{})$ denotes that of its CP-conjugate process. The CP asymmetries in Eq.~(\ref{eq:CP asymmetry in neutrino oscillation def}) are found to be~\cite{Fernandez-Martinez:2007iaa, Xing:2007zj, Goswami:2008mi, Antusch:2009pm}
\begin{eqnarray}
\label{eq:CP asymmetry in neutrino oscillation}
{\cal A}_{\nu\nu}^{\alpha\beta}=\frac{2\sum_{i<j}{\rm Im}\left(Q_{\alpha\beta}^{ij}\right)\sin2\Delta_{ji}^{}}{\delta_{\alpha\beta}-4\sum_{i<j}{\rm Re}\left(Q_{\alpha\beta}^{ij}\right)\sin^2\Delta_{ji}}\;,
\end{eqnarray}
where $Q_{\alpha\beta}^{ij}\equiv \left(V_{\rm eff}^{}\right)_{\alpha i}^{}\left(V_{\rm eff}^{}\right)_{\beta j}^{}\left(V_{\rm eff}^{}\right)_{\alpha j}^{*}\left(V_{\rm eff}^{}\right)_{\beta i}^{*}$ and $\Delta_{ji}^{}\equiv \left(m_j^2-m_i^2\right)L/(4E)$ have been defined, with $L$ and $E$ being respectively the propagation distance and the neutrino beam energy. It is clear that Eq.~(\ref{eq:CP asymmetry in neutrino oscillation}) is nonvanishing only for $\alpha\neq \beta$, as a consequence of the CPT theorem. In particular, for the two-generation case, ${\cal A}_{\nu\nu}^{\alpha\beta}$ is nonzero for $\alpha\neq \beta$. This is contrary to the result in the unitary limit (i.e., $\Lambda\to \infty$), where CP violation is absent in neutrino oscillations with only two flavors. In fact, we have
\begin{eqnarray}
\label{eq:CP asymmetry in neutrino oscillation2}
{\cal A}_{\nu\nu}^{\mu e}=-{\cal A}_{\nu\nu}^{e\mu}=\frac{v^2}{\Lambda^2}\frac{b_{12}}{\sin2\theta}\cot\left(\Delta^{}_{21}\right)\sin\left(\beta+\phi\right)\;,
\end{eqnarray}
which is nonvanishing though suppressed by $v_{}^2/\Lambda_{}^2$. Note that we have used the parametrization of $C_6^{}$ and $V$ in Eqs.~(\ref{eq:parametrization of C6 2g}) and (\ref{eq:parametrization of V 2g}). It is then interesting to rewrite the result in Eq.~(\ref{eq:CP asymmetry in neutrino oscillation2}) in terms of flavor invariants, in a form independent of the parametrization and the flavor basis. This can be achieved by using Eqs.~(\ref{eq:extract C6 2g 2})-(\ref{eq:extract neutrino mass 2g}) and Eqs.~(\ref{eq:extract chargd-lepton mass 2g})-(\ref{eq:extract theta}), and recalling Eq.~(\ref{eq:i1212}). Finally one arrives at
\begin{eqnarray}
\label{eq:neutrino oscillation}
{\cal A}_{\nu\nu}^{e\mu}=-\frac{v^2}{\Lambda^2}\cot\left(\Delta^{}_{21}\right){\cal F}_{\nu\nu}^{e\mu}\left[{\cal I}_{100}^{},{\cal I}_{200}^{},{\cal I}_{020}^{},{\cal I}_{120}^{},{\cal I}_{040}^{}\right]\,{\cal I}_{121}^{(2)}\;,
\end{eqnarray}
where
\begin{eqnarray}
{\cal F}_{\nu\nu}^{e\mu}\left[{\cal I}_{100}^{},{\cal I}_{200}^{},{\cal I}_{020}^{},{\cal I}_{120}^{},{\cal I}_{040}^{}\right]=\frac{\left(2{\cal I}_{040}-{\cal I}_{020}^2\right)^{1/2}\left(2{\cal I}_{200}-{\cal I}_{100}^2\right)^{1/2}}{{\cal I}_{040}\left(2{\cal I}_{200}-{\cal I}_{100}^2\right)-2{\cal I}_{120}^{}\left({\cal I}_{120}-{\cal I}_{020}{\cal I}_{100}\right)-{\cal I}_{020}^2{\cal I}_{200}}\;,
\end{eqnarray}
and
\begin{eqnarray}
\Delta_{21}^{}= \frac{L}{4E}\left(m_2^2-m_1^2\right)=\frac{Lv^4}{16E\Lambda^2}\left(2{\cal I}_{040}^{}-{\cal I}_{020}^2\right)_{}^{1/2}\;.
\end{eqnarray}
Thus we have successfully recast the CP asymmetries in two-flavor neutrino oscillations into the form of Eq.~(\ref{eq:observable decomposition}), which is linearly proportional to the unique CP-odd flavor invariant ${\cal I}_{121}^{(2)}$ with the coefficient being a function of only CP-even primary flavor invariants.
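As an independent cross-check of the two-flavor result above, one can evaluate the CP asymmetry directly from the oscillation amplitudes with a toy non-unitary mixing matrix. In the sketch below all numerical values are illustrative (not fits to data), and the Hermitian matrix $C$ stands in for $v^2C_6^{}/(4\Lambda^2)$; the asymmetry vanishes identically in the unitary limit and becomes nonzero, at linear order in the unitarity violation, otherwise:

```python
import numpy as np

theta, Delta21 = 0.6, 0.7     # mixing angle and Delta_21 = (m2^2 - m1^2)L/(4E), toy values

def A_emu(eps, phase=0.8):
    """CP asymmetry A^{e mu} for V_eff = V(1 - eps*C), a toy non-unitary 2x2 mixing."""
    V = np.array([[np.cos(theta),  np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]], dtype=complex)
    C = np.array([[0.0, np.exp(1j*phase)],
                  [np.exp(-1j*phase), 0.0]])        # Hermitian stand-in for C6
    U = V @ (np.eye(2) - eps*C)
    ph = np.exp(-2j*np.array([0.0, Delta21]))       # e^{-i m_i^2 L/(2E)} up to a global phase

    def P(Umix, a, b):                              # probability P(nu_a -> nu_b)
        return abs(np.sum(Umix[b, :]*ph*np.conj(Umix[a, :])))**2

    Pnu, Pbar = P(U, 0, 1), P(U.conj(), 0, 1)       # antineutrinos: U -> U*
    return (Pnu - Pbar)/(Pnu + Pbar)

print(A_emu(0.0))    # unitary two-flavor limit: exactly zero
print(A_emu(1e-2))   # non-unitary: small but nonzero
```

The linear scaling of the asymmetry with the unitarity-violating parameter reflects the $v^2/\Lambda^2$ suppression noted in the text.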
\subsection{Neutrino-antineutrino oscillations}
Next we take the example of neutrino-antineutrino oscillations. The CP asymmetries are defined by
\begin{eqnarray}
\label{eq:CP asymmetry in neutrino-antineutrino oscillation def}
{\cal A}_{\nu\bar{\nu}}^{\alpha\beta}\equiv \frac{{\rm P}\left(\nu_\alpha\to \bar{\nu}_\beta\right)-{\rm P}\left(\bar{\nu}_\alpha \to \nu_\beta\right)}{{\rm P}\left(\nu_\alpha\to \bar{\nu}_{\beta}\right)+{\rm P}\left(\bar{\nu}_\alpha\to \nu_\beta\right)}\;,
\end{eqnarray}
and can be immediately calculated as~\cite{Xing:2013ty, Xing:2013woa, Wang:2021rsi}
\begin{eqnarray}
{\cal A}_{\nu\bar{\nu}}^{\alpha\beta}=\frac{2\sum_{i<j}m_im_j\,{\rm Im}\left(\widetilde{Q}_{\alpha\beta}^{ij}\right)\sin2\Delta_{ji}^{}}{\left|\langle m \rangle_{\alpha\beta}\right|^2-4\sum_{i<j}m_im_j\,{\rm Re}\left(\widetilde{Q}_{\alpha\beta}^{ij}\right)\sin^2\Delta_{ji}^{}}\;,
\end{eqnarray}
where $\widetilde Q_{\alpha\beta}^{ij}\equiv \left(V_{\rm eff}^{}\right)_{\alpha i}^{}\left(V_{\rm eff}^{}\right)_{\beta i}^{}\left(V_{\rm eff}^{}\right)_{\alpha j}^{*}\left(V_{\rm eff}^{}\right)_{\beta j}^{*}$ and $\langle m \rangle_{\alpha\beta}\equiv \sum_i^{} m_i^{} \left(V_{\rm eff}\right)_{\alpha i}^{}\left(V_{\rm eff}\right)_{\beta i}^{}$ have been defined. For illustration, we consider the two-generation case and take $\alpha\neq\beta$. We will express the corresponding CP asymmetries in terms of flavor invariants. It is easy to show that
\begin{eqnarray}
{\cal A}_{\nu\bar{\nu}}^{e\mu}={\cal A}_{\nu\bar{\nu}}^{\mu e}=-\frac{2 m_1 m_2 \sin2\phi \sin 2\Delta_{21}}{m_1^2+m_2^2-2m_1 m_2 \cos2\phi\cos2\Delta_{21}}+{\cal O}\left(\frac{v^2}{\Lambda^2}\right)\;.
\end{eqnarray}
It should be noted that the parameters in $C_6^{}$ do not contribute at the leading order. Using Eqs.~(\ref{eq:extract neutrino mass 2g}) and (\ref{eq:extract chargd-lepton mass 2g})-(\ref{eq:extract phi}) and taking into account ${\cal I}_{240}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}G_{l5}^{}\right)$, one finally obtains, at the leading order,
\begin{eqnarray}
\label{eq:neutrino-antineutrino oscillation}
{\cal A}_{\nu\bar{\nu}}^{e\mu}={\cal F}_{\nu\bar{\nu}}^{e\mu}\left[{\cal I}_{100}^{},{\cal I}_{200}^{},{\cal I}_{020}^{},{\cal I}_{120}^{},{\cal I}_{220}^{},{\cal I}_{040}^{}\right]\,{\cal I}_{240}^{}\;,
\end{eqnarray}
where
\begin{eqnarray}
&&{\cal F}_{\nu\bar{\nu}}^{e\mu}\left[{\cal I}_{100}^{},{\cal I}_{200}^{},{\cal I}_{020}^{},{\cal I}_{120}^{},{\cal I}_{220}^{},{\cal I}_{040}^{}\right]\nonumber\\
&=&4\left(2{\cal I}_{040}^{}-{\cal I}_{020}^2\right)_{}^{1/2}\sin2\Delta_{21}^{}
\left\{{\cal I}_{020}^{}\left[2{\cal I}_{120}^{}\left({\cal I}_{100}^{}{\cal I}_{020}^{}-{\cal I}_{120}^{}\right)-{\cal I}_{040}^{}\left({\cal I}_{100}^2-2{\cal I}_{200}^{}\right)-{\cal I}_{020}^2{\cal I}_{200}^{}\right]\right.\nonumber\\
&&\left.+\cos2\Delta_{21}^{}\left[{\cal I}_{020}^{}\left({\cal I}_{020}^2{\cal I}_{100}^2+2{\cal I}_{120}^2-{\cal I}_{040}^{}{\cal I}_{100}^2\right)+4{\cal I}_{040}^{}\left({\cal I}_{100}^{}{\cal I}_{120}^{}-{\cal I}_{220}^{}\right)+2{\cal I}_{020}^2\right.\right.\nonumber\\
&&\left.\left.\times\left({\cal I}_{220}^{}-2{\cal I}_{100}^{}{\cal I}_{120}^{}\right)\right]\right\}^{-1}\;,
\end{eqnarray}
with
\begin{eqnarray*}
\Delta_{21}^{}=\frac{Lv^4}{16E\Lambda^2}\left(2{\cal I}_{040}^{}-{\cal I}_{020}^2\right)_{}^{1/2}\;.
\end{eqnarray*}
Thus we have also written the CP asymmetries in neutrino-antineutrino oscillations for the two-generation case into the form of Eq.~(\ref{eq:observable decomposition}). They are linearly proportional to the CP-odd flavor invariant ${\cal I}_{240}^{}$, and the coefficient is a function of only CP-even primary flavor invariants. Contrary to the case of neutrino-neutrino oscillations, ${\cal A}_{\nu\bar{\nu}}^{e\mu}$ is not suppressed by ${\cal O}\left(v_{}^2/\Lambda_{}^2\right)$. This is because the Majorana-type CP phase in the two-generation flavor mixing matrix already enters the CP asymmetries in neutrino-antineutrino oscillations.
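In the unitary limit, the closed two-flavor expression above can be cross-checked against the general $\widetilde{Q}$-sum formula. Below is a minimal numerical sketch, where the parametrization $V=O(\theta)\,{\rm Diag}\{1,e^{-{\rm i}\phi}\}$ and all parameter values are illustrative assumptions:

```python
import numpy as np

theta, phi, D21 = 0.5, 0.9, 0.8      # mixing angle, Majorana phase, Delta_21 (toy values)
m1, m2 = 0.3, 1.0                    # toy mass eigenvalues (arbitrary units)

c, s = np.cos(theta), np.sin(theta)
V = np.array([[c, s], [-s, c]]) @ np.diag([1.0, np.exp(-1j*phi)])
m = np.array([m1, m2])

def asym(a, b):
    """A_{nu nubar}^{ab} from the Qtilde-sum formula (two flavors, unitary V)."""
    mm = np.sum(m*V[a, :]*V[b, :])                   # effective mass <m>_{ab}
    Qt = V[a, 0]*V[b, 0]*np.conj(V[a, 1]*V[b, 1])    # Qtilde^{12}_{ab}
    num = 2*m1*m2*np.imag(Qt)*np.sin(2*D21)
    den = abs(mm)**2 - 4*m1*m2*np.real(Qt)*np.sin(D21)**2
    return num/den

closed = -2*m1*m2*np.sin(2*phi)*np.sin(2*D21)/(
    m1**2 + m2**2 - 2*m1*m2*np.cos(2*phi)*np.cos(2*D21))
print(asym(0, 1), closed)            # the two expressions agree
```

Note that the mixing angle $\theta$ cancels in the ratio, consistent with the closed form being independent of $\theta$.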
As has been mentioned in Sec.~\ref{subsec:conditions2g}, any CP-violating observable in the two-generation SEFT can be written as a linear combination of ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$ into the form of Eq.~(\ref{eq:observable decomposition}) with $j_{\rm max}^{}=2$. This can be realized by noticing that one can linearly extract $\tan\left(\phi/2\right)$ and $\tan\left(\beta/2\right)$ using ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$ with the coefficients being functions of CP-even invariants, and that any CP-violating observable in the two-generation SEFT must be proportional to $\tan\left(\phi/2\right)$ or $\tan\left(\beta/2\right)$. Since ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$ are respectively responsible for the CP asymmetries in neutrino-neutrino and neutrino-antineutrino oscillations, we draw the conclusion that ${\cal A}_{\nu\nu}^{e\mu}$ and ${\cal A}_{\nu\bar{\nu}}^{e\mu}$ already contain all the information about leptonic CP violation in the two-generation SEFT.
\subsection{Observables in the three-generation case}
In this subsection we discuss how to express CP-violating observables in terms of flavor invariants in the SEFT for the three-generation case. As explained above, for a theory with 6 independent phases there are $2_{}^{6-1}=32$ linearly-independent monomials, and one needs at least $32$ CP-odd basic flavor invariants to linearly expand all possible CP-violating observables in the most general case. However, since we are working in the effective theory, any physical observable suppressed by more than ${\cal O}\left(1/\Lambda_{}^2\right)$ should be neglected. So we do not need to consider observables that are proportional to monomials containing more than one power of $\tilde{x}_{ij}^{}$, such as $x_{12}^{}\tilde{x}_{23}^{}\tilde{x}_{31}^{}$, $\tilde{x}_{12}^{}\tilde{x}_{23}^{}\tilde{x}_{31}^{}$ and so on,
where we have defined $x_{ij}^{}\equiv\tan\left(\alpha_{ij}^{}/2\right)$ and $\tilde{x}_{ij}^{}\equiv\tan\left(\beta_{ij}^{}/2\right)$.\footnote{A brief comment on the power counting of ${\cal O}(1/\Lambda)$ is helpful. One may wonder whether neutrino masses of ${\cal O}(v^2/\Lambda)$ could appear in the denominator and thus break the rule of power counting. For any CP-violating process, it is always possible to define a dimensionless working CP-violating observable, in which the powers of $v^2_{}/\Lambda$ from the neutrino mass matrix $M_\nu^{}$ in the numerator and in the denominator cancel each other [e.g., the definitions of the CP asymmetries in Eqs.~(\ref{eq:CP asymmetry in neutrino oscillation def}) and (\ref{eq:CP asymmetry in neutrino-antineutrino oscillation def})]. This is because $M_\nu^{}$, in the mass basis that we have chosen, contains only neutrino mass eigenvalues and does not account for CP violation, so its overall scale $v^2_{}/\Lambda$ can always be factorized out in describing CP-violating processes. Therefore, the neutrino mass matrix cannot affect the power counting in CP-violating observables.} Therefore, we are left with only 16 possible monomials to the order of ${\cal O}\left(1/\Lambda_{}^2\right)$:
\begin{itemize}
\item 4 monomials not suppressed: $x_{12}^{}$, $x_{23}^{}$, $x_{31}^{}$, $x_{12}^{}x_{23}^{}x_{31}^{}$\;;
\item 12 monomials suppressed by ${\cal O}\left(1/\Lambda_{}^2\right)$: $\tilde{x}_{12}^{}$, $\tilde{x}_{23}^{}$, $\tilde{x}_{31}^{}$, $x_{12}^{}x_{23}^{}\tilde{x}_{12}^{}$, $x_{12}^{}x_{23}^{}\tilde{x}_{23}^{}$, $x_{12}^{}x_{23}^{}\tilde{x}_{31}^{}$,\\
$x_{23}^{}x_{31}^{}\tilde{x}_{12}^{}$, $x_{23}^{}x_{31}^{}\tilde{x}_{23}^{}$, $x_{23}^{}x_{31}^{}\tilde{x}_{31}^{}$,
$x_{31}^{}x_{12}^{}\tilde{x}_{12}^{}$, $x_{31}^{}x_{12}^{}\tilde{x}_{23}^{}$, $x_{31}^{}x_{12}^{}\tilde{x}_{31}^{}$\;.
\end{itemize}
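The counting above can be verified mechanically by identifying the $2^{6-1}=32$ monomials with the odd-size subsets of the six tangents $\{x_{12}^{},x_{23}^{},x_{31}^{},\tilde{x}_{12}^{},\tilde{x}_{23}^{},\tilde{x}_{31}^{}\}$ and keeping those with at most one $\tilde{x}$ factor; a short sketch:

```python
from itertools import combinations

xs = ["x12", "x23", "x31"]            # tan(alpha_ij/2), unsuppressed
ts = ["t12", "t23", "t31"]            # tan(beta_ij/2), each suppressed by 1/Lambda^2

# All odd-size products of the six tangents: C(6,1) + C(6,3) + C(6,5) = 32
monos = [set(c) for k in (1, 3, 5) for c in combinations(xs + ts, k)]
assert len(monos) == 2**(6 - 1)

# Keep monomials with at most one power of t, i.e. at most O(1/Lambda^2)
kept = [m for m in monos if len(m & set(ts)) <= 1]
unsuppressed = [m for m in kept if not m & set(ts)]

print(len(kept), len(unsuppressed))   # 16 monomials in total, 4 of them unsuppressed
```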
We now demonstrate that these 16 monomials can indeed be linearly extracted using 16 CP-odd basic flavor invariants in the SEFT.
First, the 4 monomials not suppressed can be linearly extracted using 4 invariants not containing $C_6^{}$: $x_{ij}^{}$ (for $ij=12,23,31$) can be extracted using ${\cal J}_{240}^{(2)}$, ${\cal J}_{260}^{}$, ${\cal J}_{280}^{}$ which only involve $\sin2\alpha_{ij}^{}$ while $x_{12}^{}x_{23}^{}x_{31}^{}$ can be extracted using ${\cal J}_{360}^{}$ in which $\sin\left(\alpha_{12}^{}+\alpha_{23}^{}+\alpha_{31}^{}\right)$ is involved. Similarly, $\tilde{x}_{ij}^{}$ can be extracted using invariants only involving $\sin2\beta_{ij}^{}$, namely, ${\cal J}_{042}^{(2)}$, ${\cal J}_{062}^{}$ and ${\cal J}_{082}^{}$. Then the 3 cyclic monomials $x_{12}^{}x_{23}^{}\tilde{x}_{31}^{}$, $x_{23}^{}x_{31}^{}\tilde{x}_{12}^{}$ and $x_{31}^{}x_{12}^{}\tilde{x}_{23}^{}$ can be determined using the following 3 invariants involving $\sin\left(\alpha_{12}^{}+\alpha_{23}^{}+\beta_{31}^{}\right)$, $\sin\left(\alpha_{23}^{}+\alpha_{31}^{}+\beta_{12}^{}\right)$ and $\sin\left(\alpha_{31}^{}+\alpha_{12}^{}+\beta_{23}^{}\right)$:
\begin{eqnarray}
{\cal J}_{221}^{}&\equiv& {\rm Im}\,{\rm Tr}\left(X_l^2X_5^{}C_6^{}\right)\;,\\
{\cal J}_{241}^{}&\equiv& {\rm Im}\,{\rm Tr}\left(X_l^2X_5^{2}C_6^{}\right)\;,\\
{\cal J}_{261}^{}&\equiv& {\rm Im}\,{\rm Tr}\left(X_l^2X_5^{2}C_6^{}X_5^{}\right)\;.
\end{eqnarray}
Finally, the remaining 6 non-cyclic monomials $x_{12}^{}x_{23}^{}\tilde{x}_{12}^{}$, $x_{12}^{}x_{23}^{}\tilde{x}_{23}^{}$, $x_{23}^{}x_{31}^{}\tilde{x}_{23}^{}$, $x_{23}^{}x_{31}^{}\tilde{x}_{31}^{}$,
$x_{31}^{}x_{12}^{}\tilde{x}_{12}^{}$, and $x_{31}^{}x_{12}^{}\tilde{x}_{31}^{}$
can be determined by using the following 6 invariants, where six sine functions of different phase combinations, i.e., $\sin\left(\alpha_{12}^{}+2\alpha_{23}^{}\pm\beta_{12}^{}\right)$, $\sin\left(2\alpha_{12}^{}+\alpha_{23}^{}\pm\beta_{23}^{}\right)$,
$\sin\left(\alpha_{23}^{}+2\alpha_{31}^{}\pm\beta_{23}^{}\right)$,
$\sin\left(2\alpha_{23}^{}+\alpha_{31}^{}\pm\beta_{31}^{}\right)$,
$\sin\left(2\alpha_{31}^{}+\alpha_{12}^{}\pm\beta_{12}^{}\right)$ and
$\sin\left(\alpha_{31}^{}+2\alpha_{12}^{}\pm\beta_{31}^{}\right)$, are present:
\begin{eqnarray}
{\cal J}_{321}^{(1)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^{}C_6^{}G_{l5}^{(2)}\right)\;,\\
{\cal J}_{341}^{(1)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_5^{}X_l^{}C_6^{}G_{l5}^{(2)}\right)\;,\\
{\cal J}_{361}^{(1)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_5^{2}X_l^{}C_6^{}G_{l5}^{(2)}\right)\;,\\
{\cal J}_{321}^{(2)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_l^2C_6^{}G_{l5}^{}\right)\;,\\
{\cal J}_{341}^{(2)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_5^{}X_l^2C_6^{}G_{l5}^{}\right)\;,\\
{\cal J}_{361}^{(2)}&\equiv&{\rm Im}\,{\rm Tr}\left(X_5^2X_l^2C_6^{}G_{l5}^{}\right)\;,
\end{eqnarray}
where we have defined $G_{l5}^{(2)}\equiv C_5^{}\left(X_l^*\right)_{}^{2}C_5^\dagger$.
To sum up, all the possible 16 monomials can be linearly extracted from the following 16 CP-odd basic flavor invariants
$$\left\{{\cal J}_{360}^{},{\cal J}_{240}^{(2)},{\cal J}_{260}^{},{\cal J}_{280}^{},{\cal J}_{042}^{(2)},{\cal J}_{062}^{},{\cal J}_{082}^{},{\cal J}_{221}^{},{\cal J}_{241}^{},{\cal J}_{261}^{},{\cal J}_{321}^{(1)},{\cal J}_{341}^{(1)},{\cal J}_{361}^{(1)},{\cal J}_{321}^{(2)},{\cal J}_{341}^{(2)},{\cal J}_{361}^{(2)}
\right\}$$
and thus any CP-violating observable in the SEFT for the three-generation case can be written as a linear combination of these 16 CP-odd flavor invariants into the form of Eq.~(\ref{eq:observable decomposition}) with $j_{\rm max}^{}=16$.
\section{Connection between the full theory and the effective theory}
\label{sec:matching}
In Secs.~\ref{sec:construction2g} and \ref{sec:construction3g} we have studied the algebraic structure of the invariant ring in the SEFT using the tool of invariant theory. In particular, we have explicitly constructed all the basic flavor invariants for the two-generation case and all the primary flavor invariants for the three-generation case. We have also shown that all the physical parameters in the theory can be extracted using primary flavor invariants. On the other hand, the algebraic structure of the invariant ring and the construction of flavor invariants in the full seesaw model have been partly studied in Refs.~\cite{Manohar:2009dy,Manohar:2010vu,Yu:2021cco}. An intriguing question is then how the invariant ring of the flavor space in the SEFT is connected to that in the full theory, and how the two sets of flavor invariants match each other.
\renewcommand\arraystretch{1.2}
\begin{table}[t!]
\centering
\begin{tabular}{l|c|c|c|c}
\hline\hline
{\bf Model} & {\bf Moduli} & {\bf Phases} & {\bf Physical parameters} & {\bf Primary invariants}\\
\hline
Two-generation SEFT & 8 & 2 & 10 & 10\\
\hline
Two-generation seesaw & 8 & 2 & 10 & 10\\
\hline
Three-generation SEFT & 15 & 6 & 21 & 21\\
\hline
Three-generation seesaw & 15 & 6 & 21 & 21\\
\hline\hline
\end{tabular}
\vspace{0.5cm}
\caption{\label{table:comparison} Comparison of the number of independent physical parameters in the theory and the number of primary invariants in the flavor space between the SEFT and the full seesaw model. Note that the moduli denote the parameters in the theory other than phases. The SEFT and the full seesaw model share exactly the same numbers of independent physical parameters and primary flavor invariants.}
\end{table}
\renewcommand\arraystretch{1.0}
In the full seesaw model introduced in Eq.~(\ref{eq:full lagrangian}), the building blocks for the construction of the flavor invariants are $Y_l^{}$, $Y_\nu^{}$ and $Y_{\rm R}^{}$, which transform in the flavor space as in Eq.~(\ref{eq:Yukawa trans}). Given the representations of the building blocks under the flavor-basis transformation, it is straightforward to calculate the HS, which encodes the information about the flavor structure of the full theory. The results of the HS for the two-generation and three-generation seesaw models are given in Eq.~(\ref{eq:HS seesaw 2g}) and Eq.~(\ref{eq:HS seesaw 3g}), respectively. The key observation is that the denominator of Eq.~(\ref{eq:HS seesaw 2g}) [or Eq.~(\ref{eq:HS seesaw 3g})] and that of Eq.~(\ref{eq:HS eff 2g main}) [or Eq.~(\ref{eq:HS eff 3g main})] have exactly the same number of factors, implying that there is an equal number of algebraically-independent invariants in the flavor space of the full theory and in that of the SEFT, i.e.,
$$
\boxed{
\text{\# primary invariants in SEFT}=\text{\# primary invariants in seesaw}
}
$$
Moreover, the number of independent physical parameters in the full seesaw model also matches that in the SEFT~\cite{Broncano:2002rw}, as summarized in Table~\ref{table:comparison}. This nontrivial correspondence implies that for type-I seesaw model, which only extends the SM by adding gauge singlets, only one $d=5$ and one $d=6$ operator are already \emph{adequate} to incorporate all physical information about the UV theory, including the sources of CP violation~\cite{Broncano:2002rw,Broncano:2003fq}.
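The parameter counting in Table~\ref{table:comparison} can be reproduced by a naive dimension count: the real dimension of the coupling space minus that of the flavor group, assuming the group action is generically free (an assumption that holds for the cases at hand). A minimal sketch:

```python
# Naive parameter counting: real dimension of the Yukawa/Wilson coupling
# space minus the real dimension of the flavor group acting on it.
def physical_params(n, full_theory):
    complex_mat = 2*n*n          # general complex n x n matrix
    sym_complex = n*(n + 1)      # complex symmetric n x n matrix
    hermitian = n*n              # Hermitian n x n matrix
    u_n = n*n                    # real dimension of the group U(n)
    if full_theory:              # seesaw: Y_l, Y_nu complex; Y_R symmetric; group U(n)^3
        return 2*complex_mat + sym_complex - 3*u_n
    else:                        # SEFT: Y_l complex; C5 symmetric; C6 Hermitian; group U(n)^2
        return complex_mat + sym_complex + hermitian - 2*u_n

for n in (2, 3):
    print(n, physical_params(n, True), physical_params(n, False))
# n = 2: both counts give 10;  n = 3: both counts give 21
```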
\renewcommand\arraystretch{1.2}
\begin{table}[t!]
\centering
\begin{tabular}{l|c|c}
\hline \hline
flavor invariants & degree & CP parity \\
\hline \hline
$I_{200}^{}\equiv {\rm Tr}\left(X_l^{}\right)$ & 2 & + \\
\hline
$I_{020}^{}\equiv {\rm Tr}\left(X_\nu^{}\right)$ & 2 & +\\
\hline
$I_{002}^{}\equiv {\rm Tr}\left(X_{\rm R}^{}\right)$ & 2 &+\\
\hline
$I_{400}^{}\equiv {\rm Tr}\left(X_l^2\right)$ & 4 &+\\
\hline
$I_{220}^{}\equiv {\rm Tr}\left(X_l^{}X_\nu^{}\right)$ & 4 &+\\
\hline
$I_{040}^{}\equiv {\rm Tr}\left(X_\nu^2\right)$ & 4 &+\\
\hline
$I_{022}^{}\equiv {\rm Tr}\left(\widetilde{X}_\nu^{}X_{\rm R}^{}\right)$ & 4 & $+$\\
\hline
$I_{004}^{}\equiv {\rm Tr}\left(X_{\rm R}^2\right)$ & 4 & $+$\\
\hline
$I_{222}^{}\equiv {\rm Tr}\left(X_{\rm R}^{}G_{l\nu}^{}\right)$ & 6 & $+$\\
\hline
$I_{042}^{}\equiv {\rm Tr}\left(\widetilde{X}_\nu^{}G_{\nu{\rm R}}^{}\right)$ & 6 & $+$\\
\hline
$I_{242}^{(1)}\equiv {\rm Tr}\left(G_{l\nu}^{}G_{\nu{\rm R}}^{}\right)$ & 8 & $+$\\
\hline
$I_{242}^{(2)}\equiv {\rm Im}\,{\rm Tr}\left(\widetilde{X}_\nu^{}X_{\rm R}^{}G_{l\nu}^{}\right)$ & 8 & $-$\\
\hline
$I_{044}^{}\equiv {\rm Im}\,{\rm Tr}\left(\widetilde{X}_\nu^{}X_{\rm R}^{}G_{\nu{\rm R}}^{}\right)$ & 8 & $-$\\
\hline
$I_{442}^{}\equiv {\rm Tr}\left(G_{l\nu}^{}G_{l\nu{\rm R}}^{}\right)$ & 10 & $+$\\
\hline
$I_{262}^{}\equiv {\rm Im}\,{\rm Tr}\left({\widetilde X}_\nu^{}G_{l\nu}^{}G_{\nu{\rm R}}^{}\right)$ & 10 & $-$\\
\hline
$I_{244}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_{\rm R}^{}G_{l\nu}^{}G_{\nu{\rm R}}^{}\right)$ & 10 & $-$\\
\hline
$I_{462}^{}\equiv {\rm Im}\,{\rm Tr}\left(\widetilde{X}_\nu^{}G_{l\nu}^{}G_{l\nu{\rm R}}^{}\right)$ & 12 & $-$\\
\hline
$I_{444}^{}\equiv {\rm Im}\,{\rm Tr}\left(X_{\rm R}^{}G_{l\nu}^{}G_{l\nu{\rm R}}^{}\right)$ & 12 & $-$\\
\hline
\hline
\end{tabular}
\vspace{0.5cm}
\caption{\label{table:2g seesaw}Summary of the basic flavor invariants along with their degrees and CP parities in the full seesaw model for two-generation case, where the subscripts of the invariants denote the degrees of $Y_l^{}$, $Y_\nu^{}$ and $Y_{\rm R}^{}$, respectively. Note that we have also defined some building blocks that transform adjointly under the flavor transformation: $X_l^{}\equiv Y_l^{}Y_l^\dagger$, $X_\nu^{}\equiv Y_\nu^{} Y_\nu^\dagger$, $\widetilde{X}_\nu^{}\equiv Y_\nu^\dagger Y_\nu^{}$, $X_{\rm R}^{}\equiv Y_{\rm R}^\dagger Y_{\rm R}^{}$, $G_{l\nu}^{}\equiv Y_\nu^\dagger X_l^{} Y_\nu^{}$, $G_{\nu{\rm R}}^{}\equiv Y_{\rm R}^{\dagger} \widetilde{X}_\nu^* Y_{\rm R}^{}$ and $G_{l\nu{\rm R}}^{}\equiv Y_{\rm R}^\dagger G_{l\nu}^*Y_{\rm R}^{}$. There are in total 12 CP-even and 6 CP-odd basic invariants in the invariant ring of the flavor space.}
\end{table}
\renewcommand\arraystretch{1}
This point can be seen more clearly from the basic invariants. We take the two-generation case for illustration. With the help of Eqs.~(\ref{eq:HS seesaw 2g}) and (\ref{eq:PL seesaw 2g}) one can explicitly construct all the basic flavor invariants in the full theory, as listed in Table~\ref{table:2g seesaw}. Remarkably, there are exactly the same numbers of CP-odd and CP-even basic invariants in Table~\ref{table:2g eff} and Table~\ref{table:2g seesaw}, namely 6 and 12, respectively, i.e.,
$$
\boxed{
\text{\# CP-odd (-even) basic invariants in SEFT}=\text{\# CP-odd (-even) basic invariants in seesaw}
}
$$
Recalling that the basic invariants serve as the generators of the invariant ring, in the sense that any flavor invariant in the ring can be decomposed as a polynomial of the basic ones, we reach the conclusion that the invariant ring of the SEFT and that of the full seesaw model share an equal number of generators. One can then establish a direct connection between the two sets of generators by noticing that the building blocks $C_5^{}$ and $C_6^{}$ in the SEFT are related to the building blocks $Y_\nu^{}$ and $Y_{\rm R}^{}$ in the full theory by Eq.~(\ref{eq:wilson coe}). In Appendix~\ref{app:matching} we give the details of the matching procedure, and the final conclusion is: \emph{All the basic invariants in the SEFT can be written as rational functions of those in the full seesaw model.}\footnote{This result has been partly derived in Ref.~\cite{Yu:2021cco} for the minimal seesaw model, but the inclusion of the dimension-six operator as well as a complete matching is lacking therein.}
For instance, the 18 basic flavor invariants in the SEFT have been explicitly expressed as the rational functions of the 18 basic flavor invariants in the full seesaw model in Eqs.~(\ref{eq:odd1 app})-(\ref{eq:even 12 app}) for the two-generation case. Moreover, one can establish a \emph{one-to-one} correspondence between the 6 CP-odd basic invariants in the SEFT and the 6 CP-odd basic invariants in the full theory (see Appendix~\ref{app:matching} for more details)
{\allowdisplaybreaks
\begin{eqnarray}
{\cal I}_{121}^{(2)}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{242}^{(2)}I_{022}^{}-I_{044}^{}I_{220}^{}+I_{262}^{}I_{002}^{}-I_{244}^{}I_{020}^{}\right]\;,\label{eq:odd1}\\
{\cal I}_{221}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{242}^{(2)}I_{222}^{}+I_{244}^{}I_{220}^{}+I_{462}^{}I_{002}^{}-I_{444}^{}I_{020}^{}\right]\;,\label{eq:odd2}\\
{\cal I}_{122}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^3}\left\{I_{242}^{(2)}\left[3I_{022}^2+2I_{040}^{}\left(I_{002}^2-I_{004}^{}\right)-4I_{020}^{}I_{002}^{}I_{022}^{}\right]\right.\nonumber\\
&&\left.+I_{044}^{}\left(4I_{020}^{}I_{222}^{}-I_{220}^{}I_{022}^{}-2I_{242}^{(1)}\right)+I_{262}^{}\left[3I_{002}^{}I_{022}^{}-I_{020}^{}\left(I_{002}^2+3I_{004}^{}\right)\right]\right.\nonumber\\
&&\left.+I_{244}^{}\left(3I_{020}^{}I_{022}^{}-2I_{042}^{}\right)\right\}\;,\label{eq:odd3}\\
{\cal I}_{240}^{}&=&\frac{1}{\left(I_{002}^2-I_{004}\right)^2}\left[3I_{242}^{(2)}\left(I_{022}^{}I_{220}^{}-I_{020}^{}I_{222}^{}\right)-I_{044}^{}I_{220}^2+I_{262}^{}\left(3I_{002}^{}I_{220}^{}-2I_{222}^{}\right)\right.\nonumber\\
&&\left.-2 I_{244}^{}I_{020}^{}I_{220}^{}+I_{462}^{}\left(2I_{022}^{}-3I_{002}^{}I_{020}\right)+I_{444}^{}I_{020}^2\right]\;,\label{eq:odd4}\\
{\cal I}_{141}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^3}\left\{I_{242}^{(2)}I_{020}^{}I_{022}^2+I_{044}^{}I_{020}^{}\left(I_{022}^{}I_{220}^{}-2I_{242}^{(1)}\right)\right.\nonumber\\
&&\left.+I_{262}^{}\left[I_{002}^{}I_{020}^{}I_{022}^{}+I_{040}^{}\left(I_{004}^{}-I_{002}^2\right)\right]+I_{244}^{}I_{020}^{}\left(I_{020}^{}I_{022}-2I_{042}^{}\right)\right\}\;,\label{eq:odd5}\\
{\cal I}_{042}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^3}\,I_{044}^{}\left(I_{020}^2-I_{040}^{}\right)_{}^2\;.\label{eq:odd6}
\end{eqnarray}
}
Note that Eqs.~(\ref{eq:odd1})-(\ref{eq:odd6}) form a system of \emph{linear} equations with respect to the CP-odd invariants, and the determinant of the coefficient matrix turns out to be
\begin{eqnarray}
\label{eq:det}
{\rm Det}&=&\frac{128}{\left(I_{002}^2-I_{004}^{}\right)^{14}}\,I_{020}^{}\left(I_{002}^{}I_{020}^{}-I_{022}^{}\right)\left(I_{020}^2-I_{040}^{}\right)_{}^2\nonumber\\
&&\times\left\{I_{020}^2I_{022}^{}\left(3I_{020}^{}I_{022}^{}-4I_{002}^{}I_{040}^{}-3I_{042}^{}\right)-I_{022}^{}I_{040}^{}I_{042}^{}\right.\nonumber\\
&&\left.+ I_{020}^{}I_{040}^{}\left[3I_{022}^2+2I_{002}^{}I_{042}^{}+I_{040}^{}\left(I_{002}^2-I_{004}^{}\right)\right]\right\}\;,
\end{eqnarray}
which is nonzero in general. This implies that the vanishing of all CP-odd flavor invariants in the SEFT is equivalent to the vanishing of all CP-odd flavor invariants in the full seesaw model. Therefore, the absence of CP violation in the low-energy effective theory up to ${\cal O}\left(1/\Lambda_{}^2\right)$ is equivalent to CP conservation in the full theory.
Eqs.~(\ref{eq:odd1})-(\ref{eq:odd6}) can be implemented to link the CP violation at low energies and that for leptogenesis at high energies. For the purpose of illustration, we consider the (unflavored) CP asymmetries in the decays of RH Majorana neutrinos for the two-generation case, which are defined as
\begin{eqnarray}
\epsilon_i^{}\equiv \frac{\sum_\alpha \left[\Gamma\left(N_i^{}\to \ell_\alpha+H\right)-\Gamma\left(N_i^{}\to \overline{\ell_\alpha}+\overline{H}\right)\right]}{\sum_\alpha \left[\Gamma\left(N_i^{}\to \ell_\alpha+H\right)+\Gamma\left(N_i^{}\to \overline{\ell_\alpha}+\overline{H}\right)\right]}\;,
\end{eqnarray}
where $\Gamma\left(N_i^{}\to \ell_\alpha+H\right)$ and $\Gamma\left(N_i^{}\to \overline{\ell_\alpha}+\overline{H}\right)$ denote the decay rates of $N_i^{}\to \ell_\alpha+H$ and $N_i^{}\to \overline{\ell_\alpha}+\overline{H}$ (for $i=1,2$ and $\alpha=e,\mu$), respectively. In the basis where the charged-lepton and the RH neutrino mass matrices are real and diagonal, $\epsilon_i^{}$ can be calculated as~\cite{Xing:2011zza}
\begin{eqnarray}
\label{eq:epsiloni}
\epsilon_i^{}=\frac{1}{8\pi\left(\widetilde{X}_\nu^{}\right)_{ii}^{}}\sum_{j\neq i}^{}{\rm Im}\,\left[\left(\widetilde{X}_\nu^{}\right)_{ij}^2\right]F\left(\frac{Y_j^2}{Y_i^2}\right)\;,
\end{eqnarray}
with
$$
F\left(x\right)\equiv \sqrt{x}\left[\frac{2-x}{1-x}+\left(1+x\right)\ln\left(\frac{x}{1+x}\right)\right]\;.
$$
In terms of the flavor invariants one can recast the CP asymmetries into the form of Eq.~(\ref{eq:observable decomposition}) with the unique CP-odd flavor invariant $I_{044}^{}$
\begin{eqnarray}
\label{eq:epsiloni2}
\epsilon_{1,2}^{}=\frac{\sqrt{2}\, I_{044}}{4\pi\sqrt{I_{002}^2-I_{004}}\left(I_{020}I_{002}-2I_{022}\pm I_{020}\sqrt{2I_{004}-I_{002}^2}\right)}F\left(\frac{I_{002}\pm\sqrt{2I_{004}-I_{002}^2}}{I_{002}\mp\sqrt{2I_{004}-I_{002}^2}}\right)\;,
\end{eqnarray}
where the upper and lower signs refer respectively to $\epsilon_1^{}$ and $\epsilon_2^{}$. Note that Eq.~(\ref{eq:epsiloni2}) is manifestly independent of the parametrization schemes and the flavor basis, though Eq.~(\ref{eq:epsiloni}) is calculated in a specific basis where $X_l^{}$ and $Y_{\rm R}^{}$ are real and diagonal. For the hierarchical mass spectrum $Y_2^{}\gg Y_1^{}$, only $\epsilon_1^{}$ from the decay of the lighter Majorana neutrino is relevant for leptogenesis. Since $F\left(x\right)\to -3/\left(2\sqrt{x}\right)$ for $x\gg 1$, one obtains
\begin{eqnarray}
\epsilon_1^{}=\frac{3}{16\pi}\frac{I_{044}}{I_{002}\left(I_{022}-I_{002}I_{020}\right)}\;.
\end{eqnarray}
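The asymptotic form of the loop function used in this limit is straightforward to verify numerically:

```python
import numpy as np

def F(x):
    """Loop function F(x) = sqrt(x) [ (2-x)/(1-x) + (1+x) ln(x/(1+x)) ]."""
    return np.sqrt(x)*((2.0 - x)/(1.0 - x) + (1.0 + x)*np.log(x/(1.0 + x)))

x = 1.0e4
val, approx = F(x), -1.5/np.sqrt(x)     # asymptotic form -3/(2 sqrt(x))
rel_err = abs(val/approx - 1.0)         # next correction is O(1/x)
print(val, approx, rel_err)
```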
Then using Eq.~(\ref{eq:odd6}) one can relate $\epsilon_1^{}$ to one CP-odd flavor invariant in the SEFT, namely, ${\cal I}_{042}^{}$, in a simple way
\begin{eqnarray}
\label{eq:epsilon1}
\epsilon_1^{}=\frac{3}{32\pi}\frac{\left(I_{002}^2-I_{004}\right)^3}{I_{002}\left(I_{022}-I_{002}I_{020}\right)\left(I_{020}^2-I_{040}\right)^2}\,{\cal I}_{042}^{}\;,
\end{eqnarray}
with the coefficient composed of all CP-even flavor invariants in the full theory. Furthermore, since there are only 2 independent phases in the SEFT for the two-generation case, only 2 of the 6 CP-odd basic invariants in Table~\ref{table:2g eff} are algebraically independent. With the help of the syzygies in Eqs.~(\ref{eq:syzygy1})-(\ref{eq:syzygy4}) one can express any four of them as the linear combinations of the other two, with the coefficients being rational functions of only CP-even invariants. To be explicit, one can express ${\cal I}_{042}^{}$ as the linear combination of ${\cal I}_{121}^{(2)}$ and ${\cal I}_{240}^{}$, which are responsible for the CP violation in neutrino-neutrino and neutrino-antineutrino oscillations, respectively [cf. Eqs.~(\ref{eq:neutrino oscillation}) and (\ref{eq:neutrino-antineutrino oscillation})]
\begin{eqnarray}
{\cal I}_{042}^{}=\frac{{\cal P}_1\left[{\cal I}_{\rm even}\right]\,{\cal I}_{121}^{(2)}+{\cal P}_2\left[{\cal I}_{\rm even}\right]\,{\cal I}_{240}^{}}{\left[{\cal I}_{021}\left({\cal I}_{100}^2-2{\cal I}_{200}\right)+{\cal I}_{101}\left(2{\cal I}_{120}-{\cal I}_{020}{\cal I}_{100}\right)+{\cal I}_{001}\left({\cal I}_{020}{\cal I}_{200}-{\cal I}_{100}{\cal I}_{120}\right)\right]^2}\;,
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:P1}
{\cal P}_1^{}\left[{\cal I}_{\rm even}\right]
&=&{\cal I}_{021}^3{\cal I}_{100}^{}\left(2{\cal I}_{200}^{}-{\cal I}_{100}^2\right)+{\cal I}_{021}^2\left[\left({\cal I}_{100}^2-2{\cal I}_{200}^{}\right)\left({\cal I}_{001}^{}{\cal I}_{120}^{}+2{\cal I}_{121}^{(1)}\right)\right.\nonumber\\
&&\left. +{\cal I}_{020}^{}{\cal I}_{100}^{}\left({\cal I}_{100}^{}{\cal I}_{101}^{}-{\cal I}_{001}^{}{\cal I}_{200}^{}\right)+2{\cal I}_{220}^{}\left({\cal I}_{001}^{}{\cal I}_{100}^{}-2{\cal I}_{101}^{}\right)\right]\nonumber\\
&&+{\cal I}_{021}^{}\left\{4{\cal I}_{121}^{(1)}\left[{\cal I}_{001}^{}\left({\cal I}_{020}^{}{\cal I}_{200}^{}-{\cal I}_{100}^{}{\cal I}_{120}^{}\right)+{\cal I}_{101}^{}\left(2{\cal I}_{120}^{}-{\cal I}_{020}^{}{\cal I}_{100}^{}\right)\right]\right.\nonumber\\
&&\left.+{\cal I}_{001}^2\left[{\cal I}_{020}^{}\left({\cal I}_{120}^{}{\cal I}_{200}^{}-{\cal I}_{100}^{}{\cal I}_{220}^{}\right)+{\cal I}_{120}^{}\left({\cal I}_{100}^{}{\cal I}_{120}^{}-2{\cal I}_{220}^{}\right)\right]\right.\nonumber\\
&&\left.+2{\cal I}_{001}^{}{\cal I}_{020}^{}{\cal I}_{101}^{}\left(2{\cal I}_{220}^{}-{\cal I}_{100}^{}{\cal I}_{120}^{}\right)+{\cal I}_{100}^3{\cal I}_{020}^{}{\cal I}_{022}^{}+{\cal I}_{100}^2{\cal I}_{120}^{}\left({\cal I}_{002}^{}{\cal I}_{020}^{}-2{\cal I}_{022}^{}\right)\right.\nonumber\\
&&\left.-2{\cal I}_{100}^{}\left[{\cal I}_{002}^{}\left({\cal I}_{120}^2+{\cal I}_{020}^{}{\cal I}_{220}^{}\right)+{\cal I}_{020}^{}{\cal I}_{022}^{}{\cal I}_{200}^{}\right]+4{\cal I}_{120}^{}\left({\cal I}_{022}^{}{\cal I}_{200}^{}+{\cal I}_{002}^{}{\cal I}_{220}^{}\right)\right\}\nonumber\\
&&+{\cal I}_{120}^3{\cal I}_{001}^{}\left(2{\cal I}_{002}^{}-{\cal I}_{001}^2\right)+{\cal I}_{120}^2\left[{\cal I}_{001}^2\left({\cal I}_{020}^{}{\cal I}_{101}^{}+2{\cal I}_{121}^{(1)}\right)+{\cal I}_{001}^{}{\cal I}_{100}^{}\left(2{\cal I}_{022}^{}-{\cal I}_{002}^{}{\cal I}_{020}^{}\right)\right.\nonumber\\
&&\left.-4\left({\cal I}_{022}^{}{\cal I}_{101}^{}+{\cal I}_{002}^{}{\cal I}_{121}^{(1)}\right)\right]+{\cal I}_{120}^{}{\cal I}_{020}^{}\left[4{\cal I}_{121}^{(1)}\left({\cal I}_{002}^{}{\cal I}_{100}^{}-{\cal I}_{001}^{}{\cal I}_{101}^{}\right)+{\cal I}_{001}^3{\cal I}_{220}^{}\right.\nonumber\\
&&\left.-{\cal I}_{001}^{}{\cal I}_{022}^{}\left({\cal I}_{100}^2+2{\cal I}_{200}^{}\right)+2\left(2{\cal I}_{022}^{}{\cal I}_{100}^{}{\cal I}_{101}^{}-{\cal I}_{001}^{}{\cal I}_{002}^{}{\cal I}_{220}^{}\right)
\right]\nonumber\\
&&-{\cal I}_{001}^2{\cal I}_{020}^2\left({\cal I}_{121}^{(1)}{\cal I}_{200}^{}+{\cal I}_{101}^{}{\cal I}_{220}^{}\right)+{\cal I}_{001}^{}{\cal I}_{020}^2{\cal I}_{100}^{}\left(2{\cal I}_{101}^{}{\cal I}_{121}^{(1)}+{\cal I}_{022}^{}{\cal I}_{200}^{}+{\cal I}_{002}^{}{\cal I}_{220}^{}\right)\nonumber\\
&&-{\cal I}_{020}^2{\cal I}_{100}^2\left({\cal I}_{022}^{}{\cal I}_{101}^{}+{\cal I}_{002}^{}{\cal I}_{121}^{(1)}\right)\;,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:P2}
{\cal P}_2^{}\left[{\cal I}_{\rm even}\right]=\left[{\cal I}_{001}^2{\cal I}_{120}^{}-{\cal I}_{001}^{}\left({\cal I}_{021}^{}{\cal I}_{100}^{}+{\cal I}_{020}^{}{\cal I}_{101}^{}\right)+{\cal I}_{002}^{}\left({\cal I}_{020}^{}{\cal I}_{100}^{}-2{\cal I}_{120}^{}\right)+2{\cal I}_{021}^{}{\cal I}_{101}^{}\right]_{}^2\;,
\end{eqnarray}
are polynomials of CP-even basic flavor invariants in the SEFT.
Since any CP-even basic flavor invariant in the SEFT can be written as the rational function of those in the full theory using Eqs.~(\ref{eq:even 1 app})-(\ref{eq:even 12 app}), one finally arrives at
\begin{eqnarray}
\label{eq:connection}
\epsilon_1^{}={\cal R}_1^{}\left[I_{\rm even}^{}\right]\,{\cal I}_{121}^{(2)}+{\cal R}_2^{} \left[I_{\rm even}^{}\right]\,{\cal I}_{240}^{}\;,
\end{eqnarray}
where ${\cal R}_1^{}\left[I_{\rm even}^{}\right]$ and ${\cal R}_2^{}\left[I_{\rm even}^{}\right]$ are rational functions of CP-even basic flavor invariants in the full theory. The complete expressions of ${\cal R}_1^{}\left[I_{\rm even}^{}\right]$ and ${\cal R}_2^{}\left[I_{\rm even}^{}\right]$ are too lengthy to be listed here, though they can be straightforwardly obtained by substituting Eqs.~(\ref{eq:even 1 app})-(\ref{eq:even 12 app}) into Eqs.~(\ref{eq:P1}) and (\ref{eq:P2}) and combining them with Eq.~(\ref{eq:epsilon1}). In Eq.~(\ref{eq:connection}) we have expressed the CP asymmetries in the decays of RH neutrinos as the linear combination of two CP-odd flavor invariants in the low-energy effective theory, which respectively measure the CP violation in neutrino-neutrino and neutrino-antineutrino oscillations. This establishes a direct link between CP-violating observables at high- and low-energy scales in a basis- and parametrization-independent way.\footnote{The connection between CP violation at low and high energies has also been discussed in some previous works~\cite{Broncano:2003fq,Branco:2001pq,Branco:2003rt,Branco:2004hu,Branco:2006ce,Pascoli:2006ie,Pascoli:2006ci,Antusch:2009gn}, but without the full language of invariant theory, and the independence of flavor bases and parametrization schemes is not manifest therein.} Particularly, if CP violation is absent in both neutrino-neutrino and neutrino-antineutrino oscillations, i.e., ${\cal A}_{\nu\nu}^{e\mu}={\cal A}_{\nu\bar{\nu}}^{e\mu}=0$ implying ${\cal I}_{121}^{(2)}={\cal I}_{240}^{}=0$, then the CP asymmetries in RH neutrino decays also vanish. Conversely, if CP violation is measured at low energies either in neutrino-neutrino or neutrino-antineutrino oscillations, indicating either ${\cal I}_{121}^{(2)}$ or ${\cal I}_{240}^{}$ is nonvanishing, then CP asymmetries may exist in the decays of RH neutrinos. 
This result is consistent with the conclusion drawn from Eqs.~(\ref{eq:odd1})-(\ref{eq:det}) that the absence of CP violation in the SEFT also implies the CP conservation in the full seesaw model.
The above analysis about the basic invariants can be directly generalized to the three-generation case. Since there are more than 200 basic invariants in both the SEFT and the full theory, we shall not attempt to write down the complete matching conditions between these two sets of basic invariants for the three-generation case. However, all the basic invariants in the SEFT can still be written as the rational functions of those in the full theory using the matching procedure in Appendix~\ref{app:matching} and taking advantage of Eq.~(\ref{eq:inverse 3g}). In this case, it can be shown that the absence of CP violation in the SEFT is equivalent to the CP conservation in the full seesaw model~\cite{Broncano:2003fq}.
In Ref.~\cite{Branco:2001pq} the authors constructed the following 6 CP-odd flavor invariants\footnote{It should be noticed that the notations of building blocks and flavor invariants in Ref.~\cite{Branco:2001pq} are different from those in the present paper.}
\begin{eqnarray}
J_{044}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(\widetilde{X}_\nu^{}X_{\rm R}^{}G_{\nu{\rm R}}^{}\right)\;,\\
J_{046}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(\widetilde{X}_\nu^{}X_{\rm R}^{2}G_{\nu{\rm R}}^{}\right)\;,\\
J_{048}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(\widetilde{X}_\nu^{}X_{\rm R}^{2}G_{\nu{\rm R}}^{}X_{\rm R}^{}\right)\;,\\
J_{444}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(G_{l\nu}^{}X_{\rm R}^{}G_{l\nu{\rm R}}^{}\right)\;,\\
J_{446}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(G_{l\nu}^{}X_{\rm R}^{2}G_{l\nu{\rm R}}^{}\right)\;,\\
J_{448}^{}&\equiv&{\rm Im}\,{\rm Tr}\left(G_{l\nu}^{}X_{\rm R}^{2}G_{l\nu{\rm R}}^{}X_{\rm R}^{}\right)\;,
\end{eqnarray}
and mentioned that the vanishing of these 6 invariants serves as the sufficient and necessary conditions for CP conservation in the full seesaw model for the three-generation case. However, as emphasized in Refs.~\cite{Yu:2019ihs,Yu:2020gre}, these are not \emph{linear} equations with respect to the sine functions of the phases in $Y_\nu^{}$, so there may exist some parameter space where all these equations are satisfied but the phases take nontrivial values. Therefore, without
any information about the physical parameters at high-energy scales, these equations can only be understood to guarantee CP conservation in \emph{some} particular parameter space. This shortcoming can be overcome by taking advantage of the CP-odd flavor invariants in the SEFT rather than in the full theory. First, it has been proved in Ref.~\cite{Yu:2019ihs} that Eqs.~(\ref{eq:cp conservation condition 1})-(\ref{eq:cp conservation condition 3}) are sufficient to guarantee CP conservation in \emph{all} experimentally allowed parameter space to the order of ${\cal O}\left(1/\Lambda\right)$. Then Eqs.~(\ref{eq:j121})-(\ref{eq:j161}) supply three linear equations and enforce $\alpha_{ij}^{}=\beta_{ij}^{}+k\pi$ without any nontrivial solutions of the phases. Thus the vanishing of $\left\{{\cal J}_{360}^{},{\cal J}_{240}^{(2)},{\cal J}_{260}^{},{\cal J}_{121}^{},{\cal J}_{141}^{},{\cal J}_{161}^{}\right\}$ is able to guarantee CP conservation in the SEFT in all experimentally allowed parameter space. Finally, since the CP conservation in the SEFT is sufficient for CP conservation in the full theory, the vanishing of these 6 flavor invariants in the SEFT also serves as the sufficient and necessary conditions for CP conservation in the full seesaw model in all experimentally allowed parameter space.
To sum up, the connection between the full theory and its low-energy effective theory can be established through the matching of flavor invariants. The matching conditions, such as those in Eqs.~(\ref{eq:odd1 app})-(\ref{eq:even 12 app}), serve as a bridge to link the observables at high energies and those at low energies in a basis- and parametrization-independent way. In addition, the matching conditions are also necessary to determine the initial values of the renormalization-group running of the flavor invariants in the effective theory~\cite{Wang:2021wdq}.
Finally, it is worthwhile to make some brief comments on the practical determination of the physical parameters in the full theory via low-energy measurements. Although the SEFT with only $C^{}_5$ and $C^{}_6$ already contains the same number of physical parameters as the full seesaw model does, the precision of the SEFT itself is limited to the order of ${\cal O}(1/\Lambda^2)$. More precise determination of the physical parameters in $C^{}_5$ and $C^{}_6$, and thus those in the full theory, may require the inclusion of the effective operators of higher mass dimensions at the tree level or even the loop-level matching.
\section{Summary}
\label{sec:summary}
In the language of invariant theory, we have systematically investigated the algebraic structure of the ring of the flavor invariants in the SEFT, which includes one dimension-five and one dimension-six non-renormalizable operator. Particular attention has been paid to the sources of CP violation and the connection between the full seesaw model and the SEFT.
For the first time, we calculate the HS of the flavor space in the SEFT and explicitly construct all the basic (primary) flavor invariants in the invariant ring for the two- (three-) generation case. We have shown that all the physical parameters in the theory can be extracted using the primary flavor invariants, so that any physical observable can be recast into the function of flavor invariants. Furthermore, we prove that any CP-violating observable in the SEFT can be expressed as the linear combination of CP-odd flavor invariants. The minimal sufficient and necessary conditions for leptonic CP conservation in both the SEFT and the full seesaw model have been clarified.
Based on the observation that there is an equal number of independent physical parameters in the SEFT and in the full seesaw model, we reveal the intimate connection between their rings of flavor invariants. With the HS, we show that the invariant ring of the SEFT shares an equal number of primary invariants with that of the full theory, indicating that the inclusion of only one dimension-five and one dimension-six operator in the SEFT is adequate to incorporate all physical information about the full seesaw model. Through a proper matching procedure, we establish a direct link between the flavor invariants in the SEFT and those in the full theory: The former can be expressed as the rational functions of the latter. The matching of the flavor invariants can be used to build a bridge between the CP asymmetries in leptogenesis and those in low-energy neutrino oscillation experiments in a basis- and parametrization-independent way.
The physical observables, which can be measured directly in experiments, should depend on neither the flavor basis nor the specific parametrization of Yukawa matrices. This is exactly the feature of flavor invariants. Therefore, it will be helpful (and more natural) to describe physical observables in terms of flavor invariants. The previous efforts~\cite{Jarlskog:1985ht,Jarlskog:1985cw,Branco:1986quark,Branco:1986lepton,Manohar:2009dy,Manohar:2010vu,Yu:2019ihs,Yu:2020gre,Wang:2021wdq,Yu:2021cco,Bonnefoy:2021tbt,Yu:2022nxj} and the results in this work have demonstrated the great power of the invariant theory in studying CP violation in the quark and leptonic sector, and call for more applications of the invariant theory in other aspects of particle physics.
\section*{Acknowledgements}
This work was supported by the National Natural Science Foundation of China under grant No.~11835013 and the Key Research Program of the Chinese Academy of Sciences under grant No. XDPB15.
\begin{appendix}
\section{Calculation of the Hilbert series}
\label{app:HS}
In this appendix, we present the computational details of the HS in the SEFT.\footnote{A concise and pedagogical introduction to the invariant theory and the HS can be found in Appendix B of Ref.~\cite{Wang:2021wdq}.} The HS plays an important role in the invariant theory and supplies a powerful tool for studying algebraic structure of the invariant ring and the polynomial identities among invariants. The HS is defined as the generating function of the invariants
\begin{eqnarray}
\label{eq:HS def}
{\mathscr H}\left(q\right)\equiv \sum_{k=0}^{\infty}c_k^{}q_{}^k\;,
\end{eqnarray}
where $c_k^{}$ (with $c_0^{}\equiv 1$) denote the number of linearly-independent invariants at degree $k$ while $q$ is an arbitrary complex number that satisfies $\left|q\right|<1$ and labels the degree of the building blocks.
The HS encodes all the information about the invariant ring. A general HS can always be written as the ratio of two polynomial functions~\cite{sturmfels2008algorithms,derksen2015computational}
\begin{eqnarray}
{\mathscr H}\left(q\right)=\frac{1+a_1^{}q+\cdots+a_{l-1}^{}q^{l-1}+q^l}{\prod_{k=1}^r\left(1-q^{d_k}\right)}\;,
\end{eqnarray}
where the numerator has the palindromic structure (i.e., $a_k^{}=a_{l-k}^{}$) and the denominator exhibits the standard Euler product form. A highly nontrivial result is that the total number of the denominator factors $r$ equals the number of the \emph{primary} invariants in the ring, which also matches the number of independent physical parameters in the theory. Here primary invariants refer to those invariants that are algebraically independent, i.e., no nontrivial polynomial in them vanishes identically.
It can be proved that as long as the symmetry group is reductive (including the $N$-dimensional unitary group ${\rm U}(N)$ in the flavor space that we consider throughout this paper), the ring is finitely generated~\cite{sturmfels2008algorithms, derksen2015computational}. This implies that there exist a finite number of \emph{basic} invariants such that any invariant in the ring can be decomposed into the polynomial of these basic invariants. One should keep in mind that the number of basic invariants is in general no smaller than that of primary invariants. This is because the basic invariants may not be algebraically independent, i.e., there may exist nontrivial polynomial relations among basic invariants that are identically equal to zero, known as syzygies. In order to obtain the information of basic invariants, one can calculate the plethystic logarithm (PL) function of the HS
\begin{eqnarray}
\label{eq:PL def}
{\rm PL}\left[{\mathscr H}(q)\right]\equiv
\sum_{k=1}^{\infty}\frac{\mu(k)}{k}{\rm ln}\left[{\mathscr H}(q_{}^k)\right]\;,
\end{eqnarray}
where $\mu(k)$ is the M\"obius function. The great power of the PL function is that from it one can directly read off the number and degrees of the basic invariants and syzygies: The leading positive terms of the PL correspond to the basic invariants while the leading negative terms correspond to the syzygies among them~\cite{Hanany:2006qr}. We will see how this principle is applied in the examples below.
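This principle can be checked mechanically on a toy example. The following Python sketch (a minimal illustration using sympy; the helper names {\tt mobius} and {\tt pl} are ours) implements the definition in Eq.~(\ref{eq:PL def}) and applies it to the HS of a free ring with one generator of degree 1 and one of degree 2, for which the PL must reduce exactly to $q+q^2$ with no negative terms:

```python
from sympy import symbols, log, series, Rational, factorint

q = symbols('q')

def mobius(n):
    # Moebius function mu(n) from the prime factorization of n
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

def pl(H, order):
    # plethystic logarithm of Eq. (PL def), truncated below q^order;
    # terms with k >= order cannot contribute at the kept orders
    expr = sum(Rational(mobius(k), k) * log(H.subs(q, q**k))
               for k in range(1, order))
    return series(expr, q, 0, order).removeO().expand()

# free ring with one generator of degree 1 and one of degree 2
H_free = 1 / ((1 - q) * (1 - q**2))
print(pl(H_free, 8))  # equals q + q**2: only generator terms, no syzygies
```

For a non-free ring the same routine would expose the syzygies as the leading negative terms of the truncated series.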
Calculating the HS directly from the definition in Eq.~(\ref{eq:HS def}) is very difficult in most cases. A systematic approach is to utilize the Molien-Weyl (MW) formula, which reduces the calculation of the HS to contour integrals in the complex plane~\cite{molien1897invarianten, weyl1926darstellungstheorie}
\begin{eqnarray}
\label{eq:MW formula}
{\mathscr H}(q)=\int \left[{\rm d}\mu\right]_{G}^{} {\rm PE}\left(z_1^{},...,z_{r_0}^{};q\right)\;,
\end{eqnarray}
where $\left[{\rm d}\mu\right]_{G}^{}$ stands for the Haar measure of the symmetry group $G$ while the integrand is the plethystic exponential (PE) function defined as
\begin{eqnarray}
{\rm PE}\left(z_1^{},...,z_{r_0}^{};q\right)\equiv {\rm exp}\left[\sum_{k=1}^{\infty}\sum_{i=1}^{n}\frac{\chi_{R_i}\left(z_1^k,...,z_{r_0}^k\right)q^k}{k}\right]\;,
\end{eqnarray}
where $z_1^{},...,z_{r_0}^{}$ are coordinates on the maximum torus of $G$ with $r_0^{}$ the rank of $G$ and $\chi_{R_i}^{}$ (for $i=1,...,n$) is the character function of the $i$-th building block that is in the $R_i^{}$ representation of $G$. Below we will use the MW formula to calculate the HS in the flavor space of the SEFT for two- and three-generation cases.
\subsection{Two-generation SEFT}
In the two-generation scenario, the building blocks in the SEFT to construct flavor invariants ($X_l^{}$, $C_5^{}$ and $C_6^{}$) transform under the symmetry group ${\rm U}(2)$ in the flavor space as
\begin{eqnarray}
X_l^{}: {\bf 2}\otimes {\bf 2}_{}^*\;,\quad
C_5^{}: \left({\bf 2}\otimes {\bf 2}\right)_{\rm s}^{}\;,\quad
C_5^{\dagger}: \left({\bf 2}_{}^*\otimes {\bf 2}_{}^*\right)_{\rm s}^{}\;,\quad
C_6^{}: {\bf 2}\otimes {\bf 2}_{}^*\;,
\end{eqnarray}
where ${\bf 2}$ and ${\bf 2}_{}^*$ stand for the fundamental and anti-fundamental representation of ${\rm U}(2)$ respectively while the subscript ``s'' denotes the symmetric part. The character functions of ${\bf 2}$ and ${\bf 2}_{}^*$ are $z_1^{}+z_2^{}$ and $z_1^{-1}+z_2^{-1}$ respectively, which lead to the character functions of the building blocks
\begin{eqnarray}
\chi_l^{}\left(z_1^{},z_2^{}\right)&=&\left(z_1^{}+z_2^{}\right)\left(z_1^{-1}+z_2^{-1}\right)\;,\nonumber\\
\chi_5^{}\left(z_1^{},z_2^{}\right)&=&z_1^2+z_2^2+z_1^{}z_2^{}+z_1^{-2}+z_2^{-2}+z_1^{-1}z_2^{-1}\;,\nonumber\\
\chi_6^{}\left(z_1^{},z_2^{}\right)&=&\left(z_1^{}+z_2^{}\right)\left(z_1^{-1}+z_2^{-1}\right)\;,
\end{eqnarray}
where $z_1^{}$ and $z_2^{}$ denote the coordinates on the maximum torus of ${\rm U}(2)$ group. Then one can calculate the PE function
\begin{eqnarray}
\label{eq:PE eff 2g}
{\rm PE}\left(z_1^{},z_2^{};q\right)&=& {\rm exp}\left(\sum_{k=1}^\infty\frac{\chi_l\left(z_1^k,z_2^k\right)q^k+\chi_5\left(z_1^k,z_2^k\right)q^k+\chi_6\left(z_1^k,z_2^k\right)q^k}{k}\right)\nonumber\\
&=&\left[\left(1-q\right)_{}^4\left(1-qz_1^{}z_2^{-1}\right)_{}^2\left(1-qz_2^{}z_1^{-1}\right)_{}^2\left(1-qz_1^2\right)\left(1-qz_2^2\right)\left(1-qz_1^{}z_2^{}\right)\right.\nonumber\\
&&\left.\times\left(1-qz_1^{-2}\right)\left(1-qz_2^{-2}\right)\left(1-qz_1^{-1}z_2^{-1}\right)\right]_{}^{-1}\;,
\end{eqnarray}
where the identity $\sum_{k=1}^{\infty}(x_{}^k/k)=-{\rm ln}(1-x)$ (for $\left|x\right|<1$) has been used. Note that the degrees of $X_l^{}$, $C_5^{}$ and $C_6^{}$ are all labeled by $q$. Substituting the PE function in Eq.~(\ref{eq:PE eff 2g}) into the MW formula in Eq.~(\ref{eq:MW formula}) and taking into account the Haar measure of the ${\rm U}(2)$ group, one obtains the HS in the SEFT for the two-generation case
\begin{eqnarray}
\label{eq:HS eff 2g}
{\mathscr H}_{\rm SEFT}^{(2\rm g)}(q)&=&\int \left[{\rm d}\mu\right]_{\rm U (2)}^{} {\rm PE}\left(z_1^{},z_2^{};q\right)\nonumber\\
&=&\frac{1}{2}\frac{1}{\left(2\pi {\rm i}\right)^2}\oint_{\left|z_1\right|=1}\oint_{\left|z_2\right|=1}\left(2-\frac{z_1}{z_2}-\frac{z_2}{z_1}\right) {\rm PE}\left(z_1^{},z_2^{};q\right)\nonumber\\
&=&\frac{1+3q^4+2q^5+3q^6+q^{10}}{\left(1-q\right)^2\left(1-q^2\right)^4\left(1-q^3\right)^2\left(1-q^4\right)^2}\;,
\end{eqnarray}
where in the second line of Eq.~(\ref{eq:HS eff 2g}) the integrals are performed on the unit circle and in the final step the contour integrals are accomplished via the residue theorem. From Eq.~(\ref{eq:HS eff 2g}) one finds that the numerator of the HS exhibits the palindromic structure as expected while the denominator contains in total 10 factors. The latter implies that there are 10 primary flavor invariants, corresponding to the 10 physical parameters in the theory. The number of basic invariants can be obtained by substituting Eq.~(\ref{eq:HS eff 2g}) into Eq.~(\ref{eq:PL def}) and calculating the PL function
\begin{eqnarray}
\label{eq:PL eff 2g}
{\rm PL}\left[{\mathscr H}_{\rm SEFT}^{(2\rm g)}(q)\right]=2q+4q^2+2q^3+5q^4+2q^5+3q^6-{\cal O}\left(q^8\right)\;,
\end{eqnarray}
from which one can read off that there are in total 18 basic invariants (two of degree 1, four of degree 2, two of degree 3, five of degree 4, two of degree 5 and three of degree 6), and the syzygies begin to appear at degree 8. With the help of the leading positive terms in Eq.~(\ref{eq:PL eff 2g}), one can explicitly construct all the basic invariants, as listed in Table~\ref{table:2g eff}.
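The closed form in Eq.~(\ref{eq:HS eff 2g}) can also be cross-checked numerically: on the unit circles the contour integrals of the MW formula become averages over uniformly sampled phases on the maximal torus, which converge geometrically for $|q|<1$. A rough Python sketch of this check (the function name and the grid size $n=128$ are our choices, more than sufficient at $q=0.1$):

```python
import numpy as np

def hs_seft_2g_numeric(q, n=128):
    # MW formula for U(2): average the Haar density times the PE function
    # of Eq. (PE eff 2g) over an n x n grid of phases on the unit torus
    theta = 2 * np.pi * np.arange(n) / n
    z1, z2 = np.meshgrid(np.exp(1j * theta), np.exp(1j * theta))
    haar = 0.5 * (2 - z1 / z2 - z2 / z1)
    pe = 1.0 / ((1 - q)**4
                * (1 - q * z1 / z2)**2 * (1 - q * z2 / z1)**2
                * (1 - q * z1**2) * (1 - q * z2**2) * (1 - q * z1 * z2)
                * (1 - q / z1**2) * (1 - q / z2**2) * (1 - q / (z1 * z2)))
    return (haar * pe).mean().real

q = 0.1
closed = (1 + 3*q**4 + 2*q**5 + 3*q**6 + q**10) / (
    (1 - q)**2 * (1 - q**2)**4 * (1 - q**3)**2 * (1 - q**4)**2)
print(abs(hs_seft_2g_numeric(q) - closed))  # negligible: the two agree
```

Since the integrand is analytic in an annulus around the unit torus, the trapezoidal average converges exponentially fast in $n$.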
\subsection{Three-generation SEFT}
We then proceed to calculate the HS in the SEFT for the three-generation case. The representations of the building blocks under the ${\rm U}(3)$ group turn out to be
\begin{eqnarray}
X_l^{}: {\bf 3}\otimes {\bf 3}_{}^*\;,\quad
C_5^{}: \left({\bf 3}\otimes {\bf 3}\right)_{\rm s}^{}\;,\quad
C_5^{\dagger}: \left({\bf 3}_{}^*\otimes {\bf 3}_{}^*\right)_{\rm s}^{}\;,\quad
C_6^{}: {\bf 3}\otimes {\bf 3}_{}^*\;,
\end{eqnarray}
where ${\bf 3}$ and ${\bf 3}_{}^*$ denote the fundamental and anti-fundamental representation of ${\rm U}(3)$. Recalling that the character functions of ${\bf 3}$ and ${\bf 3}_{}^*$ are $z_1^{}+z_2^{}+z_3^{}$ and $z_1^{-1}+z_2^{-1}+z_3^{-1}$ respectively, one can calculate the character functions of the building blocks
\begin{eqnarray}
\chi_l^{}\left(z_1^{},z_2^{},z_3^{}\right)&=&\left(z_1^{}+z_2^{}+z_3^{}\right)\left(z_1^{-1}+z_2^{-1}+z_3^{-1}\right)\;,\nonumber\\
\chi_5^{}\left(z_1^{},z_2^{},z_3^{}\right)&=&z_1^2+z_2^2+z_3^2+z_1^{}z_2^{}+z_1^{}z_3^{}+z_2^{}z_3^{}\nonumber\\
&&+z_1^{-2}+z_2^{-2}+z_3^{-2}+z_1^{-1}z_2^{-1}+z_1^{-1}z_3^{-1}+z_2^{-1}z_3^{-1}\;,\nonumber\\
\chi_6^{}\left(z_1^{},z_2^{},z_3^{}\right)&=&\left(z_1^{}+z_2^{}+z_3^{}\right)\left(z_1^{-1}+z_2^{-1}+z_3^{-1}\right)\;,
\end{eqnarray}
where $z_i^{}$ (for $i=1,2,3$) denote the coordinates on the maximum torus of ${\rm U}(3)$ group. Then the PE function can be written as
\begin{eqnarray}
\label{eq:PE eff 3g}
{\rm PE}\left(z_1^{},z_2^{},z_3^{};q\right)&=& {\rm exp}\left(\sum_{k=1}^\infty\frac{\chi_l\left(z_1^k,z_2^k,z_3^k\right)q^k+\chi_5\left(z_1^k,z_2^k,z_3^k\right)q^k+\chi_6\left(z_1^k,z_2^k,z_3^k\right)q^k}{k}\right)\nonumber\\
&=&\left[\left(1-q\right)_{}^6\left(1-q z_1^{} z_2^{-1}\right)_{}^2\left(1-q z_2^{} z_1^{-1}\right)_{}^2\left(1-q z_1^{} z_3^{-1}\right)_{}^2\left(1-q z_3^{} z_1^{-1}\right)_{}^2\right.\nonumber\\
&&\left.\times \left(1-q z_2^{} z_3^{-1}\right)_{}^2 \left(1-q z_3^{} z_2^{-1}\right)_{}^2 \left(1-q z_1^2\right)\left(1-q z_2^2\right)\left(1-q z_3^2\right)\left(1-q z_1^{}z_2^{}\right)\right.\nonumber\\
&&\left.\times \left(1-q z_1^{}z_3^{}\right)\left(1-q z_2^{}z_3^{}\right)
\left(1-q z_1^{-2}\right)\left(1-q z_2^{-2}\right)\left(1-q z_3^{-2}\right)\left(1-q z_1^{-1}z_2^{-1}\right)\right.\nonumber\\
&& \left.\times \left(1-q z_1^{-1}z_3^{-1}\right)\left(1-q z_2^{-1}z_3^{-1}\right)\right]_{}^{-1}\;.
\end{eqnarray}
Using the MW formula in Eq.~(\ref{eq:MW formula}), one obtains the HS in the SEFT for the three-generation case
\begin{eqnarray}
\label{eq:HS eff 3g1}
{\mathscr H}_{\rm SEFT}^{(3\rm g)}(q)&=&\int \left[{\rm d}\mu\right]_{\rm U (3)}^{} {\rm PE}\left(z_1^{},z_2^{},z_3^{};q\right)\nonumber\\
&=&\frac{1}{6}\frac{1}{\left(2\pi {\rm i}\right)^3}\oint_{\left|z_1\right|=1}\oint_{\left|z_2\right|=1}\oint_{\left|z_3\right|=1}\left[-\frac{\left(z_2-z_1\right)^2\left(z_3-z_1\right)^2\left(z_3-z_2\right)^2}{z_1^2z_2^2z_3^2}\right]{\rm PE}\left(z_1^{},z_2^{},z_3^{};q\right)\;,\nonumber\\
\end{eqnarray}
where in the second line of Eq.~(\ref{eq:HS eff 3g1}) the Haar measure of ${\rm U}(3)$ group has been substituted and the integrals should be performed on the unit circle. Calculating the contour integrals by means of the residue theorem, after some tedious algebra one finally obtains
\begin{eqnarray}
\label{eq:HS eff 3g}
{\mathscr H}_{\rm SEFT}^{(3\rm g)}(q)=\frac{{\mathscr N}_{\rm SEFT}^{(3\rm g)}(q)}{{\mathscr D}_{\rm SEFT}^{(3\rm g)}(q)}\;,
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:numerator eff 3g}
{\mathscr N}_{\rm SEFT}^{(3\rm g)}(q)&=&q^{65}+2 q^{64}+4 q^{63}+11 q^{62}+23 q^{61}+48 q^{60}+120 q^{59}+269 q^{58}+587 q^{57}+1258 q^{56}\nonumber\\
&&+2543 q^{55}+4895 q^{54}+9124 q^{53}+16281 q^{52}+27963 q^{51}+46490 q^{50}+74644 q^{49}\nonumber\\
&&+115871q^{48}+174433 q^{47}+254494 q^{46}+360055 q^{45}+494873 q^{44}+660820 q^{43}\nonumber\\
&&+857677 q^{42}+1083226 q^{41}+1331628 q^{40}+1593650 q^{39}+1858178 q^{38}+2111158 q^{37}\nonumber\\
&&+2337226 q^{36}+2522435
q^{35}+2654026 q^{34}+2721987 q^{33}+2721987 q^{32}+2654026q^{31}\nonumber\\
&&+2522435 q^{30}+2337226 q^{29}+2111158 q^{28}+1858178 q^{27}+1593650 q^{26}+1331628 q^{25}\nonumber\\
&&+1083226 q^{24}+857677 q^{23}+660820
q^{22}+494873 q^{21}+360055 q^{20}+254494 q^{19}\nonumber\\
&&+174433 q^{18}+115871 q^{17}+74644 q^{16}+46490 q^{15}+27963 q^{14}+16281 q^{13}+9124 q^{12}\nonumber\\
&&+4895 q^{11}+2543 q^{10}+1258 q^9+587 q^8+269 q^7+120
q^6+48 q^5+23 q^4+11 q^3+4 q^2\nonumber\\
&&+2 q+1\;,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:denominator eff 3g}
{\mathscr D}_{\rm SEFT}^{(3\rm g)}(q)=\left(1-q^2\right)^3 \left(1-q^3\right) \left(1-q^4\right)^5 \left(1-q^5\right)^6 \left(1-q^6\right)^6\;.
\end{eqnarray}
It can be seen that the HS in the three-generation SEFT is much more complicated than that in the two-generation case, reflecting the richness of the leptonic flavor structure and the complexity of the invariant ring. As a nontrivial cross-check, the numerator in Eq.~(\ref{eq:numerator eff 3g}) does exhibit the palindromic structure, and more importantly, the denominator in Eq.~(\ref{eq:denominator eff 3g}) has 21 factors, which correctly matches the number of independent physical parameters in the SEFT for the three-generation case.
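Both cross-checks are easy to automate; the short Python sketch below (with the coefficient list transcribed by us from Eq.~(\ref{eq:numerator eff 3g})) verifies the palindromic property of the numerator and the factor count of the denominator:

```python
# coefficients of N_SEFT^(3g)(q) from q^0 up to q^65 (Eq. (numerator eff 3g))
coeffs = [
    1, 2, 4, 11, 23, 48, 120, 269, 587, 1258, 2543, 4895, 9124, 16281,
    27963, 46490, 74644, 115871, 174433, 254494, 360055, 494873, 660820,
    857677, 1083226, 1331628, 1593650, 1858178, 2111158, 2337226, 2522435,
    2654026, 2721987, 2721987, 2654026, 2522435, 2337226, 2111158, 1858178,
    1593650, 1331628, 1083226, 857677, 660820, 494873, 360055, 254494,
    174433, 115871, 74644, 46490, 27963, 16281, 9124, 4895, 2543, 1258,
    587, 269, 120, 48, 23, 11, 4, 2, 1,
]
assert len(coeffs) == 66       # degrees 0 through 65
assert coeffs == coeffs[::-1]  # palindromic structure: a_k = a_{65-k}

# exponents in the Euler product of Eq. (denominator eff 3g)
n_primary = 3 + 1 + 5 + 6 + 6
assert n_primary == 21         # matches the physical parameter count
print("numerator palindromic, 21 denominator factors")
```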
\subsection{Full theory}
For completeness, we also calculate the HS in the full seesaw model using the MW formula, although the results have been given in the literature~\cite{Manohar:2009dy,Manohar:2010vu}. In the full theory, the building blocks to construct flavor invariants are $Y_l^{}$, $Y_\nu^{}$ and $Y_{\rm R}^{}$, which transform under the flavor group ${\rm U}(m)\otimes {\rm U}(n)$ as in Eq.~(\ref{eq:Yukawa trans}). Their representations are assigned as
\begin{eqnarray}
X_l^{}\equiv Y_l^{}Y_l^\dagger: \textbf{\emph{m}}\otimes \textbf{\emph{m}}_{}^*\;,\quad
Y_\nu^{}: \textbf{\emph{m}}\otimes \textbf{\emph{n}}_{}^{ *}\;,\quad
Y_\nu^{\dagger}: \textbf{\emph{n}}\otimes \textbf{\emph{m}}_{}^{*}\;,\quad
Y_{\rm R}^{}: \left(\textbf{\emph{n}}_{}^{*}\otimes \textbf{\emph{n}}_{}^{*}\right)_{\rm s}^{}\;,\quad
Y_{\rm R}^{\dagger}: \left(\textbf{\emph{n}}\otimes \textbf{\emph{n}}\right)_{\rm s}^{}\;,\qquad
\end{eqnarray}
where $m$ (or $n$) is the number of the generations of active (or RH) neutrinos, $\textbf{\emph{m}}$ (or $\textbf{\emph{n}}$) and $\textbf{\emph{m}}_{}^{*}$ (or $\textbf{\emph{n}}_{}^{*}$) denote respectively the fundamental and anti-fundamental representation of ${\rm U}(m)$ [or ${\rm U}(n)$] group. In order to compare with the HS in the SEFT, we consider two special scenarios: $m=n=2$ and $m=n=3$.
For the case of $m=n=2$, the character functions of the building blocks read
\begin{eqnarray}
\chi_l^{}\left(z_1^{},z_2^{}\right)&=&\left(z_1^{}+z_2^{}\right)\left(z_1^{-1}+z_2^{-1}\right)\;,\nonumber\\
\chi_\nu^{}\left(z_1^{},z_2^{},z_3^{},z_4^{}\right)&=&\left(z_1^{}+z_2^{}\right)\left(z_3^{-1}+z_4^{-1}\right)+\left(z_3^{}+z_4^{}\right)\left(z_1^{-1}+z_2^{-1}\right)\;,\nonumber\\
\chi_{\rm R}\left(z_3^{},z_4^{}\right)&=&z_3^2+z_4^2+z_3^{}z_4^{}+z_3^{-2}+z_4^{-2}+z_3^{-1}z_4^{-1}\;,
\end{eqnarray}
where $z_1^{}$ and $z_2$ (or $z_3^{}$ and $z_4^{}$) denote the coordinates on the maximum torus of the ${\rm U}(2)$ group that corresponds to the flavor-basis transformation in the active neutrino (or RH neutrino) sector. The PE function turns out to be
\begin{eqnarray}
\label{eq:PE seesaw 2g}
&&{\rm PE}\left(z_1^{},z_2^{},z_3^{},z_4^{};q\right)\nonumber\\
&=&{\rm exp}\left(\sum_{k=1}^\infty\frac{\chi_l\left(z_1^k,z_2^k\right)q^{2k}+\chi_\nu\left(z_1^k,z_2^k,z_3^k,z_4^k\right)q^{k}+\chi_{\rm R}\left(z_3^k,z_4^k\right)q^k}{k}\right)\nonumber\\
&=&\left[\left(1-q_{}^2\right)_{}^2\left(1-q_{}^2z_1^{}z_2^{-1}\right)\left(1-q_{}^2z_2^{}z_1^{-1}\right)
\left(1-qz_1^{}z_3^{-1}\right)\left(1-qz_3^{}z_1^{-1}\right)\left(1-qz_1^{}z_4^{-1}\right)\left(1-qz_4^{}z_1^{-1}\right)\right.\nonumber\\
&&\left.\times \left(1-qz_2^{}z_3^{-1}\right)\left(1-qz_3^{}z_2^{-1}\right)\left(1-qz_2^{}z_4^{-1}\right)\left(1-qz_4^{}z_2^{-1}\right)\left(1-qz_3^2\right)\left(1-qz_4^2\right)\right.\nonumber\\
&&\left.\times
\left(1-qz_3^{}z_4^{}\right)\left(1-qz_3^{-2}\right)\left(1-qz_4^{-2}\right)\left(1-qz_3^{-1}z_4^{-1}\right)
\right]_{}^{-1}\;.
\end{eqnarray}
Note that we have counted the degrees of $Y_l^{}$, $Y_\nu^{}$ and $Y_{\rm R}^{}$ by $q$, such that the degree of $X_l^{}\equiv Y_l^{}Y_l^\dagger$ is labeled by $q_{}^2$, which is different from the convention in the scenario of the SEFT.\footnote{Different conventions under which the degrees of building blocks are labeled will change the form of the HS. However, they have no influence on the algebraic structure of the invariant ring. Namely, the construction of the primary invariants, the basic invariants, as well as the syzygies among them is not affected by different conventions.} Substituting Eq.~(\ref{eq:PE seesaw 2g}) into Eq.~(\ref{eq:MW formula}), one obtains
\begin{eqnarray}
\label{eq:HS seesaw 2g}
{\mathscr H}_{\rm SS}^{(2\rm g)}(q)
&=&\int \left[{\rm d}\mu\right]_{{\rm U}(2)\otimes{\rm U}(2)}^{} {\rm PE}\left(z_1^{},z_2^{},z_3^{},z_4^{};q\right)\nonumber\\
&=&\frac{1}{4}\frac{1}{\left(2\pi {\rm i}\right)^4}\oint_{\left|z_1\right|=1}\oint_{\left|z_2\right|=1}\oint_{\left|z_3\right|=1}\oint_{\left|z_4\right|=1}\left(2-\frac{z_1}{z_2}-\frac{z_2}{z_1}\right)\left(2-\frac{z_3}{z_4}-\frac{z_4}{z_3}\right) {\rm PE}\left(z_1^{},z_2^{},z_3^{},z_4^{};q\right)\nonumber\\
&=&\frac{1+q^6+3q^8+2q^{10}+3q^{12}+q^{14}+q^{20}}{\left(1-q^2\right)^3\left(1-q^4\right)^5\left(1-q^6\right)\left(1-q^{10}\right)}\;,
\end{eqnarray}
which agrees with the result obtained in Ref.~\cite{Manohar:2009dy}. The denominator in Eq.~(\ref{eq:HS seesaw 2g}) has 10 factors, corresponding to the 10 physical parameters in the two-generation seesaw model. The PL function of the HS is given by
\begin{eqnarray}
\label{eq:PL seesaw 2g}
{\rm PL}\left[{\mathscr H}_{\rm SS}^{(2\rm g)}(q)\right]=3q^2+5q^4+2q^6+3q^8+3q^{10}+2q^{12}-{\cal O}\left(q^{14}\right)\;,
\end{eqnarray}
from which one can read off that there are in total 18 basic invariants (three of degree 2, five of degree 4, two of degree 6, three of degree 8, three of degree 10 and two of degree 12) and the syzygies begin to appear at degree 14. With the help of Eq.~(\ref{eq:PL seesaw 2g}) one can explicitly construct all the basic invariants, as shown in Table~\ref{table:2g seesaw}.
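As in the SEFT case, the expansion in Eq.~(\ref{eq:PL seesaw 2g}) can be reproduced mechanically from the closed form in Eq.~(\ref{eq:HS seesaw 2g}). A Python sketch with sympy (the helper name {\tt mobius} is ours):

```python
from sympy import symbols, log, series, Rational, factorint

q = symbols('q')

def mobius(n):
    # Moebius function mu(n) from the prime factorization of n
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

# HS of the two-generation seesaw model, Eq. (HS seesaw 2g)
H = (1 + q**6 + 3*q**8 + 2*q**10 + 3*q**12 + q**14 + q**20) / (
    (1 - q**2)**3 * (1 - q**4)**5 * (1 - q**6) * (1 - q**10))

order = 14  # expand the PL up to (but not including) q^14
pl = series(sum(Rational(mobius(k), k) * log(H.subs(q, q**k))
                for k in range(1, order)), q, 0, order).removeO().expand()
# counts of basic invariants at degrees 2, 4, ..., 12
print([pl.coeff(q, d) for d in range(2, 13, 2)])  # [3, 5, 2, 3, 3, 2]
```

The six coefficients add up to the 18 basic invariants listed in Table~\ref{table:2g seesaw}, and all odd-degree coefficients vanish, as they must for building blocks graded in even powers of $q$.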
For the case of $m=n=3$, the character functions of the building blocks read
\begin{eqnarray}
\chi_l^{}\left(z_1^{},z_2^{},z_3^{}\right)&=&\left(z_1^{}+z_2^{}+z_3^{}\right)\left(z_1^{-1}+z_2^{-1}+z_3^{-1}\right)\;,\nonumber\\
\chi_\nu^{}\left(z_1^{},z_2^{},z_3^{},z_4^{},z_5^{},z_6^{}\right)&=&\left(z_1^{}+z_2^{}+z_3^{}\right)\left(z_4^{-1}+z_5^{-1}+z_6^{-1}\right)+\left(z_4^{}+z_5^{}+z_6^{}\right)\left(z_1^{-1}+z_2^{-1}+z_3^{-1}\right)\;,\nonumber\\
\chi_{\rm R}\left(z_4^{},z_5^{},z_6^{}\right)&=&z_4^2+z_5^2+z_6^2+z_4^{}z_5^{}+z_4^{}z_6^{}+z_5^{}z_6^{}+z_4^{-2}+z_5^{-2}+z_6^{-2}\nonumber\\
&&+z_4^{-1}z_5^{-1}+z_4^{-1}z_6^{-1}+z_5^{-1}z_6^{-1}\;,
\end{eqnarray}
where $z_i^{}$, for $i=1,2,3$ (or for $i=4,5,6$), denote the coordinates on the maximal torus of the ${\rm U}(3)$ group that corresponds to the flavor-basis transformation in the active neutrino (or RH neutrino) sector. Labeling the degrees of $Y_l^{}$, $Y_\nu^{}$ and $Y_{\rm R}^{}$ by $q$, one can calculate the PE function
\begin{eqnarray}
\label{eq:PE seesaw 3g}
&&{\rm PE}\left(z_1^{},z_2^{},z_3^{},z_4^{},z_5^{},z_6^{};q\right)\nonumber\\
&=&\left[\left(1-q_{}^2\right)_{}^3\left(1-q_{}^2z_1^{}z_2^{-1}\right)\left(1-q_{}^2z_2^{}z_1^{-1}\right)\left(1-q_{}^2z_1^{}z_3^{-1}\right)\left(1-q_{}^2z_3^{}z_1^{-1}\right)\left(1-q_{}^2z_2^{}z_3^{-1}\right)\right.\nonumber\\
&&\left.\times\left(1-q_{}^2z_3^{}z_2^{-1}\right)\left(1-qz_1^{}z_4^{-1}\right)\left(1-qz_4^{}z_1^{-1}\right)\left(1-qz_1^{}z_5^{-1}\right)\left(1-qz_5^{}z_1^{-1}\right)\left(1-qz_1^{}z_6^{-1}\right)\right.\nonumber\\
&&\left.\times\left(1-qz_6^{}z_1^{-1}\right)\left(1-qz_2^{}z_4^{-1}\right)\left(1-qz_4^{}z_2^{-1}\right)\left(1-qz_2^{}z_5^{-1}\right)\left(1-qz_5^{}z_2^{-1}\right)\left(1-qz_2^{}z_6^{-1}\right)\right.\nonumber\\
&&\left.\times\left(1-qz_6^{}z_2^{-1}\right)\left(1-qz_3^{}z_4^{-1}\right)\left(1-qz_4^{}z_3^{-1}\right)\left(1-qz_3^{}z_5^{-1}\right)\left(1-qz_5^{}z_3^{-1}\right)\left(1-qz_3^{}z_6^{-1}\right)\right.\nonumber\\
&&\left.\times\left(1-qz_6^{}z_3^{-1}\right)\left(1-qz_4^2\right)\left(1-qz_5^2\right)\left(1-qz_6^2\right)\left(1-qz_4^{}z_5^{}\right)\left(1-qz_4^{}z_6^{}\right)\left(1-qz_5^{}z_6^{}\right)\right.\nonumber\\
&&\left.\times \left(1-qz_4^{-2}\right)\left(1-qz_5^{-2}\right)\left(1-qz_6^{-2}\right)\left(1-qz_4^{-1}z_5^{-1}\right)\left(1-qz_4^{-1}z_6^{-1}\right)\left(1-qz_5^{-1}z_6^{-1}\right)
\right]_{}^{-1}\;.
\end{eqnarray}
Inserting Eq.~(\ref{eq:PE seesaw 3g}) into Eq.~(\ref{eq:MW formula}) and performing the complex integrals by virtue of the residue theorem, one gets
\begin{eqnarray}
\label{eq:HS seesaw 3g}
{\mathscr H}_{\rm SS}^{(3\rm g)}(q)&=&\int \left[{\rm d}\mu\right]_{{\rm U} (3)\otimes{\rm U}(3)}^{} {\rm PE}\left(z_1^{},z_2^{},z_3^{},z_4^{},z_5^{},z_6^{};q\right)\nonumber\\
&=&\frac{1}{36}\frac{1}{\left(2\pi {\rm i}\right)^6}\oint_{\left|z_1\right|=1}\oint_{\left|z_2\right|=1}\oint_{\left|z_3\right|=1}\oint_{\left|z_4\right|=1}\oint_{\left|z_5\right|=1}\oint_{\left|z_6\right|=1}\left[-\frac{\left(z_2-z_1\right)^2\left(z_3-z_1\right)^2\left(z_3-z_2\right)^2}{z_1^2z_2^2z_3^2}\right]\nonumber\\
&&\times\left[-\frac{\left(z_5-z_4\right)^2\left(z_6-z_4\right)^2\left(z_6-z_5\right)^2}{z_4^2z_5^2z_6^2}\right]{\rm PE}\left(z_1^{},z_2^{},z_3^{},z_4^{},z_5^{},z_6^{};q\right)\;,\nonumber\\
&=&\frac{{\mathscr N}_{\rm SS}^{(3\rm g)}(q)}{{\mathscr D}_{\rm SS}^{(3\rm g)}(q)}\;,
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:numerator seesaw 3g}
{\mathscr N}_{\rm SS}^{(3\rm g)}(q)&=&1+q^4+5q^6+9q^8+22q^{10}+61q^{12}+126q^{14}+273q^{16}+552q^{18}+1038q^{20}+1880q^{22}\nonumber\\
&&+3293q^{24}+5441q^{26}+8712q^{28}+13417q^{30}+19867q^{32}+28414q^{34}+39351q^{36}\nonumber\\
&&+52604q^{38}+68220q^{40}+85783q^{42}+104588q^{44}+123852q^{46}+142559q^{48}+159328q^{50}\nonumber\\
&&+173201q^{52}+183138q^{54}+188232q^{56}+188232q^{58}+183138q^{60}+173201q^{62}\nonumber\\
&&+159328q^{64}+142559q^{66}+123852q^{68}+104588q^{70}+85783q^{72}+68220q^{74}+52604q^{76}\nonumber\\
&&+39351q^{78}+28414q^{80}+19867q^{82}+13417q^{84}+8712q^{86}+5441q^{88}+3293q^{90}\nonumber\\
&&+1880q^{92}+1038q^{94}+552q^{96}+273q^{98}+126q^{100}+61q^{102}+22q^{104}+9q^{106}+5q^{108}\nonumber\\
&&+q^{110}+q^{114}\;,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:denominator seesaw 3g}
{\mathscr D}_{\rm SS}^{(3\rm g)}(q)=\left(1-q^2\right)^3\left(1-q^4\right)^4(1-q^6)^4\left(1-q^8\right)^2\left(1-q^{10}\right)^2\left(1-q^{12}\right)^3\left(1-q^{14}\right)^2\left(1-q^{16}\right)\;, \quad
\end{eqnarray}
which is in agreement with the result in Ref.~\cite{Manohar:2010vu}. Note that the numerator in Eq.~(\ref{eq:numerator seesaw 3g}) also exhibits the palindromic structure, and that the denominator in Eq.~(\ref{eq:denominator seesaw 3g}) has 21 factors in total, corresponding exactly to the 21 independent physical parameters in the three-generation seesaw model.
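Both stated properties can be verified directly. The following Python/sympy sketch (not part of the paper) checks that the numerator satisfies ${\mathscr N}(q)=q^{114}\,{\mathscr N}(1/q)$ and that the denominator exponents add up to 21:

```python
import sympy as sp

q = sp.symbols('q')

# coefficients of the numerator N(q) in Eq. (numerator seesaw 3g): {degree: coefficient}
c = {0: 1, 4: 1, 6: 5, 8: 9, 10: 22, 12: 61, 14: 126, 16: 273, 18: 552,
     20: 1038, 22: 1880, 24: 3293, 26: 5441, 28: 8712, 30: 13417, 32: 19867,
     34: 28414, 36: 39351, 38: 52604, 40: 68220, 42: 85783, 44: 104588,
     46: 123852, 48: 142559, 50: 159328, 52: 173201, 54: 183138, 56: 188232,
     58: 188232, 60: 183138, 62: 173201, 64: 159328, 66: 142559, 68: 123852,
     70: 104588, 72: 85783, 74: 68220, 76: 52604, 78: 39351, 80: 28414,
     82: 19867, 84: 13417, 86: 8712, 88: 5441, 90: 3293, 92: 1880, 94: 1038,
     96: 552, 98: 273, 100: 126, 102: 61, 104: 22, 106: 9, 108: 5, 110: 1,
     114: 1}
N = sum(v * q**k for k, v in c.items())

# palindromic structure: N(q) == q^114 * N(1/q)
assert sp.expand(N - q**114 * N.subs(q, 1/q)) == 0

# sum of the denominator exponents counts the physical parameters
assert 3 + 4 + 4 + 2 + 2 + 3 + 2 + 1 == 21
```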
\section{Matching of flavor invariants}
\label{app:matching}
In this appendix, we explain how to relate the flavor invariants in the SEFT to those in the full seesaw model. In fact, all the basic invariants in the SEFT can be expressed as rational functions of the basic invariants in the full theory.
This matching can be realized by noticing that the building blocks $C_5^{}$ and $C_6^{}$ in the SEFT are related to the building blocks $Y_\nu^{}$ and $Y_{\rm R}^{}$ in the full theory by Eq.~(\ref{eq:wilson coe}), while $Y_l^{}$ is a building block in both the SEFT and the full theory. Below we explicitly show how to express the 18 basic invariants in the SEFT (i.e., the invariants in Table~\ref{table:2g eff}) as rational functions of those in the full theory (i.e., the invariants in Table~\ref{table:2g seesaw}) in the two-generation case. The generalization to the three-generation case is straightforward.
We take ${\cal I}_{121}^{(2)}\equiv {\rm Im}\,{\rm Tr}\left(X_l^{}X_5^{}C_6^{}\right)$, the first CP-odd basic invariant in Table~\ref{table:2g eff}, as a concrete example. The first step is to replace $C_5^{}$ and $C_6^{}$ with $Y_\nu^{}$ and $Y_{\rm R}^{}$ using Eq.~(\ref{eq:wilson coe})
\begin{eqnarray}
\label{eq:inverse matrix ex1}
{\cal I}_{121}^{(2)}={\rm Im}\,{\rm Tr}\left[X_l^{}Y_\nu Y_{\rm R}^{-1}Y_{\nu}^{\rm T}Y_\nu^*\left(Y_{\rm R}^{\dagger }\right)_{}^{-1}Y_\nu^{\dagger}Y_\nu^{}\left(Y_{\rm R}^{\dagger}Y_{\rm R}^{}\right)_{}^{-1}Y_\nu^\dagger\right]\;.
\end{eqnarray}
In order to deal with the inverse matrix in Eq.~(\ref{eq:inverse matrix ex1}), we can utilize the following identity
\begin{eqnarray}
\label{eq:inverse matrix 2g}
A_{}^{-1}=\frac{2\left[\,{\rm Tr}\left(A\right) {\bf 1}_2-A\right]}{{\rm Tr}\left(A\right)^2-{\rm Tr}\left(A^2\right)}\;,
\end{eqnarray}
where $A$ is any $2\times 2$ non-singular matrix and ${\bf 1}_2^{}$ is the 2-dimensional identity matrix. Note that ${\cal I}_{121}^{(2)}$ is unchanged under transformations in the flavor space, and so is the right-hand side of Eq.~(\ref{eq:inverse matrix ex1}). Therefore one cannot substitute $Y_{\rm R}^{}$ directly into Eq.~(\ref{eq:inverse matrix 2g}), because $Y_{\rm R}^{}$ does not transform bi-unitarily in the flavor space and thus ${\rm Tr}\left(Y_{\rm R}^{}\right)$ is not invariant under the flavor-basis transformation [recalling that $Y_{\rm R}^{}\to Y_{\rm R}^{\prime}=U_{\rm R}^{*} Y_{\rm R}^{} U_{\rm R}^\dagger$ and ${\rm Tr}\left(Y_{\rm R}^{\prime}\right)\neq{\rm Tr}\left(Y_{\rm R}^{}\right)$]. So it is necessary to rearrange the matrices on the right-hand side of Eq.~(\ref{eq:inverse matrix ex1}) into a form that transforms \emph{adjointly} in the flavor space
\begin{eqnarray}
\label{eq:inverse matrix ex2}
{\cal I}_{121}^{(2)}&=&{\rm Im}\,{\rm Tr}\left\{\left(Y_\nu^\dagger X_l^{}Y_\nu\right)\left[Y_{\rm R}^\dagger\left(Y_\nu^{\rm T}Y_\nu^*\right)_{}^{-1}Y_{\rm R}^{}\right]_{}^{-1}\left(Y_\nu^\dagger Y_\nu^{}\right)\left(Y_{\rm R}^{\dagger}Y_{\rm R}^{}\right)_{}^{-1}\right\}\nonumber\\
&=&{\rm Im}\,{\rm Tr}\left(G_{l\nu}^{}G_{\tilde{\nu}{\rm R}}^{-1}\widetilde{X}_\nu^{}X_{\rm R}^{-1}\right)\;,
\end{eqnarray}
where $G_{\tilde{\nu}{\rm R}}^{}\equiv Y_{\rm R}^\dagger(\widetilde{X}_{\nu}^{*})_{}^{-1}Y_{\rm R}^{}$, while $G_{l\nu}^{}$, $\widetilde{X}_\nu^{}$ and $X_{\rm R}^{}$ have been defined in the caption of Table~\ref{table:2g seesaw}. Note that all the matrices on the right-hand side of Eq.~(\ref{eq:inverse matrix ex2}) transform as the bi-unitary representation in the flavor space
\begin{eqnarray}
G_{l\nu}^{}\to U_{\rm R}^{}G_{l\nu}^{} U_{\rm R}^\dagger\;,\quad
G_{\tilde{\nu}{\rm R}}^{}\to U_{\rm R}^{}G_{\tilde{\nu}{\rm R}}^{} U_{\rm R}^\dagger\;,\quad
\widetilde{X}_{\nu}^{}\to U_{\rm R}^{}\widetilde{X}_{\nu}^{} U_{\rm R}^\dagger\;,\quad
X_{{\rm R}}^{}\to U_{\rm R}^{}X_{{\rm R}}^{} U_{\rm R}^\dagger\;,\quad
\end{eqnarray}
and thus their traces are all invariant under the flavor-basis transformation. Then one can substitute $\widetilde{X}_\nu^{}$, $G_{\tilde{\nu}{\rm R}}^{}$ and $X_{\rm R}^{}$ into Eq.~(\ref{eq:inverse matrix 2g}) to obtain
\begin{eqnarray}
&&G_{\tilde{\nu}{\rm R}}^{}=\frac{2}{\left(I_{020}^2-I_{040}\right)}\left(I_{020}^{}X_{\rm R}^{}-G_{\nu{\rm R}}^{}\right)\;,\\
\label{eq:inverse matrix ex3}
&&G_{\tilde{\nu}{\rm R}}^{-1}=\frac{-2}{\left(I_{002}^2-I_{004}\right)}\left[I_{020}^{}X_{\rm R}^{}-G_{\nu {\rm R}}^{}-\left(I_{020}^{}I_{002}^{}-I_{022}^{}\right){\bf 1}_2^{}\right]\;,\\
\label{eq:inverse matrix ex4}
&&X_{\rm R}^{-1}=\frac{2}{\left(I_{002}^2-I_{004}\right)}\left(I_{002}^{}{\bf 1}_2^{}-X_{\rm R}^{}\right)\;.
\end{eqnarray}
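Eq.~(\ref{eq:inverse matrix 2g}) is just the $2\times 2$ Cayley-Hamilton identity, since $\det A=[{\rm Tr}(A)^2-{\rm Tr}(A^2)]/2$ for any $2\times 2$ matrix. A quick numerical spot-check (a sketch with a random complex matrix, using numpy; not part of the paper) reads:

```python
import numpy as np

rng = np.random.default_rng(42)
# a generic complex 2x2 matrix is non-singular with probability one
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

trA, trA2 = np.trace(A), np.trace(A @ A)
# A^{-1} = 2 [Tr(A) 1_2 - A] / [Tr(A)^2 - Tr(A^2)]
A_inv = 2 * (trA * np.eye(2) - A) / (trA**2 - trA2)

assert np.allclose(A_inv @ A, np.eye(2))
```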
Inserting Eqs.~(\ref{eq:inverse matrix ex3})-(\ref{eq:inverse matrix ex4}) back into Eq.~(\ref{eq:inverse matrix ex2}) and after some algebra one obtains
\begin{eqnarray}
\label{eq:inverse matrix ex5}
{\cal I}_{121}^{(2)}=\frac{4}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{242}^{(2)}I_{022}^{}+I_{262}^{}I_{002}^{}-{\rm Im}\,{\rm Tr}\left(G_{l\nu}^{}G_{\nu{\rm R}}^{}\widetilde{X}_\nu^{}X_{\rm R}^{}\right)\right]\;.
\end{eqnarray}
The final step is to decompose all the flavor invariants on the right-hand side of Eq.~(\ref{eq:inverse matrix ex5}) into polynomials of the basic invariants in Table~\ref{table:2g seesaw}. Using the decomposition algorithm developed in Appendix C of Ref.~\cite{Wang:2021wdq}, we have
\begin{eqnarray}
\label{eq:inverse matrix ex6}
{\rm Im}\, {\rm Tr}\left(G_{l\nu}^{}G_{\nu{\rm R}}^{}\widetilde{X}_\nu^{}X_{\rm R}^{}\right)=\frac{1}{2}\left(I_{242}^{(2)}I_{022}^{}+I_{044}^{}I_{220}^{}+I_{262}^{}I_{002}^{}+I_{244}^{}I_{020}^{}\right)\;.
\end{eqnarray}
Substituting Eq.~(\ref{eq:inverse matrix ex6}) into Eq.~(\ref{eq:inverse matrix ex5}), we finally obtain ${\cal I}_{121}^{(2)}$ as a rational function of the basic invariants in the full theory
\begin{eqnarray}
\label{eq:odd1 app}
{\cal I}_{121}^{(2)}=\frac{2}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{242}^{(2)}I_{022}^{}-I_{044}^{}I_{220}^{}+I_{262}^{}I_{002}^{}-I_{244}^{}I_{020}^{}\right]\;,
\end{eqnarray}
which is exactly Eq.~(\ref{eq:odd1}). The remaining 5 CP-odd basic invariants in Table~\ref{table:2g eff} can be handled in the same manner as ${\cal I}_{121}^{(2)}$, and thus we ultimately obtain
{\allowdisplaybreaks
\begin{eqnarray}
{\cal I}_{221}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{242}^{(2)}I_{222}^{}+I_{244}^{}I_{220}^{}+I_{462}^{}I_{002}^{}-I_{444}^{}I_{020}^{}\right]\;,\label{eq:odd2 app}\\
{\cal I}_{122}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^3}\left\{I_{242}^{(2)}\left[3I_{022}^2+2I_{040}^{}\left(I_{002}^2-I_{004}^{}\right)-4I_{020}^{}I_{002}^{}I_{022}^{}\right]\right.\nonumber\\
&&\left.+I_{044}^{}\left(4I_{020}^{}I_{222}^{}-I_{220}^{}I_{022}^{}-2I_{242}^{(1)}\right)+I_{262}^{}\left[3I_{002}^{}I_{022}^{}-I_{020}^{}\left(I_{002}^2+3I_{004}^{}\right)\right]\right.\nonumber\\
&&\left.+I_{244}^{}\left(3I_{020}^{}I_{022}^{}-2I_{042}^{}\right)\right\}\;,\label{eq:odd3 app}\\
{\cal I}_{240}^{}&=&\frac{1}{\left(I_{002}^2-I_{004}\right)^2}\left[3I_{242}^{(2)}\left(I_{022}^{}I_{220}^{}-I_{020}^{}I_{222}^{}\right)-I_{044}^{}I_{220}^2+I_{262}^{}\left(3I_{002}^{}I_{220}^{}-2I_{222}^{}\right)\right.\nonumber\\
&&\left.-2 I_{244}^{}I_{020}^{}I_{220}^{}+I_{462}^{}\left(2I_{022}^{}-3I_{002}^{}I_{020}\right)+I_{444}^{}I_{020}^2\right]\;,\label{eq:odd4 app}\\
{\cal I}_{141}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^3}\left\{I_{242}^{(2)}I_{020}^{}I_{022}^2+I_{044}^{}I_{020}^{}\left(I_{022}^{}I_{220}^{}-2I_{242}^{(1)}\right)\right.\nonumber\\
&&\left.+I_{262}^{}\left[I_{002}^{}I_{020}^{}I_{022}^{}+I_{040}^{}\left(I_{004}^{}-I_{002}^2\right)\right]+I_{244}^{}I_{020}^{}\left(I_{020}^{}I_{022}-2I_{042}^{}\right)\right\}\;,\label{eq:odd5 app}\\
{\cal I}_{042}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^3}\,I_{044}^{}\left(I_{020}^2-I_{040}^{}\right)_{}^2\;.\label{eq:odd6 app}
\end{eqnarray}
}
Therefore, the 6 CP-odd basic invariants in the SEFT have been written as linear combinations of the 6 CP-odd basic invariants in the full theory, with coefficients that are rational functions of the CP-even basic invariants.
For completeness, we also list below the matching conditions for the CP-even basic invariants, all of which can be deduced in the same manner as ${\cal I}_{121}^{(2)}$, i.e.,
{\allowdisplaybreaks
\begin{eqnarray}
\label{eq:even 1 app}
{\cal I}_{100}^{}&=&I_{200}^{}\;,\\
{\cal I}_{001}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)}\left(I_{002}^{}I_{020}^{}-I_{022}^{}\right)\;,\\
{\cal I}_{200}^{}&=&I_{400}^{}\;,\\
{\cal I}_{101}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)}\left(I_{002}^{}I_{220}^{}-I_{222}^{}\right)\;,\\
{\cal I}_{020}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)}\left(I_{042}^{}-2I_{022}^{}I_{020}^{}+I_{020}^2I_{002}^{}\right)\;,\\
{\cal I}_{002}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)^2}\left[2I_{022}^{}\left(I_{022}^{}-2I_{002}^{}I_{020}^{}\right)+I_{004}^{}\left(I_{020}^2-I_{040}^{}\right)+I_{002}^2\left(I_{020}^2+I_{040}^{}\right)\right]\;,\\
{\cal I}_{120}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)}\left[I_{220}^{}\left(I_{020}^{}I_{002}^{}-I_{022}^{}\right)+I_{242}^{(1)}-I_{020}^{}I_{222}^{}\right]\;,\\
{\cal I}_{021}^{}&=&\frac{1}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{004}^{}I_{020}^{}\left(I_{020}^2-I_{040}^{}\right)+I_{002}^2I_{020}^{}\left(3I_{020}^2+I_{040}^{}\right)-4I_{022}^{}\left(I_{042}^{}-2I_{020}^{}I_{022}^{}\right)\right.\nonumber\\
&&\left.+4I_{002}^{}I_{020}^{}\left(I_{042}^{}-3I_{020}^{}I_{022}^{}\right)\right]\;,\\
{\cal I}_{220}^{}&=&\frac{2}{\left(I_{002}^2-I_{004}\right)}\left[I_{220}^{}\left(I_{220}^{}I_{002}^{}-2I_{222}^{}\right)+I_{442}^{}\right]\;,\\
{\cal I}_{121}^{(1)}&=&\frac{1}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{004}^{}I_{220}^{}\left(I_{020}^2-I_{040}^{}\right)+I_{002}^2I_{220}^{}\left(3I_{020}^2+I_{040}^{}\right)\right.\nonumber\\
&&\left.+4I_{022}^{}\left(I_{022}^{}I_{220}^{}+I_{020}^{}I_{222}^{}-I_{242}^{(1)}\right)-4I_{002}^{}I_{020}^{}\left(2I_{022}^{}I_{220}^{}+I_{020}^{}I_{222}^{}-I_{242}^{(1)}\right)\right]\;,\\
{\cal I}_{040}^{}&=&\frac{1}{\left(I_{002}^2-I_{004}\right)^2}\left[I_{004}^{}\left(I_{020}^2-I_{040}^{}\right)_{}^2+I_{002}^2\left(3I_{020}^2-I_{040}^{}\right)\left(I_{020}^2+I_{040}^{}\right)\right.\nonumber\\
&&\left.-4\left(2I_{020}^{}I_{022}^{}-I_{042}^{}\right)\left(2I_{002}^{}I_{020}^2-2I_{020}^{}I_{022}^{}+I_{042}^{}\right)
\right]\;,\\
{\cal I}_{022}^{}&=&\frac{1}{\left(I_{002}^2-I_{004}\right)^3}\left[I_{002}^3\left(5I_{020}^4+2I_{020}^2I_{040}+I_{040}^2\right)+8I_{020}^{}\left(I_{002}^2I_{020}^{}I_{042}^{}-2I_{022}^3\right)\right.\nonumber\\
&&\left.+8I_{022}^2\left(5I_{002}^{}I_{020}^2+I_{042}\right)-4I_{022}^{}I_{002}^{}I_{020}^{}\left(7I_{002}^{}I_{020}^2+I_{002}^{}I_{040}^{}+4I_{042}^{}\right)\right.\nonumber\\
\label{eq:even 12 app}
&&\left.+I_{004}^{}\left(I_{020}^2-I_{040}^{}\right)\left(3I_{002}^{}I_{020}^2+I_{002}^{}I_{040}^{}-4I_{020}^{}I_{022}^{}\right)\right]\;.
\end{eqnarray}
}
From Eqs.~(\ref{eq:even 1 app})-(\ref{eq:even 12 app}) we observe that all 12 CP-even basic invariants in the SEFT can be expressed as rational functions of the 12 CP-even basic invariants in the full theory, independently of any CP-odd invariants.
The generalization to the three-generation case is straightforward. One just needs to replace Eq.~(\ref{eq:inverse matrix 2g}) by
\begin{eqnarray}
\label{eq:inverse 3g}
A_{}^{-1}=\frac{6A^2-6\,{\rm Tr}\left(A\right)A+3\left[{\rm Tr}\left(A\right)^2-{\rm Tr}\left(A^2\right)\right]{\bf 1}_3}{{\rm Tr}\left(A\right)^3-3\,{\rm Tr}\left(A^2\right){\rm Tr}\left(A\right)+2\,{\rm Tr}\left(A^3\right)}\;,
\end{eqnarray}
where $A$ is a $3\times 3$ non-singular matrix and ${\bf 1}_3^{}$ is the 3-dimensional identity matrix. Following the same strategy as before, one can also express all the basic invariants in the SEFT as rational functions of those in the full theory for the three-generation case.
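Eq.~(\ref{eq:inverse 3g}) likewise follows from the $3\times 3$ Cayley-Hamilton theorem, with $6\det A={\rm Tr}(A)^3-3\,{\rm Tr}(A^2){\rm Tr}(A)+2\,{\rm Tr}(A^3)$; a numerical spot-check analogous to the $2\times 2$ case (a sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

t1 = np.trace(A)
t2 = np.trace(A @ A)
t3 = np.trace(A @ A @ A)
# Eq. (inverse 3g)
A_inv = (6 * A @ A - 6 * t1 * A + 3 * (t1**2 - t2) * np.eye(3)) / (
    t1**3 - 3 * t2 * t1 + 2 * t3)

assert np.allclose(A_inv @ A, np.eye(3))
```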
Conversely, one can also express all the basic invariants in the full theory as functions (though not rational functions) of those in the SEFT, as described in the following procedure. First, the number of independent physical parameters in the full seesaw model exactly matches that in $C_5$ and $C_6$, both being 10 (or 21) in the two- (or three-) generation case. Moreover, one can prove that all the physical parameters in the full seesaw model can be expressed in terms of those in the SEFT (see Refs.~\cite{Broncano:2002rw,Broncano:2003fq} for details). Second, we have shown in Sec.~\ref{subsec:extract2g} and Sec.~\ref{subsec:extract3g} that all the physical parameters in the SEFT can be extracted using primary invariants (which are a subset of the basic invariants). Therefore, all the basic invariants in the full seesaw model, which are composed of the physical parameters in the full theory, can be expressed as functions of the basic invariants in the SEFT. Since any flavor invariant in the ring can be decomposed as a polynomial of the basic ones, we conclude that the matching of the two sets of flavor invariants can be inverted. Such an inversion may be accomplished up to some possible discrete degeneracies due to the non-linearity of polynomial functions. However, since complications arise at each step, we shall not explicitly invert the complete invariant matching.
\end{appendix}
\bibliographystyle{JHEP}
\section{Introduction}
\label{intr}
Theoretical and experimental access to many-body correlations is essential in elucidating the properties and
underlying physics of strongly interacting systems \cite{cira12,garc14}. In the framework of ultracold atoms,
the quantum correlations in momentum space associated with bosonic or fermionic neutral atoms trapped in
optical tweezers (with a finite number $N$ of particles \cite{prei19,berg19,bech20}) or in extended optical
lattices (with control of the 1D, 2D, or 3D dimensionality \cite{grei02,gerb05,gerb05.2,clem18,clem19})
are currently attracting significant experimental attention, empowered
\cite{bech20,prei19,berg19,clem18,clem19,hodg17} by advances in single-atom-resolved detection methods
\cite{ott16}.
In this paper, we derive explicit analytic expressions for the 3rd-, 2nd-, and 1st-order momentum
correlations of 3 ultracold bosonic atoms trapped in an optical trap of 3 wells in a linear arrangement
(denoted as 3b-3w). Compared to the case of 2 particles in 2 wells (2p-2w)
\cite{bran17,bran18,yann19.1,yann19.2},
a complete Hubbard-model treatment of momentum correlations (as a function of the
interparticle interaction) for the 3b-3w case increases the complexity and effort involved by an order of
magnitude, because of the larger Hilbert space and the larger number of states, i.e., a total of 10 states
instead of 4, including the excited states which are long-lived \cite{joch15} for trapped ultracold atoms.
Therefore, demonstrating that this complexity of the theoretical treatment can be handled in an
efficient manner through the use of algebraic computer languages constitutes an important step toward the
implementation of the bottom-up approach for simulating many-body physics with ultracold atoms. In this
respect, the statement above parallels earlier observations that three-particle entanglement extends
two-particle entanglement in a nontrivial way \cite{zeil99,cira00,yann19.3}.
\textcolor{black}{
Compared to the standard numerical treatments \cite{galle15,rave17,shib72,call87,dago94}
of the Hubbard model,}
the advantage of our algebraic treatment
is the ability to produce in closed analytic form cosinusoidal/sinusoidal expressions of the many-body wave
function and the associated momentum correlations of all orders; see for example Eqs.\
(\ref{wfbexpr}), (\ref{2ndbexpr}), and (\ref{frstbexpr}), which codify the main results of our paper.
Due to recent experimental advances in tunability and control of a system of a few ultracold atoms trapped in
finite optical lattices (referred to also as optical tweezers), such momentum correlations can be measured directly
in time-of-flight experiments \cite{berg19,prei19,bech20} and their experimental cosinusoidal diffraction patterns
are revealing direct analogies with the quantum optics of massless photons \cite{bran18,yann19.1,prei19}.
\textcolor{black}{
In this context, this paper aims at researchers actively engaged in experimental and theoretical investigations
of the properties of (finite) quantum few-body systems, as well as those aiming to understand many-body quantum
systems through bottom-up hierarchical modeling of trapped finite ultracold-atom assemblies with deterministically
controllable increased size and complexity; see, e.g.,
Refs.\ \cite{kauf14,kauf18,berg19,prei19,bech20,joch15,sowi16,zinn14}.
Indeed, we target researchers in these fields by providing finger-print characteristics to aid the design,
diagnostics, and interpretation of experiments,}
\textcolor{black}{
as well as by giving benchmark results \cite{note9} for comparisons with future theoretical treatments.}
We foresee these as important merits that will contribute to future impact of our work.
\textcolor{black}{
In addition, the availability of the complete analytic set of momentum correlations enabled us to reveal and
explore two major physical aspects of the 3b-3w ultracold-atom system, namely: (i) Signatures of an emergent
quantum phase transition \cite{note7} from a superfluid phase to a Mott-insulator phase -- here the designation
'emergent' is used to indicate the gradual emergence of a phase transition in a finite system as the system size
is increased to infinity \cite{note7}, alternatively termed an 'inter-phase crossover' -- and
(ii) Analogies between the interference properties of three trapped ultracold atoms and quantum-optics
three-photon interference. These aspects are elaborated in some detail immediately below.
}
{\it (i) Signatures of emergent Superfluid to Mott transition:\/}
The sharp superfluid-to-Mott transition has been observed in extended optical lattices with trapped
ultracold bosonic
alkali atoms ($^{87}$Rb) \cite{grei02}, as well as with excited $^4$He$^*$ bosonic atoms \cite{clem18}. In these
experiments, after a time-of-flight (TOF) expansion, the single-particle momentum (spm) density (1st-order
momentum correlation) was recorded. An oscillating spm-density provides a hallmark of a superfluid phase,
associated with a maximum uncertainty regarding a particle's site occupation; this happens for the
non-interacting case when the particles are fully delocalized. On the other hand, a featureless spm-density is
the hallmark of being deeply in the Mott-insulator phase when all particles are fully localized on the
lattice sites, exhibiting no fluctuations in the site occupancies.
\textcolor{black}{
Here, we show that the 1st-order momentum correlations for the 3b-3w system vary smoothly, alternating as a
function of the Hubbard ${\cal U}$ between a featureless profile and that resulting from the sum of two cosine terms;
such profile alternations may provide signatures of an emerging superfluid-to-Mott-insulator crossover.
The periods of the cosine terms depend on the inverse of the lattice constant $d$ and its double $2d$ ($d$ being
the nearest-neighbor interwell distance). We note that for extended lattices only the $\cos(dk)$ term has been
theoretically specified \cite{gerb05.2,seng05,triv09}
with perturbative $1/{\cal U}$ approaches, and that our non-perturbative
results suggest that all cosine terms with all possible interwell distances in the argument should in general
contribute.}
Furthermore, we show that the correspondence between the featureless profiles and the interaction strength is
not a one-to-one correspondence. Indeed, we show that a
featureless spm-density can correspond to different strengths of the interaction, depending on the sign of the
interaction (repulsive versus attractive) and the precise Hubbard state under consideration (ground state or
one of the excited states). {\it For a unique characterization of a phase regime, both the 2nd-order and the
3rd-order momentum correlations beyond the spm-density are required.\/}
{\it (ii) Analogies with quantum-optics three-photon interference:\/}
Recent experimental \cite{prei19,berg19,lege04,gerr15.1,gerr15.2,tamm18.1,tamm19} and theoretical
\cite{bran17,bran18,bonn18,tamm18.2,yann19.1,yann19.2,yann19.3} advances have ushered a new research direction
regarding investigations of higher-order quantum interference resolved at the level of the intrinsic microscopic
variables that constitute the single-particle wave packet of the interfering particles. These intrinsic
variables are pairwise conjugated; they are the single-particle momenta ($k$'s) and mutual distances ($d$'s) for
massive localized particles \cite{prei19,berg19,bran17,bran18,bonn18,yann19.1,yann19.2,yann19.3} and the
frequencies ($\omega$'s) and relative time delays ($\tau$'s) for massless photons
\cite{lege04,gerr15.1,gerr15.2,tamm18.1,tamm18.2,tamm19}.
For the case of two fermionic or bosonic ultracold atoms, we investigated in Ref.\ \cite{yann19.1} this
correspondence in detail and we proceeded to establish a complete analogy between the cosinusoidal patterns
(with arguments $\propto kd$ or $\propto \omega \tau$) of the second-order $(k_1,k_2)$ correlation
maps for the two trapped atoms (determined experimentally through TOF measurements \cite{prei19,berg19}) with the
landscapes of the two-photon ($\omega_1,\omega_2)$ interferograms \cite{gerr15.1,gerr15.2,tamm19}. In addition,
we demonstrated that the Hong-Ou-Mandel (HOM) \cite{hom87} single-occupancy coincidence probability at
the detectors, $P_{11}$ (which relates to the celebrated HOM dip for total destructive interference, i.e.,
when $P_{11}=0$), corresponds to a double integral over the momentum variables $(k_1,k_2)$
of a specific term contributing to the
full correlation map, in full analogy with the treatment of the optical ($\omega_1,\omega_2)$ interferograms in
Ref.\ \cite{gerr15.1}. Due to this summation over the intrinsic momentum (or frequency for photons) variables,
the information contained in the HOM dip is limited compared to the full correlation map. Precise analogs of the
original optical HOM dip (with $P_{11}$ varying as a function of relative time delay or separation between
particles) have also been experimentally realized using the interference of massive particles,
i.e., two colliding electrons \cite{taru98,jonc12,bocq13} or two colliding $^4$He atoms \cite{lope15}. For the
case of two ultracold atoms trapped in two optical tweezers, analogs of the $P_{11}$ coincidence probability
can be determined via {\it in situ\/} measurements, as a function of the time evolution of the system
\cite{kauf14,yann19.1} or the interparticle interaction \cite{bran18,yann19.1}.
In this paper, we establish for the 3b-3w case the full range of analogies between the TOF spectroscopy
\cite{note3},
as well as the {\it in-situ\/} measurements, of localized massive particles and the multi-photon interference
in linear optical networks \cite{agar15,tamm18.1,tamm18.2,tamm19},
paying attention in particular to the mutual interparticle interactions which are
absent for photons. These analogies encompass extensions of the 2p-2w analogies mentioned above, i.e.,
correlation maps dependent on three momentum variables $(k_1,k_2,k_3)$ for massive particles versus
interferograms with three frequency variables $(\omega_1,\omega_2,\omega_3)$ for massless photons, and the HOM
$P_{111}$ coincidence probability for three particles versus that for three photons. Most importantly, however,
these analogies include highly nontrivial aspects beyond the reach of two-photon (or two-particle)
and one-photon (or one-particle) interferences,
such as genuine three-photon interference \cite{agne17,mens17} which cannot be determined from the
knowledge solely of the lower two-photon and one-photon interferences.
\begin{figure}[t]
\includegraphics[width=8cm]{3b_3w_fig1.pdf}
\caption{Spectrum of the ten bosonic eigenvalues in Eq.\ (\ref{eigvalb}) as a function of ${\cal U}$ (horizontal axis).
(a) This frame (with the extended $-10 \leq {\cal U} \leq 10$ scale) illustrates the convergence to the three values
of zero [ground state (${\cal U} >0$) or highest excited state (${\cal U} < 0$)], $\pm |{\cal U}|$ (six excited states),
and $\pm 3 |{\cal U}|$ (ground state and two excited states for ${\cal U} < 0$).
(b) A more detailed view in the range $-2 \leq {\cal U} \leq 2$.
Taking into consideration the three energy crossings at ${\cal U}=0$, the corresponding eigenstates
are labeled in ascending energy order as
$i=1$, $2$, $3r(4l)$, $4r(3l)$, $5r(6l)$, $6r(5l)$, $7r(8l)$, $8r(7l)$, $9$, $10$,
where ``$r$'' means ``right'' for the region of positive ${\cal U}$ and ``$l$'' means ``left'' for the region of
negative ${\cal U}$.
}
\label{feigvalb}
\end{figure}
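The clustering of eigenvalues quoted in the caption can be reproduced by direct diagonalization of the linear three-site Bose-Hubbard Hamiltonian. The following Python sketch (not from the paper; the tunneling amplitude $J$ is set to unity, assuming ${\cal U}$ is measured in units of $J$) builds the ten-dimensional Fock space and confirms that, deep in the repulsive regime, one eigenvalue approaches $0$, six approach ${\cal U}$, and three approach $3\,{\cal U}$:

```python
import itertools
import numpy as np

# Fock basis for 3 bosons on 3 sites: the ten states |n1, n2, n3>
basis = [n for n in itertools.product(range(4), repeat=3) if sum(n) == 3]
index = {n: i for i, n in enumerate(basis)}

def hubbard_3b3w(U, J=1.0):
    """Linear three-site Bose-Hubbard Hamiltonian (open chain)."""
    H = np.zeros((len(basis), len(basis)))
    for n in basis:
        i = index[n]
        # on-site interaction (U/2) sum_i n_i (n_i - 1)
        H[i, i] = 0.5 * U * sum(m * (m - 1) for m in n)
        for s in (0, 1):  # the two nearest-neighbor bonds; hop s+1 -> s
            if n[s + 1] > 0:
                m = list(n)
                m[s + 1] -= 1
                m[s] += 1
                j = index[tuple(m)]
                amp = -J * np.sqrt(n[s + 1] * (n[s] + 1))
                H[i, j] += amp  # plus the Hermitian-conjugate element
                H[j, i] += amp
    return H

E = np.linalg.eigvalsh(hubbard_3b3w(U=200.0))
# deep Mott regime: 1 eigenvalue near 0, 6 near U, 3 near 3U
```

The three clusters correspond to the occupation patterns $(1,1,1)$, $(2,1,0)$-type, and $(3,0,0)$-type, respectively; for attractive ${\cal U}<0$ the pattern is mirrored, as in Fig.~\ref{feigvalb}(a).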
\subsection {Plan of paper}
\label{plan}
Following the introductory section where we defined the aims of this work, we introduce in Sec.\ \ref{3b-hb} the
linear three-site Hubbard model and its analytic solution for three spinless ultracold bosonic atoms. We display
the spectrum of the ten bosonic eigenvalues of the Hubbard model for both attractive and repulsive interatomic
interactions (Fig.\ \ref{feigvalb}), and discuss in detail: (1) the infinite repulsive or attractive interaction
limit, and (2) the non-interacting limit. In Sec.\ \ref{hordcorr} we outline the general definition and relations
pertaining to higher-order correlations in momentum space.
In the following several sections we give explicit
analytic results and graphical illustrations pertaining to momentum correlation functions of the various orders,
starting from the third-order, since the lower-order are obtained from the third-order one by integration over the
unresolved momentum variables [see, e.g., Eq.\ (\ref{2nd}) for the second-order momentum correlation]. The
third-order momentum correlations for 3 bosons in 3 wells, with explicit discussion of the infinite-interaction
(repulsive or attractive) limit is given in Sec.\ \ref{3rdcorrUpmI} (see Fig.\ \ref{f3rdcorrb}),
followed by explicit results for the non-interacting limit in Sec.\ \ref{3rdcorrU0}. Sec.\ \ref{s3rdanyu} is
devoted to a presentation and discussion of results for the third-order momentum correlations for 3 bosons in 3
wells as a function of the strength of the inter-atom interaction over the whole range, from highly attractive to
highly repulsive (see momentum correlation maps in Fig.\ \ref{f3rdcorrbst2}).
Next we discuss in Sec.\ \ref{s2ndanyu} the second-order momentum correlation as a
function of the interparticle interaction; see momentum correlation maps for the whole interaction range in
Fig.\ \ref{f2ndcorrbst2}.
The first-order momentum correlation, obtained via integration of the second-order one over the momentum of one of
the atoms, is discussed as a function of inter-atom interaction strength in Sec.\ \ref{s1stanyu},
with a graphic illustration in Fig.\ \ref{f1stcorrbst2} for the first-excited state of 3 bosons in 3
wells, illustrating transition as a function of interaction strength from localized to superfluid behavior.
Sec.\ \ref{sign} is devoted to a detailed study of the quantum phase transition from localized to
superfluid behavior, as deduced from inspection of the first-order correlation function for the ground state of 3
bosons in 3 wells (Fig.\ \ref{corrbst1}, top row), and further elucidated and elaborated
with the use of second-order (Fig.\ \ref{corrbst1}, middle row), and third-order
(Fig.\ \ref{corrbst1}, bottom row) momentum correlation maps. Further discussion of the
quantum phase transition through analysis of site occupancies and their fluctuations for the ground and
first-excited states as a function of the interparticle interactions, illuminating the connection between the
quantum phase-transition from superfluid (phase coherent) to localized (incoherent) states, and the phase-number
(site occupancy) uncertainty principle, is illustrated in Fig.\ \ref{focc}.
Sec.\ \ref{anal} expounds on analogies with three-photon interference in quantum optics, including {\it genuine\/}
three-photon interference. We summarize the contents of the paper in Sec.\ \ref{summ},
closing with a comment concerning the expected relevance of the all-order momentum-space correlations for the
3 bosons in 3 wells as an alternative route to exploration with massive particles of aspects pertaining to the
boson sampling problem \cite{aaar13} and its extensions, which serve as a major topic (see, e.g., Refs.
\cite{tamm15,tamm15.1,tich14,lain14,wals19}) in quantum-optics investigations as an intermediate step towards
the implementation of a quantum computer.
\textcolor{black}{
Appendix \ref{a11} and Appendix \ref{a12} complement Sec.\
\ref{eigvecuinf} and Sec.\ \ref{eigvecu0}, respectively, by listing the Hubbard eigenvectors of the
remaining eight excited states not discussed in the main text (where, as above-mentioned, we focus on the ground
and first-excited states). In addition, regarding again the remaining eight excited states not discussed in the
main text, Appendix \ref{a1} and Appendix \ref{a2} complement Sec.\ \ref{3rdcorrUpmI} and Sec.\ \ref{3rdcorrU0},
respectively, by listing the corresponding three-body wave functions. Specifically, Appendices \ref{a11} and
\ref{a1} focus on the limit of infinite repulsive or attractive interaction, whereas Appendices \ref{a12} and
\ref{a2} focus on the noninteracting case.
}
The last three appendices give details of the all-order correlation
functions as a function of the interaction strength for the remaining eight states not discussed in the main text.
\section{The linear three-site Hubbard model and its analytic solution for three spinless ultracold bosonic atoms}
\label{3b-hb}
Numerical solutions for small Hubbard clusters are readily available in the literature. Here we present a
compact analytic exposition for all the 10 eigenvalues and eigenstates of the linear three-bosons/three-site
Hubbard Hamiltonian. Such analytic solutions, involving both the ground and excited states, are needed to
further obtain the characteristic cosinusoidal or sinusoidal expressions for the associated third-, second-,
and first-order momentum correlations.
The following {\it ten\/} primitive kets form a basis that spans the many-body Hilbert space of three spinless
bosonic atoms distributed over three trapping wells:
\begin{align}
\begin{split}
& 1 \rightarrow \ket{111}, \\
& 2 \rightarrow \ket{210}, \; 3 \rightarrow \ket{201}, \; 4 \rightarrow \ket{120}, \\
& 5 \rightarrow \ket{021}, \; 6 \rightarrow \ket{102}, \; 7 \rightarrow \ket{012}, \\
& 8 \rightarrow \ket{300}, \; 9 \rightarrow \ket{030}, \; 10 \rightarrow \ket{003}.
\end{split}
\label{3b-kets}
\end{align}
The kets used above are of a general notation $|n_1,n_2,n_3\rangle$, where $n_i$ (with $i=1,2,3$) denotes the
particle occupancy at the $i$th well.
We note that there is only one primitive ket (No. 1) with all three wells being singly-occupied. The case of
doubly-occupied wells is represented by 6 primitive kets (Nos. 2$-$7). Finally, there are 3 primitive kets
(Nos. 8$-$10) that represent triply-occupied wells.
The Bose-Hubbard Hamiltonian for 3 spinless bosons trapped in 3 wells in a linear arrangement is given by
\begin{align}
H_B=-J(\hat{b}^\dagger_1 \hat{b}_2 + \hat{b}^\dagger_2 \hat{b}_3 + h.c.)+ \frac{U}{2}\sum_{i=1}^3 n_i(n_i-1),
\label{3b-hub}
\end{align}
where $n_i=\hat{b}^\dagger_i \hat{b}_i$ is the number operator at site $i$, $J$ is the hopping (tunneling)
parameter, and the Hubbard $U$ can be positive (repulsive interaction), vanishing (noninteracting), or negative
(attractive interaction).
Using the capabilities of the SNEG \cite{sneg} program in conjunction with the MATHEMATICA \cite{math18}
algebraic language, one can write the following matrix Hamiltonian for the spinless three-boson Hubbard problem:
\begin{widetext}
\begin{align}
\begin{split}
H_b=\left(
\begin{array}{cccccccccc}
0 & 0 & -\sqrt{2} J & -\sqrt{2} J & -\sqrt{2} J & -\sqrt{2}
J & 0 & 0 & 0 & 0 \\
0 & U & -J & -2 J & 0 & 0 & 0 & -\sqrt{3} J & 0 & 0 \\
-\sqrt{2} J & -J & U & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\sqrt{2} J & -2 J & 0 & U & 0 & 0 & 0 & 0 & -\sqrt{3} J &
0 \\
-\sqrt{2} J & 0 & 0 & 0 & U & 0 & -2 J & 0 & -\sqrt{3} J &
0 \\
-\sqrt{2} J & 0 & 0 & 0 & 0 & U & -J & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -2 J & -J & U & 0 & 0 & -\sqrt{3} J \\
0 & -\sqrt{3} J & 0 & 0 & 0 & 0 & 0 & 3 U & 0 & 0 \\
0 & 0 & 0 & -\sqrt{3} J & -\sqrt{3} J & 0 & 0 & 0 & 3 U & 0
\\
0 & 0 & 0 & 0 & 0 & 0 & -\sqrt{3} J & 0 & 0 & 3 U \\
\end{array}
\right)
\end{split}
\label{3b-mat}
\end{align}
\end{widetext}
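As an independent cross-check (ours; not part of the analytic treatment), the matrix of Eq.\ (\ref{3b-mat}) can be regenerated directly from the primitive kets of Eq.\ (\ref{3b-kets}) and the standard bosonic matrix elements of $\hat{b}^\dagger_i \hat{b}_j$. A minimal Python/numpy sketch (variable names are ours):

```python
import numpy as np
from math import sqrt

# Primitive kets |n1 n2 n3> in the order of Eq. (3b-kets)
kets = [(1, 1, 1),
        (2, 1, 0), (2, 0, 1), (1, 2, 0),
        (0, 2, 1), (1, 0, 2), (0, 1, 2),
        (3, 0, 0), (0, 3, 0), (0, 0, 3)]
index = {n: a for a, n in enumerate(kets)}

def hubbard_matrix(J, U):
    """H_B = -J (b1+ b2 + b2+ b3 + h.c.) + (U/2) sum_i n_i (n_i - 1)."""
    H = np.zeros((10, 10))
    for a, n in enumerate(kets):
        H[a, a] = 0.5 * U * sum(ni * (ni - 1) for ni in n)   # on-site interaction
        for i, j in [(0, 1), (1, 0), (1, 2), (2, 1)]:        # b_i+ b_j hopping terms
            if n[j] == 0:
                continue
            m = list(n); m[j] -= 1; m[i] += 1
            # bosonic amplitudes: b_j lowers by sqrt(n_j), b_i+ raises by sqrt(n_i + 1)
            H[index[tuple(m)], a] += -J * sqrt(n[j]) * sqrt(n[i] + 1)
    return H

H = hubbard_matrix(J=1.0, U=2.0)
```

Spot checks: the $(1,3)$ element equals $-\sqrt{2}J$, the $(2,8)$ element equals $-\sqrt{3}J$, and the triply occupied kets carry the diagonal entry $3U$, in agreement with Eq.\ (\ref{3b-mat}).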
The eigenvalues (in units of $J$) of the bosonic matrix Hamiltonian in Eq.\ (\ref{3b-mat}) are:
\begin{align}
\begin{array}{ll}
E_1 = \; { ^6{\cal R} }^b_1 & \;\;\;\;\;\; E_6 = \; { ^3{\cal R} }^b_2\;({\cal U}) \\
E_2 = \; { ^3{\cal R} }^b_1 & \;\;\;\;\;\; E_7 = \; { ^6{\cal R} }^b_4 \\
E_3 = \; { ^6{\cal R} }^b_2 & \;\;\;\;\;\; E_8 = \; { ^6{\cal R} }^b_5\\
E_4 = \; { ^6{\cal R} }^b_3 & \;\;\;\;\;\; E_9 = \; { ^3{\cal R} }^b_3 \\
E_5 = \; {\cal U} \;({ ^3{\cal R} }^b_2) & \;\;\;\;\;\; E_{10} = \; { ^6{\cal R} }^b_6,
\end{array}
\label{eigvalb}
\end{align}
\textcolor{black}{
where ${\cal U}=U/J$.
}
For $E_5$ and $E_6$, the quantities without parentheses apply for ${\cal U} >0$ and those within
parentheses for ${\cal U}<0$. The expressions for the remaining eigenvalues apply for any ${\cal U}$, negative or
positive. ${ ^6{\cal R} }^b_i$, $i=1,\ldots,6$ denote in ascending order (for any ${\cal U}$, negative or positive)
the six real roots of the sixth-order polynomial
\begin{widetext}
\begin{align}
\begin{split}
P^b_6(x) = & x^6 - 9{\cal U} x^5 + (30{\cal U}^2-22) x^4 \\
& + (144{\cal U}-46{\cal U}^3) x^3 + (76 - 314 {\cal U}^2 + 33 {\cal U}^4) x^2 -
(252 {\cal U} - 264 {\cal U}^3 + 9 {\cal U}^5) x - (72 - 180 {\cal U}^2 + 72 {\cal U}^4),
\end{split}
\label{6pb}
\end{align}
\end{widetext}
and ${^3{\cal R} }^b_i$, $i=1,2,3$ denote in ascending order (for any ${\cal U}$, negative or positive) the three real
roots of the third-order polynomial
\begin{align}
P^b_3(x) = x^3 - 5 {\cal U} x^2 + (7 {\cal U}^2 - 8) x + 18 {\cal U} - 3 {\cal U}^3.
\label{3pb}
\end{align}
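As a numerical consistency check (ours), the ten eigenvalues of Eq.\ (\ref{3b-mat}) should coincide with the six roots of $P^b_6$, the three roots of $P^b_3$, and the single eigenvalue ${\cal U}$; a Python sketch at the representative value ${\cal U}=1.7$ (all energies in units of $J$):

```python
import numpy as np

U = 1.7   # Hubbard U in units of J; J = 1 below
r2, r3 = np.sqrt(2.0), np.sqrt(3.0)

# The matrix Hamiltonian of Eq. (3b-mat), entered row by row (J = 1)
H = np.array([
    [0,   0,   -r2, -r2, -r2, -r2,  0,   0,   0,   0  ],
    [0,   U,   -1,  -2,   0,   0,   0,  -r3,  0,   0  ],
    [-r2, -1,   U,   0,   0,   0,   0,   0,   0,   0  ],
    [-r2, -2,   0,   U,   0,   0,   0,   0,  -r3,  0  ],
    [-r2,  0,   0,   0,   U,   0,  -2,   0,  -r3,  0  ],
    [-r2,  0,   0,   0,   0,   U,  -1,   0,   0,   0  ],
    [0,    0,   0,   0,  -2,  -1,   U,   0,   0,  -r3 ],
    [0,   -r3,  0,   0,   0,   0,   0,  3*U,  0,   0  ],
    [0,    0,   0,  -r3, -r3,  0,   0,   0,  3*U,  0  ],
    [0,    0,   0,   0,   0,   0,  -r3,  0,   0,  3*U ],
])
evals = np.sort(np.linalg.eigvalsh(H))

# Coefficients of the sixth-order polynomial P6, Eq. (6pb) ...
p6 = [1, -9*U, 30*U**2 - 22, 144*U - 46*U**3,
      76 - 314*U**2 + 33*U**4,
      -(252*U - 264*U**3 + 9*U**5),
      -(72 - 180*U**2 + 72*U**4)]
# ... and of the third-order polynomial P3, Eq. (3pb)
p3 = [1, -5*U, 7*U**2 - 8, 18*U - 3*U**3]

roots = np.sort(np.concatenate([np.roots(p6).real,
                                np.roots(p3).real, [U]]))
```

A further sanity check is the trace sum rule, ${\rm Tr}\, H_b = 15\,{\cal U}$, shared by the matrix and the root multiset.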
\begin{table}[b]
\caption{\label{tcorr}
\textcolor{black}{
Correspondence of the energy eigenvalues of the Hubbard matrix Hamiltonian [Eq.\ (\ref{3b-mat})] at the double
degeneracies at ${\cal U}=0$; see Fig.\ \ref{feigvalb}.}}
\begin{ruledtabular}
\begin{tabular}{ccc|ccc}
$E_3({\cal U}>0)$ & $\Longleftrightarrow$ & $E_4({\cal U}<0)$ & $E_4({\cal U}>0)$ & $\Longleftrightarrow$ & $E_3({\cal U}<0)$ \\
$E_5({\cal U}>0)$ & $\Longleftrightarrow$ & $E_6({\cal U}<0)$ & $E_6({\cal U}>0)$ & $\Longleftrightarrow$ & $E_5({\cal U}<0)$ \\
$E_7({\cal U}>0)$ & $\Longleftrightarrow$ & $E_8({\cal U}<0)$ & $E_8({\cal U}>0)$ & $\Longleftrightarrow$ & $E_7({\cal U}<0)$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\textcolor{black}{
At ${\cal U}=0$, a smooth crossing of eigenvalues implies the correspondence displayed in TABLE \ref{tcorr},
associated with the double degeneracies $E_3({\cal U}=0) = E_4({\cal U}=0)$, $E_5({\cal U}=0)= E_6({\cal U}=0)$,
and $E_7({\cal U}=0) = E_8({\cal U}=0)$.
}
These remarks are reflected in the choice of online colors (or shading in the
print grayscale version) for the ${\cal U}>0$ and ${\cal U}<0$ segments of the curves in Fig.\
\ref{feigvalb}, where the bosonic eigenvalues listed in Eq.\ (\ref{eigvalb}) are plotted as a function of ${\cal U}$.
Note further that the ordering between $E_4$ and $E_5$ is interchanged for $|{\cal U}| \geq 3\sqrt{2}=4.24264$ [not
visible in Fig.\ \ref{feigvalb}(a) due to the scale of the figure].
\textcolor{black}{
In the following, the corresponding
Hubbard eigenstates are labeled in ascending energy order as
$i=1$, $2$, $3r(4l)$, $4r(3l)$, $5r(6l)$, $6r(5l)$, $7r(8l)$, $8r(7l)$, $9$, $10$,
where ``$r$'' means ``right'' for the region of positive ${\cal U}$ and ``$l$'' means ``left'' for the region of
negative ${\cal U}$.
}
The 10 normalized eigenvectors $\phi^b_i({\cal U})$, with $i=1,\ldots,10$, of the bosonic matrix Hamiltonian in Eq.\
(\ref{3b-mat}) have the general form
\begin{align}
\begin{split}
\phi&^b_i({\cal U})= \\
\{ & {\bf c}_{111}({\cal U}),{\bf c}_{210}({\cal U}),{\bf c}_{201}({\cal U}),{\bf c}_{120}({\cal U}),{\bf c}_{021}({\cal U}),\\
& {\bf c}_{102}({\cal U}),{\bf c}_{012}({\cal U}),{\bf c}_{300}({\cal U}),{\bf c}_{030}({\cal U}),{\bf c}_{003}({\cal U}) \}.
\end{split}
\label{phiU}
\end{align}
Because the algebraic expressions for the ${\bf c}_{ijk}$'s for an arbitrary ${\cal U}$ are very long and complicated,
we explicitly list in this paper the Hubbard eigenvectors only for the characteristic limits of infinite
repulsive and attractive interaction (${\cal U} \rightarrow \pm \infty$) and for the non-interacting case (${\cal U}=0$).
\textcolor{black}{
Specifically, for the reader's convenience, we list in the main text only the Hubbard eigenvectors for the
ground- and first-excited states; see Sec.\ \ref{eigvecuinf} and Sec.\ \ref{eigvecu0}. The eigenvectors for the
remaining 8 excited states are given in Appendix \ref{a11} (for ${\cal U} \rightarrow \pm \infty$) and
Appendix \ref{a12} (for ${\cal U}=0$).
}
\subsection{The infinite repulsive or attractive interaction (${\cal U} \rightarrow \pm \infty$) limit}
\label{eigvecuinf}
For large values of $|{\cal U}|$ (${\cal U} \rightarrow \pm \infty$), the ten bosonic eigenvalues in Eq.\ (\ref{eigvalb})
(in units of $J$) are well approximated by the simpler expressions:
\begin{align}
\begin{split}
E_1^{+\infty} (E_{10}^{-\infty})= &\; -8/{\cal U} + 20/{\cal U}^3 \\
E_2^{+\infty} (E_9^{-\infty}) = &\; {\cal U}\mp\sqrt{5}-3/(4{\cal U}) \\
E_3^{+\infty} (E_8^{-\infty}) = &\; {\cal U}\mp\sqrt{5}+33/(20{\cal U}) \\
E_4^{+\infty} (E_7^{-\infty}) = &\; {\cal U}+1/(5{\cal U}) \\
E_5^{+\infty} (E_6^{-\infty}) = &\; {\cal U} \\
E_6^{+\infty} (E_5^{-\infty}) = &\; {\cal U}\pm\sqrt{5}-3/(4{\cal U}) \\
E_7^{+\infty} (E_4^{-\infty}) = &\; {\cal U}\pm\sqrt{5}+33/(20{\cal U}) \\
E_8^{+\infty} (E_3^{-\infty}) = &\; 3{\cal U}+3/(2{\cal U})-9/(4{\cal U}^3) \\
E_9^{+\infty} (E_2^{-\infty}) = &\; 3{\cal U}+3/(2{\cal U})+3/(4{\cal U}^3) \\
E_{10}^{+\infty} (E_1^{-\infty})= &\; 3{\cal U}+3/{\cal U}+7/(2{\cal U}^3),
\end{split}
\label{eigvalb2}
\end{align}
where the symbols $E_i^{+\infty}$ without parentheses and the upper signs in $\mp$ and $\pm$ refer to the positive
limit ${\cal U} \rightarrow +\infty$, whereas the symbols ($E_i^{-\infty}$) within parentheses and the lower signs
in $\mp$ and $\pm$ refer to the negative limit ${\cal U} \rightarrow -\infty$.
From the above, one sees that for large $|{\cal U}|$ the bosonic eigenvalues are organized in three groups:
a high-energy (low-energy) group of three eigenvalues around $\pm 3 |{\cal U}|$ (triply occupied sites, see below), a
middle-energy group of six eigenvalues around $\pm|{\cal U}|$ (doubly occupied sites, see below), and a single
negative and lowest (positive and highest) eigenvalue approaching zero (singly occupied sites, see below). Fig.\
\ref{feigvalb} illustrates this behavior.
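The accuracy of the expansions in Eq.\ (\ref{eigvalb2}) can be gauged against the exact polynomial roots of Eqs.\ (\ref{6pb}) and (\ref{3pb}); a small Python check (ours; upper signs, at the representative value ${\cal U}=100$):

```python
import numpy as np

U = 100.0   # deep in the U -> +infinity regime (units of J)
p6 = [1, -9*U, 30*U**2 - 22, 144*U - 46*U**3, 76 - 314*U**2 + 33*U**4,
      -(252*U - 264*U**3 + 9*U**5), -(72 - 180*U**2 + 72*U**4)]
p3 = [1, -5*U, 7*U**2 - 8, 18*U - 3*U**3]
r6 = np.sort(np.roots(p6).real)   # exact 6R_1..6, ascending
r3 = np.sort(np.roots(p3).real)   # exact 3R_1..3, ascending

s5 = np.sqrt(5.0)
# Asymptotic forms of Eq. (eigvalb2) corresponding to the sixth-order roots:
# E1, E3, E4, E7, E8, E10 (upper signs)
asym6 = np.sort([-8/U + 20/U**3,
                 U - s5 + 33/(20*U), U + 1/(5*U), U + s5 + 33/(20*U),
                 3*U + 3/(2*U) - 9/(4*U**3), 3*U + 3/U + 7/(2*U**3)])
# ... and to the third-order roots: E2, E6, E9 (upper signs)
asym3 = np.sort([U - s5 - 3/(4*U), U + s5 - 3/(4*U),
                 3*U + 3/(2*U) + 3/(4*U**3)])
```

At this ${\cal U}$ the exact roots and the asymptotic forms agree to within the neglected higher-order terms.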
The corresponding eigenvectors at ${\cal U} \rightarrow +\infty$ and ${\cal U} \rightarrow -\infty$ for the ground and
first-excited states are given by
\begin{align}
\begin{split}
\phi^{b,+\infty}_1 &= \{1,0,0,0,0,0,0,0,0,0\}\\
\phi^{b,-\infty}_1 &= \{0,0,0,0,0,0,0,0,1,0\}
\end{split}
\label{phi1}
\end{align}
\begin{align}
\begin{split}
\phi^{b,+\infty}_2 &=
\left\{0,-\frac{1}{2},-\frac{1}{2\sqrt{5}},-\frac{1}{\sqrt{5}},\frac{1}{\sqrt{5}},\frac{
1}{2 \sqrt{5}},\frac{1}{2},0,0,0\right\}\\
\phi^{b,-\infty}_2 &= \{0,0,0,0,0,0,0,-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}} \}
\end{split}
\label{phi2}
\end{align}
\textcolor{black}{
The eigenvectors for the remaining 8 excited states are listed in Appendix \ref{a11}.
}
Note that the eigenvectors in Eqs.\ (\ref{phi1}) and (\ref{phi2}) and in Appendix \ref{a11} are grouped
in pairs ($+\infty$, $-\infty$), which are displayed using a common equation number.
The eigenvectors at ${\cal U} \rightarrow +\infty$ and ${\cal U} \rightarrow -\infty$ are pairwise related
as follows:
\begin{align}
\begin{array}{ll}
\phi^{b,+\infty}_1 = - \phi^{b,-\infty}_{10} & \;\;\;\;\;\; \phi^{b,+\infty}_6 = - \phi^{b,-\infty}_9 \\
\phi^{b,+\infty}_2 = - \phi^{b,-\infty}_5 & \;\;\;\;\;\; \phi^{b,+\infty}_7 = - \phi^{b,-\infty}_8 \\
\phi^{b,+\infty}_3 = - \phi^{b,-\infty}_4 & \;\;\;\;\;\; \phi^{b,+\infty}_8 = \phi^{b,-\infty}_3 \\
\phi^{b,+\infty}_4 = \phi^{b,-\infty}_7 & \;\;\;\;\;\; \phi^{b,+\infty}_9 = \phi^{b,-\infty}_2 \\
\phi^{b,+\infty}_5 = \phi^{b,-\infty}_6 & \;\;\;\;\;\; \phi^{b,+\infty}_{10} = \phi^{b,-\infty}_1
\end{array}.
\label{phirelUpmI}
\end{align}
The pairs in Eq.\ (\ref{phirelUpmI}) correspond to states with the same absolute eigenvalues $|E_i^{+\infty}|$
and $|E_j^{-\infty}|$ (with $i,j=1,\ldots,10$) given in Eq.\ (\ref{eigvalb2}).
\subsection{The noninteracting (${\cal U} =0$) limit}
\label{eigvecu0}
When ${\cal U}=0$, the polynomial-root eigenvalues listed in Eq.\ (\ref{eigvalb}) simplify to
\begin{align}
\begin{array}{ll}
E_1 = \; - 3 \sqrt{2} & \;\;\;\;\;\; E_6 = \; 0 \\
E_2 = \; - 2 \sqrt{2} & \;\;\;\;\;\; E_7 = \; \sqrt{2} \\
E_3 = \; - \sqrt{2} & \;\;\;\;\;\; E_8 = \; \sqrt{2} \\
E_4 = \; - \sqrt{2} & \;\;\;\;\;\; E_9 = \; 2 \sqrt{2} \\
E_5 = \; 0 & \;\;\;\;\;\; E_{10} = \; 3 \sqrt{2}
\end{array}.
\label{eigvalb3}
\end{align}
The ${\cal U}=0$ Hubbard ground-state eigenvector is given by
\begin{align}
\begin{split}
\phi_1^{b,{\cal U}=0} =\left\{\frac{\sqrt{3}}{4},
\frac{\sqrt{\frac{3}{2}}}{4},
\frac{\sqrt{3}}{8},
\frac{\sqrt{3}}{4},
\frac{\sqrt{3}}{4},
\frac{\sqrt{3}}{8},
\frac{\sqrt{\frac{3}{2}}}{4},
\frac{1}{8},\frac{1}{2\sqrt{2}},
\frac{1}{8}\right\}
\end{split}
\label{eigvecU0st1}
\end{align}
whereas the first-excited state is represented by the eigenvector
\begin{align}
\begin{split}
& \phi_2^{b,{\cal U}=0}= \\
& \left\{0,-\frac{1}{2},-\frac{1}{4
\sqrt{2}},-\frac{1}{2 \sqrt{2}},\frac{1}{2
\sqrt{2}},\frac{1}{4
\sqrt{2}},\frac{1}{2},-\frac{\sqrt{\frac{3}{2}}}{4
},0,\frac{\sqrt{\frac{3}{2}}}{4}\right\}.
\end{split}
\label{eigvecU0st2}
\end{align}
\textcolor{black}{
The eigenvectors for the remaining 8 excited states are listed in Appendix \ref{a12}.
}
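Since the ${\cal U}=0$ Hamiltonian is quadratic, the ground state is a condensate of the three bosons in the lowest single-particle mode of the hopping matrix, which offers an independent route to Eq.\ (\ref{eigvecU0st1}). A short Python illustration (ours):

```python
import numpy as np
from math import factorial, sqrt

# Single-particle hopping matrix for the linear trimer (J = 1)
hop = -np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
w, vec = np.linalg.eigh(hop)
v = vec[:, 0] * np.sign(vec[0, 0])   # lowest mode, energy -sqrt(2): (1/2, 1/sqrt(2), 1/2)

# Condensate of 3 bosons in that mode: c_{n} = sqrt(3!) * prod_i v_i^{n_i} / sqrt(prod_i n_i!)
kets = [(1, 1, 1), (2, 1, 0), (2, 0, 1), (1, 2, 0), (0, 2, 1),
        (1, 0, 2), (0, 1, 2), (3, 0, 0), (0, 3, 0), (0, 0, 3)]
c = np.array([sqrt(6.0) * np.prod(v ** np.array(n))
              / sqrt(np.prod([factorial(x) for x in n])) for n in kets])

# Eq. (eigvecU0st1) for comparison
ref = np.array([sqrt(3)/4, sqrt(1.5)/4, sqrt(3)/8, sqrt(3)/4, sqrt(3)/4,
                sqrt(3)/8, sqrt(1.5)/4, 1/8, 1/(2*sqrt(2)), 1/8])
```

The corresponding ground-state energy is $3 \times (-\sqrt{2}) = -3\sqrt{2}$, as in Eq.\ (\ref{eigvalb3}).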
\begin{figure*}[t]
\includegraphics[width=17.5cm]{3b_3w_fig2.pdf}
\caption{Cuts ($k_3=0$) of 3rd-order momentum correlation maps, $^3{\cal G}_i^{b,+\infty}=
\Phi_i^{b,+\infty}\Phi_i^{b,+\infty,*}$, corresponding to the momentum-space wave functions for three bosons
in three wells [see Eqs.\ (\ref{phibUpmI_1})-(\ref{phibUpmI_2})
and Eqs.\ (\ref{phibUpmI_3})-(\ref{phibUpmI_10}), top lines].
(a) Ground state ($i=1$). (b) First-excited state ($i=2$). (c) Second-excited state ($i=3$).
(d) Third-excited state ($i=4$). (e) Fourth-excited state ($i=5$). (f) Fifth-excited state ($i=6$).
(g) Sixth-excited state ($i=7$). (h) Seventh-excited state ($i=8$). (i) Eighth-excited state ($i=9$).
(j) Ninth-excited state ($i=10$).
The choice of parameters is: interwell distance $d=3.8$ $\mu$m and
spectral width of single-particle distribution in momentum space [see Eq.\ (\ref{psikd})]
being the inverse of $s=0.5$ $\mu$m.
\textcolor{black}{
The correlation functions $^3{\cal G}_i^{b,+\infty}(k_1,k_2,k_3=0)$ (map landscapes) are given in units of
$\mu$m$^3$ according to the color bars on top of each panel, and the momenta $k_1$ and $k_2$ are in units of
1/$\mu$m.
}
The value of the plotted correlation functions was multiplied by a factor of 10 to achieve better contrast
for the map features. 3rd-order momentum correlation maps for the infinite attractive limit are not
explicitly plotted due to the equalities between pairs of the Hubbard eigenvectors at
${\cal U} \rightarrow -\infty$ and ${\cal U} \rightarrow +\infty$; see Eq.\ (\ref{phirelUpmI}) for the detailed
association of states.}
\label{f3rdcorrb}
\end{figure*}
\section{Higher-order correlations in momentum space: Outline of general definitions}
\label{hordcorr}
To motivate our discussion about momentum-space correlation functions, it is convenient to recall that,
usually, a configuration-interaction (CI) calculation (or other exact diagonalization schemes used for
solution of the microscopic many-body Hamiltonian) yields a many-body wave function expressed in position
coordinates.
Then the $N$th-order {\it real space\/} density, $\rho(x_1,x_1^\prime,x_2,x_2^\prime,...,x_N,x_N^\prime)$,
for an $N$-particle system is defined as the product of the many-body wave function
$\Psi(x_1, x_2, \ldots,x_N)$ and its complex conjugate $\Psi^*(x_1^\prime, x_2^\prime, \ldots,x_N^\prime)$
\cite{lowd55}. The $i$th-order density function (with $i \leq N$) is defined as an integral over $\rho$ taken
over the coordinates $x_{i+1},\ldots,x_N$ of $N-i$ particles, i.e.,
\begin{align}
\begin{split}
&\rho_i(x_1,x_1^\prime,x_2,x_2^\prime, \ldots ,x_i,x_i^\prime)=\\
& \int dx_{i+1} \dots dx_N \rho(x_1,x_1^\prime,..,x_i,x_i^\prime,x_{i+1},x_{i+1},...x_N,x_N).
\end{split}
\label{rhoi}
\end{align}
To obtain the $i$th-order real-space correlation, one simply sets the primed coordinates in Eq.\ (\ref{rhoi})
equal to the corresponding unprimed ones,
\begin{align}
^i{\cal G}(x_1,x_2,...,x_i)=\rho_i(x_1,x_1,x_2,x_2,...,x_i,x_i).
\label{g1sp}
\end{align}
Knowing the real-space density, one can obtain the corresponding higher-order momentum correlations through a
Fourier transform \cite{bran17,bran18,yann19.1,alvi12}
\begin{align}
\begin{split}
&^i{\cal G} (k_1,k_2,\ldots,k_i) = \\
& \frac{1}{4\pi^2} \int e^{ i k_1 ( x_1-x_1') } e^{i k_2 ( x_2-x_2')}\ldots
e^{i k_i ( x_i-x_i')} \\
& \times \rho_i (x_1, x_1', x_2, x_2',\ldots, x_i, x_i')
dx_1 dx_1' dx_2 dx_2'\ldots dx_i dx_i'.
\end{split}
\label{tbmc}
\end{align}
In this paper, we directly obtain an expression for the momentum-space $N$-body wave function
corresponding to the Hubbard model Hamiltonian. This circumvents the need for the above Fourier transform.
Instead, consistent with the Fourier-transform relation [Eq.\ (\ref{tbmc}) above],
the highest-order $N$th-order momentum correlation function is given by the modulus square
\begin{align}
^N{\cal G}(k_1,k_2,...,k_N)=|\Phi(k_1,k_2,...,k_N)|^2,
\label{gnmom}
\end{align}
and, successively, any lower $(N-i)$th-order (with $i=1,\ldots,N-1$) momentum correlation is obtained through
an integration of the higher $(N-i+1)$th-order correlation over the $k_{N-i+1}$ momentum.
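This chain (modulus square, then successive integrations) can be exercised numerically on a momentum grid; a rough Python sketch (ours), using the ${\cal U} \rightarrow +\infty$ ground-state wave function derived in Sec.\ \ref{3rdcorrUpmI} below and illustrative parameters:

```python
import numpy as np

s, d = 0.5, 3.8   # width and interwell distance (micrometers)

k = np.linspace(-9.0, 9.0, 121)
dk = k[1] - k[0]
K1, K2, K3 = np.meshgrid(k, k, k, indexing='ij')

# U -> +infinity ground-state wave function, Eq. (phibUpmI_1)
phi = (2 * 2**0.25 / (np.sqrt(3) * np.pi**0.75) * s**1.5
       * np.exp(-(K1**2 + K2**2 + K3**2) * s**2)
       * (np.cos(d*(K1 - K2)) + np.cos(d*(K1 - K3)) + np.cos(d*(K2 - K3))))

g3 = phi**2                 # N-th (third) order: Eq. (gnmom)
g2 = g3.sum(axis=2) * dk    # second order: integrate out k3
g1 = g2.sum(axis=1) * dk    # first order: integrate out k2
norm = g1.sum() * dk        # ~1 (orbital overlaps are negligible for these parameters)
```

The second-order map inherits the $k_1 \leftrightarrow k_2$ symmetry of the third-order one, and the final integration recovers unit normalization.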
\section{Third-order momentum correlations for 3 bosons in 3 wells:
The infinite-interaction limit (${\cal U} \rightarrow \pm \infty$)}
\label{3rdcorrUpmI}
\textcolor{black}{
To derive the all-order momentum correlations, we augment the finite-site Hubbard model as follows:
Each boson in any of the three wells is represented by a single-particle localized orbital having the form
of a displaced Gaussian function \cite{bran17,bran18,yann19.1,yann19.3}, which in the real configuration space
has the form
\begin{equation}
\psi_j(x) = \frac{1}{ (2 \pi)^{1/4} \sqrt{s} } \exp \left[ - \frac{(x-d_j)^2}{4s^2} \right].
\label{psixd}
\end{equation}
In Eq.\ (\ref{psixd}), $d_j$ ($j=1,2,3$) denotes the position of each of the three wells and $2s$ is the
width of the Gaussian function in real configuration space.
}
\textcolor{black}{
In this way, the structure (interwell distances) and the spatial profile of the orbitals of the
trapped particles enter in the augmented Hubbard model.
In momentum space, the corresponding orbital $\psi_j(k)$ is given by the Fourier transform of $\psi_j(x)$,
namely, $\psi_j(k)=(1/\sqrt{2\pi})\int_{-\infty}^\infty \psi_j(x)\exp(ikx)dx$. Performing this Fourier
transform, one finds
\begin{equation}
\psi_j(k) = \frac{2^{1/4}\sqrt{s}}{\pi^{1/4}} e^{-k^2 s^2} e^{i d_j k}.
\label{psikd}
\end{equation}
Naturally, the spectral width of the orbital's profile in momentum space is $1/s$.
}
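The closed form of Eq.\ (\ref{psikd}) is straightforward to confirm by direct numerical quadrature of the Fourier integral; a brief Python check (ours):

```python
import numpy as np

s, dj = 0.5, 3.8            # Gaussian width parameter and well position (micrometers)
x = np.linspace(-40.0, 40.0, 20001)
dx = x[1] - x[0]
psi_x = (2*np.pi)**-0.25 / np.sqrt(s) * np.exp(-(x - dj)**2 / (4*s**2))  # Eq. (psixd)

def psi_k(k):
    """psi_j(k) = (1/sqrt(2 pi)) * int psi_j(x) exp(i k x) dx."""
    return np.sum(psi_x * np.exp(1j * k * x)) * dx / np.sqrt(2*np.pi)

k = np.linspace(-3.0, 3.0, 13)
numeric = np.array([psi_k(kk) for kk in k])
closed = (2**0.25 * np.sqrt(s) / np.pi**0.25
          * np.exp(-k**2 * s**2) * np.exp(1j*dj*k))   # Eq. (psikd)
```

Because the integrand is smooth and rapidly decaying, the simple Riemann sum is accurate here to essentially machine precision.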
\textcolor{black}{
In using orbitals localized on each well, our treatment of the augmented Hubbard trimer is similar to
Coulson's treatment of the Hydrogen molecule \cite{coul41}. In broader terms, our use of localized orbitals
(atomic orbitals) belongs to the general methodology in chemistry known as LCAO-MO (linear combination of atomic
orbitals $-$ molecular orbitals \cite{szabobook,wiki2}).
}
\textcolor{black}{
We stress that the cosinusoidal/sinusoidal dependencies of the momentum correlations derived here [and their
coefficients ${\cal C}$'s, ${\cal B}$'s, and ${\cal A}$'s; see Eqs.\ (\ref{wfbexpr}), (\ref{2ndbexpr}), and (\ref{frstbexpr})
below] do not depend on the precise profile of the atomic orbital, as noted already in Ref.\ \cite{coul41},
where the general symbol $\mathfrak{A}(k)$ was used for the Fourier transform of $\psi_0(x)$ at $d_0=0$.
For the Hydrogen molecule an obvious choice is a Slater-type orbital (see Eqs.\ (35) and (36) in Ref.\
\cite{coul41}). The reason behind this behavior is the so-called shift property \cite{shifttt} of the Fourier
transform, which applies to a displaced profile (centered at $d_j \neq 0$); it states that
\begin{align}
\mathfrak{F}[\psi_j(x)] = \mathfrak{F}[\psi_0(x)] \exp(ikd_j) = \mathfrak{A}(k)\exp(ikd_j),
\label{shift}
\end{align}
where $\mathfrak{F}$ denotes the Fourier-transform operation \cite{shifttt}. The Fourier-transformed profile
$\mathfrak{A}(k)$ at the initial site factors out in all expressions of the momentum correlations.
The Gaussian profile (also used in aforementioned experimental publications \cite{prei19,berg19,bech20,bonn18})
in our paper was used for convenience; it is an obvious approximation for the lowest single-particle
level in a deep potential \cite{note5} approaching a harmonic trap in the framework of experiments on neutral
ultracold atoms \cite{note6}.
}
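The shift property of Eq.\ (\ref{shift}) can likewise be demonstrated numerically for a non-Gaussian profile, e.g., an exponential (Slater-type) one; a small Python sketch (ours; the profile and parameters are arbitrary):

```python
import numpy as np

a, dj = 1.0, 3.8              # decay length of the profile and displacement (illustrative)
x = np.linspace(-60.0, 60.0, 48001)
dx = x[1] - x[0]
prof0 = np.exp(-np.abs(x) / a)        # psi_0(x): Slater-type profile at the origin
profd = np.exp(-np.abs(x - dj) / a)   # psi_j(x): the same profile displaced to d_j

def ft(f, k):
    """Discrete approximation to F[f](k) = (1/sqrt(2 pi)) * int f(x) exp(i k x) dx."""
    return np.sum(f * np.exp(1j * k * x)) * dx / np.sqrt(2*np.pi)

k = 1.3
lhs = ft(profd, k)                      # F[psi_j](k)
rhs = ft(prof0, k) * np.exp(1j*k*dj)    # A(k) exp(i k d_j), Eq. (shift)
```

The displaced profile thus only multiplies the Fourier-transformed profile at the origin by the phase factor $\exp(ikd_j)$, independently of the profile's shape.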
\textcolor{black}{
For a discussion of the comparison, for the entire range of interatomic interactions, ${\cal U}$, between exact
microscopic diagonalization of the Hamiltonian (configuration interaction, CI) calculations, results of the
augmented Hubbard-model, and measurements from trapped ultracold-atoms experiments, see Ref.\ \cite{note8}.
}
With the help of the single-boson orbitals in Eq.\ (\ref{psikd}), each basis ket in Eq.\ (\ref{3b-kets}) can
be mapped onto a wave function of the three single-particle momenta $k_1$, $k_2$, and $k_3$. For each ket, this
wave function naturally is a permanent built from the three bosonic orbitals. For a general eigenvector
solution of the Hubbard Hamiltonian, the corresponding wave function $\Phi^b_i(k_1, k_2, k_3)$
(with $i=1,\ldots,10$) in momentum space is a sum over such permanents, and the associated third-order
correlation function is simply the modulus square, i.e.,
\begin{align}
^3{\cal G}^b_i (k_1,k_2,k_3)=|\Phi^b_i(k_1,k_2,k_3)|^2.
\label{3gdef}
\end{align}
Because the expressions for the third-order correlations can become very long and cumbersome,
for bookkeeping purposes we found it advantageous to display and characterize instead the three-body wave functions
$\Phi^b_i(k_1,k_2,k_3)$ themselves. Then the associated third-order correlations can be calculated using
Eq.\ (\ref{3gdef}).
Below, in Eqs.\ (\ref{phibUpmI_1})-(\ref{phibUpmI_2}), we list without commentary the momentum-space wave
functions, $\Phi^{b,\pm\infty}_1(k_1,k_2,k_3)$ and $\Phi^{b,\pm\infty}_2(k_1,k_2,k_3)$, associated with the
Hubbard eigenvectors, $\phi^{b,\pm\infty}_1$ and $\phi^{b,\pm\infty}_2$, respectively
[see Eqs.\ (\ref{phi1})-(\ref{phi2})], at the limits of infinite repulsive or attractive strength (i.e., for
${\cal U} \rightarrow \pm \infty$). The commentary integrating these wave functions into the broader scheme of their
evolution as a function of any interaction strength $-\infty < {\cal U} < +\infty$ is left for Sec.\ \ref{s3rdanyu}
below. The three-body wave functions for the remaining 8 excited states are listed in Appendix \ref{a1}.
Note that the wave functions in Eqs.\ (\ref{phibUpmI_1}) and (\ref{phibUpmI_2}) below and in Appendix \ref{a1} are grouped
in pairs ($+\infty$, $-\infty$), which are displayed using a common equation number.
Assuming that the wells are linearly placed at $d_1 = -d$, $d_2 = 0$, and $d_3 = d$,
these momentum-space wave functions at ${\cal U} \rightarrow \pm \infty$ are as follows:
\begin{widetext}
\begin{align}
\begin{split}
\Phi^{b,+\infty}_1(k_1, k_2, k_3) & =
\frac { 2 \times 2^{1/4} } { \sqrt{3} \pi^{3/4} } s^{3/2} e^{ -(k_1^2+k_2^2+k_3^2)s^2 }
[ \cos( d(k_1-k_2) ) + \cos( d(k_1-k_3) ) + \cos( d(k_2-k_3) ) ], \\
\Phi^{b,-\infty}_1(k_1, k_2, k_3) & =
\left( \frac{2}{\pi} \right)^{3/4} s^{3/2} e^{ -(k_1^2+k_2^2+k_3^2)s^2 }.
\end{split}
\label{phibUpmI_1}
\end{align}
\begin{align}
\begin{split}
\Phi^{b,+\infty}_2(& k_1, k_2, k_3) =
\frac{ i 2^{3/4} } { 5 \sqrt{3} \pi ^{3/4} } s^{3/2} e^{ -(k_1^2+k_2^2+k_3^2)s^2 } \\
& \times \left[ \sqrt{5} \sin (d(-k_1+k_2+k_3))+\sqrt{5} \sin (d(k_1+k_2-k_3)) \right.
+\sqrt{5} \sin (d(k_1-k_2+k_3)) \\
&\;\; +5 \sin (d(k_1+k_2)) +5 \sin (d(k_1+k_3)) + 5 \sin (d(k_2+k_3)) +2 \sqrt{5} \sin (d k_1)
\left. +2 \sqrt{5} \sin (d k_2)+2 \sqrt{5} \sin (dk_3)\right], \\
\Phi^{b,-\infty}_2(& k_1, k_2, k_3) =
\frac{ 2 i 2^{1/4} } {\pi ^{3/4} } s^{3/2} e^{ -(k_1^2+k_2^2+k_3^2)s^2 }
\sin (d(k_1+k_2+k_3)).
\end{split}
\label{phibUpmI_2}
\end{align}
\end{widetext}
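The closed forms above reflect the permanent structure of the mapped kets; e.g., $\Phi^{b,+\infty}_1$ is the symmetrized $\ket{111}$ product state, i.e., the $3\times 3$ permanent of the orbitals of Eq.\ (\ref{psikd}) divided by $\sqrt{3!}$. A brute-force Python verification (ours):

```python
import numpy as np
from itertools import permutations

s, d = 0.5, 3.8               # width and interwell distance (micrometers)
dj = np.array([-d, 0.0, d])   # well positions d_1, d_2, d_3

def orb(j, k):
    """Momentum-space orbital on well j, Eq. (psikd)."""
    return (2**0.25 * np.sqrt(s) / np.pi**0.25
            * np.exp(-k**2 * s**2) * np.exp(1j*dj[j]*k))

def phi1_perm(k1, k2, k3):
    """Symmetrized |111>: permanent of orb_i(k_j), divided by sqrt(3!)."""
    kk = (k1, k2, k3)
    return sum(np.prod([orb(i, kk[p[i]]) for i in range(3)])
               for p in permutations(range(3))) / np.sqrt(6.0)

def phi1_closed(k1, k2, k3):
    """Eq. (phibUpmI_1), upper line (U -> +infinity ground state)."""
    return (2 * 2**0.25 / (np.sqrt(3) * np.pi**0.75) * s**1.5
            * np.exp(-(k1**2 + k2**2 + k3**2) * s**2)
            * (np.cos(d*(k1 - k2)) + np.cos(d*(k1 - k3)) + np.cos(d*(k2 - k3))))
```

The imaginary parts of the six permanent terms cancel pairwise, leaving the purely cosinusoidal closed form.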
Plots for the corresponding 3rd-order momentum correlations $^3{\cal G}^{b,+\infty}_i (k_1,k_2,k_3)$,
with $i=1,\ldots,10$ [see Eq.\ (\ref{3gdef})], are presented in Fig.\ \ref{f3rdcorrb}.
We note that we do not explicitly plot the 3rd-order momentum correlations for the limit of infinite
attraction (${\cal U} \rightarrow -\infty$) because $^3{\cal G}^{b,-\infty}_i (k_1,k_2,k_3)=
^3{\cal G}^{b,+\infty}_j (k_1,k_2,k_3)$ for the pairs $(i=1,j=10)$, $(i=2,j=9)$, $(i=3,j=8)$, $(i=4,j=3)$,
$(i=5,j=2)$, $(i=6,j=5)$, $(i=7,j=4)$, $(i=8,j=7)$, $(i=9,j=6)$, and $(i=10,j=1)$ due to the equalities
between eigenvectors listed in Eq.\ (\ref{phirelUpmI}).
{\it Explicit expression for the third-order correlation $^3{\cal G}^{b,+\infty}_1 (k_1,k_2,k_3)$.\/}
Because of the special role played by the ground state $\phi^{b,+\infty}_1=\ket{111}$ at infinite repulsion, we
explicitly list below the corresponding third-order correlation function, i.e.,
\begin{align}
\begin{split}
&^3{\cal G}^{b,+\infty}_1 (k_1,k_2,k_3)=|\Phi^{b,+\infty}_1(k_1,k_2,k_3)|^2=\\
& \frac{2 \sqrt{2}}{3 \pi^{3/2}} s^3 e^{-2 s^2 (k_1^2+k_2^2+k_3^2)}
\big\{3 + 2 \cos (d (k_1+k_2-2 k_3)) \\
& +2 \cos (d (k_2+ k_3 -2 k_1))
+2 \cos (d (k_1+k_3 - 2 k_2))\\
& +\cos (2 d (k_1- k_2))
+2 \cos (d (k_1- k_2))\\
& +\cos (2 d (k_1- k_3))
+2 \cos (d (k_1- k_3))\\
& +\cos (2 d (k_2- k_3))
+2 \cos (d (k_2-k_3)) \big\}.
\end{split}
\label{3gUIst1}
\end{align}
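Expanding the modulus square of Eq.\ (\ref{phibUpmI_1}) via product-to-sum identities reproduces the ten cosine terms of Eq.\ (\ref{3gUIst1}); a quick numerical confirmation (ours, at arbitrary sample momenta):

```python
import numpy as np

s, d = 0.5, 3.8   # width and interwell distance (micrometers)

def phi1(k1, k2, k3):
    """Eq. (phibUpmI_1), U -> +infinity ground state."""
    return (2 * 2**0.25 / (np.sqrt(3) * np.pi**0.75) * s**1.5
            * np.exp(-(k1**2 + k2**2 + k3**2) * s**2)
            * (np.cos(d*(k1 - k2)) + np.cos(d*(k1 - k3)) + np.cos(d*(k2 - k3))))

def g3_expanded(k1, k2, k3):
    """The explicit cosinusoidal expansion of Eq. (3gUIst1)."""
    pref = (2*np.sqrt(2)/(3*np.pi**1.5) * s**3
            * np.exp(-2*s**2*(k1**2 + k2**2 + k3**2)))
    return pref * (3
        + 2*np.cos(d*(k1 + k2 - 2*k3)) + 2*np.cos(d*(k2 + k3 - 2*k1))
        + 2*np.cos(d*(k1 + k3 - 2*k2))
        + np.cos(2*d*(k1 - k2)) + 2*np.cos(d*(k1 - k2))
        + np.cos(2*d*(k1 - k3)) + 2*np.cos(d*(k1 - k3))
        + np.cos(2*d*(k2 - k3)) + 2*np.cos(d*(k2 - k3)))
```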
It is worth noting that the expression (\ref{3gUIst1}) above for 3 bosons is similar to the third-order
correlation for the triplet states (with total spin $S=3/2$ and spin projections $S_z=3/2$ or $S_z=1/2$) for
3-fermions trapped in 3 wells, except that in the fermionic case the sign in front of the cosine terms with
only 2 momenta in the cosine argument is negative; see Refs.\ \cite{yann19.3,prei19}.
\section{Third-order momentum correlations for 3 bosons in 3 wells:
The non-interacting limit ${\cal U}=0$}
\label{3rdcorrU0}
Assuming that the wells are linearly placed at $d_1=-d$, $d_2=0$, and $d_3=d$, the noninteracting ground-state
three-boson wave function in momentum space is given by
\begin{widetext}
\begin{align}
\begin{split}
& \frac{(2 \pi )^{3/4}}{s^{3/2}} e^{(k_1^2+k_2^2+k_3^2)s^2} \Phi_1^{b,{\cal U}=0}(k_1,k_2,k_3) =
1 + 2 \sqrt{2} \cos (d {k_1}) \cos (d {k_2}) \cos (d {k_3}) \\
& +2 \cos (d {k_1}) \cos(d {k_2})
+2 \cos (d {k_1}) \cos (d {k_3})+\sqrt{2} \cos (d {k_1})
+2 \cos (d {k_2}) \cos (d {k_3})+\sqrt{2} \cos (d {k_2})
+\sqrt{2} \cos (d {k_3}).
\end{split}
\label{phibU0_1}
\end{align}
The above also takes the form of the general expression (\ref{wfbexpr}) below, i.e.,
\begin{align}
\begin{split}
& \frac{(2 \pi )^{3/4}}{s^{3/2}} e^{(k_1^2+k_2^2+k_3^2)s^2} \Phi_1^{b,{\cal U}=0}(k_1,k_2,k_3) =
1 + \sqrt{2} \big( \cos (d k_1) + \cos (d k_2) + \cos (d k_3) \big) \\
& + \cos [d (k_1-k_2)] + \cos [d (k_1-k_3)] + \cos [d (k_2-k_3)]
+ \cos [d (k_1+k_2)] + \cos [d (k_1+k_3)] + \cos [d (k_2+k_3)] \\
& + \frac{1}{\sqrt{2}} \big(\cos [d (k_1+k_2-k_3)] + \cos [d (k_1-k_2+k_3)] + \cos [d (-k_1+k_2+k_3)]
+ \cos [d (k_1+k_2+k_3)]\big).
\end{split}
\label{phibU0_1_2}
\end{align}
For the first-excited state, the three-boson noninteracting wave function in momentum space at ${\cal U}=0$ was
found to be
\begin{align}
\begin{split}
& \frac{-i (2 \pi )^{3/4} \sqrt{3}}{s^{3/2}} e^{(k_1^2+k_2^2+k_3^2)s^2} \Phi_2^{b,{\cal U}=0}(k_1,k_2,k_3) =
2 \big(\sin (d k_1) + \sin (d k_2) + \sin (d k_3) \big) \\
& + 2 \sqrt{2} \big(\sin [d (k_1+k_2)] + \sin [d (k_1+k_3)] + \sin [d (k_2+k_3)] \big) \\
& + \sin [d (k_1-k_2+k_3)] + \sin [d (-k_1+k_2+k_3)] + \sin [d (k_1+k_2-k_3)]
+ 3 \sin [d (k_1+k_2+k_3)].
\end{split}
\label{phibU0_2}
\end{align}
\end{widetext}
The noninteracting three-body wave functions for the remaining 8 excited states are listed in Appendix
\ref{a2}.
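We note in passing that the equivalence of the two forms (\ref{phibU0_1}) and (\ref{phibU0_1_2}) rests only on product-to-sum trigonometric identities; a short numerical check (ours; the common Gaussian prefactor cancels and is omitted):

```python
import numpy as np

d = 3.8   # interwell distance (micrometers); value is illustrative

def form1(k1, k2, k3):
    """Right-hand side of Eq. (phibU0_1)."""
    c1, c2, c3 = np.cos(d*k1), np.cos(d*k2), np.cos(d*k3)
    return (1 + 2*np.sqrt(2)*c1*c2*c3 + 2*c1*c2 + 2*c1*c3 + 2*c2*c3
            + np.sqrt(2)*(c1 + c2 + c3))

def form2(k1, k2, k3):
    """Right-hand side of Eq. (phibU0_1_2)."""
    return (1 + np.sqrt(2)*(np.cos(d*k1) + np.cos(d*k2) + np.cos(d*k3))
            + np.cos(d*(k1 - k2)) + np.cos(d*(k1 - k3)) + np.cos(d*(k2 - k3))
            + np.cos(d*(k1 + k2)) + np.cos(d*(k1 + k3)) + np.cos(d*(k2 + k3))
            + (np.cos(d*(k1 + k2 - k3)) + np.cos(d*(k1 - k2 + k3))
               + np.cos(d*(-k1 + k2 + k3)) + np.cos(d*(k1 + k2 + k3))) / np.sqrt(2))
```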
\section{Third-order momentum correlations for 3 bosons in 3 wells
as a function of the strength of the interaction ${\cal U}$}
\label{s3rdanyu}
The general cosinusoidal (or sinusoidal) expression for the third-order correlations is
too cumbersome and lengthy to display in print.
Instead, as mentioned earlier, we give here the general expression for the three-boson wave
function $\Phi_i^b(k_1,k_2,k_3)$ (with $i=1,\ldots,10$) calculated in the momentum space. Then the third-order
momentum correlations are obtained simply as the modulus square of this wave function [see Eq.\ (\ref{3gdef})].
Using MATHEMATICA, we found that the general cosinusoidal (or sinusoidal) expression of the three-body wave
function has the form:
\begin{align}
\begin{split}
& \Phi_j^b(k_1,k_2,k_3) = p^j s^{3/2} e^{-(k_1^2+k_2^2+k_3^2)s^2} \\
& \times \{ {\cal C}_0^j + {\cal C}_1^j ({\cal F}(dk_1)+{\cal F}(dk_2)+{\cal F}(dk_3)) \\
& + {\cal C}_{1-1}^j ( {\cal F}[d(k_1-k_2)] + {\cal F}[d(k_1-k_3)] + {\cal F}[d(k_2-k_3)] )\\
& + {\cal C}_{1+1}^j ( {\cal F} [d(k_1+k_2)] + {\cal F} [d(k_1+k_3)] + {\cal F} [d(k_2+k_3)] )\\
& + {\cal C}_{1+1-1}^j ( {\cal F}[d(k_1+k_2-k_3)]+{\cal F}[d(k_1-k_2+k_3)] \\
& + {\cal F}[d(-k_1+k_2+k_3)] ) + {\cal C}_{1+1+1}^j {\cal F}[d(k_1+k_2+k_3)] \},
\end{split}
\label{wfbexpr}
\end{align}
where $p^j=1$ and ${\cal F}$ stands for ``$\cos$'' for the states
$j=1,3r(4l),4r(3l),7r(8l),8r(7l),10$; $p^j=i$ (here $i^2=-1$; it is not an index) and
${\cal F}$ stands for ``$\sin$'' for the remaining states $j=2,5r(6l),6r(5l),9$.
\textcolor{black}{The ${\cal C}_0$ coefficient
denotes an ${\cal F}$-independent term. The subscripts $1$, $1\pm1$, and $1+1\pm1$ in the other ${\cal C}$
coefficients reflect the number of terms in the argument of the ${\cal F}$ functions and
the sign in front of each of them (without consideration of any ordering of the $k_1$, $k_2$, and $k_3$ momentum
variables).}
\begin{figure}[b]
\includegraphics[width=7.2cm]{3b_3w_fig3.pdf}\\
\caption{
\textcolor{black}{
The six different ${\cal C}$-coefficients (dimensionless) [see Eq.\ (\ref{wfbexpr})] for the 2
lowest-in-energy eigenstates of 3 bosons trapped in 3 linearly arranged wells as a function of ${\cal U}$
(dimensionless).
}
(a) Ground state ($i=1$). (b) First-excited state ($i=2$). See text for a detailed description.
For a description of the remaining eight excited states, see Appendix \ref{a3rd}.
The choice of online colors is as follows:
${\cal C}_0 \rightarrow$ Violet, ${\cal C}_1 \rightarrow$ Green, ${\cal C}_{1-1} \rightarrow$ Light Blue,
${\cal C}_{1+1} \rightarrow$ Brown, ${\cal C}_{1+1-1} \rightarrow$ Yellow, ${\cal C}_{1+1+1} \rightarrow$ Dark Blue.
\textcolor{black}{
For the print grayscale version, the positioning (referred to as \#$n$, with $n=1,2,3,\dots$)
of the curves from top to bottom at the point ${\cal U} = -6$
is as follows: (a) ${\cal C}_0 \rightarrow$ \#1, ${\cal C}_1 \rightarrow$ \#3, ${\cal C}_{1-1} \rightarrow$ \#5,
${\cal C}_{1+1} \rightarrow$ \#4, ${\cal C}_{1+1-1} \rightarrow$ \#6, ${\cal C}_{1+1+1} \rightarrow$ \#2 and
(b) ${\cal C}_0=0$, ${\cal C}_1 \rightarrow$ \#3, ${\cal C}_{1-1}=0$,
${\cal C}_{1+1} \rightarrow$ \#2, ${\cal C}_{1+1-1} \rightarrow$ \#4, ${\cal C}_{1+1+1} \rightarrow$ \#1.}
}
\label{ccoef}
\end{figure}
In general, there are 14 terms (the constant one plus 13 cosinusoidal or sinusoidal ones) and 6 distinct
${\cal U}$-dependent coefficients ${\cal C}$
for a given state in expression (\ref{wfbexpr}).
We note that ${\cal C}_0 \equiv 0$ and ${\cal C}_{1-1} \equiv 0$ for any ${\cal U}$ for all
the states of the second group above for which ${\cal F} \equiv \sin$.
The ${\cal C}$-coefficients for the 2 lowest-in-energy eigenstates are plotted in Fig.\
\ref{ccoef} as a function of ${\cal U}$. The corresponding explicit numerical values can be found in a data file
included in the supplemental material \cite{supp}.
\begin{figure*}[t]
\includegraphics[width=17.5cm]{3b_3w_fig4.pdf}
\caption{Cuts ($k_3=0$) of 3rd-order momentum correlation maps for the first-excited state of 3 bosons in 3 wells
[see Eqs.\ (\ref{3gdef}) and (\ref{wfbexpr}) with $i=2$].
(a) ${\cal U}=-200$. (b) ${\cal U}=-10$. (c) ${\cal U}=0$. (d) ${\cal U}=10$. (e) ${\cal U}=200$.
The choice of parameters is: interwell distance $d=7$ $\mu$m and spectral width of single-particle distribution
in momentum space [see Eq.\ (\ref{psikd})] being the inverse of $s=0.35$ $\mu$m. The correlation functions
$^3{\cal G}_i^b(k_1,k_2,k_3=0)$ (map landscapes) are given in units of $\mu$m$^3$
according to the color bars on top of each panel, and the momenta $k_1$ and $k_2$ are in units of 1/$\mu$m.
The value of the plotted correlation functions was multiplied by a factor of 10 to achieve better contrast
for the map features.}
\label{f3rdcorrbst2}
\end{figure*}
\begin{table*}[t]
\caption{\label{bgsu0}
\textcolor{black}{
The 9 distinct coefficients at ${\cal U}=0$ present in Eq.\ (\ref{2ndbexpr}) in the case of the ground state.}}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
${\cal B}^{1,{\cal U}=0}_0$ & ${\cal B}^{1,{\cal U}=0}_1$ & ${\cal B}^{1,{\cal U}=0}_2$ & ${\cal B}^{1,{\cal U}=0}_{1-1}$ & ${\cal B}^{1,{\cal U}=0}_{2-2}$ &
${\cal B}^{1,{\cal U}=0}_{2-1}$ & ${\cal B}^{1,{\cal U}=0}_{1+1}$ & ${\cal B}^{1,{\cal U}=0}_{2+2}$ & ${\cal B}^{1,{\cal U}=0}_{2+1}$ \\ \hline
$2/\pi=$ & $2\sqrt{2}/\pi=$ & $1/\pi=$ & $2/\pi=$ & $1/(4\pi)=$ &
$1/(\sqrt{2}\pi)=$ & $2/\pi=$ & $1/(4\pi)=$ & $1/(\sqrt{2}\pi)=$ \\
$ 0.63662$ & $0.90032$ & $0.31831$ & $0.63662$ & $0.07958$ &
$0.22508$ & $0.63662$ & $0.07958$ & $0.22508$
\end{tabular}
\end{ruledtabular}
\end{table*}
{\it The ground state (state denoted as $i=1$ for $-\infty < {\cal U} < +\infty$):\/}
For ${\cal U} \rightarrow -\infty$, it is seen from the panel (a) in Fig.\ \ref{ccoef} that only the constant
coefficient ${\cal C}^{1,-\infty}_0=(2/\pi)^{3/4}=0.7127$ survives in expression (\ref{wfbexpr}); the ground state
in momentum space is given by the second expression in Eq.\ (\ref{phibUpmI_1}).
It is a simple Gaussian distribution associated with a Bose-Einstein condensate, reflecting
the fact that all three bosons are localized in the middle well and occupy the same orbital; the corresponding
Hubbard eigenvector is given by $\phi^{b,-\infty}_1$ [second line in Eq.\ (\ref{phi1})] which contains only a
single component from the primitive kets listed in Eq.\ (\ref{3b-kets}), i.e., the basis ket No. 9
$\rightarrow |030\rangle$.
For ${\cal U} =0$, all 6 coefficients ${\cal C}^{1,{\cal U}=0}$ are present, and their numerical values from frame (a) in
Fig.\ \ref{ccoef} agree with the corresponding algebraic expressions for $\Phi_1^{b,{\cal U}=0}(k_1,k_2,k_3)$ in
Eq.\ (\ref{phibU0_1_2}).
For ${\cal U} \rightarrow +\infty$, only the coefficient ${\cal C}^{1,+\infty}_{1-1}=
2\times2^{1/4}/(\sqrt{3}\pi^{3/4}) = 0.5819$ survives in expression (\ref{wfbexpr});
see again panel (a) in Fig.\ \ref{ccoef}. The ground state in momentum space comprises three cosinusoidal
terms and is given by the first expression in Eq.\ (\ref{phibUpmI_1}). This form corresponds to the Hubbard
eigenvector $\phi^{b,+\infty}_1$ [first line in Eq.\ (\ref{phi1})] which contains only a single component from
the primitive kets listed in Eq.\ (\ref{3b-kets}), i.e., the basis ket No. 1 $\rightarrow |111\rangle$.
As mentioned earlier, the primitive ket $|111\rangle$ represents a case where all three wells are singly occupied.
Thus it enables a direct mapping to quantum-optics investigations of the frequency-resolved interference of
three temporally distinguishable photons prepared in three separate fibers (tritter)
\cite{tamm19} [recall the analogies
\cite{yann19.1}: particle momentum ($k$) $\leftrightarrow$ photon frequency ($\omega/c$) and interwell distance
($d$) $\leftrightarrow$ time-delay between single photons ($\tau c$)].
{\it The first excited state (state denoted as $i=2$ for $-\infty < {\cal U} < +\infty$):\/}
For ${\cal U} \rightarrow -\infty$ only the coefficient
${{\cal C}}^{2,-\infty}_{1+1+1}= 2\times 2^{1/4}/\pi^{3/4} = 1.0079$ survives
in expression (\ref{wfbexpr}) [see frame (b) in Fig.\ \ref{ccoef}];
the corresponding state, $\phi^{b,-\infty}_2$ [second line in Eq.\ (\ref{phi2})], is a NOON state of the form
$(-|300\rangle + |003\rangle)/\sqrt{2}$, and the corresponding wave function in momentum
space is given by the second expression in Eq.\ (\ref{phibUpmI_2}), {\it which includes a single sin term only\/}.
For ${\cal U}=0$, four coefficients are present, namely ${{\cal C}}^{2,{\cal U}=0}_1$, ${{\cal C}}^{2,{\cal U}=0}_{1+1}$,
${{\cal C}}^{2,{\cal U}=0}_{1+1-1}$, and ${{\cal C}}^{2,{\cal U}=0}_{1+1+1}$. Their numerical values from frame (b) in
Fig.\ \ref{ccoef} agree with the corresponding algebraic expressions for $\Phi_2^{b,{\cal U}=0}(k_1,k_2,k_3)$ in
Eq.\ (\ref{phibU0_2}).
For ${\cal U} \rightarrow +\infty$ only three coefficients,
${{\cal C}}^{2,+\infty}_1=2\times2^{3/4}/(\sqrt{15}\pi^{3/4})=0.3680$,
${{\cal C}}^{2,+\infty}_{1+1}=2^{3/4}/(\sqrt{3}\pi^{3/4})=0.4115$, and
${{\cal C}}^{2,+\infty}_{1+1-1}=2^{3/4}/(\sqrt{15}\pi^{3/4})=0.1840$,
survive in expression (\ref{wfbexpr}) [see frame (b)
in Fig.\ \ref{ccoef}]; the corresponding state, $\phi^{b,+\infty}_2$ [first line in Eq.\ (\ref{phi2})] consists
of all 6 primitive kets [see Eq.\ (\ref{3b-kets})] representing exclusively doubly-occupied wells, and the
corresponding wave function in momentum space has 9 {\it sinusoidal\/} terms and is given by the first
expression in Eq.\ (\ref{phibUpmI_2}).
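The limiting values of the ${\cal C}$-coefficients quoted above follow directly from their closed forms. As an
illustrative numerical cross-check (a Python sketch, not part of the original MATHEMATICA derivation), one can
evaluate the six quoted closed forms and compare with the rounded decimals given in the text:

```python
import math

pi = math.pi

# Closed-form limiting values of the C-coefficients quoted in the text
C1_mI_0     = (2 / pi) ** 0.75                               # C^{1,-inf}_0
C1_pI_1m1   = 2 * 2 ** 0.25 / (math.sqrt(3) * pi ** 0.75)    # C^{1,+inf}_{1-1}
C2_mI_1p1p1 = 2 * 2 ** 0.25 / pi ** 0.75                     # C^{2,-inf}_{1+1+1}
C2_pI_1     = 2 * 2 ** 0.75 / (math.sqrt(15) * pi ** 0.75)   # C^{2,+inf}_1
C2_pI_1p1   = 2 ** 0.75 / (math.sqrt(3) * pi ** 0.75)        # C^{2,+inf}_{1+1}
C2_pI_1p1m1 = 2 ** 0.75 / (math.sqrt(15) * pi ** 0.75)       # C^{2,+inf}_{1+1-1}

# Rounded to 4 decimals, these reproduce 0.7127, 0.5819, 1.0079,
# 0.3680, 0.4115, and 0.1840, respectively.
```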
In the main text of this paper, we restrict the discussion of the ${\cal U}$-evolution of the ${\cal C}({\cal U})$
coefficients in Eq.\ (\ref{wfbexpr}) to the two lowest-in-energy states. Indeed, the ground state and the first
excited state are the natural candidates for initial experiments. For example, for the case of two and three ultracold fermions
($^6$Li atoms), see Ref.\ \cite{berg19} and Ref.\ \cite{prei19}, respectively; for recent experiments focused on
the ground state of large bosonic Hubbard systems, see Refs.\ \cite{grei02} and \cite{gerb05.2} ($^{87}$Rb atoms)
and Ref.\ \cite{clem18,clem19} ($^4$He$^*$ atoms). In the case of trapped ultracold atoms other excited states
are in principle accessible. Thus, in anticipation of future experimental activity, we complete in Appendix
\ref{a3rd} the description of the details of the ${\cal U}$-evolution of the ${\cal C}({\cal U})$ coefficients
for the remaining eight excited states.
Fig.\ \ref{f3rdcorrbst2} illustrates for the first-excited state the ${\cal U}$-evolution of the
third-order correlation maps described by expressions (\ref{3gdef}) and (\ref{wfbexpr}) when $i=2$. The maps for
5 characteristic values of ${\cal U}$ are plotted, namely, ${\cal U}=-200$, $-10$, $0$, $10$, and $200$.
Corresponding illustrations for the ground state are left for Sec.\ \ref{sign}.
\begin{figure}[t]
\includegraphics[width=7.2cm]{3b_3w_fig5.pdf}\\
\caption{The ${\cal B}$-coefficients (dimensionless) [see Eq.\ (\ref{2ndbexpr})] for the 2 lowest-in-energy
eigenstates of 3 bosons trapped in 3 linearly arranged wells as a function of the interaction strength ${\cal U}$
(dimensionless). (a) Ground
state ($i=1$). (b) First-excited state ($i=2$). See text for a detailed description.
The choice of online colors is as follows:
${\cal B}_0 \rightarrow$ Constant (Violet), ${\cal B}_1 \rightarrow$ Second Violet,
${\cal B}_2 \rightarrow$ Green, ${\cal B}_{1-1} \rightarrow$ Light Blue, ${\cal B}_{2-2} \rightarrow$ Brown,
${\cal B}_{2-1} \rightarrow$ Yellow, ${\cal B}_{1+1} \rightarrow$ Dark Blue,
${\cal B}_{2+2} \rightarrow$ Red, ${\cal B}_{2+1} \rightarrow$ Black.
For the print grayscale version, the positioning (referred to as \#$n$, with $n=1,2,3,\dots$)
of the curves from top to bottom at the point ${\cal U} = +2$
is as follows: (a) ${\cal B}_0 {\rm (constant)} \rightarrow$ \#3, ${\cal B}_1 \rightarrow$ \#1, ${\cal B}_2 \rightarrow$ \#5,
${\cal B}_{1-1} \rightarrow$ \#2, ${\cal B}_{2-2} \rightarrow$ \#8, ${\cal B}_{2-1} \rightarrow$ \#6,
${\cal B}_{1+1} \rightarrow$ \#4, ${\cal B}_{2+2} \rightarrow$ \#9, ${\cal B}_{2+1} \rightarrow$ \#7 and
(b) ${\cal B}_0 {\rm (constant)} \rightarrow$ \#1, ${\cal B}_1 \rightarrow$ \#2, ${\cal B}_2 \rightarrow$ \#6,
${\cal B}_{1-1} \rightarrow$ \#3, ${\cal B}_{2-2} \rightarrow$ \#5, ${\cal B}_{2-1} \rightarrow$ \#4,
${\cal B}_{1+1} \rightarrow$ \#7, ${\cal B}_{2+2} \rightarrow$ \#8, ${\cal B}_{2+1} \rightarrow$ \#9.
For a description of the remaining eight excited states, see Appendix \ref{a2nd}.
}
\label{bcoef}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=17.5cm]{3b_3w_fig6.pdf}
\caption{2nd-order momentum correlation maps for the first-excited state of 3 bosons in 3 wells
[see Eq.\ (\ref{2ndbexpr}) with $i=2$].
(a) ${\cal U}=-200$. (b) ${\cal U}=-10$. (c) ${\cal U}=0$. (d) ${\cal U}=10$. (e) ${\cal U}=200$.
The choice of parameters is: interwell distance $d=7$ $\mu$m and spectral width of single-particle
distribution in momentum space [see Eq.\ (\ref{psikd})] being the inverse of $s=0.35$ $\mu$m.
The correlation functions $^2{\cal G}_i^{b}(k_1,k_2)$ (map landscapes) are given in units of $\mu$m$^2$
according to the color bars on top of each panel, and the momenta $k_1$ and $k_2$ are in units of 1/$\mu$m.
The value of the plotted correlation functions was
multiplied by a factor of 10 to achieve better contrast for the map features.}
\label{f2ndcorrbst2}
\end{figure*}
\begin{table*}[t]
\caption{\label{bst2u0}
\textcolor{black}{
The 7 distinct coefficients at ${\cal U}=0$ present in Eq.\ (\ref{2ndbexpr}) in the case of the 1st-excited state.}}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
${\cal B}^{2,{\cal U}=0}_0$ & ${\cal B}^{2,{\cal U}=0}_1$ & ${\cal B}^{2,{\cal U}=0}_2$ & ${\cal B}^{2,{\cal U}=0}_{1-1}$ & ${\cal B}^{2,{\cal U}=0}_{2-2}$ &
${\cal B}^{2,{\cal U}=0}_{2-1}$ & ${\cal B}^{2,{\cal U}=0}_{1+1}$ & ${\cal B}^{2,{\cal U}=0}_{2+2}$ & ${\cal B}^{2,{\cal U}=0}_{2+1}$ \\ \hline
$2/\pi=$ & $4\sqrt{2}/(3\pi)=$ & 0 & $4/(3\pi)=$ & $1/(12\pi)=$ &
$1/(3\sqrt{2}\pi)=$ & 0 & $-7/(12\pi)=$ & $-1/(\sqrt{2}\pi)=$ \\
$ 0.63662$ & $0.60021$ & ~~ & $0.42441$ & $0.026526$ &
$0.075026$ & ~~ & $-0.185681$ & $-0.22508$
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[t]
\caption{\label{bst2uinf}
\textcolor{black}{
The 9 distinct coefficients at ${\cal U} \rightarrow +\infty$ present in Eq.\ (\ref{2ndbexpr}) in the case of the
1st-excited state.}}
\begin{ruledtabular}
\begin{tabular}{ccccccccc}
${\cal B}^{2,+\infty}_0$ & ${\cal B}^{2,+\infty}_1$ & ${\cal B}^{2,+\infty}_2$ & ${\cal B}^{2,+\infty}_{1-1}$ &
${\cal B}^{2,+\infty}_{2-2}$ & ${\cal B}^{2,+\infty}_{2-1}$ & ${\cal B}^{2,+\infty}_{1+1}$ &
${\cal B}^{2,+\infty}_{2+2}$ & ${\cal B}^{2,+\infty}_{2+1}$ \\ \hline
$2/\pi$ & $2\sqrt{5}/(3\pi)$ & $-2/(5\pi)$ & $26/(15\pi)$ & $2/(15\pi)$ &
$2/(3\sqrt{5}\pi)$ & $-4/(5\pi)$ & $-1/(3\pi)$ & $-2/(\sqrt{5}\pi)$
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{Second-order momentum correlations for 3 bosons in 3 wells
as a function of the strength of the interaction ${\cal U}$}
\label{s2ndanyu}
The second-order correlations are obtained through an integration of the third-order ones over the third
momentum variable $k_3$, i.e.,
\begin{align}
^2{\cal G}_i^{b}(k_1,k_2)=\int^{\infty}_{-\infty}\; ^3{\cal G}_i^{b}(k_1,k_2,k_3)dk_3,
\label{2nd}
\end{align}
with $i=1,\ldots,10$.
Using MATHEMATICA and neglecting the terms that vanish as $e^{-\gamma d^2/s^2}$ (for arbitrary $\gamma >0$ and
$d^2/s^2 \gg 1$), we found that the second-order correlations are given by the following general expression
\begin{align}
\begin{split}
& ^2{\cal G}_i^{b}(k_1,k_2)= s^2 e^{-2(k_1^2+k_2^2)s^2} \\
& \times \{ {\cal B}_0^i + {\cal B}_{1}^i (\cos(dk_1)+\cos(dk_2)) \\
& + {\cal B}_{2}^i (\cos(2dk_1)+\cos(2dk_2)) \\
& + {\cal B}_{1-1}^i \cos[d(k_1-k_2)] + {\cal B}_{2-2}^i \cos[2d(k_1-k_2)] \\
& + {\cal B}_{2-1}^i (\cos[d(k_1-2k_2)]+\cos[d(2k_1-k_2)])\\
& + {\cal B}_{1+1}^i \cos[d(k_1+k_2)] + {\cal B}_{2+2}^i \cos[2d(k_1+k_2)] \\
& + {\cal B}_{2+1}^i (\cos[d(k_1+2k_2)]+\cos[d(2k_1+k_2)]) \}.
\end{split}
\label{2ndbexpr}
\end{align}
\textcolor{black}{The ${\cal B}_0$ coefficient denotes a $\cos$-independent term. The subscripts $1$, $2$, $1\pm1$,
$2\pm1$, and $2\pm2$ in the other ${\cal B}$ coefficients reflect the number of terms in the argument of the
$\cos$ functions (one or two) and the factor of $\pm1$ or $\pm2$ in front of $k_1$ or $k_2$ (without
consideration of any ordering of $k_1$ and $k_2$).}
Including the constant term, there are 13 terms (the constant plus 12 cosinusoidal ones), but only 9 distinct
coefficients in Eq.\ (\ref{2ndbexpr}). The first coefficient above is a constant, i.e., ${\cal B}^i_0=2/\pi \approx 0.63662$ for all
ten eigenstates. The remaining 8 ${\cal B}$-coefficients in Eq.\ (\ref{2ndbexpr}) are ${\cal U}$-dependent.
These ${\cal U}$-dependent ${\cal B}$-coefficients for the 2 lowest-in-energy eigenstates are plotted in Fig.\
\ref{bcoef} as a function of ${\cal U}$. The corresponding explicit numerical values can be found in a data file
included in the supplemental material \cite{supp}.
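The neglect of terms vanishing as $e^{-\gamma d^2/s^2}$ rests on the Gaussian-times-cosine integral identity
$\int_{-\infty}^{\infty} e^{-2k^2s^2}\cos(dk)\,dk = \sqrt{\pi/(2s^2)}\,e^{-d^2/(8s^2)}$. The following Python
sketch (illustrative only; not part of the original MATHEMATICA derivation) verifies the identity numerically and
shows how strongly such cross terms are suppressed for the parameters used in the figures:

```python
import numpy as np

s = 0.35                               # width parameter (micrometers)
k = np.linspace(-25.0, 25.0, 200001)   # integration grid
dk = k[1] - k[0]

def damped_cos_integral(d):
    """Numerically evaluate  int exp(-2 k^2 s^2) cos(d k) dk."""
    return np.sum(np.exp(-2.0 * k**2 * s**2) * np.cos(d * k)) * dk

def closed_form(d):
    return np.sqrt(np.pi / (2.0 * s**2)) * np.exp(-d**2 / (8.0 * s**2))

# Moderate separation: the identity holds and the integral is appreciable.
assert np.isclose(damped_cos_integral(1.0), closed_form(1.0), rtol=1e-6)

# The figure parameters (d = 7 um, s = 0.35 um): the cross term is ~1e-21,
# negligible next to the constant-term integral sqrt(pi/2)/s ~ 3.58.
assert closed_form(7.0) < 1e-20
```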
{\it The ground state (state denoted as $i=1$ for $-\infty < {\cal U} < +\infty$):\/}
For ${\cal U} \rightarrow -\infty$, only the constant term ${\cal B}^1_0$ survives; see frame (a) in Fig.\
\ref{bcoef}. The ground state is the triply occupied middle well [see the Hubbard eigenvector in the second line
of Eq.\ (\ref{phi1})]. In this case, the second-order correlation function is
\begin{align}
^2{\cal G}_1^{b,-\infty}(k_1,k_2)= \frac{2}{\pi}s^2 e^{-2(k_1^2+k_2^2)s^2}.
\label{3b2ndggsUmI}
\end{align}
\textcolor{black}{
In the noninteracting case (${\cal U}=0$), for which the Hubbard eigenvector is given by Eq.\ (\ref{eigvecU0st1}),
all 13 cosinusoidal terms and 9 distinct coefficients (listed in TABLE \ref{bgsu0}) are present in Eq.\
(\ref{2ndbexpr}), in agreement with frame (a) of Fig.\ \ref{bcoef}.
}
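The entries of TABLE \ref{bgsu0} can be reproduced with a one-line check per coefficient; the following Python
sketch (illustrative, outside the original MATHEMATICA workflow) confirms that the quoted decimals match the
closed forms:

```python
import math

pi = math.pi

# Algebraic forms vs quoted decimals in the U = 0 ground-state table
table = [
    (2 / pi,                  0.63662),   # B_0, B_{1-1}, B_{1+1}
    (2 * math.sqrt(2) / pi,   0.90032),   # B_1
    (1 / pi,                  0.31831),   # B_2
    (1 / (4 * pi),            0.07958),   # B_{2-2}, B_{2+2}
    (1 / (math.sqrt(2) * pi), 0.22508),   # B_{2-1}, B_{2+1}
]
for exact, quoted in table:
    assert abs(exact - quoted) < 5e-6
```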
For ${\cal U} \rightarrow +\infty$, three terms survive, including the constant one; see frame (a) in Fig.\
\ref{bcoef}. The ground state is now that of all three wells being singly occupied, and the
second-order correlation function acquires a simple expression
\begin{align}
\begin{split}
^2{\cal G}_1^{b,+\infty}& (k_1,k_2) = \frac{2}{3\pi} s^2 e^{-2(k_1^2+k_2^2)s^2} \{ 3 \\
& + 2 \cos[d(k_1-k_2)] + \cos[2d(k_1-k_2)] \}.
\end{split}
\label{3b2ndggsUI}
\end{align}
It is interesting to note that the second-order correlation function for three fermions with parallel spins
trapped in three wells in the limit ${\cal U} \rightarrow +\infty$ is given by the same expression as that in
Eq.\ (\ref{3b2ndggsUI}), but with the 2 and 1 coefficients in front of the $\cos[d(k_1-k_2)]$ and
$\cos[2d(k_1-k_2)]$ terms being replaced by their negatives, $-2$ and $-1$, respectively (see Eq.\ (9) and
TABLE I (row for $i=3$) in Ref.\ \cite{yann19.3}). This naturally is a reflection of the different quantum
statistics between bosons and fermions.
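This sign flip translates into opposite behaviors at equal momenta: maximal bunching for bosons versus full Pauli
antibunching for spin-polarized fermions. A minimal numerical sketch (illustrative Python; the fermionic
expression below simply implements the sign rule just stated):

```python
import numpy as np

d, s = 7.0, 0.35   # interwell distance and width parameter (micrometers)

def g2_boson(k1, k2):
    """Eq. (3b2ndggsUI): 2nd-order correlation of 3 bosons at U -> +infinity."""
    return (2.0 / (3.0 * np.pi)) * s**2 * np.exp(-2.0 * (k1**2 + k2**2) * s**2) * (
        3.0 + 2.0 * np.cos(d * (k1 - k2)) + np.cos(2.0 * d * (k1 - k2)))

def g2_fermion(k1, k2):
    """Same expression with the cosine coefficients negated (parallel spins)."""
    return (2.0 / (3.0 * np.pi)) * s**2 * np.exp(-2.0 * (k1**2 + k2**2) * s**2) * (
        3.0 - 2.0 * np.cos(d * (k1 - k2)) - np.cos(2.0 * d * (k1 - k2)))

k = 0.3
uncorrelated = (2.0 / np.pi) * s**2 * np.exp(-4.0 * k**2 * s**2)  # constant term only

assert np.isclose(g2_fermion(k, k), 0.0, atol=1e-15)   # Pauli antibunching
assert np.isclose(g2_boson(k, k) / uncorrelated, 2.0)  # bosonic bunching
```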
Fig.\ \ref{f2ndcorrbst2} illustrates for the first-excited state the ${\cal U}$-evolution of the second-order
correlation maps described by expression (\ref{2ndbexpr}) when $i=2$. The maps for 5 specific values of ${\cal U}$
are plotted, namely, ${\cal U}=-200$, $-10$, $0$, $10$, and $200$.
{\it The first excited state (state denoted as $i=2$ for $-\infty < {\cal U} < +\infty$):\/}
For ${\cal U} \rightarrow -\infty$ only the constant term, ${\cal B}^2_0=2/\pi$, survives; the corresponding state
is a NOON state of the form $(-|300\rangle + |003\rangle)/\sqrt{2}$. In this case, the second-order
correlation function is again
\begin{align}
^2{\cal G}_2^{b,-\infty}(k_1,k_2)= \frac{2}{\pi}s^2 e^{-2(k_1^2+k_2^2)s^2}.
\label{3b2ndgst2UmI}
\end{align}
\textcolor{black}{
In the noninteracting case (${\cal U}=0$), for which the Hubbard eigenvector is given by Eq.\ (\ref{eigvecU0st2}), 10
cosinusoidal terms and 7 distinct coefficients (listed in TABLE \ref{bst2u0}) are present in Eq.\ (\ref{2ndbexpr}),
in agreement with frame (b) of Fig.\ \ref{bcoef}.
}
\textcolor{black}{
For ${\cal U} \rightarrow +\infty$, all 13 terms survive in expression (\ref{2ndbexpr});
the corresponding state is given by the first expression in Eq.\ (\ref{phi2}).
For this case, we give the 9 distinct coefficients in TABLE \ref{bst2uinf}.
}
These results are in agreement with the ${\cal U}$-dependence portrayed in frame (b) of Fig.\ \ref{bcoef}.
For a description of the remaining eight excited states, see Appendix \ref{a2nd}.
\begin{figure}[t]
\includegraphics[width=7.2cm]{3b_3w_fig7.pdf}
\caption{The ${\cal A}$-coefficients (dimensionless) [see Eq.\ (\ref{frstbexpr})] for the 2 lowest-in-energy
eigenstates of 3 bosons trapped in 3 linear wells as a function of the interaction strength ${\cal U}$ (dimensionless).
(a) Ground state ($i=1$). (b) First-excited state ($i=2$). See text for a detailed description.
The choice of online colors is as follows:
${\cal A}_0 \rightarrow$ Constant (Light Blue), ${\cal A}_1 \rightarrow$ Violet, ${\cal A}_2 \rightarrow$ Green.
For the print grayscale version, excluding the top constant ${\cal A}_0$ horizontal line, the positioning of the two
remaining curves in both frames is: ${\cal A}_1 \rightarrow$ upper curve, ${\cal A}_2 \rightarrow$ lower curve.
For a description of the remaining eight excited states, see Appendix \ref{a1st}.
}
\label{acoef}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=17.5cm]{3b_3w_fig8.pdf}
\caption{1st-order momentum correlation plots for the first-excited state of 3 bosons in 3 wells.
From left to right: (a) ${\cal U}=-200$, (b) ${\cal U}=-10$, (c) ${\cal U}=0$, (d) ${\cal U}=10$, and (e) ${\cal U}=200$
[see Eq. (\ref{frstbexpr}) with $i=2$].
The correlation functions $^1{\cal G}_i^b(k)$ (vertical axes) are given in units of $\mu$m, and
the momenta $k$ are in units of 1/$\mu$m.
The choice of parameters is: interwell distance $d=7$ $\mu$m and width of single-particle distribution in
momentum space [see Eq.\ (\ref{psikd})]
\textcolor{black}{
being the inverse of $s=0.35$ $\mu$m.
}
}
\label{f1stcorrbst2}
\end{figure*}
\section{First-order momentum correlations for 3 bosons in 3 wells
as a function of the strength of the interaction ${\cal U}$}
\label{s1stanyu}
The first-order correlations are obtained through an integration of the second-order ones [see Eq.\
(\ref{2ndbexpr})] over the second momentum variable $k_2$, i.e.,
\begin{align}
^1{\cal G}_i^{b}(k)=\int^{\infty}_{-\infty}\; ^2{\cal G}_i^{b}(k,k_2)dk_2,
\label{frst}
\end{align}
with $i=1,\ldots,10$.
Exploiting the computational abilities of MATHEMATICA and neglecting terms that vanish as $e^{-\gamma d^2/s^2}$
(for arbitrary $\gamma >0$ and $d^2/s^2 \gg 1$), one can find that the first-order correlations are given by the
following general expression
\begin{align}
^1{\cal G}_i^{b}(k)= s e^{-2 k^2 s^2} \{ {\cal A}^i_0 + {\cal A}^i_1 \cos(dk) + {\cal A}^i_2 \cos(2dk) \}.
\label{frstbexpr}
\end{align}
The coefficient ${\cal A}^i_0=\sqrt{2/\pi} \approx 0.797885$ above is ${\cal U}$-independent for all ten eigenstates. The remaining two
coefficients in Eq.\ (\ref{frstbexpr}), ${\cal A}^i_1$ and ${\cal A}^i_2$, are ${\cal U}$-dependent for 9 out of the ten
eigenstates. These ${\cal U}$-dependent ${\cal A}$-coefficients for the 2 lowest-in-energy eigenstates are
plotted as a function of ${\cal U}$ in Fig.\ \ref{acoef}.
The corresponding explicit numerical values can be found in a data file included in the supplemental material
\cite{supp}.
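The value of ${\cal A}^i_0$ can be cross-checked against ${\cal B}^i_0$: integrating the constant term of Eq.\
(\ref{2ndbexpr}) over $k_2$ multiplies it by $\sqrt{\pi/(2s^2)}$, and the resulting first-order correlation
integrates to unity (the cosine terms being exponentially damped for $d \gg s$). A numerical sketch of both
statements (illustrative Python, not part of the original MATHEMATICA derivation):

```python
import numpy as np

s = 0.35
k = np.linspace(-25.0, 25.0, 200001)
dk = k[1] - k[0]
gauss = np.sum(np.exp(-2.0 * k**2 * s**2)) * dk   # = sqrt(pi/(2 s^2))

# B_0 -> A_0: the constant term of the 2nd-order correlation, traced over k2
B0 = 2.0 / np.pi
A0 = B0 * s * gauss
assert np.isclose(A0, np.sqrt(2.0 / np.pi), rtol=1e-9)

# The first-order correlation integrates to unity: the A_0 term alone gives
# sqrt(2/pi) * s * sqrt(pi/(2 s^2)) = 1, the cosine terms being negligible.
norm = np.sum(A0 * s * np.exp(-2.0 * k**2 * s**2)) * dk
assert np.isclose(norm, 1.0, rtol=1e-9)
```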
{\it The ground state (state denoted as $i=1$ for $-\infty < {\cal U} < +\infty$):\/}
For ${\cal U} \rightarrow -\infty$, it is seen from frame (a) in Fig.\ \ref{acoef} that only the constant
coefficient ${\cal A}^1_0$ survives in expression (\ref{frstbexpr}), i.e., the first-order correlation
(single-particle density) in momentum space is devoid of any oscillatory structure, being given simply by a
Gaussian distribution function,
\begin{align}
^1{\cal G}_1^{b,-\infty}(k)= \sqrt{ \frac{2}{\pi}} s e^{-2 k^2 s^2}.
\label{frstggsUmI}
\end{align}
This structureless distribution corresponds to a photonic triple-slit experiment where Young's \cite{youn04}
``which way'' question, related to the source of the particle detected
with a time-of-flight measurement, can be answered with a 100\%
certainty as being one single well (zero quantum fluctuations in the single-particle occupation number per site).
Indeed, the corresponding ground-state Hubbard eigenvector is given by $\phi^{b,-\infty}_1$ [second line in Eq.\
(\ref{phi1})] which contains only one triply-occupied component from the primitive kets listed in Eq.\
(\ref{3b-kets}), i.e., the basis ket No. 9 $\rightarrow |030\rangle$.
For the non-interacting case (${\cal U}=0$), all 3 coefficients survive [see frame (a) in Fig.\ \ref{acoef}];
specifically one has:
\begin{align}
^1{\cal G}_1^{b,{\cal U}=0}(k)=\sqrt{ \frac{2}{\pi} } s e^{-2 k^2 s^2} \{1 + \sqrt{2} \cos(dk) +
\frac{1}{2} \cos(2dk) \}.
\label{frstggsU0}
\end{align}
Expression (\ref{frstggsU0}) exhibits a highly oscillatory interference pattern. It corresponds to the ground
state given by the Hubbard eigenvector in Eq.\ (\ref{eigvecU0st1}), which is often described as a bosonic
superfluid. Indeed the quantum fluctuations in the single-particle occupation number per site are strongest and
the single-particle bosonic orbitals are maximally delocalized over all three sites.
For ${\cal U} \rightarrow +\infty$, it is seen from frame (a) in Fig.\ \ref{acoef} that again only the
${\cal U}$-independent coefficient ${\cal A}^1_0$ survives in expression (\ref{frstbexpr}), i.e., the first-order correlation
(single-particle density) in momentum space is devoid of any oscillatory structure, being given simply by a
Gaussian distribution function like in Eq.\ (\ref{frstggsUmI}), i.e.,
\begin{align}
^1{\cal G}_1^{b,+\infty}(k) = {^1{\cal G}}_1^{b,-\infty}(k).
\label{frstggsUI}
\end{align}
Again, this structureless distribution corresponds to a photonic triple-slit experiment in which Young's
\cite{youn04} ``which way'' question can be answered with 100\%
certainty as being one single well (zero quantum fluctuations in the single-particle occupation number per site).
Indeed, the corresponding ground-state Hubbard eigenvector is given by $\phi^{b,+\infty}_1$ [first line in Eq.\
(\ref{phi1})] which contains only the singly-occupied component from the primitive kets listed in Eq.\
(\ref{3b-kets}), i.e., the basis ket No. 1 $\rightarrow |111\rangle$. The implications of the above results
encoded in Eqs.\ (\ref{frstggsUmI}), (\ref{frstggsU0}), and (\ref{frstggsUI}) regarding phase transitions will be
discussed below in Sec.\ \ref{sign}.
{\it The first excited state (state denoted as $i=2$ for $-\infty < {\cal U} < +\infty$):\/}
For ${\cal U} \rightarrow -\infty$, it is seen from frame (b) in Fig.\ \ref{acoef} that only the constant
coefficient ${\cal A}^2_0$ survives in expression (\ref{frstbexpr}), i.e., the first-order correlation
(single-particle density) in momentum space is devoid of any oscillatory structure, being given simply by a
Gaussian distribution function,
\begin{align}
^1{\cal G}_2^{b,-\infty}(k)= \sqrt{ \frac{2}{\pi}} s e^{-2 k^2 s^2}.
\label{frstgUmIst2}
\end{align}
In this case, the structureless distribution does not correspond to zero quantum fluctuations in the
single-particle occupation number per site (see the detailed discussion in Sec.\ \ref{sign} below).
Indeed, the corresponding Hubbard eigenvector is given by $\phi^{b,-\infty}_2$ [second line in Eq.\
(\ref{phi2})] which is a NOON state spread over two sites, i.e., it is a superposition of the two basis
kets No. 8 $\rightarrow |300\rangle$ and No. 10 $\rightarrow |003\rangle$.
For the non-interacting case (${\cal U}=0$), 2 coefficients survive [see frame (b) in Fig.\ \ref{acoef}];
specifically one has:
\begin{align}
^1{\cal G}_2^{b,{\cal U}=0}(k)=\sqrt{ \frac{2}{\pi} } s e^{-2 k^2 s^2} \{1 + \frac{2\sqrt{2}}{3} \cos(dk)\}.
\label{frstgU0st2}
\end{align}
Expression (\ref{frstgU0st2}) exhibits a highly oscillatory interference pattern. It corresponds to the
state given by the Hubbard eigenvector in Eq.\ (\ref{eigvecU0st2}).
For ${\cal U} \rightarrow +\infty$, all 3 coefficients survive [see frame (b) in Fig.\ \ref{acoef}], one of them
being negative; specifically one has:
\begin{align}
^1{\cal G}_2^{b,+\infty}(k)=\sqrt{ \frac{2}{\pi} } s e^{-2 k^2 s^2} \{1 + \frac{\sqrt{5}}{3} \cos(dk)
-\frac{1}{5} \cos(2dk) \}.
\label{frstgUIst2}
\end{align}
Expression (\ref{frstgUIst2}) exhibits a highly oscillatory interference pattern. It corresponds to the
state given by the Hubbard eigenvector in the first line of Eq.\ (\ref{phi2}), which consists exclusively
of double-single occupancy components [basis kets No. 2 to No. 7; see Eq.\ (\ref{3b-kets})].
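Eq.\ (\ref{frstgUIst2}) can be cross-checked against TABLE \ref{bst2uinf}: tracing Eq.\ (\ref{2ndbexpr}) over
$k_2$ multiplies each surviving coefficient by $\sqrt{\pi/2}$ (neglecting the exponentially damped cross terms),
so that ${\cal A}_1 = \sqrt{\pi/2}\,{\cal B}_1$ and ${\cal A}_2 = \sqrt{\pi/2}\,{\cal B}_2$. A quick numerical
verification (illustrative Python):

```python
import numpy as np

# B-coefficients of the first-excited state at U -> +infinity (table values)
B1 = 2.0 * np.sqrt(5.0) / (3.0 * np.pi)
B2 = -2.0 / (5.0 * np.pi)

# Tracing over k2 multiplies each surviving coefficient by sqrt(pi/2)
A1 = np.sqrt(np.pi / 2.0) * B1
A2 = np.sqrt(np.pi / 2.0) * B2

# Compare with the 1st-order result: sqrt(2/pi) {1 + (sqrt(5)/3) cos - (1/5) cos2}
pref = np.sqrt(2.0 / np.pi)
assert np.isclose(A1, pref * np.sqrt(5.0) / 3.0)
assert np.isclose(A2, -pref / 5.0)
```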
Fig.\ \ref{f1stcorrbst2} illustrates for the first-excited state the ${\cal U}$-evolution of the first-order
correlations described by expression (\ref{frstbexpr}) when $i=2$. The cases for 5 characteristic values
of ${\cal U}$ are plotted, namely, ${\cal U}=-200$, $-10$, $0$, $10$, and $200$.
For a description of the remaining eight excited states, see Appendix \ref{a1st}.
\begin{figure*}[t]
\includegraphics[width=17.5cm]{3b_3w_fig9.pdf}
\caption{Momentum correlation plots and maps for the ground state of 3 bosons in 3 wells.
Top row (a,d,g,j,m): 1st-order correlations $^1{\cal G}_i^b(k)$ (vertical axes) in units of $\mu$m.
Middle row (b,e,h,k,n): 2nd-order correlations $^2{\cal G}_i^b(k_1,k_2)$ in units of $\mu$m$^2$
according to the color bars on top of each panel.
Bottom row (c,f,i,l,o): 3rd-order (cuts at $k_3=0$) correlations $^3{\cal G}_i^b(k_1,k_2,k_3=0)$
in units of $\mu$m$^3$ according to the color bars on top of each panel.
The momenta $k$, $k_1$, and $k_2$ are in units of 1/$\mu$m. From left to right column:
${\cal U}=-200$, $-10$, $0$, $10$, and $300$ [see Eqs. (\ref{frstbexpr}) and (\ref{2ndbexpr})
with $i=1$, as well as Eqs.\ (\ref{3gdef}) and (\ref{wfbexpr}) with $i=1$].
The choice of parameters is: interwell distance $d=7$ $\mu$m and
spectral width of single-particle distribution in momentum space [see Eq.\ (\ref{psikd})]
being the inverse of $s=0.35$ $\mu$m.
The value of the plotted correlation functions in the bottom two
rows was multiplied by a factor of 10 to achieve better contrast for the map features.}
\label{corrbst1}
\end{figure*}
\textcolor{black}{
\section{Signatures of emergent quantum phase transitions}
\label{sign}
}
The system of 3 bosons in 3 wells is a building block of bulk-size systems containing a large number of bosons
(e.g., $^{87}$Rb or $^4$He$^*$ atoms) in 3D, 2D, and 1D optical lattices. Such bulk-like systems have been
available for some time, and several of their physical aspects have been explored experimentally
\cite{grei02,gerb05,gerb05.2,clem18,clem19,bloc05}, accompanied by theoretical studies \cite{seng05,triv09}.
In particular, of direct interest to this paper are the observations, obtained through time-of-flight
measurements, of the superfluid to Mott insulator phase transition \cite{grei02,gerb05,gerb05.2,clem18,clem19}
(in 3D lattices), and of the second-order particle interference \cite{bloc05}
(in 1D lattices) in analogy with a quantal extension of Hanbury Brown--Twiss-type optical interference.
The detailed algebraic analysis of all-order correlations presented earlier for the system of 3 bosons in 3 wells
provides the tools for exploring these major physical aspects (quantum phase transitions and quantum-optics
analogies) in the context of a finite-size system. In this respect, it is a first step towards the deciphering of
the evolution of these aspects as the system size increases from a few particles to the thermodynamic limit. In
this section, we analyze the signatures for quantum phase transitions that appear already in the case of a finite
system as small as 3 bosons.
We begin by collecting in a single figure (Fig.\ \ref{corrbst1}), for the ground state of the 3-bosons-in-3-wells
system, all three levels of correlations as a function of the interaction strength ${\cal U}$ (with ${\cal U}=-200$, $-10$,
$0$, $10$, and $300$). For large ${\cal U}$ (${\cal U}=300$, describing very strong repulsive interparticle interaction),
the system's ground-state Hubbard eigenvector is very close to
the single ket No. 1 $\rightarrow |111\rangle$ [see $\phi_1^{b,+\infty}$ in Eq.\ (\ref{phi1})] which describes
exclusively singly-occupied sites. For 3 bosons in 3 wells, the state $|111\rangle$ is the analog of the Mott
insulator phase, familiar from bulk systems.
The associated three-body wave function is well approximated by the permanent
$\Phi^{b,+\infty}_1(k_1,k_2,k_3)$ [see Eq.\ (\ref{phibUpmI_1})] formed from the three localized orbitals
$\psi_j(k)$ in Eq.\ (\ref{psikd}).
A crucial observation is that the corresponding single-particle momentum density
(first-order correlation) portrayed
in frame (m) of Fig.\ \ref{corrbst1} (in top row) is structureless and devoid of any oscillatory pattern, in
contrast to fully developed oscillations present in the single-particle density of the non-interacting ground
state [see frame (g) in top row of Fig.\ \ref{corrbst1}]. As was the case with the bulk systems, this
structureless pattern in the first-order correlation can thus be used as a signature of the Mott insulator even in
the case of a small system.
In analogy with the interpretation for bulk systems, the appearance of oscillations in the non-interacting case
can be associated with the spreading of the single-particle orbitals over all three sites (three wells).
Namely, for ${\cal U}=0$, the lowest energy single-particle wave function of the tight-binding Hamiltonian (in matrix
representation)
\begin{align}
H_{b,{\rm TB}}^{\rm sp} =
-J \left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0 \\
\end{array}
\right)
\label{hbsp}
\end{align}
is a molecular orbital which is expressed as a coherent linear superposition of all three localized atomic
orbitals $\psi_j(k)$ [with $j=1,2,3$, see Eq.\ (\ref{psikd})], namely
\begin{align}
\begin{split}
\psi_{\rm MO}(k)& =\frac{\psi_1(k)}{2} + \frac{\psi_2(k)}{\sqrt{2}} + \frac{\psi_3(k)}{2} \\
& = \frac{2^{1/4}\sqrt{s}}{\pi^{1/4}} e^{-k^2 s^2}
\left( \frac{e^{-idk}}{2} + \frac{1}{\sqrt{2}} + \frac{e^{idk}}{2} \right).
\end{split}
\label{mo}
\end{align}
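The molecular orbital of Eq.\ (\ref{mo}) is indeed the lowest eigenvector of the single-particle tight-binding
matrix in Eq.\ (\ref{hbsp}). A minimal numerical check (illustrative Python):

```python
import numpy as np

J = 1.0
H = -J * np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0]])

vals, vecs = np.linalg.eigh(H)         # eigenvalues returned in ascending order
assert np.isclose(vals[0], -np.sqrt(2.0) * J)

mo = vecs[:, 0] * np.sign(vecs[1, 0])  # fix the arbitrary overall sign
assert np.allclose(mo, [0.5, 1.0 / np.sqrt(2.0), 0.5])
```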
Then the three-body wave function is constructed by triply occupying this molecular orbital, i.e., it is
given by the Bose-Einstein-condensate product
\begin{align}
\Phi^{b,{\cal U}=0}_1(k_1,k_2,k_3) = \psi_{\rm MO}(k_1)\psi_{\rm MO}(k_2)\psi_{\rm MO}(k_3).
\label{phibU0_1_3}
\end{align}
Eq.\ (\ref{phibU0_1_3}) above
equals expression (\ref{phibU0_1}) derived by us earlier (see Sec.\ \ref{3rdcorrU0}) as the ${\cal U}=0$
limit of the solution of the Bose-Hubbard Hamiltonian [Eq.\ (\ref{3b-hub})], obtained through the matrix
representation [Eq.\ (\ref{3b-mat})] in the 10-ket basis [Eq.\ (\ref{3b-kets})] for the problem of three bosons
trapped in three-wells.
Because of the molecular orbital in Eq.\ (\ref{mo}), which expresses the delocalization of the single-particle
wave functions over the whole system, the three-body wave function $\Phi^{b,{\cal U}=0}_1(k_1,k_2,k_3)$ can be
characterized as describing a superfluid phase in analogy with the bulk case \cite{grei02,fish89}.
The natural difference, of course, is that in the bulk case the superfluid to Mott-insulator transition happens
abruptly at ${\cal U}= z \times 5.8$ \cite{fish89}, with $z$ being the number of next neighbors of a lattice site,
whereas for the small finite system this transition is not sharp but proceeds continuously as a function of ${\cal U}$.
Some steps of this smooth evolution are illustrated in frame (g) (${\cal U}=0$), frame (j) (${\cal U}=10$), and
frame (m) (${\cal U}=300$) of Fig.\ \ref{corrbst1} (top row).
Furthermore, another aspect from the bulk studies that is relevant to our 3-boson results is the determination,
made deep in the Mott-insulator region, of a small oscillatory contribution to the single-particle density
superimposed on the structureless background \cite{gerb05,gerb05.2,seng05,triv09}. This contribution \cite{note1}
was found to vary as $\propto -2\sum_{\nu=x,y,z} \cos(k_\nu d)/{\cal U}$, as obtained via perturbative (or related)
approaches around ${\cal U} \rightarrow +\infty$.
Our exact algebraic expression for $^1{\cal G}_i^{b}(k)$ [Eq.\ (\ref{frstbexpr})], which is valid for
any ${\cal U}$, contains a second term $\cos(2dk)$ in addition to the $\cos(dk)$ term. Deep in the Mott-insulator
regime, however, there is agreement at the qualitative level between our result and the bulk one, because the
coefficient ${\cal A}^1_2$ vanishes much faster than the coefficient ${\cal A}^1_1$ as ${\cal U} \rightarrow +\infty$ as
is revealed by an inspection of the curves in frame (a) of Fig.\ \ref{acoef}.
In the non-interacting limit (${\cal U}=0$), however, this second term cannot be neglected [see frame (a) in
Fig.\ \ref{acoef} and Eq.\ (\ref{frstggsU0})]. In this limit, its effect is to narrow the width of the
cosinusoidal peaks at $k=2\pi j/d$, with $j=0,1,2,\ldots$. From this, one can conjecture \cite{yannun} that for
larger systems with $N$ bosons, all cosine terms of the form $\cos(ndk)$ with $n=1,2,3,\ldots,N-1$ (corresponding
to all possible interwell distances) will contribute. The summation of many of such terms will enhance further
the shrinking of the width of the main peaks, while it will give a practically vanishing result in the
in-between regions. Thus the main peaks will acquire the shape of sharp spikes as was indeed observed
\cite{grei02} in the bulk systems.
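This peak-narrowing mechanism is easy to visualize numerically (a sketch using the interwell distance $d$ of the figures; the uniform-sum form of the interference factor is an illustrative assumption, not a result derived in this paper): the factor $|\sum_{j=0}^{N-1}e^{ijdk}|^2$ contains exactly the cosine terms $\cos(ndk)$ with $n=1,\ldots,N-1$, and its principal maxima at $k=2\pi j/d$ sharpen as $N$ grows:

```python
import numpy as np

d = 7.0  # interwell distance, as in the correlation figures of this paper

def interference(k, N):
    """|sum_j exp(i j d k)|^2 / N^2; expands into cos(n d k) terms, n = 1..N-1."""
    amp = sum(np.exp(1j * j * d * k) for j in range(N))
    return abs(amp) ** 2 / N**2

for N in (3, 6, 12):
    k_first_zero = 2 * np.pi / (N * d)  # first zero adjacent to the k = 0 peak
    print(N, interference(0.0, N), interference(k_first_zero, N))
# The peak height stays 1 while the first zero moves inward as 1/N:
# with more wells, the principal maxima sharpen into spike-like peaks.
```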
\begin{figure*}[t]
\includegraphics[width=13.0cm]{3b_3w_fig10.pdf}
\caption{(a,b,c,d) Site occupations (vertical axes, dimensionless) and
their fluctuations (vertical axes, dimensionless) for the ground, $\phi_1^b({\cal U})$ (a,b), and
first-excited, $\phi_2^b({\cal U})$ (c,d), states as a function of the strength ${\cal U}$ of the interaction.
Panels (a) and (c) refer to the left site (well), whereas panels (b) and (d) refer to the middle site (well).
Violet color (middle curve at ${\cal U}=+40$): site occupations. Green color (upper curve at ${\cal U}=+40$):
expectation value of the square of the site number operator.
Light blue color (lower curve at ${\cal U}=+40$): standard deviation. Note that the middle and lower
curves in frame (c) coincide for all practical purposes.
(ex,fx) The first-order correlations for the ground and first-excited states, respectively, for
five characteristic values: ${\cal U}=-200$ (x=1) (close to $\rightarrow -\infty$), -10 (x=2), 0 (x=3), 10 (x=4),
and 300 (x=5) (close to $\rightarrow +\infty$). The first-order correlations $^1{\cal G}^b(k)$ (vertical axes)
are in units of $\mu$m and the momenta $k$ are in units of 1/$\mu$m. The choice of parameters for the
correlations is: interwell distance $d=7$ $\mu$m and spectral width of single-particle distribution in momentum
space [see Eq.\ (\ref{psikd})] being the inverse of $s=0.35$ $\mu$m.
}
\label{focc}
\end{figure*}
In the present paper, we cover the full range of interaction strengths, from infinite attraction
(${\cal U} \rightarrow -\infty$) to infinite repulsion (${\cal U} \rightarrow +\infty$). Following the sequence of
frames from the third to the first frame in Fig.\ \ref{corrbst1} (top row), it is seen that a structureless
single-particle momentum density emerges also in the limit ${\cal U} \rightarrow -\infty$; for intermediate negative
values of ${\cal U}$, the weight of the oscillatory pattern decreases gradually as the absolute value $|{\cal U}|$
increases. However,
based on our full solution of the 3 bosons-3 wells Hubbard system, it is apparent that this succession (i.e.,
from the third to the first frame of Fig.\ \ref{corrbst1}) does not reflect a transition from a superfluid to a
Mott-insulator phase. Indeed, the Hubbard ground-state eigenvector for ${\cal U} \rightarrow -\infty$ is given by
$\phi_1^{b,-\infty}$ in the second line of Eq.\ (\ref{phi1}), which can properly be characterized as a
Bose-Einstein condensate; namely, this ground state consists only of a single basis ket (No. 9 $\rightarrow
|030\rangle$) that represents a triply occupied atomic orbital $\psi_2(k)$ [see Eq.\ (\ref{psikd})] located in
the middle well.
{\it The caveat from the discussion above is that the first-order correlation does not uniquely characterize the
associated many-body state.\/} This is not an uncommon occurrence, as can also be seen from an inspection of
Fig.\ \ref{f1stcorrbst2}, which illustrates a succession of $^1{\cal G}_2^{b}(k)$'s for the first excited state.
Indeed, the single-particle momentum density in frame (a) in Fig.\ \ref{f1stcorrbst2} (case of ${\cal U}=-200$)
is structureless; however, the corresponding Hubbard eigenvector is very well approximated by $\phi_2^{b,-\infty}$
in the second line of Eq.\ (\ref{phi2}). Naturally, this eigenvector represents a many-body state that is neither
a Mott insulator nor a Bose-Einstein condensate. Rather, it represents a $(-|300\rangle+|003\rangle)/\sqrt{2}$
NOON state; the family of NOON states is a focal point in quantum-optics investigations \cite{oubook,shihbook}.
For a complete characterization of the many-body state under consideration, additional information, beyond the
first-order correlations, is needed. Natural candidates to this effect are
the maps for the second-order (Sec.\ \ref{s2ndanyu}) and third-order (Sec.\ \ref{s3rdanyu}) correlations
investigated earlier. For example, for the three structureless single-particle momentum density cases
discussed above [i.e., frame (m) in Fig.\ \ref{corrbst1} (top row), frame (a) in Fig.\ \ref{corrbst1}
(top row), and frame (a) in Fig.\ \ref{f1stcorrbst2}], all three corresponding third-order correlation maps are
drastically different [compare frame (c) in Fig.\ \ref{corrbst1} (bottom row),
frame (o) in Fig.\ \ref{corrbst1} (bottom row), and frame (a) in Fig.\ \ref{f3rdcorrbst2}].
Note that the information provided by second-order correlation maps only is still not sufficient for the
full characterization of the underlying many-body
state. Indeed, the second-order correlation map in frame (b) of Fig.\ \ref{corrbst1} (second row)
(case of the ground state at ${\cal U}=-200$) is very similar to that in frame (a) of Fig.\ \ref{f2ndcorrbst2}
(case of the first-excited state at ${\cal U}=-200$).
We stress again at this point that Figs.\ \ref{f3rdcorrbst2}, \ref{f2ndcorrbst2}, \ref{f1stcorrbst2},
and Fig.\ \ref{corrbst1} illustrate graphically the ability of our methodology to determine all three levels of
momentum correlations and their evolution as a function of the interaction strength ${\cal U}$, from the attractive to
the repulsive regime, and thus to provide the tools for a complete characterization of the underlying many-body
states.
Before leaving this section, we find it worthwhile to explicitly investigate the conjecture that vanishing
fluctuations in the site occupations are always associated with a structureless first-order momentum correlation.
To this effect, we plot in Fig.\ \ref{focc} the site occupation,
$\langle \phi^b_j({\cal U}) |n_i| \phi^b_j({\cal U}) \rangle$ [the site number operator $n_i=\hat{b}^\dagger_i \hat{b}_i$;
see below Eq.\ (\ref{3b-hub})], the expectation value of the square of the site number operator,
$\langle \phi^b_j({\cal U}) |n_i^2| \phi^b_j({\cal U}) \rangle$, and the standard deviation,
$\sqrt{\langle \phi^b_j({\cal U}) |n_i^2| \phi^b_j({\cal U}) \rangle-\langle \phi^b_j({\cal U}) |n_i| \phi^b_j({\cal U}) \rangle^2}$
for the ground ($j=1$) and first-excited ($j=2$) states and for the left $(i=1)$ and middle ($i=2$) sites (wells).
As already noted in the introductory section of this paper, the connection between the fluctuations in
site-occupation and the appearance of structural patterns (or the lack thereof) in the first-order momentum
correlations is a manifestation of the connection between the quantum phase-transition from superfluid (coherent)
to localized (incoherent) states, and the quantum uncertainty relation connecting the fluctuations in phase
and number (site-occupancy).
From an inspection of the four panels (a,b,c,d) in Fig.\ \ref{focc}, one concludes that indeed in all four panels
an oscillatory pattern in the single-particle momentum density [see subpanels (e2,e3,e4) and (f2,f3,f4,f5)] is
accompanied by a nonvanishing fluctuation in the site occupations. However, a structureless single-particle
momentum density is not always associated with a vanishing fluctuation; see the case of the NOON state
$\phi^{b,-\infty}_2$ [Fig.\ \ref{focc}(c)] for which the standard deviation of the left well is 3/2, whereas the
corresponding single-particle momentum density [Fig.\ \ref{focc}(f1)] is structureless.
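The quoted standard deviation of $3/2$ follows from elementary occupation statistics of the NOON state and can be checked in a few lines (a minimal sketch using only the Fock amplitudes of $\phi_2^{b,-\infty}$):

```python
import numpy as np

# NOON state (-|300> + |003>)/sqrt(2): the left well holds either 3 bosons
# or none, each with probability 1/2
probs = {3: 0.5, 0: 0.5}

n_mean = sum(n * p for n, p in probs.items())      # <n_1>   = 3/2
n2_mean = sum(n**2 * p for n, p in probs.items())  # <n_1^2> = 9/2
std = np.sqrt(n2_mean - n_mean**2)                 # sqrt(9/2 - 9/4) = 3/2

print(n_mean, n2_mean, std)  # 1.5 4.5 1.5
```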
Finally, we mention that temperature effects on the quantum phase transitions in bosonic gases trapped in
optical lattices have recently attracted some attention (see, e.g., Refs.\ \cite{lu06,jin19}).
Our beyond-mean-field theoretical approach can be generalized \cite{yannun} to account for such effects,
but this falls outside the scope of the present paper.
\section{Analogies with three-photon interference in quantum optics}
\label{anal}
In this section, we elaborate on the analogies between our results for the system of 3 massive bosons trapped
in 3 wells and three-photon interference in quantum optics, which is an area of frontline research
activities \cite{spag13,tich14,agar15,agne17,mens17,tamm18.1,tamm18.2,tamm19}. Such three-photon interference
investigations fall into two major categories: (1) Those that employ a tritter \cite{note2} to produce a
scattering event between three photons impinging on the input ports of a tritter and which measure coincidence
probabilities for the photons exiting the three output ports \cite{spag13,tich14,agar15,agne17,mens17}.
At the abstract theoretical level, the scattering event is described by a unitary scattering matrix. The
coincidence probabilities are denoted as $P_{111}$ (one photon in each one of the output ports), $P_{210}$
(two photons in the first port and a single photon in the second port), $P_{300}$ (three photons in the first
port), etc., and they are apparently a direct generalization of the $P_{11}$ and $P_{20}$ coincidence
probabilities familiar from the celebrated HOM \cite{hom87} two-photon interference experiment.
Variations in the probabilities $P_{ijk}$, with $i,j,k=0,1,2,3$ and $i+j+k=3$, are achieved through control of
the time delays between photons and other parameters of the tritter. (2) Those that resolve the intrinsic
conjugate variables underlying the wave packets of the impinging photons on the tritter (i.e., frequency,
$\omega$, and time delay, $\tau$) \cite{tamm18.1,tamm18.2,tamm19}; for earlier two-photon interference
investigations in this category, see Refs.\ \cite{lege04,gerr15.1,gerr15.2}. This category of experiments
produces spectral correlation landscapes as a function of the three frequencies
$\omega_1$, $\omega_2$, and $\omega_3$.
\begin{figure}[t]
\includegraphics[width=7.5cm]{3b_3w_fig11.pdf}
\caption{The (dimensionless) Hong-Ou-Mandel-type probabilities $P_{111}$ (violet, right curve) and $P_{030}$
(green, left curve) associated with the Hubbard ground-state vector $\phi^b_1({\cal U})$ as a function of the
interaction strength ${\cal U}$ (dimensionless).}
\label{fphom}
\end{figure}
In the case of the 3 bosons in 3 wells, the quantum-optics category (1) above finds an analog to
{\it in situ\/} experiments and their theoretical treatments. Indeed, the analogs of the three-photon wave
function in the output ports are the vector solutions [stationary or time-dependent (not considered in this
paper)] of the Hubbard Hamiltonian matrix in Eq.\ (\ref{3b-mat}); compare the general form of the Hubbard
vector solutions [Eq.\ (\ref{phiU}) in Sec.\ \ref{3b-hb}] to Eq.\ (5) for the three-photon output state from a
tritter in Ref.\ \cite{agar15}. Control of these Hubbard vector solutions is achieved through
variation of the interaction parameter ${\cal U}$ and the choice of a ground or excited state. For example,
choosing the ground-state vector, the probability for finding only one boson in each well is given
by the modulus square of the ${\cal U}$-dependent coefficient in the Hubbard eigenvector [Eq.\ (\ref{phiU})] in
front of the basis ket No. 1 $\rightarrow |111 \rangle$, i.e., $P_{111}({\cal U})=|{\bf c}_{111}({\cal U})|^2$; naturally
$P_{030}({\cal U})=|{\bf c}_{030}({\cal U})|^2$.
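These probabilities can be reproduced numerically by diagonalizing the three-site Bose-Hubbard Hamiltonian in the 10-ket Fock basis (a self-contained sketch; the convention $H=-J\sum_{\langle i,j \rangle}\hat{b}^\dagger_i \hat{b}_j + ({\cal U}/2)\sum_i n_i(n_i-1)$ with $J=1$ is the standard Bose-Hubbard form and is assumed here rather than copied from Eq.\ (\ref{3b-hub})):

```python
import itertools
import numpy as np

J = 1.0
# 10-dimensional Fock basis (n1, n2, n3) with n1 + n2 + n3 = 3
basis = [s for s in itertools.product(range(4), repeat=3) if sum(s) == 3]
index = {s: i for i, s in enumerate(basis)}

def hubbard(U):
    """Three-site Bose-Hubbard matrix for 3 bosons on an open chain."""
    H = np.zeros((10, 10))
    for i, s in enumerate(basis):
        H[i, i] = 0.5 * U * sum(n * (n - 1) for n in s)
        for a, b in ((0, 1), (1, 2)):          # nearest-neighbour bonds
            for src, dst in ((a, b), (b, a)):  # hop in both directions
                if s[src] > 0:
                    t = list(s)
                    t[src] -= 1
                    t[dst] += 1
                    # <t| b^dag_dst b_src |s> = sqrt(n_src (n_dst + 1))
                    H[index[tuple(t)], i] -= J * np.sqrt(s[src] * t[dst])
    return H

def ground_probs(U):
    _, v = np.linalg.eigh(hubbard(U))
    g = v[:, 0]  # ground state
    return g[index[(1, 1, 1)]] ** 2, g[index[(0, 3, 0)]] ** 2

p111, p030 = ground_probs(300.0)   # deep repulsion: P111 -> 1 (Mott-like)
q111, q030 = ground_probs(-300.0)  # deep attraction: P030 -> 1 (middle-well condensate)
print(p111, q030)
```

At ${\cal U}=0$ the same matrix reproduces the non-interacting ground energy $-3\sqrt{2}J$ of three bosons in the molecular orbital.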
In Fig.\ \ref{fphom}, we plot the $P_{111}({\cal U})$ and $P_{030}({\cal U})$ probabilities associated with the Hubbard
ground-state eigenvector $\phi^b_1({\cal U})$. This figure is reminiscent of Fig.\ 2 in Ref.\ \cite{spag13}
(see also Fig.\ 3 in Ref.\ \cite{agar15}). It is interesting to note that the three-photon state $\ket{300}$
(experimentally realized in Ref.\ \cite{spag13}) is described in quantum optics as a ``three-photon bosonic
coalescence'', whereas in atomic and molecular physics a description as a micro Bose-Einstein condensate
naturally comes to mind.
Note, further, that the $P_{ijk}$'s in
Ref.\ \cite{spag13} depend on two parameters, instead of a single one. For the case of 3 massive bosons in
3 wells, a second parameter becomes relevant by considering the time evolution of the Hubbard vector solutions
\cite{yannun}; see Refs.\ \cite{yann19.1,kauf14} for the consideration of the time-evolution in the case of
2 massive bosons in 2 wells. Note further that, in quantum optics, two fully overlapping photons are
described as perfectly {\it indistinguishable\/}, whereas two non-overlapping photons are described as perfectly
{\it distinguishable\/} \cite{spag13,tich14}. In the context of the present study for 3 massive trapped bosons
(which uses the assumption $d^2/s^2 \gg 1$), an example of the former is the ket No. 9 $\rightarrow |030 \rangle$,
whereas an example of the latter is the ket No. 1 $\rightarrow |111 \rangle$. A double-single occupancy
ket, like ket No. 2 $\rightarrow |210 \rangle$, can be referred to as a mode with two indistinguishable bosons and one
distinguishable boson \cite{tich14}.
The analogy between the two-photon optical HOM formalism and the vector solutions of the Hubbard
theoretical modeling for 2 bosons (or 2 fermions) in 2 wells was reported earlier in Refs.\
\cite{bran18,yann19.1}.
Furthermore, in the case of the 3 bosons in 3 wells, the quantum-optics category (2) above finds an analog to
time-of-flight experiments and their theoretical treatments. This analogy derives from the following
correspondence (revealed in Ref.\ \cite{yann19.1})
\begin{align}
\begin{split}
k & \longleftrightarrow \omega/c \\
d & \longleftrightarrow \tau c \\
kd & \longleftrightarrow \omega\tau.
\end{split}
\label{kdwt}
\end{align}
As was done \cite{yann19.1} for the case of 2 massive trapped particles versus two interfering photons, this
correspondence can be used to establish a complete analogy between the cosinusoidal patterns of all three orders
of momentum correlation functions presented in this paper for 3 massive and trapped bosons (and which can be
determined experimentally through time-of-flight measurements \cite{prei19}) with the landscapes
\cite{tamm18.1,tamm19} of the frequency-resolved three-photon interferograms (which are a function of the three
photon frequencies, $\omega_1$, $\omega_2$, and $\omega_3$). For example, the interferograms in Fig.\ 3 of
Ref.\ \cite{tamm19} are analogous to the map in Fig.\ \ref{corrbst1} [frame (i), bottom row] of the
$k_3=0$ cut of the third-order momentum correlation associated with 3 non-interacting trapped massive bosons.
A difference to keep in mind is that in this paper the interwell distances were taken to be equal, whereas the
time delays in Ref.\ \cite{tamm19} are unequal.
Furthermore, Eq.\ (S1) in the Supplemental Material of Ref.\ \cite{tamm19} which describes the three-photon
output wave function at the detectors, $\psi(\omega_1, \omega_2, \omega_3)$, is a permanent of the three
single-photon wave functions $\chi_j(\omega_i)=E_j(\omega_i)\exp(-i \omega_i t_j)$, with $i,j=1,2,3$, where $t_j$
denotes time instances [corresponding to the position of each well in our single-particle orbitals displayed in
Eq.\ (\ref{psikd})]. As a result, for $E_1(\omega_i)=E_2(\omega_i)=E_3(\omega_i)=E(\omega_i)$ and $t_1=-\tau$,
$t_2=0$, and $t_3=\tau$, it reduces exactly to the form of the three-body wave function
$\Phi^{b,+\infty}_1(k_1,k_2,k_3)$ [see top line in Eq.\ (\ref{phibUpmI_1})] in this paper which is associated
with the case of the three singly-occupied wells, i.e., the Hubbard solution at infinite repulsion, $|111\rangle$
(perfectly distinguishable bosons).
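To make this correspondence concrete, the bosonic permanent for $|111\rangle$ can be coded directly from the displaced-Gaussian orbitals (a sketch; the well positions $-d$, $0$, $d$ and the $1/\sqrt{6}$ normalization, valid for negligible overlaps, follow the conventions of this paper, up to an overall labeling of the wells):

```python
import itertools
import numpy as np

d, s = 7.0, 0.35  # interwell distance and Gaussian width used in the figures

def psi(j, k):
    """Momentum-space orbital localized in well j = 1, 2, 3 (positions -d, 0, d)."""
    pos = (j - 2) * d
    return (2**0.25 * np.sqrt(s) / np.pi**0.25) * np.exp(-k**2 * s**2) \
        * np.exp(-1j * pos * k)

def permanent_wf(k1, k2, k3):
    """Bosonic permanent for |111>: symmetrized product of the three orbitals."""
    ks = (k1, k2, k3)
    return sum(psi(1, ks[p[0]]) * psi(2, ks[p[1]]) * psi(3, ks[p[2]])
               for p in itertools.permutations(range(3))) / np.sqrt(6)

# fully symmetric under exchange of any two momenta, as required for bosons:
print(np.isclose(permanent_wf(0.1, 0.4, -0.2), permanent_wf(0.4, 0.1, -0.2)))
```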
A central focus in the recent quantum-optics literature has been the demonstration of genuine three-photon
interference \cite{agne17,mens17}, that is interference effects that cannot be inferred by a knowledge of the
one- and two-photon interference patterns. In the language of many-body literature for massive particles, this
is equivalent to isolating the {\it connected\/} terms, ${\cal G}_{\rm con}$, in the total third-order
correlations by subtracting the {\it disconnected\/} ones, ${\cal G}_{\rm dis}$. Reflecting its name, the
disconnected contribution to the total third-order correlation consists of products of the first- and
second-order correlations.
For the case of 3 perfectly distinguishable bosons in 3 wells
(described by the ket $|111\rangle$), one can observe that
the first-order momentum correlation given in Eqs.\ (\ref{frstggsUI}) and (\ref{frstggsUmI}) does not contain any
cosine (or sine) terms, whereas the second-order momentum correlation given in Eq.\ (\ref{3b2ndggsUI}) contains
cosine terms with two momenta in the cosine arguments. As a result, the connected part of the
third-order momentum correlations [see Eq.\ (\ref{3gUIst1})] is necessarily reflected in the
cosine terms having an argument that depends on all three single-particle momenta $k_1$, $k_2$, and $k_3$.
Another way to view the above remarks is that the genuine three-body interference involves a total phase
$\varphi$ which is the sum of three partial phases $\varphi_1$, $\varphi_2$, and $\varphi_3$, associated with the
individual bosons, i.e., $\varphi=\varphi_1+\varphi_2+\varphi_3$. Such a triple phase (referred to also as a
triad phase) has been prominent in the quantum-optics literature \cite{agne17,mens17} regarding genuine
three-photon interference.
Specifically, the disconnected part of the third-order correlation for 3 bosons is given by the expression
\begin{align}
\begin{split}
& ^3{\cal G}_{\rm dis}^b(k_1,k_2,k_3)=-2 {^1{\cal G}^b(k_1)} {^1{\cal G}^b(k_2)} {^1{\cal G}^b(k_3)} + \\
& {^1{\cal G}^b(k_1)} {^2{\cal G}^b(k_2,k_3)} + {^1{\cal G}^b(k_2)} {^2{\cal G}^b(k_1,k_3)} + \\
& {^1{\cal G}^b(k_3)} {^2{\cal G}^b(k_1,k_2)}.
\end{split}
\label{gdis}
\end{align}
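As a sanity check of Eq.\ (\ref{gdis}) (a numerical sketch; the Gaussian density below is an arbitrary illustrative choice), consider an uncorrelated product state, for which $^2{\cal G}^b(k_1,k_2)=\rho(k_1)\rho(k_2)$ and $^3{\cal G}^b=\rho(k_1)\rho(k_2)\rho(k_3)$: the disconnected part then reproduces the full third-order correlation, so the connected part vanishes identically, as it must in the absence of genuine three-body interference:

```python
import numpy as np

def rho(k):                  # any single-particle momentum density will do
    return np.exp(-k**2)

def g1(k):          return rho(k)
def g2(k1, k2):     return rho(k1) * rho(k2)            # product state: no correlations
def g3(k1, k2, k3): return rho(k1) * rho(k2) * rho(k3)

def g3_dis(k1, k2, k3):
    """Disconnected part of the third-order correlation, Eq. (gdis)."""
    return (-2 * g1(k1) * g1(k2) * g1(k3)
            + g1(k1) * g2(k2, k3)
            + g1(k2) * g2(k1, k3)
            + g1(k3) * g2(k1, k2))

ks = np.linspace(-2.0, 2.0, 9)
residual = max(abs(g3(a, b, c) - g3_dis(a, b, c))
               for a in ks for b in ks for c in ks)
print(residual)  # ~0 (machine precision): no genuine three-body interference
```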
We can apply the above expression immediately to the case of the Hubbard ground-state eigenvector $\ket{111}$ (limit
of infinite repulsion, 3 perfectly distinguishable bosons), because we have derived explicit algebraic
expressions for the corresponding third-order [Eq.\ (\ref{3gUIst1})], second-order [Eq.\ (\ref{3b2ndggsUI})],
and first-order momentum correlations [Eqs.\ (\ref{frstggsUI}) and (\ref{frstggsUmI})].
Indeed one finds for the connected correlation part
\begin{align}
\begin{split}
& ^3{\cal G}_{1,\rm con}^{b,+\infty}(k_1,k_2,k_3)=\\
& ^3{\cal G}_1^{b,+\infty}(k_1,k_2,k_3) - ^3{\cal G}_{1,\rm dis}^{b,+\infty}(k_1,k_2,k_3) =\\
& \frac{4 \sqrt{2}}{3 \pi^{3/2}} s^3 e^{-2 s^2 (k_1^2+k_2^2+k_3^2)}
\big\{\cos (d (k_1+k_2-2 k_3)) \\
& + \cos (d (k_2+ k_3 -2 k_1))
+ \cos (d (k_1+k_3 - 2 k_2)) \big\}.
\end{split}
\label{gcon}
\end{align}
It is worth noting that the result in Eq.\ (\ref{gcon}) above for the connected correlation part for 3 perfectly
distinguishable bosons in 3 wells coincides with the corresponding result \cite{prei19,yann19.3} for 3 perfectly
distinguishable {\it fully spin polarized fermions\/} in 3 wells.
\section{Summary}
\label{summ}
In this paper, we develop and expand a formalism and a theoretical framework, which, with the use of an
algebraic-language computation tool (MATHEMATICA \cite{math18}), allows us to derive explicit analytic
expressions for all three orders (third,
second, and first) of momentum-space correlations for 3 interacting ultracold bosonic atoms confined in 3 optical
wells in a linear geometry. This 3b-3w system was modeled as a three-site Bose-Hubbard Hamiltonian whose 10
eigenvectors were mapped onto first-quantization three-body wave functions in momentum space by: (1) associating
the bosons with the Fourier transforms of displaced Gaussian functions centered on each well, and (2) constructing
the permanents associated with the basis kets of the Hubbard Hilbert space by using the Fourier transforms of the
displaced Gaussians describing the trapped bosons.
The 3rd-order momentum-space correlations are the modulus square of such three-body wave functions, and the
second- and first-order correlations are derived through successive integrations over the unresolved momentum
variables. This methodology applies to all bosonic states, with strong entanglement \cite{note4} or without, and
\textcolor{black}{
does not rely on the standard Wick's factorization scheme, employed in earlier studies (see, e.g., Refs.\
\cite{gome06,hodg13,schm17}) of higher-order momentum correlations for expanding or colliding Bose-Einstein
condensates of ultracold atoms.
}
The availability of such explicit analytic correlation functions will greatly assist in the analysis of
anticipated future time-of-flight (TOF) measurements with few ($N > 2$) ultracold atoms trapped in optical lattices, following
the demonstrated feasibility of determining higher-than-first-order momentum correlation functions
via single-particle detection in the case of $N=2$ fermionic $^6$Li atoms \cite{berg19}, $N=3$ fully
spin-polarized fermionic $^6$Li atoms \cite{prei19}, and a large number of bosonic $^4$He$^*$ atoms \cite{clem19}.
The availability of the complete set of all-order momentum correlations enabled us to reveal and explore in detail
two major physical aspects of the 3b-3w ultracold-atom system:
(I) That a small system of only 3 bosons indeed exhibits an embryonic behavior akin to an emergent
superfluid to Mott-insulator transition and (II) That both the {\it in situ\/} and TOF spectroscopies of the 3b-3w system exhibit
analogies with the quantum-optics three-photon interference, including the aspects of genuine three-photon
interference which cannot be understood from the knowledge of the lower second- and first-order correlations
alone \cite{agne17,mens17}.
The superfluid to Mott-insulator transition in extended optical lattices \cite{grei02,gerb05,gerb05.2} was
explored based on the variations in the shape of the first-order momentum correlations. For the 3b-3w system, we
reported clear variations of the first-order momentum correlations, from being oscillatory with a period that
depends on the inter-well distance, characteristic of a coherent state of a superfluid phase with multiple site
occupancies by each of the trapped ultracold bosonic atoms (high site-occupancy uncertainty), to a structureless
shape characteristic of localized states (see below), with low site-occupancy uncertainty and consequent high
phase-uncertainty (incoherent phase). Furthermore, we also concluded
that the first-order momentum correlations are not sufficient to characterize uniquely
the underlying nature of a state of the 3b-3w system. To this effect, knowledge of all three orders of
correlations is needed. Indeed, a structureless first-order correlation relates to three different 3b-3w
states, i.e., the $|030\rangle$ ground state at ${\cal U} \rightarrow -\infty$ (Bose-Einstein condensate), the
$|111\rangle$ ground state at ${\cal U} \rightarrow +\infty$ (Mott insulator), and the
$(-|300\rangle + |003\rangle)/\sqrt{2}$ first-excited state at ${\cal U} \rightarrow -\infty$ (NOON state).
Concerning the quantum optics analogies, we established that {\it in situ\/} measurements of the site
occupation probabilities as a function of ${\cal U}$, $P_{ijk}({\cal U})$ (with $i,j,k=0,\ldots,3$ and $i+j+k=3$), provide analogs of the
celebrated HOM coincidence probabilities for three photons at the output ports of a tritter as discussed in
Refs.\ \cite{spag13,agar15}. We further established that the momentum-space all-order correlations for the
3b-3w system parallel the frequency-resolved interferograms of distinguishable photons as explored in
Refs.\ \cite{tamm18.1,tamm18.2,tamm19}. The analogies with the genuine three-photon interference were
established in the framework of the many-body theoretical concepts of disconnected versus connected
correlation terms.
To achieve simplicity in this paper, we assumed throughout that the interwell separation is much larger than
the width of the single-particle Gaussian function in the real configuration space,
i.e., $d^2/s^2 \gg 1$ (see Sec. \ref{3rdcorrUpmI}). This is equivalent to considering localized bosons with vanishing
overlaps (distinguishable bosons in different wells) or unity overlaps (indistinguishable bosons in the same
well); indeed the overlap of two single-particle wave functions according to Eq.\ (\ref{psikd}) is given by
$S=e^{-d^2/(8s^2)}$. Considering cases with small, but finite $S$, which represent partial indistinguishability
\cite{tich14}, complicates substantially the analytic results \cite{yannun}.
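For the parameters used in all the figures, this overlap is numerically negligible, as a one-line check shows:

```python
import numpy as np

d, s = 7.0, 0.35                 # micrometers, as used throughout this paper
S = np.exp(-d**2 / (8 * s**2))   # overlap of neighbouring single-particle orbitals
print(d**2 / s**2, S)            # 400.0 and ~1.9e-22: d^2/s^2 >> 1 indeed holds
```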
Finally, we note here that our all-order momentum-space correlations for the 3b-3w system can contribute an
alternative way to study and explore with massive particles aspects of the boson sampling problem \cite{aaar13},
and in particular its extension to multiboson correlation sampling \cite{tamm15,tamm15.1}. Boson
sampling problems have become a major focus in quantum-optics investigations (see, e.g., Refs.\
\cite{tamm15,tamm15.1,tich14,lain14,wals19}) because they are considered an intermediate step on the road
towards the implementation of a quantum computer.
\section{Acknowledgments}
This work has been supported by a grant from the Air Force Office of Scientific Research
(AFOSR, USA) under Award No. FA9550-15-1-0519. Calculations were carried out at the GATECH Center for
Computational Materials Science.
\section{Introduction}
In the seminal work of John Bell, it was shown that the quantum correlations
arising from spatially separated systems can break the limits of classical
causal relations \cite{Bell,EPR}. In classical physics, the realism and
locality of spacelike events constrain the strength of classical correlations
bounded by the Bell inequalities. Quantum theory inconsistent with local
realism predicts stronger correlations that violate the Bell inequalities.
Thanks to quantum information science, two-particle and multi-particle quantum
correlations have been extensively investigated. As a distinct feature from
classical physics, Bell nonlocality has led to applications in quantum
information processing including private random number generation
\cite{random}, quantum cryptography \cite{DIC}, and reductions in
communication complexity \cite{complexity}.
In typical Bell experiments on statistical correlations, a source emits a
state comprising two or more particles that are shared between two or more
distant observers, who each perform local measurements with a random chosen
setting and then obtain the measurement outcomes. To reveal the Bell
nonlocality of an entangled state, the local observables should be set
deliberately. For example, to violate the Bell inequalities tailored to
stabilizer states such as graph states, it is helpful to take the stabilizers
as a reference for finding suitable measurement settings \cite{gf1,gf2,gf3}.
To emphasize the role of stabilizing operators and logical bit-flip operators,
we review the Clauser-Horne-Shimony-Holt (CHSH) inequality and its
modification below. Let the two-qubit entangled state ($0<\phi<\frac{\pi}{2}$)
\begin{equation}
\left\vert \phi\right\rangle =\cos\phi\left\vert \overline{0}\right\rangle
+\sin\phi\left\vert \overline{1}\right\rangle \label{psi}
\end{equation}
be the codeword of the [[2, 1, 2]] stabilizer-based quantum error-detecting
code with the logical states $\left\vert \overline{0}\right\rangle =\left\vert
00\right\rangle $ and $\left\vert \overline{1}\right\rangle =\left\vert
11\right\rangle $. The stabilizer generator, logical bit-flip, and phase-flip
operators are $\sigma_{z}\otimes$ $\sigma_{z}$, $\sigma_{x}\otimes$
$\sigma_{x}$, and $I\otimes\sigma_{z}$, respectively ($\sigma_{x}$,
$\sigma_{y}$, and $\sigma_{z}$ are Pauli matrices and $I$ is the identity
operator). In the bipartite Bell test, the CHSH operator is $\mathbf{B}_{CHSH}=\sum\nolimits_{i,j=0}^{1}(-1)^{ij}A_{i}\otimes B_{j}$, where
$A_{i}$ and $B_{j}$ are local observables, and the CHSH inequality states that
$\left\langle \mathbf{B}_{CHSH}\right\rangle \overset{c}{\leq}2$, where the
$\left\langle \cdot\right\rangle$ denotes the expectation value of $\cdot$.
To violate the CHSH inequality, for the first-qubit, we assign
\begin{equation}
\sigma_{z}\rightarrow\frac{1}{2\cos\mu}(A_{0}+A_{1})\text{, }\sigma_{x}\rightarrow\frac{1}{2\sin\mu}(A_{0}-A_{1}),\label{a}
\end{equation}
and for the second-qubit, we assign
\begin{equation}
\sigma_{z}\rightarrow B_{0}\text{, }\sigma_{x}\rightarrow B_{1}.\label{b}
\end{equation}
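These assignments can also be checked numerically (a sketch added for illustration; solving Eq.\ (\ref{a}) for the local observables gives $A_{0,1}=\cos\mu\,\sigma_{z}\pm\sin\mu\,\sigma_{x}$, which is used below):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chsh_value(phi, mu):
    """<phi| B_CHSH |phi> for the observable assignments of Eqs. (a) and (b)."""
    state = np.cos(phi) * np.kron([1, 0], [1, 0]) \
          + np.sin(phi) * np.kron([0, 1], [0, 1])   # cos(phi)|00> + sin(phi)|11>
    A = [np.cos(mu) * sz + (-1)**i * np.sin(mu) * sx for i in (0, 1)]
    B = [sz, sx]
    bell = sum((-1)**(i * j) * np.kron(A[i], B[j])
               for i in (0, 1) for j in (0, 1))
    return float(np.real(state.conj() @ bell @ state))

phi = 0.3
mu0 = np.arctan(np.sin(2 * phi))   # optimal setting, tan(mu0) = sin(2 phi)
print(chsh_value(phi, mu0))        # equals 2 sqrt(1 + sin^2(2 phi)) > 2
print(2 * np.sqrt(1 + np.sin(2 * phi) ** 2))
```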
It is easy to verify that
\begin{align}
& \left\langle \phi\left\vert \mathbf{B}_{CHSH}\right\vert \phi\right\rangle
\nonumber\\
& =2\cos\mu\left\langle \phi\left\vert \sigma_{z}\otimes\sigma_{z}\right\vert
\phi\right\rangle +2\sin\mu\left\langle \phi\left\vert \sigma_{x}\otimes
\sigma_{x}\right\vert \phi\right\rangle \nonumber\\
& =2(\cos\mu+\sin\mu\sin2\phi)\nonumber\\
& =2\sqrt{1+\sin^{2}2\phi}\cos(\mu-\mu_{0})\nonumber\\
& \leq2\sqrt{1+\sin^{2}2\phi},\label{CHSH}
\end{align}
where $\tan\mu_{0}=\sin2\phi$. Some remarks are in order. First, the operators
$(A_{0}+A_{1})\otimes B_{0}=\cos\mu\sigma_{z}\otimes\sigma_{z}$ and
$(A_{0}-A_{1})\otimes B_{1}=\sin\mu\sigma_{x}\otimes\sigma_{x}$, and hence
both terms in the first equality in (\ref{CHSH}) exemplify the usefulness of
the stabilizing operator and the logical (bit-flip) operator for finding the
local observables that violate the CHSH inequality. The maximal CHSH value,
which exceeds two for $0<\phi<\frac{\pi}{2}$, is achieved by setting
$\mu=\mu_{0}$. In particular,
$\left\vert \phi=\frac{\pi}{4}\right\rangle $ is maximally entangled,
$\sigma_{x}\otimes$ $\sigma_{x}$ becomes another stabilizing operator rather
than simply a logical bit-flip operator, and the CHSH inequality can be
maximally violated. There are a variety of Bell inequalities violated by the
graph states as a specific family of stabilizer states, where the associated
Bell operators can be reformulated as the sum of their stabilizing operators,
and hence the perfect/antiperfect correlations therein can reach the maximal
violation \cite{gf2,gf3,gs2,gs3,gs4,gs5}. If multi-qubit mixed states
involve two stabilizing operators, their nonlocality can be verified by the
violation of stabilizer-based Bell-type inequalities \cite{mix1,mix2}. Second,
observables $A_{0}$ and $A_{1}$ can be regarded as the results of
\textquotedblleft cutting and mixing\textquotedblright\ the stabilizing
operator and the logical bit-flip operator into the local observables:
\begin{equation}
\sigma_{z}\otimes\sigma_{z},\sigma_{x}\otimes\sigma_{x}\overset{\text{cutting}}{\rightarrow}\sigma_{z},\sigma_{x}\overset{\text{mixing}}{\rightarrow}A_{x_{i}}=\cos\mu\sigma_{z}+(-1)^{x_{i}}\sin\mu\sigma_{x}.\label{mix}
\end{equation}
Cutting means extracting the first-qubit observables from $\sigma_{z}\otimes\sigma_{z}$ and $\sigma_{x}\otimes\sigma_{x}$; mixing means linearly
superposing the two cut observables. In what follows, the local observables on
the source side will be constructed in a similar way. In addition, note that
$\left\{ B_{0}\text{, }B_{1}\right\} =0$, while the combinations
$A_{0}+A_{1}\propto\sigma_{z}$ and $A_{0}-A_{1}\propto\sigma_{x}$ anticommute
($A_{0}$ and $A_{1}$ themselves anticommute only at $\mu=\frac{\pi}{4}$). Third, the
observable $B_{0}$ is the phase-flip operator, which can be exploited in the
tilted CHSH operators $\mathbf{B}_{\beta\text{-}CHSH}=\beta B_{0}+\mathbf{B}_{CHSH}$ with the Bell inequality $\left\langle \mathbf{B}_{\beta\text{-}CHSH}\right\rangle \overset{c}{\leq}2+\beta$. By setting
$\beta=2/\sqrt{1+2\tan^{2}2\phi}$, the maximal violation can be achieved\ by
$\left\vert \phi\right\rangle $ \cite{non1, non2}. That is, the nonmaximally
entangled state can maximally violate the tilted Bell inequalities involving
the phase-flip operators.
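As a sanity check, the maximal value in (\ref{CHSH}) can be reproduced numerically. The following NumPy sketch is illustrative only; the angle $\phi$ is chosen arbitrarily, and the observables follow the assignments (\ref{a}) and (\ref{b}) at the optimal $\mu=\mu_{0}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

phi = 0.3                                        # arbitrary entanglement angle
psi = np.array([np.cos(phi), 0, 0, np.sin(phi)])  # cos(phi)|00> + sin(phi)|11>

mu = np.arctan(np.sin(2 * phi))                  # optimal tilt: tan(mu0) = sin(2*phi)
A = [np.cos(mu) * sz + (-1) ** x * np.sin(mu) * sx for x in (0, 1)]
B = [sz, sx]                                     # B0 = sigma_z, B1 = sigma_x

# CHSH operator: sum_{i,j} (-1)^{ij} A_i (x) B_j
bell = sum((-1) ** (i * j) * np.kron(A[i], B[j]) for i in (0, 1) for j in (0, 1))
chsh = psi @ bell @ psi

# matches 2*sqrt(1 + sin^2(2*phi)) and exceeds the classical bound 2
assert np.isclose(chsh, 2 * np.sqrt(1 + np.sin(2 * phi) ** 2))
assert chsh > 2
```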
One can generalize the above approach to the multi-qubit case. For example,
let $\left\vert \psi\right\rangle =\cos\psi\left\vert \overline{0}\right\rangle +\sin\psi\left\vert \overline{1}\right\rangle $ be the codeword
of the [[$5$, $1$, $3$]] stabilizer-based quantum error-correcting code
(SQECC), and let Alice hold the first qubit and Bob hold the other qubits. In
this case, the useful stabilizing operator and logical bit-flip operator are
$g=\sigma_{z}\otimes\sigma_{z}\otimes\sigma_{x}\otimes I\otimes\sigma_{x}$ and
$\sigma_{x}^{\otimes5}$, respectively. The observables of the first and second
qubits of the stabilizing operators and the bit-flip operator are the same as
those in (\ref{a}) and (\ref{b}), respectively. The last three qubits are
termed \textit{idle} qubits since the observable $\sigma_{x}$ is always
measured on them, while the outcome of the fourth qubit is discarded if Bob
measures $B_{1}^{\prime}$. In the Bell test, Alice randomly measures one of
the observables in (\ref{a}), whereas Bob randomly measures $B_{0}^{\prime
}=B_{0}\otimes\sigma_{x}^{\otimes3}$ or $B_{1}^{\prime}=B_{1}\otimes\sigma
_{x}\otimes I\otimes\sigma_{x}$. The CHSH-like inequality can be written as
$\left\langle \sum\nolimits_{i,\text{ }j=0}^{1}(-1)^{ij}A_{i}\otimes
B_{j}^{\prime}\right\rangle \overset{c}{\leq}2.$ As another example, let the
logical states be $\left\vert \overline{0}\right\rangle =\left\vert
0\right\rangle ^{\otimes n}$ and $\left\vert \overline{1}\right\rangle
=\left\vert 1\right\rangle ^{\otimes n}$, and denote the $n$-qubit
Greenberger--Horne--Zeilinger (GHZ) state ($n\geq3$) as $\left\vert GHZ_{\phi
}\right\rangle =\cos\phi\left\vert \overline{0}\right\rangle +\sin
\phi\left\vert \overline{1}\right\rangle $. Alice holds the first $m$ qubits
and Bob holds the other $n-m$ qubits. Here, the useful stabilizing, logical
bit-flip and phase-flip operators can be set as $\sigma_{z}\otimes I^{\otimes
m-1}\otimes\sigma_{z}\otimes I^{\otimes n-m-1}$, $\sigma_{x}^{\otimes n}$, and
$I^{\otimes m}\otimes\sigma_{z}\otimes I^{\otimes n-m-1}$, respectively. One
can construct the local observables $A_{i}^{\prime\prime}=\cos\mu\sigma
_{z}\otimes I^{\otimes m-1}+(-1)^{i}\sin\mu\sigma_{x}^{\otimes m}$ using the
cutting-and-mixing scheme, and $B_{j}^{\prime\prime}=(1-j)\sigma_{z}\otimes
I^{\otimes n-m-1}+j\sigma_{x}^{\otimes n-m}$ such that $\left\{ B_{0}^{\prime\prime}\text{, }B_{1}^{\prime\prime}\right\} =0$, while
$A_{0}^{\prime\prime}+A_{1}^{\prime\prime}$ and $A_{0}^{\prime\prime}-A_{1}^{\prime\prime}$ are proportional to the anticommuting pair
$\sigma_{z}\otimes I^{\otimes m-1}$ and $\sigma_{x}^{\otimes m}$. Hence, we reach the
CHSH-like inequality $\left\langle \sum\nolimits_{i,\text{ }j=0}^{1}(-1)^{ij}A_{i}^{\prime\prime}\otimes B_{j}^{\prime\prime}\right\rangle
\overset{c}{\leq}2$. Here, the last $n-m-1$ qubits are idle since the
observable $\sigma_{x}$ is always measured on each of them. Conditional on the
qubit assignment, one can select the useful stabilizing operators and logical
operators to derive similar CHSH-like inequalities.
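The partitioned GHZ construction above can also be checked numerically; the sketch below assumes the illustrative split $n=4$, $m=2$ and an arbitrary $\phi$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
kron = lambda ops: reduce(np.kron, ops)

n, m = 4, 2                                 # Alice holds 2 qubits, Bob holds 2
phi = np.pi / 8
ghz = np.zeros(2 ** n)
ghz[0], ghz[-1] = np.cos(phi), np.sin(phi)  # cos(phi)|0..0> + sin(phi)|1..1>

mu = np.arctan(np.sin(2 * phi))             # optimal mixing angle
A = [np.cos(mu) * kron([sz, I2]) + (-1) ** i * np.sin(mu) * kron([sx, sx])
     for i in (0, 1)]
B = [kron([sz, I2]), kron([sx, sx])]        # B''_0 and B''_1

val = sum((-1) ** (i * j) * ghz @ np.kron(A[i], B[j]) @ ghz
          for i in (0, 1) for j in (0, 1))
assert np.isclose(val, 2 * np.sqrt(1 + np.sin(2 * phi) ** 2)) and val > 2
```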
Recently, Bell nonlocality in quantum networks, as a generalization of the
standard Bell experiment, has attracted much research attention. Long-distance
quantum networks serving large numbers of users are a central goal of quantum
communication, so it is fundamental to study their nonlocal capacities. A
quantum network involves multiple independent quantum sources, each of which
initially emits a two- or multi-qubit entangled state shared among a set
of observers or agents. There are several obstacles to the study of
nonlocality in a quantum network. The classical correlations of a network
indicate more sophisticated causal relations and lead to stronger constraints
than those in the typical (one-source) Bell scenario. In addition, each
observer can perform a joint measurement on the qubits at hand, which could
result in strong correlations across the network. Unlike the typical linear
Bell-type inequalities, most Bell-type inequalities for various classes of
networks are nonlinear, which implies the nonconvexity of the multipartite
correlation space \cite{star, noncyclic,Poly,luo1}. In the two-source case as
the simplest quantum network, bilocal and nonlocal correlations were
thoroughly investigated \cite{bilocal1, bilocal2}. Subsequently, Bell-type
inequalities for star-shaped and noncyclic networks were studied \cite{star,
noncyclic}. Recently, broader classes of quantum networks based on locally
causal structures have also been investigated \cite{Poly, Poly1, luo1,luo2}.
In particular, computationally efficient algorithms for constructing Bell
inequalities have been proposed \cite{luo1}. These Bell-type inequalities are
tailored for networks with quantum sources emitting either two-qubit Bell
states or generalized GHZ states. The stabilizing operators and logical
operators implicitly play substantial roles in setting up local joint
observables. On the other hand, regarding the potential applications of
encrypted communication in quantum networks, stabilizer quantum
error-correcting codes (SQECCs) can be more useful in quantum network
cryptography, such as in quantum secret sharing \cite{QSS} and secure quantum
key distribution protocols \cite{Preskill}. Revealing Bell nonlocality in a
network is required for detecting potential eavesdropping attacks. Moreover, a
structured quantum state with stabilizing operators and logical operators is
more useful in the engineering of quantum networks \cite{percolation,
memory,repeater1, QSS1, QSS2}.
In this work, we study the\ Bell nonlocality of quantum networks. Hereafter,
we consider a $K$-locality network $\mathcal{N}$ as shown in Fig. (1). There
are $K+M$ agents of which $K$, $\mathcal{S}^{(1)}$\ldots$\mathcal{S}^{(K)}$,
are on the source side and $M$, $\mathcal{R}^{(1)}$\ldots$\mathcal{R}^{(M)}$,
on the receiver side. There are $N$ independent sources. Let $0=e_{0}<e_{1}<\cdots<e_{K}=N$. The agent $\mathcal{S}^{(s)}$ holds the sources
$e_{s-1}+1$,\ldots\ , and $e_{s}$; thus the number of sources that
i\leq e_{s}$) held by $\mathcal{S}^{(s)}$ emits $n_{i}$ particles of which
$n_{i}^{(0)}$ ($\neq0$) are in possession of the agent $\mathcal{S}^{(s)\text{
}}$and $n_{i}^{(m)}$($\geq$ $0$) are sent to the agent $\mathcal{R}^{(m)}$.
Consequently, $\sum_{j=0}^{M}n_{i}^{(j)}=n_{i}$. In the classical networks,
source $i$ emits $n_{i}$ particles described by hidden state $\lambda_{i}$; in
the quantum networks, it emits $n_{i}$ qubits in quantum state $\left\vert
\psi_{(i)}\right\rangle $, which can be either a stabilizer state or a
codeword of a SQECC. In the Bell test, observer $\mathcal{S}^{(s)}$ ($\mathcal{R}^{(m)}$) measures observable $A_{x_{s}}^{(s)}$ ($B_{y_{m}}^{(m)}$) with the associated outcome $a_{s}$ ($b_{m}$), where $x_{s}$,
$y_{m}\in\{0,1\}$ and $a_{s}$, $b_{m}\in\{-1,1\}$. In the following, the index
pair $(i,$ $j)$ denotes the $j$-th particle emitted from source\ $i$, and
$(i$, $j)\rightarrow\mathcal{X}^{(k)}$ indicates that particle $(i$, $j)$ is
at agent $\mathcal{X}^{(k)}$'s hand ($\mathcal{X}\in\{\mathcal{R}$,
$\mathcal{S}\}$). Finally, we denote the particle sets $\mathbb{S}^{(k)}=\{(i,j)|\forall(i,j)\rightarrow\mathcal{S}^{(k)}\}$ and $\mathbb{R}^{(k)}=\{(i,j)|\forall(i,j)\rightarrow\mathcal{R}^{(k)}\}$.
The remainder of this paper is organized as follows: In Sec. II, we
investigate the classical networks, where general local causal models (GLCMs)
are introduced. Then, Bell inequalities associated with classical networks are
proposed. We study Bell nonlocality of $K$-locality quantum networks in Sec.
III. First we review the stabilizer states and SQECCs. Then we demonstrate how
to violate the proposed Bell inequalities using deliberated local observables.
It will be shown that, the local observables can be made up of
\textquotedblleft cut-graft-mixing\textquotedblright\ stabilizing operators
and logical operators. Notably, there are two alternative nonlocal
correlations. One is due to the entanglement of the logical states themselves,
where two suitable stabilizing operators are exploited to construct the local
observables. The other nonlocality results from the entanglement due to the
superposition of logical states, in which case a stabilizing operator and a
logical operator are suitably exploited. We illustrate these two kinds
of nonlocality in terms of the 5-qubit code, which is the smallest stabilizer
code that protects against single-qubit errors. In Sec. IV, we propose Bell
inequalities tailored for nonmaximally entangled states distributed in a
quantum network. Finally, conclusions are drawn in Sec. V.
\section{Classical networks}
\subsection{General local causal models}
The general local causal models (GLCMs) in classical networks can be described
as follows: The $i$-th source is associated with a local random variable as
the hidden state $\lambda_{i}$ in the measure space $(\Omega_{i}$, $\Sigma
_{i}$, $\mu_{i})$\ with the normalization condition $\int_{\Omega_{i}}d\mu
_{i}(\lambda_{i})\ =1$. All systems in the Bell test scenario are considered
in the hidden state $\Lambda=(\lambda_{1},\cdots,\lambda_{N})$ in the
measure space $(\Omega$, $\Sigma$, $\mu)$, where $\Omega=\Omega_{1}\otimes\cdots\otimes\Omega_{N}$ and the measure of $\Lambda$ is given by
$\mu(\Lambda)=\prod\mu_{i}(\lambda_{i})$ with the normalization condition
$\int_{\Omega}d\mu(\Lambda)=1$. In the measurement phase, agent $\mathcal{S}^{(s)}$ performs the measurement $A_{x_{s}}^{(s)}$ on state $\Lambda_{S}^{(s)}$ with the corresponding outcome denoted by $a_{s}\in\{1,-1\}$. The
GLCM suggests a joint conditional probability distribution of the measurement outcomes
\begin{equation}
P(\mathbf{a}|\mathbf{x})=\int_{\Omega}d\mu(\Lambda)\prod_{s=1}^{K}P(a_{s}|x_{s},\Lambda), \label{JP}
\end{equation}
where $\mathbf{a}=(a_{1},a_{2}$, $\cdots$, $a_{K})$ and $\mathbf{x}=(x_{1},x_{2}$, $\cdots$, $x_{K})$; hence, we have the correlation
\begin{equation}
\left\langle A_{x_{1}}^{(1)}\ldots A_{x_{K}}^{(K)}\right\rangle =\sum\nolimits_{\mathbf{a}}a_{1}\cdots a_{K}P(\mathbf{a}|\mathbf{x}).
\end{equation}
On the other hand, in the $K$-locality condition \cite{luo1}, $\mathcal{S}^{(s)}$ can access the hidden state $\Lambda_{S}^{(s)}=(\lambda_{e_{s-1}+1}$, $\cdots$, $\lambda_{e_{s}})$ in the measure space $\Omega_{S}^{(s)}=\Omega_{e_{s-1}+1}\otimes\cdots\otimes\Omega_{e_{s}}$, where
\begin{equation}
\Lambda_{S}^{(s)}\cap\Lambda_{S}^{(s^{\prime})}=\emptyset\text{ }\Leftrightarrow\text{ }s\neq s^{\prime}
\end{equation}
and
\begin{equation}
\cup_{i=1}^{K}\Lambda_{S}^{(i)}=\Lambda.
\end{equation}
Eq. (\ref{JP}) can be rewritten as
\begin{equation}
P(\mathbf{a}|\mathbf{x})=\prod_{s=1}^{K}\int_{\Omega_{S}^{(s)}}d\mu
_{s}(\Lambda_{S}^{(s)})P(a_{s}|x_{s}\text{, }\Lambda_{S}^{(s)}).
\end{equation}
Denote the local expectation as
\begin{equation}
\left\langle A_{x_{s}}^{(s)}\right\rangle =\sum_{a_{s}=-1,1}a_{s}P(a_{s}|x_{s}\text{, }\Lambda_{S}^{(s)}).
\end{equation}
By $K$-locality with the GLCM, we have
\begin{equation}
\left\langle A_{x_{1}}^{(1)}\ldots A_{x_{K}}^{(K)}\right\rangle =\prod
_{s=1}^{K}\left\langle A_{x_{s}}^{(s)}\right\rangle .
\end{equation}
Denote $\Delta^{\pm}A^{(i)}=\frac{1}{2}(\left\langle A_{0}^{(i)}\right\rangle
\pm\left\langle A_{1}^{(i)}\right\rangle )$. Since $-1\leq\left\langle
A_{x_{i}}^{(i)}\right\rangle \leq1$, we have
\begin{equation}
-1\leq\Delta^{-}A^{(i)},\Delta^{+}A^{(i)}\leq1
\end{equation}
and
\begin{equation}
\left\vert \Delta^{+}A^{(i)}\right\vert +\left\vert \Delta^{-}A^{(i)}\right\vert =\max\{\left\vert \left\langle A_{0}^{(i)}\right\rangle
\right\vert ,\left\vert \left\langle A_{1}^{(i)}\right\rangle \right\vert
\}\leq1. \label{delta}
\end{equation}
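The identity in (\ref{delta}) is elementary; a quick numerical spot check over random expectation values in $[-1,1]$ (illustrative only) reads:

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for a0, a1 in rng.uniform(-1, 1, size=(1000, 2)):
    d_plus, d_minus = (a0 + a1) / 2, (a0 - a1) / 2
    # |Delta^+| + |Delta^-| equals max(|a0|, |a1|)
    ok &= bool(np.isclose(abs(d_plus) + abs(d_minus), max(abs(a0), abs(a1))))
assert ok
```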
On the receiving side, let $n_{j}^{(m)}>0$ if $j\in\{j_{1},\cdots,j_{k_{m}}\}$ and $n_{j}^{(m)}=0$ otherwise. In this case,
$\mathcal{R}^{(m)}$ receives the hidden states $\Lambda_{R}^{(m)}=(\lambda_{j_{1}}$, $\cdots$, $\lambda_{j_{k_{m}}})$ in the measure space
$\Omega_{R}^{(m)}=\Omega_{j_{1}}\otimes\cdots\otimes\Omega_{j_{k_{m}}}$,
where $1\leq j_{1}<j_{2}<\cdots<j_{k_{m}}\leq N$. In the measurement phase,
$\mathcal{R}^{(m)}$ performs the measurement $B_{y_{m}}^{(m)}$, $y_{m}\in\{0,1\}$, on the state $\Lambda_{R}^{(m)}$ with the corresponding outcome
denoted by $b_{m}\in\{1,-1\}$. We have
\begin{align}
\left\vert \left\langle B_{y_{m}}^{(m)}\right\rangle \right\vert &
=\left\vert \int_{\Omega_{R}^{(m)}}\prod_{k}d\mu_{k}(\lambda_{k})\sum
_{b_{m}=-1,1}b_{m}P(b_{m}|y_{m}\text{, }\Lambda_{R}^{(m)})\right\vert
\nonumber\\
& \leq1. \label{bb}
\end{align}
\subsection{Bell inequalities}
The correlation strength in the proposed $K$-locality network is evaluated by
two quantities:
\begin{equation}
\mathbf{I}_{K,M}=\frac{1}{2^{K}}\left\langle \prod_{i=1}^{K}(A_{0}^{(i)}+A_{1}^{(i)})\prod_{j=1}^{M}B_{0}^{(j)}\right\rangle
\end{equation}
and
\begin{equation}
\mathbf{J}_{K,M}=\frac{1}{2^{K}}\left\langle \prod_{i=1}^{K}(A_{0}^{(i)}-A_{1}^{(i)})\prod_{j=1}^{M}B_{1}^{(j)}\right\rangle .
\end{equation}
In the classical scenario, we have
\begin{align}
& \left\vert \mathbf{I}_{K,M}\right\vert _{GLCM}\nonumber\\
& =\frac{1}{2^{K}}\int_{\Omega}\prod_{i=1}^{K}\left\vert \left\langle
A_{0}^{(i)}+A_{1}^{(i)}\right\rangle \right\vert \prod_{j=1}^{M}\left\vert
\left\langle B_{0}^{(j)}\right\rangle \right\vert \prod_{k=1}^{N}d\mu
_{k}(\lambda_{k})\nonumber\\
& \leq\int_{\Omega}\prod_{i=1}^{K}\left\vert \Delta^{+}A^{(i)}\right\vert
\prod_{k=1}^{N}d\mu_{k}(\lambda_{k})
\end{align}
and
\begin{align}
& \left\vert \mathbf{J}_{K,M}\right\vert _{GLCM}\nonumber\\
& =\frac{1}{2^{K}}\int_{\Omega}\prod_{i=1}^{K}\left\vert \left\langle
A_{0}^{(i)}-A_{1}^{(i)}\right\rangle \right\vert \prod_{j=1}^{M}\left\vert
\left\langle B_{1}^{(j)}\right\rangle \right\vert \prod_{k=1}^{N}d\mu
_{k}(\lambda_{k})\nonumber\\
& \leq\int_{\Omega}\prod_{i=1}^{K}\left\vert \Delta^{-}A^{(i)}\right\vert
\prod_{k=1}^{N}d\mu_{k}(\lambda_{k}),
\end{align}
where the inequalities are from (\ref{bb}). Before proceeding further, a
useful lemma is introduced as follows:
\textit{Lemma 1 }(Mahler inequality) Let $\alpha_{k}$ and $\beta_{k}$ be
nonnegative real numbers, and let $p$ $\in\mathbb{N}$; then
\begin{equation}
\prod\nolimits_{k=1}^{p}\alpha_{k}^{1/p}+\prod\nolimits_{k=1}^{p}\beta
_{k}^{1/p}\leq\prod\nolimits_{k=1}^{p}(\alpha_{k}+\beta_{k})^{1/p}.
\end{equation}
The proof can be found in \cite{star}.
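A numerical spot check of Lemma 1 over random nonnegative inputs (illustrative, with $p=5$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
for _ in range(1000):
    a, b = rng.uniform(0, 10, p), rng.uniform(0, 10, p)
    # Mahler inequality: geometric means are superadditive
    lhs = np.prod(a) ** (1 / p) + np.prod(b) ** (1 / p)
    rhs = np.prod(a + b) ** (1 / p)
    assert lhs <= rhs + 1e-9
```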
We obtain the following nonlinear Bell inequality:
\begin{align}
& \left\vert \mathbf{I}_{K,M}\right\vert _{GLCM}^{\frac{1}{K}}+\left\vert
\mathbf{J}_{K,M}\right\vert _{GLCM}^{\frac{1}{K}}\nonumber\\
& \leq\{\int_{\Omega}\prod_{i=1}^{K}(\left\vert \Delta^{+}A^{(i)}\right\vert
+\left\vert \Delta^{-}A^{(i)}\right\vert )\prod_{k=1}^{N}d\mu_{k}(\lambda
_{k})\}^{\frac{1}{K}}\nonumber\\
& =\{\int_{\Omega}\prod_{i=1}^{K}(\max\{\left\vert \left\langle A_{0}^{(i)}\right\rangle \right\vert ,\text{ }\left\vert \left\langle A_{1}^{(i)}\right\rangle \right\vert \})\prod_{k=1}^{N}d\mu_{k}(\lambda_{k})\}^{\frac{1}{K}}\nonumber\\
& \leq(\int_{\Omega}\prod_{k=1}^{N}d\mu_{k}(\lambda_{k}))^{\frac{1}{K}}=1,
\label{NonBell}
\end{align}
where the first inequality is from Lemma 1, and the fourth line is a
consequence of (\ref{delta}).
\section{Bell nonlocality of a quantum network}
\subsection{Review of stabilizer states and stabilizer-based quantum
error-correcting codes}
Let the state $\left\vert \psi_{(i)}\right\rangle $ emitted from the quantum
source $i$ be an $n_{i}$-qubit stabilizer state. By definition, an $n_{i}$-qubit stabilizer state is one that is stabilized by a stabilizer group, i.e.,
a nontrivial abelian subgroup of the Pauli group. In particular, if $\left\vert
\psi_{(i)}\right\rangle $ is a codeword of the [[$n_{i}$, $k_{i}$, $d_{i}$]]
SQECC, denote the basis states of the last ($k_{i}$-th) logical qubit as
$\left\vert \overline{0}_{i}\right\rangle $ and $\left\vert \overline{1}_{i}\right\rangle $ and
the corresponding logical bit- and phase-flip operators as $\overline{X}_{(i)}$ and $\overline{Z}_{(i)}$. Without loss of
generality, we have
\begin{equation}
\left\vert \psi_{(i)}\right\rangle =\sum_{z\in\{0,1\}^{\otimes k_{i}}}a_{z}\left\vert \overline{z}\right\rangle =\cos\phi_{i}\left\vert \varphi
_{i}^{0}\right\rangle \left\vert \overline{0}_{i}\right\rangle +\sin\phi
_{i}\left\vert \varphi_{i}^{1}\right\rangle \left\vert \overline{1}_{i}\right\rangle , \label{SQECCC state}
\end{equation}
where $\left\langle \varphi_{i}^{0}|\varphi_{i}^{1}\right\rangle \in\mathbb{R}$, and
$\left\langle \varphi_{i}^{0}|\varphi_{i}^{0}\right\rangle =\left\langle
\varphi_{i}^{1}|\varphi_{i}^{1}\right\rangle =1$. In addition,
\begin{equation}
\overline{X}_{(i)}\left\vert \overline{0}_{i}\right\rangle =\left\vert \overline{1}_{i}\right\rangle ,\ \overline{X}_{(i)}\left\vert \overline{1}_{i}\right\rangle =\left\vert \overline{0}_{i}\right\rangle ,
\end{equation}
\begin{equation}
\overline{Z}_{(i)}\left\vert \overline{c}_{i}\right\rangle =(-1)^{c}\left\vert \overline{c}_{i}\right\rangle \text{ }(c\in\{0,1\}).
\end{equation}
In what follows, we exploit $g_{(i)}$ and $g_{(i)}^{\prime}$ as useful
stabilizing operators, and $\overline{X}_{(i)}$ and $\overline{Z}_{(i)}$ as useful logical operators. Denote the Pauli set of
qubit $(i,$ $j)$ as $\mathbb{P}_{(i,\text{ }j)}=\{X_{(i,\text{ }j)}$,
$Y_{(i,\text{ }j)}$, $Z_{(i,\text{ }j)}$, $I_{(i,\text{ }j)}\}$, where
$X_{(i,\text{ }j)}=\sigma_{x},$ $Y_{(i,\text{ }j)}=\sigma_{y},$ $Z_{(i,\text{
}j)}=\sigma_{z}$, and $I_{(i,\text{ }j)}=I$. Let $h_{(i)}\in\{\overline{X}_{(i)},$ $g_{(i)}^{\prime}\}$ and $h_{(i)}^{\prime}\in\{\overline{X}_{(i)},$ $\overline{Z}_{(i)}\}$. Note
that
\begin{equation}
\left[ g_{(i)}\text{, }h_{(i)}\right] =\left[ g_{(i)}\text{, }h_{(i)}^{\prime}\right] =0, \label{commu}
\end{equation}
and since $g_{(i)}$, $h_{(i)}$, and $h_{(i)}^{\prime}$ are $n_{i}$-fold tensor
products of the Pauli operators, we have
\begin{equation}
g_{(i)}=\prod_{j=1}^{n_{i}}\widehat{s}_{(i,\text{ }j)},\ h_{(i)}=\prod_{j=1}^{n_{i}}\widehat{t}_{(i,\text{ }j)},\ h_{(i)}^{\prime}=\prod_{j=1}^{n_{i}}\widehat{t}_{(i,\text{ }j)}^{\prime},
\end{equation}
where $\widehat{s}_{(i,\text{ }j)}$, $\widehat{t}_{(i,\text{ }j)}$, and
$\widehat{t}_{(i,\text{ }j)}^{\prime}\in\mathbb{P}_{(i,\text{ }j)}$ $\forall
i$, $j.$
Before proceeding further, some notations are introduced as follows: Denote
the qubit index sets as $\mathbb{D}_{i}=\{j|\{\widehat{s}_{(i,\text{ }j)},\widehat{t}_{(i,\text{ }j)}\}=0\}$ and $\mathbb{H}_{i}=\{j|[\widehat{s}_{(i,\text{ }j)},\widehat{t}_{(i,\text{ }j)}]=0\}$, where $\mathbb{D}_{i}\cap\mathbb{H}_{i}=\emptyset$ and $\left\vert \mathbb{D}_{i}\right\vert
+\left\vert \mathbb{H}_{i}\right\vert =n_{i}$. The qubits belonging to the
sets $\mathbb{D}_{1}$, $\cdots$, $\mathbb{D}_{N}$ play substantial roles in
the proposed Bell inequalities of the quantum networks. Note that qubit $(i$,
$j)$ is called idle if $(i$, $j)\in$ $\mathbb{H}_{i}$ and $(i$, $j)\rightarrow
\mathcal{R}^{(k)}$ for some $k$, and the nonidentity operator of $\widehat
{s}_{(i,\text{ }j)}\ $or $\widehat{t}_{(i,\text{ }j)}$ is denoted by
$\widehat{o}_{(i,\text{ }j)}$, which will be repeatedly measured on qubit
$(i$, $j)$ in the Bell test. Let the indicator $\delta_{(i,\text{ }j)}^{D}=1$
if qubit $(i$, $j)\in$ $\mathbb{D}_{i}$, and $0$ if $(i$, $j)\in$
$\mathbb{H}_{i}$. According to (\ref{commu}), we have $\sum_{j=1}^{n_{i}}\delta_{(i,\text{ }j)}^{D}\operatorname{mod}2=0,$ and hence,
\begin{equation}
\sum_{i=1}^{N}\sum_{j=1}^{n_{i}}\delta_{(i,\text{ }j)}^{D}\operatorname{mod}2=0. \label{t1}
\end{equation}
Here, we focus on quantum networks fulfilling the following conditions:
Regarding the qubits held by $\mathcal{S}^{(k)}$
\begin{equation}
\sum\nolimits_{i=e_{k-1}+1}^{e_{k}}\sum\nolimits_{j=1,(i,\text{ }j)\in\mathbb{S}^{(k)}}^{n_{i}}\delta_{(i,\text{ }j)}^{D}\operatorname{mod}2=1\text{ }\forall k=1,2,\cdots,K\text{,} \label{t2}
\end{equation}
and regarding the qubits held by $\mathcal{R}^{(k)}$
\begin{equation}
\sum\nolimits_{i,\text{ }j,(i,\text{ }j)\in\mathbb{R}^{(k)}}\delta_{(i,\text{ }j)}^{D}\operatorname{mod}2=1\text{ }\forall k=1,\cdots,M. \label{t3}
\end{equation}
Combining the constraints (\ref{t1}), (\ref{t2}), and (\ref{t3}), the value
of $K+M$ must be even. Given the qubit distribution, there is considerable
flexibility in choosing suitable $g_{(i)}$, $h_{(i)}$, and $h_{(i)}^{\prime}$
to implement the local observables.
\subsection{Violation of the Bell inequalities in a quantum network}
To implement the local observables on the source side, we assign
\begin{equation}
\widehat{S}^{(k)}=\prod\nolimits_{i,\text{ }j,\text{ }(i,\text{ }j)\in\mathbb{S}^{(k)}}\widehat{s}_{(i,\text{ }j)}\rightarrow\frac{1}{2\cos\theta_{k}}(A_{0}^{(k)}+A_{1}^{(k)}) \label{SS}
\end{equation}
and
\begin{equation}
\widehat{T}^{(k)}=\prod\nolimits_{i^{\prime},\text{ }j^{\prime},\text{ }(i^{\prime},\text{ }j^{\prime})\in\mathbb{S}^{(k)}}\widehat{t}_{(i^{\prime},\text{ }j^{\prime})}\rightarrow\frac{1}{2\sin\theta_{k}}(A_{0}^{(k)}-A_{1}^{(k)}).
\label{TT}
\end{equation}
Since $A_{x_{k}}^{(k)}=A_{x_{k}}^{(k)\dag}$ and $\left( A_{x_{k}}^{(k)}\right)^{2}=A_{x_{k}}^{(k)}A_{x_{k}}^{(k)\dag}=I$, $A_{x_{k}}^{(k)}$ is a unitary
Hermitian operator with eigenvalues $1$ and $-1$. On the receiving side, the
local observables $B_{y_{l}}^{(l)}$ for observer $\mathcal{R}^{(l)}$ are
\begin{equation}
B_{y_{l}}^{(l)}=(1-y_{l})\prod\nolimits_{i,\text{ }j,\text{ }(i,\text{ }j)\in\mathbb{R}^{(l)}}\widehat{s}_{(i,\text{ }j)}+y_{l}\prod\nolimits_{i^{\prime},\text{ }j^{\prime},\text{ }(i^{\prime},\text{ }j^{\prime})\in\mathbb{R}^{(l)}}\widehat{t}_{(i^{\prime},\text{ }j^{\prime})},
\end{equation}
where $\left\{ B_{0}^{(l)}\text{, }B_{1}^{(l)}\right\} =0$ according to
(\ref{t3}). In the measurement phase, $\mathcal{S}^{(k)}$ ($\mathcal{R}^{(k)}$)
randomly measures either the observable $A_{0}^{(k)}$ or $A_{1}^{(k)}$
($B_{0}^{(k)}$ or $B_{1}^{(k)}$) with an outcome of either $1$ or $-1$. In
practice, if the qubit $(i^{\prime\prime},$ $j^{\prime\prime})\rightarrow
\mathcal{R}^{(k)}$ is idle, $\mathcal{R}^{(k)}$ can always measure the
observable $\widehat{o}_{(i^{\prime\prime},j^{\prime\prime})}$ in each round
and discard the outcome in postprocessing if $\widehat{s}_{(i^{\prime\prime
},\text{ }j^{\prime\prime})}=I$ or $\widehat{t}_{(i^{\prime\prime},\text{
}j^{\prime\prime})}=I$.
To evaluate the correlation strength, let $\left\vert \overline{\Psi}\right\rangle =\bigotimes\nolimits_{i=1}^{N}\left\vert \psi_{(i)}\right\rangle $, and we have
\begin{align}
& \left\vert \mathbf{I}_{K,M}\right\vert _{Q}\nonumber\\
& =\frac{1}{2^{K}}\left\vert \sum_{x_{1},\cdots,x_{K}=0}^{1}\left\langle
A_{x_{1}}^{(1)}\cdots A_{x_{K}}^{(K)}\prod_{j=1}^{M}B_{0}^{(j)}\right\rangle
\right\vert \nonumber\\
& =\frac{1}{2^{K}}\left\vert \left\langle \prod_{i=1}^{K}(A_{0}^{(i)}+A_{1}^{(i)})\prod_{j=1}^{M}B_{0}^{(j)}\right\rangle \right\vert \nonumber\\
& =\left\vert \left\langle \prod_{k=1}^{K}\cos\theta_{k}\prod\nolimits_{i,\text{ }j,\text{ }(i,\text{ }j)\in\mathbb{S}^{(k)}}\widehat{s}_{(i,\text{ }j)}\prod_{l=1}^{M}\prod\nolimits_{i^{\prime},\text{ }j^{\prime},\text{ }(i^{\prime},\text{ }j^{\prime})\in\mathbb{R}^{(l)}}\widehat{s}_{(i^{\prime},\text{ }j^{\prime})}\right\rangle \right\vert
\nonumber\\
& =\left\vert \prod_{k=1}^{K}\cos\theta_{k}\left\langle \prod_{i=1}^{N}g_{(i)}\right\rangle \right\vert \nonumber\\
& =\left\vert \prod_{k=1}^{K}\cos\theta_{k}\right\vert ,
\end{align}
where $\left\langle \cdot\right\rangle =\left\langle \overline{\Psi}\left\vert
\cdot\right\vert \overline{\Psi}\right\rangle $ and hence $\left\langle
\prod_{i=1}^{N}g_{(i)}\right\rangle =1$. In addition,
\begin{align}
& \left\vert \mathbf{J}_{K,M}\right\vert _{Q}\nonumber\\
& =\frac{1}{2^{K}}\left\vert \sum_{x_{1},\cdots,x_{K}=0}^{1}(-1)^{\sum_{s}x_{s}}\left\langle A_{x_{1}}^{(1)}\cdots A_{x_{K}}^{(K)}\prod_{j=1}^{M}B_{1}^{(j)}\right\rangle \right\vert \nonumber\\
& =\frac{1}{2^{K}}\left\vert \left\langle \prod_{i=1}^{K}(A_{0}^{(i)}-A_{1}^{(i)})\prod_{j=1}^{M}B_{1}^{(j)}\right\rangle \right\vert \nonumber\\
& =\left\vert \left\langle \prod_{k=1}^{K}\sin\theta_{k}\prod\nolimits_{i,\text{ }j,\text{ }(i,\text{ }j)\in\mathbb{S}^{(k)}}\widehat{t}_{(i,\text{ }j)}\prod_{l=1}^{M}\prod\nolimits_{i^{\prime},\text{ }j^{\prime},\text{ }(i^{\prime},\text{ }j^{\prime})\in\mathbb{R}^{(l)}}\widehat{t}_{(i^{\prime},\text{ }j^{\prime})}\right\rangle \right\vert
\nonumber\\
& =\left\vert \prod_{k=1}^{K}\sin\theta_{k}\prod_{i=1}^{N}\left\langle
h_{(i)}\right\rangle \right\vert .
\end{align}
A useful lemma is introduced as follows:
\textit{Lemma 2} For any $\theta_{1},\theta_{2},\cdots,\theta_{K}\in\lbrack0,\frac{\pi}{2}]$,
\begin{equation}
\left\vert \prod\nolimits_{j=1}^{K}\sin\theta_{j}\right\vert ^{\frac{1}{K}}\leq\sin\overline{\theta}\text{ and }\left\vert \prod\nolimits_{j=1}^{K}\cos\theta_{j}\right\vert ^{\frac{1}{K}}\leq\cos\overline{\theta},
\end{equation}
where $\overline{\theta}=\frac{1}{K}\sum_{j=1}^{K}\theta_{j}$. The proof can
be found in \cite{luo1}.
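Lemma 2 can likewise be spot-checked numerically (illustrative, with $K=4$ and random angles):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
for _ in range(1000):
    theta = rng.uniform(0, np.pi / 2, K)
    mean = theta.mean()
    # geometric means of sin/cos are bounded by sin/cos of the mean angle
    assert np.prod(np.sin(theta)) ** (1 / K) <= np.sin(mean) + 1e-9
    assert np.prod(np.cos(theta)) ** (1 / K) <= np.cos(mean) + 1e-9
```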
Let $c_{i}=\left\langle h_{(i)}\right\rangle \leq1$ and $C=\left\vert
\prod_{i=1}^{N}c_{i}\right\vert ^{\frac{1}{K}}$. We have
\begin{align}
& \left\vert \mathbf{I}_{K,M}\right\vert _{Q}^{\frac{1}{K}}+\left\vert
\mathbf{J}_{K,M}\right\vert _{Q}^{\frac{1}{K}}\nonumber\\
& =\left\vert \prod\nolimits_{k=1}^{K}\cos\theta_{k}\right\vert ^{\frac{1}{K}}+\left\vert \prod\nolimits_{k=1}^{K}\sin\theta_{k}\right\vert ^{\frac{1}{K}}\left\vert C\right\vert \nonumber\\
& \leq\left\vert \cos\overline{\theta}\right\vert +\left\vert \sin
\overline{\theta}\right\vert \left\vert C\right\vert \nonumber\\
& \leq\sqrt{1+C^{2}}, \label{Max}
\end{align}
where the first inequality in (\ref{Max}) follows from Lemma 2 and the second
inequality follows from the Cauchy-Schwarz inequality. To reach the maximal
violation, by setting $\theta_{1}=\ldots=\theta_{K}=\overline{\theta}$, and
$\tan\overline{\theta}=C$, we obtain
\begin{equation}
\max(\left\vert \mathbf{I}_{K,M}\right\vert _{Q}^{\frac{1}{K}}+\left\vert
\mathbf{J}_{K,M}\right\vert _{Q}^{\frac{1}{K}})=\sqrt{1+C^{2}}>1.
\label{Violation}
\end{equation}
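The optimization leading to (\ref{Violation}) can be verified numerically; the sketch below scans $\overline{\theta}$ for an arbitrary $C\in(0,1]$:

```python
import numpy as np

C = 0.7                                     # any value of |prod <h_(i)>|^{1/K} in (0, 1]
theta = np.linspace(0, np.pi / 2, 100001)
vals = np.cos(theta) + C * np.sin(theta)    # |I|^{1/K} + |J|^{1/K} at equal angles
best = vals.max()

assert np.isclose(best, np.sqrt(1 + C ** 2), atol=1e-6)  # maximum is sqrt(1+C^2)
assert np.isclose(theta[vals.argmax()], np.arctan(C), atol=1e-4)  # at tan(theta)=C
assert best > 1                             # exceeds the classical bound of 1
```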
Some discussion is in order.\ The state $\left\vert \psi_{(i)}\right\rangle $
can contribute to the nonlocality of the quantum network through different
facets of its nonlocality. Loosely speaking, there are $C_{2}^{2^{n_{i}-k_{i}}}$ ways of choosing suitable $g_{(i)}$ and $g_{(i)}^{\prime}$ and
$k_{i}(2^{2(n_{i}-k_{i})})$ ways of choosing suitable $g_{(i)}$ and
$\overline{X}_{(i)}$, which reflects the flexibility of testing Bell
nonlocality in the quantum networks. The maximum value $C=1$ ($\overline{\theta}=\frac{\pi}{4}$), and hence $\max(\left\vert \mathbf{I}_{K,M}\right\vert
_{Q}^{\frac{1}{K}}+\left\vert \mathbf{J}_{K,M}\right\vert _{Q}^{\frac{1}{K}})=\sqrt{2}$, can be reached if $c_{i}=1$ $\forall$ $i$; that is, if $h_{(i)}$ stabilizes
$\left\vert \psi_{(i)}\right\rangle $ $\forall$ $i$. As shown in (\ref{SS})
and (\ref{TT}), one can benefit from the stabilizing operators in designing
local observables to achieve the maximal violations of Bell inequalities in
$K$-locality quantum networks. In detail, $c_{i}=1$ if $h_{(i)}=\overline{X}_{(i)}$, $\left\vert \left\langle \varphi_{i}^{0}|\varphi_{i}^{1}\right\rangle \right\vert =1$, and $\phi_{i}=\frac{\pi}{4}$. The Bell nonlocality here is due to the superposition of $\left\vert
\overline{0}_{i}\right\rangle $ and $\left\vert \overline{1}_{i}\right\rangle
$; if $h_{(i)}=g_{(i)}^{\prime}$, $c_{i}$ is certain to be $1$. The Bell
nonlocality in a network is due to that of either the stabilizer state or the
logical states of the stabilizer code [[$n_{i}$, $k_{i}$, $d_{i}$]]. Note that
such nonlocality involving the stabilizing operators $g_{(i)}$ and
$g_{(i)}^{\prime}$ can also be obtained by using specific states
stabilized by the same stabilizing operators. For example, suppose source $i$
emits a four-qubit state with stabilizing operators $g_{(i)}=\sigma_{z}^{\otimes4}$ and $g_{(i)}^{\prime}=\sigma_{x}^{\otimes4}$; both operators
also stabilize the 4-qubit Smolin state $\rho_{\text{Smolin}}=\frac{1}{4}\sum\nolimits_{i,j=0,1}(\left\vert \Psi_{ij}\right\rangle \left\langle
\Psi_{ij}\right\vert )^{\otimes2}$, where $\left\vert \Psi_{ij}\right\rangle =\sigma_{z}^{i}\otimes\sigma_{x}^{j}[\frac{1}{\sqrt{2}}(\left\vert 00\right\rangle
+\left\vert 11\right\rangle )]$. Eventually, Bell nonlocality of at most
$\left\vert \mathbb{D}_{i}\right\vert $-qubit entanglement in $\left\vert
\psi_{(i)}\right\rangle $ is involved.
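The claim that $\sigma_{z}^{\otimes4}$ and $\sigma_{x}^{\otimes4}$ also stabilize the Smolin state can be confirmed directly; the following illustrative sketch builds $\rho_{\text{Smolin}}$ from two copies of each Bell state:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
pw = lambda M, k: M if k else np.eye(2, dtype=complex)     # M^k for k in {0,1}

rho = np.zeros((16, 16), dtype=complex)
for i in (0, 1):
    for j in (0, 1):
        p = kron([pw(sz, i), pw(sx, j)]) @ bell   # Bell state |Psi_ij>
        pp = np.kron(p, p)                        # two copies: 4-qubit component
        rho += np.outer(pp, pp.conj()) / 4        # equal mixture -> Smolin state

for g in (kron([sz] * 4), kron([sx] * 4)):
    assert np.isclose(np.trace(g @ rho).real, 1.0)  # <g> = 1
    assert np.allclose(g @ rho @ g, rho)            # g stabilizes rho
```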
As an illustration, let $K=N=2$ and $M=1$, and let sources 1 and 2 each emit
the codeword states of $[[5$, $1$, $3]]$ SQECC with four stabilizer generators
(the subscript qubit index $(i,$ $j)$ is shortened as $j$)
\begin{align}
\mathfrak{g}_{1} & =X_{1}Z_{2}Z_{3}X_{4}I_{5},\ \mathfrak{g}_{2}=I_{1}X_{2}Z_{3}Z_{4}X_{5},\nonumber\\
\mathfrak{g}_{3} & =X_{1}I_{2}X_{3}Z_{4}Z_{5},\ \mathfrak{g}_{4}=Z_{1}X_{2}I_{3}X_{4}Z_{5}. \label{gg}
\end{align}
The well-known logical bit-flip and phase-flip operators are $\overline{X}^{\prime}=\prod\nolimits_{i=1}^{5}X_{i}$ and $\overline{Z}^{\prime}=\prod\nolimits_{i=1}^{5}Z_{i}$. Observers $\mathcal{S}^{(1)}$ and $\mathcal{S}^{(2)}$ hold qubits $(1,$ $1)$ and $(2,$ $1)$,
respectively, while observer $\mathcal{R}^{(1)}$ holds the other 8 qubits, as
shown in Fig. (2). Note that, in the following examples, the bilocal
inequality is
\begin{equation}
\sqrt{\left\vert \mathbf{I}_{2,1}\right\vert }+\sqrt{\left\vert \mathbf{J}_{2,1}\right\vert }\leq1, \label{bibi}
\end{equation}
which is exactly the bilocal inequality for binary inputs and outputs in
\cite{bilocal1}.
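The algebra of the generators (\ref{gg}) is easy to confirm numerically; the illustrative sketch below checks that they commute pairwise, square to the identity, multiply to the stabilizing operator $Z_{1}Z_{2}X_{3}I_{4}X_{5}$ used in the examples below, and commute with $\overline{X}^{\prime}$ and $\overline{Z}^{\prime}$:

```python
import numpy as np
from functools import reduce

P = {'I': np.eye(2),
     'X': np.array([[0, 1], [1, 0]], dtype=float),
     'Z': np.array([[1, 0], [0, -1]], dtype=float)}
op = lambda word: reduce(np.kron, [P[c] for c in word])  # 5-letter Pauli string

g = [op('XZZXI'), op('IXZZX'), op('XIXZZ'), op('ZXIXZ')]  # generators g1..g4
for a in g:
    assert np.allclose(a @ a, np.eye(32))                 # each squares to I
    for b in g:
        assert np.allclose(a @ b, b @ a)                  # pairwise commuting
# their product is the stabilizing operator g_(i) used in Examples (a) and (b)
assert np.allclose(g[0] @ g[1] @ g[2] @ g[3], op('ZZXIX'))
# the logical operators commute with every generator
for L in (op('XXXXX'), op('ZZZZZ')):
    assert all(np.allclose(L @ a, a @ L) for a in g)
```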
\textit{Example (a)}: Let $\left\vert \psi_{(1)}\right\rangle =\left\vert \psi_{(2)}\right\rangle =\cos\phi\left\vert \overline{0}\right\rangle +\sin\phi\left\vert \overline{1}\right\rangle $. Here, we
choose the useful operators $g_{(i)}=\mathfrak{g}_{1}\mathfrak{g}_{2}\mathfrak{g}_{3}\mathfrak{g}_{4}=Z_{1}Z_{2}X_{3}I_{4}X_{5}$ and
$h_{(i)}=\overline{X}_{(i)}=\overline{X}^{\prime}$. In this case,
$\left\vert \mathbb{D}_{i}\right\vert =2$, and $(i,$ $3)$, $(i,$ $4)$, $(i,$
$5)$ are idle qubits. $A_{x_{i}}^{(i)}=\frac{1}{\sqrt{2}}(Z_{(i,1)}+(-1)^{x_{i}}X_{(i,1)})$, $B_{0}^{(1)}=\prod\nolimits_{i=1}^{2}Z_{(i,2)}X_{(i,3)}X_{(i,5)}$, and $B_{1}^{(1)}=\prod\nolimits_{i=1}^{2}X_{(i,2)}X_{(i,3)}X_{(i,4)}X_{(i,5)}$. Note that $\mathbf{I}_{2,1}$ and $\mathbf{J}_{2,1}$ here are formally equivalent to $I^{22}$ and $J^{22}$ in
\cite{bilocal1}. Denote the 5-qubit state $\left\vert \varphi\right\rangle
=(\cos\phi\left\vert 0\right\rangle _{1}\left\vert 0\right\rangle _{2
+\sin\phi\left\vert 1\right\rangle _{1}\left\vert 1\right\rangle
_{2})\left\vert +\right\rangle _{3}\left\vert +\right\rangle _{4}\left\vert
+\right\rangle _{5}$. It is easy to verify that $g_{(i)}\left\vert
\varphi\right\rangle =\left\vert \varphi\right\rangle $ and $\left\langle
\psi_{(i)}\left\vert h_{(i)}\right\vert \psi_{(i)}\right\rangle =\left\langle
\varphi\left\vert h_{(i)}\right\vert \varphi\right\rangle $. As a result, the
same violation can be obtained using either $\left\vert \psi_{(1)
\right\rangle \left\vert \psi_{(2)}\right\rangle $ or $\left\vert
\varphi\right\rangle ^{\otimes2}$.
\textit{Example (b)}: Let $\left\vert \psi_{(1)}\right\rangle =\left\vert \overline{0}\right\rangle $, $\left\vert \psi_{(2)}\right\rangle =\left\vert \overline{1}\right\rangle $. Here, we choose the useful operators
$g_{(i)}=\mathfrak{g}_{1}\mathfrak{g}_{2}\mathfrak{g}_{3}\mathfrak{g}_{4}=Z_{1}Z_{2}X_{3}I_{4}X_{5}$ and $h_{(i)}=\mathfrak{g}_{1}$ for any
$i\in\{1,2\}$. In this case, the local observables $A_{x_{i}}^{(i)}$ and
$B_{0}^{(1)}$ are the same as those in example (a), while $B_{1}^{(1)}=\prod\nolimits_{i=1}^{2}Z_{(i,2)}Z_{(i,3)}X_{(i,4)}$. Here, $\left\vert
\mathbb{D}_{i}\right\vert =2$ and $(i,$ $2)$, $(i,$ $4)$, $(i,$ $5)$ are idle
qubits. However, note that $g_{(i)}\left\vert \varphi^{\prime}\right\rangle
=h_{(i)}\left\vert \varphi^{\prime}\right\rangle =\left\vert \varphi^{\prime}\right\rangle $, where $\left\vert \varphi^{\prime}\right\rangle =\frac{1}{\sqrt{2}}(\left\vert 0\right\rangle _{1}\left\vert +\right\rangle _{3}+\left\vert 1\right\rangle _{1}\left\vert -\right\rangle _{3})\left\vert 0\right\rangle _{2}\left\vert +\right\rangle _{4}\left\vert +\right\rangle _{5}$. The maximal violation can be obtained using either $\left\vert
\psi_{(1)}\right\rangle \left\vert \psi_{(2)}\right\rangle $ or $\left\vert
\varphi^{\prime}\right\rangle ^{\otimes2}$.
Consequently, although the two 5-qubit logical states $\left\vert
\overline{0}\right\rangle $ and $\left\vert \overline{1}\right\rangle $ and the
codeword states are genuinely entangled \cite{Add, bdd}, one can replace the
genuinely entangled state with a state involving only two-qubit entanglement,
either $\left\vert \varphi\right\rangle $ or $\left\vert \varphi^{\prime}\right\rangle $, to reach the same correlation strength. It is not known
whether the Bell nonlocality of genuine entanglement can be revealed in a
$K$-locality quantum network. Specifically, it is not known whether the
genuine entanglement of $\left\vert \psi_{(i)}\right\rangle $ can be deduced
from violations of variant Bell inequalities of a quantum network involving
different stabilizing operators.
\section{Bell inequalities tailored for nonmaximal entangled states in a
quantum network}
If $c_{i}<1$ for at least one $i$, $\left\vert \overline{\Psi}\right\rangle $
cannot achieve the maximal violation of the nonlinear Bell inequality
(\ref{NonBell}). To explore the Bell inequalities maximally violated by
$\left\vert \overline{\Psi}\right\rangle $, recall that in the two-qubit case
($N=K=M=1$, $n_{1}=2$), the tilted CHSH operator $\mathbf{B}_{\beta
\text{-}CHSH}=\beta B_{0}+$ $\mathbf{B}_{CHSH}$ is exploited using the logical
phase-flip operator $B_{0}$ with appropriate parameter $\beta$. Although it is
unlikely that $\prod_{m=1}^{M}B_{0}^{(m)\text{ }}=$ $\prod_{i=1}^{N}\overline{Z}_{(i)}^{\text{ }}$ in quantum networks, it will be shown that the
logical phase-flip operators are still useful in finding the tilted Bell
inequalities. Denote two index sets $\mathfrak{C}=\{i|c_{i}=1\}$ and
$\mathfrak{c}=\{i^{\prime}|c_{i^{\prime}}<1\}$, where $\mathfrak{C}\cap\mathfrak{c}=\emptyset$ and $\left\vert \mathfrak{C}\right\vert +\left\vert
\mathfrak{c}\right\vert =N$. Without loss of generality, let $i\in
\mathfrak{C}$ for $1\leq i\leq\left\vert \mathfrak{C}\right\vert $ and
$i\in\mathfrak{c}$ for $\left\vert \mathfrak{C}\right\vert +1\leq
i\leq\left\vert \mathfrak{C}\right\vert +\left\vert \mathfrak{c}\right\vert
=N$. Denote $\left\vert \overline{\Psi}_{\mathfrak{C}}\right\rangle
=\prod\nolimits_{i=1\text{ }}^{\left\vert \mathfrak{C}\right\vert }\left\vert
\psi_{(i)}\right\rangle $ and $\left\vert \overline{\Psi}_{\mathfrak{c}}\right\rangle =\prod\nolimits_{i=\left\vert \mathfrak{C}\right\vert +1\text{
}}^{N}\left\vert \psi_{(i)}\right\rangle $; hence, we have $\left\vert
\overline{\Psi}\right\rangle =\left\vert \overline{\Psi}_{\mathfrak{C}}\right\rangle \otimes$ $\left\vert \overline{\Psi}_{\mathfrak{c}}\right\rangle $. Given $i^{\prime}\in\mathfrak{c}$, set the suitable operator
$h_{(i^{\prime})}^{\prime}=\overline{Z}_{(i^{\prime})}^{\text{ }}=\prod_{j^{\prime}=1}^{n_{i^{\prime}}}\widehat{t^{\prime}}_{(i^{\prime},\text{
}j^{\prime})}$ that fulfills the following conditions: (i) if $(i^{\prime},$
$j^{\prime})\rightarrow\mathcal{S}^{(k)},$ then $\widehat{t^{\prime}}_{(i^{\prime},\text{ }j^{\prime})}=I;$ and (ii) if $(i^{\prime},$ $j^{\prime
})\rightarrow\mathcal{R}^{(k)}$, then either qubit $(i^{\prime},$ $j^{\prime
})$ is idle or $\widehat{t^{\prime}}_{(i^{\prime},\text{ }j^{\prime})}=\widehat{s}_{(i^{\prime},\text{ }j^{\prime})}$ if qubit $(i^{\prime},$
$j^{\prime})$ is not idle. The logical phase-flip operator can be revised as
\begin{equation}
\overline{Z}_{(i^{\prime})}^{\text{ }}=\underset{j^{\prime},\text{ }(i^{\prime},\text{ }j^{\prime})\text{: on the source side}}{\prod}\widehat{s}_{(i^{\prime},\text{ }j^{\prime})}\prod_{j^{\prime\prime},\text{ }(i^{\prime},\text{ }j^{\prime\prime}):\text{idle}}\widehat{o^{\prime}}_{(i^{\prime},\text{ }j^{\prime\prime})}, \label{Zop}
\end{equation}
where $\widehat{o^{\prime}}_{(i^{\prime},\text{ }j^{\prime\prime})}\in\{\widehat{o}_{(i^{\prime},\text{ }j^{\prime\prime})},I\}$. Instead of
measuring $B_{0}^{(k)\text{ }}$ directly, the agent $\mathcal{R}^{(k)}$
measures
\begin{equation}
\overline{B}_{0}^{(k)\text{ }}=B_{0}^{(k)\text{ }}\underset{\widehat{s}_{(i^{\prime},\text{ }j^{\prime})}=I}{\underset{(i^{\prime},\text{ }j^{\prime})\rightarrow\mathcal{R}^{(k)}}{\prod}}\widehat{t^{\prime}}_{(i^{\prime},j^{\prime})}. \label{BB0}
\end{equation}
The outcome of $B_{0}^{(k)\text{ }}$ can be obtained from that of $\overline{B}_{0}^{(k)\text{ }}$ by dropping any outcome of the local observables
$\widehat{t^{\prime}}_{(i^{\prime},j^{\prime})}$ in (\ref{BB0}); the outcome
of $\prod_{i^{\prime},i^{\prime}\in\mathfrak{c}}\overline{Z}_{(i^{\prime})}^{\text{ }}$ can be obtained from that of $\prod_{k=1}^{K}\overline{B}_{0}^{(k)\text{ }}$ by dropping any outcome of the qubit $(i^{\prime},$
$j^{\prime\prime})$ if (i) $i^{\prime}\in\mathfrak{C}$ or (ii) $i^{\prime}\in\mathfrak{c}$ and $\widehat{o^{\prime}}_{(i^{\prime},\text{ }j^{\prime\prime})}$ in (\ref{Zop}) is the unit matrix. In this case, we
propose the tilted Bell inequalities tailored for $\left\vert \overline{\Psi}\right\rangle $:
\begin{equation}
G_{N,K}^{\beta}=\beta\left\vert P_{N,K}\right\vert ^{\frac{1}{K}}+\left\vert
I_{N,K}\right\vert ^{\frac{1}{K}}+\left\vert J_{N,K}\right\vert ^{\frac{1}{K}}\overset{c}{\leq}\beta+1,
\end{equation}
where $P_{N,K}=\prod\nolimits_{i=1\text{ }}^{\left\vert \mathfrak{C}\right\vert }I^{\otimes n_{i}}\prod\nolimits_{i^{\prime}=\left\vert
\mathfrak{C}\right\vert +1\text{ }}^{N}\overline{Z}_{(i^{\prime})}^{\text{ }}$. However, it is very difficult to exploit the sum-of-squares decomposition to
find the tailored Bell operators in quantum networks with an extremely large
Hilbert space \cite{non2, SOS1}. Our strategy is to simplify $G_{N,K}^{\beta}$
as tilted CHSH inequalities. In detail, according to Lemma 2, we have
\begin{equation}
G_{N,K}^{\beta}\leq\overline{G}_{N,K}(\beta,\overline{\phi},\overline{\theta})=\beta(\cos2\overline{\phi})^{\frac{\left\vert \mathfrak{c}\right\vert }{K}}+\cos\overline{\theta}+\sin\overline{\theta}(\sin2\overline{\phi})^{\frac{\left\vert \mathfrak{c}\right\vert }{K}},
\end{equation}
where equality holds when $\theta_{1}=\ldots=\theta_{N}=\overline{\theta}$.
Let $\beta_{\max}$ be the parameter $\beta$ satisfying $\frac{\partial}{\partial\overline{\theta}}\overline{G}_{N,K}|_{(\overline{\theta}_{\max},\overline{\phi})}=\frac{\partial}{\partial\overline{\phi}}\overline{G}_{N,K}|_{(\overline{\theta}_{\max},\overline{\phi})}=0$. We have
\begin{equation}
\tan\overline{\theta}_{\max}=(\sin2\overline{\phi})^{\frac{\left\vert
\mathfrak{c}\right\vert }{K}}, \label{tang}
\end{equation}
and
\begin{equation}
\beta_{\max}=\frac{(\tan2\overline{\phi})^{\frac{2\left\vert \mathfrak{c}\right\vert }{K}-2}}{\sqrt{(1+\tan^{2}2\overline{\phi})^{\frac{\left\vert
\mathfrak{c}\right\vert }{K}}+(\tan2\overline{\phi})^{\frac{2\left\vert
\mathfrak{c}\right\vert }{K}}}}. \label{betamax}
\end{equation}
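As a numerical sanity check (a sketch with hypothetical example values $\left\vert \mathfrak{c}\right\vert /K=3/5$ and $\overline{\phi}=0.55$, not taken from the paper), one can verify that with $\overline{\theta}_{\max}$ and $\beta_{\max}$ computed from the two closed-form expressions above, the gradient of $\overline{G}_{N,K}$ indeed vanishes at the stationary point and the stationary value exceeds the classical bound $\beta_{\max}+1$:

```python
import math

def G_bar(beta, phi, theta, r):
    # \bar{G}_{N,K}(beta, phi, theta) with r = |c|/K
    return (beta * math.cos(2 * phi) ** r
            + math.cos(theta)
            + math.sin(theta) * math.sin(2 * phi) ** r)

# hypothetical example values (not from the paper): |c|/K = 3/5, phi = 0.55 rad
r, phi = 3 / 5, 0.55
theta = math.atan(math.sin(2 * phi) ** r)                              # tan(theta_max)
t = math.tan(2 * phi)
beta = t ** (2 * r - 2) / math.sqrt((1 + t ** 2) ** r + t ** (2 * r))  # beta_max

# the gradient of G_bar vanishes at (theta_max, phi) when beta = beta_max
eps = 1e-6
d_theta = (G_bar(beta, phi, theta + eps, r) - G_bar(beta, phi, theta - eps, r)) / (2 * eps)
d_phi = (G_bar(beta, phi + eps, theta, r) - G_bar(beta, phi - eps, theta, r)) / (2 * eps)
assert abs(d_theta) < 1e-5 and abs(d_phi) < 1e-5

# the stationary value exceeds the classical bound beta_max + 1
assert G_bar(beta, phi, theta, r) > beta + 1
```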
As a result, the state $\left\vert \overline{\Psi}_{\mathfrak{C}}\right\rangle
\otimes$ $\left\vert \overline{\Psi}_{\mathfrak{c}}\right\rangle $ can
maximally violate the tailored Bell inequality $G_{N,K}^{\beta_{\max}}=\overline{G}_{N,K}(\beta_{\max},\overline{\phi},\overline{\theta}_{\max})\overset{c}{\leq}\beta_{\max}+1$ by setting $\theta_{1}=\ldots=\theta_{N}=\overline{\theta}_{\max}$. Note that $G_{N,K}^{\beta_{\max}}$ coincides
with the tilted CHSH inequality in the case that $\frac{\left\vert
\mathfrak{c}\right\vert }{K}=1$ \cite{non2}. \bigskip
As an example, we consider the following ``star-like'' quantum network as shown
in Fig. 2. Here, set $K=N$ as an odd integer and $M=1$. For any $i$, let
$\left\vert \psi_{(i)}\right\rangle =\cos\phi_{i}\left\vert \overline{0}_{i}\right\rangle +\sin\phi_{i}\left\vert \overline{1}_{i}\right\rangle $ be
the codeword of the $[[5$, $1$, $3]]$ QECC, where $i\in\mathfrak{C}$ if
$\phi_{i}=\frac{\pi}{4}$ and $i\in\mathfrak{c}$ if $\phi_{i}=\overline{\phi
}\neq\frac{\pi}{4}$. The useful operators are $g_{(i)}=\mathfrak{g}_{4}$,
$h_{(i)}=\overline{X}_{(i)}^{\prime\text{ }}$, and $h_{(i)}^{\prime}=\overline{Z}_{(i)}^{\text{ }}=\mathfrak{g}_{1}\mathfrak{g}_{3}\overline
{Z}_{(i)}^{\prime\text{ }}=-I_{(i,2)}X_{(i,3)}X_{(i,4)}I_{(i,5)}Z_{(i,1)}$. On
the source side, agent $\mathcal{S}^{(i)}$ possesses qubit $(i$, $2)$, and
using the cut-and-mix method, the local observables are set as $A_{x_{i}}^{(i)}=\cos\theta_{i}Z_{(i,\text{ }2)}+(-1)^{x_{i}}\sin\theta_{i}X_{(i,\text{
}2)}$ for any $i=1,..,N$. On the receiving side, the only agent $\mathcal{R}^{(1)}$ possesses qubits $(j$, $1)$, $(j$, $3),(j$, $4)$, and $(j$, $5)$,
$1\leq j\leq N$. Notably, $\widehat{t^{\prime}}_{(j,\text{ }2)}=I$,
$\widehat{t^{\prime}}_{(j,\text{ }1)}=$ $\widehat{s}_{(j,\text{ }1)}=\sigma_{z}$, and qubits $(j$, $3),(j$, $4)$, and $(j$, $5)$ are idle. Here,
two local observables for $\mathcal{R}^{(1)}$ are $B_{0}^{(1)}=\prod
\nolimits_{j=1}^{N}X_{(j,3)}I_{(j,4)}X_{(j,5)}Z_{(j,1)}$ and $B_{1}^{(1)}=\prod\nolimits_{j=1}^{N}X_{(j,3)}X_{(j,4)}X_{(j,5)}X_{(j,1)}$. In this
case, one can set $\overline{B}_{0}^{(1)}=-B_{0}^{(1)}\prod\nolimits_{j=1
^{N}X_{(j,4)}$. In practice, $\mathcal{R}^{(1)}$ randomly measures
$\sigma_{x}$ or $\sigma_{z}$ on the qubits $(1$, $1)$, \ldots, $(N$, $1)$, and
always measures $\sigma_{x}$ on qubits $(j$, $3),(j$, $4)$, and $(j$, $5)$,
$1\leq j\leq N$. In this scenario, the numerical simulation shows that $\max
G_{N,K}^{\beta}=\beta_{\max}$.
\section{Conclusions}
In conclusion, we have studied quantum networks with sources emitting different
stabilizer states. To characterize Bell nonlocality, knowledge of the emitted
entangled states is demonstrated to be quite useful. Regarding qubit
distributions in quantum networks, nonlinear Bell inequalities are proposed,
which can reveal different facets of Bell nonlocality. On the other hand, by
fully exploiting the logical bit-flip and phase-flip operators, we derive
tilted nonlinear Bell inequalities tailored for the codewords of the $[[5, 1, 3]]$
QECC with a specific qubit distribution. It would be interesting to construct
tilted nonlinear Bell inequalities maximally violated by $\left\vert
\overline{\Psi}\right\rangle $ comprising generic codewords of QECCs in
quantum networks.
\section{Introduction} \label{sec:intro}
In this paper, we present \N{}: a dataset of diachronic semantic change on the lexical level for Norwegian. Such datasets are required to evaluate lexical semantic change detection systems or contextualized embeddings in general, but can also be useful for historical linguists. \N{} is naturally accompanied by publicly available historical corpora.
These historical corpora were used to produce \N{} via a meticulous manual annotation effort following the DWUG (Diachronic Word Usage Graphs) methodology \citelanguageresource{DWUG}: in short, annotators yield judgements about how semantically similar a word $x$ is in the sentence pairs shown to them. As such, \N{} is fully compatible with datasets for other languages used, for example, in the SemEval'2020 shared task on semantic change detection \cite{schlechtweg-etal-2020-semeval}. However, it differs from most of them in that \N{} features two independent Norwegian datasets, dubbed here `Subset 1' and `Subset 2'. Subset 1 deals with semantic change occurring between the period of 1929-1965 and the period of 1970-2013. Subset 2 instead focuses on the time periods of 1980-1990 and 2012-2019. Since the annotation procedure was exactly the same for both subsets, they can be used interchangeably as train-test splits for each other.
\N{} is published in full\footnote{\url{https://github.com/ltgoslo/nor_dia_change}} with all the raw annotation judgements so that any preferred scoring workflow can be applied to it. However, we stick to the standard DWUG scoring procedure described below and provide a graded and a binary change score to each target word.
The rest of the paper is structured as follows. In section~\ref{sec:related}, we put our annotation effort in the context of prior work on semantic change detection. Section~\ref{sec:corpora} describes the historical corpora of Norwegian we used to sample sentences for annotation. In section~\ref{sec:target}, we explain how we selected the target words to annotate. Further on, section~\ref{sec:annotation} describes the annotation process itself. In section~\ref{sec:analysis}, we conduct qualitative linguistic analyses of the annotation results and sanity-check the output of the scoring algorithms we used. Finally, in section~\ref{sec:conclusion}, we conclude and discuss possible future avenues for our work.
\section{Related work} \label{sec:related}
Studying semantic change is a venerable field in linguistics; see \cite{bloomfield} and \cite{blank1999historical}, among many others. However, in natural language processing, the topic of automatically tracing semantic change received comparatively little attention until the advent of easy-to-use distributional representations of lexical meaning (word embeddings). \newcite{kulkarni2015statistically}, \newcite{hamilton-etal-2016-cultural}, \newcite{hamilton-etal-2016-diachronic} and others have shown the potential that distributional semantics has for this task. We refer interested readers to comprehensive reviews on the topic: one can mention \cite{kutuzov-etal-2018-diachronic} and \cite{nina_tahmasebi_2021_5040302}, among others.
In order to compare different semantic change detection methods, one needs access to high-quality evaluation datasets. Although attempts to create such resources started at least as early as 2011 \cite{baroni:2011}, they were diverse, far from being standardised, and suffered from sparse and biased data selection.
Currently, the mainstream approach to avoiding these problems is to employ \textit{graded} contextual word meaning annotation, with the DURel framework \cite{schlechtweg-etal-2018-diachronic} being the most prominent example. In it, annotators are shown contextualized word usage pairs and asked to judge the semantic similarity of two usages of the same word on a graded scale. After that, a change score is inferred from these judgements in one way or another (see section~\ref{sec:annotation} for more details). The SemEval'2020 shared task on unsupervised semantic change detection employed datasets annotated within this framework, further developing it to include smart sampling of usage pairs and clustering of word usages based on their relations to each other in a usage graph \cite{schlechtweg-etal-2020-semeval}.
A large trove of diachronic \textit{word usage graphs} annotated this way exists for several languages (English, German, Swedish, Latin) \cite{schlechtweg-etal-2021-dwug}, but this set lacks Norwegian. \N{} is aimed at filling in this gap, by being fully compatible to the existing datasets (produced using the same procedure).
Additionally, we follow \cite{rodina-kutuzov-2020-rusemshift} and \cite{kutuzov-pivovarova-2021-three} in providing \textit{several} independent datasets for a particular language (in our case, Norwegian). These \textit{subsets} differ in their target time periods and word lists, but are annotated in the same way and by the same team. This potentially allows the development of \textit{supervised} semantic change detection systems which can be trained or fine-tuned on one subset and then evaluated on another. Systems using fine-tuning instead of `zero-shot' approaches turned out to be the best in the recent semantic change detection shared task for Russian \cite{rushifteval2021}, and so we consider this possibility to be crucial for our \N{} dataset as well.
\section{Corpora used} \label{sec:corpora}
The underlying corpora of our work are the NBdigital\footnote{\url{https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-34/}} corpus from the National Library of Norway, and the Norwegian newspaper corpus (Norsk Aviskorpus, or NAK\footnote{\url{https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-4/}}). Both corpora are freely available and can be downloaded from the website of the National Library of Norway. NBdigital is a historical corpus containing a collection of over 26,000 books, reports, and news articles from the public domain. The texts cover various genres and are written in various languages, including both written forms of Norwegian (Bokmål and Nynorsk). In addition, each text comes with metadata, such as the author, OCR confidence, and date of publication. However, due to the use of OCR, not all of the texts are of acceptable quality. We therefore first filtered out all texts with an OCR confidence below 70\%, and thereafter removed the texts not written in Norwegian Bokmål from the Bokmål collection of NBdigital.
The content of NBdigital is dated up to 2013. In order to get more recent data that can reflect more modern language use, we have also selected news articles from the NAK corpus. We therefore only use articles published between 2012 and 2019 from NAK.
We decided to divide the corpus into two subsets. The first subset compares the time periods of 1929-1965 and 1970-2013. In the second subset, we look at the time period of 1980-1990 compared to 2012-2019. In what follows, we give the arguments behind this decision, and describe the content of each subset.
\subsection{Subset 1: 1929-1965 VS 1970-2013}
This subset captures two important historical time periods for Norway. The pre- and post-war periods affected Norwegians' standard of living, and therefore their language use. Higher living standards and a better economy after the 1960s made more people, traditionally farmers, move to bigger cities. By the advent of the 1970s, a consumer culture was established and new technology entered Norwegian homes. All of these changes impacted language use, both by introducing new words to the vocabulary and by adding new senses to pre-existing words.
The time period between the 1970s and 2013 also introduced quite a few technology-related words and senses, and as society developed, the language followed the trends.
In this subset, both time periods are extracted from the historical NBdigital corpus. Table \ref{tab:subset1} shows the total number of words and documents in both time periods of Subset 1.
\begin{table}
{ \centering
\begin{tabular}{lrr}
\toprule
Period & Words & Documents \\
\midrule
1929 -- 1965 & 57 mln & 959 \\
1970 -- 2013 & 175 mln & 4,209 \\
\bottomrule
\end{tabular}
\caption{Total number of word tokens and documents in Subset 1.}
\label{tab:subset1}}
\end{table}
\subsection{Subset 2: 1980-1990 VS 2012-2019}
This subset covers shorter time periods than Subset 1, but we still expect shifts in word usage. The changes between the two periods of 1980-1990 and 2012-2019 can be caused both by linguistic and cultural factors. Most of the changes are expected to be within technology, and the changes in language use mostly take the form of vocabulary additions. However, many words related to technological advances were added as new senses to pre-existing Norwegian words.
The first time period is extracted from NBdigital, while the second one, covering 2012 to 2019, comprises texts from the NAK corpus. The language use in NAK might be different from that in NBdigital, as we expect news texts to contain more `modern' senses of words that have shifted since the previous time period.
In Table \ref{tab:subset2}, we give an overview of the total number of words and documents in each time period of Subset 2.
\begin{table}
{ \centering
\begin{tabular}{lrr}
\toprule
Period & Words & Documents \\
\midrule
1980 -- 1990 & 43 mln & 1,115 \\
2012 -- 2019 & 649 mln & 1,763,843 \\
\bottomrule
\end{tabular}
\caption{Total number of word tokens and documents in Subset 2.}
\label{tab:subset2}}
\end{table}
\section{Choosing target words} \label{sec:target}
We manually selected target words that we believe may have undergone semantic change during the periods of Subset 1 and Subset 2. This was based on the linguistic intuition of the authors as native Norwegian speakers and on existing linguistic work, similar to what has been done by \newcite{rodina-kutuzov-2020-rusemshift} and \newcite{schlechtweg-etal-2021-dwug}.
For each of the selected target words, a filler word was added to the word list. Filler words were randomly sampled from the corresponding corpora; their frequency percentiles in both earlier and later time periods had to be similar to those of the corresponding target words (in this, we also followed the existing best practices). The purpose was to ensure that corpus frequency dynamics alone cannot be used to solve the dataset (frequency changes often accompany semantic changes). Indeed, there are no statistically significant correlations between frequency differences (IPM-normalised) and annotated change scores. For Subset 1, Spearman $\rho$ is $-0.06$ ($p=0.70$), for Subset 2 it is $0.11$ ($p=0.49$). It means that \N{} does not just list words which sharply changed their corpus frequency together with semantic shifts. It is balanced with regards to word frequencies, and the systems aiming to approximate the scores in \N{} based on corpus data must take into account more than that.
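The percentile-matching step can be sketched as follows (an illustration with invented frequency tables and hypothetical helper names, not the actual selection script used for \N{}):

```python
import bisect

def percentile_rank(freqs_sorted, f):
    # percentile of frequency f within a sorted list of corpus frequencies
    return 100.0 * bisect.bisect_left(freqs_sorted, f) / len(freqs_sorted)

def pick_fillers(target, candidates, freq_early, freq_late, tol=25.0):
    """Candidate nouns whose frequency percentile lies within `tol` points
    of the target's percentile in BOTH time periods."""
    e_sorted = sorted(freq_early.values())
    l_sorted = sorted(freq_late.values())
    te = percentile_rank(e_sorted, freq_early[target])
    tl = percentile_rank(l_sorted, freq_late[target])
    return [w for w in candidates
            if abs(percentile_rank(e_sorted, freq_early[w]) - te) <= tol
            and abs(percentile_rank(l_sorted, freq_late[w]) - tl) <= tol]

# toy frequency tables (IPM-like values, invented for illustration)
freq_early = {"data": 100, "bit": 90, "hus": 95, "tre": 10, "sol": 12}
freq_late = {"data": 200, "bit": 50, "hus": 180, "tre": 20, "sol": 30}
print(pick_fillers("data", ["hus", "tre"], freq_early, freq_late))  # → ['hus']
```

Here `hus` qualifies because it sits in a similar frequency band as the target in both periods, while `tre` does not.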
Note that the filler words were manually checked to make sure that they are valid Norwegian lemmas not immediately associated with any known diachronic semantic shift. However, after the annotation, we found out that some of the filler words actually did experience some historical change; see section~\ref{sec:analysis} below.
All our target and filler words are nouns and we discarded words belonging to other parts of speech.
The initial target word lists for both subsets (without the fillers) contained 20 words each (40 target words in total). These words are expected to have semantically shifted between 1929-1965 and 1970-2013 for Subset 1, and between 1980-1990 and 2012-2019 for Subset 2. For some of the words, we also did a dictionary check to see if the word seemed to have lost senses in different time periods.
\begin{comment}
\begin{table}
{ \centering
\begin{tabular}{p{0.1\textwidth}p{0.33\textwidth}}
\toprule
Target & Senses \\
\midrule
bit & small piece vs bit in binary system \\
bølge & radio, sound etc. vs water \\ data & fact/information vs computer \\
egg & animal egg, fish egg, tractor egg \\
fil & tool vs car lane vs data file \\
forhold & romantic vs non-romantic \\ gress & grass vs weed (marijuana) \\
idiot & medical term vs derogatory term \\
kart & map vs unripe fruit/a person/ \\
kjemi & scientific field vs interpersonal chemistry \\
krem & dairy product vs skin product \\
leilighet & opportunity vs place to live \\
linse & plant vs contact lens \\
mål & voice/language vs measure \\
pære & pear vs light bulb \\ plattform & actual platform, metaphorical platform \\
rev & animal vs joint \\
ris & unit of measure vs rice \\
skjerm & screen vs computer screen \\ stoff & fabric vs illegal drugs \\
\bottomrule
\end{tabular}
\caption{List of target words and their expected senses for Subset 1: 1929-1965 VS 1970-2013.}
\label{tab:targets1}}
\end{table}
\end{comment}
\begin{comment}
\begin{table}
{ \centering
\begin{tabular}{p{0.1\textwidth}p{0.33\textwidth}}
\toprule
Target & Senses \\
\midrule
fane & tab \\
fjær & feather vs spring \\
kanal & waterway vs digital platform or social media \\
kode & secret code vs. computer code \\
kohort & 1/10 of a legion or people being born the same year (statistics) vs smaller group who may have social interaction at a time when one must keep a distance to prevent infection \\
mappe & physical folder vs. directory \\
melding & report vs SMS, digital message \\
melk & dairy vs plant \\
mus & small rodent vs computer device \\
nett & Fishing net vs Internet \\
side & Internet page vs side in a dispute \\
sky & weather, dust, computing \\
spill & analogous game, computer game \\
strøm & water vs electricity \\
tavle & analogous vs digital board \\
tjener & servant vs server \\
vert & physical vs digital host \\
vindu & window vs computer window \\
virkelighet & not fantasy vs . not digital \\
virus & body vs data \\
\bottomrule
\end{tabular}
\caption{List of target words and their expected senses for Subset 2: 1980-1990 VS 2012-2019.}
\label{tab:targets2}}
\end{table}
\end{comment}
\section{Annotation process} \label{sec:annotation}
The data was annotated using the DURel framework \cite{schlechtweg-etal-2018-diachronic} and the accompanying web service.\footnote{\url{https://www.ims.uni-stuttgart.de/data/durel-tool}} Its essence is that the annotators are presented with \textit{word usage pairs}: two snippets of text, both containing the same target word. The annotation task is to decide the semantic similarity of the target word in the two different contexts, using the following graded scale:
\begin{itemize}
\item senses are \textit{identical}: the word usage pair is annotated with $4$
\item senses are \textit{closely related}: $3$
\item senses are \textit{distantly related}: $2$
\item senses are \textit{unrelated}: $1$
\item the annotator is unsure and cannot decide the semantic similarity: $0$
\end{itemize}
Thus, annotators are judging usage pairs on a semantic proximity scale, avoiding any manual definitions of word senses. An example of semantically unrelated usages is the word \norex{ris} in the following usage pair:
\begin{enumerate}
\item \norex{Sidene pagineres i hver bok fra 1—500: Papiret koster pr. \textbf{ris} å 500 ark kr. 32,80, hvorpå beregnes 20 pet. fortjeneste.} \\
(\eng{The pages are numbered from 1-500 in each book: The paper costs for each \textbf{ream} of 500 sheets NOK 32,80, to which a 20\% profit is calculated.})
\item \norex{I saltvann produseres primært grønn tang til en samlet verdi av \$ 440 mill. (1973). Denne tangen spises til \textbf{ris} som en slags grønnsak.} \\
(\eng{In seawater green seaweed is primarily produced for a total value of \$ 440 million (1973). This seaweed is eaten with \textbf{rice} as a type of vegetable.})
\end{enumerate}
In the first sentence, \norex{ris} refers to a unit of quantity of paper, while in the second sentence, it refers to rice. Thus, the annotator is expected to yield the $1$ judgement here.
Contextualised word usages are essentially sentences containing at least one target word (can be a filler). These sentences were sampled from the historical corpora described in section~\ref{sec:corpora} in the following way. First, we lemmatized and POS-tagged our corpora using UDPipe \cite{straka-strakova-2017-tokenizing}. This was required to be able to find inflected forms of the target words. From the processed corpora, we extracted all sentences containing at least one token with a target word as its lemma (we filtered out target words not tagged as a noun). The extracted sentences (both raw text and processed versions with lemmas and POS tags) were stored as time period marked JSON files for further usage: one file per target word / time period.
From each of these files, we randomly sampled $11$ sentences as representatives of a particular target word usage in a particular time period. Before sampling, the sentences were de-duplicated, and we discarded sentences containing the `\^{}' character (in most cases, it signalled heavy OCR errors). This means that for every target word, we had $22$ total usage examples ($11$ from the earlier and $11$ from the later time period). In the rare cases when there were fewer than $11$ occurrences of the target word\footnote{\norex{Idiot} and \norex{katten} in Subset 1; \norex{fane} and \norex{syden} in Subset 2. It has never been less than $7$ occurrences per time period.}, all the existing occurrences were used.
$11$ might seem to be a small sample: \newcite{schlechtweg-etal-2021-dwug} sampled 100 usages for each target word / historical corpus. However, one should keep in mind that not all possible usage pairs get annotated (see more on that below). In the end, \N{} contains $29,425$ annotator judgements, which is comparable to the existing datasets for English ($29,000$) and Swedish ($20,000$).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{instilling.png}
\caption{Word usage graph for \norex{instilling}, time periods 1980-1990 and 2012-2019.}
\label{fig:instilling}
\end{figure}
The sampled sentences were fed to the DURel web service, which was responsible for selecting usage pairs to present to the annotators. More details on that can be found in \cite{schlechtweg-etal-2021-dwug}. In short, the aim here was to spend as little annotation effort as possible to construct a well-connected diachronic word usage graph (DWUG), where nodes are usages (sentences) and edges between them are weighted with annotators' judgements. Thus, sentences where a target word is used in roughly the same sense ($4$ judgements) are naturally grouped into a `sense cluster'. See, for example, the DWUG for the Norwegian word \norex{instilling} in figure~\ref{fig:instilling}. Node colours correspond to automatically inferred sense clusters, and edge thickness corresponds to the median annotator judgement. The blue cluster contains sentences where \norex{instilling} is used in the older sense of `recommendation', and they come from both the 1980-1990 (earlier) and the 2012-2019 (later) time periods. However, the orange, green, and lilac clusters contain exclusively sentences from the later time period, with \norex{instilling} used in the senses of `setting' (as in `account settings') or `attitude'. Thus, this DWUG constitutes a case of a word gaining new senses diachronically.
\N{} is published with full raw annotation data, so it is possible to infer sense grouping from human similarity judgements on usage pairs in any preferred way. However, we followed the standard workflow from \cite{schlechtweg-etal-2021-dwug} and employed their code to process our dataset and produce clustered DWUGs. After the sense clusters are inferred, it becomes possible to compare sense distributions across time (normalised to become probability distributions). The graded change score was the main score we assigned each word in \N{}. It is calculated as Jensen-Shannon divergence (JSD) between the probability distributions of senses in the earlier and the later time periods \cite{giulianelli-etal-2020-analysing}.
To continue the example with \norex{innstilling} in figure~\ref{fig:instilling}, there are four sense clusters in total. In the earlier time period, all $11$ usages belong to the first cluster, so the usage distribution is $[11, 0, 0, 0]$, or $[1, 0, 0, 0]$ when converted to a probability distribution. In the later time period, the usage distribution is $[4, 4, 2, 1]$ (only $4$ of the usages belong to the first cluster), and after converting to a probability distribution, it is $[0.364, 0.364, 0.182, 0.091]$. The divergence between $[1, 0, 0, 0]$ and $[0.364, 0.364, 0.182, 0.091]$ equals $0.655$, which is the degree of diachronic change that the word \norex{innstilling} experienced between 1980-1990 and 2012-2019.
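This computation can be sketched in a few lines of Python (a toy illustration, not the code used to produce \N{}). One detail worth noting when reproducing the number: the reported $0.655$ corresponds to the square root of the base-2 divergence, i.e. the Jensen-Shannon \textit{distance}, which is also what common implementations such as \texttt{scipy.spatial.distance.jensenshannon} return:

```python
import math

def jsd(p, q, base=2.0):
    # Jensen-Shannon divergence between two discrete probability distributions
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi, base) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

earlier = [1.0, 0.0, 0.0, 0.0]            # all 11 usages in the first cluster
later = [4 / 11, 4 / 11, 2 / 11, 1 / 11]  # usage counts [4, 4, 2, 1] normalised

score = math.sqrt(jsd(earlier, later))    # -> 0.655...
```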
In addition, we provide binary change scores, where each word is assigned a $1$ label if it gained or lost any senses between the two time periods, or a $0$ label if its senses remained stable. In determining these scores, we again followed the approach from \cite{schlechtweg-etal-2020-semeval}. A word is considered to experience a binary change if at least one sense was attested at most $k$ times in one time period and at least $n$ times in the other. $n$ and $k$ here are manually defined hyper-parameters which are needed to filter out insignificant fluctuations in sense frequencies. We kept the default values of $k=1$ and $n=3$. This means that, for example, having an entirely novel sense cluster with two usages in it is not enough to assign a $1$ binary change score (since $2 < 3$). However, in the case of \norex{innstilling}, \N{} marks it as binary changed, since its second sense is represented with $4$ usages in the later time period ($4 \ge 3$) and $0$ usages in the earlier time period ($0 \le 1$).
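The rule can be sketched as follows (a toy re-implementation for illustration; the actual scores come from the workflow cited above):

```python
def binary_change(counts_t1, counts_t2, k=1, n=3):
    # A word changed if some sense is attested at most k times in one
    # time period and at least n times in the other.
    return any((c1 <= k and c2 >= n) or (c2 <= k and c1 >= n)
               for c1, c2 in zip(counts_t1, counts_t2))

changed = binary_change([11, 0, 0, 0], [4, 4, 2, 1])  # True: 0 <= 1 and 4 >= 3
```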
\subsection{Annotators}
\N{} was annotated by three native speakers of Norwegian, all of whom hold bachelor's degrees in either linguistics or language technology. They received identical guidelines, which can be found at \url{https://github.com/ltgoslo/nor_dia_change/blob/main/guidelines.md}. Several reconciliation meetings were held between the annotators and project managers to make sure that all annotators understood the guidelines in the same way, and to clear up inconsistencies in the data. Each annotator produced from $9,167$ to $10,584$ judgements in total and was paid about $13,000$ Norwegian kroner (about $1,300$ euros) for their work after taxes.
\subsection{Annotation results}
Each usage pair received $1.8$ annotators' judgements on average, so possible errors or misunderstandings of one annotator could be compensated by another judgement. The most important descriptive statistics for both \N{} subsets are given in table~\ref{tab:decriptive_stats}; more is available in our GitHub repository. For comparison, we also provide the same statistics for the Swedish dataset from \cite{schlechtweg-etal-2021-dwug} and the Russian \textit{RuShiftEval-2} dataset from \cite{kutuzov-pivovarova-2021-three}. Note that using fewer target words (but approximately the same total number of judgements) allowed us to reach higher values of inter-rater agreement (both Spearman $\rho$ and Krippendorff's $\alpha$) than those of any of the prior semantic change datasets.
\begin{table}
{ \centering
\begin{tabular}{lccccc}
\toprule
\textbf{Dataset} & \textbf{Words} & \textbf{$\lvert U \rvert$} & \textbf{JUD} & \textbf{SPR} & \textbf{KRI} \\
\midrule
\textbf{Subset 1} & 40 & 21 & 14,419 & 0.77 & 0.76 \\
\textbf{Subset 2} & 40 & 21 & 15,006 & 0.71 & 0.67 \\
\midrule
\textbf{Swedish} & 40 & 168 & 20,000 & 0.57 & 0.56 \\
\textbf{Russian} & 99 & 60 & 8,879 & 0.56 & 0.55 \\
\bottomrule
\end{tabular}
\caption{Descriptive statistics for \N{} subsets. $\lvert U \rvert$ gives an average number of usages (sentences) sampled for a word from the corpora. JUD is the total number of annotators' judgements for a subset. SPR is the weighted mean of pairwise Spearman $\rho$ correlations between different annotators, and KRI is the value of Krippendorff’s $\alpha$ inter-rater agreement.}
\label{tab:decriptive_stats}}
\end{table}
Table~\ref{tab:top} shows the top $5$ most changed words for both \N{} subsets. The `graded' column gives the values of the graded change score (calculated with JSD). The `sense gain' and `sense lost' columns give binary labels indicating whether a word acquired or lost at least one sense, based on the binary change score calculated with the same default values of $k=1$ and $n=3$.
It is interesting to note that one of these words in Subset 1 (\norex{horisont}) and three in Subset 2 (\norex{stryk}, \norex{oppvarming} and \norex{innstilling}) were filler words, randomly sampled from the corpora: that is, we did not intentionally come up with these words as `changed'. This is normal (the same was observed, for example, by \newcite{kutuzov-pivovarova-2021-three}) and demonstrates that mining real textual data can sometimes yield unexpected but still valid insights. However, in general, the human annotations correspond well to our original intuitions: if we assign the value of $1.0$ to the words originally chosen as target words and $0.0$ to the words originally sampled as fillers, then both subsets show statistically significant rank correlations between these values and the graded change scores. For Subset 1, Spearman $\rho$ in this case is $0.38$ ($p=0.01$), and for Subset 2 it is $0.40$ ($p=0.02$).
\begin{table}
{ \centering
\begin{tabular}{lccc}
\toprule
\textbf{Word} & \textbf{Graded} & \textbf{Sense gain} & \textbf{Sense lost} \\
\midrule
\multicolumn{4}{c}{\textbf{Subset 1} (1929-1965 VS 1970-2013)} \\
\midrule
\norex{plattform} & 0.87 & 1 & 1\\
\norex{leilighet} & 0.80 & 0 & 1\\
\norex{horisont} & 0.64 & 1 & 1\\
\norex{mål} & 0.60 & 0 & 1\\
\norex{bølge} & 0.60 & 1 & 0\\
\midrule
\multicolumn{4}{c}{\textbf{Subset 2} (1980-1990 VS 2012-2019)} \\
\midrule
\norex{stryk} & 1.00 & 1 & 1 \\
\norex{kanal} & 0.73 & 0 & 1 \\
\norex{kode} & 0.73 & 1 & 1\\
\norex{oppvarming} & 0.72 & 1 & 0 \\
\norex{innstilling} & 0.66 & 1 & 0\\
\bottomrule
\end{tabular}
\caption{Top changed words in \N{}.}
\label{tab:top}}
\end{table}
As expected based on prior work, some target words received disproportionately many $0$ annotations, for different reasons.\footnote{Overall, there were $548$ zero judgements in Subset 1 and $286$ in Subset 2 (Subset 1 suffers much more heavily from OCR errors).} In some of these cases, a target word has a verbal homonym which was erroneously tagged as a noun by UDPipe. All occurrences where the target word was a verb in a word usage pair were annotated with $0$. For the target word \norex{vert} \eng{host} from Subset 2, 27\% of the examples were actually the verb \norex{verte} \eng{become}. This resulted in the word usage graph for 1980-1990 containing only three nodes (usages with a high proportion of zero judgements are removed from the graph automatically). The annotation of the word \norex{tap} \eng{loss} from Subset 2 also yielded small word usage graphs due to many $0$ annotations, in part because 9\% of the context examples were actually the verb \norex{tape} \eng{lose}.
On the other hand, the word \norex{fil} \eng{file} from Subset 1 had many $0$ annotations due to noisy data. There were so many examples where it was impossible to interpret the context that the word usage graph for 1929-1965 had only three nodes, all of them in the same cluster.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.75\textwidth]{aligned_large.png}
\caption{The two sense clusters for \norex{oppvarming} aligned. The left graph shows the earlier time period (1980-1990), with the only sense of \eng{heating} (0). The right graph shows later time period (2012-2019), with the three clusters referring to \eng{heating} (0), \eng{global heating} (1) and \eng{warm-up} (2).}
\label{fig:aligned}
\end{figure*}
We also had some words where the team did not fully agree with the produced results. The usage graph for the word \norex{mening} \eng{meaning, opinion} from Subset 1 clustered together senses that should not have been in the same cluster. As a result, the scoring algorithm also marked this word as binary changed, contrary to the opinion of the annotators. During a reconciliation meeting, the team agreed that the nuances of the word's semantics were difficult to annotate due to the abstract nature of the word.
The annotation of the word \norex{damp} \eng{steam} from Subset 1 yielded a somewhat `strange' word usage graph for the time period 1929-1965. The graph contains four clusters, but two of these have the same sense (steamboat) and should have been grouped together into one cluster.
For the word \norex{mus} \eng{mouse} from Subset 2, the annotation showed that the sense meaning computer mouse had disappeared, but all native speakers agree that this sense is still very much present in the language. See section \ref{sec:analysis} for more details. We decided to mark these problematic words as questionable, and publish both a `clean' dataset with 37 target words in each subset, and the full set with all 40 annotated words in each subset, including the six `questionable' ones.
\section{Meaning change in the Norwegian language of the 20th century} \label{sec:analysis}
In order to qualitatively analyse the binary and graded change scores produced within \N{}, we consulted the annotators and the authors whose native language is Norwegian. Their intuitions are the basis of these evaluations. The annotators consulted the graphs for each period, as in figure~\ref{fig:aligned}, and looked at the sense clusters to examine which senses had been clustered in which ways. However, it can be difficult, even for native speakers, to judge the semantic graphs on their own. In an attempt to mitigate native speaker bias, we believe that looking at frequencies over time can be one way to get additional insight into the DWUGs. The National Library of Norway has released the National Library N-gram (NB N-gram) service, which allows users to check the corpus frequencies of an n-gram between 1810 and 2013.\footnote{\url{https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-70/}} The data is similar to what was used for this annotation effort, with the exception of the NAK data for the latter part of Subset 2. We use this service to highlight certain insights. The following discussion is based on the `clean' dataset.
The annotators all agree on the binary results, but there are parts of the sense graphs that do not necessarily fit the annotators' intuition completely. One case is when a sense is overly specific. An example of this can be seen for the word \norex{oppvarming} \eng{heating} (figure~\ref{fig:aligned}), where in the later period of 2012-2019, the new sense \eng{(global) warming} contributes to about 5/8 of the graded change score. However, this sense is closely related to the \eng{heating} sense, and might not be seen as a separate sense by all.
The graph seems to be sensitive to what might be perceived as different types of metaphorical usage. This is not necessarily an error, but it is worth noting that it can potentially expand the number of senses considerably. Another example of this is \norex{bølge} (see figure~\ref{fig:boelge}), whose graph has as many as $6$ senses, $5$ of them metaphorical: waves of gratitude, waves of battle, waves of hair, waves of grain, waves of song.
The other case is when two senses are conflated. This can happen both within the sense graph for one period, or across time periods for senses that are considered the same. One example is the graph for \norex{skjerm}, in which it seems like at least one of the examples for the \eng{computer screen} sense, based on its sense in the later time period, also contains a sense of \eng{canvas used when taking a photo}, or something related. This could also be due to difficulties during annotation, where annotators report that some senses are more difficult to judge from context than others.
Another problem is the frequency of senses in the two corpora. Although the annotators agreed that the frequency of the \eng{rapids} sense of \norex{stryk} seems to have diminished greatly in 2012-2019, they note that the sense is still in use, and that the largest actual change for \norex{stryk} is likely the addition of the \eng{fail} sense rather than the absence of the \eng{rapids} sense. Overall, there seems to be a tendency for certain scientific senses to be more frequent in the earlier time periods. Examples of this are the \eng{geological horizon} sense for \norex{horisont}, the botanical sense of \eng{umbrella} for \norex{skjerm}, and perhaps \eng{distribution board} for \norex{tavle}.
A common problem for semantic change studies is the tendency in Norwegian, as in Swedish and German, among other languages, to spell compound nouns as single tokens, without spaces. This makes exact matches with compounds containing these words impossible. In English, one might be able to match the word \norex{computer} in the compound \norex{computer game}, while this would not be possible in Norwegian. For example, according to Leksikografisk Bokmålskorpus (LBK)\footnote{\url{https://www.hf.uio.no/iln/om/organisasjon/tekstlab/prosjekter/lbk/}} \citelanguageresource{bokmalsk}, the word \norex{kode} \eng{code} occurs word-initially in around 132 lemmas and word-finally in about 138 lemmas. In some cases the changes associated with a lemma might be more or less visible only in certain compounds, even if the lemma itself has not lost its sense. It could also be that a compound lemma is preferred to the more ambiguous lemma by itself. We do not claim any direct connection between the sense of a noun and the frequencies of its compounds, but believe they can be an indicator of usage. Some examples of this are discussed below.
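As a toy illustration of the problem (a hypothetical sketch, not a tool used in this work), matching compound occurrences of \norex{kode} in single-token Norwegian compounds requires pattern matching rather than exact token comparison, which in turn overgenerates on inflected forms such as \norex{koder}:

```python
import re

tokens = ["kode", "kildekode", "kodeord", "strykkarakter", "koder"]

# exact token matching finds only the bare lemma ...
exact = [t for t in tokens if t == "kode"]

# ... while a compound-aware pattern also catches "kode" inside compounds,
# at the price of false positives on inflected forms like "koder"
compound_pattern = re.compile(r"^(\w+)?kode(\w+)?$")
matches = [t for t in tokens if compound_pattern.match(t)]
```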
\subsection{Subset 1}
Of the 37 words in (`clean') Subset 1, 11 words showed binary semantic change. Details are shown in table~\ref{tab:subset_summary}. This period had 3 more words with binary change than Subset 2, resulting in a higher percentage of binary change. The average graded change is also higher. The words with binary change were \norex{anfektelse} \eng{distraction},
\norex{bit} \eng{bite},
\norex{bølge} \eng{wave},
\norex{forhold} \eng{relation, relationship},
\norex{horisont} \eng{horizon},
\norex{leilighet} \eng{opportunity, apartment},
\norex{mål} \eng{goal, measure},
\norex{pære} \eng{pear, bulb},
\norex{plattform} \eng{platform},
\norex{rev} \eng{fox}, and
\norex{skjerm} \eng{screen}. An additional 10 words had a graded change score above zero. We will discuss three interesting cases from this period below.
\paragraph{Plattform} This word had three senses in the earlier time period, and two in the later one. In the first period, the sense of \eng{generic platform} dominated with 5 cases, whereas 4 cases were of \eng{tram platform} and 1 was \eng{oil platform}. Not surprisingly, it is the \eng{oil platform} sense that dominates in the later period. It is also interesting to see the disappearance of the \eng{tram platform} sense, which is likely due to changes in how the tram worked. This change seems typical of the period.
\paragraph{Rev} Intended for its possible sense of \eng{joint}, the word \norex{rev} turned out to show other changes over time. Both time periods show frequent usage of the expected sense \eng{fox}, but we observe that the sense of \eng{reef} has become more frequent.
\paragraph{Pære} The sense of \eng{light bulb} seems to have been present in both periods, but with only 2 cases in the earlier period and 9 in the later one. Interestingly, the \eng{pear (fruit)} sense becomes less frequent, according to our data. This is contrary to our original expectations, as one would expect there to be more mentions of electric bulbs. According to the N-gram service, the less ambiguous \norex{lyspære} \eng{light bulb} is also present in both time periods, with a steadily increasing frequency.
\begin{table}
{ \centering
\begin{tabular}{lcccc}
\toprule
\textbf{Subset} & \textbf{Words} & \textbf{Binary} & \textbf{Percent.} & \textbf{Average} \\
\midrule
\textbf{Subset 1} & 37 & 11 & 29.7 & 0.26 \\
\textbf{Subset 2} & 37 & 9 & 24.3 & 0.22 \\
\bottomrule
\end{tabular}
\caption{Frequencies for Subset 1 and Subset 2, indicating the number of binary changes, the corresponding percentage of changed words, and the average graded change.
}
\label{tab:subset_summary}}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{boelge_gammel.png}
\caption{Sense clusters for \norex{bølge} \eng{wave} from the first time period (1929-1965). The blue cluster marked 0 represents the \eng{(sea) wave} sense; the remaining clusters represent metaphorical usage.}
\label{fig:boelge}
\end{figure}
\subsection{Subset 2}
Of the 37 words in (`clean') Subset 2, 8 words showed binary semantic change, while 9 more showed graded change above zero. The remaining words showed no change.
The words with binary change were \norex{stryk} \eng{rapids, fail}, \norex{kanal} \eng{channel}, \norex{kode} \eng{code}, \norex{oppvarming} \eng{heating}, \norex{innstilling} \eng{setting}, \norex{tavle} \eng{(black)board}, \norex{fane} \eng{banner}, and \norex{strøm} \eng{current, electricity}. As with Subset 1, we will look at four interesting words from this subset, including the `questionable' word \norex{mus} \eng{mouse}.
\paragraph{Stryk} This word was perhaps somewhat surprising. The only word with a perfect graded change score of 1, its score comes from the disappearance of the frequent sense \eng{rapids} and the introduction of the \eng{fail} sense. The annotators note that the \eng{rapids} sense is still possible, but the \eng{fail} sense appears to be new. The National Library N-gram service (NB N-gram) shows that the relative frequency of the word \norex{strykkarakter} \eng{fail grade}, chosen for its unambiguity while being etymologically related to \norex{stryk}, indeed rose drastically towards the end of the 1990s and kept a higher frequency after its peak.
\paragraph{Kanal} This word showed a high degree of change. In the earlier time period, it had 5 senses, where one roughly meaning \eng{TV and radio channel} became much more frequent in the later time period, while one new sense, seemingly something like \eng{communication channel}, appeared. All other senses disappeared from the word usage graph. Although it is unlikely that the other senses, such as \eng{river channel} and \eng{electric channel} have indeed disappeared from language in general, the increase in frequency for the \eng{TV and radio channel} sense unsurprisingly fits recent trends well.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{datamus_small.png}
\caption{An example of a NB N-gram graph, in this case for the word \norex{datamus} \eng{computer mouse}. The graph shows that the word appeared in the corpus in 1990 and sharply increased its frequency.}
\label{fig:datamus}
\end{figure}
\paragraph{Mus} Interestingly, this word lost the sense of \eng{computer mouse} in the later period, according to our usage graphs. This is the reason why it was judged as `questionable'. The only sense present in the later time period is that of the animal. However, if we look at the compound \norex{datamus} \eng{computer mouse}, we see that the word appears in 1990 and becomes more frequent (see figure~\ref{fig:datamus}). This strengthens the annotators' belief that the loss of the \eng{computer mouse} sense might be due to a data sampling error.
\paragraph{Spill}
Somewhat surprisingly, this word showed only a small degree of graded change. Both time periods are dominated by the general sense \eng{game}, and its change is mostly due to the loss of earlier senses. However, if we look at the frequency of the compound noun \norex{dataspill} \eng{computer game} in the NB N-gram service, we see that although there are a few instances between 1960 and 1980, it is not until the late 1980s that the word becomes more frequent, and it is much more frequent after the 1990s than before.
\section{Conclusion} \label{sec:conclusion}
We introduced \N{}, a dataset of diachronic semantic change on the lexical level for Norwegian. The dataset comprises two new manually annotated subsets that can be used interchangeably as train or test splits. We followed the DURel framework during annotation, using two historical Norwegian corpora. The dataset was produced following the same annotation procedures as for other languages. All the data in the form of diachronic word usage graphs (DWUGs) and accompanying code is available under a CC-BY license at \url{https://github.com/ltgoslo/nor_dia_change/}.
This is the first attempt at developing resources for Norwegian diachronic semantic change. In the near future, we plan to evaluate various semantic change modelling systems on \N{} and report baseline performances. It is also important to further assess the available Norwegian historical text collections in order to come up with a set of reference corpora which are more comparable in terms of size and genre distribution than the ones we used for \N{}.
It is important to note that \N{} can be used not only in the field of lexical semantic change detection. By definition, it is also a full-fledged WiC (`word-in-context') dataset \cite{pilehvar-camacho-collados-2019-wic}. As such, it is a ready-to-use benchmark to evaluate \textit{synchronic} word sense disambiguation capabilities of pre-trained language models for Norwegian. Simultaneously, the same models can in theory be used to automatically `annotate' the existing data, yielding DWUGs for many thousands of words, not just dozens. This is another direction for our future research.
Finally, we plan to extend the dataset beyond nouns, which was the part of speech we focused on in this work.
\section*{Acknowledgements}
The annotation of \N{} was kindly funded by the Teksthub initiative at the University of Oslo.
We thank the three annotators and co-authors Helle Bollingmo, Tita Ranveig Enstad, and Alexandra Wittemann for all their hard work and contributions. A special thanks to Ellisiv Gulnes Heien who provided us with some of the target words.
\section{Bibliographical References}\label{reference}
\bibliographystyle{lrec2022-bib}
\section{Introduction}
Quantization of field theory on the light front (LF) \cite{dir},\,\,
i.e. on the hyperplane $x^+=0$ in
the coordinates $$x^{\pm}=(x^0\pm x^3)/\sqrt{2},\quad x^{\p}=x^1,x^2,$$
where $x^0,x^1,x^2,x^3$ are Lorentz coordinates and $x^+$ plays the role of time, requires a special regularization of the theory.
The LF momentum operator $P_-$
(the generator of translations along
the $x^-$ axis) is nonnegative for the states with nonnegative
energy and mass:
$$P_-=(P_0-P_3)/\sqrt{2}\geqslant 0\quad \text{for} \quad p_0\geqslant0,\, p^2\geqslant0.$$
The vicinity of its minimal eigenvalue,
$p_-=0$, corresponds both to ultraviolet and infrared domains of
momenta in Lorentz coordinates. Quantizing field theory on the LF, one finds singularities at $p_-\to 0$, and the regularization of these singularities may affect the description of both ultraviolet and infrared physics, in particular the correct description of vacuum effects.
The usual ways of regularizing the $p_-\to 0$ singularities are the following:
{\bf (a)} the cutoff in $|p_-|$ ($|p_-|\geqslant\e>0$);
{\bf (b)} the "DLCQ" regularization, i.e. the space cutoff in $x^-$, $|x^-|\leqslant L$, plus periodic boundary conditions on the fields in $x^-$, which leads to the discretization of the $P_-$ spectrum:
$p_-=\frac{\pi n}{L}$, $n=0,1,2,\dots$
The Fourier mode of the field with $p_-=0$ ("zero mode") is separated here from the other modes.
In the canonical formalism the zero mode turns out to depend on the other modes due to constraints (for gauge field theory see \cite{nov2,nov2a}).
Both ways of regularization break Lorentz symmetry, and regularization (a) also violates gauge invariance in gauge field theory. This can lead to difficulties with the renormalization of the theory, and also to a nonequivalence of results obtained with LF quantization and with the usual ("equal-time") quantization.
In the framework of perturbation theory it was shown
\cite{burlang, tmf97} that to restore the symmetry and the
above mentioned equivalence it is necessary to add to the regularized
LF Hamiltonian
some special "counterterms".
However, for Quantum Chromodynamics (QCD) one can expect effects nonperturbative in the coupling constant, in particular vacuum condensates. Applying a regularization of type (a), where one excludes zero modes, we lose such condensates altogether. Regularization (b) leads to canonically constrained, dynamically non-independent zero modes; with these zero modes one again cannot correctly describe condensates \cite{yaf88, yaf89}. The study of this problem in (1+1)-dimensional quantum electrodynamics suggested a way to introduce a correct description of condensates in regularization (b), at least semiphenomenologically, by using zero modes as independent variables \cite{yaf88, yaf89}.
In the present paper we briefly review our new parametrization of gauge fields on the lattice in "transversal" space coordinates on the LF. This parametrization is convenient for a separate treatment of the zero modes of the fields on the LF and gives a way to introduce a gauge invariant regularization of the theory. We then restrict ourselves to the QCD(2+1) model in coordinates close to the LF and perform the limiting transition to the LF Hamiltonian, keeping the dynamical independence of the zero modes of the fields. We apply this Hamiltonian to a simple example of a mass spectrum calculation.
\section{The definition of gauge fields on the "transverse" lattice}
The gluon part of the QCD Lagrangian in continuous space has the following form:
\disn{1}{
{\cal L}=-\frac{1}{2} Tr F_{\m\n}F^{\m\n}.
\nom}
where
$$F_{\mu\nu}= \dd_{\mu} A_{\nu}- \dd_{\nu }A_{\mu}-ig[A_{\m},A_{\n}]$$
and the gluon vector fields $A_{\mu}(x)$
are $N\times N$ Hermitian traceless matrices. Under SU(N) gauge transformations
the $A_{\mu}(x)$ transform as follows:
\disn{2}{
A_{\mu}(x) \to \Omega(x)A_{\mu}(x)\Omega^+(x)+ \frac {i}{g}
\Omega(x)\dd_{\mu}\Omega^+(x).
\nom}
Here the $\Omega (x)$ are $N\times N$ matrices, corresponding
to the SU(N) gauge transformation.
In the LF Hamiltonian approach one uses continuous coordinates $x^+, x^-$ and introduces, as an ultraviolet regulator, a lattice in the transversal coordinates. Gauge invariance is maintained via an appropriate use of the Wilson lattice method \cite{wilson}, describing gauge fields by matrices related to lattice links. If one uses unitary matrices for these link variables and constructs the Hamiltonian, one needs to apply the "transfer matrix" method described in the papers \cite{creutz, Grunewald}. However, this method is not accommodated to the LF and to the corresponding choice of the gauge $A_-=0$. To overcome this difficulty we propose a modification of these link variables, introducing nonunitary matrices of a special form, where only zero modes are related to links, while nonzero modes are related to the sites belonging to these links. Using these lattice variables we can represent the complete regularization of the theory in a gauge invariant form.
The gluon field components $A_+$ and $A_-$ are related to the lattice sites. Under gauge transformations they transform according to formula (\ref{2}) above.
Transverse components are described by the following $N\times N$ complex matrices:
\disn{3}{
M_{\m}(x)=(I+iga\tilde A_{\m}(x))U_{\m}(x),
\nom}
where $\m$ is the index of the transversal components, the $\tilde A_{\m}(x)$ are Hermitian $N\times N$ matrices related to the corresponding lattice sites, the $U_{\m}(x)$ are unitary $N\times N$ matrices related to the links $(x-ae_{\m}, x)$, $a$ is the lattice parameter (the size of a link), $e_{\m}$ is the unit vector along the $x^{\m}$ axis, and $g$ is the QCD coupling constant.
We define the transformation law under gauge transformations as follows:
\disn{4}{
\tilde A_{\mu}(x) \to \Omega(x)\tilde A_{\mu}(x)\Omega^+(x),\quad
U_{\m}(x)\to \Omega(x)U_{\m}(x)\Omega^+(x-ae_{\m}) .
\nom}
In consequence the matrices $M_{\m}(x)$ transform like link variables \cite{lat,lat1}:
$$M_{\m}(x)\to \Omega(x)M_{\m}(x)\Omega^+(x-ae_{\m}).$$
Let us remark that the Hermiticity of the matrices $\tilde A_{\m}(x)$ is preserved under these gauge transformations.
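As a quick numerical sanity check (our illustration, not part of the construction), one can verify the link-variable transformation law $M_{\m}(x)\to \Omega(x)M_{\m}(x)\Omega^+(x-ae_{\m})$ for toy $2\times 2$ matrices ($N=2$) and arbitrary values of $g$ and $a$:

```python
import cmath
import math
import random

random.seed(0)

def mul(A, B):  # product of 2x2 complex matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):  # Hermitian conjugate
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def random_unitary():  # a generic 2x2 unitary built from three angles
    th, ph, ps = (random.uniform(0, 2 * math.pi) for _ in range(3))
    return [[cmath.exp(1j * ps) * math.cos(th), cmath.exp(1j * ph) * math.sin(th)],
            [-cmath.exp(-1j * ph) * math.sin(th), cmath.exp(-1j * ps) * math.cos(th)]]

def link_matrix(A_site, U_link, g, a):  # M = (I + i g a A~) U
    pre = [[(1 if i == j else 0) + 1j * g * a * A_site[i][j] for j in range(2)]
           for i in range(2)]
    return mul(pre, U_link)

g, a = 0.7, 0.1
b = complex(random.uniform(-1, 1), random.uniform(-1, 1))
A_tilde = [[0.3, b], [b.conjugate(), -0.3]]      # Hermitian traceless site field
U = random_unitary()                             # zero-mode link variable
Om_x, Om_y = random_unitary(), random_unitary()  # gauge rotations at the two sites

# transform the constituents as in eq. (4), then compare M built from them
# with the link-variable law Omega(x) M Omega^+(x - a e_mu)
lhs = link_matrix(mul(mul(Om_x, A_tilde), dag(Om_x)),
                  mul(mul(Om_x, U), dag(Om_y)), g, a)
rhs = mul(mul(Om_x, link_matrix(A_tilde, U, g, a)), dag(Om_y))
max_diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```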
Let us introduce the operator $D_-$ by the following definitions:
\disn{5}{
D_-\tilde A_{\m}(x)=\dd_-\tilde A_{\m}(x)-ig[A_-(x),\tilde A_{\m}(x) ],\no
D_-U_{\m}(x)=\dd_-U_{\m}(x)-igA_-(x) U_{\m}(x)+igU_{\m}(x) A_-(x-ae_{\m}),\no
D_-M_{\m}(x)=\dd_-M_{\m}(x)-igA_-(x) M_{\m}(x)+igM_{\m}(x) A_-(x-ae_{\m}).
\nom}
This definition of the $D_-$ has gauge invariant form under the gauge transformations,
defined above.
Further we impose on the $U_{\m}(x)$ the condition
\disn{6}{
D_-U_{\m}(x)=0,
\nom}
while from the $\tilde A_{\m}(x)$ we exclude the part which satisfies the equality $D_-\,\tilde A_{\m}(x)=0$. In the gauge $A_-=0$ these conditions simply mean a separation of the zero ($U_{\m}(x)$) and nonzero ($\tilde A_{\m}(x)$) Fourier modes of the field in $x^-$. In the general case this gives a gauge invariant definition of the separation.
Furthermore, we can introduce a gauge invariant cutoff in $p_-$, using a cutoff in the eigenvalues $q_-$ of the $D_-$: $|q_-|\leqslant \Lambda$.
Now let us consider the naive continuous space limit $a\to 0$. We require
the following relation in the fixed gauge $A_-=0$ at $a\to 0$:
$$U_{\m}(x)\to \exp\bigl(igaA_{\m 0}(x)\bigr)\to (I + igaA_{\m 0}(x)).$$
Here the $A_{\m 0}(x)$ is the zero mode of the field $A_{\m}(x)$ in continuous space, and for the $\tilde A_{\m}(x)$ we require that it tends to the nonzero mode part of the $A_{\m}(x)$.
Then at nonzero $A_-$ we can get for the matrix $M_{\m}(x)$
the following relation:
\disn{7}{
M_{\m}(x)\to (I+iga A_{\m}(x)+ O((ag)^2)).
\nom}
Indeed, at $a\to 0$ we have:
\disn{8}{
M_{\m}(x)\to\Omega(x; A_-)(I+iga A_{\m}(x))_{A_-=0}
\Omega^+(x-ae_{\m}; A_-)\to\no
\to \Omega(x; A_-)(I+iga A_{\m}(x))_{A_-=0}\Omega^+(x; A_-)-a
\Omega(x; A_-)\dd_{\m}\Omega^+(x; A_-)\no
\to (I+iga A_{\m}(x)),
\nom}
where the $\Omega (x; A_-)$ is the matrix of the gauge transformation,
which transforms the field in the gauge $A_-(x)=0$ to the field with a
given $A_-(x)$.
Let us introduce the lattice analog of the continuous space field strength
$F_{\m\n}(x)$. With this aim we define the following quantities
($\m, \n = 1,2$):
\disn{9}{
G_{\m\n}(x)=-\frac
{1}{ga^2}[M_{\m}(x)M_{\n}(x-ae_{\m})-M_{\n}(x)M_{\m}(x-ae_{\n})],
\nom}
\disn{10}{
G_{+-}(x)=iF_{+-}(x),\quad G_{-\m}= \frac{1}{ga}D_-M_{\m},\no
G_{+\m}(x)=\frac
{1}{ga}[\dd_{+}M_{\m}(x)-ig(A_{+}(x)M_{\m}(x)
-M_{\m}(x)A_{+}(x-ae_{\m}))].
\nom}
It is not difficult to show that at $a\to 0$ one gets
$G_{\m\n}(x)\to iF_{\m\n}(x)$, and analogous relations hold for $G_{+\m}$ and $G_{-\m}$.
We get the following transformation law under the gauge transformations:
\disn{11}{
G_{\pm\m}(x) \to \Omega(x)\,G_{\pm\m}(x)\,\Omega^+(x-ae_{\m}),\no
G_{\m\n}(x)\to \Omega(x)\,G_{\m\n}(x)\,\Omega^+(x-ae_{\m}-ae_{\n}).
\nom}
Having these quantities one can construct
a gauge-invariantly regularized action and
the LF Hamiltonian of QCD (similarly to the work \cite{lat1}).
One can also apply the transfer matrix method of the paper \cite{creutz}
to construct the Hamiltonian on the
lattice even in the gauge $A_-=0$, because only the zero modes are
described
by unitary matrices on links, and it is possible
to find the necessary
``transfer matrix'' in $x^+$, in analogy with the paper \cite{creutz}.
\vskip 1em
{\bf Acknowledgements.}
We thank V.A.~Franke and S.A.~Paston for useful discussions.
\section{Introduction}
The combination or aggregation of predictions is central to machine
learning.
Traditional Bayesian updating can be viewed as a particular way
of aggregating information that takes account of prior information.
Notions of ``mixability'' which play a key role in the setting of
prediction with expert advice offer a more general way to aggregate by
taking into account a loss function to evaluate predictions.
As shown by Vovk~\cite{Vovk:2001}, his more general ``aggregating algorithm''
reduces to
Bayesian updating when log loss is used.
However there is an implicit design variable in mixability that to date has
not been fully exploited.
The aggregating algorithm makes use of a distance between the current
distribution and a prior which serves as a regularizer.
In particular the aggregating algorithm uses the KL-divergence.
We consider the general setting of an arbitrary loss and an arbitrary
regularizer (in the form of a Bregman divergence) and show that we recover
the core technical result of traditional mixability: if a loss is mixable in
our generalized sense then there is a generalized aggregating algorithm which
can be guaranteed to have constant regret. The generalized aggregating
algorithm is developed by optimizing the bound that defines our new notion
of mixability.
Our approach relies heavily on dual representations of entropy functions
defined on the probability simplex (hence the title). By doing so we
gain new insight into why the original mixability argument works and
a broader understanding of when constant regret guarantees are possible.
\subsection{Mixability in Prediction With Expert Advice Games}
A prediction with expert advice game is defined by its loss, a collection
of experts that the player must compete against, and a fixed number of rounds.
Each round the experts reveal their predictions to the player and then
the player makes a prediction.
An observation is then revealed to the experts and the player and all
receive a penalty determined by the loss.
The aim of the player is to keep its total loss close to that of the best
expert once all the rounds have completed.
The difference between the total loss of the player and the total loss of
the best expert is called the regret and is typically the focus of
the analysis of this style of game.
In particular, we are interested in when the regret is \emph{constant},
that is, independent of the number of rounds played.
More formally, let $X$ denote a set of possible \emph{observations}
and let $\mathcal{A}$ denote a set of \emph{actions} or \emph{predictions}
the experts and player can perform.
A \emph{loss} $\ell : \mathcal{A} \to \mathbb{R}^X$ assigns the penalty
$\ell_x(a)$ to predicting $a \in \mathcal{A}$ when $x \in X$ is observed.
The set of experts is denoted $\Theta$ and the set of distributions over
$\Theta$ is denoted $\Delta_\Theta$.
In each round
$t = 1, \ldots, T$, each expert $\theta\in\Theta$ makes a prediction
$a^t_\theta \in \mathcal{A}$.
These are revealed to the player who makes a prediction $\hat{\act}^t\in\mathcal{A}$.
Once observation $x^t\in X$ is revealed the experts receive loss
$\ell_{x^t}(a^t_\theta)$ and the player receives loss
$\ell_{x^t}(\hat{\act}^t)$.
The aim of the player is to minimize its \emph{regret}
\(
\operatorname{Regret}(T) := L^T - \min_\theta L_\theta^T
\)
where $L^T := \sum_{t=1}^T \ell_{x^t}(\hat{\act}^t)$ and
$L_\theta^T = \sum_{t=1}^T \ell_{x^t}(a^t_\theta)$.
We will say the game has \emph{constant regret} if there exists a player
who can always make predictions that guarantee $\operatorname{Regret}(T) \le R_{\ell,\Theta}$
for all $T$ and all expert predictions $\{a^t_\theta\}_{t=1}^T$ where
$R_{\ell,\Theta}$ is a constant that may depend on $\ell$ and $\Theta$.
In \cite{Vovk:1990, Vovk:1995}, Vovk showed that if the loss for a game
satisfies a condition called mixability then a player making
predictions using the aggregating algorithm (AA) will achieve constant
regret.
\begin{definition}[Mixability and the Aggregating Algorithm]
\label{def:vovk-mix}
Given $\eta > 0$, a loss $\ell : \mathcal{A} \to \mathbb{R}^X$ is \emph{$\eta$-mixable}
if, for all
expert predictions $a_\theta \in \mathcal{A}$, $\theta \in \Theta$ and all
mixture distributions $\mu \in \Delta_\Theta$ over experts
there exists
a prediction $\hat{\act} \in \mathcal{A}$ such that
for all
outcomes $x \in X$
we have
\begin{equation}\label{eq:vovk-mix}
\ell_x(\hat{\act})
\le
-\eta^{-1} \log \sum_{\theta \in \Theta}
\exp\left(
-\eta \ell_x(a_\theta)
\right) \mu_\theta.
\end{equation}
The \emph{aggregating algorithm} starts with a mixture
$\mu^0 \in \Delta_\Theta$ over experts. In round $t$, experts predict
$a^t_\theta$ and the player predicts
the $\hat{\act}^t \in \mathcal{A}$ guaranteed by the $\eta$-mixability
of $\ell$ so that \eqref{eq:vovk-mix} holds for $\mu = \mu^{t-1}$
and $a_\theta = a^t_\theta$.
Upon observing $x^t$, the mixture $\mu^t \in \Delta_\Theta$ is set so that
$\mu_\theta^t
\propto \mu_\theta^{t-1} e^{-\eta \ell_{x^t}(a^t_\theta)}$.
\end{definition}
Mixability can be seen as a weakening of exp-concavity
(see \cite[\S3.3]{Cesa-Bianchi:2006}) that requires just enough of the loss
to ensure constant regret.
\begin{theorem}[Mixability implies constant regret \cite{Vovk:1995}]
\label{thm:vovk-mix}
If a loss $\ell$ is $\eta$-mixable then the aggregating algorithm
will achieve $\operatorname{Regret}(T) = \eta^{-1} \log |\Theta|$.
\end{theorem}
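To make the classical guarantee concrete, the following self-contained Python sketch (illustrative only; the synthetic experts, outcomes and constants are our own and not from the original papers) runs the aggregating algorithm for the $1$-mixable log loss $\ell_x(p) = -\log p_x$, where the $\mu$-mixture of the experts' distributions attains the bound \eqref{eq:vovk-mix} with equality:

```python
import math
import random

def aggregating_algorithm(expert_preds, outcomes, eta=1.0):
    """Aggregating algorithm for log loss ell_x(p) = -log p[x] (1-mixable).

    expert_preds[t][i] is expert i's distribution over outcomes in round t.
    Returns the player's total loss and the best expert's total loss.
    """
    K = len(expert_preds[0])
    mu = [1.0 / K] * K                       # uniform prior over experts
    L_player, L_experts = 0.0, [0.0] * K
    for preds, x in zip(expert_preds, outcomes):
        # For log loss the mu-mixture of the experts' distributions
        # attains the mixability bound with equality.
        n = len(preds[0])
        p_hat = [sum(mu[i] * preds[i][j] for i in range(K)) for j in range(n)]
        L_player += -math.log(p_hat[x])
        # Exponential-weights update: mu_i proportional to mu_i*exp(-eta*loss_i).
        w = []
        for i in range(K):
            loss_i = -math.log(preds[i][x])
            L_experts[i] += loss_i
            w.append(mu[i] * math.exp(-eta * loss_i))
        s = sum(w)
        mu = [wi / s for wi in w]
    return L_player, min(L_experts)

random.seed(0)
K, T, n_out = 4, 200, 3
preds = []
for _ in range(T):
    round_preds = []
    for _ in range(K):
        raw = [random.uniform(0.05, 1.0) for _ in range(n_out)]
        z = sum(raw)
        round_preds.append([r / z for r in raw])
    preds.append(round_preds)
outcomes = [random.randrange(n_out) for _ in range(T)]

L, L_best = aggregating_algorithm(preds, outcomes)
# Regret is bounded by (1/eta) * log K, independently of T.
assert L - L_best <= math.log(K) + 1e-9
```

With $\eta = 1$ the weight update is exactly Bayesian conditioning on the observed outcomes, consistent with Vovk's observation recalled in the related work below.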
\subsection{Contributions}
The key contributions of this paper are as follows. We provide a new
general definition (Definition~\ref{def:mix}) of mixability and an induced
generalized aggregating algorithm (Definition~\ref{def:updates}) and show
(Theorem~\ref{thm:main}) that prediction with expert advice using a
$\Phi$-mixable loss and the associated generalized aggregating algorithm is
guaranteed to have constant regret. The proof illustrates that the log
and exp functions that arise in the classical aggregating algorithm are
themselves not special; rather, it is the translation invariance of the
convex conjugate of an entropy $\Phi$ defined on a probability
simplex that is the crucial property leading to constant regret.
We characterize
(Theorem~\ref{thm:legendre}) for which entropies $\Phi$ there exists
$\Phi$-mixable losses via the Legendre property.
We show that
$\Phi$-mixability of a loss can be expressed directly in terms of the Bayes
risk associated with the loss (Definition~\ref{def:mix-F} and
Theorem~\ref{thm:mix-proper}), reflecting the situation that holds for
classical mixability~\cite{Erven:2012}. As part of this analysis we show
that proper losses are quasi-convex (Lemma~\ref{lem:proper-qc}) which, to
the best of our knowledge, is a new result.
\subsection{Related Work}
The starting point for mixability and the aggregating algorithm is the work
of \cite{Vovk:1995,Vovk:1990}.
The general setting of prediction with expert advice is summarized in
\cite[Chapters 2 and 3]{Cesa-Bianchi:2006}. There one can find a range of
results that study different aggregation schemes and different
assumptions on the losses (exp-concave, mixable).
Variants of the aggregating algorithm have been studied for classically
mixable losses, with a trade-off between tightness of the bound (in a
constant factor) and the computational complexity \cite{Kivinen1999}.
Weakly mixable losses are a generalization of mixable losses. They have
been studied in \cite{Kalnishkan2008} where it is shown there exists a
variant of the aggregating algorithm that achieves regret $C\sqrt{T}$ for
some constant $C$.
Vovk~\cite[\S2.2]{Vovk:2001} makes the observation that his Aggregating
Algorithm
reduces to Bayesian mixtures in the case of the log loss game. See also the
discussion in \cite[page 330]{Cesa-Bianchi:2006} relating certain
aggregation schemes to Bayesian updating.
The general form of updating we propose is similar to that considered by
Kivinen and Warmuth~\cite{Kivinen:1997}
who consider finding a vector $w$ minimizing
\(
d(w,s) + \eta L(y_t, w\cdot x_t)
\)
where $s$ is some starting vector, $(x_t, y_t)$ is the instance/label
observation at round $t$ and $L$ is a loss. The key difference between
their formulation and ours is that our loss term is (in their notation)
$w\cdot L(y_t, x_t)$ -- \textit{i.e.}, the linear combination of the losses of the
$x_t$ at $y_t$ and not the loss of their inner product.
Online methods of density estimation for exponential families are discussed in
\cite[\S3]{Azoury:2001} where the authors compare the online and offline updates of
the same sequence and make heavy use of the relationship between the KL
divergence between members of an exponential family and an associated Bregman
divergence between the parameters of those members.
The analysis of mirror descent \cite{Beck:2003} shows that it achieves
constant regret when the entropic regularizer is used.
However, there is no consideration regarding whether similar results
extend to other entropies defined on the simplex.
We stress that the idea of the more general regularization and updates is
hardly new. See for example the discussion of potential based methods in
\cite{Cesa-Bianchi:2006} and other references later in the paper. The key
novelty is the generalized notion of mixability, the name of which is
justified by the key new technical result --- a constant regret bound
assuming the general mixability condition achieved via a generalized
algorithm which can be seen as intimately related to mirror descent.
Crucially, our result depends on some properties of the conjugates of potentials defined over probabilities that do not hold for potential functions defined over more general spaces.
\section{Generalized Mixability and Aggregation via Convex Duality}
In this section we introduce our generalizations of mixability and the
aggregating algorithm.
One feature of our approach is the way the generalized aggregating algorithm
falls out of the definition of generalized mixability as the minimizer of
the mixability bound.
Our approach relies on concepts and results from convex analysis.
Terms not defined below can be found in a reference such as
\cite{Hiriart-Urruty:2001}.
\subsection{Definitions and Notation}\label{sub:defn}
A function $\Phi : \Delta_\Theta \to \mathbb{R}$ is called an \emph{entropy} (on $\Delta_\Theta$)
if it is proper (\textit{i.e.}, $-\infty < \Phi \ne +\infty$), convex\footnote{
While the information theoretic notion of Shannon entropy as a measure of
uncertainty is concave, it is
convenient for us to work with convex functions on the simplex which
can be thought of as certainty measures.
}, and lower semi-continuous.
In the following example and elsewhere we use $\mathds{1}$ to denote
the vector $\mathds{1}_\theta = 1$ for all $\theta\in\Theta$ so that
$|\Theta|^{-1} \mathds{1} \in \Delta_\Theta$ is the uniform distribution over
$\Theta$.
\begin{example}[Entropies]\label{ex:entropies}
The \emph{(negative) Shannon entropy}
$H(\mu) := \sum_\theta \mu_\theta \log \mu_\theta$;
the \emph{quadratic entropy}
$Q(\mu) := \sum_\theta (\mu_\theta - |\Theta|^{-1})^2$;
the \emph{Tsallis entropies}
$S_\alpha(\mu) := \alpha^{-1} \left(
\sum_\theta \mu_\theta^{\alpha+1} - 1
\right)$ for $\alpha \in (-1,0) \cup (0, \infty)$;
and the \emph{R\'enyi entropies}
$R_\alpha(\mu) = \alpha^{-1} \left(
\log \sum_\theta \mu_\theta^{\alpha + 1}
\right)$, for $\alpha \in (-1, 0)$.
We note that both the Tsallis and R\'enyi entropies tend to the Shannon
entropy as $\alpha \to 0$ (cf. \cite{Maszczyk:2008,Van-Erven:2012}).
\end{example}
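This limit can be checked numerically; the sketch below (illustrative values only, not part of the development) evaluates both entropies on a fixed distribution and watches the gap shrink:

```python
import math

def shannon(mu):
    # Negative Shannon entropy H(mu) = sum mu log mu (with 0 log 0 = 0).
    return sum(m * math.log(m) for m in mu if m > 0.0)

def tsallis(mu, alpha):
    # S_alpha(mu) = (sum mu^(alpha+1) - 1) / alpha.
    return (sum(m ** (alpha + 1) for m in mu) - 1.0) / alpha

mu = [0.1, 0.2, 0.3, 0.4]
gaps = [abs(tsallis(mu, a) - shannon(mu)) for a in (0.1, 0.01, 0.001)]
assert gaps[0] > gaps[1] > gaps[2]   # the gap shrinks as alpha -> 0
assert gaps[2] < 2e-3
```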
Let $\inner{\mu, v}$ denote the inner product between
$\mu \in \Delta_\Theta$ and $v\in\Delta_\Theta^*$, the dual space of $\Delta_\Theta$.
The \emph{Bregman divergence} associated with a suitably differentiable
entropy $\Phi$ on $\Delta_\Theta$ is given by
\begin{equation}
\label{eq:bregman-def}
D_\Phi(\mu, \mu')
=
\Phi(\mu) - \Phi(\mu') - \inner{\mu - \mu', \nabla\Phi(\mu')}
\end{equation}
for all $\mu \in \Delta_\Theta$ and $\mu' \in \operatorname{ri}(\Delta_\Theta)$, the relative interior of
$\Delta_\Theta$.
Given an entropy $\Phi : \Delta_\Theta \to \mathbb{R}$, we define its \emph{entropic dual}
to be $\Phi^*(v) := \sup_{\mu\in\Delta_\Theta} \inner{\mu,v} - \Phi(\mu)$ where
$v \in \Delta_\Theta^*$, \textit{i.e.}, the dual space to $\Delta_\Theta$.
Note that one could also write the supremum over
$\mathbb{R}^{\Theta}$ by setting $\Phi(\mu) = +\infty$ for $\mu \notin\Delta_\Theta$ so
that $\Phi^*$ is just the usual convex dual (cf. \cite{Hiriart-Urruty:2001}).
Thus, all of the standard results about convex duality also
hold for entropic duals provided some care is taken with the domain of
definition.
We note that although the regular convex dual of $H$ defined over all of $\mathbb{R}^\Theta$ is $v \mapsto \sum_\theta \exp(v_\theta-1)$, its entropic dual is
$H^*(v) = \log\sum_\theta \exp(v_\theta)$.
For differentiable $\Phi$, it is known \cite{Hiriart-Urruty:2001} that the
supremum defining $\Phi^*$ is attained at $\mu = \nabla\Phi^*(v)$.
That is,
\begin{equation}\label{eq:conjugate-sup}
\Phi^*(v) = \inner{\nabla\Phi^*(v),v} - \Phi(\nabla\Phi^*(v)).
\end{equation}
A similar result holds for $\Phi$ by applying this result to $\Phi^*$ and
using $\Phi = (\Phi^*)^*$.
We will make repeated use of two easily established properties of entropic duals
(see Appendix~\ref{app:lemmas} for proof).
\begin{lemma}\label{lem:prob-duals}
If $\Phi$ is an entropy over $\Delta_\Theta$ and $\Phi_\eta := \eta^{-1}\Phi$ denotes
a scaled version of $\Phi$ then 1) for all $\eta > 0$ we have
$\Phi_\eta^*(v) = \eta^{-1}\Phi^*(\eta v)$; and 2) the entropic dual
$\Phi^*$ is \emph{translation invariant} -- \textit{i.e.}, for all
$v \in \Delta_\Theta^*$ and $\alpha \in \mathbb{R}$ we have
$\Phi^*(v + \alpha\mathds{1}) = \Phi^*(v) + \alpha$ and hence for differentiable
$\Phi^*$ we have
$\nabla\Phi^*(v + \alpha\mathds{1}) = \nabla\Phi^*(v)$.
\end{lemma}
The translation invariance of $\Phi^*$ is central to our analysis.
It is what ensures our $\Phi$-mixability inequality \eqref{eq:mixability}
``telescopes'' when it is summed.
The proof of the original mixability result (Theorem~\ref{thm:vovk-mix}) uses
a similar telescoping argument that works due to the interaction of $\log$
and $\exp$ terms in Definition~\ref{def:vovk-mix}.
Our results show that this telescoping property is not due to any special
properties of $\log$ and $\exp$, but rather because of the translation
invariance of the entropic dual of Shannon entropy, $H$.
The following analysis generalizes that of the original work on mixability
precisely because this property holds for the dual of any entropy.
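Both parts of Lemma~\ref{lem:prob-duals} are easy to verify numerically for $\Phi = H$. The sketch below (illustrative values only) computes the entropic dual by brute force over a two-point simplex and checks the exact translation invariance of the closed form $H^*(v) = \log\sum_\theta \exp(v_\theta)$:

```python
import math

def H(mu):
    # Negative Shannon entropy (with 0 log 0 = 0).
    return sum(m * math.log(m) for m in mu if m > 0.0)

def H_star_lse(v):
    # Closed form of the entropic dual: H*(v) = log sum_theta exp(v_theta),
    # computed in the usual numerically stable way.
    m = max(v)
    return m + math.log(sum(math.exp(vi - m) for vi in v))

def H_star_brute(v, steps=100000):
    # Direct sup over the simplex (two experts): sup_mu <mu, v> - H(mu).
    best = -float("inf")
    for i in range(1, steps):
        q = i / steps
        mu = (q, 1.0 - q)
        best = max(best, q * v[0] + (1.0 - q) * v[1] - H(mu))
    return best

v = [0.7, -0.4]
assert abs(H_star_lse(v) - H_star_brute(v)) < 1e-6

# Translation invariance: H*(v + alpha 1) = H*(v) + alpha.
alpha = 3.2
shifted = [vi + alpha for vi in v]
assert abs(H_star_lse(shifted) - (H_star_lse(v) + alpha)) < 1e-12
```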
\subsection{$\Phi$-Mixability and the Generalized Aggregating Algorithm}
For convenience, we will use $A \in \mathcal{A}^{\Theta}$ to denote a collection of
expert predictions and $A_\theta \in \mathcal{A}$ to denote the prediction of
expert $\theta$.
Abusing notation slightly, we will write $\ell(A) \in \mathbb{R}^{X\times\Theta}$ for
the matrix of loss values $[ \ell_x(A_\theta) ]_{x,\theta}$,
and $\ell_x(A) = [\ell_x(A_\theta)]_\theta \in \mathbb{R}^\Theta$ for the vector of
losses for each expert $\theta$ on outcome $x$.
\begin{definition}[$\Phi$-mixability]\label{def:mix}
Let $\Phi$ be an entropy on $\Delta_\Theta$. A loss $\ell : \mathcal{A} \to \mathbb{R}^X$ is
$\Phi$-mixable if for all $A \in \mathcal{A}^{\Theta}$, all $\mu\in\Delta_\Theta$,
there exists an $\hat{\act} \in \mathcal{A}$ such that for all $x \in X$
\begin{equation}\label{eq:mixability}
\ell_x(\hat{\act})
\le
\operatorname{Mix}_{\ell,x}^\Phi(A,\mu)
:= \inf_{\mu'\in\Delta_\Theta} \inner{\mu', \ell_x(A)} + D_\Phi(\mu', \mu).
\end{equation}
\end{definition}
The term on the right-hand side of \eqref{eq:mixability} has some intuitive
appeal.
Since $\inner{\mu', \ell_x(A)} = \E{\theta \sim \mu'}{\ell_x(A_\theta)}$
(\textit{i.e.}, the expected loss
of an expert drawn at random according to $\mu'$) we can view the
optimization as a trade off between finding a mixture $\mu'$ that tracks the
expert with the smallest loss upon observing outcome $x$ and keeping $\mu'$
close to $\mu$, as measured by $D_\Phi$.
In the special case when $\Phi$ is Shannon entropy, $\ell$ is log loss, and
expert predictions $A_\theta \in \Delta_X$ are distributions over $X$ such an
optimization is equivalent to Bayesian updating \cite{Williams:1980}.
To see that $\Phi$-mixability is indeed a generalization of
Definition~\ref{def:vovk-mix}, we
make use of an alternative form for the right-hand side of
the bound in the $\Phi$-mixability definition that ``hides'' the infimum
inside $\Phi^*$.
As shown in Appendix~\ref{app:lemmas} this is a straight-forward consequence of
\eqref{eq:conjugate-sup}.
\begin{lemma}\label{lem:alt-mix}
The mixability bound satisfies
\begin{equation}\label{eq:alt-mix}
\operatorname{Mix}_{\ell,x}^\Phi(A,\mu)
=
\Phi^*(\nabla\Phi(\mu)) - \Phi^*(\nabla\Phi(\mu) - \ell_x(A)).
\end{equation}
Hence, for $\Phi = \eta^{-1}H$ we have
$\operatorname{Mix}_{\ell,x}^\Phi(A,\mu)
= -\eta^{-1}\log\sum_\theta\exp(-\eta \ell_x(A_\theta))\mu_\theta$
which is the bound in Definition~\ref{def:vovk-mix}.
\end{lemma}
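Lemma~\ref{lem:alt-mix} can also be checked numerically. The following sketch (illustrative values only; $\Phi = \eta^{-1}H$, for which $D_\Phi$ is $\eta^{-1}$ times the KL divergence) compares the infimum in \eqref{eq:mixability} with the closed form for a two-expert game:

```python
import math

def mix_closed_form(losses, mu, eta):
    # Closed form for Phi = H/eta:
    # -(1/eta) log sum_theta mu_theta exp(-eta * loss_theta).
    return -math.log(sum(m * math.exp(-eta * l)
                         for l, m in zip(losses, mu))) / eta

def mix_infimum(losses, mu, eta, steps=100000):
    # Direct infimum over mu'; for Phi = H/eta the divergence D_Phi is KL/eta.
    # With two experts, mu' = (q, 1-q), so the infimum is a 1-D grid search.
    def objective(q):
        mup = (q, 1.0 - q)
        kl = sum(a * math.log(a / b) for a, b in zip(mup, mu))
        return sum(a * l for a, l in zip(mup, losses)) + kl / eta
    return min(objective(i / steps) for i in range(1, steps))

losses, mu, eta = [0.4, 1.1], (0.3, 0.7), 1.0
assert abs(mix_closed_form(losses, mu, eta)
           - mix_infimum(losses, mu, eta)) < 1e-6
```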
We now define a generalization of the Aggregating Algorithm of
Definition~\ref{def:vovk-mix} that very naturally relates to our
definition of $\Phi$-mixability:
starting with some initial distribution over experts, the algorithm
repeatedly incorporates the information about the experts' performances
by finding the minimizer $\mu'$ in \eqref{eq:mixability}.
\begin{definition}[Generalized Aggregating Algorithm]\label{def:updates}
The algorithm begins with a mixture distribution $\mu^0 \in \Delta_\Theta$ over
experts.
On round $t$, after receiving expert predictions $A^t \in \mathcal{A}^\Theta$,
the \emph{generalized aggregating algorithm} (GAA) predicts
any $\hat{\act} \in \mathcal{A}$
such that $\ell_x(\hat{\act}) \le \operatorname{Mix}_{\ell,x}^\Phi(A^t,\mu^{t-1})$
for all $x$ which is
guaranteed to exist by the $\Phi$-mixability of $\ell$.
After observing $x^t \in X$, the GAA updates the mixture
$\mu^{t-1} \in \Delta_\Theta$ by setting
\begin{equation}
\label{eq:gen-aa-update}
\mu^t
:=
\argmin_{\mu' \in \Delta_\Theta}
\inner{\mu',\ell_{x^t}(A^t)} + D_\Phi(\mu', \mu^{t-1}).
\end{equation}
\end{definition}
We now show that this updating process simply aggregates the
per-expert losses $\ell_x(A)$ in the dual space
$\Delta_\Theta^*$ with $\nabla\Phi(\mu^0)$ as the starting point.
The GAA is therefore closely related to mirror descent techniques
\cite{Beck:2003}.
\begin{lemma}\label{lem:updates}
The GAA updates $\mu^t$ in \eqref{eq:gen-aa-update} satisfy
\(
\nabla\Phi(\mu^t)
=
\nabla\Phi(\mu^{t-1}) - \ell_{x^t}(A^t)
\)
for all $t$ and so
\begin{equation}\label{eq:updates}
\nabla\Phi(\mu^T) = \nabla\Phi(\mu^0) - \sum_{t=1}^T \ell_{x^t}(A^t).
\end{equation}
\end{lemma}
The proof is given in Appendix~\ref{app:lemmas}.
Finally, to see that the above is indeed a generalization of the Aggregating
Algorithm from Definition~\ref{def:vovk-mix} we need only apply
Lemma~\ref{lem:updates} and observe that for
$\Phi = \eta^{-1} H$ we have
$\nabla\Phi(\mu) = \eta^{-1}(\log(\mu) + \mathds{1})$ and so
$\log \mu^t = \log \mu^{t-1} - \eta \ell_{x^t}(A^t)$.
Exponentiating this vector equality element-wise gives
$\mu_\theta^t \propto \mu^{t-1}_\theta \exp(-\eta \ell_{x^t}(A_\theta^t))$.
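The reduction to the exponential-weights update can be verified directly. The following sketch (illustrative losses and prior; $\Phi = \eta^{-1}H$) performs the aggregation in the dual space as in Lemma~\ref{lem:updates} and compares the result with the multiplicative form:

```python
import math

eta = 0.5
mu0 = [0.25, 0.25, 0.5]                          # initial mixture mu^0
losses = [[0.3, 1.0, 0.2], [0.8, 0.1, 0.4]]      # per-round per-expert losses

def grad_phi(mu):
    # grad Phi for Phi = H/eta: (log mu_theta + 1)/eta, componentwise.
    return [(math.log(m) + 1.0) / eta for m in mu]

def grad_phi_inv(v):
    # Map a dual point back to the simplex: mu proportional to exp(eta*v).
    # The normalisation is harmless by translation invariance of Phi*.
    w = [math.exp(eta * vi) for vi in v]
    s = sum(w)
    return [wi / s for wi in w]

# Dual-space aggregation: grad Phi(mu^T) = grad Phi(mu^0) - sum_t loss_t.
v = grad_phi(mu0)
for ell in losses:
    v = [vi - li for vi, li in zip(v, ell)]
mu_T = grad_phi_inv(v)

# Direct exponential-weights form:
# mu^T_theta proportional to mu^0_theta * exp(-eta * cumulative loss_theta).
cum = [sum(l[i] for l in losses) for i in range(len(mu0))]
w = [mu0[i] * math.exp(-eta * cum[i]) for i in range(len(mu0))]
s = sum(w)
mu_direct = [wi / s for wi in w]
assert all(abs(a - b) < 1e-12 for a, b in zip(mu_T, mu_direct))
```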
\section{Properties of $\Phi$-mixability}
In this section we establish a number of key properties for $\Phi$-mixability,
the most important of these being that $\Phi$-mixability implies constant
regret.
We also show that $\Phi$-mixability is not a vacuous concept for $\Phi$ other
than Shannon entropy by showing that any Legendre $\Phi$ has $\Phi$-mixable
losses and that this is a necessary condition for such losses to exist.
\subsection{$\Phi$-mixability Implies Constant Regret}
\begin{theorem}\label{thm:main}
If $\ell : \mathcal{A} \to \mathbb{R}^X$ is $\Phi$-mixable then there is
a family of strategies parameterized by $\mu\in\Delta_\Theta$ which,
for any sequence of observations
$x^1, \ldots, x^T \in X$
and sequence of expert predictions $A^1, \ldots , A^T \in \mathcal{A}^\Theta$,
plays a sequence $\hat{\act}^1, \ldots, \hat{\act}^T \in \mathcal{A}$ such that for all
$\theta \in \Theta$
\begin{equation}\label{eq:main}
\sum_{t=1}^T \ell_{x^t}(\hat{\act}^t)
\le
\sum_{t=1}^T \ell_{x^t}(A^t_\theta) +
D_\Phi(\delta_\theta, \mu) .
\end{equation}
\end{theorem}
The proof is in Appendix~\ref{app:proof-of-main} and is a straight-forward
consequence of Lemma~\ref{lem:alt-mix} and the translation invariance of
$\Phi^*$.
The standard notion of mixability is recovered when $\Phi = \frac{1}{\eta}H$
for $\eta > 0$ and $H$ the Shannon entropy on $\Delta_\Theta$.
In this case, Theorem~\ref{thm:vovk-mix} is obtained as a corollary
for $\mu = |\Theta|^{-1}\mathds{1}$, the uniform distribution
over $\Theta$.
A compelling feature of our result is that it gives a natural interpretation
of the constant $D_\Phi(\delta_\theta, \mu)$ in the regret bound: if
$\mu$ is the initial guess as to which expert is best before the game starts,
the ``price'' that is paid by the player is exactly how far (as measured by
$D_\Phi$) the initial guess was from the distribution that places all its
mass on the best expert.
The following example computes mixability bounds for the alternative
entropies introduced in \S\ref{sub:defn}.
They will be discussed again in \S\ref{sub:asymptotics} below.
\begin{example}\label{ex:regrets}
Consider games with $K = |\Theta|$ experts and $\mu = K^{-1}\mathds{1}$.
For the (negative) Shannon entropy,
the regret bound from Theorem~\ref{thm:main} is
$D_H(\delta_\theta, \mu) = \log K$.
For quadratic entropy the regret bound is
$D_Q(\delta_\theta, \mu) = \|\delta_\theta - \mu\|_2^2 = 1 - \frac{1}{K}$.
For the family of Tsallis entropies
the regret bound
given by
$D_{S_\alpha}(\delta_\theta, K^{-1}\mathds{1}) = \alpha^{-1}(1-K^{-\alpha})$.
For the family of R\'enyi entropies
the regret bound becomes
$D_{R_\alpha}(\delta_\theta, K^{-1}\mathds{1}) = \log K$.
\end{example}
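The constants above for the Shannon, Tsallis and R\'enyi entropies can be reproduced numerically straight from the definition \eqref{eq:bregman-def} of the Bregman divergence; the sketch below (illustrative, with $K = 5$ experts) computes each divergence directly:

```python
import math

def bregman(phi, grad_phi, a, b):
    # D_Phi(a, b) = Phi(a) - Phi(b) - <a - b, grad Phi(b)>.
    g = grad_phi(b)
    return phi(a) - phi(b) - sum((ai - bi) * gi
                                 for ai, bi, gi in zip(a, b, g))

def H(mu):                                   # negative Shannon entropy
    return sum(m * math.log(m) for m in mu if m > 0.0)

def grad_H(mu):
    return [math.log(m) + 1.0 for m in mu]

def S(alpha):                                # Tsallis entropy S_alpha
    return lambda mu: (sum(m ** (alpha + 1) for m in mu) - 1.0) / alpha

def grad_S(alpha):
    return lambda mu: [(alpha + 1) / alpha * m ** alpha for m in mu]

def R(alpha):                                # Renyi entropy, alpha in (-1, 0)
    return lambda mu: math.log(sum(m ** (alpha + 1) for m in mu)) / alpha

def grad_R(alpha):
    def g(mu):
        z = sum(m ** (alpha + 1) for m in mu)
        return [(alpha + 1) / alpha * m ** alpha / z for m in mu]
    return g

K = 5
u = [1.0 / K] * K                            # uniform mixture over experts
delta = [1.0] + [0.0] * (K - 1)              # point mass on the best expert

assert abs(bregman(H, grad_H, delta, u) - math.log(K)) < 1e-12
for alpha in (-0.5, 0.5, 2.0):
    want = (1.0 - K ** (-alpha)) / alpha
    assert abs(bregman(S(alpha), grad_S(alpha), delta, u) - want) < 1e-12
assert abs(bregman(R(-0.5), grad_R(-0.5), delta, u) - math.log(K)) < 1e-12
```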
A second, easily established result concerns the mixability of scaled entropies.
The proof is by observing that in \eqref{eq:mixability} the only term in the
definition of
$\operatorname{Mix}^{\Phi_\eta}_{\ell,x}$ involving $\eta$ is
$D_{\Phi_\eta} = \frac{1}{\eta}D_\Phi$.
The quantification over $A,\mu,\hat{\act}, \mu'$ and $x$ in the original definition
has been translated into infima and suprema.
\begin{lemma}\label{lem:scaling}
The function
\(
M(\eta)
:=
\inf_{A,\mu} \sup_{\hat{\act}} \inf_{\mu', x} \;
\operatorname{Mix}^{\Phi_\eta}_{\ell,x}(A,\mu) - \ell_x(\hat{\act})
\)
is non-increasing.
\end{lemma}
This implies that there is a well-defined maximal $\eta > 0$ for which a given
loss $\ell$ is $\Phi_\eta$-mixable since $\Phi_\eta$-mixability is equivalent
to $M(\eta) \ge 0$.
We will call this maximal $\eta$ the \emph{$\Phi$-mixability constant} for $\ell$
and denote it $\eta(\ell,\Phi) := \sup \{ \eta > 0 : M(\eta) \ge 0 \}$.
This constant is central to the discussion in Section~\ref{sub:regret} below.
\subsection{$\Phi$-Mixability of Proper Losses and Their Bayes Risks}
Entropies are known to be closely related to the Bayes risk of what are
called proper losses or proper scoring rules~\cite{Dawid:2007,Gneiting:2007}.
Here, the predictions are distributions over outcomes, \textit{i.e.}, points in $\Delta_X$.
To highlight this we will use $p$, $\hat{p}$ and $P$ instead of $a$, $\hat{\act}$
and $A$ to denote actions.
If a loss $\ell : \Delta_X \to \mathbb{R}^X$ is used to assign a
penalty $\ell_x(\hat{p})$ to a prediction $\hat{p}$ upon outcome $x$
it is said to be \emph{proper} if its expected value under $x\sim p$
is minimized by predicting $\hat{p} = p$.
That is, for all $p, \hat{p} \in \Delta_X$ we have
\[
\E{x\sim p}{\ell_x(\hat{p})}
=
\inner{p,\ell(\hat{p})}
\ge
\inner{p,\ell(p)}
=: -F^\ell(p)
\]
where $-F^\ell$ is the \emph{Bayes risk} of $\ell$ and is necessarily
concave~\cite{Erven:2012}, making $F^\ell : \Delta_X\to\mathbb{R}$ convex
and hence an entropy.
The correspondence also goes the other way: given any convex function
$F : \Delta_X\to\mathbb{R}$ we can construct a unique proper loss~\cite{Vernet:2011}.
The following representation can be traced back to~\cite{Savage:1971} but
is expressed here using convex duality.
\begin{lemma}\label{lem:proper-loss}
If $F:\Delta_X\to\mathbb{R}$ is a differentiable entropy then the loss
$\ell^F :\Delta_X\to\mathbb{R}^X$ defined by
\begin{equation}\label{eq:proper-loss}
\ell^F(p)
:=
F^*(\nabla F(p))\mathds{1} - \nabla F(p)
\end{equation}
is proper.
\end{lemma}
It is straight-forward to show that the proper loss associated with the
negative Shannon entropy $\Phi = H$ is the log loss, that is,
$\ell^{H}(\mu) := -\left(\log \mu_\theta\right)_{\theta\in\Theta}$.
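As a quick sanity check of this construction (illustrative only, not part of the development), taking $F = H$ on a three-outcome simplex recovers the log loss, and properness can be verified on a coarse grid:

```python
import math

def grad_H(p):
    return [math.log(pi) + 1.0 for pi in p]

def H_star(v):
    # Entropic dual of H over outcomes: log-sum-exp.
    m = max(v)
    return m + math.log(sum(math.exp(vi - m) for vi in v))

def proper_loss(p):
    # Savage-style representation: ell^F(p) = F*(grad F(p)) 1 - grad F(p),
    # instantiated here with F = H.
    g = grad_H(p)
    c = H_star(g)
    return [c - gi for gi in g]

p = [0.2, 0.3, 0.5]
ell = proper_loss(p)
# For F = H the construction recovers log loss: ell_x(p) = -log p_x.
assert all(abs(li + math.log(pi)) < 1e-12 for li, pi in zip(ell, p))

# Properness: the expected loss <p, ell(q)> is minimised at q = p
# (checked on a coarse grid over the interior of the simplex).
def exp_loss(p, q):
    return sum(pi * li for pi, li in zip(p, proper_loss(q)))

best = exp_loss(p, p)
for a in range(1, 20):
    for b in range(1, 20 - a):
        q = [a / 20, b / 20, (20 - a - b) / 20]
        assert exp_loss(p, q) >= best - 1e-12
```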
This connection between losses and entropies lets us define the
$\Phi$-mixability of a proper loss strictly in terms of its associated entropy.
This is similar in spirit to the result in \cite{Erven:2012} which shows that
the original mixability (for $\Phi = H$) can be expressed in terms of the
relative curvature of Shannon entropy and the loss's Bayes risk.
We use the following definition to explore the optimality of Shannon mixability
in Section~\ref{sub:regret} below.
\begin{definition}\label{def:mix-F}
An entropy $F : \Delta_X \to \mathbb{R}$ is \emph{$\Phi$-mixable} if
\begin{equation}\label{eq:mix-F}
\sup_{P,\mu} \;
F^*\left(
\left\{ \Phi^*(\nabla\Phi(\mu) - \ell^F_x(P)) \right\}_x
- \Phi^*(\nabla\Phi(\mu))\mathds{1}
\right)
\le 0
\end{equation}
where $\ell^F$ is as in Lemma~\ref{lem:proper-loss} and the supremum is
over expert predictions $P \in \Delta_X^\Theta$ and
mixtures over experts $\mu\in\Delta_\Theta$.
\end{definition}
Although this definition appears complicated due to the handling of vectors
in $\mathbb{R}^X$ and $\mathbb{R}^\Theta$, it has a natural interpretation in terms of
\emph{risk measures} from mathematical finance \cite{Follmer:2004}.
Given some convex function $\alpha : \Delta_X \to \mathbb{R}$, its associated risk
measure is its dual $\rho(v) := \sup_{p\in\Delta_X} \inner{p,-v} - \alpha(p) = \alpha^*(-v)$ where $v$ is a \emph{position} meaning $v_x$ is some monetary value
associated with outcome $x$ occurring.
Due to its translation invariance, the quantity $\rho(v)$ is often interpreted as
the amount of ``cash'' (\textit{i.e.}, outcome independent value) an agent would ask for to
take on the uncertain position $v$.
Observe that the risk $\rho^F$ obtained when $\alpha = F$ satisfies
$\rho^F\circ\ell^F = 0$, so that $\ell^F(p)$ is always a $\rho^F$-risk free
position.
If we now interpret $\mu^* = \nabla\Phi(\mu)$ as a position over outcomes in
$\Theta$ and $\Phi^*$ as a risk for $\alpha = \Phi$ the term
$\left\{ \Phi^*(\mu^* - \ell^F_x(P)) \right\}_x - \Phi^*(\mu^*)\mathds{1}$
can be seen as the change in $\rho^\Phi$ risk when shifting position
$\mu^*$ to $\mu^* - \ell^F_x(P)$ for each possible outcome $x$.
Thus, the mixability condition in \eqref{eq:mix-F} can be viewed as a
requirement that the change in $\rho^\Phi$-risk induced by the
$\rho^F$-risk free positions $\ell^F(P)$ always be $\rho^F$-risk free.
The following theorem shows that the entropic version of $\Phi$-mixability
Definition~\ref{def:mix-F} is equivalent to the loss version in Definition~\ref{def:mix} in the case of proper losses.
Its proof can be found in Appendix~\ref{app:mix-proper}
and relies on Sion's minimax theorem and the fact that proper losses are
quasi-convex.
The latter fact appears to be new, so we state it here as a separate lemma
and prove it in Appendix~\ref{app:lemmas}.
\begin{lemma}\label{lem:proper-qc}
If $\ell : \Delta_X \to \mathbb{R}^X$ is proper then $p' \mapsto \inner{p, \ell(p')}$
is quasi-convex for all $p \in \Delta_X$.
\end{lemma}
\begin{theorem}\label{thm:mix-proper}
If $\ell : \Delta_X \to \mathbb{R}^X$ is proper and has Bayes risk $-F$ then $F$ is an
entropy and $\ell$ is $\Phi$-mixable if and only if $F$ is $\Phi$-mixable.
\end{theorem}
The entropic form of mixability in \eqref{eq:mix-F} shares some similarities with
expressions for
the classical mixability constants given in \cite{Haussler:1998} for binary
outcome games and in \cite{Erven:2012} for general games.
Our expression for mixability is more general than the previous two,
covering both binary and non-binary outcomes and general entropies.
It is also arguably more efficient since the optimization in \cite{Erven:2012}
for non-binary outcomes requires inverting a Hessian matrix at each point in the
optimization.
\subsection{Characterizing and Comparing $\Phi$-mixability}
Although Theorem~\ref{thm:main} recovers the already known constant regret
bound for Shannon-mixable losses, it is natural to ask whether the result is
vacuous or
not for other entropies.
That is, do there exist $\Phi$-mixable losses for $\Phi$ other than Shannon
entropy?
If so, do such $\Phi$-mixable losses exist for any entropy $\Phi$?
The next theorem answers both of these questions, showing that the existence
of ``non-trivial'' $\Phi$-mixable losses is intimately related to the behaviour
of an entropy's gradient at the simplex's boundary.
Specifically, an entropy $\Phi$ is said to be \emph{Legendre}
\cite{Rockafellar:1997} if: a) $\Phi$ is strictly convex
in $\operatorname{int}(\Delta_\Theta)$; and b)
$\|\nabla \Phi(\mu)\| \to \infty$ as $\mu \to \mu_b$ for any $\mu_b$ on the boundary
of $\Delta_\Theta$.
We will say a loss is \emph{non-trivial} if there exist distinct actions which
are optimal for distinct outcomes (see Appendix~\ref{app:legendre} for a
formal definition).
This, for example, rules out constant losses -- \textit{i.e.}, $\ell(a) = k \in \mathbb{R}^X$ for all $a\in\mathcal{A}$ -- which are easily\footnote{
	The inequality in \eqref{eq:mixability} reduces to
	$0 \le \inf_{\mu'} D_\Phi(\mu',\mu)$ which is true for all
	Bregman divergences.
}
seen to be $\Phi$-mixable for any $\Phi$.
For technical reasons we will further restrict our attention to \emph{curved}
losses by which we mean those losses with strictly concave Bayes risks.
We conjecture that the following theorem also holds for non-curved losses.
\begin{theorem}\label{thm:legendre}
There exist non-trivial, curved $\Phi$-mixable losses
if and only if
the entropy $\Phi$ is Legendre.
\end{theorem}
The proof is in Appendix~\ref{app:legendre}.
From this result we can deduce that there are no $Q$-mixable
losses. Also, since it is easy to show the derivatives $\nabla S_\alpha$ and $\nabla R_\alpha$ are unbounded for $\alpha \in (-1,0)$, the entropies $S_\alpha$ and
$R_\alpha$ are Legendre for this range. Thus there exist $S_\alpha$- and $R_\alpha$-mixable losses when $\alpha\in(-1,0)$.
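The boundary behaviour underlying this dichotomy is easy to observe numerically; the sketch below (illustrative only) shows the Tsallis gradient diverging near the boundary of a two-point simplex for $\alpha \in (-1,0)$, while the quadratic entropy's gradient remains bounded:

```python
def grad_tsallis_maxnorm(alpha, eps):
    # grad S_alpha(mu)_theta = (alpha+1)/alpha * mu_theta^alpha; evaluate
    # at a point at distance eps from the boundary of the 1-simplex.
    mu = [eps, 1.0 - eps]
    return max(abs((alpha + 1) / alpha * m ** alpha) for m in mu)

alpha = -0.5
norms = [grad_tsallis_maxnorm(alpha, e) for e in (1e-2, 1e-4, 1e-6)]
assert norms[0] < norms[1] < norms[2]        # gradient blows up: Legendre

def grad_quadratic_maxnorm(eps):
    # grad Q(mu)_theta = 2(mu_theta - 1/K) stays bounded near the boundary.
    mu = [eps, 1.0 - eps]
    return max(abs(2.0 * (m - 0.5)) for m in mu)

assert grad_quadratic_maxnorm(1e-6) < 2.0    # bounded: Q is not Legendre
```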
\section{Conclusions and Open Questions}
The main purpose of this work was to shed new light on mixability by
casting it within the broader notion of $\Phi$-mixability.
We showed that the constant regret bounds enjoyed by mixable losses
are due to the translation invariance of entropic duals, and so are also
enjoyed by any $\Phi$-mixable loss.
The definitions and technical machinery presented here allow us to ask
precise questions about entropies and the optimality of their associated
aggregating algorithms.
\subsection{Are All Legendre Entropies ``Equivalent''?}
Since Theorem~\ref{thm:legendre} shows the existence of $\Phi$-mixable losses, a natural question concerns the relationship between the sets of losses that
are mixable for different choices of $\Phi$.
For example, are there losses that are $H$-mixable but not $S_\alpha$-mixable,
or vice-versa?
We conjecture that essentially all Legendre entropies $\Phi$
have the same $\Phi$-mixable losses up to a scaling factor.
\begin{conjecture}\label{con:equivalent}
Let $\Phi$ be an entropy on $\Delta_\Theta$ and $\ell$ be a $\Phi$-mixable loss.
If $\Psi$ is a Legendre entropy on $\Delta_\Theta$ then there
exists an $\eta > 0$ such that $\ell$ is $\eta^{-1}\Psi$-mixable.
\end{conjecture}
Some intuition for this conjecture is derived from observing that
$\operatorname{Mix}^{\Psi_\eta}_{\ell,x} = \eta^{-1} \operatorname{Mix}^\Psi_{\eta\ell,x}$ and that
as $\eta \to 0$ the function $\eta\ell$ behaves like a constant loss and
will therefore be mixable. This means that scaling up $\operatorname{Mix}^\Psi_{\eta\ell,x}$
by $\eta^{-1}$ should make it larger than $\operatorname{Mix}^\Phi_{\ell,x}$.
However, some subtlety arises in ensuring that this dominance occurs uniformly.
\subsection{Asymptotic Behaviour}\label{sub:asymptotics}
There is a lower bound due to Vovk \cite{Vovk:1995} for general losses
$\ell$ which shows that if one is allowed to vary the number of rounds
$T$ and the number of experts $K = |\Theta|$, then no regret bound can
be better than the optimal regret bound obtained by Shannon
mixability. Specifically, for a fixed loss $\ell$ with optimal Shannon
mixability constant $\eta_\ell$, suppose that for some $\eta' >
\eta_\ell$ we have a regret bound of the form $(\log K)/ \eta'$
as well as some strategy $L$ for the learner that supposedly
satisfies this regret bound. Vovk's lower bound shows, for this
$\eta'$ and $L$, that there exists an instantiation of the prediction
with expert advice game with $T$ large enough and $K$ roughly exponential in $T$ (and both are still finite) for which the alleged regret bound will fail to hold at the
end of the game with non-zero probability.
The regime in which Vovk's lower bound holds suggests that the best achievable regret with respect to the number of experts grows as $\log K$. Indeed, there is a lower bound for general losses $\ell$ that shows the regret of the best possible algorithm on games using $\ell$ must grow like $\Omega(\log_2 K)$ \cite{Haussler:1998}.
The above lower bound arguments apply when the number of experts is
large (\textit{i.e.}, exponential in the number of rounds) or if we consider the
dynamics of the regret bound as $K$ grows. This leaves open the
question of the best possible regret bound for moderate and possibly
fixed $K$ which we formally state in the next section.
This question serves as a strong motivation for the study of generalized mixability considered here. Note also that the above lower bounds are consistent with the fact that there cannot be non-trivial, $\Phi$-mixable losses for non-Legendre $\Phi$ (\textit{e.g.}, the quadratic entropy $Q$): the corresponding regret bound would grow more slowly than $\log K$ as a function of $K$
(cf. Example~\ref{ex:regrets}) and hence violate the above lower bounds.
\subsection{Is There An ``Optimal'' Entropy?} \label{sub:regret}
Since we believe that $\Phi$-mixability for Legendre $\Phi$ yields the same
set of losses, we can ask whether, for a fixed loss $\ell$,
some choices of $\Phi$ give better regret bounds than others.
These bounds depend jointly on the largest $\eta$ such that
$\ell$ is $\Phi_\eta$-mixable and the value of $D_\Phi(\delta_\theta, \mu)$.
We can define the optimal regret bound
one can achieve for a particular loss $\ell$ using the generalized aggregating
algorithm with $\Phi_\eta := \tfrac 1 \eta \Phi$ for some $\eta > 0$. This
allows us to compare entropies on particular losses, and we can say that an
entropy \emph{dominates} another if its optimal regret bound is better for all
losses $\ell$.
Recalling the definition of the maximal $\Phi$-mixability constant from Lemma~\ref{lem:scaling}, we can determine a quantity of more direct interest:
the best regret bound one can obtain using a scaled copy of $\Phi$.
Recall that if $\ell$ is $\Phi$-mixable, then the best regret bound one can
achieve from the generalized aggregating algorithm is $\inf_{\mu}\sup_{\theta}
D_\Phi(\delta_\theta,\mu)$.
We can therefore define the best regret bound for $\ell$ on a scaled version of
$\Phi$ to be
$R_{\ell,\Phi} :=
\eta(\ell,\Phi)^{-1} \inf_{\mu}\sup_{\theta} D_\Phi(\delta_\theta,\mu)$
which simply corresponds to the regret bound for the entropy
$\Phi_{\eta(\ell,\Phi)}$.
Note a crucial property of $R_{\ell,\Phi}$, which will be very useful in
comparing entropies: $R_{\ell,\Phi} = R_{\ell,\alpha\Phi}$ for all $\alpha>0$.
(This follows from the observations that
$\eta(\ell,\alpha\Phi) = \alpha\,\eta(\ell,\Phi)$ and $D_{\alpha\Phi} = \alpha D_\Phi$.)
That is, $R_{\ell,\Phi}$ is independent of the particular scaling we
choose for $\Phi$.
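This scale invariance can be checked mechanically. The sketch below is a toy calculation, not an implementation of the mixability optimization: the value of $\eta(\ell,\Phi)$ is simply stipulated, and the scaling identities $\eta(\ell,\alpha\Phi)=\alpha\,\eta(\ell,\Phi)$ and $D_{\alpha\Phi}=\alpha D_\Phi$ are taken as given. It confirms that the two factors of $\alpha$ cancel in $R_{\ell,\alpha\Phi}$:

```python
import math

def regret_bound(eta0, div0, alpha):
    # R_{ell, alpha*Phi} = eta(ell, alpha*Phi)^{-1} * inf sup D_{alpha*Phi}
    eta = alpha * eta0   # the maximal mixability constant scales with alpha
    div = alpha * div0   # the Bregman divergence of alpha*Phi scales with alpha
    return div / eta

div0 = math.log(2.0)     # D_H(delta_theta, uniform) for two experts
r1 = regret_bound(1.0, div0, alpha=1.0)
r2 = regret_bound(1.0, div0, alpha=7.3)
# r1 == r2: the bound is independent of how Phi is scaled
```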
We can now use $R_{\ell,\Phi}$ to define a scale-invariant relation over entropies.
Define
$\Phi \geq_\ell \Psi$ if $R_{\ell,\Phi} \leq R_{\ell,\Psi}$, and $\Phi
\geq_* \Psi$ if $\Phi \geq_\ell \Psi$ for all losses $\ell$.
In the latter case we say
$\Phi$
\emph{dominates} $\Psi$.
By construction, if one entropy dominates another its regret bound is guaranteed
to be tighter and therefore its aggregating algorithm will achieve better
worst-case regret.
As discussed above, one natural candidate for a universally dominant entropy is the Shannon entropy.
\begin{conjecture}
\label{quest:msr-1}
For all choices of $\Theta$, the negative Shannon entropy dominates all
other entropies. That is, $H \geq_* \Phi$ for all
$\Theta$ and all convex $\Phi$ on $\Delta_\Theta$.
\end{conjecture}
Although we have not been able to prove this conjecture we were able to collect
some positive evidence in the form of Table~\ref{tab:regrets}.
Here, we took the entropic form of $\Phi$-mixability from
Definition~\ref{def:mix-F} and implemented\footnote{
In order to preserve anonymity the code will not be made available
until after publication.
}
it as an optimization problem in
the language R and computed $\eta(\ell^F, \Phi)$ for $F$ and $\Phi$ equal to the entropies introduced in Example~\ref{ex:entropies} for two-expert games with two outcomes.
The maximal $\eta$ (and hence the optimal regret bound) for each pair was found by a
binary search for the zero-crossing of $M(\eta)$ from Lemma~\ref{lem:scaling} and
then applying the bounds from Example~\ref{ex:regrets}.
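The binary-search step itself is generic. The following sketch is in Python rather than R, and `M` here is only a stand-in with a known crossing at $\eta=1$, since re-deriving the actual $M(\eta)$ of Lemma~\ref{lem:scaling} is beyond a snippet. It finds the largest $\eta$ with $M(\eta)\ge 0$ for a decreasing $M$:

```python
def max_eta(M, lo=0.0, hi=1.0, tol=1e-10):
    """Largest eta with M(eta) >= 0, assuming M is decreasing in eta."""
    while M(hi) >= 0:      # grow hi until the zero-crossing is bracketed
        hi *= 2.0
    while hi - lo > tol:   # plain bisection on the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        if M(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# stand-in for M(eta) with its zero-crossing at eta = 1
eta_star = max_eta(lambda eta: 1.0 - eta)
```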
Although we were expecting the dominant entropy for each loss $\ell^F$ to be
its ``matching'' entropy (\textit{i.e.}, $\Phi = F$), as can be seen from the table the
optimal regret bound for every loss was obtained in the column for $H$.
However, one interesting feature for these matching cases is that the optimal
$\eta$ (shown in parentheses) is always equal to 1.
\begin{conjecture}
Suppose $|X| = |\Theta|$ so that $\Delta_\Theta = \Delta_X$.
Given a Legendre $\Phi : \Delta_\Theta \to \mathbb{R}$ and its associated proper loss
$\ell^\Phi : \Delta_X \to \mathbb{R}^X$, the
maximal $\eta$ such that $\ell^\Phi$ is $\eta^{-1}\Phi$-mixable is
$\eta = 1$.
\end{conjecture}
We conjecture that this pattern will hold for matching entropies and losses
for larger numbers of experts and outcomes and hope to test or prove this in future work.
\begin{table} \centering
\caption{Mixability and optimal regrets for pairs of losses and entropies in
2 outcome/2 experts games.
Entries show the regret bound
$\eta^{-1}D_{\Phi}(\delta_\theta, \frac{1}{2}\mathds{1})$
for the maximum $\eta$ (in parentheses).
\label{tab:regrets}}
\begin{tabular}{@{} lccccccccc @{}}
& \multicolumn{9}{c}{\textbf{Entropy}}
\\ \cmidrule{2-10}
\textbf{Loss}
& $H$
&
& $S_{-.1}$
& $S_{-.5}$
& $S_{-.9}$
&
& $R_{-.1}$
& $R_{-.5}$
& $R_{-.9}$
\\ \midrule
$\log$
& 0.69 ($1^*$)
&
& 0.74 (.97)
& 1.17 (.71)
& 5.15 (.19)
&
& 0.77 (0.9)
& 1.38 (0.5)
& 6.92 (0.1)
\\
$\ell^Q$
& 0.34 (2)
&
& 0.37 (1.9)
& 0.58 (1.4)
& 2.57 (0.4)
&
& 0.38 (1.8)
& 0.69 (1)
& 3.45 (0.2)
\\
$\ell^{S_{-.5}}$
& 0.49 (1.4)
&
& 0.53 (1.4)
& 0.82 ($1^*$)
& 3.64 (.26)
&
& 0.54 (1.3)
& 0.98 (.71)
& 4.90 (.14)
\\
$\ell^{R_{.5}}$
& 0.34 (2)
&
& 0.37 (1.9)
& 0.58 (1.4)
& 2.57 (.37)
&
& 0.38 (1.8)
& 0.69 ($1^*$)
& 3.46 (0.2)
\\\bottomrule
\end{tabular}
\end{table}
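The $(\log, H)$ entry of Table~\ref{tab:regrets} can be reproduced by hand: with $\eta^*=1$, the bound is $D_H(\delta_\theta, \tfrac{1}{2}\mathds{1}) = \log 2 \approx 0.69$. A sketch of that check, assuming the convention $H(\mu)=\sum_i\mu_i\log\mu_i$, whose Bregman divergence is the KL divergence:

```python
import numpy as np

def bregman_H(p, q):
    # D_H(p, q) for H(mu) = sum_i mu_i log mu_i is the KL divergence;
    # 0 * log 0 is treated as 0
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

delta = np.array([1.0, 0.0])           # delta_theta for one of the two experts
unif = np.array([0.5, 0.5])
bound = bregman_H(delta, unif) / 1.0   # eta* = 1 for the log loss and H
# bound = log 2 ~ 0.69, matching the (log, H) entry of the table
```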
\subsubsection*{Acknowledgments}
We would like to thank Matus Telgarsky for help with restricted duals, Brendan
van Rooyen for noting that there are no quadratic mixable losses, and Harish
Guruprasad for identifying a flaw in an earlier ``proof'' of the quasi-convexity
of proper losses.
Mark Reid is supported by an ARC Discovery Early Career Research Award (DE130101605) and part of this work was developed while he was visiting Microsoft Research.
NICTA is funded by the Australian Government and as an ARC ICT Centre of Excellence.
\newpage
\bibliographystyle{unsrt}
\section{Introduction}
The theoretical prediction of the excitation spectra of interacting electronic systems is a major challenge in quantum chemistry and condensed matter physics. A method that has been gaining popularity in the past years is time-dependent density functional theory (TDDFT) offering a rigorous and computationally efficient approach for treating excited states of large molecules and nanoscale systems.
In TDDFT the interacting electronic density is calculated from a system of non-interacting electrons moving in an effective local Kohn-Sham (KS) potential \cite{KS}.
The KS potential is the sum of the external, the Hartree and the so-called exchange-correlation (XC) potential, $v_{\rm xc}(\vr t)$, which, due to the Runge-Gross theorem \cite{rg0}, is a functional of the density. In the linear response regime, the excitation energies can be extracted from the poles of the linear density response function. As a consequence, given the variational derivative $f_{\rm xc}(\vr t,\vr' t')=\delta v_{{\rm xc}}(\vr t)/\delta n(\vr' t')$, also known as the XC kernel \cite{grosskohn}, it is possible to formulate an RPA-like equation for the exact excitation spectrum \cite{pgg96,casbook}. In practical calculations, this equation is solved using some approximate XC potential and kernel, where the most popular ones are based on the adiabatic local density approximation (ALDA), leading to kernels local in both space and time. Despite this simple structure, optical excitations of small molecules are successfully predicted. However, several shortcomings have also been reported: excitons in solids are not captured \cite{reining,marini}, double excitations are missing \cite{m1}, and charge-transfer (CT) excitations are qualitatively incorrect \cite{dreuw,mt06,handy,roi1}. In this work we will be concerned with the last of these problems. To see why the ALDA fails in this case, we consider a charge transfer between two neutral Coulombic fragments. The asymptotic limit of the excitation energy is then given by
\begin{equation}
\omega_{\rm CT}=I_d-A_a-1/R,
\end{equation}
where $I_d$ is the ionization energy of the donor, $A_a$ is the affinity of the acceptor and $R$ is their separation.
In TDDFT the starting point is the KS system which yields the exact $I_d$ but only an approximate $A_a$. Thus, the XC kernel must both account for the $1/R$ correction and shift the KS affinity.
The linear response equations involve, however, only matrix elements of $f_{\rm xc}$ between so-called excitation functions $\Phi_{ia}(\vr)=\varphi_i(\vr)\varphi_a(\vr)$, i.e., products of occupied and unoccupied KS orbitals. As the distance between the fragments increases these products vanish exponentially, and thus there is no correction to the KS eigenvalue differences unless the kernel diverges \cite{barct,tozer2}. Kernels from the ALDA, or adiabatic GGAs for that matter, do not contain such a divergence, and it is as yet not understood how this extreme behavior should be incorporated in approximate functionals.
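The exponential suppression of these matrix elements is easy to verify numerically. The sketch below uses a toy 1D model with exponentially localized orbitals of assumed decay constants, not the actual KS orbitals of any system; it shows the norm of the excitation function $\Phi_{\rm HL}$ collapsing as the fragment separation $R$ grows:

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 12001)
dx = x[1] - x[0]

def orbital(x0, kappa):
    # exponentially localized 1D orbital centred at x0, normalized on the grid
    phi = np.exp(-kappa * np.abs(x - x0))
    return phi / np.sqrt(np.sum(phi**2) * dx)

norms = []
for R in [5.0, 10.0, 20.0]:
    phi_H = orbital(-R / 2, 1.0)   # donor HOMO (assumed decay constant)
    phi_L = orbital(+R / 2, 0.7)   # acceptor LUMO (assumed decay constant)
    Phi_HL = phi_H * phi_L         # excitation function
    norms.append(np.sum(np.abs(Phi_HL)) * dx)
# norms shrink roughly like exp(-0.7 R); a bounded kernel therefore
# gives a vanishing correction to the KS eigenvalue difference
```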
Whenever two subsystems are spatially well-separated it is possible to treat one of the subsystems in terms of an ensemble containing states with different number of particles. DFT has been generalized to non-integer charges and as an important consequence it was found that the XC potential jumps discontinuously by a constant at integer particle numbers in order to align the highest occupied KS eigenvalue with the chemical potential \cite{pplb82}. In this way, the true affinity, $A$, is equal to the sum of the KS affinity, $A_s$, and the discontinuity. Not surprisingly, it has therefore been argued that the discontinuity must play an important role in describing CT excitations within TDDFT \cite{vcu09}.
So far, only the XC potential has been the target for investigating discontinuities in DFT and TDDFT \cite{lein,burketran,gl}. In this work we instead examine possible discontinuities of the XC kernel. We demonstrate the existence of a discontinuity and we study its properties. Furthermore, we give an explicit example through a numerical study of the EXX functional. Finally, as a first application, we demonstrate the crucial role of the discontinuity for capturing CT excitations in linear response TDDFT.
\section{Derivative discontinuity in DFT}
We start by considering a static system of electrons described by a statistical operator $\hat{\rho}=\sum_k\alpha_k|\Psi_k \rangle\langle \Psi_k|$, where $|\Psi_k\rangle$ denotes the ground state of $k$ particles corresponding to the Hamiltonian $\hat{H}=\hat{T}+\hat{V}+\int d\vr\, w(\vr)\hat{n}(\vr)$, in which $\hat{T}$ is the kinetic energy, $\hat{V}$ the inter-particle interaction and $w$ is the external potential.
The ground-state energy $E_0$ of the system with average number of particles $N$ is obtained by minimizing the functional $E_w[n]=F[n]+\int d\vr \, w(\vr) n(\vr)$, where
$F[n]=\min_{\hat{\rho}\to n}{\rm Tr}[\hat{\rho}(\hat{T}+\hat{V})]$, under the constraint that $N=\int d\vr \,n(\vr)$. At the minimum $n=n[w,N]$ coincides with the ground-state density. The XC energy is defined as $E_{\rm xc}[n]=F[n]-T_s[n]-U[n]$, where $T_s[n]=\min_{\hat{\rho}\to n}{\rm Tr}[\hat{\rho}\hat{T}]$ is the non-interacting kinetic energy and $U[n]$ is the Hartree energy. Assuming that the density can be reproduced by an ensemble of non-interacting electrons the XC part of the KS potential is given by $v_{{\rm xc}}(\vr)=\delta E_{{\rm xc}}/\delta n(\vr)$. In general $E_0$ and, in particular, $E_{\rm xc}[n[w,N]]$ has derivative discontinuities at integer particle numbers $N_0$ \cite{perlev,ss83}.
The partial derivative of $E_{\rm xc}$ with respect to $N$,
\begin{eqnarray}
\frac{\partial E_{{\rm xc}}}{\partial N}=\int d \vr\,\, v_{{\rm xc}}(\vr)\frac{\partial n(\vr)}{\partial N},
\label{forstaderivN}
\end{eqnarray}
has two sources of discontinuous behavior: (i) the quantity
\begin{equation}
f(\vr)\equiv\frac{\partial n(\vr)}{\partial N},
\end{equation}
known as the Fukui function, may have different right and left limits, $f^+$ and $f^-$ (the superscript $\pm$ refers to the value of the quantity at $N=N_0+0^\pm$), (ii) the XC potential may be discontinuous, $v^+_{{\rm xc}}(\vr)=v^-_{{\rm xc}}(\vr)+\Delta_{\rm xc}$, where $\Delta_{\rm xc}$ is a constant. Below we will show that also the second variation of $E_{\rm xc}$ with respect to the density, the static XC kernel $f_{\rm xc}(\vr,\vr')=\delta v_{\rm xc}(\vr)/\delta n(\vr')$, has discontinuities which are related to derivative discontinuities in the density itself. As pointed out in previous work \cite{hvb2,awst3,casbook}, the {\em particle number conserving} density response is unaffected by adding to $f_{\rm xc}$ a function depending only on one of the coordinates. In line with the results of Ref. \onlinecite{gal1} we therefore argue that the discontinuities of $f_{\rm xc}$ must be of the form
\begin{equation}
f^+_{\rm xc}(\vr,\vr')=f^-_{\rm xc}(\vr,\vr')+g_{\rm xc}(\vr)+g_{\rm xc}(\vr').
\end{equation}
In the following we will show a simple procedure for determining $\Delta_{\rm xc}$ and $g_{\rm xc}(\vr)$ which is useful whenever $E_{\rm xc}$ is an implicit functional of the density via, e. g., the KS Green function.
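That a discontinuity of the form $g_{\rm xc}(\vr)+g_{\rm xc}(\vr')$ leaves the particle-number-conserving response untouched can be seen in one line: acting on a density variation $\delta n$ with $\int d\vr\,\delta n(\vr)=0$, it produces only an $\vr$-independent constant in the potential. A numerical sketch of this statement, with an arbitrary smooth $g$ and a sign-changing $\delta n$ (both purely illustrative choices):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

g = np.exp(0.3 * x)                                  # arbitrary g(r)
dn = np.exp(-(x - 1.0)**2) - np.exp(-(x + 1.0)**2)   # integrates to zero

# contribution of g(r) + g(r') to delta v_xc(r) = int f_xc(r,r') dn(r') dr'
dv = g * (np.sum(dn) * dx) + np.sum(g * dn) * dx

# the first term vanishes and the second is r-independent, so dv is only
# a constant shift: the conserving density response is unaffected
```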
\subsection{XC potential}
For $N> N_0$, we write $v_{\rm xc}(\vr)=v^-_{\rm xc}(\vr)+\Delta_{\rm xc}(\vr)$ and cast Eq. (\ref{forstaderivN}) into
\begin{eqnarray}
\int d\vr\,\, \Delta_{\rm xc}(\vr)f(\vr)=\frac{\partial E_{{\rm xc}}}{\partial N}-\int d \vr\, v^-_{{\rm xc}}(\vr)f(\vr).
\label{muxc}
\end{eqnarray}
In the limit $N\rightarrow N_{0}^+$, $\Delta_{\rm xc}(\vr)\to\Delta_{\rm xc}$ and we find a formal expression for the discontinuity of $v_{\rm xc}$
\begin{eqnarray}
\Delta_{\rm xc}=\left.\frac{\partial E_{{\rm xc}}}{\partial N}\right|_+-\int d \vr\, v^-_{{\rm xc}}(\vr)f^+(\vr).
\label{muxc2}
\end{eqnarray}
This expression can be used as the starting point for deriving the well-known MBPT-formula for the correction to the gap \cite{perde}, as we will now demonstrate. From the Klein functional within MBPT \cite{klein} it is possible to construct an XC energy functional in terms of the KS Green function $G_s(\vr,\vr',\omega)$ \cite{vbdvls05}. In this case $\Sigma_{\rm xc}=\delta E_{\rm xc}/\delta G_s$, where $\Sigma_{\rm xc}$ is the self-energy evaluated at $G_s$. The derivative of $E_{\rm xc}$ with respect to $N$ is then given by
\begin{eqnarray}
\frac{\partial E_{\rm xc}}{\partial N}=\int \frac{d\omega}{2\pi}\, \int d\vr d\vr' \Sigma_{{\rm xc}}(\vr,\vr',\omega)\frac{\partial G_s(\vr,\vr',\omega)}{\partial N}.
\label{dedg}
\end{eqnarray}
In order to evaluate the derivative of $G_s$ with respect to $N$ we consider an ensemble described by a spin-compensated mixture of states with electron number $N_0$ and $N_0+1$. The KS ensemble Green function for $N\in [N_0,N_0+1]$ is given by
\begin{eqnarray}
G_s(\vr,\vr',\omega)&=&\sum_{k=1}^{N_0}\frac{\varphi_k(\vr)\varphi_k(\vr')}{\omega-\varepsilon_k-i\eta}+\sum_{k=N_0+2}^{\infty}\frac{\varphi_k(\vr)\varphi_k(\vr')}{\omega-\varepsilon_k+i\eta}\nn\\
&&\!\!\!\!\!\!\!\!\!\!\!+\frac{p}{2}\frac{\varphi_{\rm L}(\vr)\varphi_{\rm L}(\vr')}{\omega-\varepsilon_{\rm L}-i\eta}+\left(1-\frac{p}{2}\right)\frac{\varphi_{\rm L}(\vr)\varphi_{\rm L}(\vr')}{\omega-\varepsilon_{\rm L}+i\eta}
\end{eqnarray}
where $p=N-N_0$ and the subscript ${\rm L}$ signifies the lowest unoccupied molecular orbital (LUMO) of the KS system, which is considered partially occupied and partially unoccupied. Notice that the KS orbitals $\varphi_k$ and eigenvalues $\varepsilon_k$ also depend on $N$ via the KS potential $V_s$. The derivative of $G_s$ with respect to $N$ is now easily carried out
\begin{eqnarray}
\frac{\partial G_s(\vr,\vr',\omega)}{\partial N}&=&\frac{1}{2}\frac{\varphi_{\rm L}(\vr)\varphi_{\rm L}(\vr')}{\omega-\varepsilon_{\rm L}-i\eta}-\frac{1}{2}\frac{\varphi_{\rm L}(\vr)\varphi_{\rm L}(\vr')}{\omega-\varepsilon_{\rm L}+i\eta}\nonumber\\
&&+\int d\vr_1\frac{\delta G_s(\vr,\vr',\omega)}{\delta V_s(\vr_1)}\frac{\partial V_s(\vr_1)}{\partial N}.
\label{muxc3}
\end{eqnarray}
From Eq. (\ref{muxc3}) and Eq. (\ref{dedg}) we find
\begin{eqnarray}
\left.\frac{\partial E_{\rm xc}}{\partial N}\right|_+&=&\int d\vr d\vr' \varphi^+_{\rm L}(\vr)\Sigma^+_{{\rm xc}}(\vr,\vr',\varepsilon_{\rm L})\varphi^+_{\rm L}(\vr')\nn\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\int \frac{d\omega}{2\pi}\int d\vr d\vr'd\vr_1\Sigma^+_{{\rm xc}}(\vr,\vr',\omega)\left.\frac{\delta G_s(\vr,\vr',\omega)}{\delta V_s(\vr_1)}\frac{\partial V_s(\vr_1)}{\partial N}\right|_+.
\label{sig}
\end{eqnarray}
The second term on the right hand side of Eq. (\ref{muxc2}) can be written as
\begin{eqnarray}
\int d \vr\, v^-_{{\rm xc}}(\vr)f^+(\vr)&=&\int d \vr\, v^-_{{\rm xc}}(\vr)|\varphi^+_{\rm L}(\vr)|^2\nn\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\int d\vr'd\vr_1\left.v^+_{{\rm xc}}(\vr')\frac{\delta n(\vr')}{\delta V_s(\vr_1)}\frac{\partial V_s(\vr_1)}{\partial N}\right|_+.
\label{vxi}
\end{eqnarray}
The discontinuity $\Delta_{\rm xc}$ is now easily determined. The second terms on the right hand side of Eqs. (\ref{sig}-\ref{vxi}) will cancel by virtue of the linearized Sham-Schl\"uter equation \cite{ss} and we find
\begin{eqnarray}
\Delta_{\rm xc}=\int d\vr d\vr' \varphi_{\rm L}(\vr)\Sigma^+_{{\rm xc}}(\vr,\vr',\varepsilon_{\rm L})\varphi_{\rm L}(\vr')\nn\\
-\int d \vr\, v^-_{{\rm xc}}(\vr)|\varphi_{\rm L}(\vr)|^2,
\label{muxc4}
\end{eqnarray}
where we have omitted the superscript on the orbitals since they are continuous with respect to $N$. Eq. (\ref{muxc4}) agrees with the one in Ref. \onlinecite{perde}.
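The occupation-derivative structure used above, i.e., the two explicit pole terms of Eq. (\ref{muxc3}), can be sanity-checked numerically. The sketch below builds the LUMO part of the ensemble Green function for a toy level at an assumed $\varepsilon_{\rm L}$, holds the KS potential (and hence orbitals and eigenvalues) fixed, and compares a finite difference in $N$ with the analytic pole terms; the orbital factor $\varphi_{\rm L}(\vr)\varphi_{\rm L}(\vr')$ is omitted since it multiplies both sides:

```python
import numpy as np

eta = 1e-3
w = np.linspace(-2.0, 2.0, 401).astype(complex)   # frequency grid
eps_L = -0.25                                     # toy LUMO eigenvalue

def G_L(p):
    # LUMO contribution to the ensemble KS Green function at fixed V_s,
    # with fractional occupation p = N - N0
    return (p / 2) / (w - eps_L - 1j * eta) + (1 - p / 2) / (w - eps_L + 1j * eta)

# analytic derivative: the explicit pole terms of Eq. (muxc3)
dG_exact = 0.5 / (w - eps_L - 1j * eta) - 0.5 / (w - eps_L + 1j * eta)

# central finite difference in N at fixed potential
h = 1e-6
dG_fd = (G_L(0.3 + h) - G_L(0.3 - h)) / (2 * h)
```

Since $G_s$ is linear in $p$ at fixed potential, the finite difference reproduces the pole terms to rounding accuracy.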
\subsection{XC kernel}
Next, we turn to the XC kernel for which we exhibit the discontinuities by taking the functional derivative of $\partial E_{\rm xc}/\partial N$ (Eq. (\ref{forstaderivN})) with respect to $w$. This yields
\begin{eqnarray}
\frac{\delta}{\delta w(\vr_1)}\frac{\partial E_{\rm xc}}{\partial N}&=&\!\int\! d \vr d \vr'\,\chi(\vr_1,\vr') f_{{\rm xc}}(\vr',\vr)f(\vr)\nn\\
&&+\!\int \!d \vr \,
v_{\rm xc}(\vr)\frac{\delta f(\vr)}{\delta w(\vr_1)}
\label{muxcdeltan1}
\end{eqnarray}
where we have used the chain rule
\begin{equation}
\frac{\delta v_{\rm xc}(\vr)}{\delta w(\vr_1)}=\int d\vr'\frac{\delta v_{\rm xc}(\vr)}{\delta n(\vr')}\frac{\delta n(\vr')}{\delta w(\vr_1)}
\end{equation}
and identified the linear density response function $\chi(\vr_1,\vr')=\delta n(\vr')/\delta w(\vr_1)$. Then, we write $f_{\rm xc}(\vr,\vr')=f^-_{\rm xc}(\vr,\vr')+g_{\rm xc}(\vr,\vr')$, insert in Eq. (\ref{muxcdeltan1}), and take the limit $N\rightarrow N_{0}^+$. According to the discussion above, in this limit $g_{\rm xc}(\vr,\vr')\to g_{\rm xc}(\vr)+g_{\rm xc}(\vr')$. Using furthermore that $\int d\vr\,\chi(\vr,\vr')=0$ and $\int d\vr f(\vr)=1$ we find the following equation for the discontinuity $g_{\rm xc}$ of $f_{\rm xc}$:
\begin{widetext}
\begin{eqnarray}
&&\!\!\!\!\!\!\!\!\!\!\!\int d \vr \,\chi(\vr_1,\vr) g_{\rm xc}(\vr)=\left.\frac{\delta}{\delta w(\vr_1)}\frac{\partial E_{\rm xc}}{\partial N}\right|_+-\!\int\! d \vr d \vr'\,\chi(\vr_1,\vr) f^-_{{\rm xc}}(\vr,\vr')f^+(\vr')-\!\int \!d \vr \,
v^+_{\rm xc}(\vr)\left.\frac{\delta f(\vr)}{\delta w(\vr_1)}\right |_+
\label{muxcdeltan}
\end{eqnarray}
We notice that Eq. (\ref{muxcdeltan}) only determines $g_{\rm xc}$ up to a constant. This constant can, however, easily be fixed by considering $\partial^2 E_{\rm xc}/\partial N^2$ in the limit $N\rightarrow N_{0}^+$
\begin{eqnarray}
2\int d \vr \,f^+(\vr)g_{\rm xc}(\vr)=\left.\frac{\partial^2 E_{\rm xc}}{\partial N^2}\right|_+-\int d \vr' \,v^+_{\rm xc}(\vr')\left.\frac{\partial f(\vr')}{\partial N}\right|_+ -\int d \vr d\vr' \, f^+(\vr)f^-_{{\rm xc}}(\vr,\vr')f^+(\vr').
\label{muxcdeltandeltan}
\end{eqnarray}
\end{widetext}
This equation does not allow an arbitrary constant in $g_{\rm xc}$. Consequently, Eqs. (\ref{muxcdeltan}) and (\ref{muxcdeltandeltan}) together uniquely determine the discontinuity of $f_{\rm xc}$.
To gain insight into the $\vr$-dependence of $g_{\rm xc}(\vr)$ we employ a common energy denominator approximation (CEDA) \cite{kli} to Eq. (\ref{muxcdeltan}). To do so, we first notice that the derivative with respect to $w$ can be replaced by the derivative with respect to $V_s$ using the chain rule, since the density is a functional of $w$ only via $V_s$. The CEDA then allows us to partially invert the KS response function $\chi_s$ analytically. We will focus on the left hand side of Eq. (\ref{muxcdeltan}) and on the last term on the right hand side. These terms are less sensitive to the approximation used for $E_{\rm xc}$ and should therefore give rise to generic behavior. If all energy denominators are set to the constant $\Delta\epsilon$ we find on the left hand side of Eq. (\ref{muxcdeltan})
\begin{eqnarray}
\int d \vr \,\chi_s(\vr_1,\vr) g_{\rm xc}(\vr)&\approx&-\frac{2}{\Delta\epsilon} n(\vr_1)g_{\rm xc}(\vr_1)\nn\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\frac{2}{\Delta\epsilon} \!\int d\vr\,\gamma(\vr_1,\vr)g_{\rm xc}(\vr)\gamma(\vr,\vr_1)
\label{cedag}
\end{eqnarray}
where $\gamma$ is the KS density matrix. If we focus on the last term in Eq. (\ref{muxcdeltan}) and use $f^+(\vr)\approx |\varphi_{\rm L}(\vr)|^2$ we find in the CEDA
\begin{eqnarray}
\!\int \!d \vr \,
v^+_{\rm xc}(\vr)\left.\frac{\delta f(\vr)}{\delta V_s(\vr_1)}\right |_+\approx-\frac{2}{\Delta\epsilon} |\varphi_{\rm L}(\vr_1)|^2 v_{\rm xc}^-(\vr_1)\nn\\
\!\!\!\!\!\!\!+\frac{2}{\Delta\epsilon} |\varphi_{\rm L}(\vr_1)|^2 \int d\vr |\varphi_{\rm L}(\vr)|^2 v_{\rm xc}^-(\vr)\nn\\
+\frac{4}{\Delta\epsilon}\varphi_{\rm L}(\vr_1)\int d\vr\,\gamma(\vr_1,\vr)\varphi_{\rm L}(\vr)v_{\rm xc}^-(\vr)
\label{cedavx}
\end{eqnarray}
From Eq. (\ref{cedag}) and Eq. (\ref{cedavx}) we can extract an approximate asymptotic behavior
\begin{equation}
g_{\rm xc}(\vr)\sim -\frac{|\varphi_{\rm L}(\vr)|^2}{n(\vr)}v_{\rm xc}(\vr)\sim e^{2(\sqrt{2I}-\sqrt{2A_s})\, r},
\label{gxcdiv}
\end{equation}
where $I$ is the ionization energy \cite{ab} and $A_s$ is the KS affinity. Thus, we can conclude that if $I>A_s$ then $g_{\rm xc}$ contains a term which diverges exponentially as $r\to\infty$. That the first term of Eq. (\ref{muxcdeltan}) would exactly cancel the term in Eq. (\ref{gxcdiv}) is highly unlikely, and we will see explicitly that this is not the case within an approximation that accounts for the derivative discontinuity, such as the EXX approximation. To obtain Eq. (\ref{gxcdiv}) we used the CEDA, but we will show below that the discontinuity obtained from the full solution of Eq. (\ref{muxcdeltan}) exhibits the same behavior. In addition, we will demonstrate that this feature is responsible for sharp peak structures as well as divergences in the kernel of a combined donor-acceptor system.
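The exponent in Eq. (\ref{gxcdiv}) follows from the standard asymptotics $n(\vr)\sim e^{-2\sqrt{2I}\,r}$ and $|\varphi_{\rm L}(\vr)|^2\sim e^{-2\sqrt{2A_s}\,r}$, and is easy to confirm numerically. A sketch with toy values of $I$ and $A_s$ (illustrative numbers only):

```python
import numpy as np

I, A_s = 0.5, 0.1     # toy ionization energy and KS affinity, I > A_s
r = np.linspace(5.0, 20.0, 301)

n = np.exp(-2.0 * np.sqrt(2.0 * I) * r)         # asymptotic density
phi_L2 = np.exp(-2.0 * np.sqrt(2.0 * A_s) * r)  # asymptotic LUMO density

slope = np.gradient(np.log(phi_L2 / n), r)      # log-derivative of the ratio
expected = 2.0 * (np.sqrt(2.0 * I) - np.sqrt(2.0 * A_s))
# slope equals the positive constant 2(sqrt(2I) - sqrt(2A_s)):
# the ratio, and hence g_xc, grows exponentially with r
```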
\section{Discontinuity of the dynamical XC kernel}
So far the analysis has been limited to the static case. To investigate if the kernel has discontinuities at finite frequency the discussion above must be generalized to an ensemble which allows the number of particles to change in time. To this end, we consider the following statistical operator $\hat{\rho}(t)=\sum_k\alpha_k(t)|\Psi_{k}(t)\rangle\langle\Psi_{k}(t)|$,
where $\alpha_k(t)$ are given time-dependent coefficients whose sum is equal to 1 and $|\Psi_k(t)\rangle$ is the many-body state of $k$ particles at time $t$. It is possible to prove a Runge-Gross-like theorem for this ensemble which allows us to define the XC potential as a functional of the ensemble density. In this way, the functional derivative $\delta v_{\rm xc}(\vr t)/\delta n(\vr' t')$ contains no arbitrariness, but leaves the possibility for a discontinuity of the form $f^+_{\rm xc}(\vr,\vr';t-t')=f^-_{\rm xc}(\vr ,\vr';t-t')+g_{\rm xc}(\vr ;t-t')+g_{\rm xc}(\vr';t-t')$. If we vary $v_{\rm xc}(\vr t)$ with respect to the time-dependent number of particles $N(t')$ and evaluate the derivative at the ground state with $N=N^{+}_0$ an expression for the frequency-dependent discontinuity $g_{\rm xc}(\vr,\omega)$ can be derived
\begin{eqnarray}
\left.\frac{\delta v_{\rm xc}(\vr t)}{\delta N(t')}\right |_{n^+_0}=\int d\vr' f^-_{\rm xc}(\vr,\vr', t-t')f^+(\vr')\nonumber\\
+\int d\vr' g_{\rm xc}(\vr', t-t')f^+(\vr')+g_{\rm xc}(\vr, t-t'),
\end{eqnarray}
where we again have used the chain rule. Below we will see with a numerical example how the frequency dependence modifies the discontinuity, making the divergence stronger than in the static case and allowing for a correct description of CT excitations in a combined donor-acceptor system.
This will be illustrated in the time-dependent (TD) EXX approximation. For a derivation and analysis of the EXX kernel we refer the reader to Ref. \cite{hvB3,gorexx}.
From now on the subscript ${\rm xc}$ will be replaced by ${\rm x}$ to denote quantities in the TDEXX approximation.
\begin{figure}[t]
\includegraphics[width=8.5cm, clip=true]{ct2.eps}\\
\caption{Left: EXX potential and kernel for a heteronuclear system of four electrons. Right: the discontinuity $g_{\rm x}(x,0)$ calculated from Eq. (\ref{muxcdeltan}) for the isolated two-electron subsystem as well as $G(x)$ for $N=2.0001$. Note that the potentials have been rescaled and shifted for better visibility.}
\label{ediag}
\end{figure}
\section{Results}
In the single-pole approximation (SPA) \cite{pgg96} of TDDFT the XC correction to the KS excitation energy $\omega_q$ is given by twice the matrix element $\langle q|f_{{\rm xc}}(\omega)|q\rangle=\int d\vr d\vr' \Phi_q(\vr) f_{\rm xc}(\vr,\vr',\omega) \Phi_q(\vr')$ at $\omega=\omega_q$, where the index $q=ia$ corresponds to an arbitrary excitation.
In Ref. \cite{gs99} it has been shown that in TDEXX $2\langle q|f_{{\rm x}}(\omega_q)|q\rangle=\langle a|\Sigma_{\rm x}-v_{\rm x}|a\rangle-\langle i|\Sigma_{\rm x}-v_{\rm x}|i\rangle-\langle aa|v|ii\rangle$,
where $v$ is the Coulomb interaction. Considering a CT excitation between HOMO ($i={\rm H}$) and LUMO ($a={\rm L}$) we see that the last term goes as $1/R$. Setting, as usual \cite{gkkg}, $\langle {\rm H}|\Sigma_{\rm x}-v_{\rm x}|{\rm H}\rangle=0$ and using the previously mentioned result $\langle {\rm L}|\Sigma_{\rm x}-v_{\rm x}|{\rm L}\rangle=\Delta_{\rm x}$, we can deduce that \cite{ctneep}
\begin{eqnarray}
\omega_{\rm CT}&=&\omega_{\rm HL}+2\langle {\rm HL}|v+f_{{\rm x}}(\omega_{\rm HL})|{\rm HL}\rangle\nn\\
&&\to\omega_{\rm HL}+\Delta_{{\rm x}}-1/R.
\label{asymp}
\end{eqnarray}
The kernel $f_{\rm x}$ thus produces a finite correction if evaluated at $\omega_{\rm HL}$ and yields exactly the results corresponding to first-order G\"orling-Levy perturbation theory \cite{glper,gs99,spagk,spkf}. In the following we will see that it is the discontinuity of the kernel that yields this correct result. We have deliberately used the SPA and not the full solution of the Casida equations in conjunction with the EXX kernel, a procedure which would imply the inclusion of higher orders in the explicit dependence on the Coulomb interaction. Our motive is to study an exact property of the kernel that is well captured in the SPA of TDEXX but may be subject to errors inherent to the approximation when higher orders are included.
We model \cite{jes88} a stretched diatomic molecule in terms of 1D atoms described by the potential $-Q/\sqrt{(x-x_0)^2+1}$, where $x_0$ is the location of the atom and $Q$ is the nuclear charge, and we replace the Coulomb interaction $v$ everywhere with a soft-Coulomb interaction $1/\sqrt{(x-x')^2+1}$. We study two different systems, one ionic and one neutral. In the ionic system the discontinuity is important already at the level of the XC potential, whereas in the neutral system the discontinuity appears only in the XC kernel.
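This 1D soft-Coulomb model is simple enough to solve on a grid. The sketch below is an independent single-particle solver with illustrative grid parameters, not the self-consistent EXX calculation used in this work; it diagonalizes $-\tfrac12\, d^2/dx^2 + v(x)$ for the donor-acceptor potential at $R=10$ a.u. by second-order finite differences:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 801)
dx = x[1] - x[0]

def soft_coulomb(Q, x0):
    # soft-Coulomb nuclear potential of charge Q centred at x0
    return -Q / np.sqrt((x - x0)**2 + 1.0)

v = soft_coulomb(2.0, -5.0) + soft_coulomb(4.0, 5.0)  # donor + acceptor, R = 10

# -1/2 d^2/dx^2 by second-order central differences, plus the potential
main = np.full(x.size, 1.0 / dx**2) + v
off = np.full(x.size - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eps, phi = np.linalg.eigh(H)   # eigenvalues in ascending order
# the low-lying orbitals are bound (eps < 0) and localize on one atom each
```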
\subsection{Ionic system}
In the first example we study an ionic system, setting the nuclear charge of the left atom (donor) to $Q=2$ and that of the right atom (acceptor) to $Q=4$, and solve the ground-state KS problem with 4 electrons. In the ground state at internuclear separation $R=10$ a.u. we find two electrons on each atom, and in Fig. \ref{ediag} (left panel) the EXX potential is displayed (fade line) in arbitrary units. Two steps are clearly visible, one between the atoms and another one on the right side of the acceptor. As a consequence, $v_{\rm x}$ is shifted upwards in the acceptor region, placing the KS LUMO of the isolated acceptor above the HOMO of the isolated donor. This implies that the KS affinity of the acceptor becomes closer to the true affinity. In the same figure and panel we also display the quantity
\begin{equation}
F_{\rm HL}(x,\omega)=\int dx' f_{\rm x}(x,x',\omega)\Phi_{\rm HL}(x'),
\label{intker}
\end{equation}
at $\omega=0$. The function $F_{\rm HL}$ is seen to have peaks in correspondence with the steps of $v_{\rm x}$ and is shifted downwards over the acceptor with respect to the donor. Despite the fact that $\Phi_{\rm HL}$ tends to zero as $R$ increases, the peaks of $F_{\rm HL}$ become sharper and higher, and the shift increases in size. The right panel in the same figure shows the function $G(x)=\int dx' f_{\rm x}(x,x',0)f(x')$, accessible from Eq. (\ref{muxcdeltan}), for the isolated acceptor when $N=2.0001$ (full line), as well as the discontinuity $g_{\rm x}(\vr,0)$ in the limit $N=2^+$ (dashed line). The potentials $v_{\rm x}$ are also shown (fade lines), calculated from the same ensembles, i.e., $N=2.0001$ and $N=2$. The function $G(x)$ has peaks whose positions follow the steps of $v_{\rm x}$ and whose height increases as $N$ approaches $2^+$. In the same limit, the difference between the value $G(\infty)$ and the value of $G$ in the central plateau-like region also increases, consistent with the fact that $\int dx \, G(x)f(x)$ has to remain finite (see Eq. (\ref{muxcdeltandeltan})). Eventually $G(x)$ turns into the discontinuity $g_{\rm x}(x,0)$, which diverges as $x\to \infty$, in agreement with our previous analysis. Notice that the CEDA has not been invoked here. If we compare $F_{\rm HL}$ and $G$ from the different panels we see a very similar structure. We therefore conclude that the peaked structure of the kernel in the donor-acceptor system is just the discontinuity of the ensemble EXX kernel. As discussed above, most of the CT excitation energy is already contained in the KS eigenvalue differences due to the step in $v_{\rm x}$. A more critical example is therefore the neutral system, studied below.
\subsection{Neutral system}
An example where the kernel needs to account for $\Delta_{\rm x}$ is the same system
\begin{figure}[t]
\includegraphics[width=8.5cm, clip=true]{ct.eps}\\
\caption{Same system as in Fig. \ref{ediag} but with six electrons. Left: AEXX kernel. Right: TDEXX kernel at $\omega_{\rm HL}$. Note that the potentials have been rescaled for better visibility.}
\label{ediag2}
\end{figure}
but with 6 electrons, i.e., a neutral system. The ground state has 2 electrons on the left atom (acceptor) and 4 electrons on the right atom (donor). In Fig. \ref{ediag2} we plot the EXX potential (fade line) for $R=12$ a.u., and a step-like structure between the atoms can be observed. However, as $R$ is increased the step reduces in size and eventually goes to zero. In the left panel we plot $F_{\rm HL}$ at $\omega=0$, i.e., in the adiabatic (AEXX) approximation, for different separations $R$. Again, we find a peak structure in the kernel between the donor and the acceptor, as well as a shift that increases exponentially with $R$. Compared to the previous case, the peaks are less pronounced but the step due to the plateau is much larger. Evaluating the kernel at the first KS CT excitation frequency increases the exponential growth of the step by around a factor of two (right panel). Thus, whereas the overall shape remains unaltered, the magnitude of the shift is strongly influenced by the frequency dependence. This fact plays a crucial role in the description of the CT excitation for this system. We notice here that even if the step in $v_{\rm xc}$ disappears in the dissociation limit, the step in $f_{\rm xc}$ remains. This is not a contradiction, as the discontinuity might show up only in the second derivative. Fig. \ref{excit} illustrates the behavior of the SP CT excitation energy as a function of $R$ for the system of Fig. \ref{ediag2} in four different approximations. In TDEXX with the correction $\langle {\rm HL}|f_{{\rm x}}(\omega_{\rm HL})|{\rm HL}\rangle$ we find that the divergence of the kernel over the acceptor exactly compensates for the decreasing overlap $\Phi_{\rm HL}$, thus yielding a finite value as $R\to \infty$ as well as the correct $1/R$ asymptotic behavior, in accordance with Eq. (\ref{asymp}).
In the adiabatic case we find instead that $\langle {\rm HL}|f_{{\rm x}}(0)|{\rm HL}\rangle$ tends to 0 as $R\to \infty$, although it reproduces the fully frequency-dependent result up to around $R=8$.
\begin{figure}[t]
\includegraphics[width=7.5cm, clip=true]{excitct.eps}\\
\caption{CT excitation energies as a function of separation $R$ in different approximations. }
\label{excit}
\end{figure}
We notice that even if the kernel is very large over the acceptor, it will not affect the excitations localized there, since any constant contribution vanishes owing to the fact that $\Phi_q$ integrates to zero. Thus only excitations involving a transfer of charge from one atom to the other are influenced by the discontinuity.
\section{Conclusions}
In conclusion, we have analyzed the discontinuity of the XC kernel of an ensemble with time-dependent particle numbers. In a combined system of two atoms we have seen that the divergence of the discontinuity as $r\to\infty$ can generate a kernel which diverges in the dissociation limit, thus compensating for the vanishing overlap of acceptor and donor orbitals. This feature is crucial for the description of CT excitations but may also be important whenever there are excitations for which the KS orbital overlap is too small to give a correction.
\section{Introduction}
Understanding Nature at its fundamental level is the eternal quest of particle physicists. The Standard Model (SM)\,\cite{Glashow:1961tr, Weinberg:1967tq, Salam:1968rm, Glashow:1970gm} of particle physics is one of the most successful theories that describe the fundamental building blocks and three of the fundamental forces of Nature in a single framework. It tells us about the spin-half fermions, quarks and leptons, that constitute the fundamental building blocks of Nature. In the quark sector, we have the up ($u$), down ($d$), charm ($c$), strange ($s$), top ($t$) and beauty ($b$) quarks, which are categorized in terms of three generations as shown below.
\begin{eqnarray}
\begin{array}{ccc}
\begin{pmatrix}
u\\
d
\end{pmatrix} &\begin{pmatrix}
c\\
s
\end{pmatrix} & \begin{pmatrix}
t\\
b
\end{pmatrix}\\
1^{\text{st}} & 2^{\text{nd}} & 3^{\text{rd}}
\end{array}
\end{eqnarray}
Similarly, we present the lepton family comprising electron ($e$), muon ($\mu$), tauon ($\tau$), electron type neutrino ($\nu_e$), muon type neutrino ($\nu_\mu$) and tauon type neutrino ($\nu_\tau$) in the following way,
\begin{eqnarray}
\begin{array}{ccc}
\begin{pmatrix}
e\\
\nu_e
\end{pmatrix} &\begin{pmatrix}
\mu\\
\nu_\mu
\end{pmatrix} & \begin{pmatrix}
\tau\\
\nu_\tau
\end{pmatrix}\\
1^{\text{st}} & 2^{\text{nd}} & 3^{\text{rd}}
\end{array}
\end{eqnarray}
It is to be noted that $e$, $\mu$ and $\tau$ carry electromagnetic charge $Q=-1$, whereas the neutrinos are charge neutral. On the other hand, all quarks carry fractional charges: for $u$, $c$ and $t$ the charge is $+2/3$, and for $d$, $s$ and $b$ it is $-1/3$. Besides, the SM contains the massless photon ($\gamma$), which mediates the electromagnetic (em) interaction; the massive gauge bosons $W^{+}$, $W^{-}$ and $Z^0$, which act as mediators of the weak force; and eight gluons, which mediate the strong interaction. All these mediators are spin-one particles. Both quarks and leptons participate in the weak interaction, whereas the strong interaction is experienced only by the quarks, and all particles except the neutrinos participate in the em interaction. In addition, there exists a spin-zero particle named the Higgs boson ($H$), which is often associated with the origin of the masses of the fundamental particles.
The symmetry group of the SM is considered to be $SU(3)_c \otimes SU(2)_L \otimes U (1)_Y $. The $SU(3)_c$ is associated with the strong interaction. The Glashow-Weinberg-Salam (GWS) model describes the unified form of the em and weak forces, known as the electroweak (EW) force, with the gauge group $SU(2)_L\otimes U(1)_Y$. The GWS model assumes that right-handed neutrinos are absent in Nature. The left-handed quarks and leptons transform as doublets\,($L$) under $SU(2)_L$, whereas their right-handed counterparts (except for the neutrinos) transform as singlets. The EW interaction is associated with three weak charges, $T^1$, $T^2$, $T^3$, and the weak hypercharge $Y$. The charges $T^3$ and $Y$ combine linearly to give the em charge, $Q=T^3+Y/2$. On incorporating the Higgs mechanism\,\cite{Englert:1964et, Higgs:1964ia, Higgs:1964pj} in the model, the symmetry of the Lagrangian is broken spontaneously and the fundamental particles, except the neutrinos, acquire masses. Along with the quarks and leptons, the gauge bosons $W^{\pm}$ and $Z^0$ acquire masses. After spontaneous symmetry breaking, the $\gamma$ remains massless: even though the gauge symmetry is broken, the $U(1)_{Q}$ symmetry is preserved. The success of the GWS model is confirmed by the discoveries of $W^{\pm}$\,\cite{Arnison:1983rp, Banner:1983jy} and $Z^0$\,\cite{Arnison:1983mk, Bagnaia:1983zx}, and finally of the Higgs boson\,\cite{Aad:2012tfa,Chatrchyan:2012ufa}.
The neutrinos within the Standard Model remain massless, because the GWS model does not allow right-handed neutrinos. Quantum mechanics says that if neutrinos are massive, then they must change their flavour from one type to another ($\nu_{e}\leftrightarrow \nu_\mu$, $\nu_\mu\leftrightarrow\nu_\tau$ and $\nu_e\leftrightarrow\nu_\tau$). This phenomenon, called neutrino oscillation\,\cite{Bilenky:1976yj, Bilenky:1978nj, Bilenky:1987ty}, has been confirmed by experiments\,\cite{Davis:1968cp, Ahmad:2001an, Fukuda:2001nj, Ahmad:2002jz, Bionta:1987qt, Fukuda:1998mi, Eguchi:2002dm, An:2012eh, Ahn:2002up, Abe:2011sj, Abe:2013hdq, Hirata:1987hu}, and thus the neutrinos are massive for certain. This observation goes against the prediction of the SM or GWS model. Experiments so far have not encountered any trace of right-handed neutrinos. Hence, the GWS model might not be wrong in excluding right-handed Dirac neutrinos. In order to resolve this conjecture, theorists associate neutrinos with a Majorana nature rather than a Dirac one, and this hypothesis takes us beyond the SM. The Majorana nature is associated with the indistinguishability of particle and antiparticle states for neutral fermions like neutrinos\,\cite{Majorana:1937vz,Furry:1939qr}. The origin of the neutrino mass is associated with the seesaw mechanism\,\cite{Minkowski:1977sc,GellMann:1980vs, Mohapatra:1979ia, Weinberg:1979sa, Xing:2009in, Konetschny:1977bn,Magg:1980ut, Schechter:1980gr, Cheng:1980qt, Lazarides:1980nt, Mohapatra:1980yp, Foot:1988aq, Ma:1998dn}, which in turn depends solely on the Majorana nature of the neutrinos. It is hypothesised that the seesaw mechanism operates at a very high energy scale ($E\sim 10^{14}$ GeV) and is associated with heavy Majorana neutrinos. The Majorana behaviour of neutrinos still awaits experimental evidence.
Apart from this, the SM or GWS model is silent on the existence of dark matter (DM)\,\cite{Rubin:1970gg, Rubin:1980hj, Bertone:2005bj, Ma:2006km}, a kind of matter which does not interact with em radiation. DM constitutes about $27\,\%$ of the content of the Universe, whereas ordinary matter contributes only about $5\,\%$.
Both of these problems, related to neutrinos and DM, open new pathways to think beyond the SM. Instead of going beyond the SM, one alternative strategy is to review its primary axioms. The triumph of the SM is beyond doubt, and hence the fundamental framework of the GWS model should not be tampered with too much. In the present work, we shall try to modify the GWS model without deviating much from its original philosophy. For the sake of argument, if future experiments confirm that there exists no right-handed Dirac neutrino and, at the same time, that neutrinos are not Majorana particles either, then explaining the origin of neutrino mass will become difficult. We shall try to find this answer by looking into the possible coexistence of hidden matter with the SM particles and the presence of hidden forces of Nature alongside the EW interaction. To keep the discussion simple, we shall consider the leptons only.
\section{Postulates of the Extended GWS model }
We draw motivation from the Preon and Rishon models\,\cite{Fritzsch:1981zh, Harari:1980ez, Pati:1974yy, Dugne:1999ez} of particle physics, which emphasize the composite nature of quarks and leptons. However, most of these models were ruled out after the discovery of the Higgs boson. In the present model, we assume that neutrinos are Dirac particles, that there exists no right-handed neutrino, and that the neutrino masses appear because of their composite nature. If the neutrinos are composite particles, then the charged leptons cannot remain untouched by this fact. But unlike the other composite models, we emphasize here that not all generations of the leptons need have a substructure. We introduce a few hidden particles in the extended GWS model that do not directly take part in the EW interaction, but interact via some hidden forces. We posit the axioms of the model as shown below.
\begin{enumerate}
\item[(a)]There are two fundamental complex hidden scalar particles, \emph{Dakshina}\,($\chi$) and \emph{Vama}\,($\xi$), and two fundamental fermions, the electron ($e$) and the electron-type neutrino ($\nu_e$), present within the SM.
\item[(b)]These fundamental particles are associated with a hidden charge ($Q_h$), and bound states are formed such that the net hidden charge is zero.
\item[(c)]There are two kinds of hidden forces, \emph{Shupti}\,(\emph{Sh}) and \emph{Sushupti}\,(\emph{Ssh}), in addition to the electromagnetic and weak forces.
\item [(d)]The fundamental particles: $e$, $\chi$ and $\xi$ get masses through the Higgs mechanism and $\nu_e$ remains massless.
\end{enumerate}
(The words \emph{Dakshina}, \emph{Vama}, \emph{Shupti} and \emph{Sushupti} are derived from Sanskrit which means right, left, state of sleep and state of deep sleep respectively.)
In this model, only the first generation of the leptons are fundamental particles. The $e$ and $\nu_e$ are spin-doublet ($2_s$) states, and $\chi$ and $\xi$ are spin-singlet ($1_s$) states. Hence, the bound state of a fundamental fermion and a hidden singlet is a spin-doublet state, $2_s\otimes 1_s = 2_s$. Thus, we obtain the other two generations of the leptons as shown below.
\begin{eqnarray}
\begin{pmatrix}
e\\
\nu_e
\end{pmatrix},\quad \begin{pmatrix}
e\,\chi\\
\nu_e \,\chi
\end{pmatrix},\quad \begin{pmatrix}
e\,\xi\\
\nu_e\,\xi
\end{pmatrix}.
\end{eqnarray}
We identify the bound states in the following way,
\begin{eqnarray}
\mu = (e\chi), \quad\nu_{\mu}= (\nu_e\,\chi),\quad \tau = (e\,\xi), \quad\nu_{\tau}= (\nu_e\,\xi).
\end{eqnarray}
Thus, we see that $e$ and $\mu$ (or $\tau$) cannot be treated on the same footing, and the same holds for $\nu_{e}$ and $\nu_{\mu}$ (or $\nu_\tau$). In addition, we see that all three generations now share the same lepton number, $L_e$, and the necessity of individual lepton numbers vanishes. These features go against the primary axioms of the SM.
The particles $\mu$ and $\tau$ are hidden-charge neutral and carry electromagnetic charge $-1$, whereas $\nu_\mu$ and $\nu_\tau$ are neutral with respect to both the hidden and electromagnetic charges. Hence the fundamental fermions and the hidden scalars must carry equal and opposite hidden charges. We assign the hidden charges to the particles as shown below.
\begin{eqnarray}
Q_{h}(e,\nu_e)= -1,\quad Q_h(\chi,\xi)=+1.
\end{eqnarray}
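These assignments can be checked with a line of bookkeeping. The sketch below (our own illustration) verifies that each bound state built from one fundamental fermion and one hidden scalar carries zero net hidden charge, as required by the second postulate:

```python
# Hidden charges from the assignments above: fundamental fermions carry
# Q_h = -1, hidden scalars carry Q_h = +1.
Q_h = {"e": -1, "nu_e": -1, "chi": +1, "xi": +1}

bound_states = {
    "mu":     ("e", "chi"),
    "nu_mu":  ("nu_e", "chi"),
    "tau":    ("e", "xi"),
    "nu_tau": ("nu_e", "xi"),
}

# Every bound state of one fermion and one hidden scalar is Q_h-neutral.
neutral = {name: sum(Q_h[c] for c in cs) for name, cs in bound_states.items()}
print(neutral)
```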
The \emph{Shupti} interaction is responsible for forming the bound states, whereas \emph{Sushupti} is experienced only by the hidden scalars $\chi$ and $\xi$. We consider that \emph{Sushupti} converts $\chi$ to $\xi$ and vice versa.
\section{The extended GWS model}
We consider that the \emph{Sh} and \emph{Ssh} interactions are associated with the gauge groups $U(1)_{h}$ and $SU(2)_h$ respectively. The $U(1)_h$ is generated by $Q_{h}$, whereas the hidden isospin charges $T_{h}^{i}=\tau^{i}/2$ generate $SU(2)_h$, where $i=1,2,3$ and the $\tau^i$ are the Pauli matrices. So, in the light of the hidden interactions, the Standard Model gauge group is extended to $SU(2)_L\otimes U(1)_{Y}\otimes U(1)_h\otimes SU(2)_h$. The charges associated with the particles are listed in Table\,\ref{table1}.
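As a quick consistency check (our own illustration), the generators $T_h^i=\tau^i/2$ indeed satisfy the $SU(2)$ algebra $[T^i,T^j]=i\,\epsilon^{ijk}T^k$:

```python
import numpy as np

# Pauli matrices; the hidden isospin generators are T^i = tau^i / 2.
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [t / 2.0 for t in tau]

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

# SU(2) algebra: [T^1, T^2] = i T^3 (and cyclic permutations).
print(np.allclose(comm(T[0], T[1]), 1j * T[2]))
```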
We choose the left-handed fermions,
\begin{eqnarray}
L=\begin{pmatrix}
\nu_{eL}\\
e_L
\end{pmatrix}
\end{eqnarray}
as doublet under $SU(2)_L$ and singlet under $SU(2)_h$. It transforms under $U(1)_{Y}$ and $U(1)_{h}$. The right handed electron $e_R$ is chosen as singlet under both $SU(2)_L$ and $SU(2)_h$ and it transforms under both $U(1)_{Y}$ and $U(1)_{h}$.
\begin{eqnarray}
L &\overset{U(1)_Y}{\longrightarrow}& e^{-i\,\frac{Y}{2}\alpha(x)}\,L = e^{\frac{+i}{2}\alpha(x)}\,L,\nonumber\\
L &\overset{U(1)_h}{\longrightarrow}& e^{-i\,Q_h\beta(x)}\,L = e^{+i\beta(x)}\,L,\nonumber\\
L &\overset{SU(2)_L}{\longrightarrow}& e^{-i\,\frac{\tau^i}{2}\theta^{i}(x)}\,L,\nonumber\\
R &\overset{U(1)_Y}{\longrightarrow}& e^{-i\,\frac{Y}{2}\alpha(x)}\,R = e^{+i\alpha(x)}\,R,\nonumber\\
R &\overset{U(1)_h}{\longrightarrow}& e^{-i\,Q_h\beta(x)}\,R= e^{+i\beta(x)}\,R,\nonumber\\
L &\overset{SU(2)_h}{\longrightarrow}& L'=L,\quad R \overset{SU(2)_L}{\longrightarrow} R'=R,\nonumber\\
R &\overset{SU(2)_h}{\longrightarrow}& R'=R.\nonumber
\end{eqnarray}
The hidden scalar fields $\chi$ and $\xi$ form a doublet under $SU(2)_h$,
\begin{eqnarray}
\phi_h= \begin{pmatrix}
\chi\\
\xi
\end{pmatrix}.
\end{eqnarray}
It also transforms under $U(1)_h$, and it is a singlet under both $U(1)_{Y}$ and $SU(2)_L$.
\begin{eqnarray}
\phi_h &\overset{U(1)_h}{\longrightarrow}& e^{-i\,Q_h\beta(x)}\,\phi_h = e^{-i\beta(x)}\,\phi_h,\nonumber\\
\phi_h &\overset{SU(2)_h}{\longrightarrow}& e^{-i\,\frac{\tau^i}{2}\kappa^{i}(x)}\,\phi_h,\nonumber\\
\phi_h &\overset{U(1)_Y}{\longrightarrow}& \phi'_h=\phi_h,\quad \phi_h \overset{SU(2)_L}{\longrightarrow} \phi'_h=\phi_h\nonumber
\end{eqnarray}
The complex scalar field,
\begin{eqnarray}
\Phi=\begin{pmatrix}
\phi^+\\
\phi^0
\end{pmatrix},
\end{eqnarray}
is a doublet under $SU(2)_L$. It transforms under $U(1)_Y$, but is a singlet under both $U(1)_h$ and $SU(2)_h$.
\begin{eqnarray}
\Phi &\overset{U(1)_Y}{\longrightarrow}& e^{-i\,\frac{Y}{2}\alpha(x)}\,\Phi = e^{-\frac{i}{2}\alpha(x)}\,\Phi,\nonumber\\
\Phi &\overset{SU(2)_L}{\longrightarrow}& e^{-i\,\frac{\tau^i}{2}\theta^{i}(x)}\,\Phi,\nonumber\\
\Phi &\overset{U(1)_h}{\longrightarrow}& \Phi'=\Phi,\quad \Phi \overset{SU(2)_h}{\longrightarrow} \Phi'=\Phi\nonumber
\end{eqnarray}
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
{}& $Q_{em}$ & $Y$& $T^3$&$Q_h$ & $T^3_h$\\
\hline
\hline
$\nu_{eL}$ & $0$ & $-1$ & $+\frac{1}{2}$ & $-1$ & $0$\\
\hline
$e_{L}$ & $-1$ & $-1$ & $-\frac{1}{2}$ & $-1$ & $0$\\
\hline
$e_{R}$ & $-1$ & $-2$ & $0$ & $-1$ & $0$\\
\hline
$\chi$ & $0$ & $0$ & $0$ & $+1$ & $+\frac{1}{2}$\\
\hline
$\xi$ & $0$ & $0$ & $0$ & $+1$ & $-\frac{1}{2}$\\
\hline
$\phi^+$ & $+1$ & $+1$ & $+\frac{1}{2}$ & $0$ & $0$\\
\hline
$\phi^0$ & $0$ & $+1$ & $-\frac{1}{2}$ & $0$ & $0$\\
\hline
\end{tabular}
\caption{\label{table1} \footnotesize The charges associated with different fields for $SU(2)_L\otimes U(1)_{Y}\otimes U(1)_h\otimes SU(2)_h$ theory are shown.}
\end{table}
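The assignments of Table \ref{table1} can be verified against the relation $Q=T^3+Y/2$ quoted in the Introduction; a minimal check (our own illustration):

```python
# Field charges as listed in Table 1: (Q_em, Y, T^3, Q_h, T^3_h).
fields = {
    "nu_eL": ( 0, -1, +0.5, -1,  0.0),
    "e_L":   (-1, -1, -0.5, -1,  0.0),
    "e_R":   (-1, -2,  0.0, -1,  0.0),
    "chi":   ( 0,  0,  0.0, +1, +0.5),
    "xi":    ( 0,  0,  0.0, +1, -0.5),
    "phi+":  (+1, +1, +0.5,  0,  0.0),
    "phi0":  ( 0, +1, -0.5,  0,  0.0),
}

# The electromagnetic charge follows from Q = T^3 + Y/2 for every field.
for name, (Q, Y, T3, Qh, T3h) in fields.items():
    print(name, Q == T3 + Y / 2)
```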
The above transformation rules say that $L$ and $R$ participate in both the EW and hidden interactions, whereas $\Phi$ and $\phi_h$ participate only in the EW interaction and the hidden interaction, respectively. The covariant derivative of this theory is shown below.
\begin{eqnarray}
D_\mu &= &\partial_{\mu} -i\, g \frac{\tau^i}{2}W_{\mu}^i - i\, g'\frac{Y}{2} B_{\mu}- i\, g_h \frac{\tau^i}{2}G_{\mu}^i\nonumber\\
&&-i\, g'_h Q_{h} C_{\mu},
\end{eqnarray}
where $B_\mu$ and $W_{\mu}^i$ represent the gauge bosons associated with the EW interaction, while $C_\mu$ and $G_{\mu}^i$ are associated with the hidden interactions.
We present the Lagrangian($\mathcal{L}$) describing the ``EW $+$ Hidden'' force under $SU(2)_L\otimes U(1)_Y\otimes SU(2)_h\otimes U(1)_h$ gauge symmetry as in the following,
\begin{eqnarray}
\mathcal{L}= \mathcal{L}_F + \mathcal{L}_H +\mathcal{L}_S +\mathcal{L}_G +\mathcal{L}_Y,
\end{eqnarray}
where,
\begin{eqnarray}
\mathcal{L}_F &=& i\,\bar{L}\gamma^\mu(\partial_\mu-i\,g \frac{\tau^i}{2}W_{\mu}^i + i\, \frac{g'}{2} B_\mu)L + i\,\bar{e}_R \gamma^{\mu} (\partial_\mu +\nonumber\\
&&i\,g'B_{\mu})e_R - g'_h(\bar{L}\gamma^\mu L +\bar{e}_R \gamma^\mu e_{R})C_\mu,\nonumber\\
\mathcal{L}_H &=& \lbrace(\partial^\mu -i\,g_h \frac{\tau^i}{2}G^{\mu^{i}}-i\,g'_h C^{\mu})\phi_h \rbrace^{\dagger} \lbrace(\partial_\mu -i\,g_h \frac{\tau^i}{2}G_{\mu}^{i}\nonumber\\
&&-i\,g'_h C_{\mu})\phi_h \rbrace,\nonumber\\
\mathcal{L}_S &=& \lbrace(\partial^\mu -i\,g \frac{\tau^i}{2}W^{\mu^{i}}-i\,\frac{g'}{2} B^{\mu})\Phi \rbrace^{\dagger} \lbrace(\partial_\mu -i\,g \frac{\tau^i}{2}W_{\mu}^{i}\nonumber\\
&&-i\,\frac{g'}{2} B_{\mu})\Phi \rbrace + \mu^2 (\Phi^\dagger\Phi)-\lambda(\Phi^\dagger \Phi)^2,\nonumber\\
\mathcal{L}_G &=&-\frac{1}{4}B_{\mu\nu}B^{\mu\nu}-\frac{1}{4}W^{i}_{\mu\nu}W^{{\mu\nu}^i}-\frac{1}{4}C_{\mu\nu}C^{\mu\nu}-\nonumber\\
&&\frac{1}{4}G^{i}_{\mu\nu}G^{{\mu\nu}^i},\nonumber
\end{eqnarray}
where $\mu^2>0$ and $\lambda>0$. The $B_{\mu\nu}$, $W_{\mu\nu}^i$, $C_{\mu\nu}$ and $G_{\mu\nu}^i$ represent the field tensors of $B_{\mu}$, $W_{\mu}^i$, $C_\mu$ and $G_{\mu}^i$, respectively. The $\mathcal{L}_Y$ is the Yukawa term, presented below.
\begin{eqnarray}
\mathcal{L}_Y &=& -y_e (\bar{L} \Phi e_R)-y_{\chi}^2 (\phi_h ^{\dagger} \Phi) (\Phi^{\dagger}\phi_h)-y_{\xi}^2 (\phi_h ^{\dagger} \tilde{\Phi}) (\tilde{\Phi}^{\dagger}\phi_h)\nonumber\\
&&+ H.C.
\end{eqnarray}
Here $\tilde{\Phi}=i\,\tau^2\,\Phi^{*}$ denotes, as usual, the charge-conjugate scalar doublet.
The $SU(2)_L$ scalar field doublet, $\Phi$ is parametrized in the following way,
\begin{equation}
\Phi=\frac{1}{\sqrt{2}}e^{-i\frac{\tau^i\rho^{i}(x)}{2v}}\begin{pmatrix}
0\\
v+H
\end{pmatrix},
\end{equation}
where $v$ is the vacuum expectation value (vev), $v=\sqrt{\mu^2/\lambda}$, and the $\rho^i(x)$ are the ``would-be Goldstone bosons''. We take the unitary gauge, which involves a gauge transformation by the $SU(2)_L$ element $U(\rho^i)=e^{i\frac{\tau^i\rho^{i}(x)}{2v}}$. This prevents the Goldstone bosons from appearing in the final equations of motion, and $\Phi$ becomes real,
\begin{equation}
\Phi=\frac{1}{\sqrt{2}}\begin{pmatrix}
0\\
v+H
\end{pmatrix},
\end{equation}
where $H$ is the Higgs field. The unitary gauge changes $L\rightarrow L'$ and $W_{\mu}^i\rightarrow W_{\mu}^{i'}$, while $B_\mu$ and $e_R$ do not change. (Note that in the subsequent discussion, in order to keep the expressions simple, we shall write the primed fields as the original ones.) The hidden scalar field $\phi_h$, being an $SU(2)_L$ singlet, remains unaffected. Similarly, the unitary gauge has no effect on the hidden gauge fields $C_\mu$ and $G_{\mu}^i$. The Higgs mechanism breaks the $SU(2)_L\otimes U(1)_Y\otimes SU(2)_h\otimes U(1)_h$ gauge symmetry down to $U(1)_{em}\otimes U(1)_h$. All the hidden gauge bosons, along with the physical photon field $A_{\mu}$, remain massless. The gauge bosons $G_\mu^i$ remain massless after the Higgs mechanism, but we expect $\chi$ and $\xi$ to acquire unequal masses; therefore, the $SU(2)_h$ symmetry is not preserved. We may split the Lagrangian into two parts: the first part is the GWS framework, $\mathcal{L}_{GWS}$, which describes the dynamics of the electron, the electron-type neutrino and the Higgs boson, and how they interact via the electromagnetic and weak interactions; the second part, $\mathcal{L}_{hidden}$, describes the hidden scalar particles \emph{Dakshina} and \emph{Vama}, and how these particles, along with $e$ and $\nu_e$, take part in the hidden interactions \emph{Shupti} and \emph{Sushupti}. It also tells us how \emph{Dakshina} and \emph{Vama} interact with the Higgs boson. We present the Lagrangian as shown below.
\begin{eqnarray}
\mathcal{L}_{U(1)_{Q}\otimes U(1)_h} &=& \mathcal{L}_{GWS} + \mathcal{L}_{hidden}.
\end{eqnarray}
where,
\begin{eqnarray}
\mathcal{L}_{GWS} &=& i\,\bar{e}\gamma^{\mu}\partial_\mu e +i\,\bar{\nu}_{eL}\gamma^\mu \partial_\mu \nu_{eL} -\frac{y_{e} v}{\sqrt{2}} \bar{e}\,e\nonumber\\
&& + \frac{g}{\sqrt{2}}\left( \bar{\nu}_{eL}\,\gamma^{\mu}\,e_L\, W_{\mu}^+ +\bar{e}_{L}\,\gamma^{\mu}\,\nu_{eL}\, W_{\mu}^{-}\right)\nonumber\\
&& + \frac{g}{2}\,\left(\bar{\nu}_{eL}\,\gamma^\mu\,\nu_{eL}-\bar{e}_L\,\gamma^\mu\,e_L\right)\left( Z_\mu\cos\theta_W \right.\nonumber\\
&& \left.+ A_\mu \sin\theta_W\right)\nonumber\\
&&- \frac{g'}{2}\,\left(\bar{\nu}_{eL}\,\gamma^\mu\,\nu_{eL}+\bar{e}_L\,\gamma^\mu\,e_L + 2 \bar{e}_R\,\gamma^\mu\,e_R \right)\nonumber\\
&&\left( A_\mu\cos\theta_W - Z_\mu \sin\theta_W\right)\nonumber\\
&&+\frac{g^2 v^2}{4}W_{\mu}^+ W^{\mu^-}+ \frac{v^2}{8}\,(g^2+g'^2)Z^{\mu} Z_{\mu}\nonumber\\
&&+\,\partial_\mu H\,\partial^\mu H - \mu^2\,H^2-\lambda\,v\,H^3-\frac{\lambda}{4}\,H^4\nonumber\\
&& + \frac{g^2}{8}(H^2 + 2\,H v)\left(\frac{1}{\cos^2\theta_{W}}Z^{\mu} Z_{\mu}\right.\nonumber\\
&&\left.+ 2 W_{\mu}^+ W^{\mu^-}\right)-\frac{y_e}{\sqrt{2}}\,H \bar{e}e
\end{eqnarray}
where, $W_\mu^{+}=\frac{1}{\sqrt{2}}\,(W_\mu^1-i\,W_\mu^2)$, $W^-_\mu=\frac{1}{\sqrt{2}}\,(W_\mu^1+i\,W_\mu^2)$ and, $\theta_{W}=\tan^{-1}(g'/g)$ and,
\begin{eqnarray}
\mathcal{L}_{hidden}&=& \partial ^\mu\,\chi^* \partial_\mu \chi- y_{\chi}^2 v^2\,\chi^*\chi +\partial ^\mu \xi^*\, \partial_\mu \xi- y_{\xi}^2v^2\,\xi^*\xi\nonumber\\
&& + \frac{g_h}{2}\, \lbrace i\, (\chi^*\, \partial^\mu \chi-\chi\,\partial^\mu \chi^*)- i\,(\xi^* \,\partial^\mu \xi\nonumber\\
&& - \xi\,\partial^\mu \xi^*)\rbrace\,G_{\mu}^3\nonumber \\
&& + \frac{g_{h}}{2}\, i\,(\xi^*\,\partial^\mu \chi-\chi\, \partial^\mu \xi^*)\,G^*_\mu \nonumber\\
&&+ \frac{g_{h}}{2}\, i\,(\chi^*\,\partial^\mu \xi-\xi\,\partial^\mu \chi^*)\,G_\mu \nonumber\\
&& +\, g'_h\,\lbrace i\,(\chi^*\, \partial^\mu \chi-\chi\,\partial^\mu \chi^*) + i\,(\xi^*\, \partial^\mu \xi-\xi\,\partial^\mu \xi^*)\nonumber\\
&&-(\bar{e}\gamma^\mu e +\bar{\nu}_{eL} \gamma^{\mu}\nu_{eL})\rbrace\,C_{\mu} \nonumber\\
&& +\left( \frac{g_h}{2}\,(\chi^*\,G^{\mu^3}+G^{\mu^*}\,\xi^*)+g'_h \chi^*\,C^\mu \right)\nonumber\\
&&\left( \frac{g_h}{2}\,(\chi\,G^{^3}_\mu + G_\mu\,\xi)+g'_h \chi\,C_\mu \right)\nonumber\\
&&+\left( \frac{g_h}{2}\,(\xi^*\,G^{\mu^3}-G^\mu\,\chi^*)-g'_h \xi^*\,C^\mu \right)\nonumber\\
&&\left( \frac{g_h}{2}\,(\xi\,G^{^3}_\mu + G^*_\mu\,\chi)-g'_h \xi\,C_\mu \right)\nonumber\\
&&-(H^2+2\,v\,H)(y_{\chi}^2\,\chi^*\chi +y_{\xi}^2\,\xi^*\xi),
\end{eqnarray}
where, $G_\mu=(G_\mu^1-i\,G_\mu^2)$, and $G^*_\mu=(G_\mu^1+i\,G_\mu^2)$.
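Reading off the quadratic terms of $\mathcal{L}_{GWS}$ gives the tree-level masses $M_W=gv/2$ and $M_Z=v\sqrt{g^2+g'^2}/2$, which obey $M_W=M_Z\cos\theta_W$. A small numerical sketch (illustrative coupling values of our own choosing, not fits to data):

```python
import math

# Illustrative values only: vev in GeV, dimensionless couplings.
v, g, gp = 246.0, 0.65, 0.36

# Quadratic terms of L_GWS:
#   (g^2 v^2 / 4) W+ W-        ->  M_W = g v / 2
#   (v^2 / 8)(g^2 + g'^2) Z Z  ->  M_Z = v sqrt(g^2 + g'^2) / 2
M_W = g * v / 2.0
M_Z = v * math.hypot(g, gp) / 2.0
theta_W = math.atan2(gp, g)  # weak mixing angle, tan(theta_W) = g'/g

# Tree-level relation M_W = M_Z cos(theta_W) holds identically.
print(M_W, M_Z, M_W - M_Z * math.cos(theta_W))
```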
The \emph{Shupti} interaction is associated with the vector field $C_\mu$, which will be referred to as the ``hidden photon'' ($\gamma_h$) in the subsequent discussion. Similarly, \emph{Sushupti} is mediated by the hidden vector bosons $G_{\mu}$, $G_{\mu}^*$ and $G_\mu^3$, which are represented as $G$, $G^*$ and $G^3$ respectively. We discuss some important points of the model below.
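From the mass terms $y_\chi^2v^2\,\chi^*\chi$ and $y_\xi^2v^2\,\xi^*\xi$ in $\mathcal{L}_{hidden}$ one reads off $M_\chi=y_\chi v$ and $M_\xi=y_\xi v$. As a rough illustration only (under our own assumption, not asserted by the model, that binding energies are negligible so that $M_\chi\sim m_\mu-m_e$ and $M_\xi\sim m_\tau-m_e$), one can estimate the Yukawa couplings from the charged-lepton masses:

```python
# Hypothetical estimate: assume the bound-state binding energies are
# negligible, so that M_chi ~ m_mu - m_e and M_xi ~ m_tau - m_e.
v = 246.0                                    # GeV, electroweak vev
m_e, m_mu, m_tau = 0.000511, 0.1057, 1.777   # GeV, charged-lepton masses

M_chi = m_mu - m_e     # mass of Dakshina under the above assumption
M_xi = m_tau - m_e     # mass of Vama under the above assumption

y_chi = M_chi / v      # from M_chi = y_chi * v
y_xi = M_xi / v        # from M_xi  = y_xi  * v

# Consistent with the ordering m_tau > m_mu  =>  y_xi > y_chi.
print(y_chi, y_xi)
```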
\begin{enumerate}
\item[(a)]We see that the hidden photon field ($C_\mu$), like the physical photon field ($A_\mu$), is massless; but unlike the latter, the former may interact with electrically neutral particles. We get the following primary vertices for the \emph{Shupti} interaction (see Fig.\,(\ref{shupti})),
\begin{eqnarray}
e^{-}&\rightarrow & e^- +\gamma_h,\nonumber\\
\nu_{e}&\rightarrow &\nu_{e} +\gamma_h,\\
\chi &\rightarrow & \chi +\gamma_h,\\
\xi &\rightarrow & \xi + \gamma_h.
\end{eqnarray}
\begin{figure}
\includegraphics[scale=0.2]{Shupti}
\caption{\label{shupti}\scriptsize The primary vertices for \emph{Shupti} interaction.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.2]{Sushupti}
\caption{\label{sushupti}\scriptsize The primary vertices for \emph{Sushupti} interaction.}
\end{figure}
\item[(b)]The \emph{Dakshina}\,($\chi$) and \emph{Vama}\,($\xi$) carry only the hidden charge and hidden isospin charges. These fields take part neither in the electromagnetic nor in the weak interaction, and hence they are undetectable. So, these particles may be considered as candidates for the dark matter\,\cite{Matos:1999et, Boehm:2003hm, Boehm:2020wbt}. Needless to mention, $e$ and $\nu_e$ take part in the hidden interactions in addition to the electromagnetic and weak interactions.
\item [(c)] In this model, the electron neutrino ($\nu_e$) is massless. However, the other two neutrinos, $\nu_{\mu}$ and $\nu_{\tau}$, acquire masses because they are bound states. The $\nu_e$ interacts with $\chi$ and $\xi$ through the exchange of $\gamma_h$. The masses of $\chi$ and $\xi$ are identified as $M_{\chi}=y_{\chi}v$ and $M_{\xi}=y_{\xi}v$ respectively. As $m_e \ll m_\mu,\,m_\tau$, the masses $M_{\chi}$ and $M_{\xi}$ must be quite high, and since $m_\tau > m_\mu$ we have $y_\xi > y_\chi$. In addition, we realize that $m_{\nu_\mu},\,m_{\nu_{\tau}}\ll m_{\mu},\,m_{\tau}$. So we understand that the bound state of $\chi$ (or $\xi$) with the massive $e$ and that of $\chi$ (or $\xi$) with the massless $\nu_e$ do not share the same properties. It appears that the \emph{Sh} interaction does not treat massive and massless particles equally in the context of composite states. This requires a thorough analysis and a detailed investigation, which is beyond the scope of our present work.
\item[(d)] According to the second postulate, in addition to bound states like $\mu$ and $\tau$, we may get DM states like $\chi\bar{\chi}$, $\xi\bar{\xi}$, $\chi\bar{\xi}$ and $\bar{\chi}\xi$. These states are unobservable. However, the present model does not deny the existence of dileptonic states like $e\bar{\nu}_e$, $\bar{e}\nu_e$ and $\nu_e\bar{\nu}_e$. We expect that upcoming experiments will be able to witness these particles.
\begin{figure}
\includegraphics[scale=0.25]{muondecay}
\caption{\label{muondecay}\scriptsize The muon decay process is depicted in the light of present model. In the above diagram, time flows in the upward direction. The muon is a bound state of $e$ and $\chi$. The electron decays to $\nu_{e}$ via emission of $W^-$. The $\nu_e$ and $\chi$ bind together to form $\nu_{\mu}$. The $W^{-}$ decays to $e$ and $\bar{\nu}_e$.}
\end{figure}
\item[(e)] We see that similar to the \emph{Sh} interaction, the \emph{Ssh} force is associated with `Hidden Isospin raising' (HIRC), `Hidden Isospin lowering' (HILC) and `Hidden neutral Isospin' (HNIC) currents. The corresponding primary vertices are shown below.
\begin{eqnarray}
\xi &\rightarrow & \chi+ G, \hspace{0.2 cm} (HIRC)\\
\chi &\rightarrow & \xi+ G^*, \hspace{0.2 cm} (HILC)\\
\xi\,(\chi) &\rightarrow & \xi\,(\chi)+ G^3, \hspace{0.2 cm} (HNIC)
\end{eqnarray}
The Feynman diagrams are depicted in Fig.\,(\ref{sushupti}).
\item[(f)] We highlight that, in the light of the present model, it is possible to explain the decay processes of the leptons. As an example, we choose the $\mu$ decay. We know that in the SM,
\begin{eqnarray}
\mu\rightarrow e + \nu_\mu + \bar{\nu}_e .
\end{eqnarray}
This process is explained in the light of the composite nature of the $\mu^-$ as follows: the $e$ present in the $\mu$ decays to $\nu_e$ and $W^-$; the $\chi$ sticks to the $\nu_e$ and forms $\nu_\mu$; and the $W^-$ decays to $e$ and $\bar{\nu}_e$ (see Fig.\,(\ref{muondecay})). The process is depicted below,
\begin{eqnarray}
\mu\,(e\,\chi)\rightarrow \nu_{\mu}\,(\nu_e\,\chi) +\, W^-;\quad W^-\rightarrow e + \bar{\nu}_e\nonumber
\end{eqnarray}
\item[(g)] In the light of the present model, we can explain the neutrino flavour oscillation in terms of the composite nature of the neutrinos and their decay to hidden scalar particles. We explain this process by assuming that the hidden scalar fields $\chi$ and $\xi$ are present as a uniform background. The electron neutrino ($\nu_e$), while it travels, may get bound to a $\chi$ (or $\xi$) because of the \emph{Shupti} interaction, and accordingly a $\nu_\mu$ (or $\nu_{\tau}$) state is formed. For an observer, $\nu_e$ converts to $\nu_{\mu}$ (or $\nu_{\tau}$); similarly, when this bound state is broken, the $\chi$ (or $\xi$) is released and one observes a conversion from $\nu_{\mu}$ (or $\nu_\tau$) to $\nu_e$.
\begin{eqnarray}
\nu_e + \chi (\xi)\rightarrow \nu_{\mu}\,(\nu_\tau)\rightarrow \nu_e + \chi(\xi).
\end{eqnarray}
The conversion of $\nu_{\mu}$ to $\nu_{\tau}$ and that from $\nu_\tau$ to $\nu_{\mu}$ can be understood in terms of the Hidden Isospin raising and lowering currents (HIRC and HILC), and these are depicted in Fig.\,(\ref{neutrinooscillation}). To illustrate,
\begin{eqnarray}
\nu_{\mu}&\rightarrow & \nu_{\tau} +G^*,\quad G^* \rightarrow \chi + \bar{\xi}.
\end{eqnarray}
and,
\begin{eqnarray}
\nu_{\tau}&\rightarrow & \nu_{\mu} + G ,\quad G \rightarrow \bar{\chi} + \xi.
\end{eqnarray}
The detectors are unable to detect particles such as $\chi$ or $\xi$, whether they appear in the initial state, the final state, or both. Hence, the discrepancies observed in the neutrino oscillation experiments are owing to the presence of the above-said hidden particles and not to a fourth, sterile neutrino state\,\cite{Dodelson:2005tp, TheIceCube:2016oqi, Aartsen:2020iky, Aartsen:2020fwb}.
\end{enumerate}
\begin{figure}
\includegraphics[scale=0.25]{neutrinooscillation}
\caption{\label{neutrinooscillation}\scriptsize The neutrino flavour oscillation between $\nu_{\mu}$ and $\nu_{\tau}$ is described in the light of the present model. In the above Feynman diagrams, time flows in the upward direction. The $\nu_{\mu}$ is composed of $\nu_e$ and $\chi$. The $\chi$ decays to $\xi$ via emission of $G^*$, and the $\xi$ binds to the $\nu_e$ to form $\nu_{\tau}$. The $G^*$ decays to $\chi$ and $\bar{\xi}$. The hidden scalar particles are unobservable and hence it appears that $\nu_{\mu}$ is converted to $\nu_{\tau}$. Similarly, $\nu_{\tau}$ is converted to $\nu_{\mu}$ via the emission of $G$.}
\end{figure}
\section{Summary and Discussion}
The present model, unlike other composite frameworks, does not insist on all the leptons having substructures. The present framework contains the essence of the GWS model within itself and also allows the hidden matter fields, \emph{Vama} and \emph{Dakshina}, to reside within the framework of the SM. The model posits $e$ and $\nu_e$ as fundamental particles and the rest as composite states. Through the Higgs mechanism, both the electron and the hidden matter fields obtain masses. The hidden matter fields do not interact via the EW force; rather, they experience two new forces: \emph{Shupti} and \emph{Sushupti}. On the other hand, $e$ and $\nu_e$ feel both the EW and \emph{Shupti} forces. The gauge group that describes the EW and hidden interactions is $SU(2)_L\otimes U(1)_Y\otimes SU(2)_h\otimes U(1)_h$, and the Higgs mechanism breaks the symmetry to $U(1)_Q\otimes U(1)_h$. The model denies the existence of either Majorana or right-handed Dirac neutrinos.
We note that the GWS model and composite frameworks such as the Preon or Rishon models treat the charged leptons on an equal footing: in the GWS model all leptons appear as fundamental, whereas the composite models posit that all leptons have substructures. In our model, by contrast, the electron and the muon cannot be treated on an equal footing. This is important in view of the results of the recent muon $g-2$ experiment, which reveal that the $g$ value of the muon is slightly higher than the predicted value\,\cite{Borsanyi:2020mff, Aoyama:2020ynm, Abi:2021gix}. Perhaps the \emph{Sh} interaction of $e$ with $\chi$ plays a role in elevating the $g$ value of the muon.
One may say that the present framework is an extension of the GWS model: it curtails the number of fundamental leptons while introducing hidden matter fields and forces. Neutrino mass models often conjecture either the presence of right-handed neutrinos or that neutrinos are fundamentally Majorana particles; however, neither case has been validated experimentally. The present analysis says that only $\nu_e$, being a fundamental particle, is massless. The $\nu_e$, in association with the hidden scalar fields $\chi$ and $\xi$, forms $\nu_\mu$ and $\nu_\tau$ respectively, and may thus acquire mass. The formation of the bound states and the flavour-changing phenomena are attributed to the hidden forces. So, the hidden matter fields and hidden forces are part and parcel of the SM. However, a detailed analysis of the \emph{Shupti} and \emph{Sushupti} forces requires further investigation.
Although we have discussed only the lepton sector, the SM is not complete without the quarks. Based on symmetry arguments, one may try to describe the second and third generations of quarks as composite particles, though it remains unclear whether this is required, as the problems within the lepton sector are not akin to those of the quarks. Avenues remain open, since the fundamental laws of Nature are not yet fully understood, and the present work adds a new perspective to the same.
\section{Introduction}
The design of large scale distributed networks (e.g., mesh and ad-hoc networks, relay networks) poses a set of new challenges to information theory, communication theory and network theory. Such networks are characterized by their large size, both in terms of the number of nodes (i.e., density) and the geographical area they cover. Each terminal can be severely constrained in its computational and transmission/receiving power. Moreover, delay and complexity constraints, along with diversity-limited channel behavior, may require transmissions under insufficient levels of coding protection, causing link outages. These constraints require an understanding of the performance limits of such networks {\it jointly in terms of power and bandwidth efficiency and link reliability}, especially when designing key operational elements of these systems such as multihop routing algorithms, bandwidth allocation policies and relay deployment models.
This paper applies tools from information theory and statistics to evaluate the end-to-end performance limits of various multihop routing algorithms in wireless networks focusing on the tradeoff between energy efficiency and spectral efficiency; which is also known as the {\it power-bandwidth tradeoff}. In particular, our main interest is in the power-limited {\it wideband} communication regime, in which transmitter power is much more costly than bandwidth. Since bandwidth is in abundance, communication in this regime is characterized by low signal-to-noise ratios (SNRs), very low signal power spectral densities, and negligible interference power.
{\it Relation to Previous Work.} While the power-bandwidth tradeoff characterizations of various point-to-point and multi-user communication settings can be found in the literature, previous work addressing the fundamental limits over large adhoc wireless networks has generally focused either only on the energy efficiency performance \cite{Dana03} or only on the spectral efficiency performance \cite{Gupta00}\nocite{Gastpar02}-\cite{Boel2006}. The analytical tools to study the power-bandwidth tradeoff in the power-limited regime have been previously developed in the context of point-to-point
single-user communications \cite{Verdu02}, and were extended to multi-user (point-to-multipoint and multipoint-to-point) settings
\cite{Shamai01}-\nocite{Caire04}\nocite{Lapidoth03}\cite{Muharem03}, as well as to adhoc wireless networking examples of
single-relay channels \cite{Abbas03}-\cite{Yao03}, multihop networks under additive white Gaussian noise (AWGN) \cite{Laneman05} and dense multi-antenna relay networks \cite{oyman_pbt2005}.
{\it Contributions.} This paper characterizes the power-bandwidth tradeoff in a wideband linear multihop network with quasi-static frequency-selective fading processes and orthogonal frequency-division multiplexing (OFDM) modulation over all links; with a key emphasis on the power-limited wideband regime. Our analysis considers open-loop (fixed-rate) and closed-loop (rate-adaptive) multihop relaying techniques and focuses on the impact of routing with spatial reuse on the statistical properties of the end-to-end conditional mutual information (conditioned on the specific values of the channel fading parameters and therefore treated as a random variable \cite{Ozarow94}) and on the energy and spectral efficiency measures of the wideband regime (computed from the conditional mutual information). Our results demonstrate the realizability of the multihop diversity advantages in the case of routing with spatial reuse for wideband OFDM systems under wireless channel effects such as path-loss and quasi-static frequency-selective multipath fading. The first author reported earlier analytical results for the case of multihop routing with no spatial reuse in \cite{Oyman06b}, which was the first work to observe the effect of multihop diversity for enhancing end-to-end link reliability in diversity-limited flat-fading wireless systems.
\section{Network Model and Definitions}
\subsection{General Assumptions}
We model a linear multihop network as a network in which a pair of source and destination terminals communicate with each other by routing their data through multiple intermediate relay terminals, as depicted in Fig. \ref{linear_net}. If the linear multihop network consists of $N+1$ terminals; the source terminal is identified as ${\cal T}_1$, the destination terminal is identified as ${\cal T}_{N+1}$, and the intermediate relay terminals are identified as ${\cal T}_2$-${\cal T}_{N}$, where $N$ is the number of hops along the transmission path. Because terminals cannot transmit and receive at the same time in the same frequency band, we only focus on time-division based (half duplex) relaying, which orthogonalizes the use of the time and frequency resources
between the transmitter and receiver of a given radio. Moreover, we consider {\it full decoding} of the entire codeword at the intermediate relay terminals, which is also called {\it regeneration} or {\it decode-and-forward} in various contexts. In particular, for any given message to be conveyed from ${\cal T}_1$ to ${\cal T}_{N+1}$, we consider a simple $N$-hop decode-and-forward multihop routing protocol, in which, at hop $n$, relay terminal ${\cal T}_{n+1}, \, n=1,...,N-1$, attempts to fully decode the intended message based on its observation of the transmissions of terminal ${\cal T}_{n}$ and forwards its re-encoded version over hop $n+1$ to terminal ${\cal T}_{n+2}$. We consider multihop relaying protocols with no interference across different hops, as well as those with {\it spatial reuse}, for which we allow a certain number of terminals over the linear network to transmit simultaneously over the same time slot and frequency band.
\begin{figure} [t]
\begin{center}
\includegraphics[width=3.4in, keepaspectratio]{linear_net.eps}
\end{center}
\caption{Linear multihop network model.}
\label{linear_net}
\end{figure}
To facilitate parallel transmission of several packets through the linear multihop network, the available bandwidth is reused between transmitters, with a minimum separation of $K$ terminals between simultaneously transmitting terminals ($2 \leq K \leq N$); such that $N$ is divisible by $K$ and $M=N/K$ simultaneous transmissions are allowed at any time. Such spatial reuse schemes enable multiple nodes to transmit, leading to more efficient use of bandwidth but introducing intra-route interference. For the case of no spatial reuse, we have $K=N$ and $M=1$. In decoding the message, terminals regard all interference signals not originating from the preceding node as noise, i.e., the receiver at terminal ${\cal T}_{n+1}$ treats all received signal components other than that from terminal ${\cal T}_n$ as noise.
\subsection{Channel and Signal Model}
We consider the wideband channel over each hop to exhibit quasi-static frequency-selective fading with AWGN over the bandwidth of interest, and assume perfectly synchronized transmission/reception between the terminals. OFDM modulation turns the frequency-selective fading channel into a set of $W$ parallel frequency-flat fading channels, rendering multi-channel equalization particularly simple since a narrowband receiver can be employed for each OFDM tone. We assume that the length of the cyclic prefix (CP) in the OFDM system is greater than the length of the discrete-time baseband channel impulse response; this assumption guarantees the decoupling into parallel frequency-flat fading channels. Our channel model accommodates multihop routing protocols with spatial reuse as well as those without spatial reuse. At hop $n$ and tone $w$, the discrete-time memoryless complex baseband input-output channel relation is given by ($n=1,...,N$ and $w=1,...,W$)
$$
{y}_{n,w} = \left( \frac{1}{d_{n}} \right)^{p/2} H_{n,w} \,s_{n,w} + \sum_{l \in {\cal L}_n} \left( \frac{1}{f_{n,l}} \right)^{p/2} G_{n,l,w} \,i_{n,l,w} + z_{n,w},
$$
where $y_{n,w} \in {\Bbb C}$ is the received signal at terminal ${\cal T}_{n+1}$,
$s_{n,w} \in {\Bbb C}$ is the temporally i.i.d. zero-mean
circularly symmetric complex Gaussian scalar transmit signal from ${\cal T}_n$
satisfying the average transmit power constraint ${\Bbb E}\left[|s_{n,w}|^2\right] = P_s$,
$i_{n,l,w} \in {\Bbb C}$ is the temporally i.i.d. zero-mean
circularly symmetric complex Gaussian scalar transmit signal from intra-route interference source $l$
satisfying the average transmit power constraint ${\Bbb E}\left[|i_{n,l,w}|^2\right] = P_i$,
$z_{n,w} \in {\Bbb C}$ is the temporally white zero-mean circularly symmetric
complex Gaussian noise signal at ${\cal T}_{n+1}$, independent across $n$ and $w$ and independent from the input signals $\{s_{n,w}\}$ and $\{i_{n,l,w}\}$, with single-sided noise spectral density $N_0$, $d_{n}$ is the inter-terminal distance
between terminals ${\cal T}_n$ and ${\cal T}_{n+1}$, $f_{n,l}$ is the inter-terminal distance between interference source $l$ and terminal ${\cal T}_{n+1}$, set ${\cal L}_n$ contains the indices of the subset of terminals ${\cal T}_1$-${\cal T}_{N+1}$ over the linear multihop network contributing to the intra-route interference seen during the reception of terminal ${\cal T}_{n+1}$ and
$p$ is the path loss exponent ($p \geq 2$). All of the discrete-time channels are assumed to be frequency-selective with $V$ delay taps indexed by $v=0,...,V-1$, under a certain power delay profile (PDP) such that their frequency responses sampled at tones $w=1,...,W$ are
$$
H_{n,w}=\sum_{v=0}^{V-1} h_{n,v} e^{-j 2 \pi v w / W},\,\,\,\,
G_{n,l,w}=\sum_{v=0}^{V-1} g_{n,l,v} e^{-j 2 \pi v w / W},
$$
for the signal and interference components, respectively, where $h_{n,v} \in {\Bbb C}$ and $g_{n,l,v} \in {\Bbb C}$ are random variables of arbitrary continuous distributions representing the signal and interference channel gains at receiving terminal ${\cal T}_{n+1}$, due to fading (including shadowing and microscopic fading effects) over the wireless links. We assume that the linear multihop network has a one-dimensional geometry such that the source terminal ${\cal T}_1$ and destination terminal ${\cal T}_{N+1}$ are separated by a distance $D$ and all intermediate terminals ${\cal T}_2$-${\cal T}_{N}$ (in that order) are equidistantly positioned on the line between ${\cal T}_1$ and ${\cal T}_{N+1}$, i.e., the inter-terminal distance $d_n$ is chosen as $d_n=D/N$.
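The tone responses defined above can be sampled numerically. The sketch below (with illustrative values of $V$ and $W$ and assumed Rayleigh tap statistics under a uniform power delay profile, none of which are fixed by the model, which allows arbitrary continuous fading distributions) generates $H_{n,w}$ from the taps by a DFT and verifies Parseval's relation, by which the tone-averaged channel power $(1/W)\sum_w |H_{n,w}|^2$ (the quantity that appears in the theorems below) equals the total tap power:

```python
import numpy as np

rng = np.random.default_rng(0)
V, W = 4, 64   # illustrative tap count and tone count (assumed values)

# i.i.d. complex Gaussian taps under a uniform power delay profile
h = (rng.standard_normal(V) + 1j * rng.standard_normal(V)) / np.sqrt(2 * V)

# frequency response sampled at the W OFDM tones:
# H_w = sum_v h_v exp(-j 2 pi v w / W)
w = np.arange(W)
v = np.arange(V)
H = np.exp(-2j * np.pi * np.outer(w, v) / W) @ h

# Parseval (V <= W): the tone-averaged channel power equals the total tap power
print(np.mean(np.abs(H)**2), np.sum(np.abs(h)**2))
```

The same responses are obtained with `np.fft.fft(h, n=W)`, which zero-pads the $V$ taps to $W$ samples.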
The channel fading statistics over the linear multihop network (modeled by random variables $\{h_{n,v}\}$ and $\{g_{n,l,v}\}$) are assumed to be based on i.i.d. realizations across different hops and taps (across $n$ and $v$). Furthermore, our channel model concentrates on the quasi-static regime, in which, once drawn, the channel variables $\{h_{n,v}\}$ and $\{g_{n,l,v}\}$ remain fixed for the entire duration of the respective hop transmissions, i.e., each codeword spans a single fading state, and that the channel coherence time is much larger than the coding block length, i.e., slow fading assumption. Although we assume that each receiving terminal ${\cal T}_{n+1}$ accurately estimates and tracks its channel and therefore possesses the perfect knowledge of the signal channel states $\{h_{n,v}\}_{v=0}^{V-1}$ and aggregate interference powers due to sources in ${\cal L}_n$, we consider two separate cases regarding the availability of channel state information (CSI) at the transmitters:
(i) {\it Fixed-rate transmissions:} No terminal possesses transmit CSI which necessitates a fixed-rate transmission strategy for all terminals, where the rate is chosen to meet a certain level of reliability with a certain probability,
(ii) {\it Rate-adaptive transmissions:} Each transmitting terminal ${\cal T}_{n},\,n=1,...,N$ possesses the knowledge of the channel states $\{h_{n,v}\}_{v=0}^{V-1}$ and aggregate interference powers due to sources in ${\cal L}_n$, and this allows for adaptively choosing the transmission rate over hop $n$ in a way that guarantees reliable communication provided that the coding blocklength is arbitrarily large.
It should be emphasized that we only assume the presence of local CSI at the terminals, so that each terminal knows perfectly the receive (and possibly transmit) CSI regarding only its neighboring links; our work does not assume the presence of global CSI at the terminals. In general, due to slow fading, each terminal in the linear multihop network may be able to obtain full CSI for its neighboring links through feedback mechanisms.
\subsection{Coding Framework}
To model block-coded communication over the linear multihop network, a $(\{M_n\}_{n=1}^N,\{Q_n\}_{n=1}^N,Q)$ multihop code ${\cal C}_Q$ is defined by a codebook of $\sum_{n=1}^N M_n$ codewords such that $M_n$ is the number of messages (i.e., number of codewords) for transmission over hop $n$, $Q_n$ is the coding blocklength over hop $n$, $R_n = (1/Q_n)\ln(M_n)$ is the rate of communication over hop $n$ (in nats per channel use), and $Q = \sum_{n=1}^N Q_n$ is the fixed total number of channel uses over the multihop link, representing a delay-constraint in the end-to-end sense, i.e., the $N$-hop routing protocol to convey each message from ${\cal T}_1$ to ${\cal T}_{N+1}$ takes place over the total duration of $\sum_{n=1}^N Q_n = Q$ symbol periods. Let ${\cal S}_{Q_n}$ be the set of all sequences of length $Q_n$ that can be transmitted on the channel over hop $n$ and ${\cal Y}_{Q_n}$ be the set of all sequences of length $Q_n$ that can be received. The codebook for multihop transmissions is determined by the encoding functions $\phi_n,\,n=1,...,N$, that map each message $w_n\in {\cal W}_n = \{1,...,M_n\}$ over hop $n$
to a transmit codeword ${\bf s}_{n} \in {\Bbb C}^{W \times Q_n}$, where $s_{n,w} [q] \in {\cal S}_{1}$ is the transmitted
symbol over hop $n$ and tone $w$ during channel use $\sum_{m=1}^{n-1}Q_m+q,\,q=1,...,Q_n$. Each receiving terminal employs a decoding function $\psi_n \, , \,n=1,...,N$ to perform the mapping ${\Bbb C}^{W\times Q_n} \rightarrow \hat{w}_n \in {\cal W}_n$ based on its observed signal ${\bf y}_n \in {\Bbb C}^{W \times Q_n}$, where $y_{n,w}[q] \in {\cal Y}_{1}$ is the received symbol over hop $n$ and tone $w$ at time $\sum_{m=1}^{n-1}Q_m+q$. The codeword error probability for the $n$-th hop is given by $\epsilon_n = {\Bbb P}(\psi_n({\bf y}_n) \neq w_n)$. An $N$-tuple of multihop rates $(R_1,...,R_N)$ is
achievable if there exists a sequence of $(\{M_n\}_{n=1}^N, \{Q_n\}_{n=1}^N, Q)$
multihop codes $\{{\cal C}_Q:Q=1,2,...\}$ with $Q=\sum_{n=1}^N Q_n$, $Q_n >0, \forall n$, and vanishing $\epsilon_n,\, \forall n$.
\subsection{Power-Bandwidth Tradeoff Measures}
We assume that the linear multihop network is supplied with finite total average transmit power $P$ (in Watts (W)) over unconstrained bandwidth $B$ (in Hertz (Hz)). The available transmit power is shared equally among $M=N/K$ simultaneous transmissions and $W$ OFDM tones of equal bandwidth $B/W$, leading to $P_s=P/(M\,W)$ and $P_i=P/(M\,W)$. If the transmitted codewords over the linear multihop network are chosen to achieve the desired end-to-end data rate per unit bandwidth (target spectral efficiency) $R$, reliable communication requires that $R \leq {\cal I}\left({E_b}/{N_0}\right)$ as $Q_n \rightarrow \infty,\,\forall n$, where ${\cal I}$ denotes the conditional mutual information (in nats/second/Hertz (nats/s/Hz)) which is a random variable under quasi-static fading, and $E_b/N_0$ is the energy per information bit normalized by the background noise spectral level, expressed as $E_b/N_0 = \mathsf{SNR}/I(\mathsf{SNR})$ for $\mathsf{SNR} = P/(N_0B)$ and $I$ denoting the conditional mutual information as a function of $\mathsf{SNR}$ \footnote{The use of $I$ and ${\cal I}$ avoids assigning the same symbol to conditional mutual information functions of $\mathsf{SNR}$ and ${E_b}/{N_0}$.}.
There exists a tradeoff between the efficiency measures ${E_b}/{N_0}$ and ${\cal I}$ (known as the power-bandwidth tradeoff) in achieving a given target data rate. When ${\cal I} \ll 1$, the system operates in the {\it power-limited wideband regime}; i.e., the bandwidth is large and the main concern is the limitation on power. Particular emphasis throughout our analysis is placed on this wideband regime, i.e., regions of low ${E_b}/{N_0}$.
Defining $({E_b}/{N_0})_{\mathrm{min}}$ as the minimum system-wide
${E_b}/{N_0}$ required to convey any positive rate reliably, we have
$({E_b}/{N_0})_{\mathrm{min}} = \min_{\mathsf{SNR}} \, \mathsf{SNR}/I(\mathsf{SNR})$.
In most scenarios, ${E_b}/{N_0}$ is minimized in the wideband regime when $\mathsf{SNR}$ is low and $I$ is near zero.
We consider the first-order behavior of $\cal{I}$ as a function of
${E_b}/{N_0}$ when ${\cal I} \rightarrow 0$ by analyzing the affine function (in decibels)
\footnote{ $\,\,u(x)=o(v(x)), x \rightarrow L$ stands for $\lim_{x
\rightarrow L}\frac{u(x)}{v(x)}=0$.} \footnote{ $\, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\,$ denotes statistical equality with probability $1$.}
$$
10\log_{10}\frac{E_b}{N_0} \left( {\cal I} \right) \, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\, 10\log_{10}\frac{E_b}{N_0}_{\mathrm{min}} +
\frac{\cal I}{S_0}10\log_{10}2 + o({\cal I}),
$$
where $S_0$ denotes the ``wideband'' slope of mutual
information in b/s/Hz/(3 dB) at the point $({E_b}/{N_0})_{\mathrm{min}}$,
$$
S_0 \, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s}}\,\,\, \lim_{\frac{E_b}{N_0} \downarrow \frac{E_b}{N_0}_{\mathrm{min}}}
\frac{{\cal I}(\frac{E_b}{N_0})}{10\log_{10}\frac{E_b}{N_0}-10\log_{10}
\frac{E_b}{N_0}_{\mathrm{min}}}10\log_{10}2 .
$$
It can be shown that \cite{Verdu02}
\begin{equation}
\frac{E_b}{N_0}_{\mathrm{min}} \, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s}}\,\,\, \lim_{\mathsf{SNR} \rightarrow 0} \,\frac{\ln{2}}{\dot{I}(\mathsf{SNR})},
\label{min_ener}
\end{equation}
and
\begin{equation}
S_0 \, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s}}\,\,\, \lim_{\mathsf{SNR} \rightarrow 0} \frac{2{\left[ \dot{I}(\mathsf{SNR}) \right]}^2}{-\ddot{I}(\mathsf{SNR})},
\label{wideband_slope}
\end{equation}
where $\dot{I}$ and $\ddot{I}$ denote the first and second order derivatives of
$I(\mathsf{SNR})$ (evaluated in nats/s/Hz) with respect to $\mathsf{SNR}$.
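As a numerical sanity check of (\ref{min_ener}) and (\ref{wideband_slope}), the sketch below evaluates both limits by finite differences for a single point-to-point AWGN link with $I(\mathsf{SNR})=\ln(1+\mathsf{SNR})$ (an illustrative special case, not the multihop network studied here), recovering the familiar values $(E_b/N_0)_{\mathrm{min}}=\ln 2 \approx -1.59$ dB and $S_0=2$:

```python
import numpy as np

def I(snr):
    # point-to-point AWGN mutual information in nats/s/Hz
    return np.log1p(snr)

# numerical first and second derivatives of I(SNR) as SNR -> 0
h = 1e-6
Idot = (I(h) - I(0.0)) / h
Iddot = (I(2 * h) - 2 * I(h) + I(0.0)) / h**2

ebn0_min = np.log(2) / Idot     # Eq. (2): ln 2 / I'(0)
S0 = 2 * Idot**2 / (-Iddot)     # Eq. (3): wideband slope

print(10 * np.log10(ebn0_min))  # about -1.59 dB
print(S0)                       # about 2 b/s/Hz/(3 dB)
```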
\section{Power-Bandwidth Tradeoff in Wideband Linear Multihop Networks}
We begin this section by characterizing the end-to-end mutual information over the linear multihop network considering the use of point-to-point capacity achieving codes over each hop. For the mutual information analysis, we do not impose any delay constraints on the multihop system and allow each coded transmission to have an arbitrarily large blocklength (i.e. assume large $\{Q_n\}$), although we will be concerned with the relative sizes of blocklengths over multiple hops. It is assumed that the nodes share a band of radio frequencies allowing for a signaling rate of $B$ complex-valued symbols per second. For any given spatial reuse separation $K$, the time-division based multihop routing protocol is specified by the time-sharing constants $\{\lambda_k\}_{k=1}^K,\,\sum_{k=1}^K \lambda_k=1$, where $\lambda_{k} \in [0,1]$ is defined as the fractional time during which reuse phase $k$ is active ($k=1,...,K$), with simultaneous transmission and reception over the corresponding $M=N/K$ hops. An example time-division based multihop routing protocol with spatial reuse and time-sharing is depicted in Fig. \ref{linear_net_reuse} for $N=4,K=2,M=2$.
\begin{figure} [t]
\begin{center}
\includegraphics[width=3.4in, keepaspectratio]{Linear_net_reuse.eps}
\end{center}
\caption{Linear multihop network model with spatial reuse and time-sharing ($N=4, K=2, M=2$).}
\label{linear_net_reuse}
\end{figure}
For any given reuse phase $k$, the set of hops performing simultaneous transmissions are indexed by $m=1,...,M$. If the transmitted codewords over reuse phase $k$ are chosen based on the data rate per unit bandwidth (spectral efficiency) $\tilde{R}_k$, reliable communication requires that the condition $\tilde{R}_k \leq \min_m I_{k,m}(\mathsf{SNR})$ is met for all $k$, where $I_{k,m}$ denotes the mutual information over link $m$ during reuse phase $k$; such that hop index $n=(m-1)K+k$. The end-to-end conditional (instantaneous) mutual information $I$ of the linear multihop network
can be expressed in the form \cite{Oyman06b, Laneman05}
\begin{equation}
I(\mathsf{SNR}) = \max_{\sum_{k=1}^K \lambda_k = 1} \min_k \left\{ \lambda_k
\min_{m} I_{k,m}(\mathsf{SNR}) \right\},
\label{cap_minimax}
\end{equation}
where $I_{k,m}(\mathsf{SNR})$ is the conditional mutual information given (in nats/s/Hz) by \cite{Boel_cap99}
\begin{equation}
I_{k,m}(\mathsf{SNR}) = \frac{1}{W} \sum_{w=1}^W \ln\left(1 + \mathsf{SINR}_{(m-1)K+k,w}(\mathsf{SNR})\right),
\label{cap_fsh}
\end{equation}
as a function of the received signal-to-interference-and-noise ratio (SINR), which is given at terminal ${\cal T}_{n+1}$ and tone $w$ by
$$
\mathsf{SINR}_{n,w}(\mathsf{SNR}) = \frac{N^{p-1} \,K \, |H_{n,w}|^2\,\mathsf{SNR}}{D^{p}} \left(1+\zeta_{n,w}(\mathsf{SNR})\right)^{-1},
$$
where $\zeta_{n,w}(\mathsf{SNR})$ is the aggregate intra-route interference power scaled down by noise power, which satisfies $\lim_{\mathsf{SNR} \rightarrow 0} \zeta_{n,w}(\mathsf{SNR})=0$.
\subsection{Fixed-Rate Multihop Relaying}
A suboptimal strategy that yields a lower bound to the conditional mutual information in (\ref{cap_minimax}) is equal time-sharing ($\lambda_k = 1/K$) and fixed-rate (open-loop) transmission over all hops, i.e. the rate over hop $n$ equals $R_n=R,\,\forall n$ for some fixed value of $R$. This strategy is applicable in the absence of rate adaptation mechanisms if CSI is not available at the transmitters. In this setting, the end-to-end conditional mutual information can be expressed as
\begin{eqnarray}
I(\mathsf{SNR}) & = & \frac{1}{K\,W} \, \min_{k,m} \, \sum_{w=1}^W \ln \left(1 + \mathsf{SINR}_{(m-1)K+k,w}(\mathsf{SNR}) \right) \nonumber \\
&=& \frac{1}{K\,W} \, \min_n \, \sum_{w=1}^W \ln \left(1 + \mathsf{SINR}_{n,w}(\mathsf{SNR}) \right).
\label{cap_eq_ts}
\end{eqnarray}
{\bf \noindent Theorem 1:} {\it In the wideband regime, for time-division based linear
multihop networks employing the fixed-rate decode-and-forward relaying protocol (equal time-sharing),
the power-bandwidth tradeoff can be characterized as a function of
the channel fading parameters through the following relationships:}
$$
\frac{E_b}{N_0}_{\mathrm{min}} \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\, \frac{\ln{2}}{\min_n\,(1/W) \sum_{w=1}^W |H_{n,w}|^2} \left(\frac{D^p}{N^{p-1}}\right),
$${\it and}
$$
S_0 \, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\, \frac{2}{K}.
$$
{\it In the limit of large $N$, $(E_b/N_0)_{\mathrm{min}}$ converges in distribution as follows:}\footnote{ $\, \, \, \longrightarrow^{\!\!\!\!\!\!\!\mbox{\tiny
d}}\,\,\,$ denotes convergence in distribution.}
$$
\frac{E_b}{N_0}_{\mathrm{min}} \, \longrightarrow^{\!\!\!\!\!\!\!\mbox{\tiny
d}}\,\,\,\,\,\,\,\, \frac{\ln{2}}{a_N \, \Theta + b_N} \left(\frac{D^p}{N^{p-1}}\right),
$$
{\it where $a_N>0$, $b_N$ are sequences of constants and $\Theta$ follows one of the three families of extreme-value distributions $\mu$: i) Type I, $\mu(x) = 1 - \exp \left( -\exp(x) \right),\,-\infty<x<\infty,$ ii) Type II,
$\mu(x) = 1 - \exp \left( -(-x)^{-\gamma} \right),\,\gamma >0$ if $x<0$ and
$\mu(x) = 1$ otherwise, iii) Type III, $\mu(x) = 1 - \exp \left( -x^{\gamma} \right),\,\gamma >0$ if $x \geq 0$ and
$\mu(x) = 0$ otherwise.}
\vspace*{2mm}
{\noindent \bf Proof.} We begin by applying (\ref{min_ener})-(\ref{wideband_slope}) to (\ref{cap_eq_ts}), which yields the non-asymptotic results of the theorem. Denoting $\beta_N = \min_{n=1,...,N} \,(1/W) \sum_{w=1}^W |H_{n,w}|^2$, if there exist sequences of constants $a_N > 0$, $b_N$, and some nondegenerate distribution function $\mu$ such that $(\beta_N - b_N)/a_N$ converges in distribution to $\mu$ as $N \rightarrow \infty$, i.e.,
$$
{\Bbb P}\left(\frac{\beta_N - b_N}{a_N} \leq x \right) \longrightarrow \mu(x)\,\,\,\,\,\,\mathrm{as} \,\, N \rightarrow \infty,
$$
then $\mu$ belongs to one of the three families of extreme-value distributions above \cite{Leadbetter83}. The exact asymptotic limiting distribution is determined by the distribution of $(1/W) \sum_{w=1}^W |H_{n,w}|^2$, and to which one of the three domains of attraction it belongs. Consequently, we have $\beta_N \, \longrightarrow^{\!\!\!\!\!\!\!\mbox{\tiny d}}\,\,\,\, a_N \,\, \Theta + b_N$, which completes the proof of the theorem. \hfill $\Box$
\vspace*{2mm}
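The minimum $\beta_N$ appearing in the proof can be simulated directly. The sketch below (assumed Rayleigh taps and illustrative $W$, $V$; the model itself allows arbitrary continuous fading distributions) shows how the distribution of $\beta_N$ shifts toward zero as $N$ grows, the penalty that the $N^{p-1}$ path-loss gain must overcome:

```python
import numpy as np

rng = np.random.default_rng(2)
W, V, trials = 32, 4, 1000   # assumed illustrative values

def beta(N):
    # beta_N = min_n (1/W) sum_w |H_{n,w}|^2 over N i.i.d. Rayleigh hops
    h = (rng.standard_normal((trials, N, V)) +
         1j * rng.standard_normal((trials, N, V))) / np.sqrt(2 * V)
    H = np.fft.fft(h, n=W, axis=2)
    return np.mean(np.abs(H)**2, axis=2).min(axis=1)

# the mean of the minimum decreases as the number of hops grows
for N in (2, 8, 32):
    b = beta(N)
    print(N, b.mean(), b.std())
```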
In the presence of non-ergodic, or even ergodic but slow fading channel variations, one approach toward the
information-theoretic characterization of the end-to-end performance under fixed-rate transmissions (in the absence of transmit CSI at all terminals) involves the consideration of
{\it outage probability} \cite{Ozarow94}. We define the end-to-end outage in a linear multihop network as the event that
the conditional mutual information based on the instantaneous channel fading parameters $\{h_{n,v}\}$ and $\{g_{n,l,v}\}$ cannot support the considered data rate.
Expressed mathematically, the end-to-end outage probability is given in terms of end-to-end conditional mutual information $I(\mathsf{SNR})$ as
$P_{\mathrm{out}} = {\Bbb P} \left( I(\mathsf{SNR}) < R \right)$,
where $R$ is the desired end-to-end data rate per unit bandwidth (spectral efficiency). Following the results of Theorem 1, a similar outage characterization is applicable to the power-bandwidth tradeoff in the wideband regime; in particular we can write $(E_b/N_0)_{\mathrm{min}}$ as
$$
\frac{E_b}{N_0}_{\mathrm{min,out}} = \frac{\ln{2}}{a_N \,\mu^{-1}(P_{\mathrm{out}}) + b_N} \left(\frac{D^p}{N^{p-1}}\right).
$$
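A Monte Carlo estimate of $P_{\mathrm{out}}$ under the same modeling assumptions as before (assumed Rayleigh taps, intra-route interference neglected, illustrative parameters) is straightforward; as a basic consistency property, the outage probability is non-decreasing in the target rate:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, W, V, trials = 8, 2, 32, 4, 4000   # assumed illustrative values
D, p, snr = 1.0, 3.0, 1e-3               # low SNR: wideband regime

# sample the end-to-end conditional mutual information of Eq. (8)
h = (rng.standard_normal((trials, N, V)) +
     1j * rng.standard_normal((trials, N, V))) / np.sqrt(2 * V)
H = np.fft.fft(h, n=W, axis=2)           # H_{n,w} per trial
sinr = (N**(p - 1) * K / D**p) * np.abs(H)**2 * snr
I_end = np.mean(np.log1p(sinr), axis=2).min(axis=1) / K

# P_out = P(I(SNR) < R) at two target rates R (nats/s/Hz, assumed values)
R_lo, R_hi = 0.02, 0.05
p_out_lo = np.mean(I_end < R_lo)
p_out_hi = np.mean(I_end < R_hi)
print(p_out_lo, p_out_hi)
```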
\subsection{Rate-Adaptive Multihop Relaying}
The conditional mutual information in (\ref{cap_minimax}) is achievable by the linear
multihop network under optimal time-sharing and rate adaptation to instantaneous fading variations. Because the transmission rate of each codeword over each hop is chosen so that reliable decoding is always possible (the rate is adapted on a codeword-by-codeword basis to the instantaneous channel fading conditions), the system is never in outage under this closed-loop strategy (assuming infinite blocklengths). Although outage may be irrelevant on a per-hop basis (full reliability given infinite blocklengths), investigating the statistical properties of the end-to-end mutual information over the linear multihop network still yields beneficial insights in applications sensitive to certain QoS constraints (e.g., throughput, reliability, delay or energy constraints). Applying Lemma 1 in \cite{Oyman06b}, the end-to-end conditional mutual information under the rate-adaptive multihop relaying strategy becomes
\begin{equation}
I(\mathsf{SNR}) = \left(\sum_{k=1}^K \frac{1}{\min_{m} I_{k,m}(\mathsf{SNR})}\right)^{-1},
\label{cap_optimal_ts}
\end{equation}
where $I_{k,m}(\mathsf{SNR})$ was given earlier in (\ref{cap_fsh}).
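Equation (\ref{cap_optimal_ts}) can be compared against the equal time-sharing value of (\ref{cap_eq_ts}) on the same channel draw. The sketch below (assumed Rayleigh taps, illustrative parameters, intra-route interference neglected) does this; optimal time-sharing can never do worse than equal time-sharing:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, W, V = 8, 2, 32, 4          # assumed illustrative values
D, p, snr = 1.0, 3.0, 1e-2

h = (rng.standard_normal((N, V)) + 1j * rng.standard_normal((N, V))) / np.sqrt(2 * V)
H = np.fft.fft(h, n=W, axis=1)
sinr = (N**(p - 1) * K / D**p) * np.abs(H)**2 * snr
per_hop = np.mean(np.log1p(sinr), axis=1)   # I_{k,m}, with hop n = (m-1)K + k

# Eq. (9): optimal time-sharing combines the per-phase bottlenecks harmonically
bottleneck = np.array([per_hop[k::K].min() for k in range(K)])  # min over m
I_adaptive = 1.0 / np.sum(1.0 / bottleneck)

# equal time-sharing / fixed-rate value of Eq. (8), for comparison
I_fixed = per_hop.min() / K
print(I_adaptive, I_fixed)
```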
\vspace*{2mm}
{\bf \noindent Theorem 2:} {\it In the wideband regime, for time-division based linear
multihop networks employing the rate-adaptive decode-and-forward relaying protocol (optimal time-sharing),
the power-bandwidth tradeoff can be characterized as a function of
the channel fading parameters through the following relationships:}
$$\frac{E_b}{N_0}_{\mathrm{min}} \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\, \left(\frac{D^p}{N^{p-1} \,K}\right)
\sum_{k=1}^K \frac{\ln{2}}{\min_m (1/W) \sum_{w=1}^W |H_{(m-1)K+k,w}|^2},
$$
{\it and}
$$
S_0 \, \, \, =^{\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\, \frac{2}{K}.
$$
{\it In the limit of large $N$ and for fixed $M$, $(E_b/N_0)_{\mathrm{min}}$ converges almost surely (with probability $1$) to the deterministic quantity} \footnote{ $\, \, \, \longrightarrow^{\!\!\!\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\,$ denotes convergence with probability $1$.}
$$
\frac{E_b}{N_0}_{\mathrm{min}} \, \, \longrightarrow^{\!\!\!\!\!\!\!\!\mbox{\tiny
a.s.}}\,\,\, \ln{2}\left(\frac{D^{p}}{N^{p-1}}\right) \chi + o\left(\frac{1}{N^{p-1}}\right),
$$
{\it where the constant $\chi$ is given by}
$$
\chi = {\Bbb
E}\left[ \frac{1}{\min_{m=1,...,M}(1/W)\sum_{w=1}^W|H_{m,w}|^2} \right].
$$
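The constant $\chi$ has no simple closed form in general, but it is straightforward to estimate numerically. The sketch below is illustrative only: it assumes i.i.d. Rayleigh taps with an equal-power PDP (the paper's numerical study instead uses Ricean taps).

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_chi(M=4, W=4, n_trials=200_000):
    """Monte Carlo estimate of chi = E[ 1 / min_m (1/W) sum_w |H_{m,w}|^2 ].

    Illustrative assumption: i.i.d. Rayleigh taps, |H_{m,w}|^2 ~ Exp(1),
    under an equal-power power-delay profile.
    """
    h2 = rng.exponential(1.0, size=(n_trials, M, W))
    per_segment = h2.mean(axis=2)                 # (1/W) sum_w |H_{m,w}|^2
    return np.mean(1.0 / per_segment.min(axis=1)) # average of 1/min over m

chi = estimate_chi()
```

By Jensen's inequality the estimate exceeds 1, and frequency diversity ($W > 1$) keeps the per-segment averages away from zero, which is what makes $\chi$ finite.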
\vspace*{.1mm}
\subsection{Remarks on Theorems 1 and 2}
Theorems 1 and 2 suggest that the channel dependence of the power-bandwidth tradeoff is reflected by the randomness of $(E_b/N_0)_{\mathrm{min}}$ for both fixed-rate and rate-adaptive multihop relaying schemes in the presence of spatial reuse and frequency selectivity. We observe under rate-adaptive relaying in the wideband regime that, as the number of hops tends to infinity, $(E_b/N_0)_{\mathrm{min}}$ converges almost surely to a \emph{deterministic} quantity independent of the fading channel realizations. Similarly, for fixed-rate relaying, we observe a weaker convergence (in distribution) for $(E_b/N_0)_{\mathrm{min}}$ in the case of an asymptotically large number of hops. This averaging effect achieved by fixed-rate and rate-adaptive relaying schemes can be interpreted as {\it multihop diversity}, a phenomenon first observed in \cite{Oyman06b} for routing with no spatial reuse in frequency-flat fading channels, and now shown to be also realizable with spatial reuse and frequency selectivity. Although fixed-rate relaying for asymptotically large $N$ improves the outage performance, this framework does not yield the fast averaging effect that leads to the strong convergence of $(E_b/N_0)_{\mathrm{min}}$ that is observed under rate-adaptive relaying. However, the variability of $(E_b/N_0)_{\mathrm{min}}$ still reduces under fixed-rate relaying, leading to weak convergence; i.e., as the number of hops grows, the $\min$ operation on the channel powers reduces both the mean and variance of the end-to-end mutual information, while the loss in the mean is more than compensated by the reduction in path loss as per-hop distances become shorter.
We note that in both fixed-rate and rate-adaptive multihop relaying, the enhancement in energy efficiency and end-to-end link reliability comes at a cost in terms of loss in spectral efficiency, as reflected through the wideband slope $S_0$, which decreases inversely
proportionally with spatial reuse separation $K$ (recall that $2 \leq K \leq N$). However, it should be emphasized that in comparison with no spatial reuse, the wideband slope improves significantly; justifying the spectral efficiency advantages of multihop routing techniques with spatial reuse in the wideband regime, especially in light of the earlier result in \cite{Oyman06b} suggesting that $S_0 = 2/N$ in quasi-static fading linear multihop networks with no spatial reuse.
For the following numerical study, we consider multihop routing over a frequency-selective channel with $V=2$, $W=4$ as well as a frequency-flat channel with $V=W=1$. For each channel tap, the fading realization has a complex Gaussian (Ricean) distribution with mean $1/\sqrt{2}$ and variance $1/2$, under an equal-power PDP. The path-loss exponent is assumed to be $p=4$, and the average received SNR between the terminals ${\cal T}_1$ and ${\cal T}_{N+1}$ is normalized to $0\,\mathrm{dB}$. We plot in Fig.~\ref{mutual_inf} the cumulative distribution function (CDF) of the end-to-end mutual information for both fixed-rate and rate-adaptive multihop relaying schemes with a varying number of hops $N = 1, 8$ in cases of frequency-flat fading and frequency-selective fading; we also consider spatial reuse separation values of $K=4,8$ when $N=8$. As predicted by our analysis, routing with spatial reuse combined with rate-adaptive relaying provides significant advantages in terms of spectral efficiency performance. With increasing number of hops, for both frequency-flat and frequency-selective channels, we observe that the CDF of mutual information sharpens around the mean, yielding significant enhancements, particularly at low outage probabilities, over single-hop communication due to multihop diversity gains. In other words, our results show that multihop diversity gains remain viable under frequency-selective fading and may be combined with the inherent frequency diversity available in each link to realize a higher overall diversity advantage. Finally, consistent with our analysis, the numerical results show that the rate of end-to-end link stabilization with multihopping is much faster with rate-adaptive relaying than with fixed-rate relaying.
\begin{figure} [t]
\centerline{\epsfxsize=3.8in \epsffile{isssta08_plot.eps}}
\caption{Cumulative distribution function (CDF) of end-to-end mutual information for fixed-rate and rate-adaptive multihop relaying schemes for various values of $N$ and $K$ in frequency-flat and frequency-selective channels.}
\label{mutual_inf}
\end{figure}
\section{Conclusions}
This paper presented analytical and empirical results to show the realizability of the multihop diversity advantages in the cases of fixed-rate and rate-adaptive routing with spatial reuse for wideband OFDM systems under wireless channel effects such as path-loss and quasi-static frequency-selective multipath fading. These contributions demonstrate the applicability of the multihop diversity phenomenon for general channel models and routing protocols beyond what was reported earlier in \cite{Oyman06b} and show that this phenomenon can be exploited in designing multihop routing protocols to simultaneously enhance the end-to-end link reliability, energy efficiency and spectral efficiency of OFDM-based wideband mesh networks.
\begin{footnotesize}
\renewcommand{\baselinestretch}{0.2}
\bibliographystyle{IEEE}
\section{The Proposed \textsc{C-BANDIT} Algorithms}
\label{sec:algorithm}
We now propose an approach that extends the classical multi-armed bandit algorithms (such as UCB, Thompson Sampling, KL-UCB) to the correlated MAB setting. At each round $t+1$, the UCB algorithm \cite{auer2002finite} selects the arm with the highest UCB index $I_{k,t}$, i.e.,
\begin{align}
k_{t+1} = \arg \max_{k \in \mathcal{K}} I_{k,t}, \quad I_{k,t} = \hat{\mu}_k(t) + B\sqrt{\frac{2 \log (t)}{n_k(t)}}, \label{eqn:UCB1_index}
\end{align}
where $\hat{\mu}_k(t)$ is the empirical mean of the rewards received from arm $k$ up to round $t$, and $n_k(t)$ is the number of times arm $k$ has been pulled up to round $t$. The second term in the UCB index causes the algorithm to explore arms that have been pulled only a few times (i.e., those with small $n_k(t)$). Recall that we assume all rewards to be bounded within an interval of size $B$. When the round index $t$ is clear from context, we abbreviate $\hat{\mu}_k(t)$ and $I_{k,t}$ to $\hat{\mu}_k$ and $I_k$ respectively in the rest of the paper.
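For concreteness, the UCB index in (\ref{eqn:UCB1_index}) can be sketched in a few lines (a toy illustration with made-up statistics, not the authors' code):

```python
import math

def ucb_index(mu_hat, n_k, t, B=1.0):
    """UCB index: empirical mean plus an exploration bonus that
    shrinks as arm k accumulates pulls."""
    if n_k == 0:
        return float("inf")  # unpulled arms are explored first
    return mu_hat + B * math.sqrt(2.0 * math.log(t) / n_k)

# arm 0: well-sampled with higher mean; arm 1: under-sampled with lower mean
indices = [ucb_index(0.6, 40, 100), ucb_index(0.5, 5, 100)]
k_next = max(range(2), key=lambda k: indices[k])
```

In this example the under-explored arm wins the argmax despite its lower empirical mean, which is exactly the exploration behavior the bonus term is designed to produce.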
Under Thompson sampling \cite{agrawal2013further}, the arm $k_{t+1} = \arg\max_{k \in \mathcal{K}} S_{k,t}$ is selected at round $t+1$, where $S_{k,t}$ is a sample obtained from the posterior distribution of $\mu_k$. That is,
\begin{align}
k_{t+1} = \arg\max_{k \in \mathcal{K}} S_{k,t}, \quad S_{k,t} \sim \mathcal{N}\left(\hat{\mu}_k(t), \frac{\beta B}{n_k(t) + 1}\right),
\label{eqn:ts_index}
\end{align}
where $\beta$ is a hyperparameter of the Thompson Sampling algorithm.
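A minimal sketch of the sampling rule in (\ref{eqn:ts_index}); we treat the width parameter $\beta B/(n_k(t)+1)$ as a standard deviation, which is one possible reading of the Gaussian notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_sample(mu_hat, n_k, B=1.0, beta=1.0):
    """Draw the Gaussian Thompson sample S_{k,t}; the posterior width
    beta*B/(n_k + 1) shrinks as arm k accumulates pulls, so samples
    concentrate around the empirical mean."""
    return rng.normal(mu_hat, beta * B / (n_k + 1))

# the next arm is the argmax of the posterior samples
samples = [thompson_sample(0.6, 40), thompson_sample(0.5, 5)]
k_next = int(np.argmax(samples))
```

Unlike the deterministic UCB index, the selection is randomized: under-sampled arms have wider posteriors and therefore occasionally produce the largest sample.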
In the correlated MAB framework, the rewards observed from one arm can help estimate the rewards from other arms. Our key idea is to use this information to reduce the amount of exploration required. We do so by evaluating the \emph{empirical pseudo-reward} of every other arm $\ell$ with respect to an arm $k$ at each round $t$. Using this additional information, we identify some arms as \emph{empirically non-competitive} at round $t$ and, only for this round, exclude them from the set of candidate arms considered by the underlying bandit algorithm (UCB, Thompson Sampling, or any other).
\subsection{Empirical Pseudo-Rewards}
\label{sec:pseudo_reward}
In our correlated MAB framework, pseudo-reward of arm $\ell$ with respect to arm $k$ provides us an estimate on the reward of arm $\ell$ through the reward sample obtained from arm $k$. We now define the notion of empirical pseudo-reward which can be used to obtain an \textit{optimistic estimate} of $\mu_\ell$ through just reward samples of arm $k$.
\begin{defn}[Empirical and Expected Pseudo-Reward]
\label{defn:empirical_pseudo_reward}
After $t$ rounds, arm $k$ is pulled $n_k(t)$ times. Using these $n_k(t)$ reward realizations, we can construct the empirical pseudo-reward $\hat{\phi}_{\ell, k}(t)$ for each arm $\ell$ with respect to arm $k$ as follows.
\begin{align}
\hat{\phi}_{\ell, k}(t) \triangleq \frac{\sum_{\tau=1}^{t} \mathbbm{1}_{k_\tau = k} \ s_{\ell, k}(r_{k_\tau})}{n_k(t)}, \qquad \ell \in \{1,\ldots, K\} \setminus \{k\},
\end{align}
The expected pseudo-reward of arm $\ell$ with respect to arm $k$ is defined as
\begin{align}
\phi_{\ell, k} \triangleq \E{s_{\ell, k}(R_k)}.
\end{align}
For convenience, we set $\hat{\phi}_{k,k}(t) = \hat{\mu}_k(t)$ and $\phi_{k,k} = \mu_k$.
\end{defn}
Observe that $\E{s_{\ell,k}(R_k)} \geq \E{\E{R_\ell \mid R_k}} = \mu_\ell$ by the law of total expectation. Due to this, the empirical pseudo-reward $\hat{\phi}_{\ell, k}(t)$ can be used to obtain an estimated upper bound on $\mu_{\ell}$. Note that the empirical pseudo-reward $\hat{\phi}_{\ell, k}(t)$ is defined with respect to arm $k$ and is only a function of the rewards observed by pulling arm $k$.
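Definition \ref{defn:empirical_pseudo_reward} amounts to averaging the pseudo-reward mappings over arm $k$'s observed rewards. A toy sketch with a made-up pseudo-reward function (our own example, not from the paper):

```python
import numpy as np

def empirical_pseudo_rewards(rewards_k, s_funcs, k):
    """Empirical pseudo-reward of every arm ell through arm k:
    hat{phi}_{ell,k} = average of s_{ell,k}(r) over arm k's reward samples.

    rewards_k: rewards observed on pulls of arm k.
    s_funcs:   s_funcs[ell][k] is the pseudo-reward function s_{ell,k}(r).
    """
    K = len(s_funcs)
    phi = np.empty(K)
    for ell in range(K):
        if ell == k:
            phi[ell] = np.mean(rewards_k)  # convention: hat{phi}_{k,k} = hat{mu}_k
        else:
            phi[ell] = np.mean([s_funcs[ell][k](r) for r in rewards_k])
    return phi

# 2-arm toy example: hypothetical upper-bound function s_{1,0}(r) = r + 0.2
s = [[None, None], [lambda r: r + 0.2, None]]
phi = empirical_pseudo_rewards([0.4, 0.6], s, k=0)
```

With two samples $0.4$ and $0.6$ from arm $0$, this gives $\hat{\phi}_{0,0} = \hat{\mu}_0 = 0.5$ and $\hat{\phi}_{1,0} = 0.7$, an optimistic estimate of $\mu_1$ built entirely from arm $0$'s samples.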
\subsection{The \textsc{C-Bandit} Algorithm}
\label{sec:modified_ucb}
Using the notion of empirical pseudo-rewards, we now describe a 3-step procedure to fundamentally generalize classical bandit algorithms for the correlated MAB setting.
\noindent
\textbf{Step 1: Identify the set $\mathcal{S}_t$ of significant arms:} At each round $t$, define $\mathcal{S}_t$ to be the set of arms that have at least $t/K$ samples, i.e., $\mathcal{S}_t = \{k \in \mathcal{K}: n_k(t) \geq \frac{t}{K}\}$. As $\mathcal{S}_t$ is the set of arms that have a relatively \emph{large} number of samples, we use these arms for the purpose of identifying \emph{empirically competitive} and \emph{empirically non-competitive} arms. Furthermore, define $k^{\text{emp}}(t)$ to be the arm that has the highest empirical mean in the set $\mathcal{S}_t$, i.e., $k^{\text{emp}}(t) = \argmax_{k \in \mathcal{S}_t} \hat{\mu}_k(t)$.
\footnote{If one were to use all arms (even those that have few samples) to identify empirically non-competitive arms, it could lead to incorrect inference, as pseudo-rewards with few samples have larger noise, which can in turn lead to elimination of the optimal arm. Using only the arms in $\mathcal{S}_t$, i.e., those that have been pulled at least $\frac{t}{K}$ times, allows us to ensure that the non-competitive arms are pulled only $\mathrm{O}(1)$ times, as we show in \Cref{sec:regret}. }
\noindent
\textbf{Step 2: Identify the set of \emph{empirically competitive} arms $\mathcal{A}_t$ :}
Using the empirical mean, $\hat{\mu}_{k^{\text{emp}}}(t)$, of the arm with highest empirical reward in the set $\mathcal{S}_t$, we define the notions of empirically non-competitive and empirically competitive arms below.
\begin{defn}[Empirically Non-Competitive arm at round $t$]
An arm $k$ is said to be Empirically Non-Competitive at round $t$, if $\min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k, \ell}(t) < \hat{\mu}_{k^{\text{emp}}}(t)$.
\end{defn}
\begin{defn}[Empirically Competitive arm at round $t$]
An arm $k$ is said to be Empirically Competitive at round $t$ if $\min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k, \ell}(t) \geq \hat{\mu}_{k^{\text{emp}}}(t)$. The set of all empirically competitive arms at round $t$ is denoted by $\mathcal{A}_t$.
\end{defn}
The expression $\min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k, \ell}(t)$ provides the tightest estimated upper bound on the mean of arm $k$ through the samples of arms in $\mathcal{S}_t$. If this estimated upper bound is smaller than the estimated mean of $k^{\text{emp}}(t)$, then we call arm $k$ \emph{empirically non-competitive}, as it seems unlikely to be optimal based on the samples of arms in $\mathcal{S}_t$. If the estimated upper bound of arm $k$ is greater than $\hat{\mu}_{k^{\text{emp}}}(t)$, i.e., $\min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k, \ell}(t) \geq \hat{\mu}_{k^{\text{emp}}}(t)$, we call arm $k$ empirically competitive at round $t$, as it cannot be inferred as sub-optimal through samples of arms in $\mathcal{S}_t$. Note that the sets of empirically competitive and empirically non-competitive arms are evaluated at each round $t$, and hence an arm that is empirically non-competitive at round $t$ may be empirically competitive in subsequent rounds.
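Steps 1 and 2 can be summarized in a few lines; the sketch below uses a hypothetical two-arm pseudo-reward table of our own and is not the authors' implementation:

```python
import numpy as np

def competitive_set(phi_hat, mu_hat, n, t):
    """Steps 1-2 of C-BANDIT: significant set S_t, empirical leader k_emp,
    and the empirically competitive set A_t.

    phi_hat[ell][k]: empirical pseudo-reward of arm ell w.r.t. arm k
                     (the diagonal holds the empirical means mu_hat).
    """
    K = len(mu_hat)
    S_t = [k for k in range(K) if n[k] >= t / K]   # significant arms
    k_emp = max(S_t, key=lambda k: mu_hat[k])      # empirical leader
    A_t = [k for k in range(K)
           if min(phi_hat[k][l] for l in S_t) >= mu_hat[k_emp]]
    return S_t, k_emp, A_t

phi = np.array([[0.70, 0.90],    # arm 0 estimated via arms 0, 1
                [0.55, 0.60]])   # arm 1 estimated via arms 0, 1
mu = np.array([0.70, 0.60])
S_t, k_emp, A_t = competitive_set(phi, mu, n=[6, 6], t=10)
```

Here both arms are significant, arm $0$ is the empirical leader, and arm $1$'s tightest pseudo-reward bound ($0.55$) falls below $\hat{\mu}_{k^{\text{emp}}} = 0.70$, so arm $1$ is empirically non-competitive for this round.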
\noindent
\textbf{Step 3: Play BANDIT algorithm in $\{\mathcal{A}_t \cup \{k^{\text{emp}}(t)\}\}$:} As empirically non-competitive arms appear sub-optimal at round $t$, we only consider the set of empirically competitive arms along with $k^{\text{emp}}(t)$ in this step of the algorithm. At round $t$, we play a BANDIT algorithm over the set $\mathcal{A}_t \cup \{k^{\text{emp}}(t)\}$. For instance, C-UCB pulls the arm $$k_t = \arg \max_{k \in \{\mathcal{A}_t \cup k^{\textit{emp}}(t) \}} I_{k,t-1},$$
where $I_{k,t-1}$ is the UCB index defined in \eqref{eqn:UCB1_index}.
Similarly, C-TS pulls the arm $$k_t = \arg \max_{k \in \{\mathcal{A}_t \cup k^{\textit{emp}}(t) \}} S_{k,t-1},$$ where $S_{k,t}$ is the Thompson sample defined in \eqref{eqn:ts_index}. At the end of each round we update the empirical pseudo-rewards $\hat{\phi}_{\ell,k_t}(t)$ for all $\ell$, and the empirical reward of arm $k_t$.
Note that our \textsc{C-BANDIT} approach allows using any classical Multi-Armed Bandit algorithm in the correlated Multi-Armed Bandit setting. This is important because some algorithms, such as Thompson Sampling and KL-UCB, are known to obtain much better empirical performance than UCB. Extending them to the correlated MAB setting allows us to retain this superior empirical performance even in the correlated setting. This benefit is demonstrated in our simulations and experiments described in \Cref{sec:simulation} and \Cref{sec:experiments}.
\begin{rem}[Pseudo-lower bounds]
Suppose one had information about pseudo-lower bounds (which are lower bounds on conditional expected rewards); then it is possible to use this in our correlated bandit framework. In step 2 of our algorithm, we identify an arm $k$ as empirically non-competitive if $\min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k,\ell}(t) < \hat{\mu}_{k^{\text{emp}}}(t)$. We can maintain an empirical pseudo-lower bound $\hat{w}_{i,j}(t)$ of each arm $i$ with respect to every other arm $j$. Then, we can replace step 2 of our algorithm by calling an arm empirically non-competitive if $\min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k,\ell}(t) < \max_{i \in \mathcal{S}_t} \max_{j \in \mathcal{S}_t} \hat{w}_{i,j}(t)$. In the situation where pseudo-lower bounds are unknown, they can be set to $-\infty$ and the algorithm reduces to the C-Bandit algorithm proposed in the paper. We can expect the empirical performance of this algorithm (which is aware of pseudo-lower bounds) to be slightly better than that of the C-Bandit algorithm. However, its regret guarantees will be the same as those of the C-Bandit algorithm. This is because pseudo-upper bounds are crucial to deciding whether an arm is competitive/non-competitive (defined in the next section), and pseudo-lower bounds are not. Put differently, even in the presence of pseudo-lower bounds, the definition of non-competitive and competitive arms (Definition 5) remains the same.
\end{rem}
\begin{algorithm}[t]
\hrule
\vspace{0.1in}
\begin{algorithmic}[1]
\STATE \textbf{Input:} Pseudo-rewards $s_{\ell,k}(r)$
\STATE \textbf{Initialize:} $n_k = 0, I_k = \infty$ for all $k \in \{1, 2, \dots K\}$
\FOR{ each round $t$}
\STATE Find $\mathcal{S}_t = \{k: n_k(t) \geq \frac{t}{K}\}$, the set of arms that have been pulled a significant number of times so far. Define $k^{\text{emp}}(t) = \argmax_{k \in \mathcal{S}_t} \hat{\mu}_k(t)$.
\STATE Initialize the empirically competitive set $\mathcal{A}_t$ as an empty set $\{ \}$.
\FOR{$k \in \mathcal{K}$}
\IF {$ \min_{\ell \in \mathcal{S}_t} \hat{\phi}_{k,\ell}(t) \geq \hat{\mu}_{k^{\text{emp}}}(t)$}
\STATE Add arm $k$ to the empirically competitive set: $\mathcal{A}_t = \mathcal{A}_t \cup \{k\}$
\ENDIF
\ENDFOR
\STATE Apply UCB1 over arms in $\mathcal{A}_t \cup \{k^{\text{emp}}(t)\} $ by pulling arm $k_t = \arg \max_{k \in \mathcal{A}_t \cup \{k^{\text{emp}}(t)\}} I_k(t-1)$
\STATE Receive reward $r_{t}$, and update $n_{k_t}(t) = n_{k_t}(t) + 1$
\STATE Update Empirical reward:
$\hat{\mu}_{k_t}(t) = \frac{\hat{\mu}_{k_t}(t-1)(n_{k_t}(t)-1) + r_t }{n_{k_t}(t)}$
\STATE Update the UCB Index: $I_{k_t}(t) = \hat{\mu}_{k_t}(t) + B\sqrt{\frac{2 \log t}{n_{k_t}(t)}}$
\STATE Update empirical pseudo-rewards for all $ k \neq k_t$: $\hat{\phi}_{k, k_t}(t) = \sum_{\tau: k_\tau = k_t} s_{k,k_\tau}( r_\tau)/n_{k_t}(t)$
\ENDFOR
\end{algorithmic}
\vspace{0.1in}
\hrule
\caption{C-UCB: Correlated UCB Algorithm}
\label{alg:formalAlgo}
\vspace{-0.2cm}
\end{algorithm}
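A compact Python rendering of Algorithm \ref{alg:formalAlgo} on a toy correlated two-arm instance; the latent-variable reward model and pseudo-reward table below are our own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def c_ucb(pull, s_funcs, K, T, B=1.0):
    """Minimal sketch of C-UCB (illustrative, not the authors' code).
    pull(k) returns a reward of arm k; s_funcs[l][k](r) is the
    pseudo-reward s_{l,k}(r), with s_funcs[k][k] the identity so that
    the diagonal empirical pseudo-reward equals the empirical mean."""
    n = np.zeros(K)        # pull counts n_k(t)
    mu = np.zeros(K)       # empirical means
    z = np.zeros((K, K))   # z[l, k] = running sum of s_{l,k}(r) over arm k's pulls
    for t in range(1, T + 1):
        sig = [k for k in range(K) if n[k] >= t / K]   # significant set S_t
        if np.any(n == 0) or not sig:
            k_t = int(np.argmin(n))                    # initial exploration
        else:
            k_emp = max(sig, key=lambda k: mu[k])      # empirical leader
            # Step 2: empirically competitive arms, plus k_emp
            cand = {k for k in range(K)
                    if min(z[k, l] / n[l] for l in sig) >= mu[k_emp]}
            cand.add(k_emp)
            ucb = mu + B * np.sqrt(2.0 * np.log(t) / n)
            k_t = max(cand, key=lambda k: ucb[k])      # Step 3: UCB over candidates
        r = pull(k_t)
        n[k_t] += 1
        mu[k_t] += (r - mu[k_t]) / n[k_t]              # running-mean update
        for l in range(K):                             # update pseudo-reward sums
            z[l, k_t] += s_funcs[l][k_t](r)
    return n, mu

# Toy correlated instance: latent X ~ U(0,1), R_k = 1{X < means[k]};
# the table below upper-bounds E[R_l | R_k = r] for this latent model.
means = [0.7, 0.4]
pull = lambda k: float(rng.random() < means[k])
s = [[lambda r: r, lambda r: 1.0 if r else 0.55],
     [lambda r: 0.6 if r else 0.1, lambda r: r]]
n, mu = c_ucb(pull, s, K=2, T=2000)
```

In this instance the expected pseudo-reward of arm $1$ through arm $0$ is $0.7 \cdot 0.6 + 0.3 \cdot 0.1 = 0.45 < \mu_1^* = 0.7$, so arm $1$ is non-competitive and ends up with far fewer pulls than the optimal arm.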
\begin{algorithm}[t]
\hrule
\vspace{0.1in}
\begin{algorithmic}[1]
\STATE Steps 1 - 10 as in C-UCB
\STATE \textbf{Apply TS over arms in $\mathcal{A}_t \cup \{k^{\text{emp}}(t)\} $} by pulling arm $k_t = \arg \max_{k \in \mathcal{A}_t \cup \{k^{\text{emp}}(t)\}} S_{k,t}$, where $S_{k,t} \sim \mathcal{N}\left(\hat{\mu}_k(t), \frac{\beta B}{n_k(t) + 1}\right)$.
\STATE Receive reward $r_{t}$, and update $n_{k_t}(t)$, $\hat{\mu}_{k_t}(t)$ and empirical pseudo-rewards $\hat{\phi}_{k,k_t}(t)$.
\end{algorithmic}
\vspace{0.1in}
\hrule
\caption{C-TS: Correlated TS Algorithm}
\label{alg:formalAlgoTS}
\vspace{-0.2cm}
\end{algorithm}
\section{Standard Results from Previous Works}
\begin{fact}[Hoeffding's inequality]
Let $Z_1, Z_2 \ldots Z_n$ be i.i.d random variables bounded between $[a, b]: a \leq Z_i \leq b$, then for any $\delta>0$, we have
$$\Pr\left(\left| \frac{\sum_{i = 1}^{n} Z_i}{n} - \E{Z_i} \right| \geq \delta\right) \leq 2\exp \left( \frac{-2 n \delta^2}{(b - a)^2}\right).$$
\end{fact}
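The two-sided bound can be sanity-checked numerically; the sketch below compares the empirical deviation probability of a Uniform$[0,1]$ sample mean against $2\exp(-2n\delta^2)$ (here $b - a = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)

def hoeffding_check(n=50, delta=0.2, trials=100_000):
    """Compare the empirical two-sided deviation probability of a
    Uniform[0,1] sample mean against Hoeffding's bound."""
    means = rng.random((trials, n)).mean(axis=1)
    empirical = np.mean(np.abs(means - 0.5) >= delta)
    bound = 2.0 * np.exp(-2.0 * n * delta**2)
    return empirical, bound

emp, bnd = hoeffding_check()
```

The empirical frequency sits well below the bound, as expected: Hoeffding is distribution-free and hence loose for any particular distribution.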
\begin{lem}[Standard result used in bandit literature]
If $\hat{\mu}_{k,n_k(t)}$ denotes the empirical mean of arm $k$ by pulling arm $k$ $n_k(t)$ times through any algorithm and $\mu_k$ denotes the mean reward of arm $k$, then we have
$$\Pr\left(\hat{\mu}_{k,n_k(t)} - \mu_k \geq \epsilon, \tau_2 \geq n_k(t) \geq \tau_1 \right) \leq \sum_{s = \tau_1}^{\tau_2}\exp \left(- 2 s \epsilon^2\right).$$
\label{lem:UnionBoundTrickInt}
\end{lem}
\begin{proof}
Let $Z_1, Z_2, ... Z_t$ be the reward samples of arm $k$ drawn separately. If the algorithm chooses to play arm $k$ for $m^{th}$ time, then it observes reward $Z_m$. Then the probability of observing the event $\hat{\mu}_{k,n_k(t)} - \mu_k \geq \epsilon, \tau_2 \geq n_k(t) \geq \tau_1$ can be upper bounded as follows,
\begin{align}
\Pr\left(\hat{\mu}_{k,n_k(t)} - \mu_k \geq \epsilon, \tau_2 \geq n_k(t) \geq \tau_1 \right) &= \Pr\left( \left( \frac{\sum_{i=1}^{n_k(t)}Z_i}{n_k(t)} - \mu_k \geq \epsilon \right), \tau_2 \geq n_k(t) \geq \tau_1 \right) \\
&\leq \Pr\left( \left(\bigcup_{m = \tau_1}^{\tau_2} \frac{\sum_{i=1}^{m}Z_i}{m} - \mu_k \geq \epsilon \right), \tau_2 \geq n_k(t) \geq \tau_1 \right) \label{upperBoundTrick}\\
&\leq \Pr \left(\bigcup_{m = \tau_1}^{\tau_2} \frac{\sum_{i=1}^{m}Z_i}{m} - \mu_k \geq \epsilon \right) \\
&\leq \sum_{s = \tau_1}^{\tau_2}\exp \left( - 2 s \epsilon^2\right).
\end{align}
\end{proof}
\begin{lem}[From Proof of Theorem 1 in \cite{auer2002finite}]
\label{lem:ucbindexmore}
Let $I_k(t)$ denote the UCB index of arm $k$ at round $t$, and $\mu_k = \E{g_{k}(X)}$ denote the mean reward of that arm. Then, we have
$$\Pr(\mu_k > I_k(t)) \leq t^{-3}.$$
\end{lem}
Observe that this bound does not depend on the number $n_k(t)$ of times arm $k$ is pulled. The UCB index is defined in \eqref{eqn:UCB1_index}.
\begin{proof}
This proof follows directly from \cite{auer2002finite}. We present the proof here for completeness as we use this frequently in the paper.
\begin{align}
\Pr(\mu_k > I_k(t)) &= \Pr\left(\mu_k > \hat{\mu}_{k,n_k(t)} + \sqrt{\frac{2 \log t}{n_k(t)}}\right) \\
&\leq \sum_{m = 1}^{t} \Pr \left(\mu_k > \hat{\mu}_{k,m} + \sqrt{\frac{2 \log t}{m}} \right) \label{unionTrick}\\
&= \sum_{m =1}^{t} \Pr \left(\hat{\mu}_{k,m} - \mu_k < - \sqrt{\frac{2 \log t}{m}}\right) \\
&\leq \sum_{m = 1}^{t} \exp\left(- 2 m \frac{2 \log t}{m}\right) \label{eqn:ucbindex}\\
&= \sum_{m = 1}^{t} t^{-4} \\
&= t^{-3}.
\end{align}
where \eqref{unionTrick} follows from the union bound and is a standard trick (\Cref{lem:UnionBoundTrickInt}) to deal with random variable $n_k(t)$. We use this trick repeatedly in the proofs. We have \eqref{eqn:ucbindex} from the Hoeffding's inequality.
\end{proof}
\begin{lem} Let $\E{\mathbbm{1}_{I_k > I_{k^*}}}$ be the expected number of times that $I_k(t) > I_{k^*}(t)$ in $T$ rounds. Then, we have
$$\E{\mathbbm{1}_{I_k > I_{k^*}}} = \sum_{t = 1}^{T} \Pr(I_k > I_{k^*}) \leq \frac{8 \log (T)}{\Delta_k^2} + \left(1 + \frac{\pi^2}{3} \right).$$
\label{lem:AuerResult}
\end{lem}
The proof follows the analysis in Theorem 1 of \cite{auer2002finite}. The analysis of $\Pr(I_k > I_{k^*})$ is done by evaluating the joint probability $\Pr\left(I_k(t) > I_{k^*}(t), n_k(t) \geq \frac{8 \log T}{\Delta_k^2}\right)$. The authors in \cite{auer2002finite} show that the probability of pulling arm $k$ jointly with the event that it has had at least $\frac{8 \log T}{\Delta_k^2}$ pulls decays with $t$, i.e., $\Pr\left(I_k(t) > I_{k^*}(t), n_k(t) \geq \frac{8 \log T}{\Delta_k^2}\right) \leq t^{-2}$.
\begin{lem}[Theorem 2 \cite{lai1985asymptotically}]
Consider a two armed bandit problem with reward distributions $\Theta = \{f_{R_1}(r), f_{R_2}(r)\}$, where the reward distribution of the optimal arm is $f_{R_1}(r)$ and for the sub-optimal arm is $f_{R_2}(r)$, and $\E{f_{R_1}(r)} > \E{f_{R_2}(r)}$; i.e., arm 1 is optimal. If it is possible to create an alternate problem with distributions $\Theta' = \{f_{R_1}(r), \tilde{f}_{R_2}(r)\}$ such that $\E{\tilde{f}_{R_2}(r)} > \E{f_{R_1}(r)}$ and $0< D(f_{R_2}(r)||\tilde{f}_{R_2}(r)) < \infty$ (equivalent to assumption 1.6 in \cite{lai1985asymptotically}), then for any policy that achieves sub-polynomial regret, we have $$\liminf\limits_{T \rightarrow \infty} \frac{\E{n_2(T)}}{\log T} \geq \frac{1}{D(f_{R_2}(r) || \tilde{f}_{R_2}(r))}.$$
\label{lem:LaiRobbins2Arms}
\end{lem}
\begin{proof}
Proof of this is derived from the analysis done in \cite{banditalgs}. We show the analysis here for completeness. A bandit instance $v$ is defined by the reward distributions of arm 1 and arm 2. Since policy $\pi$ achieves sub-polynomial regret, for any instance $v$, $\mathbb{E}_{v,\pi}\left[Reg(T)\right] = \mathrm{O}(T^p)$ as $T \rightarrow \infty$, for all $p > 0$.
Consider the bandit instances $\Theta = \{f_{R_1}(r), f_{R_2}(r)\}$, $\Theta' = \{f_{R_1}(r), \tilde{f}_{R_2}(r)\}$, where $\E{f_{R_2}(r)} < \E{f_{R_1}(r)} < \E{\tilde{f}_{R_2}(r)}$. The bandit instance $\Theta'$ is constructed by changing the reward distribution of arm 2 in the original instance, in such a way that arm 2 becomes optimal in instance $\Theta'$ without changing the reward distribution of arm 1 from the original instance.
From divergence decomposition lemma (derived in \cite{banditalgs}), it follows that $$D(\mathbb{P}_{\Theta,\Pi} || \mathbb{P}_{\Theta',\Pi}) = \mathbb{E}_{\Theta,\pi}\left[n_2(T)\right] D(f_{R_2}(r) || \tilde{f}_{R_2}(r)).$$
The high probability Pinsker's inequality (Lemma 2.6 from \cite{Tsybakov:2008:INE:1522486}, originally in \cite{highProbPinsker}) gives that for any event $A$, $$\mathbb{P}_{\Theta, \pi}(A) + \mathbb{P}_{\Theta',\pi}(A^c) \geq \frac{1}{2}\exp\left(-D(\mathbb{P}_{\Theta,\pi} || \mathbb{P}_{\Theta',\pi})\right),$$
or equivalently,
$$D(\mathbb{P}_{\Theta,\pi} || \mathbb{P}_{\Theta',\pi}) \geq \log \frac{1}{2(\mathbb{P}_{\Theta,\pi}(A) + \mathbb{P}_{\Theta',\pi}(A^c))}.$$
If arm 2 is suboptimal in a 2-armed bandit problem, then $\E{Reg(T)} = \Delta_2 \E{n_2(T)}.$ Expected regret in $\Theta$ is $$\mathbb{E}_{\Theta,\pi}\left[Reg(T)\right] \geq \frac{T \Delta_2}{2} \mathbb{P}_{\Theta,\pi}\left(n_2(T) \geq \frac{T}{2}\right),$$
Similarly regret in bandit instance $\Theta'$ is $$\mathbb{E}_{\Theta',\pi}\left[Reg(T)\right] \geq \frac{T \delta}{2} \mathbb{P}_{\Theta',\pi}\left(n_2(T) < \frac{T}{2}\right),$$
since suboptimality gap of arm $1$ in $\Theta'$ is $\delta$. Define $\kappa(\Delta_2, \delta) = \frac{\min(\Delta_2, \delta)}{2}$. Then we have, $$\mathbb{P}_{\Theta,\pi}\left(n_2(T) \geq \frac{T}{2}\right) + \mathbb{P}_{\Theta',\pi}\left(n_2(T) < \frac{T}{2}\right) \leq \frac{\mathbb{E}_{\Theta,\pi}\left[Reg(T)\right] + \mathbb{E}_{\Theta',\pi}\left[Reg(T)\right]}{\kappa(\Delta_2, \delta) T}.$$
On applying the high probability Pinsker's inequality and divergence decomposition lemma stated earlier, we get
\begin{align}
D(f_{R_2}(r) || \tilde{f}_{R_2}(r)) \mathbb{E}_{\Theta,\pi}\left[n_2(T)\right] &\geq \log\left(\frac{\kappa(\Delta_2, \delta) T}{2 (\mathbb{E}_{\Theta,\pi}\left[Reg(T)\right] + \mathbb{E}_{\Theta',\pi}\left[Reg(T)\right]) }\right) \\
&= \log\left(\frac{\kappa(\Delta_2,\delta)}{2}\right) + \log(T) \nonumber \\
&\qquad - \log(\mathbb{E}_{\Theta,\pi}\left[Reg(T)\right] + \mathbb{E}_{\Theta',\pi}\left[Reg(T)\right]).
\end{align}
Since policy $\pi$ achieves sub-polynomial regret for any bandit instance, $\mathbb{E}_{\Theta,\pi}\left[Reg(T)\right] + \mathbb{E}_{\Theta',\pi}\left[Reg(T)\right] \leq \gamma T^p$ for all $T$ and any $p > 0$,
hence,
\begin{align}
\liminf\limits_{T \rightarrow \infty} D(f_{R_2}(r) || \tilde{f}_{R_2}(r)) \frac{\mathbb{E}_{\Theta,\pi}\left[n_2(T)\right]}{\log T} &\geq 1 - \limsup\limits_{T \rightarrow \infty} \frac{\mathbb{E}_{\Theta,\pi}\left[Reg(T)\right] + \mathbb{E}_{\Theta',\pi}\left[Reg(T)\right]}{\log T} + \nonumber \\
&\quad \liminf\limits_{T \rightarrow \infty} \frac{\log\left(\frac{\kappa(\Delta_2,\delta)}{2}\right)}{\log T} \\
&= 1.
\end{align}
Hence, $\liminf\limits_{T \rightarrow \infty} \frac{\mathbb{E}_{\Theta,\pi}\left[n_2(T)\right]}{\log T} \geq \frac{1}{D(f_{R_2}(r) || \tilde{f}_{R_2}(r))}.$
\end{proof}
\section{Results for any \textsc{C-Bandit} Algorithm}
\begin{lem}
Define $E_1(t)$ to be the event that arm $k^*$ is empirically \textit{non-competitive} in round $t+1$. Then,
$$\Pr(E_1(t)) \leq 2Kt \exp \left(\frac{-t \Delta_{\text{min}}^2}{2 K}\right),$$
where $\Delta_{\text{min}} = \min_{k \neq k^*} \Delta_k$ is the gap between the best and second-best arms.
\label{lem:eliminatedOptimal}
\end{lem}
\begin{proof}
The arm $k^*$ is empirically non-competitive at round $t$ if $k^* \neq k^{\text{emp}}$ and the empirical pseudo-reward of arm $k^*$ with respect to arms $\ell \in \mathcal{S}_t$ is smaller than $\hat{\mu}_{k^{\text{emp}}}(t)$. This event can only occur if at least one of the following two conditions is satisfied: i) the empirical mean of some arm $k^{\text{emp}} \neq k^*$ is greater than $\mu_{k^*} - \frac{\Delta_{\text{min}}}{2}$, or ii) the empirical pseudo-reward of arm $k^*$ with respect to some arm in $\mathcal{S}_t$ is smaller than $\mu_{k^*} - \frac{\Delta_{\text{min}}}{2}$. We use this observation to analyze $\Pr(E_1(t))$.
\begin{align}
&\Pr(E_1(t)) \leq \Pr\left( \left(\max_{\{\ell: n_\ell(t) > t/K, \ell \neq k^*\}} \hat{\mu}_{\ell}(t) > \mu_{k^*} - \frac{\Delta_{\text{min}}}{2} \right) \bigcup \left(\min_{\{\ell:n_\ell(t) > t/K\}} \hat{\phi}_{k^*, \ell}(t) < \mu_{k^*} - \frac{\Delta_{\text{min}}}{2} \right) \right) \\
&\leq \Pr\left(\max_{\{\ell:n_\ell(t) > t/K, \ell \neq k^*\}} \hat{\mu}_{\ell}(t) > \mu_{k^*} - \frac{\Delta_{\text{min}}}{2} \right) + \Pr\left(\min_{\{\ell:n_\ell(t) > t/K\}} \hat{\phi}_{k^*, \ell}(t) < \mu_{k^*} - \frac{\Delta_{\text{min}}}{2} \right)
\label{eq:simpleUnbd}\\
&\leq \sum_{\ell \neq k^*} \Pr\left(\hat{\mu}_{\ell}(t) > \mu_{k^*} - \frac{\Delta_{\text{min}}}{2}, n_{\ell}(t) > \frac{t}{K} \right) + \sum_{\ell = 1}^{K} \Pr\left(\hat{\phi}_{k^*, \ell}(t) < \mu_{k^*} - \frac{\Delta_{\text{min}}}{2}, n_\ell(t) > \frac{t}{K} \right) \label{eq:atleastOne} \\
&= \sum_{\ell \neq k^*} \Pr\left(\hat{\mu}_{\ell}(t) - \mu_{\ell} > \mu_{k^*} - \mu_{\ell} - \frac{\Delta_{\text{min}}}{2}, n_{\ell}(t) > \frac{t}{K} \right) \nonumber \\
&+ \sum_{\ell = 1}^{K} \Pr\left(\hat{\phi}_{k^*, \ell}(t) - \phi_{k^*, \ell} < \mu_{k^*} - \phi_{k^*, \ell} - \frac{\Delta_{\text{min}}}{2}, n_\ell(t) > \frac{t}{K} \right) \\
&\leq \sum_{\ell \neq k^*} \Pr\left(\frac{\sum_{\tau = 1}^{t} \mathbbm{1}_{\{k_\tau = \ell\}} r_\tau}{n_\ell(t)} - \mu_{\ell} > \frac{\Delta_{\text{min}}}{2}, n_{\ell}(t) > \frac{t}{K} \right) \nonumber \\
&+ \sum_{\ell = 1}^{K} \Pr\left(\frac{\sum_{\tau = 1}^{t} \mathbbm{1}_{\{k_\tau = \ell\}} s_{k^*,\ell}(r_\tau)}{n_\ell(t)} - \phi_{k^*, \ell} < - \frac{\Delta_{\text{min}}}{2}, n_\ell(t) > \frac{t}{K} \right) \\
&\leq 2Kt \exp\left(\frac{- t \Delta_{\text{min}}^2}{2 K}\right), \label{eq:appliedChernoff}
\end{align}
Here \eqref{eq:simpleUnbd} follows from the union bound. We have \eqref{eq:appliedChernoff} from Hoeffding's inequality, as the rewards $\{r_\tau : \tau = 1,\ldots, t, ~ k_{\tau} = \ell\}$ and the pseudo-rewards $\{s_{k^*,\ell}(r_\tau) : \tau = 1,\ldots, t, ~ k_{\tau} = \ell\}$ each form a collection of i.i.d. random variables bounded between $[-1,1]$, with means $\mu_\ell$ and $\phi_{k^*,\ell}$ respectively. The term $t$ before the exponent in \eqref{eq:appliedChernoff} arises as the random variable $n_\ell(t)$ can take values from $t/K$ to $t$ (\Cref{lem:UnionBoundTrickInt}).
\end{proof}
\begin{lem} For a sub-optimal arm $k \neq k^*$ with sub-optimality gap $\Delta_k$,
$$\Pr\left(k = k^{\text{emp}}(t), n_{k^*}(t) \geq \frac{t}{K}\right) \leq 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right).$$
\label{lem:kempLem}
\end{lem}
\begin{proof}
We bound this probability as,
\begin{align}
&\Pr\left(k = k^{\text{emp}}(t), n_{k^*}(t) \geq \frac{t}{K}\right) \nonumber\\
&= \Pr\left(k = k^{\text{emp}}(t), n_{k^*}(t) \geq \frac{t}{K}, n_k(t) \geq \frac{t}{K}\right) \label{eqn:kempMeanstk}\\
&\leq \Pr\left(\hat{\mu}_k(t) \geq \hat{\mu}_{k^*}(t), n_k(t) \geq \frac{t}{K}, n_{k^*}(t) \geq \frac{t}{K}\right) \\
&\leq \Pr\left( \left(\hat{\mu}_{k^*}(t) < \mu_{k^*} - \frac{\Delta_k}{2} \bigcup \hat{\mu}_k(t) > \mu_{k^*} - \frac{\Delta_k}{2} \right), n_k(t) \geq \frac{t}{K}, n_{k^*}(t) \geq \frac{t}{K}\right) \label{eqn:necessaryThings}\\
&= \Pr\left( \left(\hat{\mu}_{k^*}(t) < \mu_{k^*} - \frac{\Delta_k}{2} \bigcup \hat{\mu}_k(t) > \mu_k + \frac{\Delta_k}{2} \right), n_k(t) \geq \frac{t}{K}, n_{k^*}(t) \geq \frac{t}{K}\right) \\
&\leq \Pr\left( \hat{\mu}_{k^*}(t) - \mu_{k^*} < - \frac{\Delta_k}{2}, n_{k^*}(t) \geq \frac{t}{K}\right) + \Pr\left( \hat{\mu}_k(t) - \mu_k > \frac{\Delta_k}{2} , n_k(t) \geq \frac{t}{K}\right) \\
&\leq 2t\exp\left(\frac{-t \Delta_k^2}{2K}\right) \label{eqn:chernoffplusunion}
\end{align}
We have \eqref{eqn:kempMeanstk} as arm $k$ needs to be pulled at least $\frac{t}{K}$ times in order to be $k^{\text{emp}}(t)$ at round $t$: the selection of $k^{\text{emp}}$ is only done from the set of arms that have been pulled at least $\frac{t}{K}$ times. Here, \eqref{eqn:chernoffplusunion} follows from Hoeffding's inequality. The term $t$ before the exponent in \eqref{eqn:chernoffplusunion} arises as the random variable $n_k(t)$ can take values from $t/K$ to $t$ (\Cref{lem:UnionBoundTrickInt}).
\end{proof}
\begin{lem}
If for a suboptimal arm $k \neq k^*$, $\tilde{\Delta}_{k,k^*} > 0$, then,
$$\Pr(k_{t+1} = k, n_{k^*}(t) = \max_{k'} n_{k'}(t)) \leq t \exp\left(\frac{-t\tilde{\Delta}_{k,k^*}^2}{2K}\right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right).$$
Moreover, if $\tilde{\Delta}_{k,k^*} \geq 2\sqrt{\frac{2 K \log t_0}{t_0}}$ for some constant $t_0 > 0$, then
$$\Pr(k_{t+1} = k , n_{k^*}(t) = \max_{k'} n_{k'}(t)) \leq 3t^{-3} \quad \forall t > t_0.$$
\label{lem:suboptimalNotCompetitive}
\end{lem}
\begin{proof}
We now bound this probability as,
\begin{align}
&\Pr(k_{t+1} = k , n_{k^*}(t) = \max_k n_k(t)) \nonumber \\ &\leq\Pr\left(k_{t+1} = k, n_{k^*}(t) \geq \frac{t}{K}\right) \label{eq:needAtleast}\\
&= \Pr\left(k \in \{\mathcal{A}_t \cup \{k_{\text{emp}}(t)\} \}, k_{t+1} = k, n_{k^*}(t) \geq \frac{t}{K}\right) \label{eq:hastohappen} \\
&\leq \Pr\left(k \in \mathcal{A}_t, k_{t+1} = k, n_{k^*}(t) \geq \frac{t}{K}\right) + \Pr\left(k = k^{\text{emp}}(t), n_{k^*}(t) \geq \frac{t}{K}\right) \\
&\leq \Pr\left(k \in \mathcal{A}_t, k_{t+1} = k, n_{k^*}(t) \geq \frac{t}{K}\right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right) \label{eq:fromkempLem}\\
&\leq \Pr\left(\hat{\mu}_{k^*}(t) < \hat{\phi}_{k,k^*}(t), k_{t+1} = k, n_{k^*}(t) \geq \frac{t}{K} \right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right) \label{eq:necessaryCondn}\\
&\leq \Pr\left(\hat{\mu}_{k^*}(t) < \hat{\phi}_{k,k^*}(t) , n_{k^*}(t) \geq \frac{t}{K} \right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right)\\
&\leq \Pr\left(\frac{\sum_{\tau = 1}^{t}\mathbbm{1}_{\{k_\tau = k^*\}}r_\tau}{n_{k^*}(t)} < \frac{\sum_{\tau = 1}^{t}\mathbbm{1}_{\{k_\tau = k^*\}}s_{k,k^*}(r_\tau)}{n_{k^*}(t)} , n_{k^*}(t) \geq \frac{t}{K}\right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right)\\
&= \Pr\left(\frac{\sum_{\tau = 1}^{t} \mathbbm{1}_{\{k_\tau = k^*\}}(r_{\tau} - s_{k,k^*}(r_\tau))}{n_{k^*}(t)} - (\mu_{k^*} - \phi_{k,k^*}) < - \tilde{\Delta}_{k,k^*} , n_{k^*}(t) \geq \frac{t}{K} \right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right) \label{eqn:lemchernoff}\\
&\leq t \exp \left( \frac{- t \tilde{\Delta}_{k,k^*}^2}{2 K} \right) + 2t\exp\left(\frac{-t\Delta_k^2}{2K}\right) \\
&\leq 3t^{-3} \quad \forall t > t_0.\label{lastStepHere}
\end{align}
We have \eqref{eq:needAtleast} because $n_{k^*}(t)$ must be at least $\frac{t}{K}$ for $n_{k^*}(t)$ to equal $\max_{k} n_k(t)$. Equation \eqref{eq:hastohappen} holds as arm $k$ needs to be in the set $\{\mathcal{A}_t \cup \{k^{\text{emp}}(t)\} \}$ to be selected by C-BANDIT at round $t$. Inequality \eqref{eq:fromkempLem} follows from \Cref{lem:kempLem}. Inequality \eqref{eq:necessaryCondn} holds because $\hat{\phi}_{k,k^*}(t) > \hat{\mu}_{k^*}(t)$ is a necessary condition for arm $k$ to be in the competitive set $\mathcal{A}_t$ at round $t$. Step \eqref{eqn:lemchernoff} follows from Hoeffding's inequality, upon noting that the differences $\{r_\tau - s_{k,k^*}(r_\tau): \tau=1,\ldots, t, ~ k_{\tau}=k^*\}$ form a collection of i.i.d. random variables, each bounded in $[-1,1]$, with mean $(\mu_{k^*} - \phi_{k, k^*})$. The factor $t$ before the exponent in \eqref{eqn:lemchernoff} arises because the random variable $n_{k^*}(t)$ can take values from $t/K$ to $t$ (\Cref{lem:UnionBoundTrickInt}). Step \eqref{lastStepHere} follows from the assumption that $\tilde{\Delta}_{k,k^*} \geq 2\sqrt{\frac{2 K \log t_0}{t_0}}$ for some constant $t_0 > 0$ (and hence $\Delta_k \geq \tilde{\Delta}_{k,k^*} \geq 2\sqrt{\frac{2 K \log t_0}{t_0}}$, since $\phi_{k,k^*} \geq \mu_k$).
\end{proof}
\section{Algorithm specific results for C-UCB}
\begin{lem}
If $\Delta_{\text{min}} \geq 4\sqrt{\frac{2K \log t_0}{t_0}}$ for some constant $t_0 > 0$, then,
$$\Pr(k_{t+1} = k , n_k(t) \geq s) \leq 2(K+1) t^{-3} \quad \text{for } s > \frac{t}{2 K}, \forall t > t_0.$$
\label{lem:noMorePulls}
\end{lem}
\begin{proof}
By noting that $k_{t + 1} = k$ corresponds to arm $k$ having the highest index among the set of arms that are not empirically \textit{non-competitive} (denoted by $\mathcal{A}$), we have,
\begin{align}
\Pr(k_{t + 1} = k , n_k(t) \geq s) &= \Pr(I_k(t) = \arg \max_{k' \in \mathcal{A}} I_{k'}(t) , n_k(t) \geq s) \\
&\leq \Pr(E_1(t) \cup \left(E_1^c(t), I_k(t) > I_{k^*}(t)\right) , n_k(t) \geq s) \label{eliminating1}\\
&\leq \Pr(E_1(t) , n_k(t) \geq s) + \Pr(E_1^c(t), I_k(t) > I_{k^*}(t) , n_k(t) \geq s ) \label{unionBound}\\
&\leq 2Kt \exp\left(\frac{-t \Delta_{\text{min}}^2}{2 K}\right) + \Pr\left(I_k(t) > I_{k^*}(t) , n_k(t) \geq s\right). \label{usedHoeffding}
\end{align}
Here $E_1(t)$ is the event described in \Cref{lem:eliminatedOptimal}. If arm $k^*$ is not empirically non-competitive at round $t$, then arm $k$ can only be pulled in round $t + 1$ if $I_k(t) > I_{k^*}(t)$, which gives \eqref{eliminating1}. Inequalities \eqref{unionBound} and \eqref{usedHoeffding} follow from the union bound and \Cref{lem:eliminatedOptimal}, respectively.
We now bound the second term in \eqref{usedHoeffding}.
\begin{align}
&\Pr(I_k(t) > I_{k^*}(t) , n_k(t) \geq s) = \nonumber \\
&\Pr\left(I_k(t) > I_{k^*}(t) , n_k(t) \geq s, \mu_{k^*} \leq I_{k^*}(t)\right) + \nonumber \\
&\quad \Pr\left(I_k(t) > I_{k^*}(t), n_k(t) \geq s | \mu_{k^*} > I_{k^*}(t) \right) \times \Pr\left(\mu_{k^*} > I_{k^*}(t) \right) \label{conditionTerm} \\
&\leq \Pr\left(I_k(t) > I_{k^*}(t), n_k(t) \geq s, \mu_{k^*} \leq I_{k^*}(t)\right) + \Pr\left(\mu_{k^*} > I_{k^*}(t)\right) \label{droppingTerms}\\
&\leq \Pr\left(I_k(t) > I_{k^*}(t), n_k(t) \geq s, \mu_{k^*} \leq I_{k^*}(t)\right) + t^{-3} \label{usingHoeffdingAgain}\\
&\leq \Pr\left(I_k(t) > \mu_{k^*} , n_k(t) \geq s\right) + t^{-3} \label{usingConditioning} \\
&= \Pr\left(\hat{\mu}_k(t) + \sqrt{\frac{2 \log t}{n_k(t)}} > \mu_{k^*} , n_k(t) \geq s \right) + t^{-3} \label{expandingIndex}\\
&= \Pr\left(\hat{\mu}_k(t) - \mu_k > \mu_{k^*} - \mu_k - \sqrt{\frac{2 \log t}{n_k(t)}} , n_k(t) \geq s \right) + t^{-3} \\
&= \Pr\left( \frac{\sum_{\tau = 1}^{t} \mathbbm{1}_{\{k_\tau = k\}}r_\tau}{n_k(t)} - \mu_k > \Delta_k - \sqrt{\frac{2 \log t}{n_k(t)}} , n_k(t) \geq s\right) + t^{-3} \\
&\leq t \exp\left(-2 s \left(\Delta_k - \sqrt{\frac{2 \log t}{s}}\right)^2\right) + t^{-3} \label{eqn:chernoffagain}\\
&\leq t^{-3}\exp\left(-2 s \left(\Delta_k^2 - 2 \Delta_k \sqrt{\frac{2 \log t}{s}}\right)\right) + t^{-3} \\
&\leq 2 t^{-3} \quad \text{ for all } t > t_0. \label{finalCondn}
\end{align}
Equation \eqref{conditionTerm} holds by the law of total probability, $P(A) = P(A|B)P(B) + P(A|B^c)P(B^c)$. Inequality \eqref{usingHoeffdingAgain} follows from \Cref{lem:ucbindexmore}. Step \eqref{expandingIndex} follows from the definition of $I_k(t)$. Inequality \eqref{eqn:chernoffagain} follows from Hoeffding's inequality, and the factor $t$ before the exponent in \eqref{eqn:chernoffagain} arises because the random variable $n_k(t)$ can take values from $s$ to $t$ (\Cref{lem:UnionBoundTrickInt}). Inequality \eqref{finalCondn} follows from the facts that $s > \frac{t}{2 K}$ and $\Delta_k \geq 4\sqrt{\frac{2K \log t_0}{t_0}}$ for some constant $t_0 > 0.$
Plugging this into \eqref{usedHoeffding} gives us,
\begin{align}
\Pr(k_{t+1} = k , n_k (t) \geq s) &\leq 2 K t \exp\left(\frac{-t \Delta_{\text{min}}^2}{2 K}\right) + \Pr(I_k(t) > I_{k^*}(t) , n_k(t) \geq s) \\
&\leq 2Kt\exp\left(\frac{-t \Delta_{\text{min}}^2}{2 K}\right) + 2t^{-3} \\
&\leq 2(K+1) t^{-3}. \label{usingConditiont0}
\end{align}
Here, \eqref{usingConditiont0} follows from the fact that $\Delta_{\text{min}} \geq 4\sqrt{\frac{2K \log t_0}{t_0}}$ for some constant $t_0 > 0$.
\end{proof}
\begin{lem}
If $\Delta_{\text{min}} \geq 4\sqrt{\frac{2 K \log t_0}{t_0}}$ for some constant $t_0 > 0$, then, $$\Pr\left(n_k(t) \geq \frac{t}{ K}\right) \leq (2K + 2)K \left(\frac{t}{K}\right)^{-2} \quad \forall t > K t_0.$$
\label{lem:suboptimalNotPulled}
\end{lem}
\begin{proof}
We expand $\Pr\left(n_k(t) > \frac{t}{K}\right)$ as,
\begin{align}
\Pr\left(n_k(t) \geq \frac{t}{K}\right) &= \Pr\left( n_{k}(t) \geq \frac{t}{K} \mid n_k(t - 1) \geq \frac{t}{K} \right) \Pr\left( n_k(t - 1) \geq \frac{t}{K} \right) + \nonumber \\
&\quad \Pr\left(k_t = k , n_k(t - 1) = \frac{t}{K} - 1\right) \\
&\leq \Pr\left(n_k(t - 1) \geq \frac{t}{K}\right) + \Pr\left(k_t = k , n_k(t - 1) = \frac{t}{K} - 1\right) \\
&\leq \Pr\left(n_k(t - 1) \geq \frac{t}{K}\right) + (2K + 2) (t - 1)^{-3} \quad \forall (t - 1) > t_0. \label{fromPrevLemma}
\end{align}
Here, \eqref{fromPrevLemma} follows from \Cref{lem:noMorePulls}.\\
This gives us $$\Pr\left(n_k(t) \geq \frac{t}{K}\right) - \Pr\left(n_k(t - 1) \geq \frac{t}{K}\right) \leq (2K + 2)(t - 1)^{-3}, \quad \forall (t - 1) > t_0.$$
Now consider the summation $$ \sum_{\tau = \frac{t}{K}}^{t} \Pr\left(n_k(\tau) \geq \frac{t}{K}\right) - \Pr\left(n_k(\tau - 1) \geq \frac{t}{K}\right) \leq \sum_{\tau = \frac{t}{K}}^{t}(2K + 2)(\tau - 1)^{-3}.$$ This gives us, $$\Pr\left(n_k(t) \geq \frac{t}{K}\right) - \Pr\left(n_k\left(\frac{t}{K} - 1\right) \geq \frac{t}{K}\right) \leq \sum_{\tau = \frac{t}{K}}^{t}(2K + 2)(\tau - 1)^{-3}.$$
Since $\Pr\left(n_k\left(\frac{t}{K} - 1\right)\geq \frac{t}{K}\right) = 0$, we have,
\begin{align}
\Pr\left(n_k(t) \geq \frac{t}{K}\right) &\leq \sum_{\tau = \frac{t}{K}}^{t}(2K + 2)(\tau - 1)^{-3} \\
&\leq (2K + 2)K \left(\frac{t}{K}\right)^{-2} \quad \forall t > K t_0.
\end{align}
\end{proof}
\section{Regret Bounds for C-UCB}
\label{proof:UCB}
\textbf{Proof of Theorem 1}
We bound $\E{n_k(T)}$ as,
\begin{align}
\E{n_k(T)} &= \E{\sum_{t = 1}^{T}\mathbbm{1}_{\{k_t = k\}}}\\
&= \sum_{t = 0}^{T-1} \Pr(k_{t+1} = k) \\
&= \sum_{t = 1}^{K t_0} \Pr(k_t = k) + \sum_{t = K t_0}^{T-1} \Pr(k_{t+1} = k) \\
&\leq K t_0 + \sum_{t = K t_0}^{T-1}\Pr(k_{t+1} = k, n_{k^*}(t) = \max_{k'} n_{k'}(t)) \nonumber \\
&+ \sum_{t = K t_0}^{T-1} \sum_{k' \neq k^*} \Pr(n_{k'}(t) = \max_{k''} n_{k''}(t))\Pr(k_{t+1} = k | n_{k'}(t) = \max_{k''} n_{k''}(t)) \\
&\leq K t_0 + \sum_{t = K t_0}^{T-1} \Pr(k_{t+1} = k, n_{k^*}(t) = \max_{k'} n_{k'}(t)) \nonumber \\
&+ \sum_{t = K t_0}^{T-1} \sum_{k' \neq k^*} \Pr(n_{k'}(t) = \max_{k''} n_{k''}(t)) \\
&\leq K t_0 + \sum_{t = K t_0}^{T - 1} 3t^{-3} + \sum_{t = K t_0}^{T} \sum_{k' \neq k^*} \Pr\left(n_{k'}(t) \geq \frac{t}{K}\right) \label{usingSomeLemma1}\\
&\leq K t_0 + \sum_{t = 1}^{T} 3t^{-3} + (K + 1)K(K - 1) \sum_{t = K t_0}^{T} 2\left(\frac{t}{K}\right)^{-2}. \label{usingSomeOtherLemma}
\end{align}
Here, \eqref{usingSomeLemma1} follows from \Cref{lem:suboptimalNotCompetitive} and \eqref{usingSomeOtherLemma} follows from \Cref{lem:suboptimalNotPulled}.
\textbf{Proof of Theorem 2}
For any suboptimal arm $k \neq k^*$,
\begin{align}
\E{n_k(T)} &\leq \sum_{t = 1}^{T} \Pr(k_t = k) \\
&= \sum_{t = 1}^{T} \Pr\left(\left(E_1(t), k_t = k\right) \cup \left(E_1^c(t), I_k(t - 1) > I_{k^*}(t - 1), k_t = k\right)\right) \label{beatOptimal} \\
&\leq \sum_{t = 1}^{T} \Pr(E_1(t)) + \sum_{t = 1}^{T} \Pr\left(E_1^c(t), I_k(t - 1) > I_{k^*}(t - 1), k_t = k\right) \nonumber\\
&\leq \sum_{t = 1}^{T} \Pr(E_1(t)) + \sum_{t = 1}^{T} \Pr\left(I_k(t - 1) > I_{k^*}(t - 1), k_t = k\right) \\
&\leq \sum_{t = 1}^{T} 2Kt \exp\left(- \frac{t \Delta_{\text{min}}^2}{2 K}\right) + \sum_{t = 0}^{T-1} \Pr\left(I_k(t) > I_{k^*}(t), k_{t+1} = k\right) \label{eliminatedArmProb1} \\
&= \sum_{t = 1}^{T} 2Kt \exp\left(- \frac{t \Delta_{\text{min}}^2}{2 K}\right) + \E{n_{I_k > I_{k^*}}(T)} \label{followFromDefinition} \\
&\leq 8 \frac{\log (T)}{\Delta_k^2} + \left(1 + \frac{\pi^2}{3}\right) + \sum_{t = 1}^{T} 2Kt \exp\left(- \frac{ t \Delta_{\text{min}}^2}{2 K}\right). \label{fromAuer1}
\end{align}
Here, \eqref{eliminatedArmProb1} follows from \Cref{lem:eliminatedOptimal}. We have \eqref{followFromDefinition} from the definition of $\E{n_{I_k > I_{k^*}}(T)}$ in \Cref{lem:AuerResult}, and \eqref{fromAuer1} follows from \Cref{lem:AuerResult}.
\textbf{Proof of Theorem 3:} Follows directly by combining the results of Theorems 1 and 2.
\section{Regret analysis for the C-TS Algorithm}
\newadd{
We now present results for C-TS in the scenario where $K = 2$ and Thompson sampling is employed with Beta priors \cite{agrawal2013further}. In order to prove results for C-TS, we assume that rewards are either $0$ or $1$. Thompson sampling with a Beta prior maintains a posterior distribution on the mean of arm $k$ of the form $\text{Beta}\left(n_k(t) \hat{\mu}_k(t) + 1, n_k(t) (1 - \hat{\mu}_k(t)) + 1 \right)$. At each round, it generates a sample $S_{k}(t) \sim \text{Beta}\left(n_k(t) \hat{\mu}_k(t) + 1, n_k(t) (1 - \hat{\mu}_k(t)) + 1 \right)$ for each arm $k$ and selects the arm $k_{t+1} = \argmax_{k \in \mathcal{K}} S_{k}(t)$. The C-TS algorithm with Beta prior uses this Thompson sampling procedure in its last step, i.e., $k_{t+1} = \argmax_{k \in \mathcal{C}_t} S_{k}(t)$, where $\mathcal{C}_t$ is the set of empirically competitive arms at round $t$. We show that in a 2-armed bandit problem, the regret is $\mathrm{O}(1)$ if the sub-optimal arm $k$ is non-competitive and $\mathrm{O}(\log T)$ otherwise.
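The decision step described above can be sketched in a few lines (a minimal sketch under our binary-reward assumption; the routine that computes the empirically competitive set $\mathcal{C}_t$ is assumed to exist elsewhere and is passed in here as a plain set):

```python
import random

def cts_select_arm(successes, failures, competitive, rng=random.Random(0)):
    """One decision step of C-TS with Beta priors (sketch).

    successes[k] / failures[k] count the 0/1 rewards observed for arm k, so
    the posterior on arm k's mean is Beta(successes[k] + 1, failures[k] + 1).
    `competitive` is the set C_t of empirically competitive arms, assumed to
    be computed elsewhere from the empirical pseudo-rewards.
    """
    # Draw a posterior sample S_k(t) for every competitive arm ...
    samples = {k: rng.betavariate(successes[k] + 1, failures[k] + 1)
               for k in competitive}
    # ... and pull the competitive arm with the largest sample.
    return max(samples, key=samples.get)
```

For example, `cts_select_arm([0, 9], [9, 0], {1})` returns arm `1`, since arm `1` is the only competitive arm; when `competitive` contains all arms, the procedure reduces to standard Thompson sampling.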
For the purpose of regret analysis of C-TS, we define two thresholds, a lower threshold $L_k$, and an upper threshold $U_k$ for arm $k\neq k^*$,
\begin{align}
U_k = \mu_k + \frac{\Delta_k}{3}, \hspace*{3em} L_k = \mu_{k^*} - \frac{\Delta_k}{3}. \label{eq:threshold}
\end{align}
Let $E^{\mu}_{k}(t)$ and $E^{S}_{k}(t)$ be the events that,
\begin{align}
E^{\mu}_k(t) &= \{\hat{\mu}_k(t) \leq U_k \} \nonumber\\
E^{S}_k(t) &= \{S_k(t) \leq L_k \} \label{eq:events}.
\end{align}
To analyse the regret of C-TS, we first show that the expected number of rounds in which arm $k$ is pulled jointly with the event $n_k(t-1) \geq \frac{t}{2}$ is bounded above by an $\mathrm{O}(1)$ constant, which is independent of the total number of rounds $T$.
\begin{lem}
\label{lem:notPullsuboptimalifEnough}
If $\Delta_{k} \geq 4 \sqrt{\frac{ 2K \log t_{0}}{t_{0}}}$ for some constant $t_{0}>0$, then,
\begin{align*}
\sum_{t = 2t_0}^{T} \Pr\left(k_{t}=k, n_{k}(t-1) \geq \frac{t}{2}\right) = \mathrm{O}(1)
\end{align*}
where $k \neq k^{*}$ is a sub-optimal arm.
\end{lem}
\begin{proof}
We start by bounding the probability of the pull of $k$-th arm at round $t$ as follows,
\begin{align}
\Pr\left(k_{t}=k, n_{k}(t-1) \geq \frac{t}{2}\right) \overset{(a)}{\leq} & \Pr\left(E_{1}(t), k_{t}=k, n_{k}(t-1) \geq \frac{t}{2}\right) + \nonumber \\
& \Pr\left(\overline{E_{1}(t)}, k_{t}=k, n_{k}(t-1) \geq \frac{t}{2}\right) \nonumber\\
\overset{(b)}{\leq} & 2Kt \exp\left(\frac{-t \Delta_{\text{min}}^2}{2 K}\right) + \Pr\left(\overline{E_{1}(t)}, k_{t}=k, n_{k}(t-1) \geq \frac{t}{2}\right)\nonumber\\
\overset{(c)}{\leq} & 2Kt^{-3} + \underbrace{\Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t),n_{k}(t-1) \geq \frac{t}{2}\right)}_{\textbf{term A}} + \nonumber\\
&\underbrace{\Pr\left(k_{t} = k, E^{\mu}_k(t), \overline{E^{S}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right)}_{\textbf{term B}}+ \nonumber \\
&\underbrace{\Pr\left(k_{t} = k, \overline{E^{\mu}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right)}_{\textbf{term C}}
\label{eq:cric-s}
\end{align}
where $(a)$ follows from the law of total probability, $(b)$ follows from \Cref{lem:eliminatedOptimal}, and $(c)$ follows from the fact that $\Delta_{\text{min}} \geq 4\sqrt{\frac{2K \log t_0}{t_0}}$ for some constant $t_0 > 0$, together with splitting the event $\left\{\overline{E_{1}(t)}, k_t = k\right\}$ over the events $E^{\mu}_k(t)$ and $E^{S}_k(t)$, which yields \eqref{eq:cric-s}. Now we treat each term in \eqref{eq:cric-s} individually. To bound term A, we note that $\Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t),n_{k}(t-1) \geq \frac{t}{2}\right) \leq \Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t)\right)$. From the analysis in \cite{agrawal2013further} (equation 6), we see that
$\sum_{t = 1}^{T}\Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t)\right) = \mathrm{O}(1)$ as it is shown through Lemma 2 in \cite{agrawal2013further} that, \\
$\sum_{t = 1}^{T}\Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t)\right) \leq \frac{216}{\Delta_k^2} + \sum_{j = 0}^{T} \Theta\left(e^{-\frac{\Delta_k^2j}{18}} + \frac{1}{e^{\frac{\Delta_k^2 j}{36}} - 1} + \frac{9}{(j + 1)\Delta_k^2}e^{-D_k j} \right)$. \\
Here, $D_k = L_k \log \frac{L_k}{\mu_{k^*}} + (1 - L_k) \log \frac{1 - L_k}{1 - \mu_{k^*}}$.
Due to this, \\ $\sum_{t = 2t_0}^{T} \Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t),n_{k}(t-1) \geq \frac{t}{2}\right) = \mathrm{O}(1)$.
\vspace{2mm}
\noindent
We now bound the sum of term B from $t = 1$ to $T$ by noting that \\
$\Pr\left(k_{t} = k, E^{\mu}_k(t), \overline{E^{S}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right) \leq \Pr\left(k_{t} = k, \overline{E^{S}_k(t)}\right) $. Additionally, from Lemma 3 in \cite{agrawal2013further}, we get that
$\sum_{t = 1}^{T} \Pr\left(k_{t} = k, \overline{E^{S}_k(t)}\right) \leq \frac{1}{d(U_k,\mu_k)} + 1$, where $d(x,y) = x \log \frac{x}{y} + (1 - x) \log \frac{1 - x}{1 - y}$. As a result, we see that
$\sum_{t = 1}^{T} \Pr\left(k_{t} = k, E^{\mu}_k(t), \overline{E^{S}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right) = \mathrm{O}(1)$.
\vspace{2mm}
\noindent
Finally, for the last term C we can show that,
\begin{align}
(C) &= \Pr\left(k_{t} = k, \overline{E^{\mu}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right) \nonumber \\
& \leq \Pr\left(\overline{E^{\mu}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right) \nonumber \\
&= \Pr\left(\hat{\mu}_k - \mu_k > \frac{\Delta_k}{3}, n_k(t-1) \geq \frac{t}{2}\right) \nonumber \\
&\leq t \exp \left(-2 \frac{t}{2} \frac{\Delta_k^2}{9} \right) \label{eq:unbd_and_hoeffding} \\
& \leq t^{-3} \nonumber
\end{align}
Here \eqref{eq:unbd_and_hoeffding} follows from Hoeffding's inequality and the union-bound trick (\Cref{lem:UnionBoundTrickInt}) used to handle the random variable $n_k(t-1)$. Plugging these results into \eqref{eq:cric-s}, we get that
\begin{align}
\sum_{t = 2t_0}^{T} \Pr\left(k_{t}=k, n_{k}(t-1) \geq \frac{t}{2}\right) &\leq \sum_{t = 2t_0}^{T} 2Kt^{-3} + \sum_{t = 2t_0}^{T} \Pr\left(k_{t} = k, E^{\mu}_k(t), E^{S}_k(t),n_{k}(t-1) \geq \frac{t}{2}\right) + \nonumber\\
& \sum_{t = 2t_0}^{T} \Pr\left(k_{t} = k, E^{\mu}_k(t), \overline{E^{S}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right)+ \nonumber \\
& \sum_{t = 2t_0}^{T} \Pr\left(k_{t} = k, \overline{E^{\mu}_k(t)},n_{k}(t-1) \geq \frac{t}{2}\right) \\
&\leq \sum_{t = 2t_0}^{T} 2Kt^{-3} + \mathrm{O}(1) + \mathrm{O}(1) + \sum_{t = 2t_0}^{T} t^{-3} \\
&= \mathrm{O}(1)
\end{align}
\end{proof}
We now show that the expected number of pulls by C-TS for a non-competitive arm is bounded above by an $\mathrm{O}(1)$ constant.\\
\noindent
\textbf{Expected number of pulls by C-TS for a non-competitive arm.}
We bound $\E{n_k(T)}$ as
\begin{align}
\E{n_k(T)} &= \E{\sum_{t = 1}^{T}\mathbbm{1}_{\{k_t = k\}}} \nonumber \\
&= \sum_{t = 0}^{T-1} \Pr(k_{t+1} = k) \nonumber \\
&= \sum_{t = 1}^{2t_0} \Pr(k_t = k) + \sum_{t = 2 t_0}^{T-1} \Pr(k_{t+1} = k) \nonumber \\
&\leq 2 t_0 + \sum_{t = 2 t_0}^{T-1} \Pr\left(k_{t+1} = k , n_{k^*}(t) \geq \frac{t}{2}\right) + \sum_{t = 2 t_0}^{T-1} \Pr\left(k_{t+1} = k , n_{k}(t) \geq \frac{t}{2}\right) \\
&\leq 2 t_0 + \sum_{t = 2 t_0}^{T - 1} 3t^{-3} + \sum_{t = 2 t_0}^{T-1} \Pr\left(k_{t+1} = k , n_{k}(t) \geq \frac{t}{2}\right) \label{usingSomeLemma}\\
&= \mathrm{O}(1) \label{eq:lastStepTS}
\end{align}
Here, \eqref{usingSomeLemma} follows from \Cref{lem:suboptimalNotCompetitive}, since for $K = 2$ the event $n_{k^*}(t) \geq \frac{t}{2}$ implies $n_{k^*}(t) = \max_{k'} n_{k'}(t)$. Step \eqref{eq:lastStepTS} follows from \Cref{lem:notPullsuboptimalifEnough} together with the fact that the sum of $3t^{-3}$ is bounded, where $t_0 = \inf \bigg\{\tau > 0: \Delta_{\text{min}},\epsilon_k \geq 4 \sqrt{\frac{ 2K \log \tau}{\tau}} \bigg\}.$
We now show that when the sub-optimal arm $k$ is competitive, the expected pulls of arm $k$ is $\mathrm{O}(\log T)$.\\
\noindent
\textbf{Expected number of pulls by C-TS for a competitive arm $k \neq k^*$.}
For any sub-optimal arm $k \neq k^*$,
\begin{align}
\E{n_k(T)} &\leq \sum_{t = 1}^{T} \Pr(k_t = k) \nonumber \\
&= \sum_{t = 1}^{T} \Pr((k_t = k, E_1(t)) \cup (E_1^c(t), k_t = k)) \label{stepE12} \\
&\leq \sum_{t = 1}^{T} \Pr(E_1(t)) + \sum_{t = 1}^{T} \Pr(E_1^c(t), k_t = k) \nonumber \\
& \leq \sum_{t = 1}^{T} \Pr(E_1(t)) + \sum_{t = 1}^{T} \Pr(E_1^c(t), k_{t} = k, S_k(t-1) > S_{k^*}(t-1)) \nonumber
\end{align}
\begin{align}
&\leq \sum_{t = 1}^{T} \Pr(E_1(t)) + \sum_{t = 0}^{T-1} \Pr(S_k(t)> S_{k^*}(t), k_{t+1} = k) \nonumber \\
&\leq \sum_{t = 1}^{T} 2Kt^{-3} + \sum_{t = 0}^{T-1} \Pr\left(S_k(t) > S_{k^*}(t), k_{t+1} = k \right) \label{eliminatedArmProb} \\
&\leq \frac{9\log(T)}{\Delta_k^2} + \mathrm{O}(1) + \sum_{t = 1}^{T} 2 K t^{-3} \label{fromAuer} \\
&= \mathrm{O}(\log T).
\end{align}
Here, \eqref{eliminatedArmProb} follows from \Cref{lem:eliminatedOptimal}. We have \eqref{fromAuer} from the analysis of Thompson sampling for the classical bandit problem in \cite{agrawal2013further}: the sum $\sum_{t = 0}^{T-1}\Pr\left(S_k(t) > S_{k^*}(t), k_{t+1} = k \right)$ counts the number of times $S_k(t) > S_{k^*}(t)$ and $k_{t+1} = k$, which is precisely the term analysed in Theorem 3 of \cite{agrawal2013further} to bound the expected pulls of sub-optimal arms by TS.
In particular, \cite{agrawal2013further} analyzes the expected number of pulls of a sub-optimal arm (termed $\E{k_i(T)}$ in their paper) by evaluating $\sum_{t = 0}^{T-1} \Pr(S_k(t) > S_{k^*}(t), k_{t+1} = k)$, and it is shown in their Section 2.1 (proof of Theorem 1 of \cite{agrawal2013further}) that $\sum_{t = 0}^{T-1} \Pr(S_k(t) > S_{k^*}(t), k_{t+1} = k) \leq \mathrm{O}(1) + \frac{\log(T)}{d(x_i, y_i)}$. The term $x_i$ is equivalent to $U_k$ and $y_i$ to $L_k$ in our notation. Moreover, $d(U_k, L_k) \geq \frac{\Delta_k^2}{9}$, giving us the desired result in \eqref{fromAuer}.}
\section{Lower Bounds}
For the proof we define $R_k = Y_k(X)$ and $\tilde{R}_k = \tilde{Y}_k(\tilde{X})$, where $f_X(x)$ is the probability density function of the random variable $X$ and $f_{\tilde{X}}(x)$ is the probability density function of the random variable $\tilde{X}$. Similarly, we define $f_{R_k}(r)$ to be the reward distribution of arm $k$.
\vspace{0.1cm}
\noindent
\textbf{Proof of Theorem 4}
Let arm $k$ be a \textit{competitive} sub-optimal arm, i.e., $\tilde{\Delta}_{k,k^*} < 0$. To prove that regret is $\Omega(\log T)$ in this setting, we need to create a new bandit instance in which the reward distribution of the optimal arm is unaffected, but a previously competitive sub-optimal arm $k$ becomes optimal in the new environment. We do so by constructing a bandit instance with latent randomness $\tilde{X}$ and random rewards $\tilde{Y}_k(\tilde{X})$, where $\tilde{Y}_k(\tilde{X})$ denotes the random reward obtained on pulling arm $k$ given the realization of $\tilde{X}$. To make arm $k$ optimal in the new bandit instance, we construct $\tilde{Y}_k(X)$ and $\tilde{X}$ in the following manner. Let $\mathcal{Y}_k$ denote the support of $Y_k(X)$.
Define $$\tilde{Y}_k(X) =
\begin{cases}
\bar{g}_k(X) & \text{w.p. } 1-\epsilon_1, \\
\text{Uniform}(\mathcal{Y}_k) & \text{w.p. } \epsilon_1.
\end{cases}
$$This changes the conditional reward distribution of arm $k$ in the new bandit instance (with increased mean).
Furthermore, define $$\tilde{X} =
\begin{cases}
S(R_{k^*}) & \text{w.p. } 1 - \epsilon_2, \\
\text{Uniform}(\mathcal{X}) & \text{w.p. } \epsilon_2,
\end{cases}
$$
with $S(R_{k^*}) = \arg \max_{\underline{g}_{k^*}(x) < R_{k^*} < \bar{g}_{k^*}(x)} \bar{g}_k(x)$.
\noindent
Here $R_{k^*}$ represents the random reward of arm $k^*$ in the original bandit instance.
This construction of $\tilde{X}$ is possible for some $\epsilon_1, \epsilon_2 > 0$, whenever arm $k$ is competitive by definition. Moreover, under such a construction one can change reward distribution of $\tilde{Y}_{k^*}(\tilde{X})$ such that reward $\tilde{R}_{k^*}$ has the same distribution as $R_{k^*}$. This is done by changing the conditional reward distribution, $f_{\tilde{Y}_{k^*} | X}(r) = \frac{f_{Y_{k^*} | X}(r) f_X(x)}{f_{\tilde{X}}(x)}$.
Due to this, if an arm is competitive, there exists a new bandit instance with latent randomness $\tilde{X}$ and conditional rewards $\tilde{Y}_{k^*}|X$ and $\tilde{Y}_k | X$ such that $f_{R_{k^*}} = f_{\tilde{R}_{k^*}}$ and $\E{\tilde{R}_k} > \mu_{k^*}$, with $f_{R_k}$ denoting the probability distribution function of the reward from arm $k$ and $\tilde{R}_k$ representing the reward from arm $k$ in the new bandit instance.
Therefore, if these are the only two arms in our problem, then from \Cref{lem:LaiRobbins2Arms}, $$\lim_{T \rightarrow \infty}\inf \frac{\E{n_k(T)}}{\log T} \geq \frac{1}{D(f_{R_k}(r) || f_{\tilde{R}_{k}}(r))},$$
where $f_{\tilde{R}_k}(r)$ represents the reward distribution of arm $k$ in the new bandit instance.
Moreover, if we have $K - 1$ sub-optimal arms instead of just one, then $$\lim_{T \rightarrow \infty}\inf \frac{\E{\sum_{\ell \neq k^*} n_{\ell}(T)}}{\log T} \geq \frac{1}{D(f_{R_{k}}(r)|| f_{\tilde{R}_{k}}(r))}.$$
Consequently, since $\E{Reg(T)} = \sum_{\ell = 1}^{K} \Delta_\ell \E{n_{\ell}(T)}$, we have
\begin{align}
\lim_{T \rightarrow \infty}\inf \frac{\E{Reg (T)}}{\log (T)} \geq \max_{k \in \mathcal{C}}\frac{\Delta_k}{D(f_{R_k} || f_{\tilde{R}_k})}.
\end{align}
\vspace{0.1cm}
\noindent
\textbf{A stronger lower bound valid for the general case}
A stronger lower bound for the general case can be shown by using the result in Proposition 1 of \cite{van2020optimal}. Let $\mathcal{P}$ denote the set of all possible joint probability distributions under which all pseudo-reward constraints are satisfied, and let $P$ denote the underlying unknown joint probability distribution, which has $k^*$ as its optimal arm. Then, the expected cumulative regret of any algorithm that achieves sub-polynomial regret is lower bounded as
$$\lim \inf_{T \rightarrow \infty} \frac{\E{Reg(T)}}{\log T} \geq L(P),$$
where $L(P)$ is the solution of the optimization problem:
\begin{align}
&\min_{\eta(k) \geq 0, k \in \mathcal{K}} \sum_{k \in \mathcal{K}} \eta(k)\left(\max_{\ell \in \mathcal{K}}\mu_\ell- \mu_k\right) \nonumber\\
&\text{subject to } \sum_{k \in \mathcal{K}} \eta(k) D(P,Q,k) \geq 1, \quad \forall Q \in \mathcal{Q}, \label{optProblem}\\
&\text{where} \quad \mathcal{Q} = \{Q \in \mathcal{P} : f_R(R_{k^*} | Q, k^*) = f_R(R_{k^*} | P, k^*) ~~ \text{and} ~~ {k^*} \neq \arg \max_{k \in \mathcal{K}} \mu_k(Q) \}. \nonumber
\end{align}
Here, $D(P,Q,k)$ is the KL-divergence between the reward distributions of arm $k$ under the joint probability distributions $P$ and $Q$, i.e., between $f_R(R_{k}|P,k)$ and $f_R(R_{k}|Q,k)$. The term $\mu_k(Q)$ represents the mean reward of arm $k$ under the joint probability distribution $Q$.
To interpret the lower bound, one can think of $\mathcal{Q}$ as the set of all joint probability distributions under which the reward distribution of arm $k^*$ remains the same, but some other arm $k' \neq k^*$ is optimal. The optimization problem reflects the number of samples needed to distinguish between these joint probability distributions. This result is based on the original result of \cite{graves1997asymptotically}, which has recently been used in \cite{combes2017minimal, van2020optimal} to study other bandit problems.
\vspace{0.1cm}
\noindent
\textbf{Lower bound discussion in general framework}
\begin{table}[t]
\centering
\begin{tabular}{|l|l|l|l|l|}
\cline{1-2} \cline{4-5}
\textbf{r} & \textbf{$s_{2,1}(r)$} & & \textbf{r} & \textbf{$s_{1,2}(r)$} \\ \cline{1-2} \cline{4-5}
\textbf{0} & $\frac{2}{3}$ & & \textbf{0} & $\frac{3}{4}$ \\ \cline{1-2} \cline{4-5}
\textbf{1} & $\frac{6}{7}$ & & \textbf{1} & $\frac{2}{3}$ \\ \cline{1-2} \cline{4-5}
\end{tabular}
\\ \vspace{2mm}
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{(a)} & $R_2 = 0$ & $R_2 = 1$ \\ \hline
$R_1 = 0$ & 0.1 & 0.2 \\ \hline
$R_1 = 1$ & 0.3 & 0.4 \\ \hline
\end{tabular}
}
\hfill
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{(b)} & $R_2 = 0$ & $R_2 = 1$ \\ \hline
$R_1 = 0$ & a & b \\ \hline
$R_1 = 1$ & c & d \\ \hline
\end{tabular}
}
\caption{The top row shows the pseudo-rewards of arms 1 and 2, i.e., upper bounds on the conditional expected rewards (which are known to the player). The bottom row depicts two possible joint probability distributions (unknown to the player). Under distribution (a), Arm 1 is optimal and all pseudo-rewards except $s_{2,1}(1)$ are tight.}
\label{tab:pseudoBinappendix}
\end{table}
Consider the example shown in \Cref{tab:pseudoBinappendix}. Under the joint probability distribution $(a)$, Arm 1 is optimal. Moreover, all pseudo-rewards except $s_{2,1}(1)$ are tight, i.e., $s_{\ell,k}(r) = \E{R_\ell | R_k = r}$. Under distribution $(a)$, the expected pseudo-reward of Arm 2 is $0.3 \times \frac{2}{3} + 0.7 \times \frac{6}{7} = 0.8 > \mu_1$, and hence Arm 2 is competitive. Due to this, our C-UCB and C-TS algorithms pull Arm 2 $\mathrm{O}(\log T)$ times.
However, it is not possible to construct an alternate bandit environment with a joint probability distribution of the form shown in \Cref{tab:pseudoBinappendix}(b), such that Arm 2 becomes optimal while maintaining the same marginal distribution for Arm 1 and ensuring that the pseudo-rewards still remain upper bounds on the conditional expected rewards. Formally, there do not exist $a,b,c,d \geq 0$ such that $c + d = 0.7$, $b + d > 0.7$ (so that Arm 2 is optimal), $\frac{c}{a+c} \leq 3/4$, $\frac{b}{a+b} \leq 2/3$, $\frac{d}{b+d} \leq 2/3$, $\frac{d}{d+c} \leq 6/7$ and $a+b+c+d = 1$. This suggests that there should be a way to achieve $\mathrm{O}(1)$ regret in this scenario. We believe this can be done by using all the constraints imposed by the knowledge of pair-wise pseudo-rewards to shrink the space of possible joint probability distributions when calculating the empirical pseudo-reward. However, this becomes hard to implement when the ratings can take multiple possible values and the number of arms is more than 2. We leave the task of designing a practically feasible, easy-to-implement algorithm that achieves bounded regret whenever possible in the general setup as an interesting open problem.
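The infeasibility claim can also be verified numerically. The sketch below (our own illustrative check, not part of the original analysis; it uses a finite grid and writes the pseudo-reward constraints with non-strict inequalities) searches over candidate distributions $(a, b, c, d)$ and finds no feasible point:

```python
def feasible_points(step=0.01):
    """Grid search for (a, b, c, d) with a + b + c + d = 1, c + d = 0.7
    (Arm 1's marginal unchanged), b + d > 0.7 (Arm 2 optimal), and every
    conditional mean upper-bounded by its pseudo-reward:
    c/(a+c) <= 3/4, b/(a+b) <= 2/3, d/(b+d) <= 2/3, d/(c+d) <= 6/7."""
    found = []
    for i in range(int(round(0.3 / step)) + 1):
        b = i * step
        a = 0.3 - b                      # since a + b = 1 - (c + d) = 0.3
        for j in range(int(round(0.7 / step)) + 1):
            c = j * step
            d = 0.7 - c                  # since c + d = 0.7
            if b + d <= 0.7:             # Arm 2 must be strictly optimal
                continue
            if a + c > 0 and c / (a + c) > 3 / 4:
                continue                 # violates s_{1,2}(0)
            if a + b > 0 and b / (a + b) > 2 / 3:
                continue                 # violates s_{2,1}(0)
            if d / (b + d) > 2 / 3:
                continue                 # violates s_{1,2}(1)
            if d / (c + d) > 6 / 7:
                continue                 # violates s_{2,1}(1)
            found.append((a, b, c, d))
    return found
```

`feasible_points()` returns an empty list, matching the argument above: $b + d > 0.7$ forces $b > c$, while $d \leq 2b$ and $b \leq 0.2$ force $c \geq 0.3 > b$, a contradiction.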
\section{Conclusion}
\label{sec:conclusion}
This work presents a new correlated multi-armed bandit problem in which the rewards obtained from different arms are correlated. We capture this correlation through the knowledge of \textit{pseudo-rewards}. These pseudo-rewards, which represent upper bounds on the conditional mean rewards, could be known in practice from either domain knowledge or learned from prior data. Using the knowledge of these pseudo-rewards, we propose the \textit{C-Bandit} algorithm, which fundamentally generalizes any classical bandit algorithm to the correlated multi-armed bandit setting. A key strength of our paper is that it allows pseudo-rewards to be loose (in case there is not much prior information); even then, our \textit{C-Bandit} algorithms adapt and provide performance at least as good as that of classical bandit algorithms.
We provide a unified method to analyze the regret of C-Bandit algorithms. In particular, the analysis shows that C-UCB ends up pulling \emph{non-competitive} arms only $\mathrm{O}(1)$ times; i.e., it stops pulling such arms after a finite time. Due to this, C-UCB pulls only $C-1 \leq K-1$ of the $K-1$ sub-optimal arms $\mathrm{O}(\log T)$ times, as opposed to UCB, which pulls {\em all} $K-1$ sub-optimal arms $\mathrm{O}(\log T)$ times. In this sense, our C-Bandit algorithms reduce a $K$-armed bandit to a $C$-armed bandit problem. We present several cases where $C = 1$, for which C-UCB achieves bounded regret. For the special case when rewards are correlated through a latent random variable $X$, we provide a lower bound showing that bounded regret is possible only when $C = 1$; if $C > 1$, then $\mathrm{O}(\log T)$ regret cannot be avoided. Thus, our C-UCB algorithm achieves bounded regret whenever possible. Simulation results validate the theoretical findings, and we perform experiments on {\sc Movielens} and {\sc Goodreads} datasets to demonstrate the applicability of our framework in the context of recommendation systems. The experiments on these real-world datasets show that our C-UCB and C-TS algorithms significantly outperform the UCB and TS algorithms.
There are several interesting open problems and extensions of this work, some of which we describe below.
\noindent
\textbf{Extension to light-tailed and heavy-tailed rewards.} In this work, we assume that the rewards have bounded support. The algorithm and analysis can be extended to settings with sub-Gaussian rewards as well. In particular, in step 3 of the algorithm, one would play UCB/TS for sub-Gaussian rewards. For instance, the UCB index for sub-Gaussian rewards can be redefined as $\hat{\mu}_k + \sqrt{\frac{2\sigma^2\log t}{n_k(t)}}$, where $\sigma$ is the sub-Gaussianity parameter of the reward distribution. Similar regret bounds hold in this setting as well, because the Hoeffding inequality used in our regret analysis remains valid for sub-Gaussian rewards. For heavy-tailed rewards, Hoeffding's inequality is not valid; consequently, one would need to construct the confidence bounds for UCB differently, as done in \cite{bubeck2013bandits}. On doing so, the C-Bandit algorithm can be employed in heavy-tailed reward settings. However, the regret analysis may not extend directly, as one would need modified concentration inequalities to bound the mean reward of arm $k$, as done in Lemma 1 of \cite{bubeck2013bandits}.
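The sub-Gaussian index just mentioned can be written as follows (a sketch; the function name and argument layout are our own illustrative choices):

```python
import math

def ucb_index_subgaussian(mean_hat, n_pulls, t, sigma):
    """UCB index for sigma-sub-Gaussian rewards: the empirical mean of
    arm k plus the confidence width sqrt(2 * sigma^2 * log t / n_k(t))."""
    return mean_hat + math.sqrt(2 * sigma ** 2 * math.log(t) / n_pulls)
```

As expected, the index shrinks toward the empirical mean as $n_k(t)$ grows and widens with the sub-Gaussianity parameter $\sigma$.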
\noindent
\textbf{Designing better algorithms.} While our proposed algorithms are order-optimal for the model in Section 2.3, they do not match the pre-log constants in the lower bound of the regret. It may be possible to design algorithms that have smaller pre-log constants in their regret upper bound. Further discussion along these lines is presented in Appendix F. A key advantage of our approach is that our algorithms are easy to implement and they incorporate the classical bandit algorithms nicely for the problem of correlated multi-armed bandits.
\noindent
\textbf{Best-Arm Identification.} We plan to study the problem of best-arm identification in the correlated multi-armed bandit setting, i.e., to identify the best arm with confidence $1 - \delta$ in as few samples as possible. Since the rewards are correlated with each other, we believe the sample complexity can be significantly improved relative to state-of-the-art algorithms, such as LIL-UCB \cite{jamieson2014best, jamieson2014lil}, which are designed for classical multi-armed bandits. Another open direction is to improve the C-Bandit algorithm to make sure that it achieves bounded regret whenever possible in the general framework studied in this paper.
\section{Acknowledgments}
This work was partially supported by the NSF through grants CCF-1840860 and CCF-2007834, the Siebel Energy Institute, the Carnegie Bosch Institute, the Manufacturing Futures Initiative, and the CyLab IoT Initiative. In addition, Samarth Gupta was supported by the CyLab Presidential Fellowship and the David H. Barakat and LaVerne Owen-Barakat CIT Dean's Fellowship.
\section{Experiments}
\label{sec:experiments}
We now show the performance of our proposed algorithms in real-world settings. Through the use of the \textsc{MovieLens} and \textsc{Goodreads} datasets, we demonstrate how the correlated MAB framework can be used in practical settings for recommendation system applications. In such systems, it is possible to use prior available data (from a certain population) to learn the correlation structure in the form of pseudo-rewards. When trying to design a campaign to maximize user engagement in a new, unknown demographic, the learned correlation information in the form of pseudo-rewards can significantly reduce the regret, as our results described below show.
\subsection{Experiments on the \textsc{MovieLens} dataset}
The \textsc{MovieLens} dataset \cite{movielenspaper} contains a total of 1M ratings for a total of 3883 Movies rated by 6040 Users. Each movie is rated on a scale of 1-5 by the users. Moreover, each movie is associated with one (and in some cases, multiple) genres. For our experiments, of the possibly several genres associated with each movie, one is picked uniformly at random. To perform our experiments, we split the data into two parts, with the first half containing the ratings of the users who provided the largest number of ratings. This half is used to learn the pseudo-reward entries; the other half is the test set used to evaluate the performance of the proposed algorithms. Such a split ensures that the rating distribution differs between the training and test data.
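A minimal sketch of this split (function and variable names are illustrative, not from our actual pre-processing code) is:

```python
from collections import Counter

def split_by_activity(ratings):
    """Split (user, movie, rating) triples so that the most active half
    of the users forms the training set; the remaining users form the
    test set, giving the two sets different rating distributions."""
    counts = Counter(u for u, _, _ in ratings)
    ranked = sorted(counts, key=counts.get, reverse=True)
    train_users = set(ranked[: len(ranked) // 2])
    train = [r for r in ratings if r[0] in train_users]
    test = [r for r in ratings if r[0] not in train_users]
    return train, test

# Toy data: u1 is the most active user and ends up in the training set.
ratings = [("u1", "m1", 5), ("u1", "m2", 4), ("u1", "m3", 3),
           ("u2", "m1", 2), ("u3", "m2", 1), ("u3", "m3", 4)]
train, test = split_by_activity(ratings)
```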
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/genre_exps_shaded_new_25_50_70.pdf}
\caption{Cumulative regret of UCB, C-UCB, TS and C-TS for recommending the best genre in the Movielens dataset, where a fraction $p$ of the pseudo-reward entries is replaced with the maximum reward, i.e., $5$: $p = 0.25$ in (a), $p = 0.50$ in (b) and $p = 0.70$ in (c). The value of $C$ is $4$, $11$ and $13$ in (a), (b) and (c), respectively. As $C$ is smaller than $K$ (i.e., $18$) in each case, C-UCB and C-TS outperform UCB and TS significantly.}
\label{fig:genre_plot}
\vspace{-0.3cm}
\end{figure}
\noindent
\textbf{Recommending the Best Genre.} In our first experiment, the goal is to provide the best genre recommendations to a population with unknown demographics. We use the training dataset to learn the pseudo-reward entries. The pseudo-reward entry $s_{\ell,k}(r)$ is evaluated by taking the empirical average of the ratings of genre $\ell$ given by the users who rated genre $k$ as $r$. To capture the fact that it might not be possible in practice to fill all pseudo-reward entries, we randomly remove a $p$-fraction of the pseudo-reward entries. The removed pseudo-reward entries are replaced by the maximum possible rating, i.e., $5$ (as that gives a natural upper bound on the conditional mean reward). Using these pseudo-rewards, we evaluate our proposed algorithms on the test data. Upon recommending a particular genre (arm), the rating (reward) is obtained by sampling one of the ratings for the chosen arm in the test data. Our experimental results for this setting are shown in \Cref{fig:genre_plot}, with $p = 0.25, 0.50$ and $0.70$ (i.e., the fraction of pseudo-reward entries that are removed). We see that the proposed C-UCB and C-TS algorithms significantly outperform UCB and TS in all three settings. For each of the three cases, we also evaluate the value of $C$ (which is unknown to the algorithm) by always pulling the optimal arm and finding the size of the empirically competitive set at $T = 5000$. The value of $C$ turned out to be $4$, $11$ and $13$ for $p = 0.25, 0.50$ and $0.70$, respectively. As $C < 18$ in each case, some of the 18 arms stop being pulled after some time, and due to this, C-UCB and C-TS significantly outperform UCB and TS, respectively. This shows that even when only a subset of the correlations is known, it is possible to exploit them to improve the performance of classical bandit algorithms.
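The pseudo-reward learning step described above can be sketched as follows; the helper below is a simplified illustration (names and toy data are hypothetical), where each entry $s_{\ell,k}(r)$ is the empirical conditional mean and a $p$-fraction of the learned entries is replaced by the maximum rating:

```python
import random
from collections import defaultdict

MAX_RATING = 5.0

def learn_pseudo_rewards(samples, p, rng):
    """Estimate s_{l,k}(r) as the empirical mean rating of arm l among
    users who rated arm k as r; a random p-fraction of the entries is
    then replaced by the maximum rating, mimicking entries that cannot
    be learned in practice. Each element of `samples` maps
    arm -> rating for one user."""
    sums = defaultdict(float)
    cnts = defaultdict(int)
    for user_ratings in samples:
        for k, r in user_ratings.items():
            for l, r_l in user_ratings.items():
                if l != k:
                    sums[(l, k, r)] += r_l
                    cnts[(l, k, r)] += 1
    pseudo = {key: sums[key] / cnts[key] for key in cnts}
    for key in list(pseudo):
        if rng.random() < p:          # entry unavailable in practice
            pseudo[key] = MAX_RATING  # safe upper bound
    return pseudo

rng = random.Random(0)
samples = [{"A": 5, "B": 4}, {"A": 5, "B": 3}, {"A": 1, "B": 2}]
pseudo = learn_pseudo_rewards(samples, p=0.0, rng=rng)
# s_{B,A}(5): users who rated A as 5 gave B ratings 4 and 3 -> mean 3.5
```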
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/movie_exps_shaded_new_10_40_60.pdf}
\caption{Cumulative regret of UCB, C-UCB, TS and C-TS for providing the best movie recommendations in the Movielens dataset. Each pseudo-reward entry is padded by $0.1$ in (a), $0.4$ in (b) and $0.6$ in (c). The value of $C$ is $6$, $24$ and $39$ in (a), (b) and (c), respectively. As $C$ is smaller than $K$ (i.e., $50$) in each case, we see the superior performance of C-UCB and C-TS over UCB and TS.}
\label{fig:movie_plot}
\vspace{-0.3cm}
\end{figure}
\noindent
\textbf{Recommending the Best Movie.} We now consider the goal of providing the best movie recommendations to the population. To do so, we consider the 50 most rated movies in the dataset, containing 109,804 user-ratings given by 6,025 users.
In the testing phase, the goal is to recommend one of these 50 movies to each user.
As in the previous experiment, we learn the pseudo-reward entries from the training data. Instead of using the learned pseudo-rewards directly, we add a \textit{safety buffer} to each pseudo-reward entry; i.e., we set the pseudo-reward as the empirical conditional mean {\em plus} the {\sc safety buffer}. Adding a buffer will be needed in practice, as the conditional expectations learned from the training data are likely to have some noise; the safety buffer ensures that the pseudo-rewards constitute an upper bound on the conditional expectations. \Cref{fig:movie_plot} shows the performance of C-UCB and C-TS relative to UCB and TS for this setting, with the safety buffer set to $0.1$ in \Cref{fig:movie_plot}(a), $0.4$ in \Cref{fig:movie_plot}(b) and $0.6$ in \Cref{fig:movie_plot}(c). In all three cases, even after the addition of safety buffers, our proposed C-UCB and C-TS algorithms outperform the UCB and TS algorithms.
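A minimal sketch of the safety-buffer padding (the conditional means and buffer value below are illustrative, not from the experiments) is:

```python
def buffered_pseudo_rewards(cond_means, buffer, max_reward=5.0):
    """Pad each empirical conditional mean with a safety buffer so that
    the pseudo-reward remains an upper bound on the conditional
    expectation despite estimation noise; cap at the maximum reward."""
    return {key: min(mean + buffer, max_reward)
            for key, mean in cond_means.items()}

# (l, k, r) -> empirical conditional mean, illustrative values
cond = {("B", "A", 5): 3.5, ("B", "A", 1): 4.8}
padded = buffered_pseudo_rewards(cond, buffer=0.4)
```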
\subsection{Experiments on the {\sc Goodreads} dataset}
The {\sc Goodreads} dataset \cite{wan2018item} contains ratings for 1,561,465 books by a total of 808,749 users. Each rating is on a scale of 1-5. For our experiments, we only consider the poetry section and focus on the goal of providing the best poetry recommendations to a population whose demographics are unknown. The poetry dataset has 36,182 different poems rated by 267,821 different users. We pre-process the Goodreads dataset in the same manner as the MovieLens dataset, by splitting it into two halves, train and test. The train dataset contains the ratings of the users with the largest number of ratings.
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/book_exps_shaded_new.pdf}
\caption{Cumulative regret of UCB, C-UCB, TS and C-TS for providing the best poetry book recommendation in the Goodreads dataset. A fraction $p$ of the pseudo-reward entries is removed and every remaining entry is padded by $q$, with (a) $p = 0.1, q = 0.1$ and (b) $p = 0.3, q = 0.1$. The value of $C$ is $8$ and $11$ in (a) and (b), respectively. As $C$ is much smaller than $K$ (i.e., $25$) in each case, C-UCB and C-TS outperform UCB and TS significantly.}
\label{fig:book_plot}
\vspace{-0.3cm}
\end{figure}
\noindent
\textbf{Recommending the best poetry book.} We consider the 25 most rated books in the dataset and use these as the set of arms to recommend in the testing phase. These 25 poems have 349,523 user-ratings given by 171,433 users. As with the {\sc MovieLens} dataset, the pseudo-reward entries are learned on the training data. In practical situations, it might not be possible to obtain all pseudo-reward entries. Therefore, we randomly select a $p$-fraction of the pseudo-reward entries and replace them with the maximum possible reward (i.e., $5$). To each of the remaining pseudo-reward entries, we add a safety buffer of $q$. Our result in \Cref{fig:book_plot} shows the performance of C-UCB and C-TS relative to UCB and TS in two scenarios. In scenario (a), $10\%$ of the pseudo-reward entries are replaced by $5$ and the remaining ones are padded with a safety buffer of $0.1$. In scenario (b), $30\%$ of the entries are replaced by $5$ and the safety buffer is again $0.1$. In both cases, our proposed C-UCB and C-TS algorithms outperform UCB and TS significantly.
\newadd{
\subsection{Pseudo-rewards learned on a smaller dataset}
In our previous set of experiments, half of the dataset was used to learn the pseudo-reward entries. We performed additional experiments in a setup where only $10\%$ of the data was used for learning the pseudo-reward entries and tested our algorithms on the remaining dataset. On doing so, we observed that C-UCB and C-TS were still able to outperform UCB and TS in most of our experimental setups. One setting in which the performance of C-UCB was similar to that of UCB is the scenario where each pseudo-reward entry was padded by $0.6$. As the padding was large, the C-UCB algorithm was not able to identify many arms as non-competitive, leading to a performance similar to that of UCB. In all other scenarios, C-UCB and C-TS significantly outperformed UCB and TS, suggesting that even when a smaller dataset is used for learning pseudo-rewards, C-UCB and C-TS can be quite effective. The results are presented in \Cref{fig:genre_plot_less}, \Cref{fig:movie_plot_less} and \Cref{fig:book_plot_less}.
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/genre_exps_shaded_new3.pdf}
\caption{Cumulative regret of UCB, C-UCB, TS and C-TS for recommending the best genre in the Movielens dataset, where a fraction $p$ of the pseudo-reward entries is replaced with the maximum reward, i.e., $5$: $p = 0.25$ in (a), $p = 0.50$ in (b) and $p = 0.70$ in (c). We used $10\%$ of the dataset to learn the pseudo-reward entries and the algorithms are tested on the remaining dataset. The value of $C$ is $5$, $11$ and $15$ in (a), (b) and (c), respectively. As $C$ is smaller than $K$ (i.e., $18$) in each case, C-UCB and C-TS outperform UCB and TS significantly. Note that the value of $C$ is larger when only $10\%$ of the data is used for learning the pseudo-rewards.}
\label{fig:genre_plot_less}
\vspace{-0.3cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/movie_exps_shaded_new3.pdf}
\caption{Cumulative regret of UCB, C-UCB, TS and C-TS for providing the best movie recommendations in the Movielens dataset. In this experiment, $10\%$ of the dataset is used for learning the pseudo-reward entries and the algorithms are tested on the remaining dataset. Each pseudo-reward entry is padded by $0.1$ in (a), $0.4$ in (b) and $0.6$ in (c). The value of $C$ is $14$, $29$ and $42$ in (a), (b) and (c), respectively. Note that the value of $C$ is larger when only $10\%$ of the data is used for learning the pseudo-rewards. As $C$ is still smaller than $K$ (i.e., $50$) in each case, we see the superior performance of C-UCB and C-TS over UCB and TS.}
\label{fig:movie_plot_less}
\vspace{-0.3cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/book_exps_shaded3.pdf}
\caption{Cumulative regret of UCB, C-UCB, TS and C-TS for providing the best poetry book recommendation in the Goodreads dataset. We used $10\%$ of the dataset to learn the pseudo-reward entries and the algorithms are tested on the remaining dataset. A fraction $p$ of the pseudo-reward entries is removed and every remaining entry is padded by $q$, with (a) $p = 0.1, q = 0.1$ and (b) $p = 0.3, q = 0.1$. The value of $C$ is $7$ and $12$ in (a) and (b), respectively. As $C$ is much smaller than $K$ (i.e., $25$) in each case, C-UCB and C-TS outperform UCB and TS significantly.}
\label{fig:book_plot_less}
\vspace{-0.3cm}
\end{figure}
}
\section{Introduction}
\label{sec:introduction}
\subsection{Background and Motivation}
\textbf{Classical Multi-armed Bandits.} The \emph{multi-armed bandit} (MAB) problem falls under the class of sequential decision making problems. In the classical multi-armed bandit problem, there are $K$ arms, with each arm having an {\em unknown} reward distribution. At each round $t$, we decide on an arm $k_{t} \in \mathcal{K}$ and receive a random reward $R_{k_t}$ drawn from the reward distribution of arm $k_{t}$. The goal in the classical multi-armed bandit is to maximize the {\em long-term} cumulative reward. In order to do so, it is important to balance the {\em exploration-exploitation} trade-off, i.e., pulling each arm a sufficient number of times to identify the one with the highest mean reward, while trying to make sure that the arm with the highest mean reward is played as many times as possible. This problem has been well studied starting with the work of Lai and Robbins \cite{lai1985asymptotically}, which proposed the upper confidence bound (UCB) arm-selection algorithm and studied its fundamental limits in terms of bounds on \emph{regret}. Subsequently, several other algorithms, including Thompson Sampling (TS) \cite{agrawal2012analysis} and KL-UCB \cite{garivier2011kl}, have been proposed for this setting. The generality of the classical multi-armed bandit model allows it to be useful in numerous applications. For example, MAB algorithms are useful in medical diagnosis \cite{villar2015multi}, where the arms correspond to the different treatment mechanisms/drugs, and are widely used for the problem of ad optimization \cite{agarwal2009explore} by viewing different versions of an ad as the arms of the MAB problem. The MAB framework is also useful in system testing \cite{tekin2017multi}, scheduling in computing systems \cite{mora2009stochastic, krishnasamy2016regret, joshi2016efficient}, and web optimization \cite{white2012bandit, agarwal2009explore}.
\noindent
\textbf{Correlated Multi-Armed Bandits.} The classical MAB setting implicitly assumes that the rewards are independent across arms, i.e., pulling an arm $k$ does not provide any information about the reward we would have received from arm $\ell$. However, this may not be true in practice as the reward corresponding to different treatment/drugs/ad-versions are likely to be {\em correlated} with each other. For instance, similar ads/drugs may generate similar reward for the user/patient. These correlations, when modeled and accounted for, can allow us to significantly improve the cumulative reward by reducing the amount of \emph{exploration} in bandit algorithms.
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/pseudoRewardmodel.pdf}
\caption{ Upon observing a reward $r$ from an arm $k$, the pseudo-rewards $s_{\ell,k}(r)$ give us an upper bound on the conditional expectation of the reward from arm $\ell$ given that we observed reward $r$ from arm $k$. These pseudo-rewards model the correlation between the rewards of different arms.
}
\label{fig:pseudoModel}
\end{figure}
Motivated by this, we study a variant of the classical multi-armed bandit problem in which the rewards corresponding to different arms are correlated with each other, i.e., the conditional reward distribution satisfies $f_{R_\ell | R_{k}}(r_{\ell} | r_k) \neq f_{R_\ell}(r_{\ell})$, whence $\E{R_{\ell} | R_k} \neq \E{R_{\ell}}$. Such correlations can only be learned by obtaining samples from different arms simultaneously, i.e., by pulling multiple arms at a time. As that is not allowed in the classical multi-armed bandit formulation, we assume that such correlations are known a priori, e.g., obtained through domain expertise or from controlled surveys. One way of capturing correlations is through the knowledge of the joint reward distribution. However, if the complete joint reward distribution is known, then the best arm is known trivially. Instead, in our work, we only assume limited information about the correlations in the form of \textit{pseudo-rewards}, which constitute upper bounds on conditional expected rewards. This makes our model more general and suitable for practical applications. Fig.~\ref{fig:pseudoModel} presents an illustration of our model, where the pseudo-rewards, denoted by $s_{\ell,k}(r)$, provide an upper bound on the reward that we could have received from arm $\ell$ given
that pulling arm $k$ led to a reward of $r$; i.e.,
\vspace{-1mm}
\begin{equation}
\mathbb{E}[R_\ell | R_k = r] \leq s_{\ell,k}(r).
\label{eq:pseudo_reward_defn}
\end{equation}
We show that the knowledge of such bounds, even when they are not all tight, can lead to a significant improvement in the cumulative reward by reducing the amount of {\em exploration} compared to classical MAB algorithms. Our proposed MAB model and algorithm can be applied in all real-world applications of the classical multi-armed bandit problem where it is possible to know pseudo-rewards from domain knowledge or through surveyed data. In the next section, we illustrate the applicability of our novel correlated multi-armed bandit model, and its differences from existing contextual and structured bandit works, through the example of optimal {\em ad-selection}.
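To make the pseudo-reward bound in \eqref{eq:pseudo_reward_defn} concrete, the following sketch (with an illustrative binary-reward joint distribution) checks that a candidate pseudo-reward table upper-bounds the true conditional expectations:

```python
def cond_expectation(joint, k_value):
    """E[R_2 | R_1 = k_value] under a joint pmf over (R_1, R_2)."""
    num = sum(p * r2 for (r1, r2), p in joint.items() if r1 == k_value)
    den = sum(p for (r1, r2), p in joint.items() if r1 == k_value)
    return num / den

# Illustrative joint pmf of (R_1, R_2) and a pseudo-reward table s_{2,1};
# the pmf is unknown to the player, while s_{2,1} is known.
joint = {(0, 0): 0.2, (0, 1): 0.2, (1, 0): 0.4, (1, 1): 0.2}
s_21 = {0: 0.7, 1: 0.4}

# Validity requires E[R_2 | R_1 = r] <= s_{2,1}(r) for every r.
valid = all(cond_expectation(joint, r) <= s_21[r] for r in (0, 1))
```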
\subsection{An Illustrative Example}
Suppose that a company is to run a display advertising campaign for one of its products, and its creative team has designed several different versions that can be displayed. It is expected that the user engagement (in terms of click probability and time spent looking at the ad) depends on the version of the ad that is displayed. In order to maximize the total user engagement over the course of the ad campaign, multi-armed bandit algorithms can be used: different versions of the ad correspond to the {\em arms}, and the reward from selecting an arm is given by the clicks or time spent looking at the ad version corresponding to that arm.
\begin{figure}[th]
\centering
\includegraphics[width = 0.7\textwidth]{Figures/xyzsports.pdf}
\caption{The ratings of a user corresponding to different versions of the same ad are likely to be correlated. For example, if a person likes the first version, there is a good chance that they will also like the second one, as it is also related to tennis. However, the population composition is unknown, i.e., the fraction of people liking the first, second or last version is unknown.}
\label{fig:clooneyEx}
\vspace{-0.2cm}
\end{figure}
\vspace{0.1cm}
\noindent
\textbf{Personalized recommendations using Contextual and Structured bandits.} Although the ad-selection problem can be solved by standard MAB algorithms, there are several specialized MAB variants that are designed to give better performance. For instance, the {\em contextual} bandit problem \cite{zhou15survey, agarwal2014taming} has been studied to provide {\em personalized} displays of the ads to the users. Here, before making a choice at each time step (i.e., deciding which version to show to a user), we observe the {\em context} associated with that user (e.g., age/occupation/income features). Contextual bandit algorithms learn the mapping from the context $\theta$ to the most favored version of the ad $k^*(\theta)$ in an online manner and thus are useful for personalized recommendations. A closely related problem is the structured bandit problem \cite{combes2017minimal, lattimore2014bounded, abbasi2011improved, dani2008stochastic}, in which the context $\theta$ (age/income/occupation features) is {\em hidden} but the mean rewards of the different versions of the ad (arms) as a function of the hidden context $\theta$ are known. Such models prove useful for personalized recommendation in which the context of the user is unknown but the reward mappings $\mu_k(\theta)$ are known through surveyed data.
\vspace{0.15cm}
\noindent
\textbf{Global Recommendations using Correlated-Reward Bandits.}
In this work we study a variant of the classical multi-armed bandit problem in which rewards corresponding to different arms are correlated to each other.
In many practical settings, the rewards we get from different arms at any given step are likely to be correlated. In the ad-selection example given in \Cref{fig:clooneyEx}, a user reacting positively (by clicking, ordering, etc.) to the first version of the ad, with a girl playing tennis, might also be more likely to click the second version as it is also related to tennis; of course, one can construct examples where there is negative correlation between click events for different ads. The model we study in this paper explicitly captures these correlations through the knowledge of pseudo-rewards $s_{\ell,k}(r)$ (see \Cref{fig:pseudoModel}). As in the classical MAB setting, the goal here is to display versions of the ad so as to maximize user engagement. In addition, unlike contextual bandits, we do not observe the context (age/occupation/income) features of the user and do not focus on providing personalized recommendations. Instead, our goal is to provide global recommendations to a population whose demographics are unknown. Unlike {\em structured bandits}, we do not assume that the mean rewards are functions of a hidden context parameter $\theta$. In structured bandits, although the {\em mean} rewards depend on $\theta$, the reward realizations can still be independent. See \Cref{subsec:strucBandit} for more details.
\subsection{Main Contributions and Organization}
\vspace{0.1cm}
\textbf{i) A General and Previously Unexplored Correlated Multi-Armed Bandit Model.} In \Cref{sec:problem} we describe our novel correlated multi-armed bandit model, in which the rewards of a user corresponding to different arms are correlated with each other. This correlation is captured by the knowledge of \textit{pseudo-rewards}, which are upper bounds on the conditional mean reward of arm $\ell$ given the reward of arm $k$. In practice, pseudo-rewards can be obtained via expert/domain knowledge (for example, common ingredients in two drugs that are being considered to treat an ailment) or controlled surveys (for example, beta-testing users who are asked to rate different versions of an ad). A key advantage of our framework is that pseudo-rewards are just upper bounds on the conditional expected rewards and can be arbitrarily loose. This also makes the proposed framework and algorithm directly usable in practice -- if some pseudo-rewards are unknown due to lack of domain knowledge/data, they can simply be replaced by the maximum possible reward, which serves as a natural upper bound.
\vspace{0.1cm}
\noindent
\textbf{ii) An approach to generalize algorithms to the Correlated MAB setting.}
We propose a novel approach in \Cref{sec:algorithm} that extends any classical bandit algorithm (such as UCB, TS, or KL-UCB) to the correlated MAB setting studied in this paper. This is done by making use of the pseudo-rewards to reduce exploration in standard bandit algorithms. We refer to the resulting algorithm as \textsc{C-Bandit}, where \textsc{Bandit} refers to the classical bandit algorithm used in the last step of the algorithm (i.e., UCB/TS/KL-UCB).
\vspace{0.1cm}
\noindent
\textbf{iii) Unified Regret Analysis.} We study the performance of our proposed algorithms by analyzing their expected \emph{regret}, $\E{\text{Reg}(T)}$. The regret of an algorithm is defined as the difference between the cumulative reward of a \emph{genie} policy, which always pulls the optimal arm $k^*$, and the cumulative reward obtained by the algorithm over $T$ rounds. Our regret analysis yields the following upper bound on the expected regret of C-UCB.
\begin{prop}[Upper Bound on Expected Regret]
The expected cumulative regret of the C-UCB algorithm is upper bounded as
\begin{equation}
\E{Reg(T)} \leq (C-1) \cdot \mathrm{O}(\log T) + \mathrm{O}(1),
\end{equation}
\label{coro:teaser}
\end{prop}
Here $C$ denotes the number of \emph{competitive} arms. An arm $k$ is said to be \emph{competitive} if the expected pseudo-reward of arm $k$ with respect to the optimal arm $k^*$ is larger than the mean reward of arm $k^*$, that is, if $\E{s_{k,k^*}(r)} \geq \mu_{k^*}$; otherwise, the arm is said to be non-competitive. The result in \Cref{coro:teaser} arises from the fact that the C-UCB algorithm ends up pulling the non-competitive arms only $\mathrm{O}(1)$ times, so only the competitive sub-optimal arms are pulled $\mathrm{O}(\log T)$ times. In contrast to UCB, which pulls all $K-1$ sub-optimal arms $\mathrm{O}(\log T)$ times, our proposed C-UCB algorithm pulls only $C-1 \leq K-1$ arms $\mathrm{O}(\log T)$ times. In fact, when $C = 1$, our proposed algorithm achieves {\em bounded} regret, meaning that after some finite step, no arm but the optimal one will be selected. In this sense, we reduce a $K$-armed bandit problem to a $C$-armed bandit problem. We emphasize that $k^*$, $\mu^*$ and $C$ are {\em all} unknown to the algorithm at the beginning.
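The notion of competitive arms can be illustrated with a small sketch (the means, pmf and pseudo-reward values below are hypothetical):

```python
def competitive_set(opt_arm, mu, reward_pmf, pseudo):
    """An arm k is competitive if E[s_{k,k*}(R_{k*})] >= mu_{k*}, where
    the expectation is over the optimal arm's reward. `reward_pmf` is
    the pmf of the optimal arm's reward and pseudo[(k, opt_arm)] maps
    reward r -> s_{k,opt_arm}(r). The optimal arm is always included."""
    mu_star = mu[opt_arm]
    comp = {opt_arm}
    for k in mu:
        if k == opt_arm:
            continue
        exp_pseudo = sum(p * pseudo[(k, opt_arm)][r]
                         for r, p in reward_pmf.items())
        if exp_pseudo >= mu_star:
            comp.add(k)
    return comp

mu = {1: 0.6, 2: 0.5, 3: 0.3}
pmf1 = {0: 0.4, 1: 0.6}              # optimal arm's reward pmf
pseudo = {(2, 1): {0: 0.7, 1: 0.8},  # E = 0.76 >= 0.6 -> competitive
          (3, 1): {0: 0.4, 1: 0.5}}  # E = 0.46 <  0.6 -> non-competitive
C = len(competitive_set(1, mu, pmf1, pseudo))
```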
We present our detailed regret bounds and analysis in \Cref{sec:regret}. A rigorous analysis of the regret achieved under C-UCB is given through a unified technique. This technique can be of broad interest, as we also provide a recipe to obtain the regret analysis of any \textit{C-Bandit} algorithm. For instance, the analysis of C-KL-UCB can be easily carried out through the provided outline.
\vspace{0.1cm}
\noindent
\textbf{iv) Evaluation using real-world datasets.}
We perform simulations to validate our theoretical results in \Cref{sec:simulation}. In \Cref{sec:experiments}, we extensively validate our approach through experiments on two real-world datasets, namely \textsc{Movielens} and \textsc{Goodreads}, which show that the proposed approach yields drastically smaller regret than classical multi-armed bandit strategies.
\section{Problem Formulation}
\label{sec:problem}
\subsection{Correlated Multi-Armed Bandit Model}
\label{subsec:generalModel}
\begin{table}[t]
\centering
\begin{tabular}{|l|l|l|l|l|}
\cline{1-2} \cline{4-5}
\textbf{r} & \textbf{$s_{2,1}(r)$} & & \textbf{r} & \textbf{$s_{1,2}(r)$} \\ \cline{1-2} \cline{4-5}
\textbf{0} & 0.7 & & \textbf{0} & 0.8 \\ \cline{1-2} \cline{4-5}
\textbf{1} & 0.4 & & \textbf{1} & 0.5 \\ \cline{1-2} \cline{4-5}
\end{tabular}
\\ \vspace{2mm}
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{(a)} & $R_1 = 0$ & $R_1 = 1$ \\ \hline
$R_2 = 0$ & 0.2 & 0.4 \\ \hline
$R_2 = 1$ & 0.2 & 0.2 \\ \hline
\end{tabular}
}
\hfill
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{(b)} & $R_1 = 0$ & $R_1 = 1$ \\ \hline
$R_2 = 0$ & 0.2 & 0.3 \\ \hline
$R_2 = 1$ & 0.4 & 0.1 \\ \hline
\end{tabular}
}
\caption{The top row shows the pseudo-rewards of arms 1 and 2, i.e., upper bounds on the conditional expected rewards (which are known to the player). The bottom row depicts two possible joint probability distributions (unknown to the player). Under distribution (a), Arm 1 is optimal, whereas Arm 2 is optimal under distribution (b).}
\label{tab:pseudoBin}
\vspace{-0.2cm}
\end{table}
Consider a multi-armed bandit setting with $K$ arms $\{1,2, \ldots, K\}$. At each round $t$, a user enters the system and we need to decide an arm $k_t$ to display to the user. Upon pulling arm $k_t$, we receive a random reward $R_{k_t} \in [0,B]$. Our goal is to maximize the cumulative reward over time. The expected reward of arm $k$ is denoted by $\mu_k$. If we knew the arm with the highest mean, i.e., $k^* = \arg \max_{k \in \mathcal{K}} \mu_k$, beforehand, then we would always pull arm $k^*$ to maximize the expected cumulative reward. We now define the cumulative regret, minimizing which is equivalent to maximizing the cumulative reward:
\begin{equation}
Reg(T) = \sum_{t = 1}^{T} \left( \mu_{k^*} - \mu_{k_t} \right) = \sum_{k \neq k^*} n_k(T) \Delta_k.
\label{eqn:regretdefinition}
\end{equation}
Here, $n_k(T)$ denotes the number of times arm $k$ is pulled up to round $T$, and $\Delta_k$ denotes the {\em sub-optimality gap} of arm $k$, i.e., $\Delta_k = \mu_{k^*} -\mu_k$.
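As a sanity check of this definition, the cumulative regret can be computed from the pull counts $n_k(T)$ and the gaps $\Delta_k$ (the numbers below are illustrative):

```python
def cumulative_regret(mu, pulls):
    """Reg(T) = sum_k n_k(T) * Delta_k, with Delta_k = mu* - mu_k;
    the optimal arm contributes zero regret."""
    mu_star = max(mu.values())
    return sum(n * (mu_star - mu[k]) for k, n in pulls.items())

mu = {1: 0.8, 2: 0.6, 3: 0.5}
pulls = {1: 90, 2: 7, 3: 3}   # n_k(T) after T = 100 rounds
reg = cumulative_regret(mu, pulls)
# 90*0 + 7*0.2 + 3*0.3 = 2.3
```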
The classical multi-armed bandit setting implicitly assumes the rewards $R_1, R_2, \ldots, R_K$ are independent, that is, $\Pr(R_{\ell} = r_\ell | R_k = r) = \Pr(R_{\ell} = r_\ell) \quad \forall{r_{\ell},r} \& \forall{\ell,k},$ which implies that $\E{R_{\ell} | R_k = r} = \E{R_{\ell}} \quad \forall{r,\ell,k}$. However, in most practical scenarios this assumption is unlikely to be true. In fact, the rewards of a user corresponding to different arms are likely to be correlated. Motivated by this, we consider a setup where the conditional distribution of the reward from arm $\ell$ given the reward from arm $k$ is not equal to the marginal distribution of the reward from arm $\ell$, i.e., $f_{R_\ell | R_{k}}(r_{\ell} | r_k) \neq f_{R_\ell}(r_{\ell})$, with $f_{R_\ell}(r_{\ell})$ denoting the probability distribution function of the reward from arm $\ell$. Consequently, due to such correlations, we have $\E{R_{\ell} | R_k} \neq \E{R_{\ell}}$.
In our problem setting, we assume that the player has partial knowledge about the joint distribution of the correlated arms in the form of \emph{pseudo-rewards}, as defined below:
\begin{defn}[Pseudo-Reward]
Suppose we pull arm $k$ and observe reward $r$, then the pseudo-reward of arm $\ell$ with respect to arm $k$, denoted by $s_{\ell,k}(r)$, is an upper bound on the conditional expected reward of arm $\ell$, i.e.,
\begin{equation}
\mathbb{E}[R_\ell | R_k = r] \leq s_{\ell,k}(r).
\end{equation}
Without loss of generality, we define $s_{\ell,\ell}(r) = r$.
\end{defn}
The pseudo-reward information consists of a set of $K \times K$ functions $s_{\ell,k}(r)$ over $[0,B]$. This information can be obtained in practice through either domain/expert knowledge or from controlled surveys. For instance, in the context of medical testing, where the goal is to identify the best drug to treat an ailment from among a set of $K$ possible options, the effectiveness of two drugs is correlated when the drugs share some common ingredients. Through the domain knowledge of doctors, it is possible to answer questions such as ``what are the chances that drug $B$ would be effective given that drug $A$ was not effective?'', through which we can infer the pseudo-rewards.
\begin{table}[t]
\parbox{.3\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{r} & \textbf{$s_{2,1}(r)$} & \textbf{$s_{3,1}(r)$} \\ \hline
\textbf{0} & 0.7 & \textcolor{red}{2} \\ \hline
\textbf{1} & 0.8 & 1.2 \\ \hline
\textbf{2} & \textcolor{red}{2} & 1 \\ \hline
\end{tabular}
}
\hfill
\parbox{.3\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{r} & \textbf{$s_{1,2}(r)$} & \textbf{$s_{3,2}(r)$} \\ \hline
\textbf{0} & 0.5 & 1.5 \\ \hline
\textbf{1} & 1.3 & \textcolor{red}{2} \\ \hline
\textbf{2} & \textcolor{red}{2} & 0.8 \\ \hline
\end{tabular}
}
\hfill
\parbox{.3\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{r} & \textbf{$s_{1,3}(r)$} & \textbf{$s_{2,3}(r)$} \\ \hline
\textbf{0} & 1.5 & \textcolor{red}{2} \\ \hline
\textbf{1} & \textcolor{red}{2} & 1.3 \\ \hline
\textbf{2} & 0.7 & 0.75 \\ \hline
\end{tabular}
}
\caption{If some pseudo-reward entries are unknown (due to lack of prior-knowledge/data), those entries can be replaced with the maximum possible reward and then used in the \textsc{C-BANDIT} algorithm. We do that here by entering $2$ for the entries where pseudo-rewards are unknown.}
\label{tab:paddedEntries}
\vspace{-0.2cm}
\end{table}
\subsection{Computing Pseudo-Rewards from prior-data/surveys}
The pseudo-rewards can also be learned from prior available data, or through {\em offline} surveys in which users are presented with {\em all} $K$ arms, allowing us to sample $R_1, \ldots, R_K$ jointly. From such data, we can estimate the conditional expected rewards. For example, in \Cref{tab:pseudoBin}, we can look at all users who obtained $0$ reward for Arm 1 and calculate their average reward for Arm 2, say $\hat{\mu}_{2,1}(0)$. This average provides an estimate of the conditional expected reward. Since we only need an upper bound on $\E{R_2 | R_1 = 0}$, we can use several approaches to construct the pseudo-rewards.
\begin{enumerate}
\item If the training data is \textit{large}, one can use the empirical estimate $\hat{\mu}_{2,1}(0)$ directly as $s_{2,1}(0)$, because, by the law of large numbers, the empirical average converges to $\E{R_2 | R_1 = 0}$.
\item Alternatively, we can set $s_{2,1}(0) = \hat{\mu}_{2,1}(0) + \hat{\sigma}_{2,1}(0)$, with $\hat{\sigma}_{2,1}(0)$ denoting the empirical standard deviation of the conditional reward of Arm 2, to ensure that the pseudo-reward is an upper bound on the conditional expected reward.
\item In addition, the pseudo-reward for any unknown conditional mean reward can be filled in with the maximum possible reward for the corresponding arm. \Cref{tab:paddedEntries} shows an example of a 3-armed bandit problem where some pseudo-reward entries are unknown, e.g., due to lack of data. We can fill these missing entries with the maximum possible reward (i.e., $2$), as shown in \Cref{tab:paddedEntries}, to complete the pseudo-reward entries.
\item If through the training data, we obtain a soft upper bound $u$ on $\E{R_2|R_1 = 0}$ that holds with probability $1-\delta$, then we can translate it to the pseudo-reward $s_{2,1}(0) = u \times (1 - \delta) + 2 \times \delta$, (assuming maximum possible reward is 2).
\end{enumerate}
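Approaches 2 and 3 above can be sketched in a few lines; the survey data below is hypothetical and the column indexing is our own convention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline survey: each row holds one user's rewards for both
# arms, sampled jointly; rewards take values in {0, 1, 2}.
data = rng.integers(0, 3, size=(1000, 2))

def pseudo_reward_from_data(data, ell, k, r, B=2.0):
    """Estimate s_{ell,k}(r) as the empirical conditional mean plus one
    empirical standard deviation (approach 2); if no sample with R_k = r
    exists, pad with the maximum reward B (approach 3). Arms are indexed
    by column, i.e., ell, k are in {0, 1}."""
    cond = data[data[:, k] == r][:, ell]
    if cond.size == 0:
        return B
    return min(B, cond.mean() + cond.std())

print(pseudo_reward_from_data(data, ell=1, k=0, r=0))  # estimate of s_{2,1}(0)
print(pseudo_reward_from_data(data, ell=1, k=0, r=5))  # unseen reward: padded to 2.0
```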
\begin{rem}
Note that the pseudo-rewards are upper bounds on the expected conditional reward and not hard bounds on the conditional reward itself. This makes our problem setup practical as upper bounds on expected conditional reward are easier to obtain, as illustrated in the previous paragraph.
\end{rem}
\begin{rem}[Reduction to Classical Multi-Armed Bandits]
When all pseudo-reward entries are unknown, they can all be filled with the maximum possible reward for each arm, that is, $s_{\ell, k}(r) = B$ $\forall{r,\ell,k}$. In such a case, the problem framework studied in this paper reduces to the setting of the classical multi-armed bandit problem, and our proposed $\textsc{C-Bandit}$ algorithm performs exactly like the corresponding classical \textsc{bandit} algorithm (e.g., UCB, TS).
\end{rem}
While the pseudo-rewards are known in our setup, the underlying joint probability distribution of rewards is unknown. For instance, \Cref{tab:pseudoBin} (a) and \Cref{tab:pseudoBin} (b) show two joint probability distributions of the rewards that are both possible given the pseudo-rewards at the top of \Cref{tab:pseudoBin}.
If the joint distribution is as given in \Cref{tab:pseudoBin} (a), then Arm 1 is optimal, while Arm 2 is optimal if the joint distribution is as given in \Cref{tab:pseudoBin}(b).
\newadd{
\begin{rem} For a setting where the reward domain is \emph{large} or there are a large number of arms, it may be difficult to learn the pseudo-reward entries from prior data. In such scenarios, knowledge of additional correlation structure may help determine the pseudo-rewards. We describe one such structure in the next section, where rewards are correlated through a latent random source, and show how to evaluate pseudo-rewards in that scenario.
\end{rem}
}
\subsection{Special Case: Correlated Bandits with a Latent Random Source}
\label{subsec:specialCase}
\begin{figure}[t]
\centering
\includegraphics[width = 0.6\textwidth]{Figures/exampleBd}
\caption{A special case of our proposed problem framework is a setting in which rewards for different arms are correlated through a hidden random variable X. At each round $X$ takes a realization in $\mathcal{X}$. The reward obtained from an arm $k$ is $Y_k(X)$. The figure illustrates lower bounds and upper bounds on $Y_k(X)$ (through dotted lines). For instance, when $X$ takes the realization $1$, reward of arm 3 is a random variable bounded between $1$ and $3$. }
\label{fig:latentExample}
\vspace{-0.2cm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.6\textwidth]{Figures/pseudoRewardNew}
\caption{An illustration on how to calculate pseudo-rewards in CMAB with latent random source. Upon observing a reward of 4 from arm 1, we can see that the maximum possible reward for arms 2 and 3 is 3.5 and 4 respectively. Therefore, $s_{2,1}(4) = 3.5$ and $s_{3,1}(4) = 4$. }
\label{fig:latentReward}
\vspace{-0.2cm}
\end{figure}
Our proposed correlated multi-armed bandit framework subsumes many interesting and previously unexplored multi-armed bandit settings. One such special case is the correlated multi-armed bandit model in which the rewards depend on a common latent source of randomness \cite{gupta2020correlated}. More concretely, the rewards of different arms are correlated through a hidden random variable $X$ (see \Cref{fig:latentExample}). At each round $t$, $X$ takes an i.i.d. realization $X_t \in \mathcal{X}$ (unobserved by the player) and, upon pulling arm $k$, we observe a random reward $Y_k(X_t)$. The latent random variable $X$ here could represent the \textit{features} (e.g., age, occupation) of the user arriving to the system, to whom we show one of the $K$ arms. These \emph{features} of the user are hidden in the problem due to privacy concerns. In the ad-selection application, the random reward $Y_k(X_t)$ represents the preference of a user with context $X_t$ for the $k^{\text{th}}$ version of the ad.
In this problem setup, upper and lower bounds on $Y_k(X)$, namely $\bar{g}_k(X)$ and $\underline{g}_k(X)$, are known. For instance, the information on upper and lower bounds of $Y_k(X_t)$ could represent knowledge of the form that \textit{children of age 5-10 rate documentaries only in the range 1-3 out of 5}. Such information can be known or learned through prior available data. While the bounds on $Y_k(X)$ are known, the distribution of $X$ and the reward distributions within the bounds are unknown, due to which the optimal arm is not known beforehand. Thus, an online approach is needed to minimize the regret.
It is possible to translate this setting to the general framework described in the problem by transforming the mappings $Y_k(X)$ to pseudo-rewards $s_{\ell,k}(r)$. Recall the pseudo-rewards represent an upper bound on the conditional expectation of the rewards. In this framework, $s_{\ell,k}(r)$ can be calculated as:
$$s_{\ell,k}(r) = \max_{\{x: \underline{g}_k(x) \leq r \leq \bar{g}_k(x)\}} \bar{g}_{\ell}(x),$$
where $\underline{g}_k(x)$ and $\bar{g}_k(x)$ represent the lower and upper bounds on $Y_k(x)$. Upon observing a realization from arm $k$, it is possible to estimate the maximum possible reward that would have been obtained from arm $\ell$ through the knowledge of the bounds on $Y_k(X)$.
\Cref{fig:latentReward} illustrates how pseudo-reward is evaluated when we obtain a reward $r = 4$ by pulling arm 1. We first infer that $X$ lies in $[0, 0.8]$ if $r = 4$ and then find the maximum possible reward for arm 2 and arm 3 in $[0,0.8]$. Once these pseudo-rewards are constructed, the problem fits in the general framework described in this paper and we can use the algorithms proposed for this setting directly.
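This maximization can be approximated by a grid search over the latent domain. The sketch below uses the illustrative bound functions of the two-arm example in \Cref{fig:illustrationLBUB}, i.e., $Y_1(x) \in [2x-1, 2x+1]$ and $Y_2(x) \in [(3-x)^2-1, (3-x)^2+1]$ for $x \in (0,6)$:

```python
import numpy as np

# Illustrative bounds on Y_k(X) for X in (0, 6):
# Y_1 in [2x - 1, 2x + 1], Y_2 in [(3 - x)^2 - 1, (3 - x)^2 + 1].
lower = {1: lambda x: 2 * x - 1, 2: lambda x: (3 - x) ** 2 - 1}
upper = {1: lambda x: 2 * x + 1, 2: lambda x: (3 - x) ** 2 + 1}

GRID = np.linspace(0, 6, 6001)  # discretization of the latent domain

def pseudo_reward(ell, k, r):
    """s_{ell,k}(r) = max of arm ell's upper bound over all latent values x
    consistent with observing reward r from arm k."""
    feasible = (lower[k](GRID) <= r) & (r <= upper[k](GRID))
    return upper[ell](GRID[feasible]).max()

# Observing r = 5 from arm 1 implies X in [2, 3]; on that interval the
# largest possible reward of arm 2 is (3 - 2)^2 + 1 = 2.
print(pseudo_reward(2, 1, 5))
```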
\begin{rem}
In the scenario where $\underline{g}_k(x)$ and $\bar{g}_k(x)$ are soft lower and upper bounds, i.e., $\underline{g}_k(x) \leq Y_k(x) \leq \bar{g}_k(x)$ w.p. $1 - \delta$, we can still construct pseudo-reward as follows:
$$s_{\ell,k}(r) = (1 - \delta)^2 \times \left( \max_{\{x: \underline{g}_k(x) \leq r \leq \bar{g}_k(x)\}} \bar{g}_{\ell}(x) \right) + (1 - (1 - \delta)^2) \times M,$$ where $M$ is the maximum possible reward an arm can provide. Thus our proposed framework and algorithms work under this setting as well.\footnote{We evaluate a range of values within which $x$ lies based on the reward with probability $1-\delta$. The maximum possible reward of arm $\ell$ for values of $x$ is then identified with probability $1-\delta$. Due to this, with probability $(1 - \delta)^2$, conditional reward of arm $\ell$ is at-most $\max_{\{x: \underline{g}_k(x) \leq r \leq \bar{g}_k(x)\}} \bar{g}_{\ell}(x)$.}
\end{rem}
\subsection{Comparison with parametric (structured) models}
\label{subsec:strucBandit}
As mentioned in \Cref{sec:introduction}, a seemingly related model is the structured bandits model \cite{combes2017minimal, lattimore2014bounded, gupta2018unified}. Structured bandits is a class of problems that covers linear bandits \cite{abbasi2011improved}, generalized linear bandits \cite{filippi2010parametric}, Lipschitz bandits \cite{magureanu2014lipschitz}, global bandits \cite{ata2015global}, regional bandits \cite{wang2018regional}, etc. In the structured bandits setup, the mean rewards corresponding to different arms are related to one another through a hidden parameter $\theta$. The underlying value of $\theta$ is fixed and the mean reward mappings $\theta \rightarrow \mu_k(\theta)$ are known. Similarly, \cite{pandey2007multi-armed} studies a dependent-armed bandit problem in which the mean rewards of different arms are also related to one another. It considers a parametric model, where the mean rewards of different arms are drawn from one of $K$ clusters, each having an unknown parameter $\pi_{i}$. All of these models are fundamentally different from the problem setting considered in this paper. We list some of the differences with structured bandits (and the model in \cite{pandey2007multi-armed}) below.
\begin{enumerate}
\item In this work we explicitly model the correlations in the rewards of a user corresponding to different arms. While mean rewards are related to each other in structured bandits and \cite{pandey2007multi-armed}, the reward realizations are not necessarily correlated.
\item The model studied here is non-parametric in the sense that there is no hidden feature space as is the case in structured bandits and the work of Pandey et al. \cite{pandey2007multi-armed}.
\item In structured bandits, the reward mappings from $\theta$ to $\mu_k(\theta)$ need to be {\em exact}. If they happen to be incorrect, then the algorithms for structured bandit cannot be used as they rely on the correctness of $\mu_k(\theta)$ to construct confidence intervals on the unknown parameter $\theta$. In contrast, the model studied here relies on the pseudo-rewards being upper bounds on conditional expectations. These bounds need not be tight and the proposed C-Bandit algorithms adjust accordingly and perform at least as well as the corresponding classical bandit algorithm.
\item Similar to the structured bandits, the unimodal bandit framework \cite{combes2014unimodal, trinh2020solving} also assumes a structure on the mean rewards and does not capture the reward correlations explicitly. Under the unimodal framework, it is assumed that the mean reward $\mu_k$ as a function of the arms $k$ has a single mode. Instead of assuming that mean rewards are related to one another, our framework explicitly captures the inherent correlations in the form of pseudo-reward. Unimodal bandits have often been used to model the problem of link-rate adaptation in wireless networks, where the mean-reward corresponding to different choices of arms is a unimodal function \cite{qureshi2019fast, qureshi2020online, gupta2019link}. The same problem can also be dealt by modeling the correlations explicitly through the pseudo-reward framework described in this paper.
\end{enumerate}
\section{Simulations}
\label{sec:simulation}
We now present the empirical performance of proposed algorithms. For all the results presented in this section, we compare the performance of all algorithms on the same reward realizations and plot the cumulative regret averaged over 100 independent trials. The shaded area represents error bars with one standard deviation. We set $\beta = 1$ for all TS and C-TS plots.
\subsection{Simulations with known pseudo-rewards}
\begin{table}[t]
\centering
\begin{tabular}{|l|l|l|l|l|}
\cline{1-2} \cline{4-5}
\textbf{r} & \textbf{$s_{2,1}(r)$} & & \textbf{r} & \textbf{$s_{1,2}(r)$} \\ \cline{1-2} \cline{4-5}
\textbf{0} & 0.7 & & \textbf{0} & 0.8 \\ \cline{1-2} \cline{4-5}
\textbf{1} & 0.4 & & \textbf{1} & 0.5 \\ \cline{1-2} \cline{4-5}
\end{tabular}
\\ \vspace{2mm}
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{(a)} & $R_1 = 0$ & $R_1 = 1$ \\ \hline
$R_2 = 0$ & 0.2 & 0.4 \\ \hline
$R_2 = 1$ & 0.2 & 0.2 \\ \hline
\end{tabular}
}
\hfill
\parbox{.45\linewidth}{
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{(b)} & $R_1 = 0$ & $R_1 = 1$ \\ \hline
$R_2 = 0$ & 0.2 & 0.3 \\ \hline
$R_2 = 1$ & 0.4 & 0.1 \\ \hline
\end{tabular}
}
\caption{The top row shows the pseudo-rewards of Arms 1 and 2, i.e., upper bounds on the conditional expected rewards (which are known to the player). The bottom row depicts two possible joint probability distributions (unknown to the player). Under distribution (a), Arm 1 is optimal, whereas Arm 2 is optimal under distribution (b).}
\label{tab:pseudoBin2}
\vspace{-0.2cm}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width = 0.8\textwidth]{Figures/pseudoRewardITTran}
\caption{Cumulative regret for UCB, C-UCB, TS and C-TS corresponding to the problem shown in \Cref{tab:pseudoBin2}. For the setting (a) in \Cref{tab:pseudoBin2}, Arm 1 is optimal and Arm 2 is non-competitive, in setting (b) of \Cref{tab:pseudoBin2} Arm 2 is optimal while Arm 1 is competitive.}
\label{fig:simulationWPseudoReward}
\vspace{-0.3cm}
\end{figure}
Consider the example shown in \Cref{tab:pseudoBin2}, with the top row showing the pseudo-rewards, which are known to the player, and the bottom row showing two possible joint probability distributions (a) and (b), which are unknown to the player. \Cref{fig:simulationWPseudoReward} shows the simulation results of our proposed algorithms C-UCB and C-TS against UCB and TS for the setting considered in \Cref{tab:pseudoBin2}.
\noindent
\textbf{Case (a): Bounded regret}. For the probability distribution (a), notice that Arm 1 is optimal, with $\mu_1 = 0.6, \mu_2 = 0.4$. Moreover, $\phi_{2,1} = 0.4 \times 0.7 + 0.6 \times 0.4 = 0.52$. Since $\phi_{2,1} < \mu_1$, Arm 2 is non-competitive. Hence, in \Cref{fig:simulationWPseudoReward}(a), we see that our proposed C-UCB and C-TS algorithms achieve bounded regret, whereas UCB and TS show logarithmic regret.
\noindent
\textbf{Case (b): All competitive arms}. For the probability distribution (b), Arm 2 is optimal, with $\mu_2 = 0.5$ and $\mu_1 = 0.4$. The expected pseudo-reward of Arm 1 w.r.t. Arm 2 in this case is $\phi_{1,2} = 0.8 \times 0.5 + 0.5 \times 0.5 = 0.65$. Since $\phi_{1,2} > \mu_2$, the sub-optimal arm (i.e., Arm 1) is competitive, and hence C-UCB and C-TS also end up exploring Arm 1. Due to this, we see that C-UCB and C-TS achieve a regret similar to UCB and TS in \Cref{fig:simulationWPseudoReward}(b). C-TS has empirically smaller regret than C-UCB, as Thompson Sampling performs better empirically than the UCB algorithm. The design of our C-Bandit approach allows the use of any other bandit algorithm in the last step, e.g., KL-UCB.
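The two competitiveness checks in these cases reduce to a few lines of arithmetic; the values below are taken from \Cref{tab:pseudoBin2}:

```python
# Pseudo-rewards from Table pseudoBin2: s_{2,1}(r) and s_{1,2}(r) for r in {0, 1}.
s21 = {0: 0.7, 1: 0.4}
s12 = {0: 0.8, 1: 0.5}

# Case (a): P(R1 = 0) = 0.4, P(R1 = 1) = 0.6, so mu1 = 0.6 and mu2 = 0.4.
phi_21 = 0.4 * s21[0] + 0.6 * s21[1]   # E[s_{2,1}(R_1)]
print(round(phi_21, 2))  # 0.52 < mu1 = 0.6, so Arm 2 is non-competitive

# Case (b): P(R2 = 0) = 0.5, P(R2 = 1) = 0.5, so mu2 = 0.5 and mu1 = 0.4.
phi_12 = 0.5 * s12[0] + 0.5 * s12[1]   # E[s_{1,2}(R_2)]
print(round(phi_12, 2))  # 0.65 > mu2 = 0.5, so Arm 1 is competitive
```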
\subsection{Simulations for the latent random source model in \Cref{subsec:specialCase}}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{Figures/ublbsimulationExample}
\caption{Rewards corresponding to the two arms are correlated through a random variable $X$ lying in $(0,6)$. The lines represent the lower and upper bounds on the rewards of Arm 1, $Y_1(X)$, and Arm 2, $Y_2(X)$, given the realization of the random variable $X$.}
\label{fig:illustrationLBUB}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.8\textwidth]{Figures/upperLowerIEEETran}
\caption{Simulation results for the example shown in \Cref{fig:illustrationLBUB}. In (a), $X \sim \text{Beta}(1,1)$ and in (b) $X \sim \text{Beta}(1.5,5)$. In case (a), Arm 1 is optimal while Arm 2 is non-competitive (C = 1), due to which we see that C-UCB and C-TS obtain bounded regret. Arm 2 is optimal for the distribution in (b) and Arm 1 is competitive, due to which $C=2$ and we see that C-UCB and C-TS attain a performance similar to UCB and TS.}
\label{fig:upperlowerbdSim}
\vspace{-0.3cm}
\end{figure}
We now show the performance of C-UCB and C-TS against UCB and TS for the model considered in \Cref{subsec:specialCase}, where the rewards corresponding to different arms are correlated through a latent random variable $X$. We consider a setting where the reward obtained from Arm 1, given a realization $x$ of $X$, is bounded between $2x - 1$ and $2x + 1$, i.e., $2X - 1 \leq Y_1(X) \leq 2X + 1$. Similarly, the conditional reward of Arm 2 satisfies $(3-X)^2 - 1 \leq Y_2(X) \leq (3 - X)^2 + 1$. \Cref{fig:illustrationLBUB} depicts these upper and lower bounds on $Y_k(X)$. We run C-UCB, C-TS, TS and UCB in this setting for two different distributions of $X$. For the simulations, we set the conditional reward of both arms to be distributed uniformly between the upper and lower bounds; however, this information is not known to the algorithms.
\noindent
\textbf{Case (a): $X \sim \text{Beta}(1,1)$}. When $X$ is distributed as $X \sim \text{Beta}(1,1)$, Arm 1 is optimal while Arm 2 is non-competitive. Due to this, we observe that C-UCB and C-TS achieve bounded regret in \Cref{fig:upperlowerbdSim}(a).
\noindent
\textbf{Case (b): $X \sim \text{Beta}(1.5,5)$}. In the scenario where $X$ has the distribution $\text{Beta}(1.5,5)$, Arm 2 is optimal while Arm 1 is competitive. Due to this, C-UCB and C-TS do not stop exploring Arm 1 in finite time, and we see cumulative regret similar to that of UCB and TS in \Cref{fig:upperlowerbdSim}(b).
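Which arm is optimal in the two cases can be verified by Monte Carlo. In the sketch below we assume the Beta variable is scaled to the interval $(0,6)$ of \Cref{fig:illustrationLBUB}, i.e., $X = 6B$ with $B \sim \text{Beta}(\alpha, \beta)$, and that conditional rewards are uniform between the bounds:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # Monte Carlo samples

def mean_rewards(a, b):
    """Estimate (E[Y_1], E[Y_2]) when X = 6 * Beta(a, b) and the conditional
    reward of each arm is uniform between its lower and upper bounds."""
    x = 6 * rng.beta(a, b, N)
    y1 = rng.uniform(2 * x - 1, 2 * x + 1)                 # Y_1(X)
    y2 = rng.uniform((3 - x) ** 2 - 1, (3 - x) ** 2 + 1)   # Y_2(X)
    return y1.mean(), y2.mean()

m_a = mean_rewards(1, 1)      # case (a): roughly (6.0, 3.0), Arm 1 optimal
m_b = mean_rewards(1.5, 5)    # case (b): roughly (2.77, 3.46), Arm 2 optimal
print(m_a, m_b)
```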
Our next simulation considers a setting where the known upper and lower bounds on $Y_k(X)$ coincide, so that the reward $Y_k$ corresponding to a realization of $X$ is deterministic, i.e., $Y_k(X) = g_k(X)$. We show our simulation results for the reward functions described in \Cref{fig:sim_reward_funcs_cont} with three different distributions of $X$. Corresponding to $X \sim \text{Beta}(4,4)$, Arm 1 is optimal and Arms 2 and 3 are non-competitive, leading to bounded regret for C-UCB and C-TS in \Cref{fig:teaserSim}(a). In setting (b), we consider $X \sim \text{Beta}(2,5)$, in which Arm 1 is optimal, Arm 2 is competitive and Arm 3 is non-competitive. Due to this, our proposed C-UCB and C-TS algorithms stop pulling Arm 3 after some time and hence achieve significantly reduced regret relative to UCB in \Cref{fig:teaserSim}(b). For the third scenario (c), we set $X \sim \text{Beta}(1,5)$, which makes Arm 3 optimal while Arms 1 and 2 are competitive. Hence, our algorithms explore both sub-optimal arms and have a regret comparable to that of UCB and TS in \Cref{fig:teaserSim}(c).
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/functionsContinuous}
\caption{Reward functions used for the simulation results presented in \Cref{fig:teaserSim}. The reward $g_k(X)$ is a function of a latent random variable $X$. For instance, when $X = 0.5$, the rewards from Arms 1, 2 and 3 are $g_1(X) = 1$, $g_2(X) = 0.7135$ and $g_3(X) = 0.5$.}
\label{fig:sim_reward_funcs_cont}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.8\textwidth]{Figures/latentFinalPlot}
\caption{ \sl The cumulative regret of C-UCB and C-TS depends on $C$, the number of \emph{competitive} arms. The value of $C$ depends on the {\em unknown} joint probability distribution of rewards and is not known beforehand. We consider a setup where $C = 1$ in (a), $C = 2$ in (b) and $C = 3$ in (c). Our proposed algorithms pull only the $C-1$ competitive sub-optimal arms $\mathrm{O}(\log T)$ times, as opposed to UCB and TS, which pull all $K-1$ sub-optimal arms $\mathrm{O}(\log T)$ times. Due to this, we see that our proposed algorithms achieve bounded regret when $C = 1$. When $C = 3$, our proposed algorithms perform as well as the UCB and TS algorithms.}
\label{fig:teaserSim}
\vspace{-0.3cm}
\end{figure}
\section{Regret Analysis and Bounds}
\label{sec:regret}
We now characterize the performance of the C-UCB algorithm by analyzing the expected value of the cumulative regret \eqref{eqn:regretdefinition}. The expected regret can be expressed as
\begin{align}
\E{Reg(T)} &=
\sum_{k = 1}^{K} \E{n_k(T)} \Delta_k,
\label{eqn:exp_regret}
\end{align}
where $\Delta_k = \mu_{k^*} - \mu_k $ is the sub-optimality gap of arm $k$ with respect to the optimal arm $k^*$, and $n_k(T)$ is the number of times arm $k$ is pulled in $T$ slots.
For the regret analysis, we assume without loss of generality that the rewards are between 0 and 1 for all $k \in \{ 1, 2, \dots K \}$. Note that the \textsc{C-Bandit} algorithms do not require this condition, and the regret analysis can also be generalized to any bounded rewards.
\subsection{Competitive and Non-competitive arms with respect to Arm k}
\label{sec:competitive}
For the purpose of regret analysis in \Cref{sec:regret}, we need to understand which arms are empirically competitive as $t \rightarrow \infty$. We do so by defining the notions of Competitive and Non-Competitive arms.
\begin{defn}[Non-Competitive and Competitive arms]
An arm $\ell$ is said to be non-competitive if the expected reward of the optimal arm $k^*$ is larger than the expected pseudo-reward of arm $\ell$ with respect to the optimal arm $k^*$, i.e., if $\tilde{\Delta}_{\ell,k^*} \triangleq \mu_{k^*} - \phi_{\ell,k^*} > 0$. Similarly, an arm $\ell$ is said to be competitive if $\tilde{\Delta}_{\ell,k^*} = \mu_{k^*} - \phi_{\ell, k^*} \leq 0$. The unique best arm $k^*$ has $\tilde{\Delta}_{k^*,k^*} = \mu_{k^*} - \phi_{k^*, k^*} = 0$ and is counted in the set of competitive arms.\footnote{As $t \rightarrow \infty$, only the optimal arm will remain in $\mathcal{S}_t$, and hence the definition of competitive arms only compares the expected mean of arm $k^*$ and the expected pseudo-reward of arm $k$ with respect to arm $k^*$.}
\end{defn}
We refer to $\tilde{\Delta}_{\ell,k^*}$ as the pseudo-gap of arm $\ell$ in the rest of the paper. These notions of competitiveness are used in the regret analysis in \Cref{sec:regret}. The central idea behind our correlated \textsc{C-BANDIT} approach is that after pulling the optimal arm $k^*$ a sufficiently large number of times, the non-competitive (and thus sub-optimal) arms can be classified as empirically non-competitive with increasing confidence, and thus need not be explored. As a result, the non-competitive arms will be pulled only $\mathrm{O}(1)$ times. However, the competitive arms cannot be discerned as sub-optimal by using only the rewards observed from the optimal arm, and have to be explored $\mathrm{O}(\log T)$ times each. Thus, we are able to reduce a $K$-armed bandit to a $C$-armed bandit problem, where $C$ is the number of competitive arms.\footnote{Observe that $k^*$, and subsequently $C$, are both unknown to the algorithm. Before the start of the algorithm, it is not known which arm is optimal/competitive/non-competitive. The algorithm works in an online manner by evaluating the noisy notions of competitiveness, i.e., empirically competitive arms, and ensures that only $C - 1$ of the arms are pulled $\mathrm{O}(\log T)$ times.} We show this by bounding the regret of the \textsc{C-BANDIT} approach.
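The elimination idea can be sketched in code as follows. This is a simplified reading with our own variable names, not the paper's exact pseudocode: \texttt{emp\_phi[(l, k)]} denotes the empirical average of the pseudo-reward $s_{l,k}(R_k)$ over the samples of arm $k$, with \texttt{emp\_phi[(l, l)]} equal to arm $l$'s own empirical mean.

```python
def competitive_set(t, K, counts, emp_mean, emp_phi):
    """Return the arms that are empirically competitive at round t.
    An arm survives if its tightest empirical pseudo-reward upper bound,
    taken over the significantly pulled arms S_t, is not below the best
    empirical mean within S_t. (A simplified sketch of the C-BANDIT
    elimination step.)"""
    S = [k for k in range(K) if counts[k] >= t / K]
    mu_best = max(emp_mean[k] for k in S)
    return [ell for ell in range(K)
            if min(emp_phi[(ell, k)] for k in S) >= mu_best]

# Toy round: arm 0 has been pulled >= t/K times; its samples suggest arm 1
# cannot beat it (pseudo-reward 0.52 < 0.6) while arm 2 still might (0.7).
counts, emp_mean = [5, 2, 2], [0.6, 0.3, 0.4]
emp_phi = {(0, 0): 0.6, (1, 0): 0.52, (2, 0): 0.7}
print(competitive_set(t=9, K=3, counts=counts, emp_mean=emp_mean, emp_phi=emp_phi))  # [0, 2]
```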
\subsection{Regret Bounds}
In order to bound $\E{Reg(T)}$ in \eqref{eqn:exp_regret}, we can analyze the expected number of times sub-optimal arms are pulled, that is, $\E{n_k(T)}$, for all $k \neq k^*$. \Cref{thm:NonCompetitiveBound} and \Cref{thm:CompetitiveBound} below show that $\E{n_k(T)}$ scales as $O(1)$ and $O(\log T)$ for non-competitive and competitive arms respectively. Recall that a sub-optimal arm is said to be non-competitive if its pseudo-gap $\tilde{\Delta}_{k,k^*}>0$, and competitive otherwise.
\begin{thm}[Expected Pulls of a Non-competitive Arm]
\label{thm:NonCompetitiveBound}
The expected number of times a non-competitive arm with pseudo-gap $\tilde{\Delta}_{k,k^*}$ is pulled by C-UCB is upper bounded as
\begin{align}
\E{n_k(T)} &\leq K t_0 + K^3 \sum_{t= K t_0}^{T} 2 \left(\frac{t}{K}\right)^{-2} + \sum_{t = 1}^{T} 3t^{-3}, \label{eqn:upper_bnd_comp}\\
&= \mathrm{O}(1),
\end{align}
where,
{
\begin{align*}
&t_0 = \inf \bigg\{\tau \geq 2: \Delta_{\text{min}} , \tilde{\Delta}_{k,k^*} \geq 4 \sqrt{\frac{2K\log \tau}{\tau}} \bigg\}.
\end{align*}
}
\end{thm}
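The constant $t_0$ in \Cref{thm:NonCompetitiveBound} can be computed numerically straight from its definition; a sketch, with illustrative gap values of our own choosing:

```python
import math

def t0(K, delta_min, pseudo_gap):
    """Smallest tau >= 2 such that min(delta_min, pseudo_gap)
    >= 4 * sqrt(2 * K * log(tau) / tau)."""
    gap = min(delta_min, pseudo_gap)
    tau = 2
    # the right-hand side decreases to 0 for tau >= 3, so the loop terminates
    while gap < 4 * math.sqrt(2 * K * math.log(tau) / tau):
        tau += 1
    return tau

print(t0(K=2, delta_min=0.6, pseudo_gap=0.5))  # on the order of a few thousand
```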
\begin{thm}[Expected Pulls of a Competitive Arm]
\label{thm:CompetitiveBound}
The expected number of times a competitive arm is pulled by C-UCB algorithm is upper bounded as
\begin{align}
\E{n_k(T)}
&\leq 8 \frac{\log (T)}{\Delta_k^2} + \left(1 + \frac{\pi^2}{3}\right) + \sum_{t = 1}^{T} 2Kt \exp\left(- \frac{t \Delta_{\text{min}}^2}{2 K}\right), \label{eqn:upper_bnd_non_comp}\\
&= \mathrm{O}(\log T) \quad \text{ where } \Delta_{\text{min}} = \min_k \Delta_k > 0.
\end{align}
\end{thm}
Substituting the bounds on $\E{n_k(T)}$ derived in \Cref{thm:NonCompetitiveBound} and \Cref{thm:CompetitiveBound} into \eqref{eqn:exp_regret}, we get the following upper bound on expected regret.
\begin{coro}[Upper Bound on Expected Regret]
\label{thm:upper_bnd_exp_regret}
The expected cumulative regret of the C-UCB and C-TS algorithms is upper bounded as
\begin{align}
\E{Reg(T)} &\leq \sum_{k \in \mathcal{C} \setminus \{k^*\}} \Delta_k U^{(c)}_k(T) + \sum_{k' \in \{ 1, \ldots , K \} \setminus \{ \mathcal{C} \}
}\Delta_{k'} U^{(nc)}_{k'}(T) , \label{eqn:upper_bnd_exp_regret}\\
&= (C-1) \cdot \mathrm{O}(\log T) + \mathrm{O}(1), \label{eqn:upper_bnd_exp_regret_order}
\end{align}
where $\mathcal{C} \subseteq \{ 1, \ldots , K \}$ is the set of competitive arms with cardinality $C$, $U^{(c)}_k (T)$ is the upper bound on $\E{n_k(T)}$ for competitive arms given in \Cref{thm:CompetitiveBound}, and $U^{(nc)}_k(T)$ is the upper bound for non-competitive arms given in \Cref{thm:NonCompetitiveBound}.
\end{coro}
\vspace{0.05cm}
\subsection{Proof Sketch}
We now present an outline of our regret analysis of C-UCB. A key strength of our analysis is that it can be extended very easily to any \textsc{C-BANDIT} algorithm. The results independent of the last step of the algorithm are presented in Appendix B, while the rigorous regret upper bound for C-UCB is presented in Appendix D. We also present a regret analysis for C-TS in the scenario where $K = 2$ and TS is employed with Beta priors in Appendix E.
There are three key components to proving the results in \Cref{thm:NonCompetitiveBound} and \Cref{thm:CompetitiveBound}. The first two components hold independently of which bandit algorithm (UCB/TS/KL-UCB) is used for selecting the arm from the set of competitive arms, which makes our analysis easy to extend to any \textsc{C-BANDIT} algorithm. The third component is specific to the last step of the \textsc{C-BANDIT} algorithm; we analyze it for C-UCB to provide its rigorous regret results.
\noindent
\textbf{i) Probability of optimal arm being identified as empirically non-competitive at round $t$ (denoted by $\Pr(E_1(t))$) is small.} In particular, we show that $$\Pr(E_1(t)) \leq 2Kt \exp\left(-\frac{t\Delta_{\text{min}}^2}{2K}\right).$$ This ensures that the optimal arm is identified as empirically non-competitive only $\mathrm{O}(1)$ times. We show that the number of times a competitive arm is pulled is bounded as
\begin{equation}
\E{n_k(T)} \leq \sum_{t = 1}^T \Pr(E_1(t)) + \Pr(E^c_1(t), k_t = k, I_{k,t-1} > I_{k^*,t-1}).
\end{equation}
The first term sums to a constant, while the second term is upper bounded by the number of times UCB pulls the sub-optimal arm $k$. Due to this, the upper bound on the number of pulls of a competitive arm by C-UCB/C-TS is only an additive constant more than the corresponding upper bound for UCB/TS, and hence we have the same pre-log constants in the upper bound on the pulls of competitive arms.
\noindent
\textbf{ii) Probability of identifying a non-competitive arm as empirically competitive jointly with the optimal arm being pulled more than $\frac{t}{K}$ times is small.} Notice that the first two steps of our algorithm involve identifying the set of arms $\mathcal{S}_t$ that have been pulled at least $\frac{t}{K}$ times, and eliminating arms which are empirically non-competitive with respect to the set $\mathcal{S}_t$ for round $t$. We show that the probability of the joint event that arm $k^* \in \mathcal{S}_t$ and a non-competitive arm $k$ is identified as empirically competitive (and pulled) is small. Formally,
\begin{equation}
\Pr\left(k_{t+1} = k, n_{k^*}(t) \geq \frac{t}{K}\right) \leq t\exp\left(-\frac{t \tilde{\Delta}_{k,k^*}^2}{2K}\right). \label{eqn:eq2}
\end{equation}
This occurs because upon obtaining a \textit{large} number of samples of arm $k^*$, expected reward of arm $k^*$ (i.e., $\mu_{k^*}$) and expected pseudo-reward of arm $k$ with respect to arm $k^*$ (i.e., $\phi_{k,k^*}$) can be estimated \textit{fairly accurately}. Since the pseudo-gap of arm $k$ is positive (i.e., $\mu_{k^*} > \phi_{k,k^*}$), the probability that arm $k$ is identified as empirically competitive is small.
An implication of \eqref{eqn:eq2} is that the expected number of times a non-competitive arm is identified as empirically competitive jointly with the optimal arm having at least $\frac{t}{K}$ pulls at round $t$ is bounded above by a constant.
\noindent
iii) \textbf{Probability that a sub-optimal arm is pulled more than $t/K$ times at round $t$ is small.} Formally, we show that for C-UCB, we have
\begin{equation}
\Pr\left(n_k(t) \geq \frac{t}{K}\right) \leq (2K + 2) \left(\frac{t}{K}\right)^{-2} \quad \forall t > Kt_0, k \neq k^*
\label{eq:eq3}
\end{equation}
This component of our analysis is specific to the classical bandit algorithm used in \textsc{C-BANDIT}. Intuitively, a result of this kind should hold for any \textit{good performing} classical multi-armed bandit algorithm.
We reach the result of \eqref{eq:eq3} in C-UCB by showing that
\begin{equation}
\Pr\left(k_{t+1} = k, n_k(t) > \frac{t}{2K}\right) \leq t^{-3} \quad \forall t > t_0, k \neq k^*
\label{eq:eq4}
\end{equation}
The probability of selecting a sub-optimal arm $k$ after it has been pulled \textit{significantly} many times is small because, with more pulls, the exploration component in the UCB index of arm $k$ becomes small; consequently, the index is likely to be smaller than the UCB index of the optimal arm $k^*$ (which either has a larger empirical mean reward or has been pulled fewer times). Our analysis in \Cref{lem:suboptimalNotPulled} shows how the result in \eqref{eq:eq4} can be translated to obtain \eqref{eq:eq3} (this translation is again not dependent on which bandit algorithm is used in \textsc{C-BANDIT}).
We show that the expected number of pulls of a non-competitive arm $k$ can be bounded as
\begin{equation}
\E{n_k(T)} \leq \sum_{t = 1}^{T} \left[\Pr\left(k_{t+1} = k, k^* = \argmax_{k'} n_{k'}(t)\right) + \Pr\left(k^* \neq \argmax_{k'} n_{k'}(t) \right)\right]
\label{eqn:noncompNum}
\end{equation}
The first term in \eqref{eqn:noncompNum} is $\mathrm{O}(1)$ due to \eqref{eqn:eq2} and the second term is $\mathrm{O}(1)$ due to \eqref{eq:eq3}. Refer to Appendix D for a rigorous regret analysis of C-UCB.
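The three components above combine into a single selection step. The following Python sketch is our own illustrative reconstruction of that step (the function and variable names are ours; the exact confidence bound and tie-breaking used in C-UCB may differ):

```python
import math

def c_ucb_round(t, counts, means, pseudo_means, K):
    """One selection step in the spirit of C-UCB (illustrative sketch).

    counts[k]       : number of pulls of arm k so far
    means[k]        : empirical mean reward of arm k
    pseudo_means[k] : empirical pseudo-reward of arm k w.r.t. the
                      empirically most-pulled arm (unused for that arm itself)
    """
    k_max = max(range(K), key=lambda k: counts[k])  # candidate optimal arm
    # An arm is empirically competitive if its empirical pseudo-reward
    # w.r.t. k_max is at least the empirical mean reward of k_max.
    competitive = [k for k in range(K)
                   if k == k_max or pseudo_means[k] >= means[k_max]]

    # Standard UCB index, restricted to the empirically competitive set.
    def ucb(k):
        if counts[k] == 0:
            return float("inf")
        return means[k] + math.sqrt(2 * math.log(t) / counts[k])

    return max(competitive, key=ucb)
```

With the values of \Cref{tab:comp}, an arm whose empirical pseudo-reward falls below the empirical mean of the most-pulled arm is excluded from the UCB comparison, which is exactly why non-competitive arms are pulled only $\mathrm{O}(1)$ times.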
\subsection{Discussion on Regret Bounds}
\normalfont
\noindent
\textbf{Competitive Arms.} Recall that an arm $k$ is said to be competitive if its expected pseudo-reward with respect to the optimal arm is at least the expected reward of the optimal arm, i.e., $\phi_{k,k^*} \geq \mu_{k^*}$, and non-competitive otherwise. Since the reward distribution of each arm is unknown, the algorithm initially does not know which arms are \textit{competitive} and which are \textit{non-competitive}.
\vspace{0.1cm}
\noindent
\textbf{Reduction in effective number of arms.} Interestingly, our result from \Cref{thm:NonCompetitiveBound} shows that the C-UCB algorithm, which operates in a sequential fashion, ensures that \textit{non-competitive} arms are pulled only $\mathrm{O}(1)$ times. Consequently, only the competitive arms are pulled $\mathrm{O}(\log T)$ times. Moreover, the pre-log terms in the upper bounds of UCB and C-UCB for these arms are the same. In this sense, our \textsc{C-BANDIT} approach reduces a $K$-armed bandit problem to a $C$-armed bandit problem: effectively, only $C-1 \leq K-1$ arms are pulled $\mathrm{O}(\log T)$ times, while the other arms stop being pulled after a finite time.
\begin{table}[]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{$p_1(r)$} & \textbf{r} & \textbf{$s_{2,1}(r)$} & \textbf{$s_{3,1}(r)$} \\ \hline
0.2 & \textbf{0} & 0.7 & 2 \\ \hline
0.2 & \textbf{1} & 0.8 & 1.2 \\ \hline
0.6 & \textbf{2} & 2 & 1 \\ \hline
\end{tabular}
\caption{Suppose Arm 1 is optimal and its unknown probability distribution over $r \in \{0,1,2\}$ is $(0.2,0.2,0.6)$; then $\mu_1 = 1.4$, while $\phi_{2,1} = 1.5$ and $\phi_{3,1} = 1.2$. Hence, Arm 2 is competitive while Arm 3 is non-competitive.}
\label{tab:comp}
\vspace{-0.2cm}
\end{table}
\noindent
Depending on the joint probability distribution, different arms can be optimal, competitive, or non-competitive. \Cref{tab:comp} shows a case where Arm 1 is optimal and its reward distribution is $(0.2,0.2,0.6)$, which leads to $\mu_1 = 1.4 > \phi_{3,1} = 1.2$ and $\mu_1 = 1.4 < \phi_{2,1} = 1.5$. Hence, Arm 2 is competitive while Arm 3 is non-competitive.
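As a sanity check on \Cref{tab:comp}, the expected reward and expected pseudo-rewards can be computed directly from the table (the variable names below are ours):

```python
# Values taken from Table 1; variable names are ours.
p1      = [0.2, 0.2, 0.6]   # reward distribution of (optimal) Arm 1 over r = 0, 1, 2
rewards = [0.0, 1.0, 2.0]
s21     = [0.7, 0.8, 2.0]   # pseudo-rewards of Arm 2 given Arm 1's reward
s31     = [2.0, 1.2, 1.0]   # pseudo-rewards of Arm 3 given Arm 1's reward

mu1   = sum(p * r for p, r in zip(p1, rewards))  # expected reward of Arm 1
phi21 = sum(p * s for p, s in zip(p1, s21))      # expected pseudo-reward of Arm 2
phi31 = sum(p * s for p, s in zip(p1, s31))      # expected pseudo-reward of Arm 3
```

Since $\phi_{2,1} > \mu_1 > \phi_{3,1}$, Arm 2 is competitive and Arm 3 is non-competitive, consistent with the table caption.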
\vspace{0.1cm}
\noindent
\textbf{Achieving Bounded Regret.}
If the set of competitive arms $\mathcal{C}$ is a singleton containing only the optimal arm (i.e., the number of competitive arms $C = 1$), then our algorithm achieves (see \eqref{eqn:upper_bnd_exp_regret_order}) an expected regret of $\mathrm{O}(1)$, instead of the typical $\mathrm{O}(\log T)$ regret scaling of classical multi-armed bandits. One scenario in which this happens is when the pseudo-rewards $s_{k,k^*}$ of all arms with respect to the optimal arm $k^*$ match the corresponding conditional expectations. Formally, if $s_{k,k^*} = \E{R_k | R_{k^*}}$ for all $k$, then $\E{s_{k,k^*}} = \E{R_k} = \mu_k < \mu_{k^*}$. Consequently, all sub-optimal arms are non-competitive and our algorithms achieve $\mathrm{O}(1)$ regret. We now evaluate a lower bound for a special case of our model, where rewards are correlated through a latent random variable $X$ as described in \Cref{subsec:specialCase}.
We present a lower bound on the expected regret for the model described in \Cref{subsec:specialCase}. Intuitively, if an arm $\ell$ is \textit{competitive}, it cannot be deemed sub-optimal by pulling only the optimal arm $k^*$ infinitely many times; hence, exploration is necessary for competitive sub-optimal arms. The proof of this bound closely follows that of the 2-armed classical bandit problem \cite{lai1985asymptotically}; i.e., we construct a new bandit instance under which a previously sub-optimal arm becomes optimal without affecting the reward distribution of any other arm.
\begin{thm}[Lower Bound for Correlated MAB with latent random source]
\label{thm:lower_bnd_exp_regret}
For any algorithm that achieves a sub-polynomial regret, the expected cumulative regret for the model described in \Cref{subsec:specialCase} is lower bounded as
\begin{equation}
\liminf_{T \rightarrow \infty} \frac{\E{Reg (T)}}{\log (T)} \geq
\begin{cases}
\max_{k \in \mathcal{C}}\frac{\Delta_k}{D(f_{R_k} || f_{\tilde{R}_k})} \quad &\text{if } C > 1\\
0 \quad &\text{if } C = 1.
\end{cases}
\vspace{-0.2cm}
\end{equation}
\label{thm:lowerBound}
\end{thm}
\vspace{-0.2cm}
Here $f_{R_k}$ is the reward distribution of arm $k$, which is linked with $f_X$ since $R_k = Y_k(X)$. The term $f_{\tilde{R}_{k}}$ represents the reward distribution of arm $k$ in the new bandit instance where arm $k$ becomes optimal and the distribution $f_{R_{k^{*}}}$ is unaffected. The divergence term represents ``the amount of distortion needed in the reward distribution of arm $k$ to make it better than arm $k^*$,'' and hence captures the problem difficulty in the lower bound expression.
\vspace{0.1cm}
\noindent
\textbf{Bounded regret whenever possible for the special case of \Cref{subsec:specialCase}.}
From \Cref{thm:upper_bnd_exp_regret}, we see that whenever $C > 1$, our proposed algorithm achieves $\mathrm{O}(\log T)$ regret, matching the lower bound given in \Cref{thm:lowerBound} order-wise. Also, when $C = 1$, our algorithm achieves $\mathrm{O}(1)$ regret. Thus, our algorithm achieves bounded regret whenever possible, i.e., when $C = 1$, for the model described in \Cref{subsec:specialCase}. In the general problem setting, a lower bound of $\Omega(\log T)$ exists whenever it is possible to change the joint reward distribution such that the marginal distribution of the optimal arm $k^*$ is unaffected and the pseudo-rewards $s_{\ell,k}(r)$ still upper-bound $\E{R_{\ell} | R_k = r}$ under the new joint probability distribution. In general, this can happen even if $C = 1$; we discuss one such scenario in Appendix F and explain the algorithmic challenges that must be addressed to meet the lower bound.
\section{Introduction}
Autonomous driving has tremendous potential to improve traffic safety and efficiency, as well as to radically change future mobility and transportation. In the architecture of automated vehicles, decision-making is a key component to realize autonomy \cite{pendleton2017perception}.
Although the rule-based method has achieved fair performance in specific driving scenarios like DARPA \cite{buehler2009darpa}, it is non-trivial to design behavioral rules manually in a complex driving environment. Inspired by the success of reinforcement learning (RL) on games and robotic control \cite{mnih2015DQN,silver2017mastering}, RL-enabled methods have become a powerful learning framework in autonomous driving, capable of learning complicated policies for high dimensional environments.
As a pioneering work, Lillicrap \emph{et al.} (2016) proposed the Deep Deterministic Policy Gradient (DDPG) algorithm and employed it to realize lane-keeping on the TORCS platform with simulated images as policy inputs \cite{lillicrap2015DDPG}. Inspired by this study, Wayve (2018) applied the DDPG algorithm to a real vehicle equipped with a monocular camera, achieving a fairly good lane-following effect on a 250m-long rural road \cite{wayve}. Wolf \emph{et al.} (2017) employed a Deep Q-network (DQN) to learn a Q-value network for lane-keeping, which maps the simulated image to the expected return of 5 different steering angle values. At each instant, the ego vehicle selects and executes the steering angle with the highest Q-value \cite{wolf2017learning}. Besides, other RL algorithms, such as A3C \cite{perot2017end,jaritz2018end}, inverse RL \cite{zou2018inverse}, and SAC \cite{chen2019SACdriving}, have also been utilized to learn policies that make decisions based on simulated images.
The common feature of the aforementioned works is that they utilize raw sensor information, such as images or point clouds, as driving states, which are mapped to the corresponding control commands through the learned policy. This is a typical end-to-end decision-making framework originating from DQN \cite{mnih2015DQN}, which maintains an integrated perception and decision-making module and eliminates the need for manually designed driving states. However, the data generated in driving simulators during training usually differs greatly from real vehicle sensor data. This discrepancy tends to reduce robustness and limits scalability to practical applications. Additionally, to learn a policy from raw information, RL algorithms must simultaneously tackle two problems: (i) extracting critical indicators, such as speed or the relative distance between vehicles, as the underlying state representation, and (ii) learning to maximize the expected return. This places a heavy burden on the learning process, especially in a complex driving environment. Therefore, RL-based end-to-end decision-making methods are usually applied to simple driving tasks, such as lane keeping wherein no other surrounding vehicles are involved \cite{lillicrap2015DDPG,wayve,wolf2017learning,perot2017end,jaritz2018end, zou2018inverse} and path tracking \cite{chen2019SACdriving}.
Isele \emph{et al.} (2018) showed that compared with end-to-end decision-making that takes raw sensors data as inputs, real-valued representations, such as relative speed and distance from surrounding vehicles, can greatly improve the driving performance, due to the reduced state space being easier to learn and the meaningful indicators helping the system to generalize \cite{isele2018navigating}.
Under this scheme, Wang \emph{et al.} (2017) combined the recurrent neural network and the DQN algorithm to learn a policy for on-ramp merging, which outputs acceleration and steering angle based on the indicators of two surrounding vehicles on the target lane \cite{wang2017formulation}. Duan \emph{et al.} (2020) established a 26-dimensional state vector, consisting of the indicators of the four nearest surrounding vehicles, as well as the road and destination information \cite{duan2020hierarchical}. Based on this state description and a parallel RL algorithm, the ego vehicle successfully learned to execute multiple behaviors, such as car-following, lane-changing, and overtaking, on a simulated two-lane highway.
Guan \emph{et al.} (2020) developed a cooperative control for 8 connected automated vehicles at an unsignalized intersection \cite{GUAN}, where each vehicle is represented by its velocity and distance to the intersection center, thereby formulating a 16-dimensional state vector. The indicators of each vehicle are sorted according to a pre-designed order, and the Proximal Policy Optimization (PPO) algorithm \cite{schulman2017PPO} is adopted to obtain the final converged policy.
Although existing RL methods based on vector-based state representations have attained some elegant demonstrations in complex traffic scenarios, they still suffer from two inevitable shortcomings resulting from the high dynamics of surrounding vehicles, namely, (1) dimension sensitivity and (2) permutation sensitivity. Dimension sensitivity means that RL algorithms relying on approximation functions, such as multi-layer neural networks (NNs), require the input dimensions of the policy and value functions to be fixed. However, a state vector with an a priori fixed dimension poorly reflects the surrounding vehicles, because their number and relative positions within the perception range usually change dynamically during driving. Existing studies \cite{wang2017formulation,mirchevska2018high,duan2020hierarchical,wang2018reinforcement, wang2019continuous,GUAN} usually meet the fixed-dimension requirement by removing the information of some vehicles or adding virtual vehicles. Obviously, the former causes a loss of information, while the latter leads to a redundant representation.
Permutation sensitivity means that for the same driving environment, different orders of the surrounding vehicles in the state vector formulate distinct state vectors, resulting in quite different policy outputs. This unreasonable phenomenon hampers policy learning. One intuitive approach is to manually design a permutation rule for all surrounding vehicles, for example, sorting vehicles in increasing order of their relative distance from the ego vehicle \cite{duan2020hierarchical, GUAN}. However, this imposes the burden of manually designing permutation rules separately for different driving scenarios, and also introduces discontinuities in the state vectors, which tend to reduce policy performance.
Moreover, most RL-based decision-making algorithms have been developed and work well on sorts of simulators like CARLA \cite{dosovitskiy2017carla} and SUMO \cite{SUMO2018}, but their effectiveness on real vehicles has rarely been validated.
In this paper, we propose a new RL algorithm for autonomous driving decision-making, called encoding distributional soft actor-critic (E-DSAC), which addresses the dimension sensitivity and permutation sensitivity problems mentioned above. The contributions and novelty of this paper are summarized as follows:
\begin{enumerate}
\item Inspired by the existing permutation invariant state representation method that introduces a feature NN to encode the indicators of each surrounding vehicle into an encoding vector \cite{duan2021fixeddimensional}, an encoding distributional policy iteration (DPI) framework is developed by embedding the encoding module in the distributional RL framework. The proposed DPI framework is proved to exhibit important properties in terms of convergence and global optimality.
\item Based on the developed encoding DPI framework, we propose the encoding DSAC (E-DSAC) algorithm by adding the gradient-based update rule of the feature NN to the policy evaluation process of the DSAC algorithm \cite{duan2021distributional}. Compared with existing RL-based decision-making methods \cite{wang2017formulation,mirchevska2018high,duan2020hierarchical,wang2018reinforcement, wang2019continuous,GUAN}, E-DSAC is suitable for situations where the number of surrounding vehicles varies, and it eliminates the need for manually pre-designed sorting rules, leading to higher policy performance and generality.
\item The multi-lane driving task and the corresponding reward function are designed to verify the effectiveness of the proposed algorithm. Results show that the policy learned by E-DSAC realizes efficient, smooth, and relatively safe autonomous driving in the designed scenario, and its final policy performance is about three times that of DSAC. Furthermore, its effectiveness has also been verified in real vehicle experiments.
\end{enumerate}
The paper is organized as follows. In Section \ref{sec.preliminary}, we introduce the key notations and some preliminaries. Section \ref{sec:method} develops the encoding DPI framework and proposes the E-DSAC algorithm. Section \ref{sec:simulation} presents the simulation results in a four-lane highway driving scenario that show the efficacy of E-DSAC, and Section \ref{sec:real_veh_test} demonstrates the performance of E-DSAC in real vehicle experiments. Finally, Section \ref{sec:conclusion} concludes this paper.
\section{Preliminaries}
\label{sec.preliminary}
In this section, we first introduce the basic principles of reinforcement learning (RL) and the distributional soft actor-critic (DSAC) algorithm. Then, we will show some key notations about the driving state representation.
\subsection{Basic Principles of RL}
\label{sec.notation}
We describe the autonomous driving process in a standard RL setting wherein an agent interacts with an environment in discrete time.
At the current state $s_t$, the agent takes action $a_t$ according to policy $\pi$, and the environment then returns the next state $s_{t+1}$ according to the environment model $p(s_{t+1}|s_t,a_t)$, i.e., $s_{t+1} \sim p(s_{t+1}|s_t,a_t)$, together with a scalar reward $r_t$. This process repeats until the episode ends. In this paper, the policy is assumed to be stochastic, denoted as $\pi(a_t|s_t)$, which maps a given state to a probability distribution over actions. To ease the exposition, the current and next state-action pairs are also denoted as $(s,a)$ and $(s',a')$, respectively, and we use $\rho_{\pi}$ to denote the state or state-action distribution induced by policy $\pi$.
The standard RL objective is to learn a policy that maximizes the expected accumulated return. This paper employs a more general entropy-augmented policy objective, widely used in soft RL methods \cite{Haarnoja2017Soft-Q,Haarnoja2018SAC,duan2021distributional}, to encourage exploration and further improve performance:
\begin{equation}
\label{eq.policy_objective}
J_{\pi} = \mathop{\mathbb E}\displaylimits_{(s_{i \ge t},a_{i \ge t})\sim \rho_{\pi}}\Big[\sum^{\infty}_{i=t}\gamma^{i-t} [r_i+\alpha\mathcal{H}(\pi(\cdot|s_i))]\Big],
\end{equation}
where $\gamma \in (0, 1)$ is the discount factor, $\mathcal{H}(\pi(\cdot|s))=\mathop{\mathbb E}\displaylimits_{a\sim \pi}[-{\rm log} \pi(a|s)$] is the policy entropy, $\alpha$ is the coefficient that determines the relative importance of the entropy term against the reward. Under this scheme, we define the Q-value as
\begin{equation}
\label{eq.Q_definition}
\begin{aligned}
&Q^{\pi}(s_t,a_t)\\
&=\mathop{\mathbb E}\displaylimits_{(s_{i \ge t},a_{i \ge t})\sim\pi}\Big[r(s_t, a_t)+\sum^{\infty}_{i=t+1}\gamma^{i-t} [r(s_i,a_i)-\alpha {\rm log} \pi(a_i|s_i)]\Big]\\
&=r(s_t, a_t)+\gamma \mathop{\mathbb E}\displaylimits_{s'\sim p,a'\sim\pi}[Q^{\pi}(s',a') -\alpha {\rm log} \pi(a'|s') ].
\end{aligned}
\end{equation}
This Q-value indicates the return for choosing $a_t$ in state $s_t$ and thereafter following policy $\pi$. The objective \eqref{eq.policy_objective} of soft RL can then be rewritten as
\begin{equation}
\label{eq.policy_imp}
\pi_{\rm{new}}=\arg\max_{\pi} \mathop{\mathbb E}\displaylimits_{s\sim \rho_{\pi},a\sim \pi}\big[Q^{\pi}(s,a)-\alpha \log\pi(a|s)\big].
\end{equation}
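For a finite action set, the maximizer of the entropy-augmented objective in \eqref{eq.policy_imp} has the well-known closed form $\pi(a) \propto \exp(Q(s,a)/\alpha)$, i.e., a Boltzmann policy. The following Python sketch (function and variable names are ours) illustrates this for a single state:

```python
import math

def soft_objective(pi, q, alpha):
    """E_{a~pi}[ Q(a) - alpha * log pi(a) ] over a discrete action set."""
    return sum(p * (qa - alpha * math.log(p))
               for p, qa in zip(pi, q) if p > 0)

def soft_optimal_policy(q, alpha):
    """Closed-form maximizer of the soft objective: softmax(Q / alpha)."""
    z = [math.exp(qa / alpha) for qa in q]
    total = sum(z)
    return [zi / total for zi in z]
```

Any other distribution attains a lower value of the entropy-augmented objective, which is easy to verify numerically; as $\alpha \rightarrow 0$ the Boltzmann policy approaches the greedy policy.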
\subsection{Distributional Soft Actor-Critic}
\label{sec.DSAC}
Under the framework of soft RL, we proposed the distributional soft actor-critic (DSAC) algorithm in 2021, which achieved state-of-the-art performance in many continuous control tasks \cite{duan2021distributional}. Unlike mainstream RL algorithms that learn only the expected return, i.e., the Q-value $Q(s,a)$, DSAC learns the distribution of the return to reduce overestimation of the value function, thereby improving policy performance. In DSAC, we define the state-action return as
\begin{equation}
Z^{\pi}(s_t,a_t)=r(s_t,a_t)+\sum^{\infty}_{i=t+1}\gamma^{i-t} [r(s_i,a_i)-\alpha {\rm log} \pi(a_i|s_i)],
\end{equation}
which is a random variable due to the randomness in the state transition $p$ and policy $\pi$. We define $\mathcal{Z}^{\pi}(Z^{\pi}(s,a)|s,a): \mathcal{S}\times\mathcal{A}\rightarrow \mathcal{P}(Z^{\pi}(s,a))$ as a mapping from $(s,a)$ to a distribution over state-action returns, and call it the state-action return distribution or distributional value function. Obviously, the expectation of the return distribution is the Q-value in \eqref{eq.Q_definition}:
\begin{equation}
\label{eq.Q_and_distribution}
Q^{\pi}(s,a)=\mathop{\mathbb E}\displaylimits_{\substack{Z\sim \mathcal{Z}}}[Z^{\pi}(s,a)].
\end{equation}
The return distribution satisfies the following distributional Bellman operator
\begin{equation}
\label{eq.bellman}
\mathcal{T^{\pi}}Z^\pi(s,a) \overset{D}{=}r(s,a)+\gamma( Z^\pi(s',a')-\alpha\log\pi(a'|s')),
\end{equation}
where $s'\sim p$, $a'\sim \pi$, $A \overset{D}{=} B$ denotes that two random variables $A$ and $B$ have equal probability laws. To implement \eqref{eq.bellman}, the return distribution can be learned by
\begin{equation}
\label{eq.policy_eva}
\mathcal{Z}_{\rm{new}} = \arg\min_{\mathcal{Z}}\mathop{\mathbb E}\displaylimits_{\rho_\pi}\big[D_{\rm KL}(\mathcal{T}^{\pi} \mathcal{Z}_{\rm{old}}(\cdot|s,a),\mathcal{Z}(\cdot|s,a))\big],
\end{equation}
where $\mathcal{T^{\pi}}Z(s,a)\sim\mathcal{T}^{\pi} \mathcal{Z}(\cdot|s,a)$ and $D_{\rm KL}$ represents the Kullback-Leibler (KL) divergence.
It has been proved that the distributional RL (DRL) framework that alternates between distributional policy evaluation based on \eqref{eq.policy_eva} and policy improvement based on \eqref{eq.policy_imp} will lead to the maximum entropy objective in \eqref{eq.policy_objective} \cite{duan2021distributional}.
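Since DSAC models the return distribution as a Gaussian, the KL term in \eqref{eq.policy_eva} admits a closed form between two univariate Gaussians. The following Python sketch (names are ours; the target construction is simplified to the mean of a single sampled transition, following the Q-value definition above) illustrates both pieces:

```python
import math

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """Closed-form KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) )."""
    return (math.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sigma_q ** 2)
            - 0.5)

def td_target_mean(r, gamma, alpha, z_next_mean, logp_next):
    """Mean of the soft distributional Bellman target for one transition:
    r + gamma * (Z(s', a') - alpha * log pi(a'|s')), evaluated at the mean.
    Illustrative only; DSAC matches the full target distribution via the KL loss."""
    return r + gamma * (z_next_mean - alpha * logp_next)
```

In practice the target distribution's parameters come from a (delayed) target network, and the KL loss is minimized by gradient descent on the critic parameters.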
\subsection{State Representation of Autonomous Driving}
In addition to the basic RL algorithm, to implement RL in autonomous driving, one essential task is to design the state $s$ so that it reasonably describes the driving task based on the given observed information. Given a typical driving task, the observation $\mathcal{O}\in \overline{\mathcal{O}}$ from the driving environment consists of two components: (a) the information set of surrounding vehicles $\mathcal{X}=\{x_1,x_2,...,x_M\}$, where $x_{i} \in \mathbb{R}^{d_1}$ is the real-valued indicator vector of the $i$th vehicle, and (b) the feature vector containing other indicators related to the ego vehicle and road geometry, denoted by $x_{\rm else} \in \mathbb{R}^{d_2}$. Thus, we can write $\mathcal{O}=\{\mathcal{X},x_{\rm else}\}$. Note that in this setting, the observation $\mathcal{O}$ can be seen as the raw state of the autonomous driving problem, which is assumed to contain all the information required for policy learning. Therefore, the aforementioned equations in Sections \ref{sec.notation} and \ref{sec.DSAC} also hold for $\mathcal{O}$. The reward function can be denoted as $r(\mathcal{O}, a, \mathcal{O'})$ in this case.
During driving, the set size $M$ of $\mathcal{X}$, i.e., the number of surrounding vehicles within the perception range of the ego car, changes constantly due to the dynamic nature of traffic. Assuming that the number of surrounding vehicles ranges over $[1,N]$, the space of $\mathcal{X}$ can be denoted as $\overline{\mathcal{X}}=\{\mathcal{X}|\mathcal{X}=\{x_1,\cdots,x_{M}\},x_i\in\mathbb{R}^{d_1},i\le M,M\in[1,N]\cap\mathbb{N}\}$, i.e., $\mathcal{X}\in\overline{\mathcal{X}}$. Note that the subscript $i$ of $x_{i}$ in $\mathcal{X}$ represents the ID of a certain vehicle. We denote the mapping from the observation $\mathcal{O}$ to the state vector $s$, which is taken as the policy and value input, as $U$, that is,
\begin{equation}
\label{eq.mapping_s}
s=U(\SPACE{O})=U(\SPACE{X},x_{\rm else}).
\end{equation}
One widely used mapping method, called fixed-permutation (FP) representation, is to directly concatenate the variables in $\mathcal{O}$, i.e.,
\begin{equation}
\label{eq.order_mapping}
s= U_{\rm FP}(\SPACE{O})=[x_{o(1)}^{\top},\dots,x_{o(M)}^{\top},x_{\rm else}^{\top}]^{\top},
\end{equation}
where $o$ denotes the pre-designed sorting rule, for instance, the surrounding vehicles are arranged in increasing order according to their relative distance from the ego.
As mentioned before, RL algorithms based on $U_{\rm FP}(\SPACE{O})$ suffer from two challenges: (1) dimension sensitivity and (2) permutation sensitivity. On the one hand, from \eqref{eq.order_mapping}, dimension sensitivity means that a different number of surrounding vehicles leads to a different state dimension, i.e., ${\rm dim}(s)=Md_1+d_2$. Since the input dimension of the policy and value functions must be fixed due to the structural limits of approximate functions, such as multi-layer neural networks (NNs), RL methods relying on \eqref{eq.order_mapping} can only consider a fixed number of surrounding vehicles. On the other hand, permutation sensitivity means that different permutations $o$ of $x_i$ correspond to different state vectors $s$, leading to different policy outputs. However, a reasonable driving decision should be invariant to the order of objects in $\mathcal{X}$, because all possible permutations correspond to the same driving scenario.
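Both issues are easy to see in code. The following minimal numpy sketch of the FP mapping $U_{\rm FP}$ in \eqref{eq.order_mapping} uses toy numbers of our own choosing (the helper name is also ours):

```python
import numpy as np

def fp_state(X, x_else, order):
    """Fixed-permutation (FP) state: concatenate per-vehicle indicator
    vectors in the given order, then append x_else."""
    return np.concatenate([X[i] for i in order] + [x_else])

d1, d2 = 4, 3
x_else = np.zeros(d2)
# Three surrounding vehicles with distinct indicator vectors.
X = [np.arange(d1, dtype=float) + 10.0 * i for i in range(3)]

# dim(s) = M * d1 + d2 grows with the number of vehicles M ...
s_three = fp_state(X, x_else, order=[0, 1, 2])
s_two = fp_state(X[:2], x_else, order=[0, 1])
# ... and different orders of the same scene give different state vectors:
s_reordered = fp_state(X, x_else, order=[2, 0, 1])
```

The same scene thus produces state vectors of varying dimension and content, which is precisely what the ESC representation in the next section avoids.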
\section{Encoding DSAC}
\label{sec:method}
To handle the dimension sensitivity and permutation sensitivity problems, this section proposes a permutation-invariant version of DSAC for self-driving decision-making, called encoding DSAC (E-DSAC), which embeds the permutation invariant state representation into DSAC.
\subsection{Permutation Invariant State Representation}
Firstly, we introduce a fixed-dimensional and permutation invariant state representation method called encoding sum and concatenation (ESC) \cite{duan2021fixeddimensional}. We employ a feature NN $h(\VECTOR{x};\phi )$ to encode the indicators of each surrounding vehicle $\VECTOR{x}\in\SPACE{X}$,
\begin{equation}
\VECTOR{x}_{\rm encode}=h(\VECTOR{x};\phi ),
\end{equation}
where $\VECTOR{x}_{\rm encode} \in \mathbb{R}^{d_3}$ is the corresponding encoding vector of each $\VECTOR{x} \in \mathbb{R}^{d_1}$ in the set $\SPACE{X}$, $\phi$ represents the parameters of feature NN.
Then, we obtain the representation vector $x_{\rm set}$ of all surrounding vehicles by summing the encoding vector of each surrounding vehicle
\begin{equation}
\label{eq04:PI_encode}
\VECTOR{x}_{\rm set}=\sum_{\VECTOR{x}\in\SPACE{X}}h(\VECTOR{x};\phi ).
\end{equation}
After concatenating with other indicators related to the ego vehicle and road geometry $x_{\rm else}$, we can obtain the final state representation $s$,
\begin{equation}
\label{eq.pi_state}
\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi )=[\VECTOR{x}_{\rm set}^\top,\VECTOR{x}_{\rm else}^\top]^\top=\Big[\sum_{\VECTOR{x}\in\SPACE{X}}h^\top(\VECTOR{x};\phi ),\VECTOR{x}_{\rm else}^\top\Big]^\top.
\end{equation}
From \eqref{eq.pi_state}, it is clear that $\text{dim}(s)=\text{dim}(h_{\phi })+\text{dim}(x_{\rm else })=d_2+d_3$ for $\forall M \in [1,N]$. In other words, $U_{\rm ESC}(\SPACE{O};\phi )$ is fixed-dimensional. Furthermore, the summation operator in \eqref{eq04:PI_encode} is permutation invariant w.r.t. the objects in $\SPACE{X}$. Thus, $U_{\rm ESC}(\SPACE{O};\phi )$ is a fixed-dimensional and permutation invariant state representation of the observation $\mathcal{O}$. More importantly, the injectivity of $U_{\rm ESC}(\SPACE{O};\phi )$ can be guaranteed by carefully designing the architecture of the feature NN.
\begin{lemma}\label{lemma.encoding}
(Injectivity of ESC\cite{duan2021fixeddimensional}). Let $\SPACE{O}=\{\SPACE{X},\VECTOR{x}_{\rm else}\}$, where $\VECTOR{x}_{\rm else}\in\mathbb{R}^{d_2}$ and $\mathcal{X}=\{x_1,x_2,...,x_M\}$. Denote the space of $\SPACE{X}$ as $\overline{\SPACE{X}}$, where $\overline{\mathcal{X}}=\{\mathcal{X}|\mathcal{X}=\{x_1,\cdots,x_{M}\},x_i\in[c_{\rm min},c_{\rm max}]^{d_1},i\le M,M\in[1,N]\cap\mathbb{N}\}$, in which $c_{\rm min}$ and $c_{\rm max}$ are the lower and upper bounds of all elements in $\forall\VECTOR{x_i}$, respectively. Note that the size $M$ of the set $\mathcal{X}$ is variable. If the feature NN $h(\VECTOR{x};\VECTOR{\phi}):\mathbb{R}^{d_1}\rightarrow\mathbb{R}^{d_3}$ is over-parameterized (i.e., the number of hidden neurons is sufficiently large) with a linear output layer, and its output dimension $d_3\ge Nd_1+1$, there always $\exists \phi^{\dagger}$ such that the mapping $U_{\rm ESC}(\mathcal{O};\phi^{\dagger}): \overline{\mathcal{X}}\times \mathbb{R}^{d_2}\rightarrow \mathbb{R}^{d_3+d_2}$ in \eqref{eq.pi_state} is injective.
\end{lemma}
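To make the ESC mapping concrete, the following numpy sketch builds $U_{\rm ESC}$ with arbitrary, untrained weights (all names and sizes are our own choices; in E-DSAC the feature network is learned jointly) and illustrates that the state dimension stays fixed as the set size varies and that the order of vehicles does not matter:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random feature network h(x; phi): R^{d1} -> R^{d3}
# with one hidden tanh layer and a linear output layer.
d1, d2, d3, hidden = 4, 3, 9, 32
W1 = rng.standard_normal((hidden, d1)); b1 = rng.standard_normal(hidden)
W2 = rng.standard_normal((d3, hidden))          # linear output layer

def h(x):
    return W2 @ np.tanh(W1 @ x + b1)

def esc_state(X, x_else):
    """U_ESC: sum the per-vehicle encodings, then concatenate x_else.
    Works for any number of surrounding vehicles and is permutation
    invariant by construction."""
    x_set = sum(h(x) for x in X)
    return np.concatenate([x_set, x_else])

x_else = rng.standard_normal(d2)
X3 = [rng.standard_normal(d1) for _ in range(3)]
X5 = [rng.standard_normal(d1) for _ in range(5)]
s3, s5 = esc_state(X3, x_else), esc_state(X5, x_else)   # same dimension
s3_perm = esc_state([X3[2], X3[0], X3[1]], x_else)      # same state
```

Note that Lemma 1 additionally requires $d_3 \ge N d_1 + 1$ and learned weights for injectivity; the random weights here only demonstrate the fixed dimension and permutation invariance.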
\subsection{Encoding Distributional Policy Iteration}
We assume that the random return $Z(s,a)$ and the action $a$ obey Gaussian distributions. Therefore, both the state-action return distribution and the policy are modeled as Gaussians with mean and covariance given by NNs, denoted as $\mathcal{Z}(\cdot|s,a;\theta)$ and $\pi(\cdot|s;\omega)$, where $\theta$ and $\omega$ are parameters. For ease of presentation, we also denote $\mathcal{Z}(\cdot|s,a;\theta)$, $\pi(\cdot|s;\omega)$, and $h(x;\phi)$ as $\mathcal{Z}_{\theta}(\cdot|s,a)$, $\pi_{\omega}(\cdot|s)$, and $h_{\phi}(x)$, respectively, when it is clear from context.
By taking $U_{\rm ESC}(\SPACE{O};\phi )$ as the input of $\pi_{\omega }$, the policy function can be expressed as
\begin{equation}
\begin{aligned}
\pi(U_{\rm ESC}(\SPACE{O};\phi );\omega )=\pi(\sum_{\VECTOR{x}\in\SPACE{X}}h(\VECTOR{x};\phi ),\VECTOR{x}_{\rm else};\omega ).
\end{aligned}
\end{equation}
Similarly, the return distribution $\mathcal{Z}(s,a)$ can be formalized as
\begin{equation}
\begin{aligned}
\mathcal{Z}(U_{\rm ESC}(\SPACE{O};\phi );\theta )=\mathcal{Z}(\sum_{\VECTOR{x}\in\SPACE{X}}h(\VECTOR{x};\phi ),\VECTOR{x}_{\rm else};\theta ).
\end{aligned}
\end{equation}
It is clear that both $\pi_\omega(U_{\rm ESC}(\SPACE{O};\phi ))$ and $\mathcal{Z}_{\theta}(U_{\rm ESC}(\SPACE{O};\phi ))$ are permutation invariant to objects in $\mathcal{X}$. This enables us to use DSAC to learn a permutation invariant policy for autonomous driving considering variable surrounding vehicles.
By embedding the ESC state representation into the DRL framework, the objective of distributional policy evaluation in \eqref{eq.policy_eva} can be rewritten as
\begin{equation}
\label{eq.encoding_policy_eva_objec}
\begin{aligned}
&J_{\mathcal{Z}}(\theta,\phi)=\mathop{\mathbb E}\displaylimits_{\rho_\pi}\big[D_{\rm KL}(\mathcal{T}^{\pi}\mathcal{Z}_{\theta_{\rm old}}(\cdot|s,a)\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\VECTOR{\phi_{\rm old}})},\\
&\qquad\qquad\qquad\qquad\qquad\qquad\mathcal{Z}_{\theta}(\cdot|s,a)\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi )})\big].
\end{aligned}
\end{equation}
In this case, $\rho_{\pi}$ represents the observation-action distribution induced by policy $\pi$. Note that \eqref{eq.encoding_policy_eva_objec} is a joint optimization objective for both the feature and value networks. Then, from \eqref{eq.policy_imp}, the policy network is updated by maximizing the following objective
\begin{equation}
\label{eq:encoding_policy_imp}
J_{\pi}(\omega )=\mathop{\mathbb E}\displaylimits_{\mathcal{O}\sim \rho_\pi,\VECTOR{a}\sim\pi_{\omega }}[Q(\VECTOR{s},\VECTOR{a};\theta )-\alpha\log\pi(\VECTOR{a}|\VECTOR{s};\omega )\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi )}].
\end{equation}
Based on \eqref{eq.encoding_policy_eva_objec} and \eqref{eq:encoding_policy_imp}, we can derive the encoding DRL framework shown in Algorithm \ref{alg:EDPI}.
\begin{algorithm}[!htb]
\caption{Encoding DPI Framework}
\label{alg:EDPI}
\begin{algorithmic}
\STATE Initialize parameters $\theta $, $\omega $, $\phi $ and entropy coefficient $\alpha$
\REPEAT
\STATE 1. Encoding Distributional Policy Evaluation
\setlength{\leftskip}{2em}
\STATE Estimate $\mathcal{Z}_{\theta}$ and $h_{\phi}$ using policy $\pi_{\omega}$
\REPEAT
\STATE \begin{equation}
\label{eq.encoding_PE}
\{\theta,\phi\}
\leftarrow \arg\min_{\theta,\phi}J_{\mathcal{Z}}(\theta,\phi)
\end{equation}
\UNTIL Convergence
\setlength{\leftskip}{0em}
\STATE 2. Encoding Policy Improvement
\begin{equation}
\label{eq.encoding_PI}
\omega \leftarrow \arg\max_{\omega}J_{\pi}(\omega)
\end{equation}
\UNTIL Convergence
\end{algorithmic}
\end{algorithm}
Next, we prove that the proposed encoding DPI framework leads to policy improvement with respect to the maximum entropy objective.
\begin{lemma}
\label{lemma.UAT}
(Universal Approximation Theorem \cite{Hornik1990Universal}). For any continuous function $F(x):\mathbb{R}^n\rightarrow\mathbb{R}^d$ on a compact set $\Omega$, there exists an over-parameterized NN, which uniformly approximates $F(x)$ and its gradient to within arbitrarily small error $\epsilon \in \mathbb{R}_{+}$ on $\Omega$.
\end{lemma}
\begin{lemma}\label{lemma.edpe}
(Encoding Distributional Policy Evaluation). Suppose both feature NN $h(\VECTOR{x};\VECTOR{\phi}):\mathbb{R}^{d_1}\rightarrow\mathbb{R}^{d_3}$ and value NN $\mathcal{Z}(\cdot|s,a; \theta)$ are over-parameterized, with $d_3\ge Nd_1+1$. Let
\begin{equation}
\nonumber
\begin{aligned}
&\{\theta^{i+1},\phi^{i+1}\}=\arg\min_{\{\theta,\phi\}}\mathop{\mathbb E}\displaylimits_{\rho_\pi}\big[D_{\rm KL}(\mathcal{T}^{\pi}\mathcal{Z}_{\theta_i}(\cdot|s,a)\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\VECTOR{\phi_i})},\\
&\qquad\qquad\qquad\qquad\qquad\qquad\mathcal{Z}_{\theta}(\cdot|s,a)\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi )})\big],
\end{aligned}
\end{equation}
and denote the state-action return distribution as $\mathcal{Z}^i(\cdot|\mathcal{O},a)=\mathcal{Z}(\cdot|U_{\rm ESC}(\mathcal{O};\phi^{i}),a;{\theta^{i}})$, which maps the observation-action pair $(\mathcal{O},a)$ to a distribution over random state-action returns $Z^i(\mathcal{O},a)$, i.e., $Z^i(\mathcal{O},a)\sim\mathcal{Z}^i(\cdot|\mathcal{O},a)$. Consider the distributional bellman backup operator $\mathcal{T}^{\pi}$ in \eqref{eq.bellman} and define $\mathcal{T}^{\pi}\mathcal{Z}^i(\cdot|\mathcal{O},a)$ as the distribution of $\mathcal{T}^{\pi}Z^i(\mathcal{O},a)$, i.e., $\mathcal{T}^{\pi}Z^i(\mathcal{O},a)\sim\mathcal{T}^{\pi}\mathcal{Z}^i(\cdot|\mathcal{O},a)$. Then, given any policy $\pi$, the sequence $\mathcal{Z}^{i}$ will converge to $\mathcal{Z}^{\pi}$ as $i\rightarrow \infty$.
\end{lemma}
\begin{proof}
Let $\overline{Z}$ denote the space of soft return functions $Z$. From Lemma \ref{lemma.encoding}, there exists $\phi^{i+1}$ such that $U_{\rm ESC}(\mathcal{O};\phi^{i+1})$ is an injective mapping. Then, based on Lemma \ref{lemma.UAT}, given any parameters $\{\theta^{i},\phi^{i}\}$, there exists $\theta^{i+1}$ such that
\begin{equation}
\nonumber
\begin{aligned}
&D_{\rm KL}(\mathcal{T}^{\pi}\mathcal{Z}_{\theta^{i}}(\cdot|s,a)\big|_{U_{\rm ESC}(\mathcal{O};\phi^{i})},\\
&\qquad\qquad\qquad\qquad\mathcal{Z}_{\theta^{i+1}}(\cdot|s,a)\big|_{U_{\rm ESC}(\mathcal{O};\phi^{i+1})}) = 0,
\end{aligned}
\end{equation}
which is equivalent to
\begin{equation}
\nonumber
\mathcal{Z}_{\theta^{i+1}}(\cdot|U_{\rm ESC}(\mathcal{O};\phi^{i+1}),a)=\mathcal{T}^{\pi}\mathcal{Z}_{\theta^{i}}(\cdot|U_{\rm ESC}(\mathcal{O};\phi^{i}),a).
\end{equation}
Then, we can directly apply the standard convergence results for distributional policy evaluation (Lemma 1 of \cite{duan2021distributional}), that is, $\mathcal{T}^{\pi}: \overline{Z}\rightarrow\overline{Z}$ is a $\gamma$-contraction under an appropriate metric. Therefore, $\mathcal{T}^{\pi}$ has a unique fixed point, which is $Z^{\pi}$, and the sequence $Z^{i}$ will converge to it as $i\rightarrow \infty$, i.e., $\mathcal{Z}^{i}$ will converge to $\mathcal{Z}^{\pi}$ as $i\rightarrow \infty$.
\end{proof}
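The contraction at the heart of this lemma can be checked numerically on a toy problem. The sketch below (pure Python; all numbers are illustrative and unrelated to the paper's experiments) collapses the distributional backup to its expectation on a one-state, one-action MDP and iterates the soft Bellman operator, which converges to the closed-form fixed point $(r-\alpha\log\pi)/(1-\gamma)$:

```python
# Toy check of the gamma-contraction in the lemma above, collapsed to
# expectations: on a single-state, single-action MDP the soft Bellman
# backup Q <- r - alpha*log(pi) + gamma*Q converges to
# (r - alpha*log(pi)) / (1 - gamma) from any starting value.
# All constants here are illustrative, not taken from the paper.
import math

gamma, alpha = 0.99, 0.2
r, log_pi = 1.0, math.log(0.5)    # fixed reward and action log-probability

def soft_backup(q):
    return r - alpha * log_pi + gamma * q

fixed_point = (r - alpha * log_pi) / (1 - gamma)

q = -50.0                          # arbitrary initial value
for _ in range(2000):
    q = soft_backup(q)
```

Since the operator is a $\gamma$-contraction, the initial error shrinks by a factor $\gamma$ per iteration regardless of the starting value.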
\begin{lemma}\label{lemma.epi}
(Encoding Policy Improvement) Suppose
\begin{equation}
\label{eq.lemma4_assump}
\mathcal{Z}(\cdot|U_{\rm ESC}(\mathcal{O};\phi_{\rm old}),a;\theta_{\rm old})=\mathcal{Z}^{\pi_{\omega_{\rm old}}}(\cdot|\mathcal{O},a),
\end{equation}
and the policy NN $\pi(\cdot|s)$ is over-parameterized.
Let
\begin{equation}
\label{eq.lemma4_updaterule}
\begin{aligned}
&\omega_{\rm new}=\arg\max_{\omega}J_{\pi}(\omega),
\end{aligned}
\end{equation}
where
\begin{equation}
\nonumber
J_{\pi}(\omega)=\mathop{\mathbb E}\displaylimits_{\substack{\mathcal{O}\sim \rho_\pi,\\\VECTOR{a}\sim\pi_{\omega }}}[Q(\VECTOR{s},\VECTOR{a};\theta_{\rm old} )-\alpha\log\pi_{\omega}(\VECTOR{a}|\VECTOR{s} )\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi_{\rm old})}].
\end{equation}
Then $Q^{\pi_{\omega_{\rm new}}}(\mathcal{O},a) \ge Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)$ for $\forall(\mathcal{O},a) \in \overline{\mathcal{O}}\times\mathcal{A}$.
\begin{proof}
Firstly, let
\begin{equation}
\label{eq.lemma4_another_policy}
\pi_{\rm new}(\cdot|\mathcal{O})=\arg\max_{\pi}\mathop{\mathbb E}\displaylimits_{\VECTOR{a}\sim\pi }[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},\VECTOR{a})-
\alpha\log\pi(\VECTOR{a}|\mathcal{O})],
\end{equation}
for $\forall \mathcal{O}\in \overline{\mathcal{O}}$.
Given an arbitrary problem, the return distributions of two different observations $\mathcal{O}\neq \mathcal{O'} \in \overline{\mathcal{O}}$ fall into the following two cases:
\begin{enumerate}[\quad(1)]
\item $ \exists a\in\mathcal{A}$ such that $\mathcal{Z}^{\pi_{\omega_{\rm old}}}(\cdot|\mathcal{O},a)\neq\mathcal{Z}^{\pi_{\omega_{\rm old}}}(\cdot|\mathcal{O'},a)$;
\item $\mathcal{Z}^{\pi_{\omega_{\rm old}}}(\cdot|\mathcal{O},a)=\mathcal{Z}^{\pi_{\omega_{\rm old}}}(\cdot|\mathcal{O'},a)$ for $ \forall a\in\mathcal{A}$.
\end{enumerate}
For case 1, to ensure \eqref{eq.lemma4_assump} holds, it must follow that
\begin{equation}
\mathcal{O}\neq \mathcal{O'} \rightarrow
U_{\rm ESC}(\mathcal{O};\phi_{\rm old}) \neq U_{\rm ESC}(\mathcal{O'};\phi_{\rm old}),
\end{equation}
that is, $U_{\rm ESC}(\mathcal{O};\phi_{\rm old})$ is an injective function. For case 2, although there is a possibility that $U_{\rm ESC}(\mathcal{O};\phi_{\rm old}) = U_{\rm ESC}(\mathcal{O'};\phi_{\rm old})$, we always have that $\pi_{\rm new}(\cdot|\mathcal{O})=\pi_{\rm new}(\cdot|\mathcal{O'})$ since $Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},\VECTOR{a})=Q^{\pi_{\omega_{\rm old}}}(\mathcal{O'},\VECTOR{a})$. Therefore, one has
\begin{equation}
\begin{aligned}
\pi_{\rm new}(\cdot|\mathcal{O})\neq\pi_{\rm new}(\cdot|\mathcal{O'})\rightarrow U_{\rm ESC}(\mathcal{O};\phi_{\rm old}) \neq U_{\rm ESC}(\mathcal{O'};\phi_{\rm old}).
\end{aligned}
\end{equation}
Furthermore, from Lemma \ref{lemma.UAT}, there exist $\omega^{\dagger}$ such that
\begin{equation}
\label{eq.policy_equal}
\pi(\cdot|s;\omega^{\dagger})\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\VECTOR{\phi_{\rm old}})}=\pi_{\rm new}(\cdot|\mathcal{O}),\quad \forall \mathcal{O}\in \overline{\mathcal{O}}.
\end{equation}
Based on \eqref{eq.lemma4_updaterule}, one has
\begin{equation}
\label{eq.policy_inequality_1}
\begin{aligned}
J_{\pi}(\omega^{\dagger})\le \max_{\omega} J_{\pi}(\omega) = J_{\pi}(\omega_{\rm new}).
\end{aligned}
\end{equation}
From \eqref{eq.lemma4_assump} and \eqref{eq.Q_and_distribution}, it is clear that
\begin{equation}
\label{eq:Q_relation2}
Q^{\pi_{\omega_{\rm old}}}(\mathcal{O}, a)=Q(s, a;{\theta_{\rm old}})\big|_{s=U_{\rm ESC}(\mathcal{O};\phi_{\rm old})},\forall \{\mathcal{O},a\}\in \overline{\mathcal{O}}\times\mathcal{A}.
\end{equation}
Then, according to \eqref{eq.lemma4_another_policy},
\begin{equation}
\label{eq.policy_inequality_2}
\begin{aligned}
&J_{\pi}(\omega_{\rm new})\\
&=\mathop{\mathbb E}\displaylimits_{\substack{\mathcal{O}\sim \rho_{\pi},\\\VECTOR{a}\sim\pi_{\omega_{\rm new}}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O}, a)-\alpha\log\pi_{\omega_{\rm new}}(\VECTOR{a}|\VECTOR{s} )\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi_{\rm old})}]\\
&\le \mathop{\mathbb E}\displaylimits_{\substack{\mathcal{O}\sim \rho_\pi,\\a\sim \pi_{{\rm new}}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)-\alpha\log\pi_{{\rm new}}(a|\mathcal{O})].
\end{aligned}
\end{equation}
Therefore, combining \eqref{eq.policy_equal}, \eqref{eq.policy_inequality_1} and \eqref{eq.policy_inequality_2}, it follows that
\begin{equation}
\label{eq.lemma4_policy_equality}
J_{\pi}(\omega_{\rm new})=\mathop{\mathbb E}\displaylimits_{\substack{\mathcal{O}\sim\rho_\pi,\\a\sim \pi_{{\rm new}}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)-\alpha\log\pi_{{\rm new}}(a|\mathcal{O})].
\end{equation}
From \eqref{eq.lemma4_another_policy}, for $\forall \mathcal{O} \in \overline{\mathcal{O}}$, one has
\begin{equation}
\label{eq.lemma4_value_imp}
\begin{aligned}
&\mathop{\mathbb E}\displaylimits_{a\sim \pi_{{\rm new}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)-\alpha\log\pi_{{\rm new}}(a|\mathcal{O})]\ge \\
& \qquad \mathop{\mathbb E}\displaylimits_{a\sim \pi_{\omega_{\rm old}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)-\alpha\log\pi_{\omega_{\rm old}}(\VECTOR{a}|\VECTOR{s} )\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi_{\rm old})}].
\end{aligned}
\end{equation}
Based on \eqref{eq.lemma4_policy_equality}, by replacing $ \pi_{{\rm new}}$ with $ \pi_{\omega_{\rm new}}$, \eqref{eq.lemma4_value_imp} can be rewritten as
\begin{equation}
\begin{aligned}
&\mathop{\mathbb E}\displaylimits_{a\sim \pi_{\omega_{\rm new}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)-\alpha\log\pi_{\omega_{\rm new}}(a|s)\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\VECTOR{\phi_{\rm old}})}]\ge \\
& \qquad \mathop{\mathbb E}\displaylimits_{a\sim \pi_{\omega_{\rm old}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O},a)-\alpha\log\pi_{\omega_{\rm old}}(\VECTOR{a}|\VECTOR{s} )\big|_{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi_{\rm old})}].
\end{aligned}
\end{equation}
Next, from \eqref{eq.Q_definition}, it follows that
\begin{equation}
\nonumber
\begin{aligned}
&Q^{\pi_{\omega_{\rm old}}}(\mathcal{O}, a) \\
&= r+\gamma\mathop{\mathbb E}\displaylimits_{\mathcal{O}',a'\sim \pi_{\omega_{\rm old}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O}',a')-\alpha\log\pi_{\omega_{\rm old}}(a'|s')]\\
&\le r+\gamma\mathop{\mathbb E}\displaylimits_{\mathcal{O}',a'\sim \pi_{\omega_{\rm new}}}[Q^{\pi_{\omega_{\rm old}}}(\mathcal{O}',a')-\alpha\log\pi_{\omega_{\rm new}}(a'|s')]\\
&\vdots\\
&\le Q^{\pi_{\omega_{\rm new}}}(\mathcal{O},a), \quad \forall(\mathcal{O},a)\in\overline{\mathcal{O}}\times\mathcal{A},
\end{aligned}
\end{equation}
where $s'=U_{\rm ESC}(\SPACE{O'};\VECTOR{\phi_{\rm old}})$ and the last step is derived by repeatedly expanding $Q^{\pi_{\omega_{\rm old}}}$ on the right-hand side by applying \eqref{eq.Q_definition}.
\end{proof}
\end{lemma}
\begin{theorem}\label{theorem.dspi}
(Encoding Distributional Policy Iteration). The encoding distributional policy iteration, which alternates between encoding distributional policy evaluation and encoding policy improvement, can converge to a policy $\pi^*$ such that $Q^{\pi^*}(\mathcal{O}, a)\ge Q^{\pi}(\mathcal{O}, a)$ for $\forall\pi$ and $\forall (\mathcal{O}, a)\in\overline{\mathcal{O}}\times\mathcal{A}$, assuming that $|\mathcal{A}|<\infty$ and reward is bounded.
\end{theorem}
\begin{proof}
Let $\pi_{\omega_k}$ denote the policy at iteration $k$. For $\forall \pi_{\omega_k}$, we can always find $\{\theta_k,\phi_k\}$ such that $\mathcal{Z}(\cdot|U_{\rm ESC}(\mathcal{O};\phi_{k}),a;{\theta_{k}})=\mathcal{Z}^{\pi_{\omega_k}}(\cdot|\mathcal{O},a)$ through the encoding distributional policy evaluation process of Lemma \ref{lemma.edpe}. Therefore, we can obtain $Q^{\pi_{\omega_k}}(\mathcal{O},a)$ according to \eqref{eq.Q_and_distribution}. By Lemma \ref{lemma.epi}, the sequence $Q^{\pi_{\omega_k}}(\mathcal{O},a)$ is monotonically increasing for $\forall(\mathcal{O},a)\in\overline{\mathcal{O}}\times\mathcal{A}$. Since $Q^{\pi}$ is bounded everywhere for $\forall\pi$ (both the reward and policy entropy are bounded), the policy sequence $\pi_{\omega_k}$ converges to some $\pi^{\dagger}$ as $k\rightarrow\infty$. At convergence, it must follow that
\begin{equation}
\begin{aligned}
&\mathop{\mathbb E}\displaylimits_{a\sim \pi^{\dagger}}[Q^{\pi^{\dagger}}(s,a)-\alpha\log\pi^{\dagger}(a|s)]\ge \\
&\qquad \quad \mathop{\mathbb E}\displaylimits_{a\sim \pi}[Q^{\pi^{\dagger}}(s,a)-\alpha\log\pi(a|s)],\quad \forall \pi, \forall s \in \mathcal{S}.
\end{aligned}
\end{equation}
Using the same iterative argument as in Lemma \ref{lemma.epi}, we have
\begin{equation}
\nonumber
Q^{\pi^{\dagger}}(\mathcal{O}, a) \ge Q^{\pi}(\mathcal{O}, a), \quad \forall \pi, \forall(\mathcal{O},a)\in\overline{\mathcal{O}}\times\mathcal{A}.
\end{equation}
Hence $\pi^{\dagger}$ is optimal, i.e., $\pi^{\dagger}=\pi^*$.
\end{proof}
\subsection{Algorithm}
For practical applications, directly solving \eqref{eq.encoding_PE} and \eqref{eq.encoding_PI} may be intractable due to the high-dimensional and nonlinear characteristics of NNs. In this case, gradient-based optimization offers an effective way to find nearly optimal solutions iteratively.
\subsubsection{Encoding Distributional Policy Evaluation}
To derive the gradient of value and feature functions, we first rewrite their objective \eqref{eq.encoding_policy_eva_objec} as
\begin{equation}
\nonumber
\begin{aligned}
&J_{\mathcal{Z}}(\theta,\phi)\\
&= \mathbb{E}_{(\SPACE{O},a)\sim\mathcal{B}}\Big[D_{\rm{KL}}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}\mathcal{Z}_{\theta'}(\cdot|s,a),\mathcal{Z}_{\theta}(\cdot|s,a))\Big]\\
&= \mathbb{E}_{(\SPACE{O},a)\sim\mathcal{B}}\Big[\sum_{\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)}\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}\mathcal{Z}_{\theta'}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)\\
& \qquad\qquad\qquad\qquad\qquad \log\frac{\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}\mathcal{Z}_{\theta'}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)}{\mathcal{Z}_{\theta}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)}\Big]\\
&= -\mathbb{E}_{(\SPACE{O},a)\sim\mathcal{B}}\Big[\mathbb{E}_{\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)\sim\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}\mathcal{Z}_{\theta'}(\cdot|s,a)}\\
&\qquad \qquad \qquad \qquad \qquad \log\mathcal{Z}_{\theta}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)\Big]+c\\
&= -\mathbb{E}_{(\SPACE{O},a)\sim\mathcal{B}}\Big[\mathop{\mathbb E}\displaylimits_{\substack{(r,\mathcal{O}')\sim \mathcal{B},a'\sim\pi_{\omega'},\\ Z(s',a')\sim\mathcal{Z}_{\theta'}(\cdot|s',a')}}\\
&\qquad \qquad \qquad \qquad \qquad\log\mathcal{Z}_{\theta}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)\Big]+c\\
&= -\mathop{\mathbb E}\displaylimits_{\substack{(\mathcal{O},a,r,\mathcal{O}')\sim\mathcal{B},a'\sim\pi_{\omega'},\\Z(s',a')\sim\mathcal{Z}_{\theta'}(\cdot|s',a')}}\Big[\log\mathcal{Z}_{\theta}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)\Big]+c,
\end{aligned}
\end{equation}
where $\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi )$, $\VECTOR{s}'=U_{\rm ESC}(\SPACE{O}';\phi ')$, $\mathcal{B}$ is a replay buffer of previously sampled experience, and $\theta'$, $\phi'$ and $\omega'$ are the parameters of the target return-distribution, feature and policy NNs, which are used to stabilize the learning process when evaluating the target.
Then, the corresponding gradient of the value network can be derived as
\begin{equation}
\label{eq05:value_gradient}
\begin{aligned}
&\nabla_{\theta }J_{\SPACE{Z}}(\theta ,\phi )=\\
& -\mathop{\mathbb E}\displaylimits_{\substack{(\mathcal{O},a,r,\mathcal{O}')\sim\mathcal{B},\\
a'\sim\pi_{\omega'},\\Z(s',a')\sim\mathcal{Z}_{\theta'}}}\Big[\nabla_\theta\log\mathcal{Z}_{\theta}(\mathcal{T}^{\pi_{\omega'}}_{\mathcal{D}}Z(s,a)|s,a)\big|_{\substack{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi ),\\\VECTOR{s}'=U_{\rm ESC}(\SPACE{O}';\phi ')}}\Big],
\end{aligned}
\end{equation}
and the gradient w.r.t. the feature network can be calculated as
\begin{equation}
\label{eq05:encode_gradient}
\begin{aligned}
&\nabla_{\phi }J_{\SPACE{Z}}(\theta ,\phi )\\
&= -\mathop{\mathbb E}\displaylimits_{\substack{(\mathcal{O},a,r,\mathcal{O}')\sim\mathcal{B},\\
a'\sim\pi_{\omega'},\\Z(s',a')\sim\mathcal{Z}_{\theta'}}}\Big[
\frac{\partial\log\SPACE{Z}(\SPACE{T}^{\pi_{\omega '}}_{\SPACE{D}}Z(\VECTOR{s},\VECTOR{a})|\VECTOR{s},\VECTOR{a};\theta )}{\partial \VECTOR{x}_{\rm set}}\times\\
&\qquad\qquad \qquad \qquad\qquad \qquad \sum_{\VECTOR{x}\in \SPACE{X}}\nabla_{\phi }h(\VECTOR{x};\phi ) \Big|_{\substack{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi ),\\\VECTOR{s}'=U_{\rm ESC}(\SPACE{O}';\phi ')}}\Big].
\end{aligned}
\end{equation}
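The reduction of the KL objective above to a negative log-likelihood has a simple consequence for the Gaussian value model used next: the NLL over target-return samples is minimized exactly by their sample mean and standard deviation. A self-contained numeric check (synthetic samples only; nothing here comes from the paper's experiments):

```python
# Minimizing the KL objective reduces to minimizing a negative
# log-likelihood of sampled target returns. For a Gaussian model
# Z_theta = N(mu, sigma^2), that NLL is minimized by the sample mean
# and standard deviation of the targets. Synthetic, illustrative data.
import math
import random

random.seed(0)
targets = [random.gauss(5.0, 2.0) for _ in range(50_000)]

mu_hat = sum(targets) / len(targets)
sigma_hat = math.sqrt(sum((z - mu_hat) ** 2 for z in targets) / len(targets))

def gauss_nll(mu, sigma):
    # average negative log-likelihood of the targets under N(mu, sigma^2)
    return sum(
        0.5 * math.log(2 * math.pi * sigma ** 2) + (z - mu) ** 2 / (2 * sigma ** 2)
        for z in targets
    ) / len(targets)
```

The sample statistics recover the generating parameters, and no other $(\mu,\sigma)$ pair achieves a lower NLL on the same samples.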
\subsubsection{Encoding Policy Improvement}
Since $\mathcal{Z}_{\theta}$ is assumed to be a Gaussian model, it can be expressed as $\mathcal{Z}_{\theta}(\cdot|s,a)=\mathcal{N}(Q_{\theta}(s,a),\sigma_{\theta}(s,a)^2)$, where $Q_{\theta}(s,a)$ and $\sigma_{\theta}(s,a)$ are the outputs of the value network. To obtain the gradient of the policy NN, we first need to reparameterize the stochastic policy $\pi_\omega(a|s)$ in the following deterministic form
\begin{equation}
\nonumber
\label{eq03:repara_policy}
\VECTOR{a}=\check{\pi}_\omega(\VECTOR{s},\xi),
\end{equation}
where $\xi$ is an auxiliary random variable and $\check\pi$ is the reparameterized policy. In particular, since $\pi_{\omega}(\cdot|s)$ is assumed to be a Gaussian in this paper, $\check{\pi}_\omega(s,\xi)$ can be formulated as
\begin{equation}
\nonumber
\check{\pi}_\omega(s,\xi)=a_{\rm{mean}}+\xi \odot a_{\rm{std}},
\end{equation}
where $a_{\rm{mean}}\in \mathbb{R}^{{\rm{dim}}(\mathcal{A})}$ and $a_{\rm{std}}\in \mathbb{R}^{{\rm{dim}}(\mathcal{A})}$ are the mean and standard deviation of $\pi_{\omega}(\cdot|s)$, $\odot$ represents the Hadamard product and ${\xi} \sim \mathcal{N}(0,\bf{I}_{{\rm{dim}}(\mathcal{A})})$.
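The reparameterization above is easy to sketch concretely. The following minimal example (dimensions and numbers illustrative; the two action components loosely mirror the steering increment and acceleration used later) draws $\xi\sim\mathcal{N}(0,\mathbf{I})$ and forms $a=a_{\rm mean}+\xi\odot a_{\rm std}$:

```python
# Minimal sketch of the reparameterized Gaussian policy
# a = a_mean + xi (Hadamard-product) a_std, with xi ~ N(0, I).
# Dimensions and values are illustrative only.
import random

def repara_policy(a_mean, a_std, xi):
    # elementwise a_mean + xi * a_std
    return [m + e * s for m, e, s in zip(a_mean, xi, a_std)]

random.seed(1)
a_mean = [0.0, -1.0]      # e.g. [steering increment, acceleration] (illustrative)
a_std = [0.1, 0.5]
xi = [random.gauss(0.0, 1.0) for _ in a_mean]
a = repara_policy(a_mean, a_std, xi)
```

With $\xi=0$ the sampled action reduces to the policy mean, which is what makes the pathwise gradient in the next equation well defined.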
Then the policy update gradients can be approximated with
\begin{equation}
\label{eq04:policy_gradient_based_on_Q_PI}
\begin{aligned}
&\nabla_{\omega }J_{\pi}(\omega )=\mathbb{E}_{\mathcal{O}\sim\SPACE{\beta},\xi}\Big[-\alpha\nabla_{\omega }\log\pi(\VECTOR{a}|\VECTOR{s};\omega )+\big(\nabla_{\VECTOR{a}}Q(\VECTOR{s},\VECTOR{a};\theta )
\\&\qquad\qquad-\alpha\nabla_{\VECTOR{a}}\log\pi(\VECTOR{a}|\VECTOR{s};\omega )\big)\nabla_{\omega }\check{\pi}(\VECTOR{s},\xi;\omega)\big|_{\substack{\VECTOR{s}=U_{\rm ESC}(\SPACE{O};\phi ),\\\VECTOR{a}=\check{\pi}_{\omega}(\VECTOR{s},\xi)}}\Big].
\end{aligned}
\end{equation}
\subsubsection{Pseudocode}
As for the target NNs, we adopt a slow-moving update rate to stabilize the learning process, that is,
\begin{equation}
\label{eq.target_update}
\begin{aligned}
y' \leftarrow \tau y+(1-\tau)y',
\end{aligned}
\end{equation}
where $\tau$ is the synchronization rate, and $y$ represents the parameters $\theta$, $\phi$, and $\omega$. Finally, according to \cite{duan2021distributional,Haarnoja2018ASAC}, the entropy coefficient $\alpha$ is updated by minimizing the following objective
\begin{equation}
\label{eq.entropy_objective}
J(\alpha)=\mathbb{E}_{a\sim \pi_\omega}[-\alpha \log\pi(a|s;\omega)-\alpha\overline{\mathcal{H}}],
\end{equation}
where $\overline{\mathcal{H}}$ is the expected entropy.
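Both update rules above are simple enough to sketch concretely. The toy example below (step sizes and samples illustrative) applies the slow-moving target update $y'\leftarrow\tau y+(1-\tau)y'$ and one gradient-descent step on $J(\alpha)$, whose gradient w.r.t. $\alpha$ is $-\mathbb{E}[\log\pi+\overline{\mathcal{H}}]$:

```python
# Sketch of the slow-moving target update and one entropy-coefficient
# step. Since J(alpha) = E[-alpha*log(pi) - alpha*H_bar], its gradient
# w.r.t. alpha is -E[log(pi) + H_bar]. All values are illustrative.
def soft_update(y, y_target, tau):
    return [tau * yi + (1 - tau) * ti for yi, ti in zip(y, y_target)]

def alpha_step(alpha, log_pi_samples, h_bar, beta_alpha):
    grad = -sum(lp + h_bar for lp in log_pi_samples) / len(log_pi_samples)
    return alpha - beta_alpha * grad

y, y_t, tau = [1.0, 2.0], [0.0, 0.0], 0.001
y_t = soft_update(y, y_t, tau)

# current entropy (2.0) exceeds the target (-2.0), so alpha decreases
alpha = alpha_step(0.2, [-1.5, -2.5], h_bar=-2.0, beta_alpha=0.01)
```

When the policy entropy exceeds the target $\overline{\mathcal{H}}$, the gradient is positive and $\alpha$ shrinks, weakening the entropy bonus, and vice versa.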
Based on the above analysis, we propose the E-DSAC algorithm, whose details are given in Algorithm \ref{alg:PI-DSAC}.
\begin{algorithm}[!htb]
\caption{E-DSAC Algorithm}
\label{alg:PI-DSAC}
\begin{algorithmic}
\STATE Initialize parameters $\theta $, $\omega $, $\phi $ and entropy coefficient $\alpha$
\STATE Initialize target parameters $\theta '\leftarrow\theta$, $\omega '\leftarrow\omega $ and $\phi '\leftarrow\phi $
\STATE Initialize learning rate $\beta_{\SPACE{Z}}$, $\beta_{\pi}$, $\beta_{h}$, $\beta_{\alpha}$ and $\tau$
\STATE Initialize iterative step $k=0$
\REPEAT
\STATE Receive observation $\SPACE{O}$ and calculate state $\VECTOR{s}$ using \eqref{eq.pi_state}
\STATE Select action $\VECTOR{a}\sim\pi_{\omega }(\cdot|\VECTOR{s})$, observe $\SPACE{O}'$ and $r$
\STATE Store transition tuple $(\SPACE{O},\VECTOR{a},r,\SPACE{O}')$ in $\SPACE{B}$
\STATE
\STATE Randomly choose $N$ samples $(\SPACE{O},\VECTOR{a},r,\SPACE{O}')$ from $\SPACE{B}$
\STATE Calculate states $s$ and $s'$ using \eqref{eq.pi_state} and obtain the augmented tuple $(\SPACE{O},\VECTOR{s},\VECTOR{a},r,\SPACE{O}',\VECTOR{s}')$
\STATE Update value network with
\eqref{eq05:value_gradient}: \\
\qquad \qquad $\theta \leftarrow \theta - \beta_{\SPACE{Z}}\nabla_{\theta }J_{\SPACE{Z}}(\theta ,\phi )$
\STATE Update feature network with \eqref{eq05:encode_gradient}: \\
\qquad \qquad $\phi \leftarrow \phi - \beta_{h}\nabla_{\phi }J_{\SPACE{Z}}(\theta ,\phi )$
\IF{$k \% m = 0$}
\STATE Update policy network with \eqref{eq04:policy_gradient_based_on_Q_PI}: \\ \qquad \qquad $\omega \leftarrow \omega + \beta_{\pi}\nabla_{\omega } J_{\pi}(\omega )$
\STATE Update target networks with \eqref{eq.target_update}
\STATE Update entropy coefficient $\alpha$ with \eqref{eq.entropy_objective}: \\
\qquad \qquad $\alpha \leftarrow \alpha - \beta_{\alpha}\nabla_{\alpha} J(\alpha)$
\ENDIF
\STATE $k=k+1$
\UNTIL Convergence
\end{algorithmic}
\end{algorithm}
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.48\textwidth]{figure/E-DSAC.pdf}}
\caption{E-DSAC diagram. E-DSAC first updates the distributional value NN and feature NN based on the samples collected from the buffer. Then, the output of the value network is used to guide the update of the policy network.}
\label{f:E-DSAC}
\end{figure}
\section{Simulation Verification}
\label{sec:simulation}
This section validates the effectiveness of the proposed E-DSAC algorithm on a simulated multi-lane highway driving task.
\subsection{Task Description}
As shown in Fig. \ref{f:scenario}, we built a one-way four-lane virtual road in SUMO \cite{SUMO2018} based on the 32km-long section of the Chinese Lianhuo Highway from point A (113$^\circ$29$'$03$''$E, 34$^\circ$51$'$48$''$N) to point B
(113$^\circ$48$'$53$''$E, 34$^\circ$49$'$53$''$N). Each lane is 3.75m wide. The speed limits of the four lanes from bottom to top are restricted to 60-100km/h, 80-100km/h, 90-120km/h, and 100-120km/h, respectively. The ego car is initialized at a random position. To imitate real traffic situations, we arrange various motor vehicles on this road, including trucks, motorcycles, and cars of different sizes. All surrounding vehicles are initialized randomly at the beginning of each episode, and their movements are controlled by the car-following and lane-changing models of SUMO. The ego vehicle aims to drive as fast as possible while guaranteeing driving safety, regulatory compliance, and smoothness.
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.48\textwidth]{figure/map.pdf}}
\caption{Illustration of the multi-lane highway driving scenario.}
\label{f:scenario}
\end{figure}
\subsection{Problem Formulation}
\subsubsection{Observation Design}
As mentioned before, the observation $\mathcal{O}$ consists of the information of surrounding vehicles and indicators related to the ego car and road geometry. As shown in Table \ref{tab:observation} and Fig. \ref{f:state_illustration}, we consider six indicators for each surrounding vehicle within the perception range, including the longitudinal and lateral relative distance from the ego vehicle $D_{\rm long}$ and $D_{\rm lat}$, the relative speed $v_{\rm other}-v_{\rm ego}$, the heading angle relative to the lane centerline $\Phi_{\rm other}$, length $L_{\rm other}$, and width $W_{\rm other}$, i.e., $\VECTOR{x}=[D_{\rm long},D_{\rm lat},v_{\rm other}-v_{\rm ego},\Phi_{\rm other},L_{\rm other},W_{\rm other}]^\top$. In this simulation, the virtual sensor system of the ego car consists of a 360$^\circ$ lidar and a camera (see the shaded area in Fig. \ref{f:scenario}). According to the specifications of existing sensors such as HDL-32E and Mobileye 630 \cite{cao2020novel}, the effective range of the lidar is set to 80m, and that of the camera is set to 100m with a 38$^\circ$ horizontal field of view. Only the surrounding vehicles within the perception range and not blocked by other vehicles can be observed. Besides, each observed variable of the surrounding vehicles is perturbed by zero-mean Gaussian noise before being observed. In particular, we assume the observation error of $\VECTOR{x}$ obeys $\mathcal{N}(0,{\rm diag}(0.14,0.14, 0.15,1,0.05,0.05))$.
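The noisy per-vehicle observation can be sketched directly. In the snippet below (ground-truth values illustrative), the diagonal entries of the covariance are treated as per-component variances, so each component is perturbed with standard deviation equal to the square root of the corresponding entry; this interpretation of $\mathcal{N}(0,{\rm diag}(\cdot))$ is an assumption of the sketch:

```python
# Sketch of the per-vehicle observation x with the zero-mean Gaussian
# sensor noise described above. The diagonal entries are treated as
# variances (component order: D_long, D_lat, relative speed, relative
# heading, length, width). Ground-truth values are illustrative.
import random

NOISE_VAR = [0.14, 0.14, 0.15, 1.0, 0.05, 0.05]

def observe(x_true, rng):
    # perturb each component with std = sqrt(variance)
    return [v + rng.gauss(0.0, var ** 0.5) for v, var in zip(x_true, NOISE_VAR)]

rng = random.Random(42)
x_true = [30.0, -3.75, 5.0, 0.02, 4.5, 1.8]   # illustrative ground truth
x_obs = observe(x_true, rng)
```

Seeding the generator makes the noisy observation reproducible, which is convenient when debugging the simulation pipeline.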
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.48\textwidth]{figure/observation.pdf}}
\caption{Illustration of some observation indicators.}
\label{f:state_illustration}
\end{figure}
In addition, $x_{\rm else}$ is designed as a 20-dimensional vector, of which 7 indicators describe the ego vehicle, namely vehicle speed $v_{\rm ego}$, lateral speed $v_y$, yaw rate $\Upsilon$, heading angle relative to the lane centerline $\Phi_{\rm ego}$, steering wheel angle $\xi$, longitudinal acceleration $acc_{x}$, and lateral acceleration $acc_{y}$. The other components of $x_{\rm else}$ are related to road geometry, including
the distance from the ego to the lane centerline $D_{\rm center}$, the distances to the left and right road edges $D_{\rm left}$, $D_{\rm right}$, the lane number $N_{\rm lane}$, the driving time in the current lane $t_{\rm lanekeep}$, and the differences between the ego speed and the upper and lower lane speed limits $v_{\rm upper}-v_{\rm ego}$, $v_{\rm ego}-v_{\rm lower}$. Furthermore, the lane direction changes at five different positions in front of the ego are used to describe the road shape, denoted as $\Phi_{\rm 10}$, $\Phi_{\rm 20}$, $\Phi_{\rm 30}$, $\Phi_{\rm 40}$ and $\Phi_{\rm 50}$. See Table \ref{tab:observation} and Fig. \ref{f:state_illustration} for more details.
\begin{table}[!htb]
\centering
\caption{Observation list}
\label{tab:observation}
\begin{tabular}{cccc}
\toprule
$\mathcal{O}$&Name & Symbol &Unit \\
\midrule
$x$&Longitudinal Distance&$D_{\rm long}$&m\\
&Lateral distance&$D_{\rm lat}$&m\\
&Relative speed&$v_{\rm other}-v_{\rm ego}$&m/s\\
&Relative heading angle&$\Phi_{\rm other}$& rad \\
&Length&$L_{\rm other}$& m \\
&Width&$W_{\rm other}$& m \\
\midrule
$x_{\rm else}$&Ego speed&$v_{\rm ego}$&m/s\\
&Lateral speed&$v_{y}$&m/s\\
&Yaw rate&$\Upsilon$&rad/s\\
&Heading angle of ego&$\Phi_{\rm ego}$&rad\\
&Steering wheel angle&$\xi$&rad\\
&Longitudinal acceleration&$acc_{x}$&m/${\rm{s}}^2$\\
&Lateral acceleration&$acc_{y}$&m/${\rm{s}}^2$\\
&Distance to centerline&$D_{\rm center}$&m\\
&Distance to left edge&$D_{\rm left}$&m\\
&Distance to right edge&$D_{\rm right}$&m\\
&Lane number&$N_{\rm lane}$& \\
&Speed difference to upper limit&$v_{\rm upper}-v_{\rm ego}$&m/s\\
&Speed difference to lower limit&$v_{\rm ego}-v_{\rm lower}$&m/s\\
&Lane-keeping time&$t_{\rm lanekeep}$&s\\
&Number of surrounding vehicles&$M$&\\
&Road direction change 10m ahead&$\Phi_{\rm 10}$&rad\\
&Road direction change 20m ahead&$\Phi_{\rm 20}$&rad\\
&Road direction change 30m ahead&$\Phi_{\rm 30}$&rad\\
&Road direction change 40m ahead&$\Phi_{\rm 40}$&rad\\
&Road direction change 50m ahead&$\Phi_{\rm 50}$&rad\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Action Design} To prevent large discontinuities in the steering wheel angle, we choose the increment of the steering wheel angle and the expected longitudinal acceleration as actions, denoted as $\Delta\xi$ and $acc_{x,{\rm exp}}$, i.e., $\VECTOR{a} = [\Delta\xi,acc_{x,{\rm exp}}]^\top$. The expected steering wheel angle $\xi_{\rm exp}$ can be directly calculated as $\xi_{\rm exp}=\xi+\Delta\xi$. Since vehicles are controlled by saturated actuators, we let $\Delta\xi\in[-\frac{\pi}{9},\frac{\pi}{9}]$, $acc_{x,{\rm exp}}\in[-4,2]$ m/${\rm{s}}^2$.
\subsubsection{Reward Design}
To realize reasonable autonomous driving on multi-lane highway, the reward function should consider driving efficiency, safety, regulations, and smoothness to guide the learning of driving policy. The reward function $r(\SPACE{O}, a, \SPACE{O}')$ can be expressed as
\begin{equation}
\label{eq05:reward_function}
r=\left\{\begin{array}{cc}-5000, &{\rm failure}\\R_{\rm speed}+R_{\rm smooth}+R_{\rm regulation}+R_{\rm safe},&{\rm else}\end{array}\right.,
\end{equation}
where $R_{\rm speed}$, $R_{\rm smooth}$, $R_{\rm regulation}$ and $R_{\rm safe}$ are the reward terms that encourage driving efficiency, smoothness, regulatory compliance, and safety, respectively. Moreover, $-5000$ is a large negative reward used to punish devastating events, including collisions, driving off the road edge, and continuous lane changes within a short time (i.e., changing lanes when $t_{\rm lanekeep}< 3$ s).
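The piecewise reward above can be sketched as a simple dispatch. The individual reward terms are stubbed with illustrative values here; their actual definitions follow below:

```python
# Sketch of the piecewise reward: a large negative constant on
# devastating events (collision, leaving the road, changing lanes while
# t_lanekeep < 3 s), and the shaped sum of the four terms otherwise.
# Term values passed in below are illustrative stubs.
FAILURE_REWARD = -5000.0

def reward(failure, r_speed, r_smooth, r_regulation, r_safe):
    if failure:
        return FAILURE_REWARD
    return r_speed + r_smooth + r_regulation + r_safe

r_ok = reward(False, r_speed=-6.7, r_smooth=-1.2, r_regulation=-0.3, r_safe=65.0)
r_fail = reward(True, 0.0, 0.0, 0.0, 0.0)
```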
$R_{\rm speed}$ is designed to encourage the ego vehicle to drive fast, e.g., by changing to a higher-speed lane:
\begin{equation}
\nonumber
\label{eq:speed_reward}
R_{\rm speed}=-0.6(v_{\rm max}-v_{\rm ego})^2,
\end{equation}
where $v_{\rm max}$ represents the maximum speed limit in all lanes, i.e., $v_{\rm max}=120$km/h.
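A quick numeric check of $R_{\rm speed}$. Since the table reports ego speed in m/s while $v_{\rm max}$ is quoted in km/h, the sketch below assumes both are converted to m/s before evaluating the term; that unit choice is an assumption, not stated in the text:

```python
# Numeric sketch of R_speed = -0.6 * (v_max - v_ego)^2, assuming both
# speeds are expressed in m/s (120 km/h = 33.33 m/s).
V_MAX = 120.0 / 3.6   # 120 km/h converted to m/s

def r_speed(v_ego):
    return -0.6 * (V_MAX - v_ego) ** 2

# driving at the limit is not penalized; slower driving is
r_at_limit = r_speed(V_MAX)
r_slow = r_speed(25.0)
```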
$R_{\rm smooth}$ aims to make the driving process smoother and more comfortable by regularizing control variables $acc_x$, $\Delta \xi$ and certain states of the ego vehicle, expressed as
\begin{equation}
\label{eq:comfort_reward}
\nonumber
\begin{aligned}
R_{\rm smooth}=&-{acc_x}^2-5(acc_{x,{\rm exp}}-acc_x)^2-80{\xi_{\text{exp}}}^2
-300{\Delta \xi}^2 \\
&-500{\Phi_{\rm ego}}^2-30{v_y}^2-500{\Upsilon}^2-{acc_y}^2.
\end{aligned}
\end{equation}
The regulation term $R_{\rm regulation}$ encourages the ego vehicle to comply with driving rules:
\begin{equation}
\label{eq:legal_reward}
\nonumber
\begin{aligned}
R_{\rm regulation}= &\underbrace{-10{D_{\rm center}}^2-40(1-\tanh(4\min\{D_{\rm left},D_{\rm right}\}))}_{\text {deviation punishment}}\\
&\underbrace{-{\rm sgn}(v_{\rm ego}-v_{\rm upper})(v_{\rm ego}-v_{\rm upper})^2}_{\text {overspeed punishment}}\\
&\underbrace{-{\rm sgn}(v_{\rm lower}-v_{\rm ego})(v_{\rm lower}-v_{\rm ego})^2}_{\text {underspeed punishment}},
\end{aligned}
\end{equation}
where $\rm sgn(\cdot)$ denotes the unit step function, i.e., ${\rm sgn}(a)=1$ if $a\ge 0$ and ${\rm sgn}(a)=0$ otherwise.
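Note that this $\rm sgn$ is a step function rather than the usual signum, so each quadratic penalty switches on only when its bound is violated. A minimal sketch:

```python
# The paper's sgn is a unit step: sgn(a) = 1 if a >= 0 and 0 otherwise,
# so each regulation penalty activates only when the bound is violated.
def sgn(a):
    return 1 if a >= 0 else 0

def overspeed_penalty(v_ego, v_upper):
    # -sgn(v_ego - v_upper) * (v_ego - v_upper)^2 from the equation above
    return -sgn(v_ego - v_upper) * (v_ego - v_upper) ** 2
```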
Last but not least, $R_{\rm safe}$ aims to improve driving safety by penalizing the ego vehicle for getting too close to surrounding vehicles. As shown in Fig. \ref{f:risk_area}, we first introduce the lateral gap $D_{\rm LatGap}$ and longitudinal gap $D_{\rm LongGap}$ between the ego vehicle and each surrounding vehicle, expressed as
\begin{equation}
D_{\rm LatGap}=|D_{\rm lat}|-\frac{W_{\rm other}+W_{\rm ego}}{2},
\end{equation}
and
\begin{equation}
D_{\rm LongGap}=|D_{\rm long}|-\frac{L_{\rm other}+L_{\rm ego}}{2},
\end{equation}
where $L_{\rm ego}$ and $W_{\rm ego}$ are the length and width of the ego vehicle. When $D_{\rm LatGap}\le 0$, a rear-end collision may occur between the ego vehicle and the corresponding surrounding vehicle. This means that we need to impose penalties if the longitudinal gap is too small in this case. On the other hand, when $D_{\rm LongGap}\le 0$, a side collision may occur, and penalties are required if the lateral gap is too small. Therefore, $R_{\rm safe}$ is designed as
\begin{equation}
\label{eq:safe_reward}
\nonumber
\begin{aligned}
&R_{\rm safe}=\underbrace{70}_{\text{single-step incentive}} \\
&\underbrace{-40\sum_{\VECTOR{x}\in\SPACE{X}}{\rm sgn}(-D_{\rm LatGap}){\rm sgn}(D_{\rm long})\Big(1-\tanh{\frac{D_{\rm LongGap}}{v_{\rm ego}}}\Big)}_{\text{Penalty for being close to cars ahead} }\\
&\underbrace{-25\sum_{\VECTOR{x}\in\SPACE{X}}{\rm sgn}(-D_{\rm LatGap}){\rm sgn}(-D_{\rm long})\Big(1-\tanh{\frac{D_{\rm LongGap}}{v_{\rm other}}}\Big)}_{\text{Penalty for being close to cars behind}}\\
&\underbrace{-40\sum_{\VECTOR{x}\in\SPACE{X}}{\rm sgn}(-D_{\rm LongGap})\Big(1-\tanh(1.5D_{\rm LatGap})\Big)}_{\text{Penalty for being close to side cars}}.
\end{aligned}
\end{equation}
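The gap geometry feeding $R_{\rm safe}$ can be sketched numerically. The ego dimensions (4.5 m by 1.8 m) below are illustrative, as the text does not specify them, and only the first gated penalty term (closeness to a car ahead) is implemented:

```python
# Sketch of the bounding-box gaps and one gated safety penalty from the
# equation above. sgn follows the paper's step convention (1 for
# a >= 0, else 0). Ego dimensions and all numbers are illustrative.
import math

def gaps(d_long, d_lat, l_other, w_other, l_ego=4.5, w_ego=1.8):
    d_lat_gap = abs(d_lat) - (w_other + w_ego) / 2
    d_long_gap = abs(d_long) - (l_other + l_ego) / 2
    return d_lat_gap, d_long_gap

def sgn(a):
    return 1 if a >= 0 else 0

def ahead_penalty(d_long, d_lat_gap, d_long_gap, v_ego):
    # active only when lanes overlap (d_lat_gap <= 0) and the car is ahead
    return -40 * sgn(-d_lat_gap) * sgn(d_long) * (1 - math.tanh(d_long_gap / v_ego))

# a car 12 m ahead in an overlapping lane
lat_gap, long_gap = gaps(d_long=12.0, d_lat=0.5, l_other=4.0, w_other=1.8)
p = ahead_penalty(12.0, lat_gap, long_gap, v_ego=25.0)
```

Dividing the longitudinal gap by the ego speed makes the penalty roughly a function of time headway, so the same physical gap is punished more at higher speeds.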
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.4\textwidth]{figure/safety_reward.pdf}}
\caption{Collision risk area.}
\label{f:risk_area}
\end{figure}
\subsection{Algorithm Details}
Based on the above problem setting, we validate the effectiveness of the proposed E-DSAC by comparing it with the DSAC algorithm that takes $U_{\rm FP}(\SPACE{O})$ in \eqref{eq.mapping_s} as states. Since $U_{\rm FP}(\SPACE{O})$ is permutation sensitive and only suitable for a fixed number of surrounding vehicles, DSAC only considers the nearest 6 vehicles, which are arranged in increasing order according to relative distance. The proposed E-DSAC algorithm is implemented in two settings: (1) considering the nearest 6 vehicles, i.e., $M=6$; (2) considering all observed surrounding vehicles within the perception region, i.e., $M\in[1,N]$ is constantly changing.
The value, policy, and feature NNs adopt almost the same architecture, which contains 5 hidden layers with 128 units per layer. All hidden layers take Gaussian Error Linear Units (GELU) \cite{hendrycks2016gelu} as activation functions. According to Lemma \ref{lemma.encoding}, the output dimension $d_3$ of $h_{\phi}$ should satisfy $d_3\ge Nd_1+1$. We assume the maximum number of surrounding vehicles within the perception region is 20, i.e., $N=20$, so we set $d_3=121$. The Adam method \cite{Diederik2015Adam} with a cosine annealing learning rate is used to update all NNs. See Table \ref{tab:hyper} for more training details.
\begin{table}[!htb]
\centering
\caption{Training hyper-parameters}
\label{tab:hyper}
\begin{tabular}{ccc}
\toprule
Name & Symbol & Value \\
\midrule
Batch size& & 256\\
Value learning rate& $\beta_{\SPACE{Z}}$ & $8\times 10^{-5}\rightarrow 4\times 10^{-5}$\\
Feature learning rate & $\beta_h$&$8\times 10^{-5}\rightarrow 4\times 10^{-5}$\\
Policy learning rate & $\beta_\pi$&$5\times 10^{-5}\rightarrow 4\times 10^{-5}$\\
Learning rate of $\alpha$ &$\beta_\alpha$& $1\times 10^{-4}\rightarrow4\times 10^{-5} $ \\
Discount factor&$\gamma$ & 0.99\\
Update delay & $m$&2\\
Target update rate & $\tau$&0.001\\
Target entropy & $\overline{\SPACE{H}}$&-2\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Results}
For each case, we train five different runs with evaluations every 20000 iterations. We evaluate the driving performance by calculating the average return over five episodes at each evaluation, where the maximum length of each episode is 500 time steps. Fig.~\ref{f:return} demonstrates the learning curves of E-DSAC and DSAC. In the case of considering the 6 nearest surrounding vehicles, the final policy performance learned by E-DSAC (14398.46$\pm$177.67) is about three times that of DSAC (4556.10$\pm$1134.72). The only difference between DSAC and E-DSAC when $M=6$ is that E-DSAC makes decisions based on a permutation-invariant state representation with the help of the feature NN. In addition to the performance gains, E-DSAC also eliminates the need to manually design the sorting rule $o$ in $U(\mathcal{O})$. On the other hand, the E-DSAC ($M\in[1,N]$) that considers all observed surrounding vehicles slightly outperforms E-DSAC ($M=6$). This indicates that the proposed E-DSAC algorithm is capable of handling a variable number of surrounding vehicles. Although 6 surrounding vehicles seem to meet the basic needs of self-driving in this simulation, such a number may not be enough for other scenarios such as complex intersections. The capability of E-DSAC to process variable-size sets makes it more suitable for different driving tasks. To conclude, the state encoding module adopted by E-DSAC significantly improves the policy performance, removing the need for predetermined sorting rules and restrictions on the number of vehicles.
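The reporting convention used above (each run scored by its average return over five evaluation episodes, each method reported as mean $\pm$ standard deviation over five runs) can be reproduced in a few lines of Python; the episode returns below are fabricated placeholders, not experimental data:

```python
# Sketch of the evaluation statistics: score each training run by its
# mean return over 5 evaluation episodes, then report a method as
# "mean +/- std" over 5 independent runs.  All numbers are made up.
from statistics import mean, pstdev

def report(per_run_episode_returns):
    """Return (mean, std) of per-run scores across all runs."""
    scores = [mean(ep) for ep in per_run_episode_returns]
    return mean(scores), pstdev(scores)

runs = [
    [14100, 14350, 14500, 14420, 14280],
    [14600, 14550, 14700, 14650, 14500],
    [14000, 14100, 13950, 14050, 14200],
    [14400, 14450, 14350, 14500, 14300],
    [14250, 14300, 14200, 14350, 14150],
]
m, s = report(runs)  # reported as "m +/- s"
```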
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.4\textwidth]{figure/return.pdf}}
\caption{Return comparison. The solid lines correspond to the mean and the shaded regions correspond to 95\% confidence interval over 5 runs.}
\label{f:return}
\end{figure}
Next, we analyze the driving behavior of the learned policy. As the final policy performance of E-DSAC ($M\in[1,N]$) and E-DSAC ($M=6$) is similar, only the former will be considered in the sequel. In addition to E-DSAC and DSAC, we also introduce a rule-based baseline, in which we control the ego vehicle using the Krauss car-following and SL2015 lane-changing models of SUMO \cite{SUMO2018}. Firstly, for each method, we performed 100 simulations starting from a random location in the rightmost (i.e., low-speed) lane. The maximum time length of each simulation is 100 seconds. Assuming the average speed of the ego vehicle is 100 km/h, the total simulated driving distance is about 300 km. Fig. \ref{f:avg_speed} shows the average speed of the ego vehicle, the preceding vehicle, and all observed surrounding vehicles (i.e., traffic) during the simulation. Results show that the policy learned by E-DSAC maintains a high level of driving efficiency. Its average speed (113.04$\pm$6.55 km/h) is about 13.24 km/h higher than that of DSAC, and 11.10 km/h higher than that of SUMO. It is worth noting that only the ego car controlled by E-DSAC drives faster than the average speed of the traffic flow (about 11.46 km/h higher); its speed is also 2.85 km/h higher than that of the preceding vehicle. This means that the self-driving car has learned to speed up by changing to high-speed lanes or overtaking. As a comparison, the speed of the ego car controlled by DSAC is lower than that of both the traffic and its preceding vehicle. This indicates that the DSAC policy tends to make conservative decisions, such as following the vehicle ahead at a lower speed.
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.4\textwidth]{figure/V_compare.pdf}}
\caption{Average speed comparison. The error bar represents the standard deviation.}
\label{f:avg_speed}
\end{figure}
Fig. \ref{f:lanechange_compare} compares the changes in the following distance and the speed of the preceding vehicle before and after a lane change. It can be seen that after changing lanes, the following distance and the speed of the preceding vehicle increase by 26.11 m and 13.14 km/h, respectively. This indicates that the policy has learned to actively change into lanes with sparser traffic and faster speeds, which further explains the high driving efficiency of E-DSAC shown in Fig. \ref{f:avg_speed}.
\begin{figure}[!htb]
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[Distance comparison]{\includegraphics[width=0.49\linewidth]{figure/lc_distance.pdf}}
\subfloat[Speed comparison]{\includegraphics[width=0.49\linewidth]{figure/lc_velocity.pdf}}
\\
\caption{Difference before and after lane changing. The error bar represents the standard deviation.}
\label{f:lanechange_compare}
\end{figure}
A typical driving process and the corresponding state curves are visualized in Fig.~\ref{f:e1_simu_traj} and \ref{f:e1_state}, respectively. As shown in Fig. \ref{fig:e1_tra_1}, \ref{fig:e1_tra_2} and \ref{fig:e1_velocity}, the ego vehicle is initialized in the outermost lane, and the first left lane change is completed after going straight for about 100 m. During this process, the vehicle accelerates from 90 km/h to about 100 km/h. Next, the ego vehicle goes straight for a period of time, and then completes the second left lane change, increasing the driving speed to about 110 km/h (See Fig. \ref{fig:e1_tra_3} and \ref{fig:e1_tra_4}). Throughout the entire driving process, the ego vehicle merges from the low-speed lane into the high-speed lane through two left lane changes, while maintaining a reasonable distance from the surrounding traffic. It can also be seen from Fig. \ref{fig:e1_control} and \ref{fig:e1_angle} that the control inputs and state curves are very smooth. Appendix \ref{appen.example} gives an instance of the overtaking process. These two cases show that the learned policy is capable of finding a reasonable lane-change position and timing through acceleration and deceleration.
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[Initialization]{\label{fig:e1_tra_1}\includegraphics[width=0.99\linewidth]{figure/example1-tra1.pdf}}
\\
\subfloat[The 1st lane change]{\label{fig:e1_tra_2}\includegraphics[width=0.99\linewidth]{figure/example1-tra2.pdf}}
\\
\subfloat[The 2nd lane change]{\label{fig:e1_tra_3}\includegraphics[width=0.99\linewidth]{figure/example1-tra3.pdf}}
\\
\subfloat[Going straight]{\label{fig:e1_tra_4}\includegraphics[width=0.99\linewidth]{figure/example1-tra4.pdf}}\\
\caption{Simulation 1: trajectory. The red box represents the indicators of the perceived vehicle.}
\label{f:e1_simu_traj}
\end{figure}
\begin{figure}[!htb]
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[Action commands]{\label{fig:e1_control}\includegraphics[width=0.99\linewidth]{figure/example1-control.pdf}}\\
\subfloat[Heading angle and yaw rate]{\label{fig:e1_angle}\includegraphics[width=0.99\linewidth]{figure/example1-angle.pdf}}\\
\subfloat[Speed]{\label{fig:e1_velocity}\includegraphics[width=0.99\linewidth]{figure/example1-velocity.pdf}}\\
\caption{Simulation 1: state and action curves.}
\label{f:e1_state}
\end{figure}
In Fig. \ref{f:distance_distribution}, we display the joint distribution of the following distance and the distance between the ego vehicle and its following vehicle. Results show that the ego car can maintain a reasonable distance from surrounding vehicles during the entire simulation process, providing evidence for the safety of the E-DSAC policy. In summary, E-DSAC outperforms DSAC by a wide margin in terms of policy performance in the field of autonomous driving. The policy learned by E-DSAC can realize efficient, smooth, and relatively safe autonomous driving in the designed multi-lane highway scenario. Besides, Appendix \ref{appen.extension} demonstrates that the proposed encoding policy iteration framework can also be extended to other RL algorithms such as SAC \cite{Haarnoja2018ASAC}, TD3 \cite{Fujimoto2018TD3}, and DDPG \cite{lillicrap2015DDPG}.
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.4\textwidth]{figure/distance_distribution.pdf}}
\caption{Distance distribution.}
\label{f:distance_distribution}
\end{figure}
\section{Real Vehicle Test}
\label{sec:real_veh_test}
In this section, we deploy the learned policy on a real automated vehicle on a two-lane park road to verify the effectiveness of E-DSAC in practical applications.
\subsection{Experiment Description}
The experiment road, located at ($36^{\circ}18'20''$N, $120^{\circ}40'25''$E) in Suzhou, China, has a total length of 170 m, and each lane is 3.5 m wide. There are two speed bumps on this road. The test vehicle is a Chang-an CS55 SUV, equipped with speed-following and steering-wheel-angle tracking systems. An industrial PC (IPC) is employed to send control commands. The indicators of the ego vehicle are obtained through RTK modules and the CAN bus, and the 51Sim-One traffic simulation software is adopted to generate continuous surrounding traffic.
The experiment pipeline is shown in Fig.~\ref{f:architeture}. We first used E-DSAC in SUMO to train, offline, a decision-making module composed of the policy and feature NNs, and then deployed it on the IPC. At each time step, the decision module outputs the corresponding control command according to the observation information received by the IPC. Note that in this experiment, we only consider lateral decision-making and control, so only the expected steering angle command is sent to the steering-wheel-angle tracking system through the CAN bus. For longitudinal control, the speed tracking system is used to track the expected speed, which is 20 km/h. The autonomous driving process continues until the end of the road.
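A schematic version of this closed loop can be sketched as below. The proportional ``policy'', the toy vehicle model, the gains, and the time step are all hypothetical stand-ins for the trained decision module, the real vehicle, and the CAN interface; the sketch only illustrates the observe $\to$ decide $\to$ actuate cycle:

```python
# Schematic of the deployment loop: at each time step the IPC reads an
# observation, the decision module maps it to an expected steering
# command, and the command is applied while a separate controller holds
# the expected speed (20 km/h).  The lane-centering "policy" and the
# kinematic model below are toy stand-ins for the trained NNs and the
# real vehicle dynamics.

def policy_stub(obs):
    """Stand-in decision module: steer back toward the lane center."""
    lateral_dev, heading = obs
    return -0.8 * lateral_dev - 0.5 * heading  # expected steering (rad)

def step_vehicle(state, steer, dt=0.1, speed=20.0 / 3.6):
    """Toy kinematic update at the fixed expected speed of 20 km/h."""
    lateral_dev, heading = state
    heading += steer * dt                # steering changes the heading
    lateral_dev += speed * heading * dt  # heading changes the deviation
    return (lateral_dev, heading)

def drive(state=(1.0, 0.0), steps=200):
    """Observe -> decide -> actuate loop until the end of the run."""
    for _ in range(steps):
        steer = policy_stub(state)          # decision module on the IPC
        state = step_vehicle(state, steer)  # actuation (CAN abstracted)
    return state
```

Starting 1 m off the lane center, the loop steers the toy vehicle back toward the centerline, mirroring the lateral-only control described above.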
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.99\linewidth]{figure/experiment_diagram.pdf}}
\caption{Real vehicle test architecture.}
\label{f:architeture}
\end{figure}
\subsection{Experimental Results}
Fig.~\ref{f: experiment_example_1} shows a typical driving process and the corresponding state and action curves. There were two surrounding vehicles on the road with a speed of about 7.2 km/h. As shown in Fig. \ref{fig:experi_tra}, the ego vehicle started from the right lane, and immediately chose to change to the left lane due to the slow speed of the preceding car. After that, it gradually approached another front car in the left lane. Next, the ego vehicle changed to the right lane to avoid collision, and finally drove to the end. Fig. \ref{fig:experi_control} displays the expected and actual steering wheel angle curves during driving. Although there exists a system response delay of about 0.1 s between the expected and actual values, the actual steering wheel angle changed smoothly during the whole driving process, and only a small oscillation occurred when the vehicle passed over the two speed bumps. Similarly, the yaw rate and heading angle remained smooth during driving, as shown in Fig.~\ref{fig:experi_state}, which indicates that the proposed algorithm can assure a satisfactory level of driving comfort. Besides, as shown in Fig.~\ref{fig:experi_deviation}, the vehicle mainly ran near the lane centerline while going straight.
\begin{figure}[thpb]
\centering
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\captionsetup[subfigure]{justification=centering}
\subfloat[Trajectory]{\label{fig:experi_tra}\includegraphics[width=0.9\linewidth]{figure/S1M8_tra.pdf}}
\\
\subfloat[Steering wheel angle]{\label{fig:experi_control}\includegraphics[width=0.9\linewidth]{figure/S1M8_control.pdf}}
\\
\subfloat[Heading angle and yaw rate]{\label{fig:experi_state}\includegraphics[width=0.9\linewidth]{figure/S1M8_heading.pdf}}
\\
\subfloat[Distance to lane center]{\label{fig:experi_deviation}\includegraphics[width=0.9\linewidth]{figure/S1M8_center.pdf}}
\caption{Experiment 1: trajectory, state and action curves.}
\label{f: experiment_example_1}
\end{figure}
Fig.~\ref{f: experiment_tra} shows five trajectories of autonomous driving under different initial conditions. In Fig. \ref{fig:tra_1}, the speeds of the vehicles ahead in the right and left lanes were about 7.2 km/h and 20 km/h, respectively. The ego car first changed to the left lane, and then went straight until the end. In Fig. \ref{fig:tra_2}, the ego car turned to the left lane to avoid collision when it found that the vehicle ahead (7.2 km/h) in the left lane suddenly changed lane to the right. After that, a right lane change was made to realize continuous collision avoidance. In Fig. \ref{fig:tra_3}, the ego car headed straight to the end due to the fast speed (20 km/h) of the preceding car. In Fig. \ref{fig:tra_4}, the ego car changed lanes three times in a row to avoid the slow-moving vehicle ahead (3.6 km/h). In Fig. \ref{fig:tra_5}, the ego vehicle bypassed two stationary cars and drove towards the destination. Combining Fig. \ref{f: experiment_example_1} and \ref{f: experiment_tra}, the experimental results show that the learned policy of E-DSAC can smoothly complete maneuvers such as lane-keeping, lane changing and overtaking, so as to realize autonomous driving in response to different surrounding vehicles.
\begin{figure}[thpb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[]{\label{fig:tra_1}\includegraphics[width=0.9\linewidth]{figure/S1M3_tra.pdf}}
\\
\subfloat[]{\label{fig:tra_2}\includegraphics[width=0.9\linewidth]{figure/S1M11_tra.pdf}}
\\
\subfloat[]{\label{fig:tra_3}\includegraphics[width=0.9\linewidth]{figure/S2M4_tra.pdf}}
\\
\subfloat[]{\label{fig:tra_4}\includegraphics[width=0.9\linewidth]{figure/S1M10_tra.pdf}}
\\
\subfloat[]{\label{fig:tra_5}\includegraphics[width=0.9\linewidth]{figure/S2M5_tra.pdf}}
\caption{Trajectories under different traffic conditions.}
\label{f: experiment_tra}
\end{figure}
In addition, to increase the difficulty and test the robustness of the learned policy to external disturbances, we manually turned the steering wheel to impose interference during driving. The gray shaded area in Fig. \ref{f: experiment_example_2} represents the time interval during which we applied artificial steering wheel disturbances. The first interference occurred after driving about 10 m. At this time, we manually turned the steering wheel to about -50$^\circ$ to make the ego car approach the right boundary of the road (See Fig. \ref{fig:experi_picture1}). When the IPC took over, the learned policy quickly sent out a left-turn command to make the vehicle quickly return to the right position (See Fig. \ref{fig:experi_picture2}). The second and fourth disturbances were similar to the first, during which we urgently turned the steering wheel to the left. The deviations caused by these two interferences were also well corrected by the learned policy. Before the third interference occurred, the vehicle was in the initial stage of a left lane change, aiming to avoid the preceding car. Then, we turned the steering wheel right to about -35$^\circ$ to interfere with the lane change (See Fig. \ref{fig:experi_picture3}). The learned policy still successfully completed the left lane change after taking over.
\begin{figure}[thpb]
\centering
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\captionsetup[subfigure]{justification=centering}
\subfloat[Trajectory]{\label{fig:experi_tra_d}\includegraphics[width=0.9\linewidth]{figure/S1M9d_tra.pdf}}
\\
\subfloat[Steering wheel angle]{\label{fig:experi_control_d}\includegraphics[width=0.9\linewidth]{figure/S1M9d_control.pdf}}
\\
\subfloat[Heading angle and yaw rate]{\label{fig:experi_state_d}\includegraphics[width=0.9\linewidth]{figure/S1M9d_heading.pdf}}
\\
\subfloat[Distance to lane center]{\label{fig:experi_deviation_d}\includegraphics[width=0.9\linewidth]{figure/S1M9d_center.pdf}}
\caption{Experiment 2: trajectory, state and action curves. The gray shaded area represents the time interval during which we apply artificial steering wheel disturbances.}
\label{f: experiment_example_2}
\end{figure}
\begin{figure}[thpb]
\centering
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\captionsetup[subfigure]{justification=centering}
\subfloat[The 1st interference]{\label{fig:experi_picture1}\includegraphics[width=0.48\linewidth]{figure/exper_picture_1.png}}\quad
\subfloat[The 1st take over ]{\label{fig:experi_picture2}\includegraphics[width=0.48\linewidth]{figure/exper_picture_2.png}}
\\
\subfloat[The 3rd interference]{\label{fig:experi_picture3}\includegraphics[width=0.48\linewidth]{figure/exper_picture_3.png}} \quad
\subfloat[The 3rd take over]{\label{fig:experi_picture4}\includegraphics[width=0.48\linewidth]{figure/exper_picture_4.png}}
\caption{Images of experiment 2.}
\label{f: experiment_picture}
\end{figure}
To conclude, the learned policy of E-DSAC has the potential for real-vehicle applications. It can realize relatively safe and smooth decision-making and control under different traffic situations in multi-lane scenarios. Besides, when the steering wheel is disturbed by a human driver, the learned policy can restore the vehicle to the proper state immediately after taking over. This implies that the learned policy is also compatible with human-machine shared driving.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose the encoding distributional soft actor-critic (E-DSAC) algorithm for self-driving decision-making, which can deal with the permutation sensitivity problem faced by existing related studies. Firstly, we develop an encoding distributional policy iteration (DPI) framework by embedding the encoding sum and concatenation method in the distributional RL framework. Then, the proposed DPI framework is proved to exhibit important properties in terms of convergence and global optimality. Based on the encoding DPI framework, we propose the E-DSAC algorithm by adding the gradient-based update rule of the feature NN to the policy evaluation process of the DSAC algorithm. We design a simulated multi-lane highway driving task and the corresponding reward function to verify the effectiveness of E-DSAC. Results show that the policy learned by E-DSAC can realize efficient, smooth, and relatively safe autonomous driving in the designed scenario. Compared with DSAC, E-DSAC improves the final policy return by about a factor of three, while removing the need for predetermined sorting rules and restrictions on the number of vehicles. Finally, a real vehicle test is conducted, and the experimental results show that the learned policy can smoothly complete maneuvers such as lane-keeping and lane-changing in practical applications, so as to realize autonomous driving in response to different surrounding vehicles. Besides, the learned policy is highly robust to artificial steering wheel interference. This study demonstrates the great potential of RL for application in actual driving scenarios.
\section{Supplementary Results}
\subsection{Simulation Example}
\label{appen.example}
Fig. \ref{f:e2_trajectory} gives an instance of the overtaking process, and Fig. \ref{f:e2_state} shows the corresponding state curves. As shown in Fig. \ref{fig:e2_tra1} and \ref{fig:e2_tra2}, the ego vehicle first changes to the innermost lane due to the too-short following distance. After that, the ego vehicle finds a suitable lane-change position by accelerating and then completes the overtaking process through a right lane change (See Fig. \ref{fig:e2_tra3}, \ref{fig:e2_tra4} and \ref{fig:e2_speed}).
\begin{figure}[!htb]
\centering
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\captionsetup[subfigure]{justification=centering}
\subfloat[Initialization]{\label{fig:e2_tra1}\includegraphics[width=0.99\linewidth]{figure/example2-tra1.pdf}}
\\
\subfloat[Left lane change]{\label{fig:e2_tra2}\includegraphics[width=0.99\linewidth]{figure/example2-tra2.pdf}}
\\
\subfloat[Right lane change]{\label{fig:e2_tra3}\includegraphics[width=0.99\linewidth]{figure/example2-tra3.pdf}}
\\
\subfloat[Going straight]{\label{fig:e2_tra4}\includegraphics[width=0.99\linewidth]{figure/example2-tra4.pdf}}\\
\caption{Simulation 2: trajectories. The red box represents the indicators of the perceived vehicle.}
\label{f:e2_trajectory}
\end{figure}
\begin{figure}[!htb]
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[Action commands]{\label{fig:e2_action}\includegraphics[width=0.99\linewidth]{figure/example2-control.pdf}}\\
\subfloat[Heading angle and yaw rate]{\label{fig:e2_angle}\includegraphics[width=0.99\linewidth]{figure/example2-angle.pdf}}\\
\subfloat[Speed]{\label{fig:e2_speed}\includegraphics[width=0.99\linewidth]{figure/example2-velocity.pdf}}\\
\caption{Simulation 2: state and control curves.}
\label{f:e2_state}
\end{figure}
\subsection{Extension}
\label{appen.extension}
The encoding DPI framework can also be extended to other RL algorithms, such as SAC, TD3, and DDPG. Based on similar ideas, we develop encoding versions of these algorithms, called E-SAC, E-TD3 and E-DDPG, respectively. We run these algorithms in the designed multi-lane highway driving task, and the learning curves are shown in Fig. \ref{f:return_extension}. The results show that the encoding RL framework has good compatibility with mainstream RL algorithms, and is of great significance for improving the policy performance in the field of RL-based autonomous driving.
\begin{figure}[!htb]
\captionsetup{singlelinecheck = false,labelsep=period, font=small}
\centering{\includegraphics[width=0.4\textwidth]{figure/return_extention.pdf}}
\caption{Return comparison. The solid lines correspond to the mean and the shaded regions correspond to 95\% confidence interval over 5 runs.}
\label{f:return_extension}
\end{figure}
\section{Proof of Convergence}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{ieeetr}
\section{Introduction}
A nice and powerful application of the study of the Green function of an elliptic differential operator appears in Schoen's approach \cite{schoen:84} to the Yamabe problem. This famous problem of conformal Riemannian geometry can be stated as follows: given a connected compact Riemannian manifold $(\mathrm{M}^n,g)$ with $n\geq 3$, can one find a metric $\overline{g}$ in the conformal class $[g]$ of $g$ with constant scalar curvature? We first briefly recall some steps in the resolution of this problem. In \cite{yamabe:60}, Yamabe claimed to answer this question; however, Tr\"udinger \cite{trudinger:68} pointed out a mistake in Yamabe's argument and was able to fix the proof in some cases. The remaining cases were treated by Aubin \cite{aubin:76} and Schoen \cite{schoen:84}. In fact, solving the Yamabe problem is equivalent to finding a smooth positive solution of a nonlinear elliptic equation involving a conformally invariant operator: the conformal Laplacian. This is a second order elliptic differential operator acting on functions, defined by:
\begin{eqnarray*}
\mathrm{L}_g :=\frac{4(n-1)}{n-2}\Delta_g +\mathrm{R}_g
\end{eqnarray*}
and which relates the scalar curvature of two metrics in the same conformal class by the formula:
\begin{eqnarray*}
\mathrm{R}_{\overline{g}}=f^{-\frac{n+2}{n-2}}\mathrm{L}_g f,
\end{eqnarray*}
if $\overline{g}=f^{\frac{4}{n-2}}g\in[g]$, where $\Delta_g$ and $\mathrm{R}_g$ denote respectively the standard Laplacian and the scalar curvature of $(\mathrm{M},g)$. Finding a metric with constant scalar curvature in the conformal class of $g$ is then equivalent to finding a smooth positive solution $f$ of the equation:
\begin{eqnarray}\label{equationyam}
\mathrm{L}_g f= {\rm{C}}\, f^{N-1}
\end{eqnarray}
where $\rm{C}$ is a constant and $N=\frac{2n}{n-2}$ (note that $N-1=\frac{n+2}{n-2}$, so that a solution $f$ of Equation~(\ref{equationyam}) yields a metric $\overline{g}=f^{\frac{4}{n-2}}g$ with constant scalar curvature $\mathrm{R}_{\overline{g}}={\rm{C}}$). The major difficulty in this problem comes from the fact that the Sobolev embedding $\rm{H}^2_1(\mathrm{M})\hookrightarrow\rm{L}^N(\mathrm{M})$ is not compact, so a standard variational approach does not suffice to conclude. However, with a little work, one can show the existence of a nonnegative smooth function $f$ solution of Equation~(\ref{equationyam}) which is either positive everywhere or identically zero on $\mathrm{M}$. So it remains to show that this function cannot vanish identically on $\mathrm{M}$. In \cite{aubin:76}, Aubin observed that if:
\begin{eqnarray}\label{aubin}
\mu(\mathrm{M})<\mu(\mathbb{S}^n)
\end{eqnarray}
then the function $f$ has to be positive. Here $\mu(\mathrm{M})$ denotes the Yamabe invariant of $\mathrm{M}$ (which depends only on the conformal class of $g$), defined by:
\begin{eqnarray}\label{yamcomp}
\mu(\mathrm{M}):=\underset{u\in\mathrm{C}^1(\mathrm{M}),u\neq 0}{\mathrm{inf}} \frac{\int_{\mathrm{M}}u{\rm{L}}_g u\, dv(g)}{\Big(\int_{\mathrm{M}}|u|^N dv(g)\Big)^{\frac{2}{N}}}.
\end{eqnarray}
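As a simple illustration of this variational characterization, testing (\ref{yamcomp}) with the constant function $u\equiv 1$ (for which ${\rm{L}}_g u=\mathrm{R}_g$) immediately yields the upper bound:
\begin{eqnarray*}
\mu(\mathrm{M})\leq \frac{\int_{\mathrm{M}}\mathrm{R}_g\, dv(g)}{\Vol(\mathrm{M},g)^{\frac{n-2}{n}}},
\end{eqnarray*}
since $\big(\int_{\mathrm{M}}1\, dv(g)\big)^{\frac{2}{N}}=\Vol(\mathrm{M},g)^{\frac{n-2}{n}}$.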
He also proved that if $(\mathrm{M},g)$ is not conformally equivalent to the standard sphere $(\mathbb{S}^n,g_{\rm{st}})$ then $\mu(\mathrm{M})\leq\mu(\mathbb{S}^n)$, and moreover that if $n\geq 6$ and the manifold is not locally conformally flat then (\ref{aubin}) holds. The last step in the resolution of the Yamabe problem was made by Schoen \cite{schoen:84}, who proved $(\ref{aubin})$ in the remaining cases using the positive mass theorem. In $1987$, Lee and Parker \cite{lee.parker:87} gave a proof of the Yamabe problem unifying Aubin's and Schoen's arguments. The key point of their work uses Schoen's result and is strongly based on the study of the Green function of the conformal Laplacian and on the existence of an adapted coordinate chart, namely the conformal normal coordinates. In fact, using the development of the Green function near a point $q\in\mathrm{M}$ in these coordinates, they construct adapted test functions which encompass Aubin's and Schoen's arguments. More precisely, in a neighbourhood of $q$, the Green function of the conformal Laplacian can be decomposed into the sum of a singular part (which is nearly the Green function of the standard Laplacian on Euclidean space) and a regular part. The point is that, according to whether the manifold is conformally flat or not, this regular part evaluated at $q$ is either the mass of an asymptotically flat manifold (obtained by a conformal change of metric whose weight is exactly the Green function) or the squared norm of the Weyl tensor at $q$.\\
If we now suppose that the manifold $\mathrm{M}$ is spin, there exists another operator which has the same conformal properties as the conformal Laplacian: the Dirac operator. This operator is a first order elliptic differential operator acting on sections of the complex spinor bundle over $(\mathrm{M},g)$. The relation between these two operators was established by Hijazi (see \cite{hijazi:86}) through an inequality (now called the Hijazi inequality) which relates their first eigenvalues and which allows one to define a spin conformal invariant given by:
\begin{eqnarray*}
\lambda_{\min}^+(\M,[g],\sigma):=\underset{\overline{g}\in[g]}{\inf}\big\{\lambda_1^+(\overline{g})\Vol(\mathrm{M},\overline{g})^{\frac{1}{n}}\big\}.
\end{eqnarray*}
This invariant can be seen as an analogue of the Yamabe invariant in the spinorial setting. Moreover, using the Hijazi inequality, we can compare this invariant to the Yamabe invariant of $(\mathrm{M},g)$ (see \cite{hijazi:91} and \cite{baer:92} for the case $n=2$):
\begin{eqnarray}\label{hijazicomp}
\lambda_{\min}^+(\M,[g],\sigma)^2 \geq\frac{n}{4(n-1)}\mu(\mathrm{M}).
\end{eqnarray}
This invariant has been (and still is) the main subject of many works, in particular in a series of papers by Ammann, Humbert and Morel (see \cite{ammann}, \cite{amm1}, \cite{amm4} or \cite{ahm}). In these papers, the authors show that this invariant shares many properties of the Yamabe invariant. Indeed, in \cite{amm3} it is shown that one can derive a spinorial analogue of Aubin's inequality, i.e. we have:
\begin{eqnarray}\label{ahm}
\lambda_{\min}^+(\M,[g],\sigma)\leq \lambda_{\min}(\mathbb{S}^n,[{g_{\rm st}}],\sigma_{\mathrm{st}})=\frac{n}{2}\omega_n^{\frac{1}{n}}
\end{eqnarray}
where $\omega_n$ denotes the volume of the standard sphere. Moreover, if this inequality is strict, then with the help of the Hijazi inequality, it gives a spinorial proof of the Yamabe problem. On the other hand, Ammann \cite{habilbernd} also proved that the strict inequality in (\ref{ahm}) gives the existence of a solution of a nonlinear partial differential equation for the Dirac operator involving a critical Sobolev embedding. A natural question is then to ask if one can find sufficient conditions for which Inequality~(\ref{ahm}) is strict. A partial answer is given in \cite{amm4} with the help of a detailed study of the Green function of the Dirac operator. Indeed, in this article, the authors considered the case of locally conformally flat manifolds, where the behaviour of the Green function can be well understood. In fact, they show that the Green function of the Dirac operator can be decomposed (around $q$) into the sum of a singular part (which is nearly the Green function of the Dirac operator on Euclidean space) and a smooth spinor field. Then they define the mass operator, which plays the same role as the constant term of the Green function of the conformal Laplacian. Another application of the Green function of the Dirac operator can be found in \cite{amm2}, where a simple proof of the positive mass theorem for the Yamabe problem is given. The main idea of the proof is based on the fact that the Witten spinor (see \cite{lee.parker:87} or \cite{bartnik}) is exactly obtained as the image of the Green function of the Dirac operator under the conformal change of metric whose weight is given by the Green function of the conformal Laplacian.\\
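Note that, using the classical value $\mu(\mathbb{S}^n)=n(n-1)\omega_n^{\frac{2}{n}}$ of the Yamabe invariant of the round sphere, one checks that (\ref{hijazicomp}) is in fact an equality on $(\mathbb{S}^n,g_{\rm{st}})$:
\begin{eqnarray*}
\lambda_{\min}(\mathbb{S}^n,[{g_{\rm st}}],\sigma_{\mathrm{st}})^2=\frac{n^2}{4}\,\omega_n^{\frac{2}{n}}=\frac{n}{4(n-1)}\, n(n-1)\,\omega_n^{\frac{2}{n}}=\frac{n}{4(n-1)}\,\mu(\mathbb{S}^n).
\end{eqnarray*}\\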
In this paper, we define and study the Green function of the Dirac operator on manifolds with boundary. More precisely, since the conformal aspect underlies this work, the choice of the boundary conditions is crucial in what follows. It turns out that local boundary conditions seem to be appropriate for the study of the conformal aspect of the Dirac operator on manifolds with boundary.\\
In the first part, we focus on the condition associated with a chirality operator (also called the chiral bag boundary condition, see \cite{hijazi.montiel.roldan:01} for example). A direct application of this construction is motivated by previous results of the author (see \cite{sr}, \cite{sr3} or Section~\ref{apl1}). Indeed, in \cite{sr3} we define an analogue of $\lambda_{\min}^+(\M,[g],\sigma)$ on manifolds with boundary and we show that this invariant satisfies an inequality corresponding to (\ref{ahm}). We note that if this inequality is strict, then there exists a solution for the Yamabe problem on manifolds with boundary (see \cite{escobar:92}). Moreover, we show (in a forthcoming paper \cite{sr5}) that it also implies the existence of a spinor field solution of a Yamabe-type boundary problem for the Dirac operator under the chiral bag boundary condition. A natural question is then to ask if one can find sufficient conditions for which this inequality is strict. It appears that the study of the Green function of the Dirac operator under this boundary condition allows us to give such a condition.\\
Secondly, we study the Green function of the Dirac operator for another local elliptic boundary condition: the $\mathrm{MIT}$ bag boundary condition. Unlike the chiral bag boundary condition, this condition exists on every spin manifold with boundary, since it does not require any additional structure. As in the previous part, we compute the development of this function around a boundary point where the geometry of the manifold is quite simple. Then, as an application, we give a proof of the positive mass theorem proved by Escobar in \cite{escobar:92}, which appears in the resolution of the Yamabe problem on manifolds with boundary. Note that our result requires the manifold to be spin (which is a stronger assumption compared with Escobar's theorem); however, we do not assume that the whole manifold is locally conformally flat with umbilic boundary, but only that there exists a boundary point $q\in\pa\mathrm{M}$ with a locally conformally flat with umbilic boundary neighbourhood (see below for the definition of such a neighbourhood).\\
{\bf Convention:} In this article, a point $q\in\pa\mathrm{M}$ in a Riemannian manifold $(\mathrm{M}^n,g)$ has a locally conformally flat with umbilic boundary neighbourhood if there exist a metric $\overline{g}\in [g]$ and a neighbourhood $\mathrm{V}\subset\mathrm{M}$ (resp. $\mathrm{U}\subset\mathbb{R}^n_+$) of $q\in\pa\mathrm{M}$ (resp. of $0\in\pa\mathbb{R}^n_+$) such that $(\mathrm{U},\xi)$ and $(\mathrm{V},\overline{g})$ are isometric, where $\xi$ denotes the Euclidean metric. In the same way, a point $q\in\pa\mathrm{M}$ has a locally flat with umbilic boundary neighbourhood if $\overline{g}=g$ in the previous notation.\\
{\bf Acknowledgements:} I would like to thank Oussama Hijazi and Emmanuel Humbert for their support. I am also very grateful to Marc Herzlich and Sebasti\'an Montiel for their remarks and their suggestions.
\section{Green function for the Dirac operator under the chiral bag boundary condition}\label{GFDOCBC}
\subsection{Definitions and first properties}\label{definition1}
In this section, we give a rigorous definition of the Green function of the Dirac operator under the chiral bag boundary condition. This condition was introduced by Gibbons, Hawking, Horowitz and Perry \cite{hawking} (see also \cite{herzlich1}) on asymptotically flat manifolds with inner boundary in order to prove positive mass theorems for black holes. It defines an elliptic boundary condition for the Dirac operator, and in the context of compact Riemannian spin manifolds with boundary, one can study the properties of its spectrum (see \cite{fs}, \cite{hmz} or \cite{sr} for example).\\
Let $(\mathrm{M}^n,g,\sigma)$ be an $n$-dimensional connected compact Riemannian spin manifold with non-empty smooth boundary $\pa\mathrm{M}$. We will denote by $\Sigma_g(\mathrm{M})$ the spinor bundle over $(\mathrm{M},g)$, by $\na$ the Riemannian and spinorial Levi-Civita connections, and by ``$\cdot$'' the Clifford multiplication. The Dirac operator is then the first order elliptic differential operator acting on $\Sigma_g(\mathrm{M})$, locally given by:
\begin{eqnarray*}
\mathrm{D}_g\varphi=\sum_{i=1}^n e_i\cdot\na_{e_i}\varphi,
\end{eqnarray*}
for all $\varphi\in\Ga\big(\Sigma_g(\mathrm{M})\big)$ and where $\{e_1,...,e_n\}$ is a local $g$-orthonormal frame of the tangent bundle. We now briefly recall the definition of the chiral bag boundary condition. From now on, we assume that there exists a chirality operator, i.e. a linear map
\begin{eqnarray*}
\Ga:\Sigma_g(\mathrm{M})\longrightarrow\Sigma_g(\mathrm{M})
\end{eqnarray*}
from the spinor bundle over $(\mathrm{M},g)$ which satisfies the following properties:
\begin{equation}\label{poc}
\begin{array}{rc}
\Ga^2=\Id, & \<\Ga\varphi,\Ga\psi\>=\<\varphi,\psi\>\\
\na_X(\Ga\psi)=\Ga(\na_X\psi), & X\cdot\Ga\psi=-\Ga(X\cdot\psi),
\end{array}
\end{equation}
for all $X\in\Gamma(\TM)$ and for all spinor fields $\psi,\varphi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$. If $\nu$ denotes the (inner) unit normal vector field to the boundary, an easy computation shows that the fiber preserving endomorphism:
\begin{eqnarray*}
\nu\cdot\Ga:\mathbf{S}_g\longrightarrow\mathbf{S}_g
\end{eqnarray*}
of the restricted spinor bundle $\mathbf{S}_g:=\Sigma_g(\mathrm{M})_{|\pa\mathrm{M}}$ is an involution (indeed, by the properties (\ref{poc}), $\nu\cdot\Ga(\nu\cdot\Ga\psi)=-\nu\cdot\nu\cdot\Ga^2\psi=\psi$). Hence this spinor bundle splits into the direct sum $\mathbf{S}_g=\mathrm{V}_g^+\oplus\mathrm{V}_g^-$, where $\mathrm{V}_g^\pm$ is the eigensubbundle associated with the eigenvalue $\pm 1$. One can then check that the orthogonal projection
$$\begin{array}{lccl}
\BCHIpm : & \mathrm{L}^2(\mathbf{S}_g) & \longrightarrow & \mathrm{L}^2(\mathrm{V}^{\pm}_g)\\
& \varphi & \longmapsto & \frac{1}{2}(\Id\pm \nu\cdot\Ga)\varphi,
\end{array}$$
onto the eigensubbundle $\mathrm{V}^\pm_g$ defines an elliptic boundary condition for the Dirac operator. More precisely, the Dirac operator
\begin{eqnarray}
\mathrm{D}_g:\hpm_g=\{\varphi\in\mathrm{H}_1^2\;/ \;\BCHIpm(\varphi_{|\pa\mathrm{M}})=0\}\longrightarrow\mathrm{L}^2\big(\Si_g(\mathrm{M})\big)
\end{eqnarray}
is a Fredholm operator and its spectrum (under this boundary condition) consists entirely of isolated real eigenvalues with finite multiplicity. In the following, we will denote by $\big(\lambda^\pm_k(g)\big)_{k\in\mathbb{Z}}$ this set of eigenvalues, i.e. we have:
$$\left\lbrace
\begin{array}{ll}
\mathrm{D}_g\varphi^\pm_k=\lambda_k^\pm(g)\varphi_k^\pm & \quad\textrm{on}\;\mathrm{M}\\ \\
\BCHIpm(\varphi^\pm_{k\,|\pa\mathrm{M}})=0 & \quad\textrm{along}\;\pa\mathrm{M}
\end{array}
\right.$$
where $(\varphi^\pm_k)_{k\in\mathbb{Z}}$ can be chosen to form a spectral resolution. Let $\pi_1$, $\pi_2:\mathrm{M}\times\mathrm{M}\rightarrow\mathrm{M}$ be the projections onto the first and second components and let:
\begin{eqnarray*}
\Sigma_g(\mathrm{M})\boxtimes\big(\Sigma_g(\mathrm{M})\big)^\ast:=\pi_1^\ast\big(\Sigma_g(\mathrm{M})\big)\otimes\big(\pi_2^*\big(\Sigma_g(\mathrm{M})\big)\big)^\ast
\end{eqnarray*}
i.e. the fiber bundle whose fiber over $(x,y)$ is given by
\begin{eqnarray*}
\Big(\Sigma_g(\mathrm{M})\boxtimes\big(\Sigma_g(\mathrm{M})\big)^\ast\Big)_{(x,y)}:=\mathrm{Hom}\big(\Sigma_{y}(\mathrm{M}),\Sigma_{x}(\mathrm{M})\big),
\end{eqnarray*}
where $\Sigma_{y}(\mathrm{M})$ denotes the fiber at $y$ of the spinor bundle over $(\mathrm{M},g)$. We are now ready to give the definition of the Green function for the Dirac operator under the chiral bag boundary condition.
\begin{definition}
A Green function for the Dirac operator under the chiral bag boundary condition (or a chiral Green function) is given by a smooth section:
\begin{eqnarray}
\mathrm{G}^\pm_\mathrm{CHI}:\mathrm{M}\times\mathrm{M}\setminus\Delta\rightarrow\Sigma_g(\mathrm{M})\boxtimes\big(\Sigma_g(\mathrm{M})\big)^\ast
\end{eqnarray}
locally integrable on $\mathrm{M}\times\mathrm{M}$ which satisfies, in a weak sense, the following boundary problem:
$$\left\lbrace
\begin{array}{ll}
\mathrm{D}_g\big(\mathrm{G}^\pm_\mathrm{CHI}(x,y)\big)=\delta_y\mathrm{Id}_{\Sigma_y\mathrm{M}}\\ \\
\mathbb{B}_{\CHI}^\pm\big(\mathrm{G}^\pm_\mathrm{CHI}(x,y)\big)=0,\quad\textrm{for}\;x\in\pa\mathrm{M}\setminus\{y\},
\end{array}
\right.$$
for all $x\neq y\in\mathrm{M}$ (where $\Delta:=\{(x,y)\in\mathrm{M}\times\mathrm{M}\,/\,x=y\}$). In other words, we have:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\mathrm{G}_{\mathrm{CHI}}^\pm(x,y)\psi_0^\pm,\mathrm{D}_g\varphi(x)\>dv(x)=\<\psi_0^\pm,\varphi(y)\>
\end{eqnarray*}
for all $y\in\mathrm{M}$, $\psi_0^\pm\in\Sigma_{y}(\mathrm{M})$ satisfying $\nu\cdot\Ga\psi_0^\pm=\mp\psi_0^\pm$ and $\varphi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ such that $\BCHIpm(\varphi_{|\pa\mathrm{M}})=0$.
\end{definition}
\begin{remark}\label{remark1}
Here we recall that on Euclidean space there is a corresponding notion of Green function for the Dirac operator (see \cite{amm4} for example). In fact, one can easily check that there exists a unique Green function, given by:
$$\begin{array}{lccl}
\mathrm{G}_{\eucl}: & \mathbb{R}^n\times\mathbb{R}^n\setminus\Delta & \longrightarrow & \Sigma\mathbb{R}^n\boxtimes\big(\Sigma\mathbb{R}^n\big)^\ast\\
& (x,y) & \longmapsto & -\frac{1}{\omega_{n-1}}\frac{x-y}{|x-y|^n}\cdot,
\end{array}$$
where $\omega_n$ stands for the volume of the $n$-dimensional round sphere. This Green function can be seen (up to a conformal change of metric) as the inverse of the Dirac operator on the standard sphere $\mathbb{S}^n$.
\end{remark}
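For the reader's convenience, let us sketch the standard verification (using only the Clifford relations, with the convention $X\cdot X\cdot\psi=-|X|^2\psi$) that this kernel is harmonic away from the diagonal. For a constant spinor $\psi_0$, $x\neq y$ and $r=|x-y|$, we compute:
\begin{eqnarray*}
\mathrm{D}\Big(\frac{x-y}{r^n}\cdot\psi_0\Big)=\sum_{i=1}^n e_i\cdot\pa_{x_i}\Big(\frac{x-y}{r^n}\Big)\cdot\psi_0=\sum_{i=1}^n\Big(\frac{1}{r^n}\,e_i\cdot e_i-\frac{n(x_i-y_i)}{r^{n+2}}\,e_i\cdot(x-y)\Big)\cdot\psi_0,
\end{eqnarray*}
and since $e_i\cdot e_i=-1$ and $(x-y)\cdot(x-y)=-r^2$, the right-hand side equals $-\frac{n}{r^n}\psi_0+\frac{n}{r^n}\psi_0=0$.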
We are now going to give the expansion of the chiral Green function near a boundary point $q\in\pa\mathrm{M}$ when the manifold has a particularly simple geometry near this point. From now on, we assume that the Dirac operator is invertible under the chiral bag boundary condition, i.e. $\mathrm{Ker}^\pm(\mathrm{D}_g)=\{0\}$, where $\mathrm{Ker}^\pm(\mathrm{D}_g)$ denotes the kernel of the Dirac operator with domain $\mathcal{H}^\pm_g$.\\
In order to get this expansion, we have to identify (locally) the spinor bundle over $(\mathrm{M},g)$ with the (trivial) spinor bundle over an open set of the half-space $\mathbb{R}^n_+$ endowed with the Euclidean metric. In fact, in a previous work (see \cite{sr3} or \cite{mathese}), it is shown that if
\begin{eqnarray*}
\mathcal{F}_q:\mathrm{U}\longrightarrow\mathrm{V}
\end{eqnarray*}
stands for the Fermi coordinate system around $q\in\mathrm{V}$ where $\mathrm{U}$ (resp. $\mathrm{V}$) is an open set in $\mathbb{R}^n_+$ (resp. in $\mathrm{M}$) then we have a canonical trivialization given by:
$$\begin{array}{clc}
\Sigma_\xi(\mathrm{U}) & \longrightarrow & \Sigma_g(\mathrm{V}) \\
\psi & \longmapsto & \overline{\psi}.
\end{array}$$
where $\xi$ is the Euclidean metric on $\mathbb{R}^n_+$. This identification is closely related to the ones given in \cite{bourguignon} and \cite{amm3}.
\begin{remark}\label{remark2}
In \cite{sr3} (see also \cite{mathese}), we show that it is sufficient to choose a constant spinor field $\psi_0\in\Sigma_0(\mathbb{R}^n_+)$ such that $\BCHIpm\big(\overline{\psi_0^\pm}\big)(q)=0$ at the single point $q\in\pa\mathrm{M}$ to ensure that $\BCHIpm\big(\overline{\psi_{0}^\pm}_{\,|\mathrm{V}\cap\pa\mathrm{M}}\big)=0$.
\end{remark}
In the following, we will use the same notation for a spinor field over $\mathrm{U}$ and its image over $\mathrm{V}$ under this trivialization. The expansion of the chiral Green function is then given by:
\begin{proposition}\label{Devfoncgreen}
Assume that there exists a point $q\in\pa\mathrm{M}$ which has a locally flat with umbilic boundary neighbourhood. Then the chiral Green function exists and has, in the above trivialization, the following expansion (near $q$):
\begin{eqnarray}\label{devfoncgreen}
\mathrm{G}_\mathrm{CHI}^\pm(x,q)\psi_0^\pm=-\frac{2}{\omega_{n-1}}\frac{x-q}{|x-q|^n}\cdot\psi_0^\pm+\mathrm{m}_\mathrm{CHI}^\pm(x,q)\psi_0^\pm,
\end{eqnarray}
where $\psi_0^\pm\in\Sigma_q(\mathrm{M})$ is a spinor field satisfying $\nu\cdot\Ga\psi_0^\pm=\mp\psi_0^\pm$ (in other words $\mathbb{B}_{\CHI}^{\pm}(\psi_0^\pm)=0$) and where $\mathrm{m}_\mathrm{CHI}^\pm(\,.\,,q)\psi_0^\pm$ is a smooth spinor field such that $\mathrm{D}_g\big(\mathrm{m}_\mathrm{CHI}^\pm(\,.\,,q)\psi_0^\pm\big)=0$ around $q$. Moreover, along $\pa\mathrm{M}$, we have:
\begin{eqnarray}\label{masschiral}
\mathbb{B}_{\CHI}^{\pm}\big(\mathrm{m}_\mathrm{CHI}^\pm(.\,,q)\psi^\pm_{0|\pa\mathrm{M}}\big)=0.
\end{eqnarray}
\end{proposition}
{\it Proof:}
We only prove this result for the boundary condition $\mathbb{B}_{\mathrm{CHI}}^-$ since the proof is similar for $\mathbb{B}_{\mathrm{CHI}}^+$. We trivialize the spinor bundle over $\mathrm{V}$ (around the boundary point $q$) using the preceding discussion. In fact, we can isometrically identify $(\mathrm{V},g)$ with $(\mathrm{U},\xi)$, and thus the spinor bundles over these spaces. Now we consider a smooth cut-off function $\eta$ such that:
$$\left\lbrace
\begin{array}{ll}
0\leq \eta\leq 1, & \eta\in\mathrm{C}^\infty(\mathrm{M})\\
\eta\equiv 1\textrm{ on } \mathrm{B}^+_q(\delta), & \mathrm{supp}(\eta)\subset\mathrm{B}^+_q(2\delta)\subset\mathrm{V}
\end{array}
\right.$$
where $\mathrm{B}^+_q(2\delta)$ is the half-ball centered at $q$ with radius $2\delta>0$ contained in $\mathrm{V}$. Let $\Psi=\mathrm{D}_g\big(2\eta\,\mathrm{G}_{\eucl}(\,.\,,q)\psi_0\big)$ with $\psi_0\in\Sigma_{q}(\mathrm{M})$ such that $\nu\cdot\Ga\psi_0=\psi_0$. Using Remark~\ref{remark2}, this boundary condition is then fulfilled along $\mathrm{V}\cap\pa\mathrm{M}$ for the spinor field $\psi_0$. The spinor field $\Psi$ is smooth on $\mathrm{M}\setminus\{q\}$ and an easy calculation leads to $\Psi_{|\mathrm{B}_q^+(\delta)\setminus\{q\}}=0$, so we can extend it to all of $\mathrm{M}$ by setting $\Psi(q)=0$. Since the Dirac operator is supposed to be invertible on $\mathcal{H}^-$, there exists a unique spinor field $\mathrm{m}_\mathrm{CHI}^-(\,.\,,q)\psi_0\in\mathcal{H}^-$ such that $\mathrm{D}_g\big(\mathrm{m}_\mathrm{CHI}^-(\,.\,,q)\psi_0\big)=-\Psi$. Now for $x\in\mathrm{M}\setminus\{q\}$, we let:
\begin{eqnarray*}
\mathrm{G}^-_q(x)\psi_0=2\eta\,\mathrm{G}_{\eucl}(x,q)\psi_0+\mathrm{m}_\mathrm{CHI}^-(x,q)\psi_0.
\end{eqnarray*}
We first check that if $x\in\pa\mathrm{M}\setminus\{q\}$ then this spinor field satisfies the chiral bag boundary condition. If $x\in\mathrm{supp}(\eta)^c\cap\pa\mathrm{M}$ then
\begin{eqnarray*}
\mathbb{B}_{\CHI}^{-}\big(\mathrm{G}^-_q(x)\psi_0\big)=\mathbb{B}_{\CHI}^{-}\big(\mathrm{m}_\mathrm{CHI}^-(x,q)\psi_0\big)=0
\end{eqnarray*}
since by construction $\mathrm{m}_\mathrm{CHI}^-(\,.\,,q)\psi_0\in\mathcal{H}^-$. Secondly, if $x\in\big(\mathrm{supp}(\eta)\cap\pa\mathrm{M}\big)\setminus\{q\}$, we get
\begin{eqnarray*}
\mathbb{B}_{\CHI}^{-}\big(\mathrm{G}^-_q(x)\psi_0\big)=\mathbb{B}_{\CHI}^{-}\big(\mathrm{G}_{\eucl}(x,q)\psi_0\big).
\end{eqnarray*}
Using the expression of the Euclidean Green function given in Remark~\ref{remark1} and the properties (\ref{poc}) of the chirality operator $\Gamma$, we obtain
\begin{eqnarray*}
\nu\cdot\Ga\big(\frac{x-q}{|x-q|^n}\cdot\psi_0\big)=\frac{x-q}{|x-q|^n}\cdot\nu\cdot\Ga\psi_0=\frac{x-q}{|x-q|^n}\cdot\psi_0
\end{eqnarray*}
since $\psi_0$ is chosen such that $\nu\cdot\Ga\psi_0=\psi_0$ and thus $\mathbb{B}_{\mathrm{CHI}}^-\big(\mathrm{G}^-_q(x)\psi_0\big)=0$. An easy calculation shows that for all $\phi\in\mathcal{H}^-$, we have
\begin{eqnarray*}
\int_{\mathrm{M}}\<\mathrm{G}^-_q(x)\psi_0,\mathrm{D}(\phi)\>dv(x)=\<\psi_0,\phi(q)\>,
\end{eqnarray*}
and so $\mathrm{G}^-_q(\,.\,)$ is a chiral Green function. Uniqueness follows from the hypothesis $\mathrm{Ker}^-(\mathrm{D}_g)=\{0\}$. In fact, if we assume that there exists another chiral Green function $\widetilde{\mathrm{G}}^-_q(\,.\,)$, then the spinor field $\mathrm{G}^-_q(\,.\,)\psi_0-\widetilde{\mathrm{G}}^-_q(\,.\,)\psi_0$ lies in the kernel of the Dirac operator (using the classical regularity theorems, see \cite{schwartz} for example). Since the Dirac operator (under the chiral bag boundary condition) is supposed to be invertible, this spinor field vanishes identically, and uniqueness follows.
\hfill$\square$\\
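Let us also indicate, heuristically, where the factor $2$ in the expansion (\ref{devfoncgreen}) comes from (the computation below is only a sketch, and the signs depend on the convention chosen for the Green formula). Integrating by parts on $\mathrm{M}\setminus\mathrm{B}^+_q(\delta)$, the boundary terms along $\pa\mathrm{M}$ vanish thanks to the boundary conditions satisfied by $\mathrm{G}^-_q$ and $\varphi$, so only the half-sphere $\mathrm{S}^+_q(\delta)$, of volume $\frac{\omega_{n-1}}{2}\delta^{n-1}$, contributes. Along $\mathrm{S}^+_q(\delta)$, the unit normal is $\frac{x-q}{\delta}$ and the leading term of $\mathrm{G}^-_q(x)\psi_0$ is $-\frac{2}{\omega_{n-1}}\frac{x-q}{\delta^n}\cdot\psi_0$, so that, using $(x-q)\cdot(x-q)=-\delta^2$:
\begin{eqnarray*}
\int_{\mathrm{S}^+_q(\delta)}\<\frac{x-q}{\delta}\cdot\Big(-\frac{2}{\omega_{n-1}}\Big)\frac{x-q}{\delta^n}\cdot\psi_0,\varphi\>ds\underset{\delta\rightarrow 0}{\longrightarrow}\frac{2}{\omega_{n-1}}\cdot\frac{\omega_{n-1}}{2}\<\psi_0,\varphi(q)\>=\<\psi_0,\varphi(q)\>.
\end{eqnarray*}
Since the singularity sits at a boundary point, only half of a sphere around $q$ contributes, and the factor $2$ exactly compensates this loss; for an interior singularity the whole sphere contributes and the normalization $\frac{1}{\omega_{n-1}}$ of Remark~\ref{remark1} suffices.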
In the following, the chiral Green function will be indifferently denoted by $\mathrm{G}_\mathrm{CHI}^\pm(\,.\,,q)$ or $\mathrm{G}^\pm_q(\,.\,)$.\\
We now look at the behaviour of the chiral Green function under a conformal change of metric. In fact, we prove the following result:
\begin{proposition}\label{Confgreen}
Let $\overline{g}=f^2g$ be a metric in the conformal class of $g$. If $\mathrm{G}_\mathrm{CHI}^\pm(\,.\,,q)$ (resp. $\overline{\mathrm{G}}_\mathrm{CHI}^\pm$) denote the chiral Green function for the Dirac operator $\mathrm{D}_g$ (resp. $\mathrm{D}_{\overline{g}}$), then:
\begin{eqnarray}\label{covgreen}
\overline{\mathrm{G}}_\mathrm{CHI}^\pm(\,.\,,q)=f^{-\frac{n-1}{2}}(\,.\,)\,f^{-\frac{n-1}{2}}(q)\,\overline{\mathrm{G}_\mathrm{CHI}^{\pm}}(\,.\,,q)
\end{eqnarray}
for $q\in\pa\mathrm{M}$.
\end{proposition}
{\it Proof:}
First recall that given two conformal metrics $g$ and $\overline{g}=f^2g$, there exists a canonical identification between the spinor bundle over $(\mathrm{M},g)$ and the one over $(\mathrm{M},\overline{g})$ (see \cite{hitchin:74} or \cite{hijazi:86}). It is given by the bundle isomorphism (which is a fiberwise isometry)
\begin{eqnarray}
\mathrm{F}:\Sigma_g(\mathrm{M})\longrightarrow\Sigma_{\overline{g}}(\mathrm{M})
\end{eqnarray}
such that the Dirac operators satisfy the relation
\begin{eqnarray}\label{covconf}
\mathrm{F}^{-1}\big(\mathrm{D}_{\overline{g}}(\mathrm{F}(\psi))\big)=f^{-\frac{n+1}{2}}\mathrm{D}_g(f^{\frac{n-1}{2}}\psi),
\end{eqnarray}
for all $\psi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$. On the other hand, since $\mathrm{G}_q^{-}$ is the chiral Green function for the Dirac operator, we have:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\mathrm{G}^-_q(x)\psi_0,\mathrm{D}_g\varphi\>dv(g)=\<\psi_0,\varphi(q)\>
\end{eqnarray*}
for all $\psi_0\in\Sigma_q(\mathrm{M})$ such that $\mathbb{B}_{\CHI}^{-}(\psi_0)=0$ and $\varphi\in\Ga\big(\Sigma_g(\mathrm{M})\big)$ such that $\mathbb{B}_{\CHI}^{-}(\varphi_{|\pa\mathrm{M}})=0$. We can now prove that the section defined by (\ref{covgreen}) is a chiral Green function for the Dirac operator $\mathrm{D}_{\overline{g}}$. For $\varphi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ such that $\mathbb{B}_{\CHI}^{-}(\varphi_{|\pa\mathrm{M}})=0$, we let $\Phi=f^{-\frac{n-1}{2}}\mathrm{F}(\varphi)\in\Gamma\big(\Sigma_{\overline{g}}(\mathrm{M})\big)$. This spinor field satisfies $\overline{\mathbb{B}}_\mathrm{CHI}^-(\Phi_{|\pa\mathrm{M}})=0$ because of the conformal covariance of the chiral bag boundary condition (see for example \cite{mathese}). We now write:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\overline{\mathrm{G}}^{-}_q(x)\mathrm{F}(\psi_0),\mathrm{D}_{\overline{g}}\Phi\>dv(\overline{g}) & = & \int_{\mathrm{M}}f^{-\frac{n-1}{2}}(x)\,f^{-\frac{n-1}{2}}(q)f^{-\frac{n+1}{2}}(x)\<\mathrm{G}^{-}_q(x)\psi_0,\mathrm{D}_g\varphi\>f^n dv(g)\\
& = & f^{-\frac{n-1}{2}}(q)\int_{\mathrm{M}}\<\mathrm{G}^{-}_q(x)\psi_0,\mathrm{D}_g\varphi\>dv(g)\\
& = & \<\psi_0,f^{-\frac{n-1}{2}}(q)\varphi(q)\>
\end{eqnarray*}
that is:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\overline{\mathrm{G}}^{-}_q(x)\mathrm{F}(\psi_0),\mathrm{D}_{\overline{g}}(\Phi)\>dv(\overline{g}) & = & \<\mathrm{F}(\psi_0),\Phi(q)\>
\end{eqnarray*}
for all $\mathrm{F}(\psi_0)\in\Sigma_q(\mathrm{M})$ such that $\overline{\mathbb{B}}_\mathrm{CHI}^-\big(\mathrm{F}(\psi_0)\big)=0$ and all $\Phi\in\Gamma\big(\Sigma_{\overline{g}}(\mathrm{M})\big)$ such that $\overline{\mathbb{B}}_\mathrm{CHI}^-(\Phi_{|\pa\mathrm{M}})=0$. We have thus checked that $\overline{\mathrm{G}}^{-}_q(\,.\,)$ is a chiral Green function. Uniqueness follows directly.
\hfill$\square$
\begin{remark}
Propositions \ref{Devfoncgreen} and \ref{Confgreen} imply that the chiral Green function exists on locally conformally flat manifolds with umbilic boundary on which the Dirac operator is invertible under the chiral bag boundary condition.
\end{remark}
We now prove a symmetry result for the chiral Green function, reflecting the self-adjointness of the Dirac operator under the chiral bag boundary condition, which will be very important in what follows, in particular for the application given in Section~\ref{apl1}.
\begin{proposition}\label{greensymetry}
For $(x,y)\in\mathrm{M}\times\mathrm{M}\setminus\Delta$, we have:
\begin{eqnarray}\label{greensym}
\big(\mathrm{G}_\mathrm{CHI}^{\pm}\big)^*(x,y)= \mathrm{G}_\mathrm{CHI}^{\pm}(y,x).
\end{eqnarray}
In fact, if $\psi_0^\pm\in\Sigma_x(\mathrm{M})$ (resp. $\varphi_0^\pm\in\Sigma_y(\mathrm{M})$) such that $\nu\cdot\Ga\psi_0^\pm=\mp\psi_0^\pm$ for $x\in\pa\mathrm{M}$ (resp. $\nu\cdot\Ga\varphi_0^\pm=\mp\varphi_0^\pm$ for $y\in\pa\mathrm{M}$), then:
\begin{eqnarray*}
\<\mathrm{G}_\mathrm{CHI}^{\pm}(x,y)\varphi_0^\pm,\psi_0^\pm\>_{\Sigma_x(\mathrm{M})}=\<\varphi_0^\pm,\mathrm{G}^{\pm}_\mathrm{CHI}(y,x)\psi_0^\pm\>_{\Sigma_y(\mathrm{M})}.
\end{eqnarray*}
\end{proposition}
{\it Proof:}
We use the fact that there exists a spectral resolution of the space of $\mathrm{L}^2$-spinors, i.e. for all $\psi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ there exists $(\mathrm{A}_k)_{k\in\mathbb{Z}}\subset\mathbb{C}$ such that $\psi=\sum_{k\in\mathbb{Z}}\mathrm{A}_k\varphi_k$, where $\varphi_k$ is a smooth spinor field satisfying
$$\left\lbrace
\begin{array}{ll}
\mathrm{D}_g\varphi_k=\lambda_k(g)\varphi_k & \quad\textrm{on}\;\mathrm{M}\\
\mathbb{B}_{\CHI}^{-}(\varphi_{k\,|\pa\mathrm{M}})=0 & \quad\textrm{along}\;\pa\mathrm{M}.
\end{array}
\right.$$
First note that:
\begin{eqnarray*}
\int_{\mathrm{M}\times\mathrm{M}\setminus\Delta}\<\mathrm{G}_\mathrm{CHI}^-(x,y)\varphi_i,\varphi_j\>dv(x)dv(y) & = & \int_{\mathrm{M}}\Big(\int_{\mathrm{M}}\<\mathrm{G}_\mathrm{CHI}^-(x,y)\varphi_i,\varphi_j\>dv(x)\Big)dv(y)\\
& = & \int_{\mathrm{M}}\Big(\int_{\mathrm{M}}\<\mathrm{G}_\mathrm{CHI}^-(x,y)\varphi_i,\frac{1}{\lambda_j}\mathrm{D}_g\varphi_j\>dv(x)\Big)dv(y)\\
& = & \frac{1}{\lambda_j}\int_{\mathrm{M}}\<\varphi_i,\varphi_j\>dv(y)\\
& = & \frac{1}{\lambda_j}\delta_{ij},
\end{eqnarray*}
since $\mathrm{D}_g$ is assumed to be invertible and thus $\lambda_j\neq 0$ for all $j$. In the same way, we easily show that:
\begin{eqnarray*}
\int_{\mathrm{M}\times\mathrm{M}\setminus\Delta}\<\varphi_i,\mathrm{G}_\mathrm{CHI}^-(y,x)\varphi_j\>dv(x)dv(y) & = & \frac{1}{\lambda_i}\delta_{ij}.
\end{eqnarray*}
We thus have proved that for all $(i,j)\in\mathbb{Z}^2$:
\begin{eqnarray*}
\int_{\mathrm{M}\times\mathrm{M}\setminus\Delta}\<\mathrm{G}_\mathrm{CHI}^-(x,y)\varphi_i,\varphi_j\>dv(x)dv(y)=\int_{\mathrm{M}\times\mathrm{M}\setminus\Delta}\<\varphi_i,\mathrm{G}_\mathrm{CHI}^-(y,x)\varphi_j\>dv(x)dv(y).
\end{eqnarray*}
By linearity, we have:
\begin{eqnarray*}
\int_{\mathrm{M}\times\mathrm{M}\setminus\Delta}\<\mathrm{G}_\mathrm{CHI}^-(x,y)\varphi,\psi\>dv(x)dv(y)=\int_{\mathrm{M}\times\mathrm{M}\setminus\Delta}\<\varphi,\mathrm{G}_\mathrm{CHI}^-(y,x)\psi\>dv(x)dv(y),
\end{eqnarray*}
for all $\psi$, $\varphi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ such that $\mathbb{B}_{\CHI}^-(\psi_{|\pa\mathrm{M}})=\mathbb{B}_{\CHI}^-(\varphi_{|\pa\mathrm{M}})=0$. Now fix $x_0\in\mathrm{M}$ and let $f_{\varepsilon}:\mathrm{M}\rightarrow\mathbb{R}$ be a family of functions such that $f_\varepsilon\rightarrow\delta_{x_0}$ in the sense of distributions, where $\delta_{x_0}$ is the Dirac distribution at $x_0$. If $\psi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ satisfies $\mathbb{B}_{\CHI}^-(\psi_{|\pa\mathrm{M}})=0$ and $\psi(x_0)=\psi_0$, then applying the previous identity with $f_\varepsilon\psi$ in place of $\psi$ and letting $\varepsilon\rightarrow 0$, we obtain:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\varphi(y),\mathrm{G}_\mathrm{CHI}^-(y,x_0)\psi_0\>dv(y)=\int_{\mathrm{M}}\<\mathrm{G}_\mathrm{CHI}^-(x_0,y)\varphi(y),\psi_0\>dv(y).
\end{eqnarray*}
Applying the same method in the $y$-variable, with $y_0\neq x_0$, leads to:
\begin{eqnarray*}
\<\varphi_0,\mathrm{G}_\mathrm{CHI}^-(y_0,x_0)\psi_0\>_{\Sigma_{y_{0}}(\mathrm{M})}=\<\mathrm{G}_\mathrm{CHI}^-(x_0,y_0)\varphi_0,\psi_0\>_{\Sigma_{x_{0}}(\mathrm{M})},
\end{eqnarray*}
and this concludes the proof.
\hfill$\square$\\
We now show that the chiral Green functions $\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)$ and $\mathrm{G}_\mathrm{CHI}^+(\,.\,,q)$ are related by the action of the chirality operator. In fact, we prove the following result:
\begin{proposition}\label{posneg}
Let $(\mathrm{M}^n,g)$ be a connected compact Riemannian spin manifold with a boundary point $q\in\pa\mathrm{M}$ which has a locally conformally flat with umbilic boundary neighbourhood. Then we get:
\begin{eqnarray}
\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)=-\Ga\mathrm{G}_\mathrm{CHI}^+(\,.\,,q)\Ga.
\end{eqnarray}
\end{proposition}
{\it Proof:}
Without loss of generality, we can assume that the metric $g$ is such that a neighbourhood of $q\in\pa\mathrm{M}$ is isometric to an open set of the Euclidean half-space. In this chart, the chiral Green function $\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)$ then admits the expansion (\ref{devfoncgreen}). Now define the section $\widetilde{\mathrm{G}}(\,.\,,q)=-\Ga\mathrm{G}_\mathrm{CHI}^+(\,.\,,q)\Ga$ on $\mathrm{M}\setminus\{q\}$. First note that one easily checks that $\widetilde{\mathrm{G}}(\,.\,,q)\in\mathcal{H}^-$. Then a direct computation shows that the spinor field given by
\begin{eqnarray*}
\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)\psi_0-\widetilde{\mathrm{G}}(\,.\,,q)\psi_0
\end{eqnarray*}
for $\psi_0$ such that $\mathbb{B}_{\CHI}^-(\psi_0)=0$, is harmonic. Since the Dirac operator is supposed to be invertible under the chiral bag boundary condition, we conclude that $\widetilde{\mathrm{G}}(\,.\,,q)\psi_0=\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)\psi_0$ by uniqueness of the chiral Green function.
\hfill$\square$\\
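In the proof above, both the fact that $\widetilde{\mathrm{G}}(\,.\,,q)$ takes its values in $\mathcal{H}^-$ and the harmonicity of the difference rely on the anticommutation of the Dirac operator with the chirality operator, which follows directly from the properties (\ref{poc}):
\begin{eqnarray*}
\mathrm{D}_g(\Ga\psi)=\sum_{i=1}^n e_i\cdot\na_{e_i}(\Ga\psi)=\sum_{i=1}^n e_i\cdot\Ga(\na_{e_i}\psi)=-\Ga\Big(\sum_{i=1}^n e_i\cdot\na_{e_i}\psi\Big)=-\Ga\,\mathrm{D}_g\psi,
\end{eqnarray*}
for all $\psi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$.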
We are now ready to define the chiral mass operator (see \cite{amm4} for the case of manifolds without boundary). The name of this operator comes from its close relation with the concept of mass in General Relativity, which also appears in the context of the Yamabe problem (see \cite{schoen:84} or \cite{lee.parker:87} for the closed case and \cite{escobar:92} for the boundary case).
\begin{definition}
Let $(\mathrm{M}^n,g,\sigma)$ be a connected compact Riemannian spin manifold with non-empty smooth boundary $\pa\mathrm{M}$. Suppose that there exists a boundary point $q\in\pa\mathrm{M}$ which has a locally flat with umbilic boundary neighbourhood. The chiral mass operator is then defined by:
$$\begin{array}{rll}
\mathrm{m}_\mathrm{CHI}^\pm(q):\mathrm{V}^\pm_q & \longrightarrow & \mathrm{V}^\pm_q \\
\psi_0^\pm & \longmapsto & \mathrm{m}^\pm_\mathrm{CHI}(q,q)\psi_0^\pm
\end{array}$$
where $\mathrm{m}^\pm_\mathrm{CHI}(\,.\,,q)\psi_0^\pm$ is given in Proposition~\ref{Devfoncgreen} and $\mathrm{V}^\pm_q=\frac{1}{2}\big(\mathrm{Id}\pm\nu\cdot\Ga\big)\big(\Sigma_q(\mathrm{M})\big)$.
\end{definition}
Thanks to Proposition~\ref{greensymetry}, we can deduce the following result, which will be very useful in the next section.
\begin{proposition}
For a point $q\in\pa\mathrm{M}$ with a locally flat with umbilic boundary neighbourhood, the chiral mass operator $\mathrm{m}_\mathrm{CHI}^\pm(q)$ is linear and symmetric.
\end{proposition}
{\it Proof:}
By construction, the chiral Green function acts linearly on spinors, and it clearly follows that the chiral mass operator is also linear. The symmetry of the chiral mass operator follows from the symmetry of the chiral Green function proved in Proposition~\ref{greensymetry}.
\hfill$\square$\\
\begin{remark}
In the following, we will refer to the ``negative chiral mass operator'' (resp. ``positive chiral mass operator'') for $\mathrm{m}_\mathrm{CHI}^-(q)$ (resp. $\mathrm{m}_\mathrm{CHI}^+(q)$).
\end{remark}
As a direct consequence, we obtain:
\begin{corollary}\label{spectrum}
For a point $q\in\pa\mathrm{M}$ with a locally flat with umbilic boundary neighbourhood, the (pointwise) spectrum of the chiral mass operator $\mathrm{m}_\mathrm{CHI}^\pm(q)$ is real. Moreover, if $\kappa$ is an eigenvalue for the negative chiral mass operator, then $-\kappa$ is an eigenvalue for the positive chiral mass operator.
\end{corollary}
{\it Proof:}
Using Proposition~\ref{posneg}, we can easily check that the positive and the negative chiral mass operators satisfy the relation:
\begin{eqnarray*}
\Ga\mathrm{m}_\mathrm{CHI}^-(q)=-\mathrm{m}_\mathrm{CHI}^+(q)\Ga.
\end{eqnarray*}
So consider an eigenspinor $\psi_0\in\mathrm{V}^-_q$ for the negative chiral mass operator associated with the eigenvalue $\kappa$, i.e. $\mathrm{m}_\mathrm{CHI}^-(q)\psi_0=\kappa\psi_0$ and $\nu\cdot\Ga\psi_0=\psi_0$. Using the preceding formula and the fact that $\nu\cdot\Ga\big(\Ga\psi_0\big)=-\Ga\psi_0$, we observe that $\mathrm{m}_\mathrm{CHI}^+(q)(\Ga\psi_0)=-\kappa\Ga\psi_0$ and thus $-\kappa$ is an eigenvalue for the positive chiral mass operator.
\hfill$\square$\\
In the next section, we give a direct application of the construction of the chiral Green function and the chiral mass operator.
\subsection{Application: The chiral bag invariant}\label{apl1}
This part is devoted to a direct application of the preceding construction of the chiral Green function for the Dirac operator. The application concerns a spin conformal invariant on manifolds with boundary introduced in \cite{sr3}. We begin with a brief introduction to this invariant. We have seen in Section~\ref{definition1} (see \cite{hijazi.montiel.roldan:01} for more details) that the spectrum of the Dirac operator under the chiral bag boundary condition consists entirely of isolated real eigenvalues with finite multiplicity. If we denote by $\lambda_1^\pm(g)$ the first eigenvalue of the Dirac operator $\mathrm{D}_g$ under the boundary condition $\mathbb{B}_{\CHI}^\pm$, then the chiral bag invariant is defined by:
\begin{eqnarray}\label{cbi}
\lambda_{\min}^{\pm}(\mathrm{M},\pa\mathrm{M}):=\underset{\overline{g}\in[g]}{\inf}|\lambda^{\pm}_1(\overline{g})|\Vol(\mathrm{M},\overline{g})^{\frac{1}{n}},
\end{eqnarray}
where $[g]$ denotes the conformal class of $g$ and $\Vol(\mathrm{M},\overline{g})$ is the volume of the manifold $\mathrm{M}$ equipped with the Riemannian metric $\overline{g}\in[g]$. A formula which will be very useful in what follows is the variational characterization of the chiral bag invariant. In fact, it is shown in \cite{sr3} (see also \cite{ammann}) that:
\begin{eqnarray}\label{charvar}
\lambda_{\min}^\pm(\mathrm{M},\pa\mathrm{M})=
\underset{\varphi\in\mathcal{C}^\pm_g}{\inf}\Big\{\frac{\big(\int_{\mathrm{M}}|\mathrm{D}_g\varphi|^{\frac{2n}{n+1}}dv(g)\big)^{\frac{n+1}{n}}}{\big|\int_{\mathrm{M}}\mathrm{Re}\<\mathrm{D}_g\varphi,\varphi\>dv(g)\big|}\Big\},
\end{eqnarray}
where $\mathcal{C}^\pm_g$ is the $\mathrm{L}^2$-orthogonal complement of $\mathrm{Ker}^{\pm}(\mathrm{D}_g)$ in $\mathcal{H}^\pm_g$ and $\mathrm{D}_g$ is the Dirac operator with respect to the metric $g$.
\begin{remark}
\begin{enumerate}
\item The above definition seems to depend on the chosen boundary condition $\mathbb{B}^{+}_{\CHI}$ or $\mathbb{B}^{-}_{\CHI}$; however, it does not (see \cite{sr3}). Taking this fact into account, we will denote by $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$ the chiral bag invariant in what follows and we will use the $\mathbb{B}^-_{\CHI}$ condition.
\item Using the Hijazi inequality proved in \cite{sr}, we can compare the chiral bag invariant with the Yamabe invariant $\mu(\mathrm{M},\pa\mathrm{M})$ of the manifold (see \cite{escobar:92}). In fact, if $n\geq 3$, we have:
\begin{eqnarray}\label{hijbord}
\lambda_{\mathrm{min}}(\mathrm{M},\pa\mathrm{M})^2\geq\frac{n}{4(n-1)}\mu(\mathrm{M},\pa\mathrm{M}).
\end{eqnarray}
\item In \cite{sr3}, we proved a spinorial analogue of Aubin's inequality \cite{aubin:76} (of Escobar's inequality \cite{escobar:92} in the non-empty boundary case). More precisely, we showed that if $n\geq 2$:
\begin{eqnarray}\label{largeb1}
\lambda_{\min}(\mathrm{M},\pa\mathrm{M})\leq\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+)=\frac{n}{2}\Big(\frac{\omega_n}{2}\Big)^{\frac{1}{n}}.
\end{eqnarray}
\noindent This inequality is the analogue of the one obtained by Ammann, Humbert and Morel \cite{amm3} in the boundaryless case.
\end{enumerate}
\end{remark}
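The explicit value on the right-hand side of (\ref{largeb1}) can be recovered directly from the definition~(\ref{cbi}) applied to the round hemisphere: the first eigenvalue of the Dirac operator on $\mathbb{S}^n_+$ under the chiral bag boundary condition equals $\frac{n}{2}$ (it is realized by restrictions of real Killing spinors of the round sphere which satisfy this boundary condition) and $\Vol(\mathbb{S}^n_+)=\frac{\omega_n}{2}$, so that the round metric $g_{\mathrm{can}}$ contributes
\begin{eqnarray*}
|\lambda_1(g_{\mathrm{can}})|\,\Vol(\mathbb{S}^n_+,g_{\mathrm{can}})^{\frac{1}{n}}=\frac{n}{2}\Big(\frac{\omega_n}{2}\Big)^{\frac{1}{n}}
\end{eqnarray*}
to the infimum; the content of the equality in (\ref{largeb1}) is that no metric in the conformal class does better.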
A natural question is to find a sufficient condition under which Inequality~(\ref{largeb1}) is strict. Keeping in mind the works of Schoen \cite{schoen:84}, Escobar \cite{escobar:92} and Ammann, Humbert and Morel \cite{amm4}, we are going to see that the construction of the chiral mass operator provides an answer for a certain class of manifolds. More precisely, we prove the following result:
\begin{theorem}\label{lcfm}
Let $(\mathrm{M}^n,g,\sigma)$ be an $n$-dimensional $(n\geq 2)$ connected compact Riemannian spin manifold with non-empty smooth boundary $\pa\mathrm{M}$. Suppose that there exists a point $q\in\pa\mathrm{M}$ which has a locally conformally flat with umbilic boundary neighbourhood. Moreover, assume that the Dirac operator $\mathrm{D}_g$ is invertible on $\mathcal{H}^-$ (or on $\mathcal{H}^+$) and that the negative chiral mass operator $\mathrm{m}_\mathrm{CHI}^-(q)$ (or the positive chiral mass operator $\mathrm{m}_\mathrm{CHI}^+(q)$) is not identically zero. Then we get:
\begin{eqnarray*}
\lambda_{\min}(\mathrm{M},\pa\mathrm{M})<\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+).
\end{eqnarray*}
\end{theorem}
The proof of this theorem is in the same spirit as that of Inequality~(\ref{largeb1}). Indeed, the idea is to construct a test-spinor to be estimated in the variational characterization~(\ref{charvar}) of $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$. In \cite{sr3}, the test-spinor was constructed from a Killing spinor on the hemisphere satisfying the chiral bag boundary condition, with support contained in an open set of a trivialization around a boundary point. In order to prove Theorem~\ref{lcfm}, it is not enough to extend the test-spinor by zero away from the trivialization chart. In fact, the right extension is given by the chiral Green function. More precisely, we show:
\begin{theorem}\label{cop}
Let $(\mathrm{M},g,\sigma)$ be an $n$-dimensional connected compact Riemannian spin manifold ($n\geq 2$) with non-empty boundary $\pa\mathrm{M}$. Suppose that there exists a point $q\in\pa\mathrm{M}$ which has a locally conformally flat with umbilic boundary neighbourhood. Assume also that there exists on $\mathrm{M}\setminus\{q\}$ a spinor field $\psi^\pm$ such that:
\begin{equation}\label{hyp1}
\left\lbrace
\begin{array}{ll}
\mathrm{D}_g\psi^\pm=0 & \quad\text{on}\;\mathrm{M}\setminus\{q\}\\ \\
\mathbb{B}_{\CHI}^\pm(\psi^\pm_{|\pa\mathrm{M}})=0 & \quad\text{along}\;\pa\mathrm{M}
\end{array}
\right.
\end{equation}
which admits the following development around $q$:
\begin{eqnarray}\label{hyp2}
\psi^\pm=\frac{x}{r^n}\cdot\psi_0^\pm+\psi_1^\pm+\theta^\pm,
\end{eqnarray}
where $\psi_0^\pm$, $\psi_1^\pm\in\Sigma_q\mathrm{M}$ are spinors satisfying $\nu\cdot\Ga\psi_0^\pm=\mp\psi_0^\pm$, $\nu\cdot\Ga\psi_1^\pm=\mp\psi_1^\pm$ and such that:
\begin{eqnarray}\label{hyp3}
\mathrm{Re}\,\<\psi_0^\pm,\psi_1^\pm\><0\qquad\text{and}\qquad\mathrm{Re}\,\<x\cdot\psi_0^\pm,\psi_1^\pm\>=0.
\end{eqnarray}
We also assume that $\theta^\pm$ is a smooth spinor field on all of $\mathrm{M}$ satisfying $\theta^\pm=O(r)$, $\mathbb{B}_{\CHI}^\pm(\theta^\pm_{|\pa\mathrm{M}})=0$ and which is harmonic around $q$. Under these hypotheses, we get:
\begin{eqnarray*}
\lambda_{\min}(\mathrm{M},\pa\mathrm{M})<\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+)=\frac{n}{2}\Big(\frac{\omega_n}{2}\Big)^{\frac{1}{n}}.
\end{eqnarray*}
\end{theorem}
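Before giving the proof, let us recall where the value on the right-hand side comes from. With $f(r)=\frac{1}{1+r^2}$, the inverse of the stereographic projection of $\mathbb{S}^n$ onto $\mathbb{R}^n$ has volume element $\big(\frac{2}{1+r^2}\big)^n dx$, so that:
\begin{eqnarray*}
\int_{\mathbb{R}^n_+}f^n dx=\frac{1}{2}\int_{\mathbb{R}^n}\frac{dx}{(1+r^2)^n}=\frac{\omega_n}{2^{n+1}}\qquad\text{and hence}\qquad n\Big(\int_{\mathbb{R}^n_+}f^n dx\Big)^{\frac{1}{n}}=\frac{n}{2}\Big(\frac{\omega_n}{2}\Big)^{\frac{1}{n}}.
\end{eqnarray*}
This identity will be used to recognize $\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+)$ at the end of the proof.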
{\it Proof:}
For $\varepsilon>0$, we let $\rho:=\varepsilon^{\frac{1}{n+1}}$ and $\varepsilon_0:=\frac{\rho^n}{\varepsilon}f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}$ with $f(r)=\frac{1}{1+r^2}$ and we consider the spinor field defined by:
\begin{equation}
\psi_{\varepsilon}^\pm=
\left\lbrace
\begin{array}{ll}
f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}}\big(1-\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm-\varepsilon_0\psi_1^\pm \quad & \quad \textrm{if }r\leq \rho\\ \\
-\varepsilon_0\big(\psi^\pm-\eta\theta^\pm\big)+\eta f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\psi_0^\pm\quad & \quad\textrm{if } \rho\leq r\leq 2\rho\\ \\
\varepsilon_0\psi^\pm \quad & \quad\textrm{if }2\rho\leq r
\end{array}
\right.
\end{equation}
where $r=d(x,q)$ and $\eta$ is a cut-off function equal to zero on $\mathrm{M}\setminus\mathrm{B}^+_q(2\rho)$, $1$ on $\mathrm{B}^+_q(\rho)$ and satisfying $|\na\eta|\leq\frac{2}{\rho}$. We first check that $\mathbb{B}_{\CHI}^\pm(\psi^\pm_{\varepsilon|\pa\mathrm{M}})=0$. For $r\leq \rho$, we have:
\begin{eqnarray*}
\mathbb{B}_{\CHI}^\pm(\psi^\pm_{\varepsilon|\pa\mathrm{M}}) & = & \mathbb{B}_{\CHI}^\pm\Big(\big(f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}}\big(1-\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm-\varepsilon_0\psi^\pm_{1}\big)_{|\pa\mathrm{M}}\Big)\\ \\
& = & f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}}\big(1-\frac{x}{\varepsilon}\big)\cdot\,\mathbb{B}_{\CHI}^\pm(\psi_{0\,|\pa\mathrm{M}}^\pm)-\varepsilon_0\,\mathbb{B}_{\CHI}^\pm(\psi_{1\,|\pa\mathrm{M}}^\pm)=0
\end{eqnarray*}
since $\mathbb{B}_{\CHI}^\pm(\psi_{0\,|\pa\mathrm{M}}^\pm)=\mathbb{B}_{\CHI}^\pm(\psi_{1\,|\pa\mathrm{M}}^\pm)=0$. In the same way, since $\psi^\pm$ and $\theta^\pm$ satisfy $\mathbb{B}_{\CHI}^\pm(\psi_{|\pa\mathrm{M}}^\pm)=\mathbb{B}_{\CHI}^\pm(\theta_{|\pa\mathrm{M}}^\pm)=0$, we easily check that for $\rho\leq r\leq 2\rho$ or $r\geq 2\rho$, we also have $\mathbb{B}_{\CHI}^\pm(\psi^\pm_{\varepsilon|\pa\mathrm{M}})=0$. Without loss of generality, we can assume that $|\psi_0^\pm|^2=1$. Since $\psi^\pm$ and $\theta^\pm$ are harmonic around $q$, we compute that:
\begin{equation}
\mathrm{D}_g\psi_{\varepsilon}^\pm=
\left\lbrace
\begin{array}{ll}
\frac{n}{\varepsilon}f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}\big(1-\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm\quad & \quad \textrm{if }r\leq \rho\\ \\
\varepsilon_0\na\eta\cdot\theta^\pm+f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\na\eta\cdot\psi_0^\pm\quad & \quad\textrm{if } \rho\leq r\leq 2\rho\\ \\
0\quad & \quad\textrm{if }2\rho\leq r.
\end{array}
\right.
\end{equation}
In order to obtain an estimate of the numerator of the variational characterization~(\ref{charvar}) of $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$, one can check that:
\begin{equation}
|\mathrm{D}_g\psi^\pm_{\varepsilon}|^{\frac{2n}{n+1}}=
\left\lbrace
\begin{array}{ll}
n^{\frac{2n}{n+1}}\varepsilon^{-\frac{2n}{n+1}}f\big(\frac{r}{\varepsilon}\big)^{n}\quad & \quad \textrm{if }r\leq \rho\\ \\
|\varepsilon_0\na\eta\cdot\theta^\pm+f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\na\eta\cdot\psi_0^\pm|^{\frac{2n}{n+1}}\quad & \quad\textrm{if } \rho\leq r\leq 2\rho\\ \\
0\quad & \quad\textrm{if }2\rho\leq r.
\end{array}
\right.
\end{equation}
Hence we have:
\begin{eqnarray*}
\int_{\mathrm{B}^+_q(\rho)}|\mathrm{D}_g\psi^\pm_{\varepsilon}|^{\frac{2n}{n+1}}dx=\varepsilon^{n-\frac{2n}{n+1}}n^{\frac{2n}{n+1}}\int_{\mathrm{B}^+_q(\frac{\rho}{\varepsilon})}f^n dx\leq\varepsilon^{n-\frac{2n}{n+1}}n^{\frac{2n}{n+1}}\int_{\mathbb{R}^n_+} f^ndx
\end{eqnarray*}
and:
\begin{eqnarray*}
\int_{\mathrm{B}^+_q(2\rho)\setminus\mathrm{B}^+_q(\rho)}|\mathrm{D}_g\psi^\pm_{\varepsilon}|^{\frac{2n}{n+1}}dx\leq C\varepsilon^{\frac{n(2n-1)}{n+1}}
+C\varepsilon^{\frac{n(3n-1)}{n+1}}\leq C\varepsilon^{\frac{n(2n-1)}{n+1}}.
\end{eqnarray*}
These estimates lead to:
\begin{eqnarray*}
\Big(\int_\mathrm{M}|\mathrm{D}_g\psi^\pm_{\varepsilon}|^{\frac{2n}{n+1}}dv(g)\Big)^{\frac{n+1}{n}}\leq\varepsilon^{n-1}n^2\mathrm{I}^{1+\frac{1}{n}}\Big(1+C\varepsilon^{\frac{n^2}{n+1}}\Big)=\varepsilon^{n-1}n^2\mathrm{I}^{1+\frac{1}{n}}\big(1+o(\varepsilon^{n-1})\big)
\end{eqnarray*}
where $\mathrm{I}=\int_{\mathbb{R}^n_+}f^ndx$. In what follows, we let $\kappa=\mathrm{Re}\,\<\psi_0^\pm,\psi_1^\pm\>$ and we focus on an estimate of the denominator of the variational characterization~(\ref{charvar}) of $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$. We write:
\begin{eqnarray*}
\mathrm{Re}\<\mathrm{D}_g\psi^\pm_{\varepsilon},\psi^\pm_{\varepsilon}\>_{|\mathrm{B}^+_q(\rho)} & = & \mathrm{Re}\<\frac{n}{\varepsilon}f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}\big(1-\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm,f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}}\big(1-\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm-\varepsilon_0\psi_1^\pm\>\\
& = & \frac{n}{\varepsilon}f\big(\frac{r}{\varepsilon}\big)^n-\frac{n}{\varepsilon}\varepsilon_0\kappa\, f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}+\frac{n}{\varepsilon}\varepsilon_0 f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}\mathrm{Re}\<\big(\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm,\psi_1^\pm\>.
\end{eqnarray*}
Integrating on $\mathrm{B}^+_q(\rho)$ gives:
\begin{eqnarray*}
\int_{\mathrm{B}^+_q(\rho)} \mathrm{Re}\<\mathrm{D}_g\psi^\pm_{\varepsilon},\psi^\pm_{\varepsilon}\> dx& = & \frac{n}{\varepsilon}\int_{\mathrm{B}^+_q(\rho)}f\big(\frac{r}{\varepsilon}\big)^n dx-\frac{n}{\varepsilon}\varepsilon_0\kappa\int_{\mathrm{B}^+_q(\rho)}f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}dx\\
& & +\frac{n}{\varepsilon}\varepsilon_0\int_{\mathrm{B}^+_q(\rho)}f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}\mathrm{Re}\<\big(\frac{x}{\varepsilon}\big)\cdot\psi_0^\pm,\psi_1^\pm\>dx\\
& = & n\varepsilon^{n-1}\Big(\int_{\mathrm{B}^+_q(\frac{\rho}{\varepsilon})}f(r)^n dx-\varepsilon_0\kappa\int_{\mathrm{B}^+_q(\frac{\rho}{\varepsilon})}f(r)^{\frac{n}{2}+1}dx+\mathrm{A}_{\varepsilon}\Big)
\end{eqnarray*}
where:
\begin{eqnarray*}
\mathrm{A}_\varepsilon=\varepsilon_0\varepsilon^{-n}\int_{\mathrm{B}^+_q(\rho)}f\big(\frac{r}{\varepsilon}\big)^{\frac{n}{2}+1}\mathrm{Re}\<\frac{x}{\varepsilon}\cdot\psi_0^\pm,\psi_1^\pm\>dx.
\end{eqnarray*}
However, $\mathrm{A}_\varepsilon=0$ since we assumed that:
\begin{eqnarray*}
\mathrm{Re}\,\<\frac{x}{\varepsilon}\cdot\psi_0^\pm,\psi_1^\pm\>=0.
\end{eqnarray*}
Moreover an easy computation leads to:
\begin{eqnarray*}
\int_{\mathrm{B}^+_q(\frac{\rho}{\varepsilon})}f(r)^n dx=\mathrm{I}+O(\varepsilon^{\frac{n^2}{n+1}})
\end{eqnarray*}
and since $\varepsilon_0\sim\varepsilon^{n-1}$ when $\varepsilon\rightarrow 0$, we find:
\begin{eqnarray*}
\int_{\mathrm{B}^+_q(\rho)}\mathrm{Re}\<\mathrm{D}_g\psi^\pm_{\varepsilon},\psi^\pm_{\varepsilon}\> dx\geq n\varepsilon^{n-1}\Big(\mathrm{I}-\mathrm{C}_0\kappa\varepsilon^{n-1}+o(\varepsilon^{n-1})\Big)
\end{eqnarray*}
where $\mathrm{C}_0=\int_{\mathbb{R}^n_+}f(r)^{\frac{n}{2}+1}dx$. On the other hand, we compute:
\begin{eqnarray*}
\mathrm{Re}\<\mathrm{D}_g\psi_{\varepsilon}^\pm,\psi_{\varepsilon}^\pm\>_{|\mathrm{B}^+_q(2\rho)\setminus\mathrm{B}^+_q(\rho)} & = & -\mathrm{Re}\<\varepsilon_0\na\eta\cdot\theta^\pm,\varepsilon_0(\psi^\pm-\eta\theta^\pm)+\eta f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\psi_0^\pm\>\\
& & +\mathrm{Re}\,\<f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\na\eta\cdot\psi_0^\pm,\varepsilon_0(\psi^\pm-\eta\theta^\pm)+\eta f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\psi_0^\pm\>\\
& = & -\mathrm{Re}\<\varepsilon_0\na\eta\cdot\theta^\pm+f\big(\frac{\rho}{\varepsilon}\big)^{\frac{n}{2}}\na\eta\cdot\psi_0^\pm,\varepsilon_0\psi^\pm\>
\end{eqnarray*}
since $\mathrm{Re}\<\na\eta\cdot\theta^\pm,\theta^\pm\>=0$, $\mathrm{Re}\<\na\eta\cdot\psi_0^\pm,\psi_0^\pm\>=0$ and
\begin{eqnarray*}
\mathrm{Re}\<\na\eta\cdot\psi_0^\pm,\theta^\pm\>=-\mathrm{Re}\<\na\eta\cdot\theta^\pm,\psi_0^\pm\>.
\end{eqnarray*}
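These identities are consequences of the skew-symmetry of Clifford multiplication with respect to the Hermitian metric: for any real vector field $X$ and any spinor fields $\varphi$, $\phi$, one has $\<X\cdot\varphi,\phi\>=-\<\varphi,X\cdot\phi\>$, so that in particular:
\begin{eqnarray*}
\<X\cdot\varphi,\varphi\>=-\<\varphi,X\cdot\varphi\>=-\overline{\<X\cdot\varphi,\varphi\>}
\end{eqnarray*}
is purely imaginary, and hence $\mathrm{Re}\,\<X\cdot\varphi,\varphi\>=0$ (here applied with $X=\na\eta$).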
This leads to the following estimate:
\begin{eqnarray*}
\big|\mathrm{Re}\<\mathrm{D}_g\psi_{\varepsilon}^\pm,\psi_{\varepsilon}^\pm\>_{|\mathrm{B}^+_q(2\rho)\setminus\mathrm{B}^+_q(\rho)}\big| \leq C\varepsilon^{2n-2}\rho^{1-n}
\end{eqnarray*}
and thus we have:
\begin{eqnarray*}
\int_{\mathrm{B}^+_q(2\rho)\setminus\mathrm{B}^+_q(\rho)}\mathrm{Re}\<\mathrm{D}_g\psi^\pm_{\varepsilon},\psi^\pm_{\varepsilon}\>dx=o(\varepsilon^{2(n-1)}).
\end{eqnarray*}
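Indeed, the annulus $\mathrm{B}^+_q(2\rho)\setminus\mathrm{B}^+_q(\rho)$ has volume of order $\rho^n$, so that the pointwise bound of order $\varepsilon^{2n-2}\rho^{1-n}$ integrates to:
\begin{eqnarray*}
C\varepsilon^{2n-2}\rho^{1-n}\cdot\rho^n=C\varepsilon^{2(n-1)}\rho=o(\varepsilon^{2(n-1)})
\end{eqnarray*}
since $\rho=\varepsilon^{\frac{1}{n+1}}\rightarrow 0$ when $\varepsilon\rightarrow 0$.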
Now using the fact that $\mathrm{Re}\<\mathrm{D}_g\psi_{\varepsilon}^\pm,\psi_{\varepsilon}^\pm\>_{|\mathrm{M}\setminus\mathrm{B}^+_q(2\rho)}=0$, we get:
\begin{eqnarray*}
\int_{\mathrm{M}}\mathrm{Re}\<\mathrm{D}_g\psi_{\varepsilon}^\pm,\psi_{\varepsilon}^\pm\>\geq n\varepsilon^{n-1}\mathrm{I}\Big(1-\mathrm{C_0}\kappa\varepsilon^{n-1}+o(\varepsilon^{n-1})\Big).
\end{eqnarray*}
The variational characterization~(\ref{charvar}) of $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$ gives:
\begin{eqnarray*}
\lambda_{\min}(\mathrm{M},\pa\mathrm{M})\leq n\mathrm{I}^{\frac{1}{n}}\frac{1+o(\varepsilon^{n-1})}{1-\mathrm{C_0}\kappa\varepsilon^{n-1}+o(\varepsilon^{n-1})}=\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+)+\mathrm{C}_0\kappa\varepsilon^{n-1}+o(\varepsilon^{n-1})
\end{eqnarray*}
where $\mathrm{C}_0$ is a positive constant. However, we assumed that $\kappa:=\mathrm{Re}\,\<\psi_0^\pm,\psi_1^\pm\><0$ so we can finally conclude that:
\begin{eqnarray*}
\lambda_{\min}(\mathrm{M},\pa\mathrm{M})<\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+).
\end{eqnarray*}
\hfill$\square$\\
The proof of Theorem~\ref{lcfm} then reduces to proving the existence of a spinor field satisfying the hypotheses of Theorem~\ref{cop}.\\
{\it Proof of Theorem~\ref{lcfm}:}
We show that under the hypotheses of Theorem~\ref{lcfm}, there exists a spinor field on $\mathrm{M}\setminus\{q\}$ satisfying (\ref{hyp1}), (\ref{hyp2}) and (\ref{hyp3}). Without loss of generality, we can assume (using the conformal covariance of $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$) that for the metric $g$, there exists a boundary point $q\in\pa\mathrm{M}$ with a locally flat and umbilic boundary neighbourhood. On the other hand, since the Dirac operator is supposed to be invertible on $\mathcal{H}^\pm$, Proposition~\ref{Devfoncgreen} allows us to write that, around the point $q\in\pa\mathrm{M}$, the chiral Green function admits the following development:
\begin{eqnarray*}
\mathrm{G}_\mathrm{CHI}^\pm(x,q)\psi_0^\pm=-\frac{2}{\omega_{n-1}}\frac{x-q}{|x-q|^n}\cdot\psi_0^\pm+\mathrm{m}_\mathrm{CHI}^\pm(x,q)\psi_0^\pm,
\end{eqnarray*}
where $\psi_0^\pm$ is a spinor such that $\nu\cdot\Ga\psi_0^\pm=\mp\psi_0^\pm$ and $\mathbb{B}_{\CHI}^\pm\big(\mathrm{m}_\mathrm{CHI}^\pm(\,.\,,q)\psi^\pm_{0|\pa\mathrm{M}}\big)=0$. Since the chiral mass operator is supposed to be not identically zero, we can choose a spinor $\psi_0^\pm$ which is an eigenspinor for the chiral mass operator $\mathrm{m}_\mathrm{CHI}^\pm(q)$ associated with the eigenvalue $\pm\kappa$ with $\kappa\in\mathbb{R}$. Moreover, Corollary~\ref{spectrum} ensures that the eigenvalues $\pm\kappa$ are of opposite signs, so one of the two is positive and the other is negative. However, point $(1)$ of Remark~\ref{remark2} tells us that the chiral bag invariant $\lambda_{\min}(\mathrm{M},\pa\mathrm{M})$ does not depend on the choice of the boundary condition, and we can then choose $\psi_0^-$ (for example) as being an eigenspinor for the negative chiral mass operator $\mathrm{m}^-_\mathrm{CHI}(q)$ associated with the eigenvalue $\kappa$ with $\kappa>0$. Then the chiral Green function $\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)\psi_0^-$ is given by:
\begin{eqnarray*}
\mathrm{G}_\mathrm{CHI}^-(x,q)\psi_0^-=-\frac{2}{\omega_{n-1}}\frac{x-q}{|x-q|^n}\cdot\psi_0^-+\kappa\psi_0^-.
\end{eqnarray*}
Using the notations of Theorem~\ref{cop}, we have $\psi_1^-=-\kappa\psi_0^-$ and then we obtain:
\begin{eqnarray*}
\mathrm{Re}\,\<x\cdot\psi_0^-,\psi_1^-\>=-\kappa\,\mathrm{Re}\,\<x\cdot\psi_0^-,\psi_0^-\>=0
\end{eqnarray*}
as well as $\mathrm{Re}\,\<\psi_0^-,\psi_1^-\>=-\kappa|\psi_0^-|^2<0$ since $\kappa>0$. Thus the spinor field $-\frac{\omega_{n-1}}{2}\mathrm{G}_\mathrm{CHI}^-(\,.\,,q)\psi_0^-$ satisfies the properties (\ref{hyp1}), (\ref{hyp2}) and (\ref{hyp3}) and so Theorem~\ref{cop} allows us to conclude.
\hfill$\square$\\
As an application of this result, we obtain a spinorial proof of the Yamabe problem on manifolds with boundary. In fact, we have:
\begin{corollary}
Suppose that $\mu(\mathrm{M},\pa\mathrm{M})>0$ and that the manifold $(\mathrm{M}^n,g,\sigma)$ ($n\geq 3$) has a locally conformally flat with umbilic boundary point $q\in\pa\mathrm{M}$. Assume moreover that the chiral mass operator is not identically zero. Then there exists a metric $\overline{g}\in [g]$ such that the scalar curvature $\mathrm{R}_{\overline{g}}$ is a positive constant and the mean curvature $\mathrm{H}_{\overline{g}}$ is zero.
\end{corollary}
{\it Proof:}
A sufficient condition for the existence of a solution of the Yamabe problem on manifolds with boundary is (see \cite{escobar:92}):
\begin{eqnarray}\label{solyam}
\mu(\mathrm{M},\pa\mathrm{M})<\mu(\mathbb{S}^n_+,\pa\mathbb{S}^n_+),
\end{eqnarray}
where $\mu(\mathrm{M},\pa\mathrm{M})$ is the Yamabe invariant of $\mathrm{M}$ endowed with the conformal class of $g$. The hypothesis $\mu(\mathrm{M},\pa\mathrm{M})>0$ implies that the Dirac operator is invertible under the chiral bag boundary condition. Since the manifold has a locally conformally flat with umbilic boundary point $q\in\pa\mathrm{M}$ and since the chiral mass operator is supposed to be not identically zero, we can apply Theorem~\ref{lcfm} and we finally get:
\begin{eqnarray*}
\lambda_{\min}(\mathrm{M},\pa\mathrm{M})<\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+).
\end{eqnarray*}
However, using the Hijazi inequality~(\ref{hijbord}) and the fact that
\begin{eqnarray*}
\lambda_{\min}(\mathbb{S}^n_+,\pa\mathbb{S}^n_+)=\frac{n}{2}\Big(\frac{\omega_{n}}{2}\Big)^{\frac{1}{n}},
\end{eqnarray*}
we obtain (\ref{solyam}) and thus the result follows directly.
\hfill$\square$\\
This spinorial proof of the Yamabe problem on manifolds with boundary is obviously more involved than the one given by Escobar. However, it seems that the inequality proved in Theorem~\ref{lcfm} gives the existence of solutions to a Yamabe type problem for the Dirac operator under the chiral bag boundary condition, involving critical Sobolev embedding theorems. A similar equation has been treated by B. Ammann in the boundaryless case (see \cite{habilbernd} or \cite{amm1}).
\section{The $\mathrm{MIT}$ Green function}
In this section, we construct the Green function for the Dirac operator under the $\mathrm{MIT}$ bag boundary condition. As an application of this construction, we give a simple proof of a positive mass theorem proved by Escobar \cite{escobar:92} in the context of the Yamabe problem on manifolds with boundary.
\subsection{Definitions and first properties}\label{definition1}
The $\mathrm{MIT}$ bag boundary condition has been introduced by physicists of the Massachusetts Institute of Technology in the seventies (see \cite{cjjtw}, \cite{cjjt} or \cite{johnson}). The spectrum of the Dirac operator under this boundary condition has then been studied on compact Riemannian spin manifolds with boundary (see \cite{hijazi.montiel.roldan:01} and \cite{sr1}). Another very nice application of this boundary condition can be found in \cite{hijazi.montiel.zhang:2} where the authors use its conformal covariance to give some estimates of the boundary Dirac operator involving a conformal invariant (which can be seen as an extrinsic version of the Hijazi inequality \cite{hijazi:86}). \\
We now briefly recall the definition of the $\mathrm{MIT}$ bag boundary condition. Consider the linear map
\begin{eqnarray*}
i\nu\cdot:\mathbf{S}_g\longrightarrow\mathbf{S}_g
\end{eqnarray*}
which is an involution, so we can define the pointwise orthogonal projection
$$\begin{array}{lccl}
\mathbb{B}_{\MIT}^\pm : & \mathrm{L}^2(\mathbf{S}_g) & \longrightarrow & \mathrm{L}^2(\mathrm{V}^{\pm}_g)\\
& \varphi & \longmapsto & \frac{1}{2}(\Id\pm i\nu\cdot)\varphi,
\end{array}$$
where $\mathrm{V}_g^\pm$ is the eigensubbundle associated with the eigenvalues $\pm 1$ of the endomorphism $i\nu$. We can then check (see \cite{hijazi.montiel.roldan:01}) that these projections define elliptic boundary conditions for the Dirac operator. We can now define the Green function for the Dirac operator under the $\mathrm{MIT}$ bag boundary condition.
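Let us briefly check that $i\nu\cdot$ is indeed an involution: by the Clifford relations, for any spinor field $\varphi$ along $\pa\mathrm{M}$ we have:
\begin{eqnarray*}
(i\nu\cdot)^2\varphi=i^2\,\nu\cdot\nu\cdot\varphi=(-1)\big(-g(\nu,\nu)\big)\varphi=\varphi,
\end{eqnarray*}
so that $\mathbb{B}_{\MIT}^++\mathbb{B}_{\MIT}^-=\Id$ and $\mathbb{B}_{\MIT}^\pm\circ\mathbb{B}_{\MIT}^\pm=\mathbb{B}_{\MIT}^\pm$, i.e. the maps $\mathbb{B}_{\MIT}^\pm$ are complementary projections.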
\begin{definition}\label{MITgreen}
A Green function for the Dirac operator under the $\mathrm{MIT}$ bag boundary condition (or a $\mathrm{MIT}$ Green function) is given by a smooth section:
\begin{eqnarray}\label{MITgreen1}
\mathrm{G}^\pm_\mathrm{MIT}:\mathrm{M}\times\mathrm{M}\setminus\Delta\rightarrow\Sigma_g(\mathrm{M})\boxtimes\big(\Sigma_g(\mathrm{M})\big)^\ast
\end{eqnarray}
locally integrable on $\mathrm{M}\times\mathrm{M}$ which satisfies, in a weak sense, the following boundary problem:
$$\left\lbrace
\begin{array}{ll}
\mathrm{D}_g\big(\mathrm{G}^\pm_\mathrm{MIT}(x,y)\big)=\delta_y\mathrm{Id}_{\Sigma_y\mathrm{M}}\\ \\
\mathbb{B}_{\MIT}^\pm\big(\mathrm{G}^\pm_\mathrm{MIT}(x,y)\big)=0,\quad\textrm{for}\;x\in\pa\mathrm{M}\setminus\{y\},
\end{array}
\right.$$
for all $x\neq y\in\mathrm{M}$. In other words, we have:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\mathrm{G}_{\mathrm{MIT}}^\pm(x,y)\psi_0^\pm,\mathrm{D}_g\varphi(x)\>dv(x)=\<\psi_0^\pm,\varphi(y)\>
\end{eqnarray*}
for all $y\in\mathrm{M}$, $\psi_0^\pm\in\Sigma_{y}(\mathrm{M})$ satisfying $i\nu\cdot\psi_0^\pm=\pm\psi_0^\pm$ and $\varphi\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ such that $\mathbb{B}_{\MIT}^\mp(\varphi_{|\pa\mathrm{M}})=0$.
\end{definition}
\begin{remark}
If we let:
\begin{eqnarray*}
\mathrm{D}_g^\pm:\mathcal{H}^\pm_g:=\{\phi\in\mathrm{H}^2_1(\Sigma\mathrm{M})\;/\;\mathbb{B}_{\MIT}^\pm(\phi_{|\pa\mathrm{M}})=0\}\longrightarrow\mathrm{L}^2\big(\Sigma_g(\mathrm{M})\big),
\end{eqnarray*}
we can easily check (using the Green formula) that $(\mathrm{D}^\pm_g)^\ast=\mathrm{D}^\mp_g$ where $(\mathrm{D}^\pm_g)^\ast$ is the formal adjoint of the Dirac operator under the boundary condition $\mathbb{B}_{\MIT}^\pm$. Thus the domain $\mathrm{dom}\,(\mathrm{D}^\pm_g)^\ast$ of the Dirac operator's adjoint is given by $\mathcal{H}^\mp_g$. The preceding definition of the $\mathrm{MIT}$ Green function is then consistent with the definition of a weak solution of an equation. Indeed, the section $\mathrm{G}_\mathrm{MIT}^\pm(.\,,q)\psi_0^\pm\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ satisfies on $\mathcal{H}^\pm_g$ and in a weak sense the equation:
\begin{eqnarray*}
\mathrm{D}_g\big(\mathrm{G}^\pm_{\mathrm{MIT}}(.\,,q)\psi_0^\pm\big) = \delta_q
\end{eqnarray*}
if for all $\psi_0^\pm\in\Sigma_q\mathrm{M}$ such that $i\nu\cdot\psi_0^\pm=\pm\psi_0^\pm$:
\begin{eqnarray*}
\int_{\mathrm{M}}\<\mathrm{G}_{\mathrm{MIT}}(x,q)\psi_0^\pm,(\mathrm{D}^\pm_g)^\ast\phi\>dv(g) = \<\psi_0^\pm,\phi(q)\>_{\Sigma_q\mathrm{M}}
\end{eqnarray*}
for all $\phi\in\mathrm{dom}\,(\mathrm{D}^\pm_g)^\ast=\mathcal{H}^\mp_g$. This definition is exactly the one given in Definition \ref{MITgreen}.
\end{remark}
In order to give a development of the $\mathrm{MIT}$ Green function in a neighbourhood of $q\in\pa\mathrm{M}$, it is very useful to study the behaviour of the $\mathrm{MIT}$ condition under the trivialization of the spinor bundle induced by Fermi coordinates and given in Section~\ref{GFDOCBC}. More precisely, we prove the following lemma:
\begin{lemma}\label{MITtri}
Let $\mathrm{U}$ and $\mathrm{V}$ be the open sets of the trivialization given in Section~\ref{GFDOCBC} and let $\Phi_0\in\Ga\big(\Sigma_{\xi}(\mathbb{R}^n_+)\big)$ be a parallel spinor such that
\begin{eqnarray*}
i\nu\cdot\overline{\Phi}_{0}(q)=\pm\overline{\Phi}_{0}(q)
\end{eqnarray*}
at one point $q\in\mathrm{V}\cap\pa\mathrm{M}$. Then we get:
\begin{eqnarray*}
i\nu\cdot\overline{\Phi}_{0|\mathrm{V}\cap\pa\mathrm{M}}=\pm\overline{\Phi}_{0|\mathrm{V}\cap\pa\mathrm{M}},
\end{eqnarray*}
that is $\mathbb{B}_{\MIT}^\mp(\overline{\Phi}_{0|\mathrm{V}\cap\pa\mathrm{M}})=0$.
\end{lemma}
\noindent {\it Proof:} Consider the function defined on $\mathrm{V}$ by $f(p)=|i\nu\cdot\overline{\Phi}_0-\overline{\Phi}_0|^2(p)$; we show that $f$ vanishes on $\mathrm{V}\cap\pa\mathrm{M}$. Indeed, for $1\leq i\leq n-1$, we have:
\begin{eqnarray*}
e_i(f) & = & e_i(|i\nu\cdot\overline{\Phi}_0-\overline{\Phi}_0|^2) \\
& = & e_i\big(2|\overline{\Phi}_0|^2-2\,\mathrm{Re}\<i\nu\cdot\overline{\Phi}_0,\overline{\Phi}_0\>\big)\\
& = & 2\,e_i\big(|\overline{\Phi}_0|^2+\mathrm{Im}\<\nu\cdot\overline{\Phi}_0,\overline{\Phi}_0\>\big),
\end{eqnarray*}
where $\mathrm{Im}\,(z)$ is the imaginary part of a complex number $z\in\mathbb{C}$. However, since the spinor field $\Phi_0$ is parallel, its norm is constant and we can assume that $|\Phi_0|^2=1$; since the trivialization of the spinor bundle is an isometry, we get $|\overline{\Phi}_0|^2=1$ and so $e_i\big(|\overline{\Phi}_0|^2\big)=0$. Using the compatibility of the spinorial Levi-Civita connection with the Hermitian metric leads to:
\begin{eqnarray*}
e_i\big(\mathrm{Im}\<\nu\cdot\overline{\Phi}_0,\overline{\Phi}_0\>\big)=\mathrm{Im}\<\overline{\na}_{e_i}\nu\cdot\overline{\Phi}_0,\overline{\Phi}_0\>+2\,\mathrm{Im}\<\nu\cdot\overline{\na}_{e_i}\overline{\Phi}_0,\overline{\Phi}_0\>.
\end{eqnarray*}
The local expression of the spinorial Levi-Civita connection $\overline{\na}$ acting on $\Sigma_g(\mathrm{M})$ gives:
\begin{eqnarray*}
\mathrm{Im}\<\nu\cdot\overline{\na}_{e_i}\overline{\Phi}_0,\overline{\Phi}_0\> & = & \frac{1}{4}\sum_{1\leq j\neq k\leq n-1}\widetilde{\Ga}_{ij}^k\,\mathrm{Im}\<\nu\cdot e_j\cdot e_k\cdot\overline{\Phi}_0,\overline{\Phi}_0\>\\
& & -\frac{1}{2}\,\mathrm{Im}\<\overline{\na}_{e_i}\nu\cdot\overline{\Phi}_0,\overline{\Phi}_0\>.
\end{eqnarray*}
Note that:
\begin{eqnarray*}
\<\nu\cdot e_j\cdot e_k\cdot\overline{\Phi}_0,\overline{\Phi}_0\> & = & \<\overline{\Phi}_0,\nu\cdot e_j\cdot e_k\cdot\overline{\Phi}_0\>,
\end{eqnarray*}
and so this Hermitian product is real, that is $\mathrm{Im}\<\nu\cdot e_j\cdot e_k\cdot\overline{\Phi}_0,\overline{\Phi}_0\>=0$. Combining the preceding identities, we get $e_i(f)=0$ for all $1\leq i\leq n-1$, hence $f$ is locally constant along $\mathrm{V}\cap\pa\mathrm{M}$. Since we assumed that $f(q)=0$, we immediately obtain the result of this lemma.
\hfill$\square$\\
We can now state the analogous result of Proposition~\ref{Devfoncgreen} for the $\mathrm{MIT}$ Green function. Indeed, we have:
\begin{proposition}\label{DevfoncgreenMIT}
Assume that there exists a point $q\in\pa\mathrm{M}$ which has a locally flat with umbilic boundary neighbourhood. Then the $\mathrm{MIT}$ Green function exists and admits the following development near $q$:
\begin{eqnarray}\label{devfoncgreenmit}
\mathrm{G}_\mathrm{MIT}^\pm(x,q)\psi_0^\pm=-\frac{2}{\omega_{n-1}}\frac{x-q}{|x-q|^n}\cdot\psi_0^\pm+\mathrm{m}_\mathrm{MIT}^\pm(x,q)\psi_0^\pm,
\end{eqnarray}
where $\psi_0^\pm\in\Sigma_q(\mathrm{M})$ is a spinor field satisfying $i\,\nu\cdot\psi_0^\pm=\pm\psi_0^\pm$ (in other words $\mathbb{B}_{\MIT}^{\mp}(\psi_0^\pm)=0$) and where $\mathrm{m}_\mathrm{MIT}^\pm(\,.\,,q)\psi_0^\pm$ is a smooth spinor field such that $\mathrm{D}_g\big(\mathrm{m}_\mathrm{MIT}^\pm(\,.\,,q)\psi_0^\pm\big)=0$ around $q$. Moreover, along $\pa\mathrm{M}$, we have:
\begin{eqnarray}\label{massmit}
\mathbb{B}_{\MIT}^{\mp}\big(\mathrm{m}_\mathrm{MIT}^\pm(.\,,q)\psi^\pm_{0|\pa\mathrm{M}}\big)=0.
\end{eqnarray}
\end{proposition}
{\it Proof:} The proof of this proposition follows the proof of Proposition~\ref{Devfoncgreen} using Lemma~\ref{MITtri} and the fact that the Dirac operator:
\begin{eqnarray*}
\mathrm{D}_g:\mathcal{H}^\pm_g:=\{\phi\in\mathrm{H}^2_1(\Sigma\mathrm{M})\;/\;\mathbb{B}_{\MIT}^\pm(\phi_{|\pa\mathrm{M}})=0\}\longrightarrow\mathrm{L}^2\big(\Sigma_g(\mathrm{M})\big),
\end{eqnarray*}
defines an invertible operator.
\hfill$\square$\\
\subsection{Application: The positive mass theorem for the Yamabe problem on manifolds with boundary}
First, we briefly recall the notion of mass for compact manifolds with boundary. For more details, we refer to \cite{lee.parker:87} for the boundaryless case and to \cite{escobar:92} for the non empty boundary case. Let $(\mathrm{M}^n,g)$ be a connected compact Riemannian manifold which is locally conformally flat with umbilic boundary. If we assume that the Yamabe invariant $\mu(\mathrm{M},\pa\mathrm{M})$ of the manifold $\mathrm{M}$ is positive and if $q\in\pa\mathrm{M}$, then there exists a unique Green function $\mathcal{G}_q$ for the conformal Laplacian
\begin{eqnarray*}
\mathrm{L}_g:=4\frac{n-1}{n-2}\Delta_g+\mathrm{R}_g
\end{eqnarray*}
under the boundary condition
\begin{eqnarray*}
\mathrm{B}_g:= -\frac{2}{n-2}\frac{\pa}{\pa\nu}+\mathrm{H}_g,
\end{eqnarray*}
that is, a smooth function $\mathcal{G}_q$ defined on $\mathrm{M}\setminus\{q\}$ which satisfies, in a weak sense, the boundary problem:
\begin{equation}
\left\lbrace
\begin{array}{lll}\label{GFCF}
\mathrm{L}_g\mathcal{G}_q & = & \delta_q \quad\text{on}\;\mathrm{M} \\ \\
\mathrm{B}_g\mathcal{G}_{q|\pa\mathrm{M}} & = & \delta_q \quad\text{along}\;\pa\mathrm{M}
\end{array}
\right.
\end{equation}
One can then check that if there exists a point $q\in\pa\mathrm{M}$ with a locally flat and umbilic boundary neighbourhood in the metric $\overline{g}\in [g]$, then the Green function $\mathcal{G}_q$ admits the following expansion (near $q$):
\begin{eqnarray}\label{devfongrlc}
\mathcal{G}_q(x)=\frac{1}{(n-2)\omega_{n-1} r^{n-2}}+\mathrm{A}+\alpha_{q}(x)
\end{eqnarray}
where $\mathrm{A}\in\mathbb{R}$ and $\alpha_q$ is a harmonic function (near $q$) which satisfies $\alpha_q(q)=0$ and $\frac{\pa\alpha_q}{\pa\nu}=0$. The positive mass theorem proved by Escobar can then be stated as follows:
\begin{center}
{\it The constant $\mathrm{A}$ satisfies $\mathrm{A}\geq 0$. Moreover, $\mathrm{A}=0$ if and only if $\mathrm{M}$ is conformally isometric to the round hemisphere $\mathbb{S}^n_+$.\\}
\end{center}
We give here a proof of this result for spin manifolds. In our proof, however, it is not necessary to assume that the whole manifold is locally conformally flat with umbilic boundary: it is sufficient to assume that the manifold has one point $q\in\pa\mathrm{M}$ which has a locally conformally flat with umbilic boundary neighbourhood.
The proof of this positive mass theorem is inspired by the work of Ammann and Humbert \cite{amm2} and is based on the construction of the Green function of the Dirac operator. The chiral Green function used in Section~\ref{GFDOCBC} seems to be a good candidate for our purpose; however, it requires the existence of a chirality operator, which is not available in every dimension. That is why we will use another conformally covariant boundary condition which exists without additional assumptions, namely the $\mathrm{MIT}$ bag boundary condition. More precisely, we prove:
\begin{theorem}
Let $(\mathrm{M}^n,g,\sigma)$ be a connected compact Riemannian spin manifold with non empty smooth boundary. Assume that the Yamabe invariant is positive and that there exists a point $q\in\pa\mathrm{M}$ which has a locally conformally flat with umbilic boundary neighbourhood. Then the mass ${\rm A}$ of the manifold $\mathrm{M}$ satisfies ${\rm A}\geq 0$. Moreover, equality holds if and only if $\mathrm{M}$ is conformally isometric to the round hemisphere $\mathbb{S}^n_+$.
\end{theorem}
{\it Proof:}
First note that we can assume that for the metric $g$, the point $q\in\pa\mathrm{M}$ has a locally flat with umbilic boundary neighbourhood. Proposition~\ref{DevfoncgreenMIT} allows us to deduce that there exists a unique $\mathrm{MIT}$ Green function $\mathrm{G}_{\mathrm{MIT}}^-(\,.\,,q)$ for the Dirac operator which admits the following development near $q\in\pa\mathrm{M}$:
\begin{eqnarray}
\mathrm{G}_{\mathrm{MIT}}^-(x,q)\psi_0 & = & \mathrm{G}_{\eucl}(x,q)\psi_0+\mathrm{m}^-_{\mathrm{MIT}}(x,q)\psi_0,
\end{eqnarray}
where $\psi_0\in\Sigma_q\mathrm{M}$ is such that $i\nu\cdot\psi_0=-\psi_0$ and $\mathrm{m}_\mathrm{MIT}^-(\,.\,,q)\psi_0\in\Gamma\big(\Sigma_g(\mathrm{M})\big)$ is a harmonic spinor near $q$ which satisfies:
\begin{eqnarray*}
\mathbb{B}_{\MIT}^-\big(\mathrm{m}_\mathrm{MIT}^-(.\,,q)\psi_{0|\pa\mathrm{M}}\big)=0.
\end{eqnarray*}
Without loss of generality, we can assume that $|\psi_0|^2=1$. On the other hand, since $\mu(\mathrm{M},\pa\mathrm{M})>0$, there exists a unique Green function $\mathcal{G}_q$ for the conformal Laplacian, smooth on $\mathrm{M}\setminus\{q\}$ and such that $\mathcal{G}_q>0$ (see \cite{escobar:92}). Moreover, since the point $q$ is supposed to have a locally flat with umbilic boundary neighbourhood in the metric $g$, the Green function $\mathcal{G}_q$ can be written
\begin{eqnarray*}
\mathcal{G}_q(x)=\frac{1}{(n-2)\omega_{n-1} r^{n-2}}+\mathrm{A}+\alpha_{q}(x)
\end{eqnarray*}
near $q\in\pa\mathrm{M}$, where $\mathrm{A}\in\mathbb{R}$ and $\alpha_q$ is a smooth function on $\mathrm{M}$. Now consider the conformal change of the metric on $\mathrm{M}\setminus\{q\}$ given by:
\begin{eqnarray*}
\overline{g}=\big((n-2)\omega_{n-1}\mathcal{G}_q\big)^{\frac{4}{n-2}}g =\widetilde{\mathcal{G}}_q^{\frac{4}{n-2}}g.
\end{eqnarray*}
Since the Green function satisfies the boundary problem~(\ref{GFCF}) and using the fact that the scalar and mean curvatures in $g$ and $\overline{g}$ are related by:
\begin{eqnarray*}
\overline{\mathrm{R}}=\widetilde{\mathcal{G}}_q^{-\frac{n+2}{n-2}}\mathrm{L}_g\widetilde{\mathcal{G}}_q\quad\textrm{and}\quad\overline{\mathrm{H}}=\widetilde{\mathcal{G}}_q^{-\frac{n}{n-2}}\mathrm{B}_g\widetilde{\mathcal{G}}_q,
\end{eqnarray*}
we obtain that the scalar curvature of $(\mathrm{M}\setminus\{q\},\overline{g})$ is $\overline{\mathrm{R}}=0$ and the mean curvature of $(\pa\mathrm{M}\setminus\{q\},\overline{g})$ in $\mathrm{M}$ is $\overline{\mathrm{H}}=0$. We can then identify the spinor bundle over $(\mathrm{M}\setminus\{q\},g)$ with the one over $(\mathrm{M}\setminus\{q\},\overline{g})$ thanks to the bundle isomorphism $\mathrm{F}$ used in the proof of Proposition~\ref{Confgreen}. Now consider the spinor field defined by:
\begin{eqnarray*}
\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0 = \widetilde{\mathcal{G}}_q^{-\frac{n-1}{n-2}}\mathrm{F}\big(\mathrm{G}_\mathrm{MIT}^-(\,.\,,q)\psi_0\big)\in\Gamma\big(\Sigma_{\overline{g}}(\mathrm{M}\setminus\{q\})\big).
\end{eqnarray*}
Using the conformal covariance (\ref{covconf}) of the Dirac operator, we have:
\begin{eqnarray}\label{grendi}
\mathrm{D}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0\big)=\widetilde{\mathcal{G}}_q^{-\frac{n+1}{n-2}}\mathrm{F}\Big(\mathrm{D}_g\big(\mathrm{G}_\mathrm{MIT}^-(\,.\,,q)\psi_0\big)\Big)=0
\end{eqnarray}
on $\mathrm{M}\setminus\{q\}$. Moreover since the $\mathrm{MIT}$ condition is also conformally invariant, we obtain:
\begin{eqnarray*}
\overline{\mathbb{B}}_\mathrm{MIT}^-\big(\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_{0|\pa\mathrm{M}}\big)=0,
\end{eqnarray*}
where $\overline{\mathbb{B}}_\mathrm{MIT}^-=\frac{1}{2}\big(\Id-i\overline{\nu}\,\overline{\cdot}\big)$ is the projection of the $\mathrm{MIT}$ bag boundary condition in the metric $\overline{g}$. Now let $\varepsilon>0$ and denote by $\mathrm{B}^+_q(\varepsilon)$ the half-ball centered at $q$ with radius $\varepsilon$. Using the formula~(\ref{grendi}), the Schr\"odinger-Lichnerowicz formula and the fact that the scalar curvature of $\mathrm{M}\setminus\{q\}$ is zero, we get:
\begin{eqnarray*}
0=\int_{\mathrm{M}\setminus\mathrm{B}^+_q(\varepsilon)}\<\mathrm{D}^2_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>dv(\overline{g})=\int_{\mathrm{M}\setminus\mathrm{B}^+_q(\varepsilon)}\<\overline{\na}^\ast\overline{\na}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>dv(\overline{g}).
\end{eqnarray*}
An integration by parts leads to:
\begin{eqnarray}\label{mass}
\int_{\pa\mathrm{B}^+_q(\varepsilon)}\<\overline{\na}_{\overline{\nu}_{\varepsilon}}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>ds(\overline{g})& = & \int_{\pa\mathrm{M}_\varepsilon}\<\overline{\na}_{\overline{\nu}}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>ds(\overline{g})\nonumber\\
& & + \int_{\mathrm{M}\setminus\mathrm{B}^+_q(\varepsilon)}|\overline{\na}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0)|^2 dv(\overline{g})
\end{eqnarray}
where $\pa\mathrm{M}_{\varepsilon}=\pa\mathrm{M}\setminus\big(\pa\mathrm{M}\cap\pa\mathrm{B}^+_q(\varepsilon)\big)$ and $\nu$ (resp. $\nu_{\varepsilon}$) is the inner (resp. outer) unit vector field normal to $\pa\mathrm{M}_\varepsilon$ (resp. to $\pa\mathrm{B}^+_q(\varepsilon)$) in the metric $g$. An easy calculation gives:
\begin{eqnarray*}
\overline{\na}_{\overline{\nu}}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0) & = & \frac{n-1}{2}\overline{\mathrm{H}}\,\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0-\overline{\nu}\,\overline{\cdot}\mathrm{D}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big)-\mathrm{D}^{\pa\mathrm{M}}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big)
\end{eqnarray*}
where
\begin{eqnarray*}
\mathrm{D}^{\pa\mathrm{M}}_{g}:=\sum_{i=1}^{n-1} e_i\cdot\nu\cdot\na^{\bf{S}}_{e_i}
\end{eqnarray*}
is the boundary Dirac operator acting on the restricted spinor bundle over $\pa\mathrm{M}$ endowed with the metric $g$. However, on $\pa\mathrm{M}_\varepsilon$ we have $\overline{\mathrm{H}}=0$ and $\mathrm{D}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big)=0$, so (\ref{mass}) gives:
\begin{eqnarray*}
\int_{\pa\mathrm{B}^+_q(\varepsilon)}\<\overline{\na}_{\overline{\nu}_{\varepsilon}}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>ds(\overline{g})& = & -\int_{\pa\mathrm{M}_\varepsilon}\<\mathrm{D}^{\pa\mathrm{M}}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>ds(\overline{g})\\
& & + \int_{\mathrm{M}\setminus\mathrm{B}^+_q(\varepsilon)}|\overline{\na}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0)|^2 dv(\overline{g}).
\end{eqnarray*}
On the other hand, using the conformal covariance of the $\mathrm{MIT}$ bag boundary condition, we have:
\begin{eqnarray*}
i\overline{\nu}\,\overline{\cdot}\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0=\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0,
\end{eqnarray*}
for all $x\in\pa\mathrm{M}_\varepsilon$ and so:
\begin{eqnarray*}
\<\mathrm{D}^{\pa\mathrm{M}}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\> & = & \<i\overline{\nu}\,\overline{\cdot}\mathrm{D}^{\pa\mathrm{M}}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big),
i\overline{\nu}\,\overline{\cdot}\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>\\
& = & -\<\mathrm{D}^{\pa\mathrm{M}}_{\overline{g}}\big(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\big),\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0\>.
\end{eqnarray*}
Hence~(\ref{mass}) gives:
\begin{eqnarray}
0 \leq \int_{\mathrm{M}\setminus\mathrm{B}^+_q(\varepsilon)}|\overline{\na}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0)|^2 dv(\overline{g}) = \frac{1}{2}\int_{\pa\mathrm{B}^+_q(\varepsilon)}\overline{\nu}_{\varepsilon}|\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0|^2ds(\overline{g}).
\end{eqnarray}
The vector field $\overline{\nu}_\varepsilon$ is the inner normal field to $\pa\mathrm{B}^+_q(\varepsilon)$ for the metric $\overline{g}$, and since $\overline{g}$ is conformal to $g$ (which is flat around $q$), the vector field $\overline{\nu}_{\varepsilon}$ is collinear with $\frac{\pa}{\pa r}$; that is, there exists a constant $c>0$ such that $\overline{\nu}_{\varepsilon}=-c\frac{\pa}{\pa r}$. However:
\begin{eqnarray*}
1=\overline{g}(\overline{\nu}_{\varepsilon},\overline{\nu}_{\varepsilon})=c^2\widetilde{\mathcal{G}}_q^{\frac{4}{n-2}}g(\frac{\pa}{\pa r},\frac{\pa}{\pa r})=c^2(n-2)^{\frac{4}{n-2}}\,\omega_{n-1}^{\frac{4}{n-2}}\,\mathcal{G}_q^{\frac{4}{n-2}}
\end{eqnarray*}
and since the half-ball $\mathrm{B}_q^+(\varepsilon)$ is contained in the open (flat) set of the trivialization near $q$, the Green function for the conformal Laplacian admits the expansion (\ref{devfongrlc}), that is:
\begin{eqnarray*}
c^2(n-2)^{\frac{4}{n-2}}\,\omega_{n-1}^{\frac{4}{n-2}}\,\Big(\frac{1}{(n-2)\omega_{n-1} r^{n-2}}+\mathrm{A}+\alpha_{q}(x)\Big)^{\frac{4}{n-2}}=1.
\end{eqnarray*}
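For completeness, solving for $c$ on $\pa\mathrm{B}_q^+(\varepsilon)$ can be spelled out as follows (a routine expansion, using $\widetilde{\mathcal{G}}_q=(n-2)\,\omega_{n-1}\,\mathcal{G}_q$ and evaluating at $r=\varepsilon$):

```latex
\begin{eqnarray*}
c=\widetilde{\mathcal{G}}_q^{-\frac{2}{n-2}}
=\Big(\frac{1}{\varepsilon^{n-2}}+(n-2)\,\omega_{n-1}\big(\mathrm{A}+\alpha_{q}(x)\big)\Big)^{-\frac{2}{n-2}}
=\varepsilon^2\Big(1+\mathrm{O}(\varepsilon^{n-2})\Big)^{-\frac{2}{n-2}}.
\end{eqnarray*}
```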
An easy calculation then shows that $c=\varepsilon^2+o(\varepsilon^2)$ and finally we have $\overline{\nu}_{\varepsilon}=-(\varepsilon^2+o(\varepsilon^2))\frac{\pa}{\pa r}$. We now give an estimate of $\overline{\nu}_{\varepsilon}|\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0|^2$ on $\pa\mathrm{B}_q^+(\varepsilon)$; for this, we write:
\begin{eqnarray*}
|\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0|^2 & = & \widetilde{\mathcal{G}}_q^{-2\frac{n-1}{n-2}}|\mathrm{G}_\mathrm{MIT}^-(x,q)\psi_0|^2\\
& = & (n-2)^{-2\frac{n-1}{n-2}}\,\omega_{n-1}^{-2\frac{n-1}{n-2}}\mathcal{G}_q^{-2\frac{n-1}{n-2}}\,|\mathrm{G}_\mathrm{MIT}^-(x,q)\psi_0|^2\\
& = & \Big(\frac{1}{r^{n-2}}+(n-2)\omega_{n-1}\mathrm{A}+\widetilde{\alpha}_{q}(x)\Big)^{-2\frac{n-1}{n-2}}\,|\mathrm{G}_{\eucl}(x,q)\psi_0+\mathrm{m}^-_{\mathrm{MIT}}(x,q)\psi_0|^2
\end{eqnarray*}
where $\widetilde{\alpha}_q(x)=(n-2)\,\omega_{n-1}\alpha_{q}(x)$. Using the expansion~(\ref{MITgreen1}) of the $\mathrm{MIT}$ Green function, we get:
\begin{eqnarray*}
|\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0|^2 & = & \Big(1+(n-2)\omega_{n-1}\mathrm{A}r^{n-2}+\widetilde{\alpha}_{q}(x)r^{n-2}\Big)^{-2\frac{n-1}{n-2}}\\
& & \times\Big(1+|\mathrm{m}^-_{\mathrm{MIT}}(x,q)\psi_0|^2+2\mathrm{Re}\,\<\mathrm{G}_{\eucl}(x,q)\psi_0,\mathrm{m}^-_{\mathrm{MIT}}(x,q)\psi_0\>\Big).
\end{eqnarray*}
Now note that, with some calculations, we obtain:
\begin{eqnarray*}
\frac{\pa}{\pa r}|\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0|^2=-2(n-1)(n-2)\,\omega_{n-1}\,\mathrm{A}\,\varepsilon^{n-3}+o(\varepsilon^{n-3}).
\end{eqnarray*}
On the other hand, we have:
\begin{eqnarray*}
ds (\overline{g}) & = & \sqrt{\mathrm{det}(\overline{g})}dx\\
& = & \widetilde{\mathcal{G}}_q^{2\frac{n-1}{n-2}}dx\\
& = & \varepsilon^{-2(n-1)}\big(1+o(1)\big)dx
\end{eqnarray*}
where $dx$ is the standard volume form of $\mathbb{R}^{n-1}$. We then have finally shown that:
\begin{eqnarray*}
0\leq \int_{\mathrm{M}\setminus\mathrm{B}^+_q(\varepsilon)}|\overline{\na}(\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0)|^2 dv(\overline{g}) & = & \frac{1}{2}\int_{\pa\mathrm{B}^+_q(\varepsilon)}\overline{\nu}_{\varepsilon}|\overline{\mathrm{G}}_{q}(x)\overline{\psi}_0|^2ds(\overline{g})\\
& = & \frac{1}{2}\mathrm{vol}\big(\pa\mathrm{B}^+_q(\varepsilon)\big)\big(-\varepsilon^2+o(\varepsilon^2)\big)\big(\varepsilon^{-2(n-1)}+o(\varepsilon^{-2(n-1)})\big)\\
& & \times \big(-2(n-1)(n-2)\omega_{n-1}\,\mathrm{A}\varepsilon^{n-3}+o(\varepsilon^{n-3})\big)\\
& = & (n-1)(n-2)\,\omega_{n-1}^2\,\mathrm{A}+o(1),
\end{eqnarray*}
hence $\mathrm{A}\geq0$. Now assume that $\mathrm{A}=0$; using the preceding identity, we have that the spinor field $\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0$ is parallel on $\mathrm{M}\setminus\{q\}$, that is $\overline{\na}_X(\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0)=0$ for all $X\in\Ga\big(\mathrm{T}(\mathrm{M}\setminus\{q\})\big)$. Since the choice of $\psi_0$ is arbitrary, we easily construct a basis of parallel spinor fields over $(\mathrm{M}\setminus\{q\},\overline{g})$. On the other hand, the spinor $\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0$ satisfies the $\mathrm{MIT}$ bag boundary condition, i.e.:
\begin{eqnarray*}
i\overline{\nu}\,\overline{\cdot}\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_{0|\pa\mathrm{M}}=\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_{0|\pa\mathrm{M}}.
\end{eqnarray*}
So if we derive (along $\pa\mathrm{M}$) this equality, we get:
\begin{eqnarray*}
\overline{\na}_X\big(i\overline{\nu}\,\overline{\cdot}\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0\big) & = & -i\overline{\mathrm{A}}(X)\overline{\cdot}\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0=0
\end{eqnarray*}
since the spinor field $\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0$ is parallel. Using the fact that the spinor field $\overline{\mathrm{G}}_{q}(\,.\,)\overline{\psi}_0$ has no zero (since it is a constant section), we deduce that the boundary $\pa\mathrm{M}$ is totally geodesic in $\mathrm{M}$. The restriction to the boundary of a parallel spinor field from a basis of the spinor bundle over $(\mathrm{M},\overline{g})$ gives a parallel spinor field on the restricted spinor bundle along the boundary. This is an easy consequence of the spinorial Gauss formula and of the fact that the boundary is totally geodesic. We can thus easily conclude that the boundary is isometric to the Euclidean space $\mathbb{R}^{n-1}$, hence the manifold $(\mathrm{M}\setminus\{q\},\overline{g})$ is isometric to the Euclidean half-space $(\mathbb{R}^n_+,\xi)$. Now consider an isometry $\mathrm{I}:(\mathrm{M}\setminus\{q\},\overline{g})\rightarrow(\mathbb{R}^n_+,\xi)$ and let $f(x)=1+\xi\big(\mathrm{I}(x)\big)/4$ where $\xi\big(\mathrm{I}(x)\big):=\xi\big(\mathrm{I}(x),\mathrm{I}(x)\big)$. Then $\mathrm{M}\setminus\{q\}$ endowed with the metric $f^{-2}\overline{g}=f^{-2}\widetilde{\mathcal{G}}_q^{\frac{4}{n-2}}g$ is isometric to the standard hemisphere $\mathbb{S}^n_+$. The function $f^{-2}\widetilde{\mathcal{G}}_q^{\frac{4}{n-2}}$ is smooth on $\mathrm{M}\setminus\{q\}$ and extends to a positive function all over $\mathrm{M}$. Thus we have shown that $\mathrm{M}$ is conformally equivalent to the standard hemisphere $(\mathbb{S}^n_+,g_{\rm st})$.
\hfill$\square$
\bibliographystyle{amsalpha}
\section{Introduction}
In recent years, there has been a growing interest
in noise in mesoscopic systems~\cite{Blanter2000}.
Normally, noise is an unwanted feature which, according to classical physics,
can in principle be made arbitrarily small by lowering the temperature;
according to quantum physics, however,
noise cannot be eliminated, due to the intrinsic randomness of elementary processes. Furthermore,
noise, rather than being a hindrance, contains valuable information
which adds to that carried by the mean value of the quantity observed.
Simple probability distributions, like e.g. the gaussian ones,
are determined by the mean values and noise. Even though gaussian distributions
are ubiquitous, there are interesting physical processes which are described
by non-gaussian distributions.
Noise alone is not sufficient for the determination of such distributions.
One needs to know all the moments, or equivalently their generating function.
Full Counting Statistics (FCS)~\cite{Levitov1993,Levitov1996} consists in determining the latter.
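As an elementary numerical illustration (a standard textbook example, not a result of this paper; the function names are ours): a single quantum channel with transmission $T$, probed over $M$ transfer attempts, has binomial counting statistics with cumulant generating function $\mathcal{F}(\chi)=M\ln\!\big[1+T(e^{i\chi}-1)\big]$, and the cumulants follow by differentiation at $\chi=0$.

```python
import numpy as np

# Toy illustration (binomial FCS of a single channel): cumulants are
# derivatives of F(chi) = M*log(1 + T*(exp(i*chi) - 1)) with respect
# to (i*chi), evaluated at chi = 0.

def cgf(chi, M, T):
    return M * np.log(1.0 + T * (np.exp(1j * chi) - 1.0))

def first_two_cumulants(M, T, h=1e-4):
    # central finite differences: <<N>> = -i F'(0), <<N^2>> = -F''(0)
    c1 = ((cgf(h, M, T) - cgf(-h, M, T)) / (2 * h) / 1j).real
    c2 = (-(cgf(h, M, T) - 2 * cgf(0.0, M, T) + cgf(-h, M, T)) / h**2).real
    return c1, c2
```

For $M=100$, $T=0.3$ this recovers the mean $MT=30$ and the variance $MT(1-T)=21$ numerically.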
The FCS approach has been receiving increasing attention from the physics community.
Its connection with the formalism of non-equilibrium Green functions~\cite{Keldysh1964} and
circuit theory~\cite{Nazarov1999} was established~\cite{Belzig2001}.
It has been used to characterize transport in heterostructures~\cite{Belzig2002},
shuttling mechanism~\cite{Pistolesi2004,Romito2004}, charge pumping~\cite{Andreev2001},
and multiple Andreev reflections~\cite{Cuevas2003,Johansson2003}.
The technique was extended to charge counts in multiterminal structures~\cite{Nazarov2002},
and to spin counts~\cite{DiLorenzo2004}.
The FCS of a general quantum variable was studied, and the
necessity of including the dynamics of detectors stressed~\cite{Nazarov2003}.
There are some open issues in FCS.
The main one concerns whether it is always possible to find
a generating function which allows an interpretation
in terms of probabilities.
Indeed, in Ref.~\onlinecite{Belzig2001}
it was found that such an interpretation is not straightforward.
This problem was shown to amount to the long standing
question of the non-positivity
of the Wigner distribution~\cite{Nazarov2003}.
The lack of a classical interpretation was
attributed to the breaking of gauge invariance
for the charge degrees of freedom, due to the
presence of superconducting terminals.\cite{Belzig2001}
It is interesting to consider a more general mechanism
of gauge invariance breaking
which involves spin degrees of freedom.
It may be caused either by the presence of
ferromagnetic terminals or by subsequent detectors
measuring different components of
the spin. We shall consider the latter case.
Another issue we want to address is the range of applicability of
the Projection Postulate. Since von Neumann's
classic work\cite{vonNeumann1932},
it is known that Schr\"odinger's evolution cannot account for the fact
that the result of an individual measurement has a unique value, and
cannot be described by a superposition.
It is necessary to supplement Schr\"odinger's evolution with an additional
evolution (type II in the terminology of von Neumann), projecting
the state of the observed system into the eigenstate
of the measured observable corresponding to the actual outcome.
This can be done at several stages: one could dispense with the description
of the measurement, and project the wave-function of the system.
Alternatively, one may continue the chain by describing the interaction
of system and detector, trace out the system's degrees of freedom, and then
project the state of the detector.
This chain can be continued indefinitely, by skipping
the projection of the detector's state, and considering the coupling
of the detector with the visible radiation, of the latter with the eye
of the observer, etc.
So far, it has been implicitly assumed that the predictions of Quantum
Mechanics do not depend on the stage at which one chooses to
stop the chain and project. In this work, we shall demonstrate
that different statistics are predicted when one projects
at the level of the system and at the level of the detector.
The reason for this is that, in the example we shall discuss,
the quantum dynamics of the detectors cannot be neglected, even
after accounting for decoherence.
Besides being the simplest illustration of noncommuting variables,
detection of spin components is a worthy subject in its own right.
Spintronics, i.e.~the study of how to
produce, detect, and manipulate spins,
is a rapidly growing field~\cite{Awschalom2002}, which has already found
important technological applications~\cite{Chang2003}.
In this paper, the subsequent detection of
non-commuting variables is discussed.
The Full Counting Statistics approach
allows one to obtain the joint probability distribution
for the counts. The non-commutativity
of the observed variables manifests itself in the fact that the
back-action of detectors, and their quantum dynamics,
must be taken into account.
This remains true also when an environment-induced
dissipative dynamics for detectors is included.
The reason is that one does not observe one particle
at a time, but a flux of particles
traversing the detectors at a rate
which can be larger than the decoherence rate of the detectors themselves.
The present paper is laid out as follows:
In section \ref{sec:background}
the connection of FCS with the density matrix of the detectors is derived,
and a general theory
of detection of non-commuting variables is presented;
a model for the measurement is introduced.
In section \ref{sec:nodynamics},
we discuss the case of ideal quantum detectors, having
no internal dynamics. We argue that they do not provide a realistic model of
detectors because of their long memory.
In section \ref{sec:dynamics},
we discuss the internal dynamics of the detectors.
The fact that detectors are
``classical'' objects is accounted for by
introducing a dissipative dynamics due
to their interaction with an environment.
Then, since we intend to concentrate on spin counts,
in section \ref{sec:spindetector}
we present a model for
a spin detector in solid state, relying on spin-orbit interaction.
We proceed to section \ref{sec:setup}
by introducing the particular system that we study, namely
two normal reservoirs connected by a coherent conductor.
In section \ref{sec:technique}
we give details about the derivation of the FCS for this system, relying
on the full quantum mechanical description of detection process,
and we present the results.
In section \ref{sec:PP}, we discuss the
FCS that would be obtained by a naive application
of the projection postulate,
i.e.~by neglecting the quantum dynamics of the detectors.
In section \ref{sec:comp12},
we compare the results of the two approaches for the case
of one and two detectors in series, and we find that they coincide.
In section \ref{sec:comp3},
we find a discrepancy between the two approaches when
three detectors in series are considered.
In particular, we show that both approaches
predict the same second-order cross correlators,
and that they differ in the prediction
of fourth order cumulants $\langle\!\langle \sigma_1^2 \sigma_3^2\rangle\!\rangle$.
Finally, in section \ref{sec:case},
the case of three spin detectors, monitoring the $X$, $Y$ and $Z$
components of spin current, is presented.
The probability distribution for the counts reveals large
deviations from the Gaussian distribution.
\section{General considerations about measurements}\label{sec:background}
All the information that we can gain about a system
is stored in the density matrix of one
or more detectors (denoted by index $a$)
which have interacted with the system during a time $\tau$.
The reduced density matrix is
\[\Hat{\rho}_{det}(\tau)=
\Tr_{sys}{\left\{ \mathcal{U}_{\tau,0}\Hat{\rho}(0)
\mathcal{U}_{\tau,0}^\dagger\right\}}\;,\]
where $\Tr_{sys}$ stands for the trace over
the degrees of freedom of the measured system,
$\Hat{\rho}(0)$ is the initial density matrix of system and detectors, and
$\mathcal{U}_{\tau,0}$ is the time evolution operator.
We focus on the representation of $\Hat{\rho}_{det}$ in a basis $|\phi\rangle$,
$\rho^{\phi,\phi'}_{det}(\tau) \equiv
\langle\phi|\Hat{\rho}_{det}(\tau)|\phi'\rangle$.
Here, $|\phi\rangle = \bigotimes_a |\phi_a\rangle$
is a vector in the Hilbert space of the detectors.
Since the time-evolution is linear, a matrix
$\mathcal{Z}^{\phi,\phi'}_{\mu,\mu'}$
exists such that
\begin{align*}
\rho^{\phi,\phi'}_{det}(\tau)
=&\ \int d\mu d\mu' \mathcal{Z}^{\phi,\phi'}_{\mu,\mu'}
\rho^{\mu,\mu'}_{det}\!\!(0)\;.
\end{align*}
Thus, given that one knows the initial density matrix of the detectors,
$\mathcal{Z}$ contains all the information
one can extract from the measurement.
However, part of this information gets lost: we can only know
the diagonal elements of the density matrix in a particular basis,
identified by the pointer
states of the detectors. These states, which will be denoted by
$|N\rangle$, correspond to
the detectors indicating the values $\{N_a\}$,
and are characterized by the property that,
if one prepares the detector in a generic state identified by a density matrix
$\rho_{det}^{N,N'}$, and then lets the environment act on it,
the off-diagonal elements of the density matrix in the basis $|N\rangle$
will go to zero with an exponential decay.
We point out that this does not dispense us from invoking a projection at
some point. The presence of the environment explains how the ensemble
averaged density matrix reduces to diagonal form in the basis of pointer
states, but it does not explain how the density matrix of the subensemble
corresponding to an outcome $N_a$ purifies to the state $|N_a\rangle$.
This requires invoking the projection postulate for the detector,
or, equivalently, an evolution dictated by the rules of the
bayesian approach\cite{Korotkov2002}
or of the quantum trajectory \cite{Dalibard1992} one.
The quantity accessible to observation is
the probability to find the detectors in states
$|N_a\rangle$, after a time $\tau$.
It is given by
\begin{equation}\label{eq:prob1}
P_\tau(N) =
\langle N|
\Hat{\rho}_{det}(\tau)
|N\rangle\;.
\end{equation}
If the off-diagonal elements of the detector's density matrix decay instantaneously,
$P_\tau(N)$ depends only on the probabilities at a time immediately
preceding $\tau$, $P_{\tau-dt}(N)$, and the process is Markovian.
Now, let us consider the
operators $\Hat{K}_a$ corresponding to the read-out variables of the
detectors. Their eigenstates are $|N_a\rangle$,
where $N_a$ indicates an integer which is proportional to $K_a$.
The proportionality constant is provided below.
Let us also introduce the conjugated
operators $\Hat{V}_a$,
$\left[\Hat{K}_a,\Hat{V}_b\right]=i\delta_{ab}\hbar$, and their eigenstates
$|\phi_a\rangle$,
with $\phi_a$ dimensionless quantities proportional to $V_a$.
If we insert to the left and to the right of $\Hat{\rho}_{det}$
in the RHS of Eq.~\eqref{eq:prob1}
the identity (in the detectors' Hilbert space)
in the form
$\mathcal{I}\propto\int \frac{d\phi}{2\pi} |\phi\rangle\langle \phi|$,
we obtain
\begin{align}
\nonumber
P_\tau(N) =&\ \int \frac{d\phi^+}{2\pi}\frac{d\phi^-}{2\pi}\\
&\times \exp{\left[-\frac{i}{\hbar} (\phi^+-\phi^-)\cdot N\right]}
{\rho}^{\phi^-,\phi^+}_{det}(\tau)\;.
\end{align}
We used the shorthand $\phi\cdot N \equiv \sum_a \phi_a N_a$.
We change variables according to $\phi^\pm = (\Phi \pm \phi)/2$.
Here, $\Phi$ and $\phi$
are the classical and quantum part of the field, respectively.
This terminology reflects the fact that fluctuations of $\Phi$
are set by the temperature, while fluctuations of $\phi$ depend on
$\hbar$, as we shall prove in section \ref{sec:dynamics}.
The time evolution depends on the Hamiltonians of the system and the detectors,
and on their interaction.
We focus on the detection of internal degrees of freedom
of a system whose center of mass coordinate
$\mathbf{x}$ is not affected by the presence of the detectors.
We consider several detectors in series along the path $\mathbf{x}(t)$.
We take the interaction to be of the form $H_{int} = \sum_a H_{int}^a$ with
\begin{equation}\label{eq:int}
H_{int}^a=-\alpha_a(\mathbf{x})\lambda_a \Hat{V}_a \Hat{J}_a\;,
\end{equation}
where $\mathbf{x}$ is the coordinate of the wave-packet,
$\lambda_a$ are coupling constants depending
on the actual detection setup,
$\Hat{J}_a$ is an operator
on the Hilbert space of the system's degrees of freedom, and
$\alpha_a(\mathbf{x})$ is a function which
is unity inside the sensible area of the $a$-th detector and zero outside.
For a one-dimensional motion, e.g., we would have
$\alpha(x)=\theta(x-X^{(in)})\theta(X^{(fin)}-x)$, where
$X^{(in)}$ and $X^{(fin)}$ are the coordinates
delimiting the sensible area of the detector, and
$\theta(x)=0$ if $x< 0$, $\theta(x)=1$ if $x\ge 0$.
$\Hat{J}_a$ is the current associated with the measured quantity, such that
the output of the detector does not depend
on the time each particle takes to cross
its sensible area.
Indeed, the equation of motion for the ``measuring'' operator is
\[\frac{d\Hat{K}_a(t)}{dt} = \alpha_a(\mathbf{x}(t))\lambda\Hat{J}_a(t)\;.\]
In the equation above, we have assumed that
the operator $\Hat{K}_a$ commutes with
the unperturbed Hamiltonian of the detector.
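The traversal-time independence can be checked with a toy model of our own (not the paper's detector Hamiltonian): a particle of charge $q$ crossing the sensible area $[0,L]$ at constant speed $v$ carries the average current $J=qv/L$ while inside, and integrating $dK/dt$ gives $\Delta K=\lambda q$ for any $v$.

```python
import numpy as np

# Toy sketch: integrating dK/dt = alpha(x(t)) * lam * J over the crossing
# gives Delta K = lam * q, independently of the crossing speed v.

def readout_change(q, v, L, lam, n_steps=100000):
    t_cross = L / v
    t = np.linspace(0.0, 2.0 * t_cross, n_steps)   # integrate past the exit
    x = v * t
    alpha = ((x >= 0.0) & (x <= L)).astype(float)  # window function alpha(x)
    J = q * v / L                                  # current while inside
    dt = t[1] - t[0]
    return lam * np.sum(alpha * J) * dt
```

A fast and a slow crossing yield the same readout change $\lambda q$, as claimed.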
In general, however, $\langle\Hat{K}_a\rangle$ will fluctuate in time
due to background noise. Such fluctuations put a lower
limit to the resolution of the detector.
For a reliable detection, the resolution must be smaller than
the minimal variation $K_{Qa}$ one intends to measure.
Let us introduce proper units.
We consider the case where the measured
quantities have discrete values proportional to a quantum $E_{Qa}$.
For instance, for charge $E_Q=e$, the elementary charge, and for spin
$E_Q=\hbar/2$. Every time an elementary unit passes the detector,
the readout of the latter will change by $K_{Qa}= \lambda_a E_{Qa}$.
Thus, we introduce the number and phase operators
$N_a = K_a/K_{Qa}$, $\phi_a = V_a/V_{Qa}$,
with $V_{Qa}=\hbar/K_{Qa}$.
We further assume that
\emph{i}) the detectors are initially prepared in a state with zero counts
$\Hat{\rho}_{det}(0)=|N\!\!=\!\!0\rangle\langle N\!\!=\!\!0\,|$,
and \emph{ii}) the spread of the system wave-packet is much smaller than
the distance between two subsequent detectors,
$\Delta x \ll X_{a+1}-X_a$.
The first assumption implies that
\[\rho^{\phi,\phi'}_{det}(\tau) =
\mathcal{Z}(\phi,\phi')\equiv
\int \frac{d\mu}{2\pi} \frac{d\mu'}{2\pi}
\mathcal{Z}_{\mu,\mu'}^{\phi,\phi'}\;,
\]
or, explicitly,
\begin{align}
\nonumber
\mathcal{Z}(\phi^+,\phi^-) =&\
\int \frac{d\mu^+}{2\pi}\frac{d\mu^-}{2\pi}
\int\limits_{\mu^+}^{\phi^+}\!\mathcal{D}\phi^+(t)
\!\int\limits_{\mu^-}^{\phi^-}
\!\mathcal{D}\phi^-(t) \\
&\label{eq:FCS_definition}
\exp{\left(S_{det}[\phi^+]-S_{det}[\phi^-]+
\mathcal{F}_{sys}[\phi^+,\phi^-]\right)},
\end{align}
where
the limits of the path-integrals fix the values of the fields at
$t=0$ and $t=\tau$, and
we introduced the influence functional of the system
on the detectors\ \cite{Feynman1963a}
\begin{align}\nonumber
&\exp{
\left(\mathcal{F}_{sys}[\phi^+,\phi^-]\right)}:=
\Tr_{sys} \\
&{\left\{
\exp{\left(S_{int}[\phi^+,\Hat{J}]\right)}
\Hat{\rho}_{sys}(0)
\exp{\left(-S_{int}[\phi^-,\Hat{J}]\right)}
\right\}},
\end{align}
where $S_{int}$ is the action corresponding to the interaction
$H_{int}$ given in Eq.\ \eqref{eq:int}.
We shall call $\mathcal{Z}$ the quantum generating function.
In principle it depends on twice as many parameters
as the classical generating function does.
In the rest of the paper we shall use the cumulant generating function (CGF),
$\mathcal{F}\equiv \log{\mathcal{Z}}$.
The advantage of working with the CGF is that it often has
a clearer interpretation than $P_\tau$, since independent processes
contribute factors to $P_\tau$ and simply additive terms to the CGF. Hence,
if subsequent events are independent, the CGF is proportional
to the observation time $\tau$.
Thus, time averaged cumulants, which for long $\tau$ correspond
to zero-frequency noise and higher order correlators for currents,
have a finite value.
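The additivity of the CGF for independent processes can be verified numerically (an elementary illustration of our own; the distributions below are arbitrary examples): the count distributions of independent processes convolve, hence the characteristic functions multiply and the CGFs add.

```python
import numpy as np

# Independent processes: P_total = P1 * P2 (convolution), so the
# cumulant generating functions add exactly.

def cgf_from_probs(p, chi):
    n = np.arange(len(p))
    return np.log(np.sum(p * np.exp(1j * chi * n)))

p1 = np.array([0.5, 0.5])      # one attempt, transmission 0.5
p2 = np.array([0.3, 0.7])      # one attempt, transmission 0.7
p12 = np.convolve(p1, p2)      # counts of the two processes combined

chi = 0.37
lhs = cgf_from_probs(p12, chi)
rhs = cgf_from_probs(p1, chi) + cgf_from_probs(p2, chi)
```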
\section{Detectors with no dynamics}\label{sec:nodynamics}
We analyze the situation where the dynamics of the detectors is neglected.
This means that
\[
\exp{S_{det}[\phi(t)]}=\prod_t \delta(\phi(t)-\phi),\]
i.e.\ the counting fields are constant.
We consider first the case of one detector.
Then
$\mathcal{Z}^{\phi^+,\phi^-}_{\mu^+,\mu^-} = \delta_{\phi^+,\mu^+}
\delta_{\phi^-,\mu^-} \mathcal{Z}(\phi^+,\phi^-)$, and
\begin{align}
\mathcal{Z}^{\phi^+,\phi^-} =&
\Tr_{sys}{\left\{
\mathcal{U}^{\phi^+}
\Hat{\rho}_{sys}(0){\mathcal{U}^{\phi^-}}^\dagger\right\}}
,
\end{align}
where
\begin{align*}
\mathcal{U}^{\phi} =&\ \mathcal{T}
\exp{\left[- i \phi\int dt \Hat{J}(t)/E_Q\right]}
\end{align*}
($\mathcal{T}$ being the time-ordering operator)
is an operator in the system's Hilbert space.
By exploiting
the cyclic property of the trace, we have that,
if $\Hat{J}$ is a conserved operator or,
more generally, $[\Hat{J}(t),\Hat{J}(t')]=0$,
then $\mathcal{Z}(\phi^+,\phi^-)$
\emph{depends only on $\phi=\phi^+-\phi^-$}.
It has been shown that in this case
$\mathcal{Z}(\phi^-,\phi^+)$ gives directly
the generating function~\cite{Nazarov2003}.
Next, we consider the case of two detectors.
The kernel $\mathcal{Z}$ is now
\begin{align}
\label{eq:gftwodet}
\mathcal{Z}(\phi^+,\phi^-) =&
\Tr_{sys}{\left\{
\mathcal{U}^{\phi_2^+} \mathcal{U}^{\phi_1^+}
\Hat{\rho}_{sys}(0)
{\mathcal{U}^{\phi_1^-}}^\dagger{\mathcal{U}^{\phi_2^-}}^\dagger\right\}}
\;.
\end{align}
Here we exploited assumption \emph{ii}),
and defined
\begin{align*}
\mathcal{U}^{\phi_a} =&\
\mathcal{T}
\exp{\left[- i \phi_a\int dt \Hat{J}_a(t)/E_Q\right]}\;.
\end{align*}
Once again we exploit the cyclic property of the trace
and see that the expression
does not depend on the combination
${\Phi}_2\equiv \phi_2^++\phi_2^-$.
From Eq.~\eqref{eq:gftwodet} we see that in general, for two detectors,
$\mathcal{Z}$ does depend on $\Phi_1$, even when $\Hat{J}_a$ are conserved.
However, when the system is initially in the unpolarized state
$\Hat{\rho}_{sys}\propto \mathcal{I}_{sys}$,
the dependence on ${\Phi_1}$ disappears as well.
Another case in which this happens is when
the detectors monitor two commuting degrees
of freedom which are conserved. For instance,
if the current $\Hat{J}$ is not conserved,
in general $\left[\Hat{J}(t),\Hat{J}(t')\right]\neq 0$.
Thus, even if one repeats the same measurement,
one would obtain different results.
If however the current is conserved $\Hat{J}(t)=\Hat{J}$,
and both detectors
measure $\Hat{J}$, the kernel depends only on the combination $\phi_1+\phi_2$,
which means that the two measurements will give the same result.
In general, when there are three detectors,
labelled 1, 2 and 3 according to their order,
measuring non-commuting quantities, even if the
system is initially unpolarized,
the integrand will depend on the classical variable of the middle
detector, $\Phi_2$.
When such a dependence appears in the expression
for the generating function,
it is a signal that the internal dynamics
of the detector must be taken into account.
Indeed, when $\mathcal{Z}$ does not depend on $\Phi$,
the density matrix is diagonal
in the basis $|N\rangle$.
When $\mathcal{Z}$ does depend on $\Phi$,
$\rho_{det}$ develops off-diagonal components.
We consider as an example the case where the detectors' density matrix is
prepared in a diagonal state at $t=0$, and two particles are sent
to the detectors one at time $t_1>0$ and the other at time $t_2>t_1$,
in such a way that their wave-packets do not overlap.
Then, after the first particle has crossed the detectors,
the density matrix of the detectors
$\rho_{det}^{N,N'}$ has off-diagonal elements,
which depend on the original diagonal elements (probabilities).
Since one observes only the probabilities,
this cannot be ascertained directly.
However, when the second particle crosses the detectors, the new probabilities
will be a combination of the former diagonal and off-diagonal elements.
In order to know $\rho_{det}^{N,N}(t_2)$, knowledge of
$\rho_{det}^{N,N}(t_1)$ is not sufficient.
Thus, the process is non-Markovian.
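This can be made concrete with a minimal two-level "detector" toy model (ours, not the paper's model): the same unitary kick acts once per incoming particle, and discarding the coherences after the first kick (a Markovian approximation) changes the final probabilities.

```python
import numpy as np

# Two identical kicks U on a two-level "detector".  Keeping the full
# density matrix between kicks versus zeroing its off-diagonal elements
# after the first kick gives different final probabilities.

theta = np.pi / 4
c, s = np.cos(theta), np.sin(theta)
U = np.array([[c, -1j * s], [-1j * s, c]])

rho0 = np.diag([1.0, 0.0]).astype(complex)
rho1 = U @ rho0 @ U.conj().T               # after the first particle
rho1_dephased = np.diag(np.diag(rho1))     # coherences discarded

p_full = np.real(np.diag(U @ rho1 @ U.conj().T))
p_markov = np.real(np.diag(U @ rho1_dephased @ U.conj().T))
# p_full -> [0, 1], while p_markov -> [0.5, 0.5]
```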
In principle, even after the detector
has been measuring a large number of particles for a long time,
the off-diagonal
elements created after the passage of the first particle
will still influence its dynamics.
This is not realistic, since,
because of the coupling of the detectors to the environment,
the off-diagonal elements will go to zero within a typical time $\tau_c$.
In order to account for this, one should
consider the dynamics of the detectors,
which we shall do in the next section.
\section{Detectors with internal dynamics}\label{sec:dynamics}
We model the decoherence
of the detectors by introducing a dissipative
dynamics for the detectors' degrees of freedom, i.e.
we couple the detectors to an environment, whose degrees of
freedom are traced out.
We model the environment as a system of independent harmonic oscillator
in thermal equilibrium, having the action
\[S_{env} = -\frac{i}{\hbar}\int dt \sum_j
\frac{1}{2} m_j\left[ \dot{x}_j^2 - \omega_j^2 x_j^2\right]
,\]
and coupling to the detectors through the position operator
\begin{equation}\label{eq:det-env}
S_{det-env}=\frac{i}{\hbar} \int dt \sum_{ja} c_{ja} x_j V_{Qa} \phi_a,
\end{equation}
with $c_{ja}$ the coupling constant between the $j$-th oscillator
and the $a$-th detector.
Then the generating function becomes
\begin{widetext}
\begin{align}\nonumber
&\mathcal{Z}(\phi^+,\phi^-) =
\int \frac{d\mu^+}{2\pi} \frac{d\mu^-}{2\pi} \int dx_j dx^+_j dx^-_j
\int\limits_{\mu^+}^{\phi^+}\!\mathcal{D}\phi^+(t)
\!\int\limits_{\mu^-}^{\phi^-}
\!\mathcal{D}\phi^-(t)
\int\limits_{x_j^+}^{x_j}\!\mathcal{D}x^+_j(t)
\int\limits_{x_j^-}^{x_j}
\!\mathcal{D}x^-_j(t)\rho_{env}(x^+,x^-)\\
&\exp\biggl\{
S_{det}[\phi^+]-S_{det}[\phi^-]+F_{sys}[\phi^+,\phi^-]+
S_{env}[x^+_j]-S_{env}[x^-_j]
+ S_{det-env}[x^+_j,\phi^+]-S_{det-env}[x^-_j,\phi^-]
\biggr\}
\;.
\end{align}
\end{widetext}
In the expression above, we isolate the part
\begin{align}
\nonumber
&\exp{\mathcal{F}_{env}} = \int dx_j dx^+_j dx^-_j
\int\limits_{x_j^+}^{x_j}\!\mathcal{D}x^+_j(t)
\int\limits_{x_j^-}^{x_j}
\!\mathcal{D}x^-_j(t)\\
&\nonumber
\exp\biggl(S_{env}[x^+_j]-S_{env}[x^-_j] +S_{det-env}[x^+_j,\phi^+]\\
&\phantom{S_{det-env}}-S_{det-env}[x^-_j,\phi^-]\biggr)
\rho_{env}(x^+,x^-)
\end{align}
which gives the influence functional of the environment on the detectors.
We notice from Eq.\ \eqref{eq:det-env} that,
since the functions $\phi^\pm_a(t)$ are fixed by the external path-integrals,
they act as
an external source $I^\pm_j(t)=\sum_a c_{ja} \phi^\pm_a(t)$ on the
$j$-th harmonic oscillator.
It is then possible to perform the independent gaussian path-integrals
over $x_j$, resulting in~\cite{Kleinert2004}
\begin{align}
\nonumber
\mathcal{F}_{env} =&\ -\frac{i}{\hbar}
\sum_a V_{Qa}^2\int_{0}^{\tau} dt \int_{0}^{t} dt'
\left(\phi_a^+(t)-\phi_a^-(t)\right)
\\
&\times\left[\alpha_a(t-t')\phi_a^+(t')
-\alpha_a^*(t-t') \phi_a^-(t')\right]\;,
\end{align}
where the influence of the environment is contained in the
complex functions $\alpha_a(t)$, whose Fourier transforms are
\begin{align}
\alpha_a(\omega)=&\ \frac{1}{2}\left(\coth{\frac{\hbar\beta\omega}{2}}+1\right)
\sigma_a(\omega)\;,\end{align}
where the inverse temperature $\beta=1/k_B T$ comes from
having assumed the bath in thermal
equilibrium ($\Hat{\rho}_{env} = \exp{(-\beta \Hat{H}_{env})}$),
and $\sigma_a$ are the spectral densities
\begin{align}
\sigma_a(\omega)=&\ \pi \sum_j \frac{c_{ja}^2}{m_j\omega_j}
\left[\delta(\omega-\omega_j)-\delta(\omega+\omega_j)\right]\;.
\end{align}
At low frequencies,
we can approximate the odd-functions $\sigma_a$ by
$\sigma_a(\omega)\simeq \gamma_a \omega$ (Ohmic approximation), with
$\gamma_a$ a friction constant, as will become clear below.
We introduce new variables $\phi=\phi^+-\phi^-$, $\Phi=\phi^++\phi^-$.
Thus we get
\begin{align}
\nonumber
\mathcal{F}_{env} =&\ \sum_a \gamma_a V_{Qa}^2\biggl\{
\frac{1}{2\hbar} \int \frac{d\omega}{2\pi}
\omega \Phi_a(\omega)\phi_a(-\omega)
\\
&-\frac{1}{\beta\hbar^2}
\int \frac{d\omega}{2\pi}
\frac{\beta\hbar\omega}{2}\coth{\frac{\beta\hbar\omega}{2}}
\left|\phi_a(\omega)\right|^2\biggr\}\;.
\end{align}
We take the action of the free detectors to be that of harmonic oscillators, i.e.~
\begin{equation}
\mathcal{S}_{det}[\phi]=\sum_a
\frac{-i m_a V_{Qa}^2}{2\hbar}\int \frac{d\omega}{2\pi}
(\omega^2-\Omega_a^2) \left|\phi_a(\omega)\right|^2\;,
\end{equation}
where $m_a$ is the ``mass'' of the detector (i.e.\ it is the inertial term
corresponding to the kinetic energy $m_a V_{Qa}^2\dot{\phi}_a^2/2$).
Then the generating function reads
\begin{align}
\nonumber
&\mathcal{Z}(\phi,\Phi) = \int \frac{d\mu}{2\pi} \frac{dM}{2\pi}
\int\limits_{\mu}^{\phi}\!\mathcal{D}\phi(t)
\!\int\limits_{M}^{\Phi}
\!\mathcal{D}\Phi(t)\\
&\nonumber
\exp\biggl\{\sum_a\biggl[
\frac{-i m_a V_{Qa}^2}{2\hbar} \int \frac{d\omega}{2\pi}
\Phi_a(\omega) g_a^{-1}(\omega) \phi_a(-\omega)
\\
&
-\frac{\gamma_a}{\beta\hbar^2}\int \frac{d\omega}{2\pi}
f(\omega)\left|\phi_a(\omega)\right|^2\biggr]
+\mathcal{F}_{sys}[\phi,\Phi]\biggr\}
\;,
\end{align}
where we introduced
the response function,
\begin{equation} \label{eq:response}
g_a^{-1}(\omega)=\omega^2-\Omega_a^2
+i\frac{\gamma_a}{m_a} \omega\;,
\end{equation}
from which one can see that $\gamma_a$ are proportional to the friction
constant,
and the fluctuation term
\begin{equation}\label{eq:fluctuation}
f(\omega)=\frac{\beta\hbar\omega}{2}\coth{\frac{\beta\hbar\omega}{2}}
.\end{equation}
The part of the action containing the fluctuation term in $\phi(\omega)$
is, at low frequencies, proportional to temperature $T$ and to $1/\hbar^2$.
The factor $1/\hbar^2$ strongly suppresses large fluctuations in $\phi$.
Thus, the influence functional due to the measured system
$\mathcal{F}_{sys}[\Phi,\phi] = \int dt L_{inf}(\Phi(t),\phi(t))$
can be approximated by
$\mathcal{F}_{\phi}[\Phi]:=\int dt\, L_{inf}(\Phi(t),\phi)$.
Integration over $\phi_\omega$ gives finally
\begin{align}
\mathcal{Z}(\phi,\Phi)
=&
\int dM
\int\limits_{M}^{\Phi}
\!\mathcal{D}\Phi(t)\
e^{\mathcal{F}_{\phi}[\Phi]+S_{eff}[\Phi]}\;,
\end{align}
with the effective action
\[S_{eff}[\Phi]= -\frac{1}{2}
\sum_a \frac{(\beta m_a V^2_{Qa})^2}{\gamma_a} \int d\omega
\frac{|g^{-1}_a(\omega)|^2}{f(\omega)} \left|\Phi_{a}(\omega)\right|^2 \;.
\]
We notice that at high temperatures $f(\omega)\simeq 1$, and thus
$\hbar$ disappears from the
effective action for $\Phi$.
For this reason the latter is termed the ``classical'' part of the field.
In the limit of small mass $m_a\to 0$,
$m_a\Omega_a^2 V_{Qa}^2\to E_a$, where $E_a$ has a finite value and
is a typical energy scale of detector $a$,
the effective action simplifies to
\[S_{eff}[\Phi]= -\frac{1}{2}\sum_a\int dt \left[
\tau_{ac} \left(\dot{\Phi}_{a}(t)\right)^2
+ \frac{1}{\tau_{ac} \Delta \Phi_a^2}\Phi_a(t)^2\right]
\;,
\]
with $\tau_{ac} = {\beta\gamma_a V_{Qa}^2}/{2}$
the ``coherence time'' of the detector,
and $\Delta \Phi_a = 2/\beta E_a$ the spread of $\Phi_a$.
\section{Spin detector}\label{sec:spindetector}
We discuss a model for spin detection.
The setup corresponds to the one proposed and used
in~\cite{Cimmino1989} to detect the Aharonov-Casher effect~\cite{Aharonov1984}
for neutrons.
This setup exploits the fact that
a moving magnetic dipole generates an electric one~\cite{Costa1967,Fisher1971}.
To measure this, one encloses
the two-dimensional current lead between the plates of a
capacitor as shown in Fig.~\ref{fig:detector}.
While in Ref.~\onlinecite{Cimmino1989} the neutrons
passed through a fixed electric field, which gave
a constant Aharonov-Casher phase,
in a spin detector the initial voltage applied to the
plates is zero, and the passing of a particle with spin 1/2
will cause the charge
in the capacitor to
show pulses towards positive or negative values depending on the result of the
measurement. The associated phase $K_t = \int_0^t dt'\, Q(t')$ will thus
increase or decrease stepwise in the
ideal situation where spins are transmitted
separately in vacuum through the detector.
Each spin moving with
velocity $\mathbf{v}$ produces an electric field.
For electrons in vacuum,
the interaction term between spin and detector
is given by the spin-orbit coupling
\[H_{int} = - \frac{1}{2}\boldsymbol{E} \cdot (\frac{\boldsymbol{v}}{c^2}\times \boldsymbol{\mu})\;,\]
with $c$ the speed of light, and the factor $1/2$
accounts for the Thomas precession.
The magnetic moment $\boldsymbol{\mu}$ is proportional to the spin
$\boldsymbol{\mu}= (g_S |e|/2m_e) \boldsymbol{S}$,
with $m_e$ the electron mass, $e=-|e|$ its charge, and $g_S$ its
spin gyromagnetic factor.
Thus, we rewrite the interaction as
\[H_{int} = - (g_S |e|/4m_e c^2) \boldsymbol{E} \cdot (\boldsymbol{v}\times \boldsymbol{S})\;.\]
The spin-orbit coupling induces a current in the $RC$ circuit.
The integrated charge traversing the circuit is the detector read-out.
The read-out signal is
proportional to spin current
in the lead $\mathbf J$,
$Q=\lambda \mathbf{n}\cdot\mathbf{J}$,
$\mathbf{n}$ being the unit
vector perpendicular to the direction of the current flow
and parallel to the plates of the capacitor,
$\lambda$ being a proportionality coefficient. The concrete expression
for the latter,
$\lambda=g_S |e| L_\shortparallel/4m_e c^2 w$,
depends on the geometrical dimensions of the detector:
the length of its plates in the direction
of the current, $L_\shortparallel$, and the distance between the plates, $w$.
\begin{figure}[t!]
\includegraphics[width=0.4\textwidth]{detectorbw.eps}
\caption{\label{fig:detector}
The proposed spin current detector. An electron with velocity $\mathbf{v}$
and spin $\mathbf{S}$ induces a voltage drop in a capacitor. The electric
field $\mathbf{E}$ inside the capacitor produces an Aharonov-Casher phase shift
on the electrons.}
\end{figure}
The variable canonically conjugated to the read-out is the
voltage $V$ across the capacitor, and the expression for the
interaction in terms of $V$
contains the same proportionality coefficient $\lambda$,
$H_{int}=-\lambda V \mathbf{n}\cdot\mathbf{J}$.
Our choice of the detection setup is motivated by the fact
that this detector does not influence electron transfers through
the contact and only gives
the minimal feedback compatible with the uncertainty principle:
the electrons passing the capacitor in the direction of current
acquire an Aharonov-Casher phase shift,
which consists in a precession of the spin around
the detection axis $\mathbf{n}$. This
depends on spin and is given by
$\Phi_{AC}= \lambda V \mathbf{n}\!\cdot\!\mathbf{S}/\hbar$.
This is similar to the detection scheme
presented in~\cite{Levitov1996} for transferred charges.
A fundamental complication in comparison with the charge FCS
is that in our case the phase shift depends on spin, so that even the
minimal feedback influences the statistics
of the outcomes of following spin detectors.
We introduce the dimensionless variables $N = 2\int dt\, Q/\hbar\lambda$ and
$\phi = \lambda V/2$.
Then $N$ varies by one every time a spin 1/2 crosses the detector.
With reference to Eqs.~\eqref{eq:response} and \eqref{eq:fluctuation}, we have
$m=LC^2\to 0$, $\Omega^2=1/LC\to \infty$,
$m\Omega^2\to C$,
$\gamma\to RC^2$,
$E=4C/\lambda^2$,
with $L$, $R$, and $C$ the inductance (assumed negligible),
resistance, and capacitance of the circuit.
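The limits quoted above can be checked symbolically. The following sketch (illustrative, using sympy) verifies that $m\Omega^2 \to C$ and that, for negligible inductance, $m\,g^{-1}(\omega)$ of Eq.~\eqref{eq:response} reduces to the purely $RC$ form $-C + iRC^2\omega$:

```python
import sympy as sp

L, R, C, w = sp.symbols('L R C omega', positive=True)

# Detector parameters expressed through the circuit elements (see text)
m = L * C**2          # detector "mass"
Omega2 = 1 / (L * C)  # squared eigenfrequency Omega^2
gamma = R * C**2      # friction coefficient

# m * Omega^2 -> C: the restoring term survives the small-mass limit
assert sp.simplify(m * Omega2 - C) == 0

# Response function of Eq. (response), multiplied by m
m_ginv = sp.expand(m * (w**2 - Omega2 + sp.I * (gamma / m) * w))

# Negligible inductance, L -> 0: only the RC part survives
assert sp.simplify(m_ginv.subs(L, 0) - (-C + sp.I * R * C**2 * w)) == 0
```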
\section{The setup considered}\label{sec:setup}
We consider a system composed of two metallic,
unpolarized leads, connected through a
coherent conductor, characterized by a set of transmission probabilities
$T_n$, where $n$ identifies transmission channels.
A negative bias voltage $V$ is applied to the left lead.
At the right of the conductor there are several spin detectors, labelled
from left to right by $a=1,2\cdots$, and a current detector, denoted by $a=0$.
The counting fields will then be $\phi_a$, with $a=0,1,\cdots$.
Since charge and spin currents commute,
the current detector can be positioned at any point
along the chain of detectors,
without influencing the statistics of the outcomes.
The setup is depicted in Fig.~\ref{fig:setup}.
\begin{figure}[htb!]
\includegraphics[width=0.3\textwidth]{setupbw.eps}
\caption{\label{fig:setup}The setup considered, in the case of three spin
detectors and one charge detector.}
\end{figure}
We require that the coherent conductor is non-polarizing.
Thus, the average spin current is zero.
However, there are spin fluctuations, which are revealed by measuring
noise and higher order correlators (or cumulants).
From the symmetry with respect to reversal of spin,
we can predict a priori that all
odd cumulants are zero.
We shall concentrate on a situation where
there are three spin detectors.
This is because,
as anticipated in section \ref{sec:background},
the current is unpolarized and
one needs at least three detectors
monitoring non-commuting quantities in order to see
non-trivial consequences of the detectors' feedback on the system.
The feedback consists in the wave-function
picking up an Aharonov-Casher phase while
traversing each detector.
\section{Results}\label{sec:technique}
The technique we use is an extension of the
scattering theory of charge FCS.
This theory~\cite{Levitov1993,Levitov1996,Belzig2001}
expresses FCS in terms of a phase factor $e^{i\chi}$ acquired
by scattering waves upon traversing the charge detector.
Since we do not consider energy-resolved measurements,
the phase factor does not depend on the channel, and
the approach works for a multi-channel conductor as well as
for a single-channel one.
The phase factor $e^{i\chi}$ can be seen as resulting from a gauge transform,
to be applied to the (known) Green function of the right lead,
that removes the coupling term~\cite{Levitov1996,Belzig2001}
$\hat H_{int}=-\frac{\hbar}{e} \Hat\chi \hat I$.
For the case of the spin detectors,
the gauge transform introduces a phase factor which is a unitary matrix in spin space.
Namely,
the gauge transform generated by spin detector $a$ is
$e^{i \phi_{a} \mathbf{n}_{a}\cdot\boldsymbol{\tau}}$.
In this matrix, $\boldsymbol{\tau}$ is a pseudovector of $2\times2$
Pauli matrices, and $\mathbf{n}_a$ is the direction along which detector $a$ detects
spin current.
The Keldysh Green function of the lead is
\[\check{G}_{l}(E) =
\begin{pmatrix}1-2f_l&-2f_l\\-2(1-f_l)&2f_l-1
\end{pmatrix}\;,\]
where $l\in\{L,R\}$ denotes the left or right lead, and
$f_l$ is the corresponding Fermi occupation number
at energy $E$ and chemical potential $\mu_l$.
Each element of this matrix is itself a matrix in spin space;
since the leads are assumed to be unpolarized, these spin matrices are proportional to the identity.
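As a sanity check, $\check{G}_l$ squares to the identity in Keldysh space for any occupation $0\le f_l\le 1$; a minimal numerical sketch:

```python
import numpy as np

def G_lead(f):
    """Keldysh Green function of a lead; f is the Fermi occupation
    at the given energy (spin structure suppressed: identity)."""
    return np.array([[1 - 2*f, -2*f],
                     [-2*(1 - f), 2*f - 1]], dtype=float)

# Normalization: G^2 = 1 for any occupation number
for f in (0.0, 0.3, 0.5, 1.0):
    assert np.allclose(G_lead(f) @ G_lead(f), np.eye(2))
```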
The matrix current is given by \cite{Belzig2002}
\begin{equation}
\check{I}(\chi,\phi) =
\frac{e^2}{2\pi\hbar}
\sum_{n}
\frac{{T}_{n}\left[\check{G}_{L}
,\check{\tilde{G}}_{R}\right]}
{1+
{T}_{n} \left(\left\{
\check{G}_{L},\check{\tilde{G}}_{R}\right\}
-2\right)/4}\;,
\end{equation}
from which it follows that the quantum generating function is
\[\mathcal{F} = \frac{e^2}{2\pi\hbar}
\sum_{n} \int dE
\log{\left(1+{T}_{n} \left(\left\{
\check{G}_{L},\check{\tilde{G}}_{R}\right\}
-2\right)/4\right)}\;.\]
Here $\left[\cdots\right]$ ($\left\{\cdots\right\}$) denotes the commutator (anticommutator)
of two matrices, and $\check{\tilde{G}}_{R}$ is the transformed
matrix
\[\check{\tilde{G}}_{R} = e^{i\bar{\chi}}
\prod^{\rightarrow}_a e^{i\bar{\phi}_{a}\mathbf{n}_a\cdot \boldsymbol{\tau}}
\check{G}_{R}
\prod^{\leftarrow}_a e^{-i\bar{\phi}_{a}\mathbf{n}_a\cdot \boldsymbol{\tau}}
e^{-i\bar{\chi}}\;,\]
where $\bar{\chi}= \diag(\chi^+,\chi^-)$, $\bar{\phi} =\diag{(\phi^+,\phi^-)}$
are matrices in Keldysh space.
After substituting the expression for $\check{G}_{R}$, we obtain
\[\check{\tilde{G}}_{R} =
\begin{pmatrix}
1-2f_R&-2f_R e^{i\chi}\mathcal{M}\\
-2(1-f_R) e^{-i\chi} \mathcal{M}^\dagger&2f_R-1
\end{pmatrix}
\;,\]
where
\begin{equation}
{\cal M} \equiv \prod^{\rightarrow}_a
e^{i(\phi^+_{a}/2)\mathbf{n}_a\cdot \boldsymbol{\tau}}
\prod^{\leftarrow}_a
e^{-i(\phi^-_{a}/2)\mathbf{n}_a\cdot \boldsymbol{\tau}}
\label{eq:matrix}
\end{equation}
is a matrix in spin space.
We notice that
\emph{i}) the charge fields come only in the combination
$\chi = \chi^+-\chi^-$, and
\emph{ii})
the phase factors
$e^{i \chi}$ in the expression for charge FCS
are replaced by $e^{i \chi}{\cal M}$ to give the FCS of charge
and spin counts after taking trace over spin. If we also notice
that ${\cal M}$ is a $(2\times 2)$ matrix with eigenvalues
$e^{\pm i \alpha}$, we arrive at
\begin{equation}
\mathcal{F}(\chi,\{\phi^+_{a}\},\{\phi^-_{a}\})
=\frac{1}{2} \sum_{\pm} \mathcal{F}_{c}(\chi \pm \alpha)\;,
\label{eq:main_relation}
\end{equation}
where $\mathcal{F}_c(\chi)$ is the generating function for charge counting.
The parameter $\alpha$ is given by
\begin{equation}\label{eq:matrixeigen}
\cos{\alpha}=\frac{1}{2}\,\mathrm{tr}\,{\cal M}\;.
\end{equation}
The explicit expression for the system considered here,
in terms of the transmission
probabilities through the contact and the applied bias is,
at zero temperature,
\begin{equation}
\mathcal{F}
=\int_0^\tau\frac{dt}{\tau_V} \sum_n\log{\left[R_n^2 + T_n^2 e^{2i\chi} + 2 R_n T_n e^{i\chi}\cos{\alpha}
\right]}\;,
\label{eq:main_relation2}
\end{equation}
with $R_n\equiv 1-T_n$, $\tau_V \equiv 2\pi\hbar/eV$.
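Equation~\eqref{eq:main_relation2} is consistent with the splitting of Eq.~\eqref{eq:main_relation}: per attempt, the argument of the logarithm factorizes as $(R_n+T_ne^{i(\chi+\alpha)})(R_n+T_ne^{i(\chi-\alpha)})$, i.e.\ the product of two charge-only factors at shifted counting fields. A quick numerical verification of this identity (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    T = rng.uniform(0, 1)
    R = 1 - T
    chi, alpha = rng.uniform(-np.pi, np.pi, size=2)
    # argument of the logarithm in Eq. (main_relation2)
    lhs = (R**2 + T**2 * np.exp(2j * chi)
           + 2 * R * T * np.exp(1j * chi) * np.cos(alpha))
    # product of two binomial charge factors at chi +/- alpha
    rhs = ((R + T * np.exp(1j * (chi + alpha)))
           * (R + T * np.exp(1j * (chi - alpha))))
    assert np.allclose(lhs, rhs)
```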
The interpretation is quite straightforward:
electrons coming through different channels
behave independently, as revealed by the
fact that the generating function splits into a sum;
each channel can accommodate two electrons in a spin-singlet configuration;
with probability $R_n^2$ neither of the
two electrons passes the junction, and there is no contribution
to either the charge or the spin counting;
with probability $T_n^2$ both electrons come through the conductor.
This gives a contribution of
two elementary charges transferred (factor $e^{2i\chi}$), but no
spin transfer. Finally, with probability
$p_n=2R_n T_n$, exactly one of the two electrons is transferred.
This gives a contribution to the charge and to the spin counting.
\section{Projection Postulate}\label{sec:PP}
We demonstrate that a different FCS is predicted by
using a different approach,
namely a na\"{\i}ve application of the
projection postulate,
which avoids describing the measurement process and
applies the projection directly to the measured system.
We shall denote this procedure with PP for brevity.
This approach predicts a parameter $\alpha_{PP}$
which does not depend on $\Phi$.
Let us give the details of such a derivation.
When an unpolarized electron arrives at the first detector, the probability of
the outcome $\sigma_1 = \pm 1$ is $P_1(\sigma_1)=1/2$.
The conditional probability that the second detector gives
$\sigma_2$, given that the first read $\sigma_1$ is
$P_2(\sigma_2|\sigma_1)
= (1+\sigma_1 \sigma_2 \mathbf{n}_1\cdot\mathbf{n}_2)/2$.
This is because after the first detection
the spin of the electron is assumed to have collapsed
along $\pm \mathbf{n}_1$. The same happens after the second detection.
Consequently, the conditional probability
that a third detector reads $\sigma_3$, given that
the first read $\sigma_1$ and the second $\sigma_2$,
depends only on the latter outcome
$P_3(\sigma_3|\sigma_2,\sigma_1)
= (1+\sigma_2 \sigma_3 \mathbf{n}_2\cdot\mathbf{n}_3)/2$.
The process is thus, in a sense, Markovian.
The total joint probability for each electron transfer with
an arbitrarily long chain of detectors is
\[P(\{\sigma\}) = \frac{1}{2} \prod_{a=1}^{K-1}
\frac{1\!+\!\sigma_{a} \sigma_{a+1}
\mathbf{n}_a\!\cdot\!\mathbf{n}_{a+1}}{2} \;,\]
and the corresponding generating function for the setup considered here
is given by Eq.~\eqref{eq:main_relation2} with
\begin{align}
\cos{\alpha_{PP}} = \sum_{\{\sigma\}} \cos{\left[\sum_a\sigma_a \phi_a\right]}
\frac{1}{2} \prod_{a=1}^{K-1}
\frac{1\!+\!\sigma_{a} \sigma_{a+1}
\mathbf{n}_a\!\cdot\!\mathbf{n}_{a+1}}{2} \;.
\end{align}
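The sum over outcomes can be carried out by direct enumeration; the sketch below (illustrative) evaluates it for three detectors and checks it against the closed form for $\cos\alpha_{PP}$ quoted in Eq.~\eqref{avsapp}:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, size=3)
c12, c23 = np.cos(rng.uniform(0, np.pi, size=2))

# Markov-chain sum over the outcomes sigma_a = +/-1 of the three detectors
total = 0.0
for s in itertools.product([1, -1], repeat=3):
    P = 0.5 * (1 + s[0]*s[1]*c12) / 2 * (1 + s[1]*s[2]*c23) / 2
    total += P * np.cos(s[0]*phi[0] + s[1]*phi[1] + s[2]*phi[2])

# Closed form of Eq. (avsapp) for cos(alpha_PP)
closed = (np.cos(phi[0]) * np.cos(phi[1]) * np.cos(phi[2])
          - c12 * np.sin(phi[0]) * np.sin(phi[1]) * np.cos(phi[2])
          - c23 * np.sin(phi[1]) * np.sin(phi[2]) * np.cos(phi[0])
          - c12 * c23 * np.sin(phi[2]) * np.sin(phi[0]) * np.cos(phi[1]))
assert np.allclose(total, closed)
```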
\section{Comparison of the two approaches
for one and two spin detectors}\label{sec:comp12}
Now, let us go back to Eqs.~\eqref{eq:matrix} and \eqref{eq:matrixeigen}
and compare the two approaches for some simple cases.
For the case of one or two detectors in series,
the eigenvalues $e^{\pm i\alpha}$
are not affected by the order of matrix multiplication in (\ref{eq:matrix})
and depend on differences of spin counting fields
$\phi_a \equiv \phi^+_{a} -\phi^-_{a}$ only
(in fact they coincide with the value $e^{\pm i \alpha_{PP}}$).
This implies that the FCS definition (\ref{eq:FCS_definition}) can be readily
interpreted in classical terms: it is a generating function
for probability distribution of a certain number of spin counts
$\sigma_a$ in each detector,
\begin{equation}
P(\{\sigma_a\}) = \int \prod_{a} d \phi_a
e^{F(0,\{\phi_a\})} e^{- i\sum_a \sigma_a \phi_{a}}\;.
\label{eq:probability}
\end{equation}
For a single detector, the spin FCS is very simple:
it corresponds to independent transfers of
two sorts of electrons, with spins ``up'' and ``down''
with respect to the quantization axis.
The cumulants of the
spin (charge) transferred
are given by the derivatives of $F$ with respect to $\phi_1$ ($\chi$),
at $\chi=\phi_1=0$. In this case $\alpha=\phi_1$.
From this and relation (\ref{eq:main_relation}),
we conclude that all odd cumulants of spin current
are 0, as anticipated, and all even cumulants
coincide with the charge cumulants.
For two spin detectors,
with $\mathbf{n}_1\cdot\mathbf{n}_2=\cos{\theta}$,
we obtain
$\cos\alpha=\cos\phi_1\cos\phi_2-\sin\phi_1\sin\phi_2\cos\theta$.
Since there is no dependence on $\Phi_a$,
the quantum generating function has an immediate interpretation.
We consider the case
when the read-out of the charge is not exploited ($\chi=0$).
Then
\begin{equation}
\mathcal{Z}(\phi) = \prod_n\left[q_n + p_n\cos{\alpha}\right]^M\!,
\end{equation}
where $p_n=2 R_n T_n$ is the probability that, in two attempts of transmitting
one electron over a spin degenerate channel $n$,
exactly one is transmitted and $q_n=1-p_n$.
This result coincides with what one would obtain
from the Projection Postulate.
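The two-detector result follows from the SU(2) trace identity $\tfrac12\,\mathrm{tr}\,[e^{i\phi_1\mathbf{n}_1\cdot\boldsymbol{\tau}}\,e^{i\phi_2\mathbf{n}_2\cdot\boldsymbol{\tau}}]=\cos\phi_1\cos\phi_2-\sin\phi_1\sin\phi_2\,\mathbf{n}_1\!\cdot\!\mathbf{n}_2$, where, following the convention of the text, a rotation by the full counting field $\phi_a$ is attached to each detector. A minimal numerical check (illustrative):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(phi, n):
    """exp(i*phi*n.tau) = cos(phi) + i*sin(phi)*(n.tau), since (n.tau)^2 = 1."""
    ntau = n[0]*sx + n[1]*sy + n[2]*sz
    return np.cos(phi) * np.eye(2) + 1j * np.sin(phi) * ntau

theta = 0.9                                         # angle between n1 and n2
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([np.sin(theta), 0.0, np.cos(theta)])  # n1.n2 = cos(theta)

for phi1, phi2 in [(0.7, -1.2), (0.1, 2.5), (-2.0, 0.4)]:
    cos_alpha = 0.5 * np.trace(rot(phi1, n1) @ rot(phi2, n2)).real
    assert np.allclose(cos_alpha,
                       np.cos(phi1) * np.cos(phi2)
                       - np.sin(phi1) * np.sin(phi2) * np.cos(theta))
```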
We discuss in detail the probability distribution.
By performing the Fourier transform, we find the probability of
detecting a spin $\sigma_1$ in direction $\mathbf{n}_1$ and
$\sigma_2$ in direction $\mathbf{n}_2$:
\begin{equation}
P(\sigma_1,\sigma_2) =
{\sum_{\sigma_1^{(n)}}}'{\sum_{\sigma_2^{(n)}}}'
\prod_n P_n(\sigma_1^{(n)},\sigma_2^{(n)})\;,
\end{equation}
where the prime in the sum means that it is restricted to
$\sum_n \sigma_a^{(n)} =\sigma_a$,
and the probability for each channel $n$ is
\begin{align}
\nonumber
P_n(\sigma_1,\sigma_2)\!=&\ \sum_k P_{tr}(k|N) P_\uparrow((k+\sigma_1)/2|k) \\
\nonumber&\times
\sum_l P_\uparrow((k+\sigma_2+2l)/4|(k+\sigma_1)/2=\uparrow) \\
\label{2detprob}&\quad\times
P_\uparrow((k+\sigma_2-2l)/4|(k-\sigma_1)/2=\downarrow)
\end{align}
where
\begin{align}
P_{tr}(k|N) =&\ \binom{N}{k}p_n^k q_n^{N-k}\;,\\
P_\uparrow(l|k) =&\ \frac{1}{2^k} \binom{k}{l}\;,\\
P_\uparrow(l|k=\uparrow) =&\ \binom{k}{l}
[\cos^2{(\theta/2)}]^{l} [\sin^2{(\theta/2)}]^{k-l}\;,\\
P_\uparrow(l|k=\downarrow)=&\
\binom{k}{l}
[\cos^2{(\theta/2)}]^{k-l} [\sin^2{(\theta/2)}]^{l}\;.
\end{align}
The sums run over all values for which the binomials make sense
(no negative or half-integer values). Thus $k,l,\sigma_1, \sigma_2$
have the same parity.
$P$ can be interpreted as follows: since the current is unpolarized,
we can think of it as carried by pairs of electrons in singlet
configuration.
Then, there is a successful attempt to transfer spin when exactly one
of the two electrons is transmitted. This gives $P_{tr}(k|N)$,
the probability of transferring $k$ spins
over $N$ attempts ($p_n$ probability of success for a single attempt)
through channel $n$;
the second binomial comes from the ways one can pick
$N_{1\uparrow}=(k+\sigma_1)/2$ spins up out of
$k$ spins, with probability $1/2$ (we recall that the incoming
electrons are unpolarized);
the third term comes from the fact that,
given that $N_{1\uparrow}=(k+\sigma_1)/2$
spins up according to the first detector are passed to the second one,
the latter will measure $(k+\sigma_2+2l)/4$ of these as spins up
(the probability of agreement
between detectors being $p_{ag}=\cos^2(\theta/2)$),
and the rest as spins down;
analogously, the last term comes from the fact that, given
$N_{1\downarrow}=(k-\sigma_1)/2$
spins down along direction $\mathbf{n}_1$ have been detected,
$(k+\sigma_2-2l)/4$ of them will be detected from
the second detector as spins up,
while the remaining ones will be detected as down.
When the two detectors have parallel orientation ($\theta=0$),
the second sum in Eq.~\eqref{2detprob} is nonzero
only if $\sigma_1=\sigma_2$, giving
\[P(\sigma_1,\sigma_2)\!= \sum_k P_{tr}(k|N) P_\uparrow((k+\sigma_1)/2|k)
\delta_{\sigma_1,\sigma_2},
\]
i.e.~there is perfect correlation, as is to be expected.
When the two detectors have orthogonal orientation ($\theta=\pi/2$),
it is possible to perform the sum over $l$ analytically:
\[P(\sigma_1,\sigma_2)\!=
\sum_k P_{tr}(k|N) P_\uparrow((k+\sigma_1)/2|k)P_\uparrow((k+\sigma_2)/2|k)\;,
\]
i.e.~the outcomes are independent, given that $k$ successful spin
transfers happened.
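The combinatorial content of Eq.~\eqref{2detprob} can be cross-checked by brute-force enumeration over the individual spin readings; in the sketch below (illustrative; the values of $N$, $p_n$ and $\theta$ are arbitrary) the joint distribution is built both ways and compared:

```python
from math import comb, cos
import itertools

def direct(N, p, cos2):
    """Joint distribution of (sigma1, sigma2) for one channel by brute force:
    k of N attempts transfer a spin; detector 1 reads each spin +/-1 with
    probability 1/2; detector 2 agrees with detector 1 with probability
    cos2 = cos^2(theta/2)."""
    P = {}
    for k in range(N + 1):
        pk = comb(N, k) * p**k * (1 - p)**(N - k)
        for s1 in itertools.product([1, -1], repeat=k):
            for agree in itertools.product([True, False], repeat=k):
                w = pk * 0.5**k
                s2 = 0
                for a, ag in zip(s1, agree):
                    w *= cos2 if ag else (1 - cos2)
                    s2 += a if ag else -a
                key = (sum(s1), s2)
                P[key] = P.get(key, 0.0) + w
    return P

def formula(N, p, cos2, sigma1, sigma2):
    """Eq. (2detprob): P_n(sigma1, sigma2) for one channel."""
    sin2 = 1 - cos2
    tot = 0.0
    for k in range(N + 1):
        if (k + sigma1) % 2 or (k + sigma2) % 2 \
           or abs(sigma1) > k or abs(sigma2) > k:
            continue
        ptr = comb(N, k) * p**k * (1 - p)**(N - k)
        n_up = (k + sigma1) // 2   # spins read "up" by detector 1
        n_dn = (k - sigma1) // 2
        p1 = comb(k, n_up) * 0.5**k
        s = 0.0
        for u1 in range(n_up + 1):       # "up" group read "up" by detector 2
            u2 = (k + sigma2) // 2 - u1  # "down" group read "up" by detector 2
            if 0 <= u2 <= n_dn:
                s += (comb(n_up, u1) * cos2**u1 * sin2**(n_up - u1)
                      * comb(n_dn, u2) * sin2**u2 * cos2**(n_dn - u2))
        tot += ptr * p1 * s
    return tot

# Compare on an arbitrary example (hypothetical parameter values)
N, p, theta = 3, 0.4, 1.1
cos2 = cos(theta / 2)**2
for (s1v, s2v), val in direct(N, p, cos2).items():
    assert abs(val - formula(N, p, cos2, s1v, s2v)) < 1e-12
```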
\section{Comparison of the two approaches
for three spin detectors}\label{sec:comp3}
For the case of three detectors, we have
\begin{align}\nonumber
&\cos\alpha =\cos{\alpha_{PP}} -\sin{\theta_{12}}\sin{\theta_{23}}
\sin{(\Phi_2-\Phi_2^{(0)})} \sin{\phi}_3 \sin{\phi}_1
, \nonumber \\
&\cos\alpha_{PP} = \cos{\phi}_1 \cos{\phi}_2 \cos{\phi}_3 +\mbox{ }\nonumber \\
&-\cos{\theta_{12}}\sin{\phi}_1 \sin{\phi}_2 \cos{\phi}_3
-\cos{\theta_{23}}\sin{\phi}_2 \sin{\phi}_3 \cos{\phi}_1
-\nonumber \\
& \cos{\theta_{12}}\cos{\theta_{23}}
\sin{\phi}_3 \sin{\phi}_1 \cos{\phi}_2
.
\label{avsapp}
\end{align}
Here $\theta_{jk}=\arccos{{\mathbf n}_j\!\cdot\!{\mathbf n}_k}$ are the angles
between the polarizations ($\mathbf{n}$) of detectors $j$ and $k$,
and $\cos{\Phi_2^{(0)}}= ({\mathbf n}_1 \times{\mathbf n}_2)\!\cdot\!{\mathbf n}_3/
\sin{\theta_{12}}\sin{\theta_{23}}$,
$\sin{\Phi_2^{(0)}}= ({\mathbf n}_1 \!\times\!{\mathbf n}_2)\cdot
({\mathbf n}_2\times\!{\mathbf n}_3)/
\sin{\theta_{12}}\sin{\theta_{23}}$.
As before $\cos\alpha_{PP}$ is the part corresponding
to the Projection Postulate.
We notice that when two consecutive detectors are parallel or antiparallel,
then
$\alpha_{PP}=\alpha$. This is because the same measurement is repeated twice,
and thus we fall back to the case of two detectors.
In general, however, $\cos\alpha$ depends on $\Phi_2$,
and thus one needs to account
for the dynamics of the second detector
in order to get the probability distribution
for the spin counts. We recall that the corresponding detector's action is
$S[\Phi_2]=\int dt \frac{1}{2}\left[\tau_c
\dot{\Phi}_2(t)^2 - \Phi_2(t)^2/\tau_c \langle\Phi_2^2 \rangle\right]$,
with $\tau_c$ coherence time and $\langle\Phi_2^2 \rangle$
fluctuations of $\Phi_2$.
We have calculated the second cumulants, or cross-correlators:
we find that they differ
from those obtained by using the PP only by small terms.
The correlator between first and third detector's readings is:
\begin{align}
&\langle\!\langle \sigma_1 \sigma_3\rangle\!\rangle \!=\!
\langle\!\langle N^2\rangle\!\rangle
\left[
C +
\left(
\cos{\theta_{13}}-
C
\right)
e^{-\langle \Phi_2^2\rangle/2}
\right],
\end{align}
where
$C \equiv \cos{\theta_{12}}\cos{\theta_{23}}$,
and the first term is
the PP result.
The second term, as expected, has a typical signature of interference effects:
it is suppressed exponentially if the variance of the corresponding
Aharonov-Casher phase $\langle\Phi_2^2\rangle \gg 1$. Since
$\Phi_{AC}$ is inversely proportional to $\hbar$, this is the
classical limit. In this limit, the result coincides with the PP.
However, the fourth cumulants show large deviations from the PP result.
Namely:
\begin{align}\label{eq:QMpredictions}
\langle\!\langle \sigma_1^2 \sigma_3^2\rangle\!\rangle &\!=\!
\langle\!\langle
\sigma_1^2 \sigma_3^2\rangle\!\rangle_{\raisebox{-1mm}{$_{\!\!PP}$}}
+8\frac{\tau_c}{\tau}
A
\langle\!\langle N^2\rangle\!\rangle^2 ,
\end{align}
where \mbox{$A\equiv \sin^2{\theta_{12}}\sin^2{\theta_{23}}$,}
and the PP result is expressed in terms of charge cumulants as
\begin{equation*}
\langle\!\langle
\sigma_1^2 \sigma_3^2\rangle\!\rangle_{\raisebox{-1mm}{$_{\!PP}$}}\!\!=
\frac{1}{3}\left[(1+2C^2)\langle\!\langle N^4\rangle\!\rangle
+ 2 (1-C^2) \langle\!\langle N^2\rangle\!\rangle\right] .
\end{equation*}
This deviation results from correlations of $\Phi_2$ on the time scale $\tau_c$.
To estimate the result, we notice that the charge cumulants are of the order
of $\tau/\tau_{el}$, $\tau_{el}$ being the average time between electron
transfers. It is easy to fulfill the condition
$\tau_{el} \ll \tau_{c} \ll \tau$, and in this case
$\langle\!\langle \sigma_1^2 \sigma_3^2\rangle\!\rangle$
is much larger than the PP result.
It is interesting to study further the probability
distribution which gives rise
to such anomalously large fourth-order cumulants.
This we shall do in the next section.
\section{A particular case}\label{sec:case}
We discuss for definiteness the case of three detectors oriented
along three orthogonal directions forming a
right-handed basis.
This implies $\Phi_2^{(0)}=0$.
We concentrate on the joint probability distribution
for the outcomes of the first and the third detector,
irrespectively of the reading of the second detector.
We consider the ``classical'' limit $\langle \Phi_2^2 \rangle \to \infty$.
Then,
the generating function for the probability $P(\sigma_1,\sigma_3)$ for
counting $\sigma_1, \sigma_3$ spins in the detectors is:
\begin{align}
\nonumber
Z(\phi_1,\phi_3) =&\ \int d\Phi_{2,i} d\Phi_{2,f}
\int_{\Phi_{2,i}}^{\Phi_{2,f}} {\mathcal{D}}\Phi_2(t)
\exp\Biggl\{\int_0^\tau dt\\
&\left[-\frac{\tau_c}{2}\dot{\Phi}_2^2 + \frac{1}{\tau_V}
\sum_n\ln{\left[q_n+p_n\cos{\alpha(\phi,\Phi_2)}\right]}\right] \Biggr\}
\end{align}
where
\begin{align}\nonumber
&\cos\alpha(\phi,\Phi_2) =\cos{\alpha_{PP}} -
\sin{\phi}_3 \sin{\phi}_1
\cos\Phi_2
, \nonumber \\
&\cos\alpha_{PP} = \cos{\phi}_1 \cos{\phi}_3\;.
\label{eq:avsapp}
\end{align}
We have a path-integral over imaginary time.
We exploit the quantum mechanical technique
and re-express the path-integral in terms
of amplitudes:
\[Z(\phi)=\int d\Phi_{2,i} d\Phi_{2,f}
\langle\Phi_{2f};t=i\tau,\phi|\Phi_{2i};t=0,\phi\rangle\;.\]
Here the counting fields $\phi$ are parameters, and the time evolution
of the variable $\Phi_2$
is dictated by
$|\Phi_2;t,\phi\rangle = e^{-i\Hat{H}(\phi)t}|\Phi_2;0,\phi\rangle$,
with the Hamiltonian
\begin{equation}
\Hat{H}(\phi) = -\frac{1}{2\tau_c}\frac{\partial^2}{\partial \Phi_2^2}
- \frac{1}{\tau_V}
\sum_n\ln{\left[q_n+p_n\cos{\alpha(\phi,\Phi_2)}\right]}
\;.
\end{equation}
Then,
for large values of $\tau$, the path-integral can be approximated by
\begin{equation}
Z(\phi) \simeq e^{-E_0(\phi)\tau}\;,
\end{equation}
where $E_0(\phi)$ is the ground state energy of the Hamiltonian.
The next step is to find an explicit expression for the probability.
We recall that the probability to have detectors 1 and 3 measure average
spin currents $I_1=\sigma_1/\tau$, $I_3=\sigma_3/\tau$ is related to $Z(\phi)$
through
\[P(I_1,I_3) = \int \frac{d\phi_1}{2\pi}\frac{d\phi_3}{2\pi} Z(\phi)
e^{-i\tau (\phi_1 I_1+\phi_3 I_3)}\;.\]
Since we are in the large $\tau$ limit,
we can evaluate the integrals in the saddle-point approximation,
and obtain
\[P(I_1,I_3) \propto \exp{
\left[-\tau E_0(\phi^*)-i\tau (\phi^*_1 I_1+\phi^*_3 I_3)\right]}\;,\]
where
$\phi^*_a$ satisfy the saddle point condition
\begin{subequations}\label{eq:sp}
\begin{align}
\left.\frac{\partial E_0}{\partial \phi_1}
\right|_{\phi_1^*,\phi_3^*}+i I_1 =& 0 \;,\\
\left.\frac{\partial E_0}{\partial \phi_3}
\right|_{\phi_1^*,\phi_3^*}+i I_3 =& 0 \;.
\end{align}
\end{subequations}
Assuming that the solutions are much smaller than one, $\phi^*_a \ll 1$,
the Hamiltonian can be rewritten,
including terms up to second order in $\phi$, as
\begin{equation}
\Hat{H}(\phi) = \left[-\frac{1}{2\tau_c}\frac{\partial^2}{\partial \Phi_2^2}
+ \frac{1}{2\tau_S}
\left(\phi_1^2+\phi_3^2+2\phi_1\phi_3 \cos{\Phi_2}\right)\right]
\;,
\end{equation}
where we introduced the average time between spin transfers,
$\tau_S=\tau_V/\sum_n p_n$.
We recognize the Hamiltonian for the Mathieu equation
\[H_M = -\frac{\partial^2}{\partial v^2} + 2 q \cos{(2v)}\;.\]
Thus the ground state energy depends on the lowest Mathieu
characteristic function $a_0(q)$,
with the coupling strength given by $q=4(\tau_c/\tau_S)\phi_1\phi_3$.
Namely,
\[E_0(\phi) = a_0(q)/8\tau_c + (\phi_1^2+\phi_3^2)/2\tau_S\;.\]
The saddle-point equations \eqref{eq:sp} can
then be combined to give a transcendental
equation for $q$, from which one expresses $\phi_a^*$,
which are purely imaginary, according to
\begin{subequations}\label{eq:spsol}
\begin{align}
i\phi_1^* =&\ \tau_S\frac{I_1- (I_3/2) a'_0(q^*)}{1-a'_0(q^*)^2/4}\;,\\
i\phi_3^* =&\ \tau_S\frac{I_3- (I_1/2) a'_0(q^*)}{1-a'_0(q^*)^2/4}\;.
\end{align}
\end{subequations}
Here, $q^*$ is the solution to the equation
\begin{equation}\label{eq:selfcons}
\frac{q}{4} = -\frac{(\nu_1+\nu_3)^2}{\left[2+a'_0(q)\right]^2}
+\frac{(\nu_1-\nu_3)^2}{\left[2-a'_0(q)\right]^2}
\;,
\end{equation}
where we introduced dimensionless currents
$\nu_a \equiv \sqrt{\tau_c \tau_S} I_a$.
Eqs.~\eqref{eq:spsol} and \eqref{eq:selfcons} are valid in the limit
$\tau_S I_a \ll 1$, i.e. $\nu_a \ll \sqrt{\tau_c/\tau_S}$.
Finally, we have that the probability distribution is
\begin{align}\nonumber
\log P(I_1,I_3)
\propto& -a_0(q^*)/8 \\
\nonumber
&- (\nu_1+\nu_3)^2 (1+a'_0(q^*))/
\left[2+a'_0(q^*)\right]^2 \\&
\label{eq:prob}
- (\nu_1-\nu_3)^2 (1-a'_0(q^*))/
\left[2-a'_0(q^*)\right]^2\;.
\end{align}
This probability distribution is to be compared
with the one predicted by applying the PP.
The latter is, in the same regime $\tau_S I_a \ll 1$,
a product of two independent Gaussians:
\begin{equation}\label{eq:ppprob}
\log P_{PP}(I_1,I_3)\propto -(\nu_1^2+\nu_3^2)/2\;,
\end{equation}
the proportionality constant ($\tau/\tau_c$)
being the same.
In the limit $\tau_c \ll \tau_S$,
we have that Eqs. \eqref{eq:prob} and \eqref{eq:ppprob} coincide.
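This coincidence can be checked numerically: for small dimensionless currents the self-consistent coupling $q^*$ is small and Eq.~\eqref{eq:prob} reduces to the Gaussian of Eq.~\eqref{eq:ppprob}. The Python sketch below (illustrative only; the basis truncation, fixed-point scheme and finite-difference derivative are implementation choices) solves Eq.~\eqref{eq:selfcons} by iteration and evaluates Eq.~\eqref{eq:prob}:

```python
import numpy as np

def a0(q, size=30):
    # smallest eigenvalue of the truncated Mathieu operator in the even Fourier basis
    A = np.diag((2.0 * np.arange(size)) ** 2)
    A[0, 1] = A[1, 0] = np.sqrt(2.0) * q
    for k in range(1, size - 1):
        A[k, k + 1] = A[k + 1, k] = q
    return np.linalg.eigvalsh(A)[0]

def da0(q, h=1e-5):
    # derivative a_0'(q) by central finite differences
    return (a0(q + h) - a0(q - h)) / (2 * h)

def solve_qstar(nu1, nu3, iters=100):
    # fixed-point iteration of the self-consistency equation for q
    q = 0.0
    for _ in range(iters):
        d = da0(q)
        q = 4 * (-(nu1 + nu3) ** 2 / (2 + d) ** 2
                 + (nu1 - nu3) ** 2 / (2 - d) ** 2)
    return q

def log_prob(nu1, nu3):
    # log P(I_1, I_3) up to the common proportionality constant
    q = solve_qstar(nu1, nu3)
    d = da0(q)
    return (-a0(q) / 8
            - (nu1 + nu3) ** 2 * (1 + d) / (2 + d) ** 2
            - (nu1 - nu3) ** 2 * (1 - d) / (2 - d) ** 2)

nu1, nu3 = 0.1, 0.05
print(log_prob(nu1, nu3), -(nu1**2 + nu3**2) / 2)
```

For $\nu_1=0.1$, $\nu_3=0.05$ the two expressions agree to better than $10^{-4}$, confirming the Gaussian limit.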
However, by taking into account
that the detectors have a finite decoherence time $\tau_c$,
and that the time between spin transfers
$\tau_S$ can be much smaller than $\tau_c$,
we find that the probability distribution deviates
appreciably from Eq.~\eqref{eq:ppprob}.
This deviation is largest
in the regime $1\ll|\nu_1|\simeq |\nu_3| \ll\sqrt{\tau_c/\tau_S}$,
i.e.\ when both (dimensionless) currents
are comparable in modulus and large with respect to 1.
When $\nu_1\gg 1$, we find that
\begin{equation}\label{eq:fcspred}
\log{P} \propto -\nu_1^2/2 + f(\nu_3/\nu_1)\;,
\end{equation}
with the scaling function $f(x)$ defined by
\[f(x)=-\frac{a_0(q_0(x))}{8}
+\frac{1}{4}x q_0(x)\]
where the condition
$\left.\frac{\partial a_0}{\partial q}\right|_{q=q_0}=2x$ defines $q_0(x)$.
In particular, $f(x)$ diverges at $x=1$ according to
$f(x)\simeq -\frac{1}{16(1-x)}$.
\begin{figure}[t!]
\includegraphics[width=0.4\textwidth]{mathieufig.eps}
\caption{\label{fig:comp}
The logarithm of the probability as a function of $\nu_3/\nu_1$,
for different values of $\nu_1$,
for the configuration studied in the text.
All the curves have been shifted by $\nu_1^2/2$.
The upper curves correspond to the result of the FCS approach,
and the lower ones
to the PP. The black dotted curve
is the limiting scaling curve discussed in the text.}
\end{figure}
In Fig.~\ref{fig:comp} we plot the logarithm of the probability
as a function of $\nu_3/\nu_1$ for several values of $\nu_1$,
and compare it with the probability
predicted by applying the PP.
\section{Conclusions}\label{sec:conclusions}
We have discussed the Full Counting Statistics of
non-commuting variables. As a concrete example,
we focused on
spin counts in a two terminal device
with non-ferromagnetic leads connected
through a non-polarizing coherent conductor.
We have provided a formula connecting the FCS of spins to the one of charge.
We have seen that it is crucial
to have a coherent conductor with finite transparency
connecting the two leads.
This is because electrons transmitted
through the same channel are in a spin singlet, and thus
contribute neither a net spin transfer nor spin fluctuations.
However, if the transmission probability through channel $n$
is finite ($0<T_n<1$),
then there is a non-zero probability $p_n= 2 (1-T_n) T_n$ that
exactly one electron out of a singlet pair is transmitted,
and this contributes to spin fluctuations.
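Indeed, $p_n = 2(1-T_n)T_n$ is simply the binomial probability that exactly one of two independent transmission events, each occurring with probability $T_n$, takes place; it vanishes at $T_n\in\{0,1\}$ and is maximal at $T_n=1/2$. A trivial check (illustrative only):

```python
from itertools import product

def p_exactly_one(T):
    # probability that exactly one member of an independent pair,
    # each transmitted with probability T, gets through
    return sum(
        (T if t1 else 1 - T) * (T if t2 else 1 - T)
        for t1, t2 in product((0, 1), repeat=2)
        if t1 + t2 == 1
    )

for T in (0.0, 0.3, 0.5, 1.0):
    assert abs(p_exactly_one(T) - 2 * T * (1 - T)) < 1e-12
```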
Another interesting conclusion which we can draw from this work is that,
when measuring non-commuting quantities with subsequent detectors,
one should take into account
the quantum dynamics of the detectors themselves.
This is because the decoherence time of the detectors, $\tau_c$,
can be larger than the average time between
two subsequent counts, $\tau_S$. Thus, if one were to
set the off-diagonal elements of the detectors' density matrix to zero by hand
after each count, which amounts to applying the projection postulate, one
would obtain the wrong result. We have shown that,
in the system considered here, such a deviation from the
na\"ive application of the projection postulate is revealed
by the fourth correlator of spin counts.
\acknowledgments
We acknowledge the financial support provided through the European
Community's Research Training Networks Programme under contract
HPRN-CT-2002-00302, Spintronics.
\section{Introduction and discussion of results}
\subsection{General introduction}
Counting functions occurring naturally in algebra and geometry
frequently display symmetries which manifest themselves in the form of
certain functional equations. A classical example is the functional
equation satisfied by the Weil zeta function of a smooth, projective
algebraic variety defined over a finite field. More recently, Stanley
established similar symmetries for the Hilbert-Poincar\'e series of
graded algebras, with remarkable applications to counting problems in
combinatorics and topology; see \cite{Stanley/78}. The symmetries of
the Weil zeta functions lie at the heart of Denef and Meuser's proof
of a functional equation for certain $p$-adic integrals, called
Igusa's local zeta functions; see \cite{DenefMeuser/91}. The
phenomenon of functional equations also arises in the context of zeta
functions associated to groups and rings. Indeed, a recent result of
the second author establishes functional equations for local zeta
functions of finitely generated nilpotent groups, enumerating the
numbers of prime-power index subgroups; see \cite{Voll/06a}. It
depends on generalisations of methods developed for studying Igusa's
local zeta functions and draws on some of the ideas introduced in the
current paper.\footnote{The last two sentences were added during the
revision of the manuscript in October 2007. In the sequel we inserted
references to~\cite{Voll/06a} and \cite{duSG/06} where appropriate.}
In the current paper we are concerned with a counting problem in
geometric algebra: we study the numbers of flags of
\emph{non-degenerate} subspaces in a finite vector space, equipped
with an alternating bilinear, hermitian or quadratic form. The
polynomials giving these numbers can easily be computed, e.g.\
following Artin's classical book \cite{Artin/57} on Geometric
Algebra. Combining ideas from the theory of zeta functions, Coxeter
groups and combinatorics, we are able to establish remarkable
symmetries satisfied by these numbers which are far from evident from
the formulae. This is achieved by proving functional equations for
rational functions encoding the polynomials in question; see
Theorems~\ref{theorem_A} and \ref{theorem_B} below. A recurrent idea
in the present paper is to describe these polynomials in terms of
Coxeter groups and then to deduce functional equations from
generalisations of arguments of Igusa. In an important special case of
Theorem~A we can only conjecture such a description, leading to
Conjecture~\ref{conjecture_C}, which is of independent interest. The
proof we give for this special case of Theorem~\ref{theorem_A} is
combinatorial.
Our results have a precedent in the work of Igusa. In \cite[Part
II]{Igusa/89} Igusa establishes a formalism for studying certain
$p$-adic integrals associated to reductive algebraic groups. Igusa
computes a closed formula for these integrals in terms of the
associated Weyl groups and their root systems. In the case of
classical groups the data involved encode the number of flags of
\emph{totally isotropic} subspaces in finite polar spaces. In this
sense our work is complementary to that of Igusa. In both cases, the
functional equations may be understood in terms of the symmetry which
is given by (right-)multiplication by the longest element in a Coxeter
group.
Igusa's functions are closely related to $p$-adic integrals associated
to zeta functions of groups and rings, where many instances of
functional equations similar to the ones described in
Theorem~\ref{theorem_A} occur. The analytic properties of Euler
products of $p$-adic integrals of this type are also objects of
intense study. We refer the reader to~\cite{duS-ennui/03, duSG/00,
duSLubotzky/96, Voll/05, duSG/06, Voll/06a} for more information on
analytic properties of zeta functions of groups and their functional
equations. Whilst we first encountered some of the Igusa-type
rational functions studied in the current paper in the context of zeta
functions of groups, they themselves are not generating functions.
Rather than encoding infinite arithmetic sequences they are associated
to finite formed spaces. We remark that a priori the rational
functions studied in the current paper do not have a natural
interpretation as $p$-adic integrals.
\subsection{Detailed statement of results}
\subsubsection{Theorem~\ref{theorem_A}}
In order to give a detailed statement of our results, we require some
notation. We fix a natural number~$n \in \ensuremath{\mathbb{N}}$ and consider
$n$-dimensional vector spaces~$V$ over a finite field~$F$, equipped
with a non-degenerate
\begin{itemize}
\item alternating bilinear form~$B$ (the `symplectic
case'),
\item hermitian form $B$ (the `unitary case') or
\item quadratic
form $f$ (the `orthogonal case').
\end{itemize}
In the symplectic and unitary cases, we formally define $f : V
\rightarrow F$ by $f(x) := B(x,x)$. In the orthogonal case, we let
$B$ denote the bilinear form obtained by polarising $f$: if $\cha F
\ne 2$, then $B$ is non-degenerate symmetric, whereas, if $\cha F =
2$, then $B$ is alternating and possibly degenerate. The triple
$\ensuremath{\mathcal{V}} := (V,B,f)$ will be called a \emph{formed space}. We also
introduce a parameter $\ensuremath{\gamma}$ equal to $1$ in the unitary case and equal
to $1/2$ otherwise; with this convention $F = \mathbb{F}_{q^{2 \ensuremath{\gamma}}}$
for a prime power $q$.
Recall that, by the classification of finite formed spaces (cf.,
e.g.,~\cite[Section~3.3]{Cameron/91}), $\ensuremath{\mathcal{V}}$ decomposes as an
orthogonal direct sum of a certain number of hyperbolic planes and an
anisotropic space of dimension~$d \in \{0,1,2\}$. In the orthogonal
case we attach a sign~$\epsilon \in \{-1,1\}$ to $\ensuremath{\mathcal{V}}$ if $n$ is
even, according to whether $d$ equals~$0$ or~$2$. The six
possibilities are given by the following table.
\begin{figure}[H]
\begin{center}
\begin{tabular}{|c|c!{\vrule width 1pt}c|c|c|}\hline
geometric type & $n$ & $d$ & $\epsilon$ & $\ensuremath{\gamma}$ \\\hline
symplectic&$2m$&$0$&--&$1/2$\\
unitary&$2m$&$0$&--&$1$\\
unitary&$2m+1$&$1$&--&$1$\\
orthogonal&$2m$&$0$&$1$&$1/2$\\
orthogonal&$2m+1$&$1$&--&$1/2$\\
orthogonal&$2m$&$2$&$-1$&$1/2$\\
\hline
\end{tabular}
\end{center}
\end{figure}
In the current paper we study rational functions incorporating the
numbers of $F$-rational points of the varieties of flags of
non-degenerate subspaces in~$\ensuremath{\mathcal{V}}$. We write~$[n-1]$
for~$\{1,\dots,n-1\}$. By a \emph{non-degenerate flag of type $J =
\{j_1,\dots,j_s\} \subseteq [n-1]$}, where $j_1<j_2<\dots<j_s$, we
mean a family $\ensuremath{\mathbf{U}}_J=(U_j)_{j\in J}$ of non-degenerate subspaces
of~$\ensuremath{\mathcal{V}}$ with $U_{j_1}\subset\dots\subset U_{j_s}$ and $\dim U_j =j$
for each~$j\in J$. Let
$$
a_{\ensuremath{\mathcal{V}}}^J(q):=|\{\ensuremath{\mathbf{U}}_J \mid \ensuremath{\mathbf{U}}_J\text{ non-degenerate flag of
type }J\}|.
$$ Then $a^J_{\ensuremath{\mathcal{V}}}(q)$ is a monic polynomial in $q$ (cf.~the remarks
at the end of this subsection regarding the orthogonal case), and we
set
$$
\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1}):=a^J_{\ensuremath{\mathcal{V}}}(q)/q^{\deg_qa^J_{\ensuremath{\mathcal{V}}}}.
$$
We encode these numbers in rational functions as follows. Let $\ensuremath{\mathbf{X}} =
(X_i)_i$ be a finite family of independent indeterminates. Fix a
family of rational functions $\ensuremath{\mathbf{F}} = \left( F_{J}(\ensuremath{\mathbf{X}}) \right)_{J
\subseteq [n-1]}$ in~$\ensuremath{\mathbf{X}}$ with the \emph{inversion property} that
\begin{equation}\tag{IP}\label{eq_IP}
\text{for all }~I\subseteq[n-1]: \; F_{I}(\ensuremath{\mathbf{X}}^{-1}) = (-1)^{|I|}
\sum_{J\subseteq I} F_J(\ensuremath{\mathbf{X}}).
\end{equation}
A simple and naturally occurring
example~(cf.~\cite[Part~II]{Igusa/89}) of a family with this property
is $\left(\prod_{j\in J}\frac{X_j}{1-X_j}\right)_{J\subseteq [n-1]}.$
By defining
$$
\Ig_{\ensuremath{\mathcal{V}}}(q^{-1},\ensuremath{\mathbf{X}}) := \Ig_{\ensuremath{\mathcal{V}},\bf{F}}(q^{-1},\ensuremath{\mathbf{X}}) :=
\sum_{J\subseteq[n-1]}\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1})F_J(\ensuremath{\mathbf{X}})
$$
we associate to $\ensuremath{\mathcal{V}}$ and $\ensuremath{\mathbf{F}}$ a rational function in $q$ and the
variables $(X_i)_i$. Some explicit examples of these \emph{Igusa-type
functions} may be found in the Appendix.
The first main result of this paper is
\begin{thmx} \label{theorem_A}
For each $n$-dimensional, non-degenerate formed space~$\ensuremath{\mathcal{V}}$ the
associated Igusa-type function satisfies the functional equation
$$\Ig_{\ensuremath{\mathcal{V}}}(q,\ensuremath{\mathbf{X}}^{-1})=(-1)^aq^{b}\,\Ig_{\ensuremath{\mathcal{V}}}(q^{-1},\ensuremath{\mathbf{X}}),$$
where the integers $a$ and $b$ are given by the table below
\textup{(}with $m:=\lfloor\frac{n}{2}\rfloor$\textup{)}.
\begin{figure}[H]
\begin{center}
\begin{tabular}{|c|c|c!{\vrule width 1pt}c|c|}\hline
\textup{geometric type} & $n$ & $\epsilon$ & $a$ & $b$ \\\hline
\textup{symplectic} &$2m$&--&$m-1$&$m(m-1)$\\
\textup{unitary} &$n$&--&$\binom{n}{2}+n-1$&$\binom{n}{2}$\\
\textup{orthogonal} &$2m$&$1$&$m+1$&$m^2$\\
\textup{orthogonal} &$2m+1$&--&$m$&$m(m+1)$\\
\textup{orthogonal} &$2m$&$-1$&$m$&$m^2$\\
\hline
\end{tabular}
\end{center}
\end{figure}
\end{thmx}
As we explained in the general introduction, we see
Theorem~\ref{theorem_A} primarily as a result about the polynomials
$\alpha^J_\ensuremath{\mathcal{V}}(q^{-1})$; the choice of the family of rational
functions $\bf{F}$ is secondary. The inversion property \eqref{eq_IP}
satisfied by the rational functions $\bf{F}$ is a key ingredient which
we require for our subsequent combinatorial and group theoretical
considerations. We remark that rational functions $\bf{F}$ satisfying
the inversion property arise naturally in Igusa's work as well as in
the context of zeta functions of groups and rings; cf.~\cite{Voll/06,
Voll/06a}.
\begin{remark}[Analogy with Igusa's work]
The analogy with Igusa's paper~\cite{Igusa/89} is the
following. On~\cite[p.~706]{Igusa/89} Igusa gives a formula for a
$p$-adic integral, essentially of the form
$$
Z(s)=\frac{\sum_{w\in W} q^{-l(w)}\prod_{\alpha_j \in w(R^-)} q^{A_j
- B_js}}{\prod_{j=1}^\ell (1 - q^{A_j-B_js})},
$$ where $q$ is a prime power, $W$ is a Weyl group, $l$ denotes the
standard Coxeter length function, $S
=\{\alpha_1,\dots,\alpha_\ell\}$ constitutes a basis for the root
system, $R^-$ denotes the set of negative roots with respect to $S$,
the parameters $A_j,B_j$ are suitable integers and $s$ is a complex
variable. It is immediate that
\begin{equation*}
Z(s)=\sum_{J\subseteq[\ell]} \beta_J(q^{-1}) F_J(\ensuremath{\mathbf{X}}),
\end{equation*}
where $$\beta_J(q^{-1}) := \sum_{\substack{w\in W\\D(w)\subseteq J}}
q^{-l(w)},\quad D(w) := \{j \in [\ell] \mid \alpha_j \in w(R^-)\},$$
and $$F_J(\ensuremath{\mathbf{X}}) := \prod_{j\in J}\frac{X_j}{1-X_j},\quad X_j :=
q^{A_j-B_js}.$$ The sets $D(w)$ may be interpreted as descent sets
(cf.~Section~\ref{section_coxeter}). If $W$ comes from a classical
group, the polynomials $\beta_J(q^{-1})$ which arise in this way
carry a geometric meaning: the number $b_J(q)$ of flags of
\emph{totally isotropic} subspaces in an associated finite polar
space equals $\beta_J(q)$. In this case $b_J(q)$ gives the number of
$\mathbb{F}_q$-points of a smooth projective variety, and we have
$b_J(q) / q^{\deg_q(b_J)} = b_J(q^{-1}) = \beta_J(q^{-1})$. In this
sense the rational functions $\text{Ig}_\ensuremath{\mathcal{V}}$ studied in the
current paper, which are built in a similar way from the numbers of
\emph{non-degenerate} flags, complement Igusa's function~$Z(s)$.
The key to Igusa's functional equation is to interpret the inversion
of the `variable' $q$ in terms of a natural symmetry of the root
system of the Weyl group $W$. This symmetry arises from
(right-)multiplication by the longest element $w_0 \in W$. A
recurrent theme of the present paper is to give a suitable
description of the polynomials $\alpha^J_\ensuremath{\mathcal{V}}(q^{-1})$ in terms of
Coxeter groups. Based on such a description one can then follow
Igusa's approach to derive functional equations. We note that,
contrary to the situation studied by Igusa, the `normalisation' of
the polynomials $a^J_\ensuremath{\mathcal{V}}(q)$ is indispensable -- unlike their
counterparts $\beta_J$ and $b_J$, the polynomials $\alpha^J_\ensuremath{\mathcal{V}}$
and $a^J_\ensuremath{\mathcal{V}}$ are typically not equal; cf.\ the examples given in
the Appendix. Moreover, it is worth noting that as a side effect of
passing to the normalised polynomials $\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1})$ the
assumption that $\ensuremath{\mathcal{V}}$ is non-degenerate means no loss of
generality.
\end{remark}
We now discuss the proof of Theorem~\ref{theorem_A}. In the symplectic
and unitary case it follows from Witt's Extension Theorem that the
respective isometry group acts transitively on the non-degenerate
flags of a given type. A simple stabiliser computation reveals that
the polynomials $\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1})$ may be expressed in terms
of Gaussian polynomials (or $q$-binomial coefficients), which in turn
admit a well-known description in terms of the \emph{length function}
on a Coxeter group of type~$A_{\ensuremath{\gamma} n-1}$. The functional equation then
follows with the same argument which Igusa has given
in~\cite[Part~II]{Igusa/89}. It rests on the fact that, in a Coxeter
group, the effect of right-multiplication by the longest element on an
element's length and descent set is well understood.
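In type $A_{n-1}$, where the Coxeter group is $\ensuremath{\mathcal{S}}_n$, the length $l(w)$ is the inversion number and the descent set is $\{i \mid w(i)>w(i+1)\}$; the description alluded to above reads $\binom{n}{J}_q = \sum_{w:\, D(w)\subseteq J} q^{l(w)}$. The Python sketch below (an illustration external to the paper) verifies this identity by brute force for $n=4$:

```python
from itertools import permutations
from math import prod

def qint(k, q):   # [k]_q = 1 + q + ... + q^(k-1)
    return sum(q**i for i in range(k))

def qfact(k, q):  # q-factorial [k]!_q
    return prod(qint(i, q) for i in range(1, k + 1))

def qmultinomial(n, J, q):
    # binom(n, J)_q for J = {j_1 < ... < j_s}: n!_q divided by the
    # q-factorials of the consecutive differences j_1, j_2-j_1, ..., n-j_s
    parts, prev = [], 0
    for j in sorted(J) + [n]:
        parts.append(j - prev)
        prev = j
    return qfact(n, q) // prod(qfact(p, q) for p in parts)

def descent_sum(n, J, q):
    # sum of q^l(w) over w in S_n whose descent set is contained in J
    total = 0
    for w in permutations(range(1, n + 1)):
        D = {i for i in range(1, n) if w[i - 1] > w[i]}
        if D <= set(J):
            inv = sum(1 for a in range(n) for b in range(a + 1, n)
                      if w[a] > w[b])
            total += q**inv
    return total

n, J = 4, [1, 3]
for q in (2, 3, 5):
    assert descent_sum(n, J, q) == qmultinomial(n, J, q)
print("descent-class identity verified for n=4, J={1,3}")
```

Agreement at several integer values of $q$ suffices here as a sanity check, since both sides are polynomials of small degree.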
In the orthogonal case, however, things are more intricate. To begin
with, the $a^J_{\ensuremath{\mathcal{V}}}(q)$ flags of type $J$ come in up to $2^{|J|}$
isomorphism types and counting them together seems to be crucial for
the occurrence of a functional equation. But of course the natural
action of the respective orthogonal group on these flags is not
transitive. The proof we give for this case of
Theorem~\ref{theorem_A} is based on a combinatorial analysis of the
polynomials~$\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1})$ in terms of \emph{integer
compositions} and their refinements. Complementing this approach, we
propose in Conjecture~\ref{conjecture_C} an explicit formula which
expresses these polynomials, too, in terms of Coxeter group data.
\begin{remark}[The orthogonal case in characteristic $2$] As is
well-known, quadratic forms are intimately related to symmetric
bilinear forms. In fact, over a field of characteristic not equal to
$2$, the two notions lead to one and the same theory: a quadratic
space $\ensuremath{\mathcal{V}} = (V,B,f)$ over a field $F$ with $\cha F \ne 2$ can
equally well be regarded as a symmetric bilinear space and vice
versa. Such a space $\ensuremath{\mathcal{V}}$ is said to be \emph{non-degenerate} if
the bilinear form $B$ is non-degenerate, i.e.\ if the radical
$\Rad(B) := \{ x \in V \mid \forall y \in V: B(x,y) = 0 \}$ is the
zero subspace. In particular, enumerating non-degenerate flags in a
quadratic space $\ensuremath{\mathcal{V}}$ is the same as counting non-degenerate flags
in the symmetric bilinear space $\ensuremath{\mathcal{V}}$.
In characteristic $2$, however, one has to distinguish more
carefully between quadratic and symmetric bilinear forms. It is
noteworthy that the analogous statement of Theorem~\ref{theorem_A}
for symmetric bilinear spaces does not hold in characteristic $2$:
in the Appendix we display a $4$-dimensional non-degenerate
symmetric bilinear space whose associated `Igusa-type' function does
not satisfy a functional equation.
Now consider quadratic spaces $\ensuremath{\mathcal{V}} = (V,B,f)$ over a field $F$
with $\cha F = 2$. In this context $B$ is alternating and carries
less information than $f$. There are basically two notions of
`non-degeneracy', but unfortunately no standard terminology;
cf.~\cite{Cameron/91}, \cite{Grove/02},
\cite[Appendix~1]{MilnorHusemoller/73}, \cite{Pfister/95}. In this
paper, we call $\ensuremath{\mathcal{V}}$ \emph{non-defective} if the associated
bilinear form $B$ is non-degenerate, i.e.~if the radical $\Rad(B) :=
\{ x \in V \mid \forall y \in V: B(x,y) = 0 \}$ is the zero
subspace. This can be thought of as a strong version of
`non-degeneracy'; in particular, every non-defective quadratic space
is even-dimensional. But enumerating non-defective flags in a
quadratic space over $F$ is the same as counting non-degenerate
flags in the induced alternating bilinear space, so we gain nothing
new. We call a quadratic space $\ensuremath{\mathcal{V}}$ \emph{non-degenerate} if the
restriction of $f$ to the radical $\Rad(B)$ is anisotropic, i.e.~if
for all $x \in \Rad(B)$ either $x = 0$ or $f(x) \ne 0$. This concept
of `non-degeneracy' is more flexible; in particular, there are
non-degenerate quadratic spaces of any given dimension. Moreover,
this turns out to be the right notion to formulate
Theorem~\ref{theorem_A}. In fact, the polynomials $a_\ensuremath{\mathcal{V}}^J(q)$
counting non-degenerate flags of type $J$ are the same in all
characteristics; see Section~\ref{section_orthogonal}.
\end{remark}
\subsubsection{Theorem~\ref{theorem_B}}
In the symplectic and unitary case, we prove a result which is
slightly more general than Theorem~\ref{theorem_A}. Rather than
counting flags which are non-degenerate with respect to a single
non-degenerate sesquilinear form $B$, we study the numbers of flags
which are non-degenerate with respect to a `flag of forms'. Loosely
speaking, a \emph{flag of sesquilinear forms~$\ensuremath{\boldsymbol{B}}$} of
type~$I\subseteq[n-1]$ is a family of sesquilinear forms such that
\begin{itemize}
\item all but the first form are degenerate,
\item each but the last form is defined on the radical of its
successor,
\item the last form is defined on the total space $V$ and
\item the non-zero radicals constitute a flag of type $I$ in
$V$.
\end{itemize}
`Non-degeneracy' is defined inductively; see
Section~\ref{section_symplectic_unitary} for details.
Now let $\ensuremath{\boldsymbol{B}}$ be a sesquilinear flag of forms of type $I \subseteq
[n-1]$ on $V$. We denote by $a_{(V,\ensuremath{\boldsymbol{B}})}^{J}(q)$ the number of flags
of type~$J \subseteq [n-1]$ which are non-degenerate with respect to
$\ensuremath{\boldsymbol{B}}$. In the symplectic case, both the type~$I$ of $\ensuremath{\boldsymbol{B}}$ and all
the sets~$J\subseteq[n-1]$ for which $a^J_{(V,\ensuremath{\boldsymbol{B}})}(q)$ is non-zero
necessarily consist of even numbers. From the \emph{normalised}
polynomials
$$
\alpha_{(V,\ensuremath{\boldsymbol{B}})}^J(q^{-1}) :=
a_{(V,\ensuremath{\boldsymbol{B}})}^J(q)/q^{\deg_qa_{(V,\ensuremath{\boldsymbol{B}})}^J}
$$
and a family of rational functions~${\bf
F}=(F_{J}(\ensuremath{\mathbf{X}}))_{J\subseteq[n-1]}$ with the inversion
property~\eqref{eq_IP} we define, similarly as
above, a rational function
$$
\Ig_{(V,\ensuremath{\boldsymbol{B}})}(q^{-1},\ensuremath{\mathbf{X}}) := \Ig_{(V,\ensuremath{\boldsymbol{B}}),{\bf
F}}(q^{-1},\ensuremath{\mathbf{X}})=\sum_{J\subseteq[n-1]}\alpha_{(V,\ensuremath{\boldsymbol{B}})}^J(q^{-1})F_J(\ensuremath{\mathbf{X}}).
$$
The second main result of this paper is
\begin{thmx}\label{theorem_B}
For each $n$-dimensional vector space~$V$, equipped with a flag of
alternating bilinear \textup{(}respectively hermitian\textup{)}
forms~$\ensuremath{\boldsymbol{B}}$ of type $I=\{i_1,\dots,i_r\}_<\subseteq[n-1]$, the
associated Igusa-type function satisfies the functional equation
\begin{equation*}
\Ig_{(V,\ensuremath{\boldsymbol{B}})}(q,\ensuremath{\mathbf{X}}^{-1})=(-1)^a
q^b\,\Ig_{(V,\widetilde{\ensuremath{\boldsymbol{B}}})}(q^{-1},\ensuremath{\mathbf{X}}),
\end{equation*}
where $\widetilde{\ensuremath{\boldsymbol{B}}}$ is a flag of forms of
type~$\widetilde{I}:=\{n-i \mid i\in I\}$ and the integers $a$ and
$b$ are given by the table below \textup{(}with $m :=
\lfloor\frac{n}{2}\rfloor$\textup{)}.
\begin{figure}[H]
\begin{center}
\begin{tabular}{|c!{\vrule width 1pt}c|c|}\hline
\textup{geometric type} & $a$ & $b$ \\ \hline
\textup{symplectic} &$m-1$&$m(m-1)+((i_2-i_1)i_1+\dots+(n-i_r)i_r)/2$\\
\textup{unitary} &$n-1+b$&$\binom{n}{2} + (i_2-i_1)i_1+\dots+(n-i_r)i_r$\\
\hline\end{tabular}
\end{center}
\end{figure}
\end{thmx}
Note that, for $I=\varnothing$, Theorem~\ref{theorem_B} specialises to
Theorem~\ref{theorem_A} in the symplectic and unitary case,
respectively.
To prove Theorem~\ref{theorem_B} we show that the
functions~$\alpha_{(V,\ensuremath{\boldsymbol{B}})}^J(q^{-1})$ are polynomials which may be
described in terms of a certain \emph{statistic} on the Coxeter group
$W$ of type~$A_{\ensuremath{\gamma} n -1}$. This statistic associates to an element $w
\in W$ the sum of its ordinary length~$l(w)$ with respect to the
standard Coxeter generating set~$S = \{s_1,\dots,s_{\ensuremath{\gamma} n-1}\}$ and its
`parabolic length'~$l_\ensuremath{\textup{L}}^{(\ensuremath{\gamma} \widetilde{I})^c}(w)$. The
\emph{parabolic length} $l_\ensuremath{\textup{L}}^{(\ensuremath{\gamma} \widetilde{I})^c}(w)$ is the
Coxeter length of the distinguished representative of shortest length
in the left coset~$w W_{(\ensuremath{\gamma} \widetilde{I})^c}$ of the standard
parabolic subgroup $W_{(\ensuremath{\gamma} \widetilde{I})^c} = \langle s_i\in S \mid
\ensuremath{\gamma} n - i \not \in \ensuremath{\gamma} I \rangle$.
In fact, in Section~\ref{section_symplectic_unitary} we show that
Theorem~\ref{theorem_B} can be deduced from Theorem~\ref{theorem_1}, a
general result on rational functions defined in terms of linear
combinations of parabolic length functions and characters on certain
subgroups of finite Coxeter groups. Indeed, Theorem~\ref{theorem_1}
extends, to a slightly more general setting, Igusa's key idea of deducing
functional equations from features of the map induced by (right-)
multiplication by the longest element.
Our initial interest in the Igusa-type functions $\Ig_{(V,\ensuremath{\boldsymbol{B}})}$
arose from our study of the zeta functions counting subgroups of
higher Heisenberg groups. In~\cite{KlopschVoll/05} we introduce an
equivalence relation, coarser than homothety, on the set of complete
$\mathbb{Z}_p$-lattices in a non-degenerate symplectic $p$-adic vector
space~$\mathbb{Q}_p^{2m}$ such that equivalence classes of lattices
are in one-to-one correspondence with the vertices of the affine
Bruhat-Tits building for the symplectic
group~$\text{Sp}_{2m}(\mathbb{Q}_p)$. For a flag of forms $\ensuremath{\boldsymbol{B}}$ of
type $I=\{2i\}$, $i\in[m]$, and suitable choices of $\ensuremath{\mathbf{F}}$ the Igusa
functions~$\Ig_{(\ensuremath{\mathbb{F}}_q^{2m},\ensuremath{\boldsymbol{B}}),\ensuremath{\mathbf{F}}}$ may be regarded as generating
functions, enumerating lattices in an equivalence class indexed by a
special vertex of type~$i$. We refer to~\cite{KlopschVoll/05} for
details.
\subsubsection{Conjecture~\ref{conjecture_C}}
In the last part of the paper we formulate a precise conjecture
describing the polynomials $\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1})$ in the
orthogonal case. If it holds, the orthogonal case of
Theorem~\ref{theorem_A} also follows from Theorem~\ref{theorem_1}.
Moreover, a proof of Conjecture~\ref{conjecture_C} would constitute a
first step towards extending Theorem~\ref{theorem_B} to the orthogonal
case.
We introduce the subgroup~$\mathcal{C}_n$ of `chessboard elements' in
the symmetric group $\ensuremath{\mathcal{S}}_n$ on~$n$ letters. A permutation is a
\emph{chessboard element} if the non-zero entries of its associated
permutation matrix all fit either on the black or on the white squares
of an $n\times n$-chessboard. In Section~\ref{section_conjecture} we
define linear characters~$\chi_{\epsilon}$ on~$\mathcal{C}_n$ and a
certain linear combination~$L$ of parabolic length functions
on~$\ensuremath{\mathcal{S}}_n$. By $D_\ensuremath{\textup{L}}(w)$ we denote the left-descent set of the
permutation~$w$ (cf.\ Section~\ref{section_coxeter}).
\begin{conjecture}\label{conjecture_C} For each
$n$-dimensional, non-degenerate quadratic space $\ensuremath{\mathcal{V}}$ and each $J
\subseteq [n-1]$,
$$
\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1}) = \sum_{\substack{w \in \mathcal{C}_n \\
D_\ensuremath{\textup{L}}(w) \subseteq J}} \chi_{\epsilon}(w) q^{-L(w)}.
$$
\end{conjecture}
\subsection{Organisation and Notation}
The structure of the paper is as follows. In
Section~\ref{section_coxeter} we derive functional equations for
rational functions defined in terms of parabolic length functions on
Coxeter groups. Theorem~\ref{theorem_1}, the main result of
Section~\ref{section_coxeter}, is applied to prove
Theorem~\ref{theorem_B} in Section~\ref{section_symplectic_unitary}.
In Section~\ref{section_orthogonal} we prove the orthogonal case of
Theorem~\ref{theorem_A}. In Section~\ref{section_conjecture} we give
a more precise statement of Conjecture~\ref{conjecture_C}. Some
explicit examples of Igusa-type functions can be found in the
Appendix.
\bigskip
\noindent We use the following notation.
\nopagebreak
\medskip
\begin{tabular}{l|l}
$\ensuremath{\mathbb{N}}$ & the set $\{1,2,\dots\}$ of natural numbers \\
$I_0$ & the set $I\cup\{0\}$ for $I\subseteq\ensuremath{\mathbb{N}}$ \\
$[a,b]$ & the interval $\{a,a+1,\dots,b\}$ for integers $a,b$ \\
$[a]$ & the interval $[1,a]$ for an integer $a$ \\
$\{i_1,\dots,i_r\}_{<}$ & the set $\{i_1,\dots,i_r\} \subseteq \ensuremath{\mathbb{N}}_0$ with
$i_1<\dots<i_r$ \\
$x I$ & the set $\{ x i \mid i \in I \}$ for $ I\subseteq
\ensuremath{\mathbb{N}}$ and a rational number $x$ \\
$I^c$ & the set $[n-1]\setminus I$ for $I\subseteq[n-1]$, \\
& \quad where $n$ is clear from the context\\
$\widetilde{I}$ & the set $\{n-i \mid i\in I\}$ for $I\subseteq[n-1]$, \\
& \quad where $n$ is clear from the context\\
$I-\mathbf{t}$ & the set $\{i_1-t_1,\dots,i_r-t_r\} \cap \ensuremath{\mathbb{N}}$ for
$I=\{i_1,\dots,i_r\}_{<} \subseteq \ensuremath{\mathbb{N}}$\\
& \quad and $\mathbf{t} = (t_1,\dots,t_r) \in \ensuremath{\mathbb{N}}_0^r$ \\
$I-j$ & the set $\{i-j \mid i \in I \} \cap \ensuremath{\mathbb{N}}$ for $I \subseteq \ensuremath{\mathbb{N}}$,
$j \in \ensuremath{\mathbb{N}}_0$\\
$\binom{a}{b}$ & the ordinary binomial coefficient for $a, b \in \ensuremath{\mathbb{N}}_0$\\
$\binom{a}{b}_{\! X}$ & the polynomial
$\prod_{i = 0}^{b-1} (1 - X^{a-i})/(1 - X^{b-i})$, \\
& \quad where $a,b \in \ensuremath{\mathbb{N}}_0$ with $a \geq b$ \\
& Note: The \emph{$q$-binomial coefficient} $\binom{a}{b}_{\! q}$ gives \\
& \quad the number of subspaces of dimension $b$ in $\ensuremath{\mathbb{F}}_q^a$.\\
$\binom{n}{J}_{\! X}$ & the polynomial
$\binom{n}{j_s}_{\! X} \binom{j_s}{j_{s-1}}_{\! X} \dots
\binom{j_2}{j_1}_{\! X}$,\\
& \quad where $J = \{j_1,\dots,j_s\}_< \subseteq[n-1]_0$ for $n \in \ensuremath{\mathbb{N}}$\\
& Note: $\binom{n}{J}_{\! q} \text{ gives the number of flags of
type~$J\setminus\{0\}$ in~$\ensuremath{\mathbb{F}}_q^n$.}$\\
$\lfloor x \rfloor$ & the greatest integer not exceeding the
rational number $x$\\
$\mathcal{P}(S)$ & the power set of a set $S$\\
$\ensuremath{\mathcal{S}}_n$ & the symmetric group on $n$ letters.
\end{tabular}
\bigskip
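As a quick illustration of the counting interpretation of the $q$-binomial coefficient (a Python sketch, not part of the paper), one can verify directly over $\ensuremath{\mathbb{F}}_2$ that the $\binom{4}{2}_2 = 35$ two-dimensional subspaces of $\ensuremath{\mathbb{F}}_2^4$ are found by brute-force enumeration:

```python
from itertools import product

def qbinom(a, b, q):
    # Gaussian binomial coefficient binom(a, b)_q evaluated at an integer q
    num = den = 1
    for i in range(b):
        num *= q ** (a - i) - 1
        den *= q ** (b - i) - 1
    return num // den

def subspaces_dim2_gf2(n):
    # every 2-dim subspace of F_2^n is determined by its 3 non-zero
    # vectors {u, v, u+v}; any two distinct non-zero vectors span one
    vecs = [v for v in product((0, 1), repeat=n) if any(v)]
    add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))
    found = set()
    for i, u in enumerate(vecs):
        for v in vecs[i + 1:]:
            found.add(frozenset([u, v, add(u, v)]))
    return found

print(len(subspaces_dim2_gf2(4)), qbinom(4, 2, 2))  # both 35
```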
\noindent Throughout this paper $n \in \ensuremath{\mathbb{N}}$ and $m = \lfloor n/2 \rfloor$. We
shall write
\begin{align*}
I & = \{i_1,\dots,i_r\}_<,& J & = \{j_1,\dots,j_s\}_< & & \text{ for
subsets of $[n-1]$ or $[n]$, and} \\
G & = \{g_1,\dots,g_k\}_<,& H & & & \text{ for
subsets of $[m]$.}
\end{align*}
\section{Rational functions from Coxeter groups}\label{section_coxeter}
In this section we prove functional equations for a family of rational
functions associated to finite Coxeter systems.
Theorem~\ref{theorem_B} will turn out to be a consequence of
Theorem~\ref{theorem_1}, the main result of the current section.
Let $(W,S)$ be a finite Coxeter system of rank $n-1$ with root
system~$\Delta$. To ease notation we will frequently identify the set
of Coxeter generators $S=\{s_1,\dots,s_{n-1}\}$ with the set of
integers~$[n-1]$. For each~$I\subseteq S$ we denote by $W_I$ the
corresponding standard parabolic subgroup of~$W$ generated by the
elements in~$I$ and by~$\Delta_I$ the induced root system. We denote
by $l$ the length function on $W$ with respect to~$S$. The length of
an element~$w$ may either be interpreted as the length of a shortest
word in the elements of~$S$ representing the group element or as the
number of positive roots that are sent to negative roots by~$w$. The
group~$W$ has a unique longest element~$w_0$, whose length equals
$|\Delta|/2$. It is well-known (cf.~\cite[Section~1.8]{Humphreys/90})
that, for each~$w\in W$,
$$
l(w_0w)+l(w) = l(ww_0)+l(w) = l(w_0).
$$
The rational functions studied in this section are defined in terms of
more general length functions. For each $I\subseteq S$ set
\begin{align*}
W^I_{\ensuremath{\textup{L}}}& := \{w\in W \mid \forall s\in I:\,l(ws)>l(w)\},\\
W^I_{\ensuremath{\textup{R}}}& := \{w\in W \mid \forall s\in I:\,l(sw)>l(w)\}.
\end{align*}
We will need the following lemma
(\cite[Proposition~2.1.7]{Scharlau/95}).
\begin{lemma}\label{lemma_1}
Let $I \subseteq S$. Then $W^I_{\ensuremath{\textup{L}}}$ \textup{(}respectively
$W^I_{\ensuremath{\textup{R}}}$\textup{)} is a left \textup{(}respectively
right\textup{)} transversal to $W_I$ in $W$, i.e.\ for every $w \in
W$ there are unique elements
$$
u_\ensuremath{\textup{L}}\in W^I_{\ensuremath{\textup{L}}}, \; v_\ensuremath{\textup{L}}\in W_I \; \text{ and } \; u_\ensuremath{\textup{R}}\in
W^I_{\ensuremath{\textup{R}}}, \; v_\ensuremath{\textup{R}}\in W_I
$$
such that
$$
w = u_\ensuremath{\textup{L}} v_\ensuremath{\textup{L}} = v_\ensuremath{\textup{R}} u_\ensuremath{\textup{R}}.
$$
In particular, $u_\ensuremath{\textup{L}}$ is the unique element of shortest length in
the left coset $w W_I$ and $u_\ensuremath{\textup{R}}$ is the unique element of shortest
length in the right coset $W_I w$. Moreover,
$$
l(w) = l(u_\ensuremath{\textup{L}})+l(v_\ensuremath{\textup{L}})=l(v_\ensuremath{\textup{R}})+l(u_\ensuremath{\textup{R}}).
$$
The elements $u_\ensuremath{\textup{L}} \in w W_I$ and $u_\ensuremath{\textup{R}} \in W_I w$ are also
characterised by the fact that they send positive roots
of~$\Delta_I$ to positive roots.
\end{lemma}
\begin{definition}[Parabolic length]
For each $I\subseteq S$ and $w\in W$ we set
\begin{align*}
l^I_\ensuremath{\textup{L}}(w)&:= l(u_\ensuremath{\textup{L}}),\\
l^I_\ensuremath{\textup{R}}(w)&:= l(u_\ensuremath{\textup{R}}).
\end{align*}
We call $l^I_\ensuremath{\textup{L}}$ (respectively $l^I_\ensuremath{\textup{R}}$) the \emph{left}
(respectively \emph{right}) \emph{parabolic length function} on~$W$
associated to~$I$. We write $\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}:=(l^I_\ensuremath{\textup{L}})_{I\subseteq S}$ and
$\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}:=(l^I_\ensuremath{\textup{R}})_{I\subseteq S}$.
\end{definition}
Note that for $I = \varnothing$ the corresponding parabolic length
functions reduce to the ordinary Coxeter length function:
$l^{\varnothing}_\ensuremath{\textup{L}}=l^{\varnothing}_\ensuremath{\textup{R}}=l$. Moreover,
$l^{S}_\ensuremath{\textup{L}}=l^{S}_\ensuremath{\textup{R}}=0$.
\begin{lemma} \label{lemma_2}
For each $I\subseteq S$ and $w\in W$ we have
\begin{align}
l^I_\ensuremath{\textup{L}}(w_0w)+l^I_\ensuremath{\textup{L}}(w) = l^I_\ensuremath{\textup{L}}(w_0), & \quad
l^I_\ensuremath{\textup{L}}(ww_0)+l_\ensuremath{\textup{L}}^{I^{w_0}}(w) = l^I_\ensuremath{\textup{L}}(w_0),\label{eq_1}\\
l^I_\ensuremath{\textup{R}}(w_0w)+l_\ensuremath{\textup{R}}^{I^{w_0}}(w) = l^I_\ensuremath{\textup{R}}(w_0), & \quad
l^I_\ensuremath{\textup{R}}(ww_0)+l^I_\ensuremath{\textup{R}}(w) = l^I_\ensuremath{\textup{R}}(w_0).\label{eq_2}
\end{align}
\end{lemma}
\begin{proof}
Let $v_0$ denote the longest element in~$W_I$. Then
$$
l^I_\ensuremath{\textup{L}}(w_0)=l(w_0)-l(v_0)=l(w_0)- |\Delta_I|/2.
$$
Write $w=u_\ensuremath{\textup{L}} v_\ensuremath{\textup{L}}$ as in Lemma~\ref{lemma_1}. We may then write
$$
w_0=u'v'v_\ensuremath{\textup{L}}^{-1}u_\ensuremath{\textup{L}}^{-1}
$$
with $u'\in W$, $v'\in W_I$ such that $v'v_\ensuremath{\textup{L}}^{-1}=v_0$. It
follows that
\begin{align*}
l(u') &= l(w_0) -
l(v'v_\ensuremath{\textup{L}}^{-1}u_\ensuremath{\textup{L}}^{-1}) = l(w_0)-l(u_\ensuremath{\textup{L}} v_0)\\
& = l(w_0) - l(u_\ensuremath{\textup{L}}) - l(v_0) = (l(w_0) - l(v_0)) -
l(u_\ensuremath{\textup{L}}) \\
& = l^I_\ensuremath{\textup{L}}(w_0) - l^I_\ensuremath{\textup{L}}(w).
\end{align*}
Clearly, $w_0 w W_I = u' W_I$. But $u'$ sends positive roots of
$\Delta_I$ to positive roots and is thus the unique coset
representative of shortest length. Hence $l^I_\ensuremath{\textup{L}}(w_0w) = l(u') =
l^I_\ensuremath{\textup{L}}(w_0) - l^I_\ensuremath{\textup{L}}(w)$. This gives the first equation in
\eqref{eq_1}.
For the second equation in \eqref{eq_1}, note that conjugation by~$w_0$
yields $l^I_\ensuremath{\textup{L}}(w_0)=l_{\ensuremath{\textup{L}}}^{I^{w_0}}(w_0)$ and thus
$$
l^I_\ensuremath{\textup{L}}(ww_0) = l_{\ensuremath{\textup{L}}}^{I^{w_0}}(w_0w) = l_\ensuremath{\textup{L}}^{I^{w_0}}(w_0)
- l_\ensuremath{\textup{L}}^{I^{w_0}}(w) = l^I_\ensuremath{\textup{L}}(w_0)-l_\ensuremath{\textup{L}}^{I^{w_0}}(w).
$$
We omit the analogous proofs for the equations~\eqref{eq_2}.
\end{proof}
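The following small example, included purely for illustration, makes the parabolic length functions and Lemma~\ref{lemma_2} concrete.
\begin{example}
Let $(W,S)$ be of type~$A_2$, i.e.\ $W=\ensuremath{\mathcal{S}}_3$ with
$S=\{s_1,s_2\}$, and let $I=\{s_2\}$. Then $w_0=s_1s_2s_1$ and
$l^I_\ensuremath{\textup{L}}(w_0)=l(w_0)-l(s_2)=2$. For $w=s_1s_2$ we have
$wW_I=\{s_1s_2,s_1\}$, so $l^I_\ensuremath{\textup{L}}(w)=l(s_1)=1$; moreover
$w_0w=s_1$, so $l^I_\ensuremath{\textup{L}}(w_0w)=1$. Thus
$$
l^I_\ensuremath{\textup{L}}(w_0w)+l^I_\ensuremath{\textup{L}}(w)=2=l^I_\ensuremath{\textup{L}}(w_0),
$$
in accordance with the first equation in~\eqref{eq_1}.
\end{example}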
Another important invariant of an element of a Coxeter group which we
shall need is its (left) \emph{descent set}~$D_\ensuremath{\textup{L}}(w):=\{s\in S \mid
l(sw)<l(w)\}$. Note that
\begin{equation}\label{eq_3}
D_\ensuremath{\textup{L}}(ww_0)=D_\ensuremath{\textup{L}}(w)^c:=\{s\in S \mid s\not\in D_\ensuremath{\textup{L}}(w)\}.
\end{equation}
Elements of Coxeter groups of type~$A$ can be regarded as permutation
matrices. It is noteworthy that both the descent sets and the values
of the various parabolic length functions are easily read off from the
associated matrices.
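For instance, with the conventions used in
Section~\ref{section_symplectic_unitary} (permutations acting on
$\{1,\dots,n\}$ from the right, matrices acting on row vectors), the
element $w=s_1s_2\in\ensuremath{\mathcal{S}}_3$, which maps $1\mapsto 3$, $2\mapsto 1$ and
$3\mapsto 2$, corresponds to the permutation matrix
$$
\left(
\begin{array}{ccc}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array}
\right).
$$
Precisely two zero entries, those in positions $(1,1)$ and $(1,2)$,
are neither below nor to the right of an entry~$1$, so $l(w)=2$; and
the entry~$1$ in the first row lies to the right of the entry~$1$ in
the second row, reflecting $D_\ensuremath{\textup{L}}(w)=\{s_1\}$.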
\begin{lemma} \label{lemma_3}
Let $(F_{J}(\ensuremath{\mathbf{X}}))_{J\subseteq S}$ be a family of rational functions
with the inversion property~\eqref{eq_IP}.
Then, for all $I\subseteq S$,
$$\sum_{ I\subseteq J \subseteq
S}F_{J}(\ensuremath{\mathbf{X}}^{-1})=(-1)^{|S|}\sum_{I^c\subseteq J \subseteq S}
F_{J}(\ensuremath{\mathbf{X}}).$$
\end{lemma}
\begin{proof}
This is an easy calculation. See~\cite[Lemma~7]{Voll/06}.
\end{proof}
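By way of illustration, consider the family given by
$F_J(\ensuremath{\mathbf{X}})=\prod_{j\in J}X_j/(1-X_j)$, which reappears at the end of
this section. For $|S|=1$, writing $X$ for the single variable and
taking $I=\varnothing$, the assertion of Lemma~\ref{lemma_3} may be
checked by hand:
$$
F_\varnothing(X^{-1})+F_S(X^{-1}) = 1+\frac{X^{-1}}{1-X^{-1}} =
\frac{X}{X-1} = -F_S(X),
$$
as $I^c=S$ and $(-1)^{|S|}=-1$.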
We now fix a family of rational functions~$\ensuremath{\mathbf{F}} = (F_J(\ensuremath{\mathbf{X}}))_{J
\subseteq S}$ with the inversion property~\eqref{eq_IP} and an
independent indeterminate $Y$. We choose a family~$\ensuremath{\mathbf{b}} =
(b_I)_{I\subseteq S}$ of integers and define the statistics
$\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}$ and~$\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}$ on~$W$ by setting, for
$w\in W$,
$$
\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w):=\sum_{I\subseteq S}b_Il^I_\ensuremath{\textup{L}}(w), \qquad
\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(w):=\sum_{I\subseteq S}b_Il^I_\ensuremath{\textup{R}}(w).
$$
Similarly, we write $\ensuremath{\mathbf{b}}^{w_0}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}$ and
$\ensuremath{\mathbf{b}}^{w_0}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}$ to denote the statistics associating to~$w$
the elements $\sum_{I\subseteq S} b_I l_{\ensuremath{\textup{L}}}^{I^{w_0}}(w)$ and
$\sum_{I\subseteq S} b_I l_{\ensuremath{\textup{R}}}^{I^{w_0}}(w)$, respectively. Let
$W'\subseteq W$ be a subgroup with $w_0\in W'$, and
$\chi:W'\rightarrow \mathbb{C}^*$ a (linear) character of~$W'$.
\begin{definition}
With the given data we define the following rational functions:
\begin{align*}
\IG_{\ensuremath{\textup{L}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\mathbf{F}}}(Y,\ensuremath{\mathbf{X}})& := \sum_{w\in W'} \chi(w)
Y^{\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w)} \sum_{\substack{D_\ensuremath{\textup{L}}(w) \subseteq J
\subseteq S}}F_J(\ensuremath{\mathbf{X}}), \\
\IG_{\ensuremath{\textup{R}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\mathbf{F}}}(Y,\ensuremath{\mathbf{X}})& := \sum_{w\in W'} \chi(w)
Y^{\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(w)} \sum_{\substack{D_\ensuremath{\textup{L}}(w) \subseteq J
\subseteq S}}F_J(\ensuremath{\mathbf{X}}).
\end{align*}
\end{definition}
The main result of the current section is
\begin{theorem}\label{theorem_1}
The following functional equations hold:
\begin{align}
\IG_{\ensuremath{\textup{L}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\mathbf{F}}}(Y^{-1},\ensuremath{\mathbf{X}}^{-1}) & = (-1)^{|S|}
\chi(w_0) Y^{-\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w_0)}
\IG_{\ensuremath{\textup{L}}}^{W',\ensuremath{\mathbf{b}}^{w_0},\chi,\ensuremath{\mathbf{F}}}(Y,\ensuremath{\mathbf{X}}), \label{eq_4} \\
\IG_{\ensuremath{\textup{R}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\mathbf{F}}}(Y^{-1},\ensuremath{\mathbf{X}}^{-1}) & =(-1)^{|S|}
\chi(w_0) Y^{-\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(w_0)}
\IG_{\ensuremath{\textup{R}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\mathbf{F}}}(Y,\ensuremath{\mathbf{X}}). \label{eq_5}
\end{align}
\end{theorem}
\begin{proof}
The equations
\begin{equation} \label{eq_6}
\begin{split}
\ensuremath{\mathbf{b}}^{w_0}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(ww_0) + \ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w) & =
\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w_0), \\
\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(ww_0) + \ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(w) & =
\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(w_0)
\end{split}
\end{equation}
are immediate consequences of Lemma~\ref{lemma_2}. Therefore,
by~\eqref{eq_6}, by Lemma~\ref{lemma_3} and by~\eqref{eq_3},
\begin{align*}
\IG_{\ensuremath{\textup{L}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\mathbf{F}}} & (Y^{-1},\ensuremath{\mathbf{X}}^{-1}) = \sum_{w\in W'}
\chi(w) Y^{-\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w)} \sum_{\substack{D_\ensuremath{\textup{L}}(w) \subseteq
J \subseteq S}} F_J(\ensuremath{\mathbf{X}}^{-1}) \\
= (-1)^{|S|} & \chi(w_0)^{-1} Y^{-\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w_0)}
\sum_{w\in W'} \chi(ww_0) Y^{\ensuremath{\mathbf{b}}^{w_0}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(ww_0)}
\sum_{\substack{D_\ensuremath{\textup{L}}(ww_0)\subseteq
J\subseteq S}} F_J(\ensuremath{\mathbf{X}})\\
= (-1)^{|S|} & \chi(w_0) Y^{-\ensuremath{\mathbf{b}}\cdot\ensuremath{\mathbf{l}}_\ensuremath{\textup{L}}(w_0)}
\IG_{\ensuremath{\textup{L}}}^{W',\ensuremath{\mathbf{b}}^{w_0},\chi,\ensuremath{\mathbf{F}}}(Y,\ensuremath{\mathbf{X}}).
\end{align*}
The equation~\eqref{eq_5} is proved analogously.
\end{proof}
In this paper we shall see instances of both types of functional
equations presented in Theorem~\ref{theorem_1}. In
Section~\ref{section_symplectic_unitary} we demonstrate that
Theorem~\ref{theorem_B} is a consequence of~\eqref{eq_4}. Note that
in the special case $\ensuremath{\mathbf{F}}=\left(\prod_{j\in
J}\frac{X_j}{1-X_j}\right)_{J\subseteq[n-1]}$, replacing~$\ensuremath{\mathbf{b}}$
by~$\ensuremath{\mathbf{b}}^{w_0}$ in~\eqref{eq_4} simply amounts to inverting the order
of the variables $X_1,\dots,X_{n-1}$. If Conjecture~\ref{conjecture_C}
holds, the orthogonal case of Theorem~\ref{theorem_A} follows
from~\eqref{eq_5}.
\section{The symplectic and unitary case}\label{section_symplectic_unitary}
In this section we study the polynomials enumerating flags which are
non-dege\-nerate with respect to a `flag of sesquilinear forms'. Our
aim is to prove Theorem~\ref{theorem_B}. Let $V$ be an $n$-dimensional
vector space over a field~$F$.
Let~$I=\{i_1,\dots,i_r\}_<\subseteq[n-1]$, and set $i_0 := 0$,
$i_{r+1} := n$.
\begin{definition}[Flag of forms]
We say that~$V$ is equipped with a \emph{flag of alternating
bilinear} (respectively \emph{hermitian}) \emph{forms}
$\ensuremath{\boldsymbol{B}}=(B_{i_1},\dots,B_{i_{r+1}})$ of type~$I$ if there is a
filtration of subspaces
$$
\{0\}=:R_{i_0}\subset R_{i_1}\subset\dots\subset R_{i_r}\subset
R_{i_{r+1}}:=V
$$
such that
\begin{enumerate}
\item[(a)] for all $i\in I$, $\dim R_i=i$;
\item[(b)] for all $ \rho\in [r+1]$, $B_{i_\rho}$ is an alternating
bilinear (respectively hermitian) form $B_{i_\rho}: R_{i_\rho}
\times R_{i_\rho} \rightarrow F$ with
$$
\Rad(B_{i_\rho}):=\{x\in R_{i_\rho} \mid \forall y\in
R_{i_\rho}:\; B_{i_\rho}(x,y)=0\}=R_{i_{\rho-1}}.
$$
\end{enumerate}
We call the sequence $\ensuremath{\mathbf{R}}=(R_{i_1},\dots,R_{i_r})$ the
\emph{flag of radicals} associated to the flag of
forms~$\ensuremath{\boldsymbol{B}}$.
\end{definition}
Note that, given a flag of sesquilinear forms~$\ensuremath{\boldsymbol{B}}$ of type~$I$ on
$V$ with flag of radicals~$\ensuremath{\mathbf{R}}$ and $\rho\in[r+1]$, we have a flag of
forms $(B_{i_1},\dots,B_{i_\rho})$ of type $\{i_1,\dots,i_{\rho-1}\}$
on $R_{i_\rho}$ with flag of radicals $(R_{i_1}, \dots,
R_{i_{\rho-1}})$ and a flag of forms~$(\overline{B_{i_{\rho+1}}},
\dots, \overline{B_{i_{r+1}}})$ of type~$\{i_\varrho - i_\rho \mid
\rho < \varrho \leq r \}$ on $V/R_{i_\rho}$ with flag of radicals
$(R_{i_{\rho+1}}/R_{i_\rho},\dots,R_{i_r}/R_{i_\rho})$.
\begin{definition}[Non-degeneracy] \label{definition_4}
Given a flag of sesquilinear forms~$\ensuremath{\boldsymbol{B}}$ of type~$I$ on~$V$ with
flag of radicals~$\ensuremath{\mathbf{R}}$ as above, we say that a subspace~$U\subseteq
V$ is \emph{non-degenerate} with respect to $\ensuremath{\boldsymbol{B}}$ if for each
$\rho\in[r+1]$,
\begin{enumerate}
\item[(a)] $U\cap R_{i_\rho}$ is non-degenerate with respect to
$(B_{i_1},\dots,B_{i_\rho})$ and
\item[(b)] $(U+R_{i_\rho})/R_{i_\rho}$ is \emph{non-degenerate} with
respect to
$(\overline{B_{i_{\rho+1}}},\dots,\overline{B_{i_{r+1}}})$.
\end{enumerate}
A flag $\mathbf{U}_J = (U_j)_{j\in J}$ of subspaces of~$V$ of type
$J\subseteq[n-1]$, i.e.\ an ascending chain of subspaces with $\dim
U_j =j$ for each~$j\in J$, is said to be \emph{non-degenerate} with
respect to $\ensuremath{\boldsymbol{B}}$ if each of its constituents~$U_j$ is.
\end{definition}
These definitions are illustrated by the following simple example.
\begin{example}
Suppose that $n$ is even, $r=1$ and $i_1\in[n-1]$ is even. Then a
flag of alternating bilinear forms $\ensuremath{\boldsymbol{B}} = (B_{{i_1}},B_n)$ consists
of a (degenerate) alternating bilinear form~$B_n$ on~$V$ with
${i_1}$-dimensional radical $R_{{i_1}}$, which in turn supports a
non-degenerate form~$B_{i_1}$. A subspace $U\subseteq V$ is
non-degenerate with respect to~$\ensuremath{\boldsymbol{B}}=(B_{i_1},B_n)$ if $U\cap
R_{i_1}$ is non-degenerate with respect to~$B_{i_1}$ and
$(U+R_{i_1})/R_{i_1}$ is non-degenerate with respect to
$\overline{B_n}$.
\end{example}
We shall now assume that $V$ is an $n$-dimensional vector space over a
finite field $F$, equipped with a flag of sesquilinear forms~$\ensuremath{\boldsymbol{B}}$ of
type~$I$. As in the introduction we write $\ensuremath{\gamma}=1/2$ in the symplectic
case and $\ensuremath{\gamma}=1$ in the unitary case so that $F = \mathbb{F}_{q^{2\ensuremath{\gamma}}}$
for some prime power $q$. Let $J\subseteq[n-1]$ and
define~$a^J_{(V,\ensuremath{\boldsymbol{B}})}(q)$ to be the number of flags of type~$J$ which
are non-degenerate with respect to $\ensuremath{\boldsymbol{B}}$. We set
$$
\alpha_{(V,\ensuremath{\boldsymbol{B}})}^J(q^{-1}) :=
a^J_{(V,\ensuremath{\boldsymbol{B}})}(q)/q^{\deg_qa_{(V,\ensuremath{\boldsymbol{B}})}^J}
$$
and shall frequently write $a^J_{n,I}(q)$ for
$a^J_{(V,\ensuremath{\boldsymbol{B}})}(q)$ and $\alpha^J_{n,I}(q^{-1})$ for
$\alpha^J_{(V,\ensuremath{\boldsymbol{B}})}(q^{-1})$. Recall that, in the symplectic
case, both the type~$I$ of a flag of forms and all the
sets~$J\subseteq[n-1]$ for which
$a^J_{(V,\ensuremath{\boldsymbol{B}})}(q)$ is non-zero consist necessarily of even
numbers.
\begin{definition} \label{definition_5}
Given a family $\ensuremath{\mathbf{F}}=(F_J(\ensuremath{\mathbf{X}}))_{J\subseteq[n-1]}$ of rational
functions with the inversion property~\eqref{eq_IP} we define
$$
\Ig_{(V,\ensuremath{\boldsymbol{B}})}(q^{-1},\ensuremath{\mathbf{X}}):=\Ig_{(V,\ensuremath{\boldsymbol{B}}),\ensuremath{\mathbf{F}}}(q^{-1},\ensuremath{\mathbf{X}}) =
\sum_{J\subseteq[n-1]}\alpha_{(V,\ensuremath{\boldsymbol{B}})}^J(q^{-1})F_J(\ensuremath{\mathbf{X}}).
$$
\end{definition}
Theorem~\ref{theorem_B} states that these Igusa-type functions satisfy a
functional equation. In the remainder of the current section we show
how this can be deduced from the first assertion of
Theorem~\ref{theorem_1}. Fix a family of rational
functions~$\ensuremath{\mathbf{F}}=(F_J(\ensuremath{\mathbf{X}}))_{J\subseteq [n-1]}$ with the inversion
property~\eqref{eq_IP}, and define
$$
\ensuremath{\gamma} \ensuremath{\mathbf{F}} := (F_{\ensuremath{\gamma}^{-1} J'}(\ensuremath{\mathbf{X}}))_{J' \subseteq [\ensuremath{\gamma} n - 1]}.
$$
Let $W':= W :=\ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n}$ be the full symmetric group on $\ensuremath{\gamma} n$
letters, let $\chi$ be the trivial character on $W'$ and set, for each
$J' \subseteq [\ensuremath{\gamma} n-1]$,
\begin{equation*}
b_{J'} := \delta(\text{`}J' = \varnothing\text{'}) +
\delta(\text{`}J' = \{s_i \mid i \not \in
\ensuremath{\gamma} \widetilde{I} \}\text{'}),
\end{equation*}
where the Kronecker delta $\delta(E) \in \{0,1\}$ indicates whether or
not the statement~$E$ holds.
In order to derive Theorem~\ref{theorem_B}
from \eqref{eq_4} it suffices to show that
$$
\Ig_{(V,\ensuremath{\boldsymbol{B}}),\ensuremath{\mathbf{F}}}(q^{-1},\ensuremath{\mathbf{X}}) = \IG_{\ensuremath{\textup{L}}}^{W',\ensuremath{\mathbf{b}},\chi,\ensuremath{\gamma} \ensuremath{\mathbf{F}}}
((-q)^{-1/\ensuremath{\gamma}},\ensuremath{\mathbf{X}})
$$
for the given data $W'$, $\ensuremath{\mathbf{b}}$, $\chi$ and $\ensuremath{\gamma} \ensuremath{\mathbf{F}}$. Clearly, it is enough
to prove
\begin{proposition}
Let $J\subseteq[n-1]$ be such that $\ensuremath{\gamma} J\subseteq[\ensuremath{\gamma} n-1]$. Then
\begin{equation} \label{eq_7}
\alpha_{n,I}^J(q^{-1}) = a_{n,I}^J(q) / {q^{2\ensuremath{\gamma}\deg_q{\binom{
n}{J}_{\! q}}}} = \sum_{\substack{w\in \ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n} \\ D_\ensuremath{\textup{L}}(w) \subseteq
\ensuremath{\gamma} J}} Y^{\lambda_{n,I}(w)},
\end{equation}
where $Y:=(-q)^{-1/\ensuremath{\gamma}}$ and $\lambda_{n,I}(w) := l(w) + l^{(\ensuremath{\gamma}
\widetilde{I})^c}_{\ensuremath{\textup{L}}}(w)$ for all $w \in W$.
\end{proposition}
\begin{proof}
We will first prove \eqref{eq_7} in the case $I =
\varnothing$. The proof consists of a simple index computation in
the respective isometry group, i.e.\ in the symplectic group
$\Sp_{n}(\ensuremath{\mathbb{F}}_q)$ or the unitary group
$\textup{U}_n(\mathbb{F}_{q^2})$. The proof in the general case is
then based on a recursive expression for the numbers~$a_{n,I}^J(q)$.
So assume that $I=\varnothing$. Then $\ensuremath{\boldsymbol{B}} = (B)$ simply specifies a
non-degenerate alternating bilinear (respectively hermitian) form on
$V$. The respective isometry group $\Sp_{n}(\ensuremath{\mathbb{F}}_q)$ or
$\textup{U}_n(\mathbb{F}_{q^2})$ acts transitively on the
non-degenerate flags of type~$J$, so it suffices to compute the
stabiliser of any one of them. We construct a `standard'
non-degenerate flag ${\bf U}_J:=(U_{j_1},\dots,U_{j_s})$ of type~$J
= \{ j_1, \dots, j_s \}_<$ in the following way. In the symplectic
case, choose a symplectic basis $E =
(e_1,f_1,\dots,e_{n/2},f_{n/2})$ for~$V$ (i.e.\
$B(e_i,f_j)=\delta_{ij}$, $B(e_i,e_j) = B(f_i,f_j)=0$) and set
$U_j:=\langle e_1,f_1,\dots,e_{j/2},f_{j/2} \rangle$ for $j \in J$.
In the unitary case, choose a unitary basis $E = (e_1,\dots,e_n)$
for~$V$ (i.e.\ $B(e_i,e_j)=\delta_{ij}$) and set $U_j:=\langle
e_1,\dots,e_j\rangle$ for $j \in J$. It is not difficult to verify
that an element of the respective isometry group of $(V,B)$
stabilises $\ensuremath{\mathbf{U}}_J$ if and only if its matrix $M_n$ with respect to
the basis $E$ is of block diagonal form
$$
M_n =
\left(
\begin{array}{cccc}
M_{j_1} & & & \\
& M_{j_2-j_1}& & \\
& & \ddots& \\
& & & M_{n-j_s} \\
\end{array}
\right)
$$
with $M_{j_\sigma-j_{\sigma-1}}$ in the respective smaller
isometry group for all $\sigma\in[s+1]$, where $j_0:=0, j_{s+1}:=n$.
Thus
$$
a^J_{n,\varnothing}(q) =
\begin{cases}
|\text{Sp}_{n}(\ensuremath{\mathbb{F}}_q)| / \prod_{\sigma\in[s+1]}
|\text{Sp}_{j_\sigma-j_{\sigma-1}}(\ensuremath{\mathbb{F}}_q)| & \text{in the
symplectic case,} \\
|\text{U}_{n}(\mathbb{F}_{q^2})| / \prod_{\sigma\in[s+1]}
|\text{U}_{j_\sigma-j_{\sigma-1}}(\mathbb{F}_{q^2})| & \text{in
the unitary case.}
\end{cases}
$$
Employing the well-known formulae (cf.~\cite[p.~147]{Artin/57},
\cite[Theorems~3.12 and~11.28]{Grove/02})
\begin{align*}
|\Sp_{n}(\ensuremath{\mathbb{F}}_q)| & = q^{\binom{n+1}{2}}\prod_{i\in[n/2]} (1-q^{-2i}),\\
|\textup{U}_n(\mathbb{F}_{q^2})| & = q^{n^2}
\prod_{i\in[n]}(1-(-q^{-1})^i)
\end{align*}
and using the notation $Y = (-q)^{- 1/ \ensuremath{\gamma}}$ we obtain
$$
\alpha_{n,\varnothing}^J(q^{-1}) = \frac{\prod_{i\in[\ensuremath{\gamma} n]} (1 -
Y^i)}{\prod_{\sigma\in[s+1]} \prod_{\iota \in [ \ensuremath{\gamma}
(j_\sigma-j_{\sigma-1})]} (1-Y^\iota)} = \binom{\ensuremath{\gamma} n}{\ensuremath{\gamma} J}_{\!\! Y}.
$$
It is equally well-known~(cf.~\cite[Example~2.2.5]{Stanley/97})
that Gaussian polynomials may be expressed in terms of Coxeter
length functions on symmetric groups:
$$
\binom{\ensuremath{\gamma} n}{\ensuremath{\gamma} J}_{\!\! Y} = \sum_{\substack{w\in\ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n} \\
D_\ensuremath{\textup{L}}(w)\subseteq \ensuremath{\gamma} J}} Y^{l(w)}.
$$
Equation~\eqref{eq_7} follows in the particular case $I =
\varnothing$, as $l_{\ensuremath{\textup{L}}}^{(\ensuremath{\gamma} \widetilde{I})^c} = l_{\ensuremath{\textup{L}}}^S=0$
and $\lambda_{n,I} = l$.
We now treat the general case $I=\{i_1,\dots,i_r\}_<\subseteq[n-1]$.
To prove \eqref{eq_7} we argue by induction on~$n$. The base step
$n = 0$ is trivial, so suppose that $n > 0$. We may further assume
that $J = \{ j_1, \dots, j_s \}_< \not = \varnothing$ and we define
$j := j_1 = \min J$. Our first aim is to derive a recursive formula
for~$a_{n,I}^J(q)$, using the formula we obtained in the special
case~$I=\varnothing$. For this purpose we determine the
possible first terms $U_j$ of the flags $\mathbf{U}_J$ we intend to
count. Then we consider in how many ways each such space $U_j$ can
be completed to yield a full flag $\mathbf{U}_J$.
Let $T$ be the set of all $r$-tuples $\mathbf{t} = (t_1,\dots,t_r)
\in ([j]_0)^r$ such that
\begin{equation} \label{eq_8}
\begin{split}
& t_1 \leq \dots \leq t_r, \qquad \ensuremath{\gamma} \{ t_1, \dots,
t_r \} \subseteq [\ensuremath{\gamma} j]_0 \qquad \text{and} \\
& \forall \rho\in[r+1]:\; j - (n - i_\rho) - t_{\rho-1} \leq
t_\rho-t_{\rho-1} \leq i_\rho-i_{\rho-1},
\end{split}
\end{equation}
where $i_0 = t_0 = 0$ and $i_{r+1}:=n, t_{r+1}:=j$. These
`admissible' tuples encode the possible dimensions of the
intersections $U_j \cap R_{i_\rho}$ of a $j$-dimensional subspace
$U_j$ of $V$, non-degenerate with respect to $\ensuremath{\boldsymbol{B}}$, with the
members $R_{i_\rho}$ of the flag of radicals associated to
$\ensuremath{\boldsymbol{B}}$. Recalling that the underlying field $F$ has
cardinality $q^{2 \ensuremath{\gamma}}$ and applying \eqref{eq_7} for
$I = \varnothing$, we note that for each $\mathbf{t} \in T$ there
are precisely
\begin{equation}\label{eq_9}
\begin{split}
A_{n,I}^\mathbf{t}(q) & = \prod_{\rho\in[r+1]}
a_{(i_\rho-i_{\rho-1}),\varnothing}^{\{t_\rho-t_{\rho-1}\}}
(q) \, q^{2\ensuremath{\gamma}(t_\rho-t_{\rho-1})(i_{\rho-1}-t_{\rho-1})} \\
& = \prod_{\rho \in [r+1]} q^{2\ensuremath{\gamma} (t_\rho - t_{\rho -1}) (i_\rho -
t_\rho) } \prod_{\rho \in [r+1]} \sum_{\substack{ w \in
\ensuremath{\mathcal{S}}_{\ensuremath{\gamma}(i_\rho-i_{\rho-1})} \\ D_\ensuremath{\textup{L}}(w) \subseteq \ensuremath{\gamma}
\{t_{\rho}-t_{\rho-1}\}}} Y^{l(w)}
\end{split}
\end{equation}
subspaces $U_j$, non-degenerate with respect to $\ensuremath{\boldsymbol{B}}$, such
that $\dim( U_j \cap R_{i_\rho} ) = t_\rho$ for all $\rho \in
[r+1]$. Given such a subspace $U_j$, the number of non-degenerate
flags $\mathbf{U}_J$ with first term $U_j$ can be described
inductively, using the notation $J - j = \{ j_2 - j, \dots, j_s - j
\}$ and $I - \mathbf{t} = \{i_1-t_1,\dots,i_r-t_r\} \cap \ensuremath{\mathbb{N}}$; it equals
\begin{equation}\label{eq_10}
a^{J-j}_{n-j,I-\mathbf{t}}(q) =
q^{2\ensuremath{\gamma}\deg_q\binom{n-j}{J-j}_{\! q}}\sum_{\substack{w\in\ensuremath{\mathcal{S}}_{\ensuremath{\gamma}(n-j)} \\
D_\ensuremath{\textup{L}}(w)\subseteq\ensuremath{\gamma}(J-j)}} Y^{\lambda_{n-j,I-\mathbf{t}}(w)}.
\end{equation}
For $\mathbf{t} \in T$, apply equations~\eqref{eq_9} and
\eqref{eq_10} together with the identities
$$
\sum_{\rho\in[r+1]}(t_{\rho}-t_{\rho-1})(i_{\rho}-t_{\rho}) =
j(n-j)-\sum_{\rho\in[r]}t_\rho(i_{\rho+1}-i_\rho-(t_{\rho+1}-t_\rho))
$$
and
$$
\binom{n}{J}_{\!\! q} = \binom{n}{j}_{\!\! q} \binom{n-j}{J-j}_{\!\! q},\quad
\deg_q\binom{n}{j}_{\!\! q}=j(n-j)
$$
to obtain
\begin{equation} \label{eq_11}
\begin{split}
\alpha_{n,I}^J(q^{-1}) & = a_{n,I}^J(q) / q^{2\ensuremath{\gamma} \deg_q
\binom{n}{J}_{\! q}} \\
& = q^{-2\ensuremath{\gamma} \deg_q \binom{n}{J}_{\! q}} \; \sum_{\mathbf{t} \in T}
A_{n,I}^\mathbf{t}(q) \;
a_{n-j,I-\mathbf{t}}^{J-j}(q) \\
& = \sum_{\mathbf{t} \in T} \; Y^{2\left(\sum_{\rho\in[r]}\ensuremath{\gamma}
t_\rho(\ensuremath{\gamma}(i_{\rho+1}-i_\rho)
- \ensuremath{\gamma}(t_{\rho+1} - t_\rho))\right)} \\
& \quad \cdot \Big( \sum_{\substack{w\in\ensuremath{\mathcal{S}}_{\ensuremath{\gamma}(n-j)} \\ D_\ensuremath{\textup{L}}(w)
\subseteq \ensuremath{\gamma}(J-j)}} \!\!\!\! Y^{\lambda_{n-j,I-\mathbf{t}}(w)}
\Big) \Big( \prod_{\rho\in[r+1]}
\sum_{\substack{w\in\ensuremath{\mathcal{S}}_{\ensuremath{\gamma}(i_\rho-i_{\rho-1})} \\
D_\ensuremath{\textup{L}}(w) \subseteq \ensuremath{\gamma} \{t_{\rho}-t_{\rho-1}\}}} \!\!\!\!
Y^{l(w)} \Big).
\end{split}
\end{equation}
It remains to show that the right hand side of
equation~\eqref{eq_11} may be written as a sum over the
elements in the symmetric group~$\ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n}$ whose left descent set
is contained in~$\ensuremath{\gamma} J$. In the following considerations we shall
identify permutations $w\in\ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n}$ (acting on $\{1, \dots, \ensuremath{\gamma}
n\}$ from the right) with the corresponding $\ensuremath{\gamma} n \times \ensuremath{\gamma}
n$-permutation matrices (acting on the set of standard row vectors
by right-multiplication). Observe that for any element $w\in
\ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n}$ with $D_\ensuremath{\textup{L}}(w)\subseteq \ensuremath{\gamma} J$ the corresponding
permutation matrix is ascending on the first~$\ensuremath{\gamma} j$ rows. Define
$$
\mathbf{t} = \mathbf{t}(w) = (t_1,\dots,t_r)
$$
by
$$
t_\rho := \ensuremath{\gamma}^{-1} |\{ \varrho \in[\ensuremath{\gamma} j] \mid \varrho^w > \ensuremath{\gamma}
(n-i_\rho)\}| \quad \text{for all $\rho\in[r]$,}
$$
and set $t_0 := 0$, $t_{r+1} := j$. Then $\mathbf{t} \in T$,
as~$n - i_\rho \geq j - t_\rho$ for all~$\rho\in[r+1]$ and thus
$\mathbf{t}$ satisfies~\eqref{eq_8}. Applying suitable elementary
column operations to $w$ corresponding to right multiplication by
elements of the parabolic subgroup $W_{(\ensuremath{\gamma} \widetilde{I})^c}$, it is
easily seen that there are unique elements~$u,v\in \ensuremath{\mathcal{S}}_{\ensuremath{\gamma} n}$ such
that
\begin{enumerate}
\item[(a)] $w=uv$ and $l(w)=l(u)+l(v)$;
\item[(b)] for all $\rho \in [r+1]$:
$$
\varrho \in [ \ensuremath{\gamma}(j-t_\rho)+1, \ensuremath{\gamma}(j-t_{\rho-1}) ] \iff
\varrho^u = \varrho + \ensuremath{\gamma}(n-i_\rho);
$$
\item[(c)] $v \in W_{(\ensuremath{\gamma} \widetilde{I})^c}$, i.e.\ for all $\rho \in
[r+1]$:
$$
\varrho \in [ \ensuremath{\gamma}(n-i_\rho)+1, \ensuremath{\gamma}(n-i_{\rho-1}) ] \iff
\varrho^v \in [\ensuremath{\gamma}(n-i_\rho)+1, \ensuremath{\gamma}(n-i_{\rho-1}) ],
$$
and $D_\ensuremath{\textup{L}}(v)\subseteq \{\ensuremath{\gamma}(n-i_\rho)+\ensuremath{\gamma}(t_\rho-t_{\rho-1}) \mid
\rho\in[r+1]\}$.
\end{enumerate}
This is best seen in terms of permutation matrices. We write $\Id_s$
for the $s\times s$-unit matrix. Then the permutation matrix~$u$
has the shape
\begin{equation}\label{eq_12}
\left(
\begin{array}{c|c!{\vrule width 1pt}c!{\vrule width 1pt}c|c!{\vrule
width 1pt}c|c}
\Id_{\ensuremath{\gamma}(j-t_r)}& & \dots & & & & \\
\hline
& & & & & & \\
\hline
& & \ddots & & & & \\
\hline
& & & \Id_{\ensuremath{\gamma}(t_2-t_1)} & & & \\
\hline
& & & & & \Id_{\ensuremath{\gamma} t_1} & \\
\hline
& u_{r+1} & \dots & & u_2 & & u_1
\end{array}
\right),
\end{equation}
where $u_\rho$ is a $\ensuremath{\gamma} (n-j) \times \ensuremath{\gamma} ((i_\rho - i_{\rho-1}) -
(t_\rho - t_{\rho-1}))$-matrix for $\rho \in [r+1]$. The
permutation matrix~$v$ has the form
$$
\left(
\begin{array}{c!{\vrule width 1pt}c!{\vrule width 1pt}c!{\vrule width 1pt}c}
v_{r+1} & & & \\
\hline
& \ddots & & \\
\hline
& & v_2 & \\
\hline
& & & v_1
\end{array}
\right),
$$
where $v_\rho$ is a $\ensuremath{\gamma}(i_\rho - i_{\rho-1}) \times \ensuremath{\gamma}(i_\rho -
i_{\rho-1})$-permutation matrix with at most one descent for $\rho
\in [r+1]$. We may thus identify~$v$ with
$$
(v_1,\dots,v_{r+1})\in \ensuremath{\mathcal{S}}_{\ensuremath{\gamma} i_1}\times
\ensuremath{\mathcal{S}}_{\ensuremath{\gamma}(i_2-i_1)}\times\dots\times\ensuremath{\mathcal{S}}_{\ensuremath{\gamma}(n-i_r)}
$$
and have, by slight abuse of notation, for each $\rho\in[r+1]$,
\begin{equation}\label{eq_13}
D_\ensuremath{\textup{L}}(v_\rho) \subseteq \{\ensuremath{\gamma}(t_\rho-t_{\rho-1})\} \cap
[\ensuremath{\gamma}(i_\rho-i_{\rho-1})-1].
\end{equation}
\begin{remark}
The above decomposition $w=uv$ is \emph{not} the one from
Lemma~\ref{lemma_1}. It is important for our purpose that each
$v_\rho$ has at most one descent.
\end{remark}
Note that, by deleting the first~$\ensuremath{\gamma} j$ rows and respective columns
in~\eqref{eq_12}, the element $u$ determines a unique $\ensuremath{\gamma}(n-j) \times
\ensuremath{\gamma}(n-j)$-permutation matrix
$$
u' :=
\left(
\begin{array}{cccc}
u_{r+1} & \cdots & u_2 & u_1
\end{array}
\right)
$$
with descent set~$D_\ensuremath{\textup{L}}(u') = D_\ensuremath{\textup{L}}(w) - \ensuremath{\gamma} j$.
As we indicated in Section~\ref{section_coxeter}, it is easy to
determine the length of a permutation given by a permutation matrix:
it is simply the number of entries~$0$ in the matrix which are not
below or to the right of an entry~$1$. Thus
\begin{align*}
l(u) & = \sum_{\rho \in [r]} \ensuremath{\gamma} t_\rho (\ensuremath{\gamma} (i_{\rho+1}-i_\rho) -
\ensuremath{\gamma} (t_{\rho+1}-t_\rho))+l(u'), \\
l(v) & = \sum_{\rho\in[r+1]} l(v_\rho).
\end{align*}
Moreover, the parabolic length of $w$ with respect to $(\ensuremath{\gamma}
\widetilde{I})^c$ is determined by $\mathbf{t}$ and by the parabolic
length of $u'$ with respect to $(\ensuremath{\gamma}(\widetilde{I - \mathbf{t}}))^c$:
\begin{equation*}
l_\ensuremath{\textup{L}}^{(\ensuremath{\gamma} \widetilde{I})^c}(w) = \sum_{\rho\in[r]} \ensuremath{\gamma} t_\rho (
\ensuremath{\gamma}(i_{\rho+1}-i_\rho) - \ensuremath{\gamma} (t_{\rho+1}-t_\rho) ) +
l_\ensuremath{\textup{L}}^{(\ensuremath{\gamma}(\widetilde{I - \mathbf{t}}))^c}(u').
\end{equation*}
This gives
\begin{equation} \label{eq_14}
\begin{split}
\lambda_{n,I}(w) & = (l(u) + l(v)) + l_\ensuremath{\textup{L}}^{(\ensuremath{\gamma} \widetilde{I})^c}(w) \\
& = 2 \sum_{\rho\in[r]} \ensuremath{\gamma} t_\rho (\ensuremath{\gamma} (i_{\rho+1}-i_\rho) - \ensuremath{\gamma}
(t_{\rho+1}-t_\rho) ) \\
& \quad + \lambda_{n-j,{I-\mathbf{t}}}(u') +
\sum_{\rho\in[r+1]} l(v_\rho).
\end{split}
\end{equation}
Conversely, any $\mathbf{t} \in T$ and any permutations $v_1, \dots,
v_{r+1}, u'$ of the appropriate degrees such that \eqref{eq_13}
holds give rise to a permutation~$w$ satisfying
$$
D_\ensuremath{\textup{L}}(w) \subseteq (D_\ensuremath{\textup{L}}(u')+\ensuremath{\gamma} j)\cup\{\ensuremath{\gamma} j\}.
$$
Thus \eqref{eq_14} shows that the right hand side of
\eqref{eq_11} is indeed equal to the right hand side of
\eqref{eq_7}.
\end{proof}
\section{The orthogonal case}\label{section_orthogonal}
Our aim in this section is to complete the proof of
Theorem~\ref{theorem_A}. We consider non-degenerate quadratic spaces
$\ensuremath{\mathcal{V}} = (V,B,f)$, where $V$ is an $n$-dimensional vector space over
the finite field~$F = \ensuremath{\mathbb{F}}_q$, equipped with a quadratic form $f$, and
$B$ denotes the bilinear form obtained by polarising $f$. So for
all $x,y \in V$,
$$
B(x,y) =
\begin{cases}
f(x+y) + f(x) + f(y) & \text{if $\cha F = 2$,} \\
\frac{1}{2} \left( f(x+y) - f(x) - f(y) \right) & \text{if $\cha
F \ne 2$.}
\end{cases}
$$
If $\cha F \ne 2$, then $B$ is non-degenerate symmetric and, as
$f(x) = B(x,x)$ for all $x \in V$, the quadratic form $f$ can easily
be recovered from $B$. If $\cha F = 2$, then $B$ is alternating,
possibly degenerate and carries less information than $f$.
Non-degenerate quadratic spaces over finite fields have been
classified and can be described up to isomorphism as follows;
cf.~\cite[p.~144]{Artin/57}, \cite[Section~3.3]{Cameron/91}. If $\cha
F \ne 2$, then for any given dimension~$n$ there are two possible
isomorphism types of non-degenerate quadratic spaces $\ensuremath{\mathcal{V}} =
(V,B,f)$, namely
\begin{align*}
\text{for $n$ odd:} && \ensuremath{\mathcal{V}} & = \ensuremath{\mathcal{H}}_1 \perp \dots \perp
\ensuremath{\mathcal{H}}_m \perp \ensuremath{\mathcal{A}}_{1,1}, \\
&& \ensuremath{\mathcal{V}} & = \ensuremath{\mathcal{H}}_1 \perp \dots \perp \ensuremath{\mathcal{H}}_m \perp \ensuremath{\mathcal{A}}_{1,-1}, \\
\text{for $n$ even:} && \ensuremath{\mathcal{V}} & = \ensuremath{\mathcal{H}}_1 \perp \dots
\perp \ensuremath{\mathcal{H}}_{m-1} \perp \ensuremath{\mathcal{H}}_m, \\
&& \ensuremath{\mathcal{V}} & = \ensuremath{\mathcal{H}}_1 \perp \dots \perp \ensuremath{\mathcal{H}}_{m-1} \perp \ensuremath{\mathcal{A}}_2,
\end{align*}
where $m=\lfloor \frac{n}{2} \rfloor$, the $\ensuremath{\mathcal{H}}_i$ denote hyperbolic
planes, $\ensuremath{\mathcal{A}}_{1,1}$ (respectively $\ensuremath{\mathcal{A}}_{1,-1}$) stands for an
anisotropic line $\langle x \rangle$ with $f(x) \in (F^*)^2$
(respectively $f(x) \in F^* \setminus (F^*)^2$) and $\ensuremath{\mathcal{A}}_2$ is an
anisotropic plane. For the purpose of counting non-degenerate flags in
quadratic spaces~$\ensuremath{\mathcal{V}}$ of given odd dimension, there is no
significant difference between the two possible isomorphism types.
We now discuss the case $\cha F = 2$. Then the above list still
provides all isomorphism types of non-degenerate quadratic spaces, but
becomes one term shorter: as every element of $F$ is a square, in
any given odd dimension there is (up to isomorphism) just one
non-degenerate quadratic space. In any given even dimension there are
still two isomorphism types. Note also that non-degenerate quadratic
spaces of odd dimension are defective with $1$-dimensional radical,
whereas non-degenerate quadratic spaces of even dimension are
non-defective.
Returning to the task of proving Theorem~\ref{theorem_A}, we recall
from the introduction that, in the even-dimensional case, we attach a
sign~$\epsilon=1$ or $\epsilon=-1$ to~$\ensuremath{\mathcal{V}}$ according to whether the
anisotropic kernel of~$\ensuremath{\mathcal{V}}$ is $0$- or $2$-dimensional. More
suggestively, we write~$a^J_{2m+1}(q) := a^J_{\ensuremath{\mathcal{V}}}(q)$ if $n = 2m+1$
is odd and, similarly, $a^J_{2m,\epsilon}(q) := a^J_{\ensuremath{\mathcal{V}}}(q)$ if~$n
= 2m$ is even and $\ensuremath{\mathcal{V}}$ of type~$\epsilon$. We are interested in the
polynomials
\begin{align*}
\alpha^J_{2m+1}(q^{-1}) & := a^J_{2m+1}(q)/q^{\deg_qa^J_{2m+1}},
\\
\alpha^J_{2m,\epsilon}(q^{-1}) & :=
a^J_{2m,\epsilon}(q)/q^{\deg_qa^J_{2m,\epsilon}}.
\end{align*}
\begin{definition}\label{definition_6}
Given a family $\ensuremath{\mathbf{F}}=\left(F_J(\ensuremath{\mathbf{X}})\right)_{J\subseteq[n-1]}$ of
rational functions with the inversion property~\eqref{eq_IP} we
define respectively
\begin{align*}
\Ig_{2m+1}(q^{-1},\ensuremath{\mathbf{X}}) :=\Ig_{2m+1,\ensuremath{\mathbf{F}}}(q^{-1},\ensuremath{\mathbf{X}}) &:=
\sum_{J\subseteq [n-1]}\alpha_{2m+1}^J(q^{-1})F_J(\ensuremath{\mathbf{X}}), \\
\Ig_{2m,\epsilon}(q^{-1},\ensuremath{\mathbf{X}}) :=
\Ig_{2m,\epsilon,\ensuremath{\mathbf{F}}}(q^{-1},\ensuremath{\mathbf{X}}) & := \sum_{J\subseteq
[n-1]}\alpha_{2m,\epsilon}^J(q^{-1})F_J(\ensuremath{\mathbf{X}}).
\end{align*}
\end{definition}
To streamline notation, we will sometimes add in the odd-dimensional
case a superfluous $\epsilon$ to expressions like $a^J_n(q)$,
$\alpha^J_n(q^{-1})$ or $\Ig_n(q^{-1},\ensuremath{\mathbf{X}})$, thus writing e.g.\
$a^J_{n,\epsilon}(q)$, $\alpha^J_{n,\epsilon}(q^{-1})$ or
$\Ig_{n,\epsilon}(q^{-1},\ensuremath{\mathbf{X}})$, irrespective of the parity of $n$.
We now fix a family of rational functions~$\ensuremath{\mathbf{F}} = (F_J(\ensuremath{\mathbf{X}}))_{J
\subseteq [n-1]}$ with the inversion property~\eqref{eq_IP}. The
assertion of Theorem~\ref{theorem_A} in the orthogonal case then takes
the following form.
\begin{theorem} \label{theorem_2}
The Igusa-type functions satisfy functional equations
\begin{align*}
\Ig_{2m+1}(q, \ensuremath{\mathbf{X}}^{-1}) & = (-1)^m q^{m^2+m} \Ig_{2m+1}(q^{-1},\ensuremath{\mathbf{X}}),\\
\Ig_{2m,\epsilon}(q, \ensuremath{\mathbf{X}}^{-1})& = -\epsilon (-1)^m q^{m^2}
\Ig_{2m,\epsilon}(q^{-1},\ensuremath{\mathbf{X}}).
\end{align*}
\end{theorem}
We first give an outline of the proof of Theorem~\ref{theorem_2},
deferring precise definitions for a moment. In
Proposition~\ref{proposition_3} we derive explicit
formulae for the polynomials $\alpha^J_{n,\epsilon}(q^{-1})$ from the
well-known formulae for the orders of the orthogonal groups. A key
observation is that the map $J \mapsto \alpha^J_{n,\epsilon}(q^{-1})$
factors over a `bisecting' map $\phi: \mathcal{P}([n]) \rightarrow
\mathcal{P}([m])$. We are thus led to define, for $G \subseteq [m]$,
$I \in \phi^{-1}(G)$,
\begin{equation*}
\alpha^{\uparrow G}_{n,\epsilon}(q^{-1}) :=
\alpha^I_{n,\epsilon}(q^{-1})
\end{equation*}
and
$$
F_{\phi^{-1}(G)}(\ensuremath{\mathbf{X}}):=\sum_{I\in\phi^{-1}(G)}F_I(\ensuremath{\mathbf{X}})
$$
so that
$$
\Ig_{n,\epsilon}(q^{-1},\ensuremath{\mathbf{X}}) = \sum_{G\subseteq[m]}
\alpha^{\uparrow G}_{n,\epsilon}(q^{-1}) F_{\phi^{-1}(G)}(\ensuremath{\mathbf{X}}).
$$
As we shall see, any subset~$G \subseteq [m]$ induces in a natural
way a composition $C(G) := C(G,m)$ of a non-negative integer $N(G)
\leq m$. For $G,H \subseteq [m]$, we denote by $\|G\|$ the number of
parts of $C(G)$ and by $c_{G,H}$ the number of ways the
composition~$C(H)$ refines a truncation of the composition~$C(G)$. We
then prove the following `inversion equations'.
\begin{proposition}\label{proposition_2}
\begin{enumerate}
\item[(i)] For each~$H\subseteq[m]$,
\begin{equation*}
F_{\phi^{-1}(H)}(\ensuremath{\mathbf{X}}^{-1}) = (-1)^{n-1 + \|H\|} \sum_{G\subseteq[m]}
c_{G,H} F_{\phi^{-1}(G)}(\ensuremath{\mathbf{X}}).
\end{equation*}
\item[(ii)] For each~$G\subseteq[m]$,
\begin{equation*}
\begin{split}
\alpha^{\uparrow G}_{2m+1}(q) & = (-1)^m q^{m^2+m} \sum_{H
\subseteq [m]} (-1)^{\|H\|} c_{G,H} \, \alpha^{\uparrow
H}_{2m+1}(q^{-1}), \\
\alpha^{\uparrow G}_{2m,\epsilon}(q) & = \epsilon (-1)^m q^{m^2}
\sum_{H \subseteq [m]} (-1)^{\|H\|} c_{G,H} \, \alpha^{\uparrow
H}_{2m,\epsilon} (q^{-1}).
\end{split}
\end{equation*}
\end{enumerate}
\end{proposition}
Theorem~\ref{theorem_2} is an immediate consequence of
Proposition~\ref{proposition_2}: indeed, in the
odd-dimensional case,
\begin{align*}
\Ig_{2m+1}(q,\ensuremath{\mathbf{X}}^{-1}) & = \sum_{G\subseteq[m]} \,
\alpha^{\uparrow G}_{2m+1}(q) F_{\phi^{-1}(G)}(\ensuremath{\mathbf{X}}^{-1}) \\
& = (-1)^m q^{m^2+m} \sum_{G,H\subseteq[m]} (-1)^{\|H\|}c_{G,H} \,
\alpha^{\uparrow H}_{2m+1}(q^{-1}) F_{\phi^{-1}(G)}(\ensuremath{\mathbf{X}}^{-1}) \\
& = (-1)^m q^{m^2+m} \sum_{H\subseteq[m]} \alpha^{\uparrow
H}_{2m+1}(q^{-1}) F_{\phi^{-1}(H)}(\ensuremath{\mathbf{X}}) \\
& = (-1)^m q^{m^2+m} \Ig_{2m+1}(q^{-1},\ensuremath{\mathbf{X}}).
\end{align*}
The functional equation for $\Ig_{2m,\epsilon}(q^{-1},\ensuremath{\mathbf{X}})$ follows
in a similar way. In the remainder of this section we give precise
definitions of the above concepts, and we supply a proof of
Proposition~\ref{proposition_2}.
\begin{definition}[Integer compositions]
By a \emph{composition} $C$ of a non-negative integer $N$ into
$\rho$ parts we mean a tuple $(x_1, \dots, x_\rho) \in \ensuremath{\mathbb{N}}^\rho$ such
that $N = x_1 + \dots + x_\rho$.
Given $I = \{ i_1, \dots, i_r \}_< \subseteq [n]$, we define
$$
N(I,n) := \max ( [n]_0 \setminus I ) \quad \text{and} \quad
\rho := \max \{ \varrho \in [r+1]_0 \mid i_{\varrho-1} < N(I,n) \},
$$
where $i_{-1} := -1$, $i_0 := 0$. Then $I$ induces a composition
$C(I,n)$ of $N(I,n)$ into $\|I\|_n := \rho$ parts, namely
$$
C(I,n) := (i_1, i_2 - i_1, \dots, i_{\rho - 1} - i_{\rho - 2},
N(I,n) - i_{\rho - 1} ).
$$
Note that, if $I \subseteq [n-1]$, then $N(I,n) = n$ and $\rho =
r+1$. The map $I \mapsto C(I,n)$ induces a bijection from
$\mathcal{P}([n-1])$ onto the set of all compositions of $n$.
We define the \emph{bisecting map}
$$
\phi: \mathcal{P}([n]) \rightarrow \mathcal{P}([m])
$$
as follows: for $I \subseteq [n]$ with $C(I,n) = (x_1, \dots,
x_\rho)$ set
\begin{align*}
\cut(I) & := \left\lfloor \frac{x_1}{2} \right\rfloor +
\left\lfloor \frac{x_2}{2}
\right\rfloor + \dots + \left\lfloor \frac{x_\rho}{2} \right\rfloor, \\
\phi_0(I) & := \Big\{ \left\lfloor \frac{x_1}{2} \right\rfloor +
\left\lfloor \frac{x_2}{2} \right\rfloor + \dots + \left\lfloor
\frac{x_\varrho}{2} \right\rfloor \mid \varrho \in [\rho] \Big\}
\setminus \{ 0, \cut(I) \}
\end{align*}
and
\begin{equation*}
\phi(I) := \phi_0(I) \cup \left[ \cut(I)+1, m \right].
\end{equation*}
Note that $N(\phi(I),m) = \cut(I)$ and $\|\phi(I)\|_m \leq \|I\|_n$.
Moreover, $\phi$ maps $\mathcal{P}([n-1])$ surjectively onto
$\mathcal{P}([m])$. For subsets $G \subseteq [m]$ we agree to write
$N(G) := N(G,m)$ and $\|G\| := \|G\|_m$.
\end{definition}
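The definitions above are readily implemented. The following Python sketch (function names are ours) computes $C(I,n)$, $\cut(I)$ and the bisecting map $\phi$; it can be checked against the data of the example given later in this section, e.g.\ $I = \{2,7,9\} \subseteq [10]$ with $C(I,11) = (2,5,2,2)$ and $\phi(I) = \{1,3,4\}$.

```python
# Sketch of the composition C(I, n) and the bisecting map phi
# (function names are ours; only the definitions above are assumed).

def composition(I, n):
    """The composition C(I, n) of N(I, n) induced by I, a subset of [n]."""
    I = set(I)
    N = max(x for x in range(n + 1) if x not in I)   # N(I, n)
    if N == 0:
        return ()
    idx = [0] + sorted(i for i in I if i < N)        # i_0, i_1, ..., i_{rho-1}
    return tuple(b - a for a, b in zip(idx, idx[1:] + [N]))

def phi(I, n):
    """The bisecting map P([n]) -> P([m]), where m = floor(n/2)."""
    m = n // 2
    xs = composition(I, n)
    cut = sum(x // 2 for x in xs)
    partial_sums, s = set(), 0
    for x in xs:
        s += x // 2
        partial_sums.add(s)      # the partial sums defining phi_0(I)
    return (partial_sums - {0, cut}) | set(range(cut + 1, m + 1))
```

For instance, `composition([2, 7, 9], 11)` returns `(2, 5, 2, 2)` and `phi([2, 7, 9], 11)` returns `{1, 3, 4}`.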
We now give explicit formulae for the polynomials
$\alpha_{n,\epsilon}^J(q^{-1})$.
\begin{proposition} \label{proposition_3}
Let $J \subseteq[n-1]$, $H := \phi(J) \subseteq [m]$, and put $Y :=
q^{-2}$.
\begin{enumerate}
\item[(i)] For $n = 2m+1$ odd,
\begin{equation*}
\begin{split}
\alpha^J_{2m+1}(q^{-1}) & = \binom{N(H)}{\phi_0(J)}_{\!\! Y} \;
\prod_{i = N(H)+1}^m (1 - Y^i) \\
& = \binom{m}{H \cup \{N(H)\}}_{\!\! Y} \; (1-Y)^{m - N(H)}.
\end{split}
\end{equation*}
\item[(ii)] For $n = 2m$ even,
\begin{align*}
\alpha^J_{2m,\epsilon}(q^{-1}) & = \binom{m}{J/2}_{\!\! Y} =
\binom{m}{H}_{\!\! Y} && \text{if $J
\subseteq 2\ensuremath{\mathbb{N}}$,} \\
\alpha^J_{2m,\epsilon}(q^{-1}) & = \binom{N(H)}{\phi_0(J)}_{\!\!
Y} \; \frac{\prod_{i =
N(H)+1}^m (1 - Y^i)}{1 + \epsilon q^{-m}} && \\
& = \binom{m}{H \cup \{N(H)\}}_{\!\! Y} \; \frac{(1 - Y)^{m -
N(H)}}{1 + \epsilon q^{-m}} && \text{otherwise.}
\end{align*}
\end{enumerate}
\end{proposition}
\begin{proof}
First we are going to prove the assertions in odd characteristic,
where the discriminant helps to distinguish isomorphism types of
quadratic spaces and where we can freely apply Witt's Extension and
Cancellation Theorem. Afterwards we explain why the formulae also
remain true in characteristic $2$.
So first suppose that the underlying field $F = \ensuremath{\mathbb{F}}_q$ has odd
characteristic. Recall the formulae for the orders of the respective
orthogonal groups
\begin{align*}
|\textup{O}_{2m+1}(\ensuremath{\mathbb{F}}_q)| & = 2 q^{m^2} \prod_{i\in[m]}
(q^{2i}-1) =: p_{2m+1}(q) =: p_{2m+1}, \\
|\textup{O}_{2m}^\epsilon(\ensuremath{\mathbb{F}}_q)| & = 2 q^{m^2-m} (q^m-\epsilon)
\prod_{i\in[m-1]} (q^{2i}-1) =: p_{2m,\epsilon}(q) =:
p_{2m,\epsilon}
\end{align*}
(cf.~\cite[p.~147]{Artin/57}, \cite[Theorem~9.11]{Grove/02}), and
put
$$
p_n^\sharp := p_n^\sharp (q) :=
\begin{cases}
p_{2m + 1} & \text{if $n = 2m+1$ odd,} \\
(q^m + \epsilon) p_{2m,\epsilon} & \text{if $n = 2m$ even}.
\end{cases}
$$
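For orientation, the order polynomials can be evaluated numerically. The sketch below (function names are ours) encodes $p_{2m+1}$, $p_{2m,\epsilon}$ and $p_n^\sharp$, with sanity checks such as $|\textup{O}_2^{+}(\ensuremath{\mathbb{F}}_q)| = 2(q-1)$ and $|\textup{O}_3(\ensuremath{\mathbb{F}}_3)| = 48$.

```python
from math import prod

def p_odd(m, q):
    """p_{2m+1}(q) = |O_{2m+1}(F_q)| = 2 q^(m^2) prod_{i=1}^m (q^(2i) - 1)."""
    return 2 * q ** (m * m) * prod(q ** (2 * i) - 1 for i in range(1, m + 1))

def p_even(m, eps, q):
    """p_{2m,eps}(q) = |O_{2m}^eps(F_q)|."""
    return (2 * q ** (m * m - m) * (q ** m - eps)
            * prod(q ** (2 * i) - 1 for i in range(1, m)))

def p_sharp(n, q, eps=1):
    """p_n^sharp; for even n = 2m this is independent of eps, since
    (q^m + eps)(q^m - eps) = q^(2m) - 1."""
    m = n // 2
    return p_odd(m, q) if n % 2 == 1 else (q ** m + eps) * p_even(m, eps, q)
```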
Let $J = \{j_1,\dots,j_s\}_< \subseteq[n-1]$, and put $j_0 := 0$,
$j_{s+1} := n$. Counting non-degenerate flags $\ensuremath{\mathbf{U}}_J = (U_j)_{j \in
J}$ of type $J$ in $\ensuremath{\mathcal{V}}$ is equivalent to counting (ordered)
orthogonal decompositions
\begin{equation}\label{eq_15}
\ensuremath{\mathcal{V}} = \ensuremath{\mathcal{W}}_1 \perp \dots \perp \ensuremath{\mathcal{W}}_{s+1}
\end{equation}
with $\dim \ensuremath{\mathcal{W}}_\sigma = k_{\sigma} := j_\sigma - j_{\sigma - 1}$
for all $\sigma \in [s+1]$. The isomorphism type of such an
orthogonal decomposition is determined by the discriminants
$\disc \ensuremath{\mathcal{W}}_\sigma \in \ensuremath{\mathbb{F}}_q^* / (\ensuremath{\mathbb{F}}_q^*)^2 \cong \{1, -1\}$ of the
non-degenerate spaces $\ensuremath{\mathcal{W}}_\sigma$, $\sigma \in [s+1]$.
Let $\eta = 1$ or $\eta = -1$, according to whether $-1$ is a square in
$\ensuremath{\mathbb{F}}_q$ or not. At this point it is advantageous to assign, also to
an odd-dimensional non-degenerate quadratic space $\ensuremath{\mathcal{W}}$, a sign
$\epsilon(\ensuremath{\mathcal{W}}) \in \{ 1, -1 \}$, namely the discriminant of the
(one-dimensional) anisotropic kernel of $\ensuremath{\mathcal{W}}$. We then have
$\disc \ensuremath{\mathcal{W}} = \epsilon(\ensuremath{\mathcal{W}}) \eta^{\lfloor \dim \ensuremath{\mathcal{W}} / 2 \rfloor}$
for any non-degenerate quadratic space $\ensuremath{\mathcal{W}}$, irrespective of the
parity of $\dim \ensuremath{\mathcal{W}}$.
Thus the isomorphism type of an orthogonal decomposition of the
form~\eqref{eq_15} can be encoded in a tuple
$\boldsymbol{\epsilon} = (\epsilon_1, \dots, \epsilon_{s+1}) \in \{
1,-1 \}^{s+1}$ such that $\ensuremath{\mathcal{W}}_\sigma$ is of type $\epsilon_\sigma$
for all $\sigma \in [s+1]$. Moreover, the tuples
$\boldsymbol{\epsilon}$ which arise in this way are precisely the
elements of $E := E(\ensuremath{\mathcal{V}}) := \{ \boldsymbol{\epsilon} \mid
\epsilon_1 \cdots \epsilon_{s+1} = \eta^{m - N(\phi(J))} \epsilon
\}$, and Witt's Extension and Cancellation Theorem implies that the
number of ordered orthogonal decompositions of isomorphism type
$\boldsymbol{\epsilon} \in E$ equals
$$
\frac{|\textup{O}_{n}^\epsilon(\ensuremath{\mathbb{F}}_q)|}{|\prod_{\sigma = 1}^{s+1}
\textup{O}_{k_\sigma}^{\epsilon_\sigma}(\ensuremath{\mathbb{F}}_q)|} =
\frac{p_{n,\epsilon}}{\prod_{\sigma = 1}^{s+1} p_{k_\sigma,
\epsilon_\sigma}};
$$
cf.~\cite[p.~147f]{Artin/57}. Setting
$$
\mathcal{E}(J) := \{ \sigma \in [s+1] \mid k_\sigma \equiv 0 \mod
2 \}
$$
we thus obtain
\begin{equation*}
a_{n,\epsilon}^J(q) = p_{n,\epsilon} \sum_{\boldsymbol{\epsilon}
\in E} \left( \prod_{\sigma = 1}^{s+1} p_{k_\sigma,
\epsilon_\sigma} \right)^{-1}
= \frac{p_{n,\epsilon}}{ \prod_{\sigma = 1}^{s+1}
p_{k_\sigma}^\sharp } \; \sum_{\boldsymbol{\epsilon} \in E}
\prod_{\sigma \in \mathcal{E}(J)} (q^{k_\sigma / 2} +
\epsilon_\sigma).
\end{equation*}
Note that
\begin{align*}
\sum_{\boldsymbol{\epsilon} \in E} \prod_{\sigma \in
\mathcal{E}(J)} (q^{k_\sigma / 2} + \epsilon_\sigma) & = 2^s
\prod_{\sigma \in \mathcal{E}(J)} q^{k_\sigma / 2} && \text{if
$\mathcal{E}(J) \not = [s+1]$,} \\
\sum_{\boldsymbol{\epsilon} \in E} \prod_{\sigma \in
\mathcal{E}(J)} (q^{k_\sigma / 2} + \epsilon_\sigma) & = 2^s
(q^{n / 2} + \epsilon) && \text{if $\mathcal{E}(J) = [s+1]$.}
\end{align*}
From this the claim follows for $\cha F \ne 2$.
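The two evaluations of the sign sum can be confirmed by brute force. In the sketch below (our own encoding), `a` lists the values $q^{k_\sigma/2}$ for $\sigma \in \mathcal{E}(J)$, `num_odd` is the number of odd-dimensional blocks, and `target` is the prescribed product of all signs.

```python
from itertools import product
from math import prod

def sign_sum(a, num_odd, target):
    """Sum of prod_{sigma in E(J)} (a_sigma + eps_sigma) over all sign
    tuples eps in {1,-1}^(s+1) whose total product equals `target`."""
    total = 0
    for eps in product((1, -1), repeat=len(a) + num_odd):
        if prod(eps) == target:
            # the first len(a) signs belong to the even-dimensional blocks
            total += prod(x + e for x, e in zip(a, eps))
    return total
```

With $s + 1$ blocks in total, one finds $2^s \prod_\sigma a_\sigma$ as soon as `num_odd >= 1`, and $2^s (\prod_\sigma a_\sigma + \text{target})$ otherwise, in line with the display above.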
Before turning our attention to the case $\cha F = 2$, we record a
set of formulae for later use. Let $j \in [n-1]$ and $\delta \in
\{1,-1\}$. If $j = 2h+1$ is odd, let $a_\ensuremath{\mathcal{V}}^j(q)$ denote the number
of non-degenerate $j$-dimensional subspaces in $\ensuremath{\mathcal{V}}$. If $j = 2h$
is even, let $a_\ensuremath{\mathcal{V}}^{j,\delta}(q)$ denote the number of
non-degenerate $j$-dimensional subspaces of type $\delta$ in $\ensuremath{\mathcal{V}}$.
According to whether $\ensuremath{\mathcal{V}}$ is odd- or even-dimensional, we also
write $a_{2m+1}^{2h+1}(q)$, $a_{2m,\epsilon}^{2h+1}(q)$ in the former
and $a_{2m+1}^{2h,\delta}(q)$, $a_{2m,\epsilon}^{2h,\delta}(q)$ in
the latter case. Our calculations above, based on Witt's Extension
and Cancellation Theorem, show in particular that, if $\cha F \ne 2$,
\begin{align}
a_{2m+1}^{2h+1}(q) & =
\frac{|\textup{O}_{2m+1}(\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h+1}(\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h}^+(\ensuremath{\mathbb{F}}_q)|} +
\frac{|\textup{O}_{2m+1}(\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h+1}(\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h}^-(\ensuremath{\mathbb{F}}_q)|} \label{eq_16} \\
& = \frac{2 \, q^{m-h} \, p_{2m+1}}{p_{2h+1} \, p^\sharp_{2m-2h}},
\notag \\
a_{2m,\epsilon}^{2h+1}(q) & = 2 \cdot
\frac{|\textup{O}_{2m}^\epsilon (\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h+1}(\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h-1}(\ensuremath{\mathbb{F}}_q)|}
= \frac{2 \, p_{2m,\epsilon}}{p_{2h+1} \, p_{2m-2h-1}}, \label{eq_17} \\
a_{2m+1}^{2h,\delta}(q) & = \frac{|\textup{O}_{2m+1}
(\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h}^\delta (\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h+1}(\ensuremath{\mathbb{F}}_q)|} = \frac{p_{2m+1}}{p_{2h,\delta} \,
p_{2m-2h+1}}, \label{eq_18} \\
a_{2m,\epsilon}^{2h,\delta}(q) & = \frac{|\textup{O}_{2m}^\epsilon
(\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h}^\delta (\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h}^{\delta \epsilon} (\ensuremath{\mathbb{F}}_q)|} =
\frac{p_{2m,\epsilon}}{p_{2h,\delta} \, p_{2m-2h,\delta \epsilon}}.
\label{eq_19}
\end{align}
Below we will show that, in fact, also in characteristic $2$ one
obtains the same polynomials $a_\ensuremath{\mathcal{V}}^j(q)$ and
$a_\ensuremath{\mathcal{V}}^{j,\delta}(q)$. Thus, by induction, the formulae for
$a_\ensuremath{\mathcal{V}}^J(q)$ and $\alpha_\ensuremath{\mathcal{V}}^J(q^{-1})$, which we initially
derived only under the extra assumption $\cha F \ne 2$, also remain
valid in characteristic $2$.
So suppose that $\cha F = 2$, and let $j \in [n-1]$, $\delta \in
\{1,-1\}$. Write $j = 2h+1$, if $j$ is odd, and $j = 2h$, if $j$ is
even. The orders of the respective orthogonal groups are now
\begin{equation*}
|\textup{O}_{2m+1}(\ensuremath{\mathbb{F}}_q)| = \frac{p_{2m+1}}{2}, \qquad
|\textup{O}_{2m}^\epsilon(\ensuremath{\mathbb{F}}_q)| = p_{2m,\epsilon},
\end{equation*}
and Witt's Extension and Cancellation Theorem still applies to
non-defective subspaces; cf.~\cite[Theorems~3.12 and~14.48]{Grove/02}
and \cite[Theorem~3.15]{Cameron/91}. Therefore we immediately obtain
the counterparts of \eqref{eq_18} and \eqref{eq_19},
\begin{align*}
a_{2m+1}^{2h,\delta}(q) & = \frac{|\textup{O}_{2m+1}
(\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h}^\delta (\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h+1}(\ensuremath{\mathbb{F}}_q)|} = \frac{p_{2m+1}}{p_{2h,\delta} \,
p_{2m-2h+1}}, \\
a_{2m,\epsilon}^{2h,\delta}(q) & = \frac{|\textup{O}_{2m}^\epsilon
(\ensuremath{\mathbb{F}}_q)|}{|\textup{O}_{2h}^\delta (\ensuremath{\mathbb{F}}_q)|
|\textup{O}_{2m-2h}^{\delta \epsilon} (\ensuremath{\mathbb{F}}_q)|} =
\frac{p_{2m,\epsilon}}{p_{2h,\delta} \, p_{2m-2h,\delta \epsilon}}.
\end{align*}
Next we suppose that $n = 2m+1$ is odd and compute
$a_{2m+1}^{2h+1}(q)$. If $h = 0$, then we are to count anisotropic
lines in $\ensuremath{\mathcal{V}}$. It is well-known that the polar space associated to
$\ensuremath{\mathcal{V}}$ has $(q^{2m} - 1)/(q-1)$ points, each corresponding to an
isotropic line; cf.~\cite[Theorem~3.13]{Cameron/91}. So we deduce
that
$$
a_{2m+1}^{1}(q) = \frac{q^{2m+1} - 1}{q-1} - \frac{q^{2m} -
1}{q-1} = \frac{2 \, q^m \, p_{2m+1}}{p_{1} \, p^\sharp_{2m}}.
$$
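This count is easily confirmed by brute force over $\ensuremath{\mathbb{F}}_2$: taking $f = x_1 x_2 + \dots + x_{2m-1} x_{2m} + x_{2m+1}^2$, every vector $v$ with $f(v) \neq 0$ spans a distinct anisotropic line, so the following sketch (our own) should return $q^{2m}$ for $q = 2$.

```python
from itertools import product

def anisotropic_lines(m):
    """Count anisotropic lines in the (2m+1)-dimensional quadratic space
    H_1 perp ... perp H_m perp A_1 over F_2 by direct enumeration."""
    count = 0
    for v in product((0, 1), repeat=2 * m + 1):
        # over F_2 we have x^2 = x, so the last coordinate enters linearly
        f = sum(v[2 * i] * v[2 * i + 1] for i in range(m)) + v[2 * m]
        if f % 2 == 1:   # each such v spans its own anisotropic line
            count += 1
    return count
```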
In general, choosing a $(2h+1)$-dimensional non-degenerate
subspace $U$ in $\ensuremath{\mathcal{V}}$ can be split into two steps: first pick a
$2h$-dimensional non-degenerate (hence non-defective) subspace $U_0$
of type $1$, then pick an anisotropic
line $A$ in $U_0^\perp$ to obtain $U = U_0 + A$. Applying Witt's
Extension and Cancellation Theorem, we obtain the counterpart of
\eqref{eq_16},
\begin{equation*}
\begin{split}
a_{2m+1}^{2h+1}(q) & = \frac{a_{2m+1}^{2h,1}(q) \;
a_{2m-2h+1}^{1}(q)}{a_{2h+1}^{2h,1}(q)} \\
& = \frac{ ( p_{2m+1} \cdot 2 \, q^{m-h} \, p_{2m-2h+1} ) /
(p_{2h,1} \, p_{2m-2h+1} \cdot p_{1} \, p^\sharp_{2m-2h} ) }{
(p_{2h+1}) / (p_{2h,1} \, p_{1})} \\
& = \frac{2 \, q^{m-h} \, p_{2m+1}}{p_{2h+1} \, p^\sharp_{2m-2h}}.
\end{split}
\end{equation*}
A similar computation yields the counterpart of \eqref{eq_17}.
\end{proof}
\begin{definition}[Refinements of compositions]\label{definition_8}
Let $C_1 = (x_1, \dots, x_{\kappa})$ and $C_2 = (y_1, \dots,
y_{\lambda})$ be compositions. A \emph{refinement of a truncation of
$C_1$ by $C_2$} is a triple $(C_1,C_2,\boldsymbol{\xi})$ such that
$\boldsymbol{\xi}=(\xi_1,\dots,\xi_{\kappa}) \in
{[\lambda]_0}^{\kappa}$ satisfies
$$
\xi_1 \leq \dots \leq \xi_{\kappa} = \lambda \quad \text{and}
\quad \forall i \in [\kappa]: \; y_{\xi_{i-1} + 1}+ \dots +y_{\xi_i}
\leq x_i,
$$
where $\xi_0 := 0$. By slight abuse of terminology, we also call
the $\kappa$-tuple $\boldsymbol{\xi}$ a refinement of a truncation
of $C_1$ by $C_2$. For $G,H \subseteq [m]$, the number of
refinements of truncations of $C(G)$ by $C(H)$ is denoted
by~$c_{G,H} := c_{G,H}^{(m)}$.
Let $I, J \subseteq [n-1]$ such that $I \subseteq J$, and put $G :=
\phi(I)$, $H := \phi(J)$. Clearly, $C(J,n)$ can be regarded as a
refinement of $C(I,n)$. Applying the bisecting map, we obtain a
refinement of a truncation of $C(G)$ by $C(H)$ as follows.
The sets $[n-1] \setminus I$ and $[n-1] \setminus J$ decompose
uniquely into disjoint unions
$$
[n-1] \setminus I = \mathcal{I}_{I,1} \ensuremath{\; \dot{\cup} \;} \dots \ensuremath{\; \dot{\cup} \;}
\mathcal{I}_{I,\|G\|}, \qquad [n-1] \setminus J = \mathcal{I}_{J,1}
\ensuremath{\; \dot{\cup} \;} \dots \ensuremath{\; \dot{\cup} \;} \mathcal{I}_{J,\|H\|}
$$
of intervals $\mathcal{I}_{I,i}$ (respectively $\mathcal{I}_{J,j}$)
of natural numbers with $\max \mathcal{I}_{I,i} < \min
\mathcal{I}_{I,i+1}$ (respectively $\max \mathcal{I}_{J,j} < \min
\mathcal{I}_{J,j+1}$) for all admissible values of $i$ (respectively
$j$).
The \emph{refinement of a truncation of $G$ by $H$ induced from~$I
\subseteq J$} is the $\|G\|$-tuple $\boldsymbol{\xi}(I,J) =
(\xi_1,\dots,\xi_{\|G\|})$ defined by $\xi_{\|G\|}:=\|H\|$ and
\begin{equation*}
\forall i \in [\|G\|] : \; \mathcal{I}_{J,\xi_{i-1} + 1}
\ensuremath{\; \dot{\cup} \;} \dots \ensuremath{\; \dot{\cup} \;} \mathcal{I}_{J,\xi_i} \subseteq
\mathcal{I}_{I,i},
\end{equation*}
where $\xi_0 := 0$. We remark that, starting from $G,H \subseteq
[m]$, every refinement of a truncation of $C(G)$ by $C(H)$ is
induced by suitable $I,J \subseteq [n-1]$ with $I \subseteq J$.
\end{definition}
We illustrate these notions by an example.
\begin{example}
Set $n=11$ so that $m=5$. The subsets $G=\{1,3,4\}$, $H=\{2,4,5\}
\subseteq [m]$ induce compositions $C(G)=(1,2,1,1)$ of $N(G)=5$ and
$C(H) = (2,1)$ of $N(H)=3$, respectively. Note that~$\|G\|=4$
and~$\|H\|=2$. Among the seven `truncations' of $(1,2,1,1)$ to
`pre-compositions' of $3$,
\medskip
\centerline{
\begin{tabular}{ccccc}
$(1,2,0,0)$, &$(1,1,1,0)$, &$(1,1,0,1)$, &$(1,0,1,1)$,
&$(0,1,1,1)$, \\
$(0,2,0,1)$, &$(0,2,1,0)$, & & &
\end{tabular}
}
\medskip
\noindent only the $2 = c_{G,H}$ last ones yield the composition
$C(H) = (2,1)$. They are encoded in the tuples
$\boldsymbol{\xi}=(0,1,1,2)$ and $\boldsymbol{\xi}=(0,1,2,2)$,
respectively.
Define subsets
\begin{equation*}
I := \{2,7,9\}, \qquad
J_1 := \{1,2,7,8,9\}, \qquad
J_2 := \{1,2,3,7,9,10\}
\end{equation*}
of $[n-1] = [10]$. The set $I$ induces the composition $C(I,n) =
(2,5,2,2)$ of $N(I,n) = 11$. Thus $\cut(I) = 1+2+1+1 = 5$ and
$\phi(I) = G$. Similarly, $\cut(J_1) = \cut(J_2) = 2+1 =3$ and
$\phi(J_1)=\phi(J_2) = H$. We have
$\boldsymbol{\xi}(I,J_1)=(0,1,1,2)$ and
$\boldsymbol{\xi}(I,J_2)=(0,1,2,2)$.
\end{example}
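The quantity $c_{G,H}$ can be computed by directly enumerating the tuples $\boldsymbol{\xi}$. The following sketch (function names are ours) recovers $c_{G,H} = 2$ for the data of the example above.

```python
from itertools import combinations_with_replacement

def composition_of(G, m):
    """The composition C(G) = C(G, m) induced by G, a subset of [m]."""
    G = set(G)
    N = max(x for x in range(m + 1) if x not in G)
    idx = [0] + sorted(g for g in G if g < N)
    return [b - a for a, b in zip(idx, idx[1:] + [N])]

def c(G, H, m):
    """Number of refinements of truncations of C(G) by C(H)."""
    xs, ys = composition_of(G, m), composition_of(H, m)
    kappa, lam = len(xs), len(ys)
    count = 0
    # xi_1 <= ... <= xi_{kappa-1} in [lam]_0, with xi_0 = 0, xi_kappa = lam
    for head in combinations_with_replacement(range(lam + 1), kappa - 1):
        xi = (0,) + head + (lam,)
        if all(sum(ys[xi[i - 1]:xi[i]]) <= xs[i - 1]
               for i in range(1, kappa + 1)):
            count += 1
    return count
```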
We are now ready to prove Proposition~\ref{proposition_2}.
\begin{proof}[Proof of Proposition~\textup{\ref{proposition_2}~(i)}]
Let $H \subseteq [m]$. From the definition of
$F_{\phi^{-1}(H)}(\ensuremath{\mathbf{X}})$ and the fact that $\ensuremath{\mathbf{F}}$ has the inversion
property~\eqref{eq_IP} we obtain
$$
F_{\phi^{-1}(H)}(\ensuremath{\mathbf{X}}^{-1}) = \sum_{J\in\phi^{-1}(H)} (-1)^{|J|}
\sum_{I \subseteq J} F_I(\ensuremath{\mathbf{X}}).
$$
Thus it is enough to show that for $I \subseteq [n-1]$ with
$\phi(I)=G$,
$$
(-1)^{n-1+\|H\|} \sum_{\substack{J\in\phi^{-1}(H) \\ I \subseteq
J}} (-1)^{|J|} = c_{G,H}.
$$
This is certainly the case if $c_{G,H} = 0$, as then the sum on
the left hand side is empty. Now suppose that $c_{G,H} \not =
0$ and fix a refinement~$\boldsymbol{\xi}$ of a truncation
of~$G$ by~$H$; put $\xi_0 := 0$. It suffices to show that
\begin{equation}\label{eq_20}
(-1)^{n-1+\|H\|} \sum_{\substack{J\in\phi^{-1}(H) \\ I \subseteq
J,\;\; \boldsymbol{\xi}(I,J) = \boldsymbol{\xi}}} (-1)^{|J|} = 1.
\end{equation}
Decompose $[n-1] \setminus I = \mathcal{I}_{I,1} \ensuremath{\; \dot{\cup} \;} \dots
\ensuremath{\; \dot{\cup} \;} \mathcal{I}_{I,\|G\|}$ into a disjoint union of intervals
$\mathcal{I}_{I,i}$ as in Definition~\ref{definition_8},
and write $C(H) = (y_1, \dots, y_{\|H\|})$. We claim that
\begin{multline}\label{eq_21}
\sum_{\substack{J\in\phi^{-1}(H) \\ I \subseteq J,\;\;
\boldsymbol{\xi}(I,J) = \boldsymbol{\xi}}} (-1)^{|J| - |I|} =
\prod_{i=1}^{\|G\|} \; \sum_{k = 0}^{\xi_i-\xi_{i-1}}
\binom{|\mathcal{I}_{I,i}| - \sum_{j = \xi_{i-1} + 1}^{\xi_i}
(2y_j - 1) - k + 1}{\xi_i - \xi_{i-1}} \\ \cdot \binom{\xi_i -
\xi_{i-1}}{k} (-1)^{|\mathcal{I}_{I,i}| - \sum_{j = \xi_{i-1} +
1}^{\xi_i} (2y_j - 1) - k}.
\end{multline}
Indeed, specifying $J \in\phi^{-1}(H)$ with $I \subseteq J$ and
$\boldsymbol{\xi}(I,J) = \boldsymbol{\xi}$ is equivalent to the
following task: for each $i \in [\|G\|]$ choose $k_i \in
[\xi_i-\xi_{i-1}]_0$ and single out a disjoint union
$\mathcal{I}_{J,\xi_{i-1} + 1} \ensuremath{\; \dot{\cup} \;} \dots \ensuremath{\; \dot{\cup} \;}
\mathcal{I}_{J,\xi_i} \subseteq \mathcal{I}_{I,i}$ of intervals
$\mathcal{I}_{J,j}$ such that
\begin{enumerate}
\item[(a)] $\max \mathcal{I}_{J,j} < \min \mathcal{I}_{J,j+1}$ for
all admissible values of $j$,
\item[(b)] $|\mathcal{I}_{J,j}| = 2 y_j$ for exactly $k_i$ values
of $j$ and $|\mathcal{I}_{J,j}| = 2 y_j - 1$ for the remaining
values of $j$.
\end{enumerate}
Moreover, the cardinality of the set $J$ corresponding to such a
choice of $k_i$ and such a choice of intervals $\mathcal{I}_{J,j}
\subseteq \mathcal{I}_{I,i}$ is
$$
|I| + \sum_{i = 1}^{\|G\|} \left( |\mathcal{I}_{I,i}| - \sum_{j =
\xi_{i-1} + 1}^{\xi_i} (2 y_j -1) - k_i \right).
$$
As $n-1 + \|H\| = |I| + \sum_{i = 1}^{\|G\|} |\mathcal{I}_{I,i}| +
\sum_{i = 1}^{\|G\|} (\xi_i - \xi_{i-1})$,
equation~\eqref{eq_21} implies that the left hand side
of~\eqref{eq_20} is equal to
$$
\prod_{i=1}^{\|G\|} \sum_{k=0}^{\xi_i-\xi_{i-1}}
\binom{|\mathcal{I}_{I,i}| - \sum_{j = \xi_{i-1} + 1}^{\xi_i} (2y_j
-1) + 1 - k}{\xi_i - \xi_{i-1}} \binom{\xi_i -
\xi_{i-1}}{k}(-1)^k.
$$
This does indeed equal $1$, because for any positive integers $M
\leq N$,
$$
\sum_{k=0}^M \binom{N-k}{M} \binom{M}{k} (-1)^k = 1
$$
(cf.~\cite[p.~169, (5.25)]{GrahamKnuthPatashnik/98}) and hence
each of the $\|G\|$ factors already equals $1$.
\end{proof}
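The binomial identity invoked at the end of the preceding proof is easy to confirm numerically; a throwaway check (names ours):

```python
from math import comb

def alternating_sum(N, M):
    """sum_{k=0}^M binom(N-k, M) binom(M, k) (-1)^k; this equals 1
    for all positive integers M <= N (cf. Graham-Knuth-Patashnik, (5.25))."""
    return sum(comb(N - k, M) * comb(M, k) * (-1) ** k for k in range(M + 1))
```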
\begin{proof}[Proof of Proposition~\textup{\ref{proposition_2}~(ii)}
for $n = 2m+1$ odd]
For $G \subseteq [m]$, we are looking to prove
\begin{equation}\label{eq_22}
\alpha_{2m+1}^{\uparrow G}(q) = (-1)^m q^{m^2+m} \sum_{H
\subseteq [m]} (-1)^{\|H\|} c_{G,H} \; \alpha_{2m+1}^{\uparrow H}
(q^{-1}).
\end{equation}
First we deal with the case $N(G) < m$, i.e.\ $m \in G$. Writing $G'
:= G \setminus \{m\}$ and $Y := q^{-2}$, we see from
Proposition~\ref{proposition_3}~(i) that in this
case
$$
\alpha_{2m+1}^{\uparrow G}(q^{-1}) = (1 - Y^m)
\alpha_{2m-1}^{\uparrow G'} (q^{-1}).
$$
If $H \subseteq [m]$ with $c_{G,H} \not = 0$, then $N(H) \leq
N(G) < m$, hence $m \in H$, and we obtain $\|H\| = \|H'\|_{m-1}$
and $c_{G,H} = c_{G',H'}^{(m-1)}$ for $H' := H \setminus \{m\}$.
With these observations \eqref{eq_22} follows
by induction:
\begin{align*}
\alpha_{2m+1}^{\uparrow G}(q) & = (1 - Y^{-m}) (-1)^{m-1}
q^{(m-1)^2+(m-1)} \\
& \qquad \cdot \sum_{H' \subseteq [m-1]} (-1)^{\|H'\|_{m-1}}
c_{G',H'}^{(m-1)} \; \alpha_{2m-1}^{\uparrow H'} (q^{-1}) \\
& = (-1)^m q^{m^2+m} \sum_{H' \subseteq [m-1]} (-1)^{\|H'\|_{m-1}}
c_{G',H'}^{(m-1)} \; (1 - Y^m) \alpha_{2m-1}^{\uparrow H'} (q^{-1}) \\
& = (-1)^m q^{m^2+m} \sum_{H \subseteq [m]} (-1)^{\|H\|} c_{G,H}
\; \alpha_{2m+1}^{\uparrow H} (q^{-1}).
\end{align*}
It remains to consider the case $N(G) = m$, i.e.\ $G \subseteq
[m-1]$. Again set $Y := q^{-2}$, and write $C(G) = (x_1, \dots,
x_{k+1})$. Proposition~\ref{proposition_3}~(i)
shows that in this case
$$
\alpha_{2m+1}^{\uparrow G}(q^{-1}) = \binom{m}{G}_{\!\! Y},
$$
in particular, as $\deg_Y \binom{m}{G}_{\! Y} = \binom{m+1}{2} -
\sum_{\kappa \in [k+1]} \binom{x_\kappa + 1}{2}$,
$$
\alpha_{2m+1}^{\uparrow G}(q) =
\alpha_{2m+1}^{\uparrow G}(q^{-1}) \; Y^{- \binom{m+1}{2} +
\sum_{\kappa \in [k+1]} \binom{x_\kappa + 1}{2}} .
$$
We shall show below that
\begin{equation} \label{eq_23}
\alpha_{2m+1}^{\uparrow G}(q^{-1}) \; Y^{\sum_{\kappa \in [k+1]}
\binom{x_\kappa + 1}{2}} = \sum_{H \subseteq [m]}
(-1)^{m + \|H\|} c_{G,H} \; \alpha_{2m+1}^{\uparrow H}(q^{-1}).
\end{equation}
From these equations \eqref{eq_22} follows readily.
It remains to prove \eqref{eq_23}. For this we need
the following formulae.
\begin{enumerate}
\item[(i)] For all $i \in \ensuremath{\mathbb{N}}_0$: $Y^{\binom{i+1}{2}} = \sum_{j=0}^i
\binom{i}{[i-j, i-1]}_{\! Y} (Y-1)^j Y^{\binom{i-j}{2}}$.
\item[(ii)] For all $i \in \ensuremath{\mathbb{N}}_0$: $Y^{\binom{i+1}{2}} =
\sum_{I\subseteq[i]}\binom{i+1}{I}_{\! Y}(-1)^{i-|I|}$.
\end{enumerate}
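Formula (i) can be spot-checked numerically before it is used. In the sketch below (our encoding), the Gaussian multinomial $\binom{i}{[i-j,\, i-1]}_Y$ is evaluated as $[i]_Y!/[i-j]_Y! = [i-j+1]_Y \cdots [i]_Y$, and both sides of (i), being polynomials in $Y$, are compared at several integer values of $Y$.

```python
from math import comb, prod

def gauss(k, Y):
    """[k]_Y = 1 + Y + ... + Y^(k-1), evaluated at an integer Y."""
    return sum(Y ** t for t in range(k))

def check_formula_i(i, Y):
    """Compare both sides of formula (i) at an integer value of Y."""
    lhs = Y ** comb(i + 1, 2)
    rhs = sum(
        prod(gauss(k, Y) for k in range(i - j + 1, i + 1))  # [i]!/[i-j]!
        * (Y - 1) ** j * Y ** comb(i - j, 2)
        for j in range(i + 1)
    )
    return lhs == rhs
```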
Part (i) is easily proved inductively (see the end of this proof);
part (ii) is a well-known fact about Gaussian polynomials. With the
formulae (i) and (ii) at our disposal, the left hand side of
\eqref{eq_23} can be written as
\begin{equation}\label{eq_24}
\begin{split}
\alpha_{2m+1}^{\uparrow G} & (q^{-1}) \; Y^{\sum_{\kappa \in
[k+1]} \binom{x_\kappa + 1}{2}} \\
& = \binom{m}{G}_{\!\! Y} \prod_{\kappa \in [k+1]} Y^{\binom{x_\kappa +
1}{2}} \\
& = \binom{m}{G}_{\!\! Y} \prod_{\kappa \in [k+1]} \left(
\sum_{j=0}^{x_\kappa} \binom{x_\kappa}{[ x_\kappa-j, x_\kappa-1
]}_{\!\! Y} \left(Y - 1 \right)^j
Y^{\binom{x_\kappa - j}{2}} \right) \\
& = \binom{m}{G}_{\!\! Y} \prod_{\kappa \in [k+1]} \left(
\sum_{j=0}^{x_\kappa} \binom{x_\kappa}{[x_\kappa-j,
x_\kappa-1 ]}_{\!\! Y} \left( 1 - Y \right)^j \right. \\
& \qquad \qquad \qquad \qquad \qquad \cdot \left. \sum_{K
\subseteq [x_\kappa-j-1]} \binom{x_\kappa - j}{K}_{\!\! Y}
(-1)^{x_\kappa + |K| + \delta(\text{`} j \not = x_\kappa
\text{'})} \right),
\end{split}
\end{equation}
where the Kronecker-delta $\delta(\text{`} j \not = x_\kappa
\text{'}) \in \{1,0\}$ reflects whether or not the inequality $j
\not = x_\kappa$ holds. On the other hand, setting
$$
\Xi := \bigcup_{H \subseteq [m]} \{ (H,\boldsymbol{\xi}) \mid
\boldsymbol{\xi} \text{ a refinement of a truncation of $G$ by $H$}
\},
$$
the right hand side of~\eqref{eq_23} can be
written as
\begin{equation}\label{eq_25}
\sum_{H \subseteq [m]} (-1)^{m + \|H\|} c_{G,H} \;
\alpha_{2m+1}^{\uparrow H}(q^{-1}) = \sum_{(H,\boldsymbol{\xi}) \in
\Xi} (-1)^{m + \|H\|} \alpha_{2m+1}^{\uparrow H}(q^{-1}).
\end{equation}
Now we explain why the last sum is indeed equal to the right hand
side of \eqref{eq_24}. Choosing an element
$(H,\boldsymbol{\xi}) \in \Xi$, so that $\boldsymbol{\xi} = (\xi_1,
\dots, \xi_{k+1})$ is a refinement of a truncation of $C(G) = (x_1,
\dots, x_{k+1})$ by $C(H) = (y_1, \dots, y_\lambda)$, is the same as
fixing for each $\kappa \in [k+1]$ a truncation length $j_\kappa \in
[x_\kappa]_0$ and a subset $K_\kappa \subseteq [x_\kappa - j_\kappa
- 1]$, corresponding to a composition $(y_{\xi_{\kappa - 1} + 1},
\dots, y_{\xi_\kappa})$ of $x_\kappa - j_\kappa$. Moreover, the
summands attached to the data $(H,\boldsymbol{\xi})$ in
\eqref{eq_25} and $(j_\kappa,K_\kappa)_{\kappa \in [k+1]}$
in \eqref{eq_24} respectively agree:
\begin{equation*}
(-1)^{m + \|H\|} = (-1)^{\sum_{\kappa \in [k+1]} x_\kappa +
\sum_{\kappa \in [k+1]} (|K_\kappa| + \delta(\text{`} j_\kappa \not =
x_\kappa \text{'}) )}
\end{equation*}
and
\begin{align*}
\alpha_{2m+1}^{\uparrow H}(q^{-1}) & = \frac{(1 - Y^m) (1 -
Y^{m-1}) \cdots (1-Y)}{\prod_{\iota \in [\lambda]} (1 -
Y^{y_\iota}) (1 - Y^{y_\iota - 1}) \cdots (1 - Y)} \\
& = \frac{(1 - Y^m) (1 - Y^{m-1}) \cdots (1-Y)}{\prod_{\kappa \in
[k+1]} \prod_{\iota = \xi_{\kappa - 1} + 1}^{\xi_\kappa} (1 -
Y^{y_\iota}) (1 - Y^{y_\iota - 1}) \cdots (1 - Y)} \\
& = \frac{(1 - Y^m) (1 - Y^{m-1}) \cdots (1-Y)}{\prod_{\kappa \in
[k+1]} (1 - Y^{x_\kappa - j_\kappa}) (1 - Y^{x_\kappa -
j_\kappa - 1}) \cdots (1 - Y)} \prod_{\kappa \in [k+1]} \!
\binom{x_\kappa - j_\kappa}{K_\kappa}_{\!\! Y} \\
& = \frac{(1 - Y^m) (1 - Y^{m-1}) \cdots (1-Y)}{\prod_{\kappa \in
[k+1]} (1 - Y^{x_\kappa}) (1 - Y^{x_\kappa - 1}) \cdots (1
- Y)} \\
& \qquad \cdot \prod_{\kappa \in [k+1]}
\binom{x_\kappa}{[x_\kappa - j_\kappa, x_\kappa - 1]}_{\!\! Y} (1 -
Y)^{j_\kappa} \binom{x_\kappa -
j_\kappa}{K_\kappa}_{\!\! Y} \\
& = \binom{m}{G}_{\!\! Y} \prod_{\kappa \in [k+1]}
\binom{x_\kappa}{[x_\kappa - j_\kappa, x_\kappa - 1]}_{\!\! Y} (1 -
Y)^{j_\kappa} \binom{x_\kappa - j_\kappa}{K_\kappa}_{\!\! Y}.
\end{align*}
This finishes the proof of \eqref{eq_23}. For later
use we record
\begin{equation} \label{eq_26}
\alpha_{2m+1}^{\uparrow G}(q^{-1}) \; Y^{\sum_{\kappa \in [k+1]}
\binom{x_\kappa}{2}} = \sum_{\substack{H \subseteq [m] \\ N(H) = m}}
(-1)^{m + \|H\|} c_{G,H} \; \alpha_{2m+1}^{\uparrow H}(q^{-1}).
\end{equation}
Indeed, summing only over those $H \subseteq [m]$ such that $N(H) =
m$ is achieved by setting persistently $j = j_\kappa = 0$ in the
above formulae. Clearly, under the restriction $j = 0$ the term in
the third line of~\eqref{eq_24} reduces to the left hand
side of \eqref{eq_26}.
Finally, we supply the proof of the formulae (i) above. We argue by
induction on $i \in \ensuremath{\mathbb{N}}_0$. For $i=0$ we have
$$
Y^{\binom{1}{2}} = 1 = \binom{0}{\varnothing}_{\!\! Y} (Y-1)^0
Y^{\binom{0}{2}},
$$
and for $i>0$ we obtain, by induction,
\begin{align*}
\sum_{j=0}^i & \binom{i}{[i-j,i-1]}_{\!\! Y} (Y-1)^j Y^{\binom{i-j}{2}}
\\
& = Y^{\binom{i}{2}} + \sum_{j \in [i]} \binom{i}{[i-j,i-1]}_{\!\! Y} (Y-1)^j
Y^{\binom{i-j}{2}}\\
& = Y^{\binom{i}{2}} + \sum_{j \in [i]} \binom{i-1}{[i-j,i-2]}_{\!\! Y}
\binom{i}{i-1}_{\!\! Y} (Y-1)^j
Y^{\binom{i-j}{2}} \\
& = Y^{\binom{i}{2}} + (Y-1) \binom{i}{i-1}_{\!\! Y} \sum_{j \in [i]}
\binom{i-1}{[i-j,i-2]}_{\!\! Y} (Y-1)^{j-1}
Y^{\binom{i-j}{2}} \\
& = Y^{\binom{i}{2}} + (Y-1) \frac{(Y^i -1)}{(Y-1)}
Y^{\binom{i}{2}} = Y^{i+\binom{i}{2}} = Y^{\binom{i+1}{2}}.
\end{align*}
\end{proof}
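Identity (i) can also be checked symbolically for small $i$. The sketch below (Python with sympy) assumes the interpretation $\binom{i}{[i-j,i-1]}_{Y} = [i]_{Y}[i-1]_{Y}\cdots[i-j+1]_{Y}$, the Gaussian multinomial attached to the composition $(i-j,1,\dots,1)$, which matches the recursion $\binom{i}{[i-j,i-1]}_{Y} = \binom{i-1}{[i-j,i-2]}_{Y}\binom{i}{i-1}_{Y}$ used in the inductive step; with this reading the factor $\binom{i}{[i-j,i-1]}_{Y}(Y-1)^{j}$ equals $\prod_{t=i-j+1}^{i}(Y^{t}-1)$.

```python
import sympy as sp

Y = sp.symbols('Y')

def rhs(i):
    # sum_{j=0}^{i} binom(i,[i-j,i-1])_Y (Y-1)^j Y^binom(i-j,2); the factor
    # binom(i,[i-j,i-1])_Y (Y-1)^j is the product of (Y^t - 1) over t = i-j+1..i
    total = sp.Integer(0)
    for j in range(i + 1):
        coeff = sp.Integer(1)
        for t in range(i - j + 1, i + 1):
            coeff *= Y**t - 1
        total += coeff * Y**sp.binomial(i - j, 2)
    return total

# compare with the left hand side Y^binom(i+1,2) for small i
for i in range(7):
    assert sp.expand(rhs(i) - Y**sp.binomial(i + 1, 2)) == 0
```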
\begin{proof}[Proof of Proposition~\textup{\ref{proposition_2}~(ii)}
for $n = 2m$ even]
For $G \subseteq [m]$ we are looking to prove
\begin{equation} \label{eq_27}
\alpha_{2m,\epsilon}^{\uparrow G}(q) = \epsilon (-1)^m q^{m^2}
\sum_{H \subseteq [m]} (-1)^{\|H\|} c_{G,H} \;
\alpha_{2m,\epsilon}^{\uparrow H}(q^{-1}).
\end{equation}
Again by an inductive argument, analogous to the case $n = 2m+1$, we
may assume that in fact $N(G)=m$, i.e.\ $G \subseteq [m-1]$. Write
$Y := q^{-2}$ and $C(G) = (x_1, \dots, x_{k+1})$.
Proposition~\ref{proposition_3}~(ii) shows that
in this case
$$
\alpha_{2m,\epsilon}^{\uparrow G}(q^{-1}) = \alpha_{2m +
1}^{\uparrow G}(q^{-1}) = \binom{m}{G}_{\!\! Y},
$$
in particular, as $\deg_Y \binom{m}{G}_{\! Y} = \binom{m}{2} -
\sum_{\kappa \in [k+1]} \binom{x_\kappa}{2}$,
$$
\alpha_{2m,\epsilon}^{\uparrow G}(q) =
\alpha_{2m,\epsilon}^{\uparrow G}(q^{-1}) \; Y^{- \binom{m}{2} +
\sum_{\kappa \in [k+1]} \binom{x_\kappa}{2}} .
$$
We shall show below that
\begin{equation} \label{eq_28}
\alpha_{2m,\epsilon}^{\uparrow G}(q^{-1}) \; Y^{\sum_{\kappa \in [k+1]}
\binom{x_\kappa}{2}} q^{-m} = \epsilon \sum_{H \subseteq [m]}
(-1)^{m + \|H\|} c_{G,H} \; \alpha_{2m,\epsilon}^{\uparrow H}(q^{-1}).
\end{equation}
From these equations, \eqref{eq_27} follows
readily.
It remains to prove \eqref{eq_28}. An easy
computation gives
\begin{align*}
Y^{\sum_{\kappa \in [k+1]} \binom{x_\kappa}{2}} q^{-m} & =
\epsilon \frac{q^{-2m} + \epsilon q^{-m}}{1 + \epsilon q^{-m}}
Y^{\sum_{\kappa \in [k+1]}
\binom{x_\kappa}{2}} \\
& = \epsilon \frac{1}{1 + \epsilon q^{-m}} \left( Y^{\sum_{\kappa
\in [k+1]} \binom{x_\kappa +1}{2}} + \epsilon
q^{-m} Y^{\sum_{\kappa \in [k+1]} \binom{x_\kappa}{2}} \right) \\
& = \epsilon \left( \frac{Y^{\sum_{\kappa \in [k+1]}
\binom{x_\kappa + 1}{2}} - Y^{\sum_{\kappa \in [k+1]}
\binom{x_\kappa}{2}}}{1 + \epsilon q^{-m}} + Y^{\sum_{\kappa
\in [k+1]} \binom{x_\kappa}{2}} \right).
\end{align*}
From Proposition~\ref{proposition_3} we see that for
$H \subseteq [m]$,
\begin{equation*}
\alpha_{2m,\epsilon}^{\uparrow H}(q^{-1}) =
\begin{cases}
\alpha_{2m+1}^{\uparrow H}(q^{-1}) & \text{if $N(H) = m$,} \\
\alpha_{2m+1}^{\uparrow H}(q^{-1}) / (1 + \epsilon q^{-m}) &
\text{otherwise.}
\end{cases}
\end{equation*}
In view of \eqref{eq_23} and \eqref{eq_26}, we
thus obtain
\begin{align*}
\alpha_{2m,\epsilon}^{\uparrow G} & (q^{-1}) Y^{\sum_{\kappa \in
[k+1]} \binom{x_\kappa}{2}} q^{-m} \\
& = \epsilon \alpha_{2m + 1}^{\uparrow G}(q^{-1}) \left(
\frac{Y^{\sum_{\kappa \in [k+1]} \binom{x_\kappa + 1}{2}} -
Y^{\sum_{\kappa \in [k+1]} \binom{x_\kappa}{2}}}{1 + \epsilon
q^{-m}} + Y^{\sum_{\kappa \in [k+1]} \binom{x_\kappa}{2}}
\right) \\
& = \epsilon \sum_{H \subseteq [m], N(H) \not = m} (-1)^{m +
\|H\|} c_{G,H} \; \alpha_{2m+1}^{\uparrow
H}(q^{-1}) / (1 + \epsilon q^{-m}) \\
& \quad + \; \epsilon \sum_{H \subseteq [m], N(H) = m} (-1)^{m +
\|H\|} c_{G,H} \; \alpha_{2m+1}^{\uparrow
H}(q^{-1}) \\
& = \epsilon \sum_{H \subseteq [m]} (-1)^{m + \|H\|} c_{G,H} \;
\alpha_{2m,\epsilon}^{\uparrow H}(q^{-1}).
\end{align*}
This proves \eqref{eq_28}.
\end{proof}
\section{A conjecture for the orthogonal case}\label{section_conjecture}
In this section we discuss in more detail
Conjecture~\ref{conjecture_C}. As in Section~\ref{section_orthogonal},
let $\ensuremath{\mathcal{V}} =(V,B,f)$ be an $n$-dimensional, non-degenerate quadratic
space over the finite field~$F = \ensuremath{\mathbb{F}}_q$. Our aim is to give, for $J
\subseteq [n-1]$, an expression for the
polynomial~$\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1})$ in terms of parabolic length
functions on the Coxeter group~$W$ of type $A_{n-1}$. If
Conjecture~\ref{conjecture_C} holds, the orthogonal case of
Theorem~\ref{theorem_A} follows directly from Theorem~\ref{theorem_1}.
Fix the Coxeter system $(W,S)$ where $W = \ensuremath{\mathcal{S}}_n$ and $S = \{ s_1,
\dots, s_{n-1} \}$ denotes the standard set of Coxeter generators $s_i
= (i \;\; i+1 )$, $i \in [n-1]$. A crucial role is played by the
following statistic on $W$.
\begin{definition}[Length $L$]
Recalling the notation from Section~\ref{section_coxeter}, for
$w \in W$ set
\begin{equation}\label{eq_29}
L(w) := \ensuremath{\mathbf{b}} \cdot \ensuremath{\mathbf{l}}_\ensuremath{\textup{R}}(w), \quad \text{where }
\ensuremath{\mathbf{b}} = \left(b_I \right)_{I \subseteq S} = \left((-1)^{|I|}
2^{|S|-|I|-1} \right)_{I \subseteq S}.
\end{equation}
\end{definition}
It is well-known that the ordinary Coxeter length of a permutation $w
\in W$ is equal to the number of inversion pairs associated to $w$,
i.e.\ $l(w) = |\mathcal{I}(w)|$ where
$$
\mathcal{I}(w) := \left\{ (i,j) \mid 1 \leq i < j \leq n, i^w > j^w \right\}.
$$
The parabolic length function $L$ also has a simple interpretation in
terms of inversion pairs.
\begin{lemma}
For each $w \in W$,
$$
L(w) = | \{ (i,j) \in \mathcal{I}(w) \mid i \not \equiv j \mod 2 \} |.
$$
\end{lemma}
\begin{proof}
Let $w \in W$ and note that for any $I \subseteq [n-1]$,
$$
l^I_\ensuremath{\textup{R}}(w) = |\left\{(i,j)\in\mathcal{I}(w) \mid [i,j-1] \not
\subseteq I \right\}|.
$$
From this we derive
\begin{align*}
L(w) & = \frac{1}{2} \sum_{I\subseteq [n-1]} (-1)^{|I|}
2^{|S|-|I|} l^I_\ensuremath{\textup{R}}(w) \\
& = \frac{1}{2} \sum_{(i,j) \in \mathcal{I}(w)} \sum_{I \subseteq
[n-1]} (-1)^{|I|} 2^{|S|-|I|} \; \delta(\text{`} [i,j-1]
\not\subseteq
I \text{'}) \\
& = \frac{1}{2} \sum_{(i,j) \in \mathcal{I}(w)} \left( \sum_{I
\subseteq [n-1]} (-1)^{|I|} 2^{|S|-|I|} -
\sum_{[i,j-1] \subseteq I} (-1)^{|I|} 2^{|S|-|I|} \right) \\
& = \frac{1}{2} \sum_{(i,j) \in \mathcal{I}(w)} \left( (2-1)^{|S|}
-
(-1)^{j-i} (2-1)^{|S|-(j-i)} \right) \\
& = \frac{1}{2} \sum_{(i,j) \in \mathcal{I}(w)}
\left(1-(-1)^{j-i}\right),
\end{align*}
where the Kronecker-delta $\delta(\text{`} [i,j-1] \not\subseteq I
\text{'}) \in \{1,0\}$ reflects whether or not the inclusion
$[i,j-1] \not\subseteq I$ holds.
\end{proof}
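The lemma is easy to confirm by brute force for small $n$. The sketch below (Python; the helper names are ours) computes $L$ directly from the defining linear combination of parabolic lengths, using the inversion-pair description of $l^{I}_{\textup{R}}$ quoted in the proof, and compares it with the odd-distance inversion count.

```python
from itertools import combinations, permutations

def inversions(w):
    # w in one-line notation: w[i-1] is the image of i
    n = len(w)
    return [(i, j) for i in range(1, n) for j in range(i + 1, n + 1)
            if w[i - 1] > w[j - 1]]

def parabolic_length(w, I):
    # l^I_R(w): inversion pairs (i, j) whose interval [i, j-1] is not inside I
    return sum(1 for (i, j) in inversions(w)
               if not set(range(i, j)) <= I)

def L(w):
    # L(w) = (1/2) sum_{I subseteq S} (-1)^|I| 2^(|S|-|I|) l^I_R(w)
    S = range(1, len(w))
    total = sum((-1) ** r * 2 ** (len(S) - r) * parabolic_length(w, set(I))
                for r in range(len(S) + 1) for I in combinations(S, r))
    assert total % 2 == 0
    return total // 2

# the lemma: L(w) counts the inversion pairs (i, j) with i, j of opposite parity
for n in (2, 3, 4, 5):
    for w in permutations(range(1, n + 1)):
        assert L(w) == sum(1 for (i, j) in inversions(w) if (i + j) % 2 == 1)
```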
\begin{definition}[Chessboard elements]
We say that $w \in W$ is a \emph{chessboard element} if $i + i^w
\equiv j + j^w$ modulo $2$ for all $i,j \in [n]$. Clearly, the set
$\mathcal{C}_n$ of chessboard elements forms a subgroup of $W$.
Note that $\mathcal{C}_n$ contains a subgroup~$\mathcal{C}_{n,0}$
consisting of elements~$w$ such that $i \equiv i^w$ modulo $2$ for
all $i \in [n]$. If $n = 2m+1$ is odd, we have $\mathcal{C}_n =
\mathcal{C}_{n,0} \cong \ensuremath{\mathcal{S}}_{m + 1} \times \ensuremath{\mathcal{S}}_m$. If $n = 2m$ is
even, we have $\mathcal{C}_n = \langle w_0 \rangle \ltimes
\mathcal{C}_{n,0}$, where $w_0$ denotes the longest element of $W$,
and $\mathcal{C}_{n,0} \cong \ensuremath{\mathcal{S}}_m \times \ensuremath{\mathcal{S}}_m$.
We write~$\sigma: W \rightarrow \{1,-1\}$, $w \mapsto (-1)^{l(w)}$
for the \emph{sign character}, and $\tau : \mathcal{C}_n \rightarrow
\{1,-1\}$ for the linear character with $\ker(\tau) =
\mathcal{C}_{n,0}$. Recall from the introduction that, in the
even-dimensional case, we attach a sign~$\epsilon \in \{1,-1\}$
to~$\ensuremath{\mathcal{V}}$. Observing that $\tau$ is trivial for $n$ odd, we define
$$
\chi_\epsilon : \mathcal{C}_n \rightarrow \{1,-1\}, \chi_\epsilon (w) :=
\begin{cases}
\sigma(w) & \text{if $n$ is odd, or if $n$ is even and $\epsilon = 1$,} \\
\sigma(w) \tau(w) & \text{if $n$ is even and
$\epsilon = -1$.}
\end{cases}
$$
\end{definition}
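The structure of $\mathcal{C}_n$ described above is readily confirmed by enumeration for small $n$; a sketch (Python) checking $|\mathcal{C}_{2m+1}| = (m+1)!\,m!$ and, for $n = 2m$ even, $|\mathcal{C}_{n,0}| = (m!)^2$ with $[\mathcal{C}_n : \mathcal{C}_{n,0}] = 2$:

```python
from itertools import permutations
from math import factorial

def is_chessboard(w):
    # i + i^w has the same parity for all i
    return len({(i + w[i - 1]) % 2 for i in range(1, len(w) + 1)}) == 1

def preserves_parity(w):
    # membership in C_{n,0}: i and i^w have the same parity for all i
    return all((i + w[i - 1]) % 2 == 0 for i in range(1, len(w) + 1))

for n in (3, 4, 5, 6):
    C = [w for w in permutations(range(1, n + 1)) if is_chessboard(w)]
    C0 = [w for w in C if preserves_parity(w)]
    m = n // 2
    if n % 2 == 1:
        assert len(C) == len(C0) == factorial(m + 1) * factorial(m)
    else:
        assert len(C0) == factorial(m) ** 2 and len(C) == 2 * len(C0)
```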
\addtocounter{thmx}{-1}
\begin{conjecture}
For each $J \subseteq [n-1]$,
\begin{equation} \label{eq_30}
\alpha^J_{\ensuremath{\mathcal{V}}}(q^{-1}) = \alpha^J_{n,\epsilon}(q^{-1}) =
\sum_{\substack{w \in \mathcal{C}_n \\
D_\ensuremath{\textup{L}}(w) \subseteq J}} \chi_\epsilon (w) q^{-L(w)} .
\end{equation}
\end{conjecture}
Note that, if Conjecture~\ref{conjecture_C} holds, the orthogonal case
of Theorem~\ref{theorem_A} follows from Theorem~\ref{theorem_1},
equation~\eqref{eq_5}, with~$W' = \mathcal{C}_n$, $\ensuremath{\mathbf{b}}$ as defined
in~\eqref{eq_29} and $\chi = \chi_\epsilon$.
Conjecture~\ref{conjecture_C} has been confirmed for $|J| \leq 1$ and
verified for~$n \leq 13$.
\begin{example}
For $n=3$, we have
\begin{figure}[H]
\begin{center}
\begin{tabular}{|l!{\vrule width 1pt}c|c|c|c|}
\hline
$J \subseteq [2]$&$\varnothing$&$\{1\}$&$\{2\}$&$\{1,2\}$\\
\hline\hline
$a^J_3(q)$&$1$&$q^2$&$q^2$&$q^3-q$\\
\hline
$\alpha^J_3(q^{-1})$&$1$&$1$&$1$&$1-q^{-2}$\\
\hline
\end{tabular}
\end{center}
\end{figure}
For $w \in W = \langle s_1, s_2 \rangle = \ensuremath{\mathcal{S}}_3$, the statistic
$L(w) = 2l(w) - l_{\ensuremath{\textup{R}}}^{\{1\}}(w) - l_{\ensuremath{\textup{R}}}^{\{2\}}(w)$, the
character $\chi_\epsilon (w) = \sigma(w) = (-1)^{l(w)}$ and the left
descent set $D_\ensuremath{\textup{L}}(w)$ take the values
\begin{figure}[H]
\begin{center}
\begin{tabular}{|l!{\vrule width 1pt}c|c|c|c|c|c|}
\hline
$w$&$\Id$&$s_1$&$s_2$&$s_1s_2$&$s_2s_1$&$s_1s_2s_1$\\
\hline\hline
$L(w)$&$0$&$1$&$1$&$1$&$1$&$2$\\
\hline
$\chi_\epsilon(w)$&$1$&$-1$&$-1$&$1$&$1$&$-1$\\
\hline
$D_\ensuremath{\textup{L}}(w)$&$\varnothing$&$\{1\}$&$\{2\}$&$\{1\}$&$\{2\}$&$\{1,2\}$\\
\hline
\end{tabular}
\end{center}
\end{figure}
\end{example}
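The right hand side of \eqref{eq_30} can be evaluated mechanically and compared against the table above. A sketch for $n = 3$ (Python with sympy; the one-line permutation conventions are ours):

```python
import sympy as sp
from itertools import permutations

q = sp.symbols('q')
n = 3

def inversions(w):
    return [(i, j) for i in range(1, n) for j in range(i + 1, n + 1)
            if w[i - 1] > w[j - 1]]

def left_descents(w):
    # s_i is a left descent of w iff l(s_i w) < l(w)
    D = set()
    for i in range(1, n):
        s = list(range(1, n + 1))
        s[i - 1], s[i] = s[i], s[i - 1]
        sw = tuple(w[s[k] - 1] for k in range(n))
        if len(inversions(sw)) < len(inversions(w)):
            D.add(i)
    return D

def chi(w):
    return sp.Integer(-1) ** len(inversions(w))   # chi_epsilon = sigma for n odd

def L(w):
    # odd-distance inversion count, by the lemma above
    return sum(1 for (i, j) in inversions(w) if (i + j) % 2 == 1)

chessboard = [w for w in permutations(range(1, n + 1))
              if len({(i + w[i - 1]) % 2 for i in range(1, n + 1)}) == 1]

# the table of alpha^J_3(q^{-1}) from the example
table = {(): 1, (1,): 1, (2,): 1, (1, 2): 1 - q ** -2}
for J, expected in table.items():
    total = sum(chi(w) * q ** (-L(w)) for w in chessboard
                if left_descents(w) <= set(J))
    assert sp.simplify(total - expected) == 0
```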
If $n$ is odd or if $n$ is even and $\epsilon = 1$, the character
$\chi_\epsilon$ naturally extends to the sign character on the whole
group $W$. Interestingly, in this case also a modified version of
equation~\eqref{eq_30} seems to hold, where $\chi_\epsilon$ is
replaced by $\sigma$ and one sums over \emph{all} elements $w \in W$.
In fact, we originally introduced chessboard elements in an attempt to
control cancellation in this larger sum. Evidently, the contributions
of any two elements $w_1, w_2 \in W$ with $w_1^{-1} w_2 \in S$ and
$L(w_1) = L(w_2)$ cancel each other. Therefore we were led to sum
over the set
$$
\mathcal{M} := \{w \in W \mid \forall s\in S: D_\ensuremath{\textup{L}}(w) \not = D_\ensuremath{\textup{L}}(ws)
\text{ or } L(w)\not = L(ws) \}.
$$
The set~$\mathcal{M}$ is easily seen to be closed under
right-multiplication by the longest element $w_0$ and might indeed
coincide with~$\mathcal{C}_n$. Aided by computer evidence, we
distilled Conjecture~\ref{conjecture_C} out of this circle of ideas.
\section{Introduction and Summary}
It has been well documented in the financial press that a methodology is
needed that can identify an asset price bubble in real time. William Dudley,
the President of the New York Federal Reserve, in an interview with Planet
Money~\cite{planet} stated \textquotedblleft ...what I am proposing is that
we try to identify bubbles in real time, try to develop tools to address
those bubbles, try to use those tools when appropriate to limit the size of
those bubbles and, therefore, try to limit the damage when those bubbles
burst.\textquotedblright
It is also widely recognized that this is not an easy task. Indeed, in 2009
the Federal Reserve Chairman Ben Bernanke said in Congressional Testimony~%
\cite{Bernanke} \textquotedblleft It is extraordinarily difficult in real
time to know if an asset price is appropriate or not\textquotedblright .
Without a quantitative procedure, experts often have different opinions
about the existence of price bubbles. A famous example is the oil price
bubble of 2007/2008. Nobel prize winning economist Paul Krugman wrote in the
New York Times that it was not a bubble, and two days later Ben Stein wrote
in the same paper that it was.
Although not yet widely known by the finance industry, the authors have
recently developed a procedure based on a sophisticated mathematical model
for detecting asset price bubbles \emph{in real time} (see~\cite{JKP}). We
have successfully back-tested our methodology showing the existence of price
bubbles in various stocks during the dot-com era of 2000 to 2002. We also showed that some stocks in that period that might have been suspected of being bubbles were in fact not. But we
have not yet tested our method on stocks in real time. That is, we have not
tested it until now.
Inspired by a New York Times article~\cite{NYT} discussing whether or not in
the aftermath of the LinkedIn IPO the stock price had a bubble, we obtained
stock price tick data from Bloomberg. And, we used our methodology to test
whether LinkedIn's stock price is exhibiting a bubble. \textit{We have
found, definitively, that there is a price bubble!}
Our method consists of assuming a general (and generally accepted) evolution
for the stock price, estimating its volatility using state of the art
estimators, and then extrapolating the volatility function to see if a
certain calculus integral based on the volatility is finite or infinite. If
it is finite, the stock price has a bubble; if the integral in question is infinite then the stock is not undergoing bubble pricing; our
test is indeterminate in a small set of cases where the volatility tends to
infinity at a rate between both possibilities. In the case of LinkedIn, the
volatility function is well inside the bubble region. There is no doubt
about its existence.
Our results can be summarized in the following graph showing an
extrapolation of an estimated volatility function for LinkedIn's stock
price. Put simply, the theory developed in~\cite{JPS1},\cite{JPS2}, and~\cite%
{JKP} tells us that if the graph of the volatility versus the stock price
tends to infinity at a faster rate than does the graph of $f(x)=x$, then we
have a bubble. Below we have a graph of the volatility coefficient of
LinkedIn together with its extrapolation, and the reader can clearly see
that the graph indicates the stock has a price bubble.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.57]{RKHS2.jpg}
\end{center}
\caption{Estimation and Extrapolation of the Volatility Function}
\label{RKHS2}
\end{figure}
The blue part of the graph is the estimated function, and the red part is
its extrapolation using the technique of Reproducing Kernel Hilbert Spaces
(RKHS).
These LinkedIn results illustrate the usefulness of our new methodology for
detecting bubbles in real time. Our methodology provides a solution to the
problems stated by both Chairman Bernanke and President Dudley, and it is
our hope that they will prove useful to regulators, policy makers, and
industry participants alike.
\bigskip
\noindent \textbf{Acknowledgement:} The authors thank Peter Carr and Arun
Verma for help in obtaining quickly the tick data for the stock LinkedIn.
\section*{The Empirical Test}
In this section we first review the methodology contained in \cite{JPS2} to
determine whether or not LinkedIn's stock price is experiencing a bubble,
and then we apply this methodology to LinkedIn minute by minute stock price
tick data obtained from Bloomberg. We conclude that LinkedIn's stock is
indeed experiencing a price bubble.
\subsection{The Methodology}
The methodology is based on studying the characteristics of LinkedIn's stock
price process. If LinkedIn's stock price is experiencing a bubble, it can be
shown \cite{JPS2} that the stock price's volatility will be unusually large.
Our empirical methodology validates whether or not LinkedIn's stock
volatility is large enough.
To perform this validation, we start by assuming that the stock price
follows a stochastic differential equation of a form that makes it a
diffusion in an incomplete market (see (\ref{1}) below). The assumed stock
price evolution is very general and fits most stock price processes
reasonably well. For identifying a price bubble, the key characteristic of
this evolution is that the stock price's volatility $\sigma (S_{t})$ is a
function of the level of the stock price.
Next, we use sophisticated methods to estimate the volatility coefficient $%
\sigma (x)$. Since the data is necessarily finite, we can estimate the
values of $\sigma (x)$ only on the set of stock prices observed (which
is a compact subset of $\mathbb{R}_{+}$). We use two methods to extend $%
\sigma $ to all of $\mathbb{R}_{+}$. One, we use parametric methods combined
with a comparison theorem. Two, we use an indexed family of Reproducing
Kernel Hilbert Spaces (RKHS) combined with an optimization over the index
set to obtain the best possible extension given the data (this is a sort of
bootstrap procedure).
This \textquotedblleft knowledge of $\sigma (S_{t})$\textquotedblright\ then
enables us to determine whether the stock price is a martingale or is a
strict local martingale under any of an infinite collection of risk neutral
measures. If it is a strict local martingale for all of the risk neutral
measures (which corresponds to a certain calculus integral being finite by
Feller's test for explosions), then we can conclude that the stock price is
undergoing a bubble. Otherwise, there is not a stock price bubble.
\subsection{The Estimation}
We assume that LinkedIn's stock price
is a diffusion of the form
\begin{eqnarray}
dS_{t}&=&\sigma (S_{t})dW_{t} +b(S_t,Y_t)dt \label{1}\\
dY_t&=&s(Y_t)dB_t+g(Y_t)dt
\end{eqnarray}%
where $W$ and $B$ are independent Brownian motions. This model allows the
stock price, under the physical probability measure, to have a drift that
depends on additional randomness, making the market incomplete (see
\cite{JPS2}). Nevertheless, for this family of models, $S$ satisfies the
following equation under every risk neutral measure:
$$
dS_{t}=\sigma (S_{t})dW_{t}
$$
Under this evolution, the stock price exhibits a bubble if and only if
\begin{equation}
\int_{a}^{\infty }\frac{x}{\sigma ^{2}(x)}\,dx<\infty \quad \text{for all } a>0.
\end{equation}%
We test to see if this integral is finite or not.
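For orientation: if $\sigma(x)$ grows like $c\,x^{\alpha}$, the integrand behaves like $x^{1-2\alpha}$, so the integral is finite exactly when $\alpha > 1$; this is the precise sense in which volatility growing faster than $f(x) = x$ signals a bubble. A quick symbolic check (Python with sympy; the constant $c$ is irrelevant for convergence and set to $1$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def bubble_integral(alpha):
    # integral of x / sigma(x)^2 from 1 to infinity, with sigma(x) = x**alpha
    return sp.integrate(x / x ** (2 * alpha), (x, 1, sp.oo))

assert bubble_integral(sp.Rational(1, 2)) == sp.oo  # sigma ~ sqrt(x): no bubble
assert bubble_integral(sp.Integer(1)) == sp.oo      # sigma ~ x: boundary, no bubble
assert bubble_integral(sp.Rational(521, 100)).is_finite  # alpha = 5.21: bubble
```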
To perform this test, we obtained minute by minute stock price tick data for
the 4 business days 5/19/2011 to 5/24/2011 from Bloomberg. There are exactly
1535 price observations in this data set. The time series plot of LinkedIn's
stock price is contained in Figure \ref{LinkedInSpot}. The prices used are
the open prices of each minute but the results are not sensitive to using
open, high or lowest minute prices instead.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.62]{SpotPrice.jpg}
\end{center}
\caption{LinkedIn Stock Prices from 5/19/2011 to 5/24/2011.(The observation
interval is one minute.)}
\label{LinkedInSpot}
\end{figure}
The maximum stock price attained by LinkedIn during this period was \$120.74
and the minimum price was \$81.24. As evidenced in this diagram, LinkedIn
experienced a dramatic price rise in its early trading. This suggests an
unusually large stock price volatility over this short time period and
perhaps a price bubble.
Our bubble testing methodology first requires us to estimate the volatility
function $\sigma $ using local time based non-parametric estimators. We use
two such estimators. We compare the estimation results obtained using both
Zmirou's estimator (see Theorem 1 in \cite{JKP}) and the estimator developed
in \cite{JKP} (see Theorem 3 in the same reference). The implementation of
these estimators requires a grid step $h_{n}$ tending to zero, such that $%
nh_{n}\rightarrow \infty $ and $nh_{n}^{4}\rightarrow 0$ for the former
estimator, and $nh_{n}^{2}\rightarrow \infty $ for the latter one. We choose
the step size $h_{n}=\frac{1}{n^{\frac{1}{3}}}$ so that all of these
conditions are simultaneously satisfied. This implies a grid of 7 points.
The statistics are displayed in figure \ref{Estimation}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{Estimation.jpg}
\end{center}
\caption{Non-parametric Volatility Estimates.}
\label{Estimation}
\end{figure}
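For readers who wish to reproduce this step, the following is a minimal sketch of a local-time-based volatility estimator in the spirit of Zmirou's (Python; it assumes $n+1$ equally spaced observations over a unit time interval, and it omits the refinements of the estimators actually used above):

```python
import numpy as np

def local_vol_estimate(S, x, h):
    # sigma^2(x) is approximated by n times the sum of squared increments
    # started in the window |S_t - x| < h, divided by the number of
    # observations falling in that window
    S = np.asarray(S, dtype=float)
    n = len(S) - 1
    near = np.abs(S[:-1] - x) < h
    if near.sum() == 0:
        raise ValueError("no observations in the window around x")
    return n * np.sum(near * np.diff(S) ** 2) / near.sum()

# sanity check on a deterministic path with constant increments d = 0.1:
# every squared increment is d^2, so the estimate must equal n * d^2
path = np.linspace(1.0, 2.0, 11)
assert abs(local_vol_estimate(path, x=1.5, h=10.0) - 10 * 0.1 ** 2) < 1e-12
```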
Since the neighborhoods of the grid points \$118.915 and \$125.764 are
either not visited or visited only once, we do not have reliable estimates
at these points. Therefore, we restrict ourselves to the grid containing
only the first five points. We note that the last point in the new grid
\$112.065 still has only been visited very few times.
When using Zmirou's estimator, confidence intervals are provided. The
confidence intervals are quite wide. Given these observations, we apply our
methodology twice. In the first test, we use a 5 point grid. In the second
test, we remove the fifth point where the estimation is uncertain and we use
a 4 point grid instead. The graph in figure \ref{EstimationGraph} plots the
estimated volatilities for the grid points together with the confidence
intervals.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{EstimationGraph.jpg}
\end{center}
\caption{Non-parametric Volatility Estimation Results.}
\label{EstimationGraph}
\end{figure}
The next step in our procedure is to interpolate the shape of the volatility
function between these grid points. We use the estimates from our
non-parametric estimator for the 5 point grid case. For the volatility time
scale, we let the 4 day time interval correspond to one unit of time. This
scaling does not affect the conclusions of this paper. When interpolating
one can use any reasonable method. We use both cubic splines and reproducing
kernel Hilbert spaces as suggested in \cite{JKP}, subsection 5.2.3 item
(ii). The interpolated functions are in figure \ref{Interpolation}.
From these, we select the kernel function $K_{1,\tau }$ as defined in Lemma
10 in \cite{JKP}, and we choose the parameter $\tau =6$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{Interpolation.jpg}
\end{center}
\caption{Interpolated Volatility using Cubic Splines and the RKHS Theory}
\label{Interpolation}
\end{figure}
The next step is to extrapolate the interpolated function $\sigma ^{b}$ using
the RKHS theory to the left and right stock price tails. Define $f(x)=\frac{1%
}{\sigma ^{2}(x)}$ and define the Hilbert space
\begin{equation*}
H_{n}=H_{n}\big(\lbrack 0,\infty \lbrack \big)=\big\{f\in C^{n}\big(\lbrack
0,\infty \lbrack \big)\mid \lim_{x\rightarrow \infty }x^{k}f^{(k)}(x)=0\text{
for all }0\leq k\leq n-1\big\}
\end{equation*}%
where $n$ is the assumed degree of smoothness of $f$. We also need to define
an inner product. A smooth reproducing kernel $q^{RP}(x,y)$ can be
constructed explicitly (see Proposition 2 in \cite{JKP}) via the choice
\begin{equation*}
\langle f,g\rangle _{n,m}=\int_{0}^{\infty }\frac{y^{n}f^{(n)}(y)}{n!}\frac{%
y^{n}g^{(n)}(y)}{n!}\frac{dy}{w(y)}
\end{equation*}%
where $w(y)=\frac{1}{y^{m}}$ is an asymptotic weighting function. We
consider the family of RKHS $H_{n,m}=(H_{n},\langle ,\rangle _{n,m})$, in
which case the explicit form of $q_{n,m}^{RP}$ is provided in Proposition 2
in \cite{JKP} in terms of the Beta and the Gauss's hypergeometric functions.
For $n\in \{1,2\}$ fixed, we construct our extrapolation $\sigma =\sigma _{m}
$ as in \cite{JKP}, 5.2.3 item (iv), by choosing the asymptotic weighting
function parameter $m$ such that $f_{m}=\frac{1}{\sigma _{m}^{2}}$ is in $%
H_{n,m}$, $\sigma _{m}$ exactly matches the points obtained from the non
parametric estimation, and $\sigma _{m}$ is as close (in norm 2) to $\sigma
^{b}$ on the last third of the bounded interval where $\sigma ^{b}$ is
defined. Because of the observed kink and the obvious change in the rate of
increase of $\sigma ^{b}$ at the fourth point, we choose $n=1$ in our
numerical procedure. The result is shown in figure \ref{RKHS}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{RKHS.jpg}
\end{center}
\caption{RKHS Based Extrapolation of $\protect\sigma ^{b}$}
\label{RKHS}
\end{figure}
We obtain $m=9.42$.
From Proposition 3 in \cite{JKP}, the asymptotic behavior of $\sigma $ is
given by
\begin{equation*}
\lim_{x\rightarrow \infty }x^{m+1}f(x)=n^{2}B(m+1,n)\sum_{i=1}^{M}c_{i}
\end{equation*}%
where $M=5$ is the number of observations available, $B$ is the Beta
function, and the coefficients $(c_{i})_{1\leq i\leq M}$ are obtained by
solving the system
\begin{equation*}
\sum_{i=1}^{M}c_{i}q_{n,m}^{RP}(x_{i},x_{k})=f(x_{k})\text{ for all }1\leq
k\leq M
\end{equation*}%
where $(x_{i})_{1\leq i\leq M}$ is the grid of the non-parametric
estimation, $f(x_{k})=\frac{1}{\sigma ^{2}(x_{k})}$ and $\sigma (x_{k})$ is
the value at the grid point $x_{k}$ obtained from the non-parametric
estimation procedure. This implies that $\sigma $ is asymptotically
equivalent to a function proportional to $x^{\alpha }$ with $\alpha =\frac{%
1+m}{2}$, that is $\alpha =5.21$. This value appears very large; however, the
proportionality constant is also large. The $c_{i}$'s are automatically
adjusted to exactly match the input points $(x_{i},f(x_{i}))_{1\leq i\leq M}$%
.
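The linear system for the coefficients $(c_i)$ is a standard kernel interpolation problem. The sketch below (Python) illustrates the mechanics only: a Gaussian kernel stands in for the actual reproducing kernel $q^{RP}_{n,m}$ of Proposition 2 in \cite{JKP}, and the grid values are hypothetical, so the numbers carry no financial meaning.

```python
import numpy as np

def kernel(x, y, tau=10.0):
    # placeholder kernel; the paper's q^{RP}_{n,m} involves the Beta and
    # hypergeometric functions instead
    return np.exp(-((x - y) / tau) ** 2)

def fit_coefficients(xs, fs):
    # solve sum_i c_i q(x_i, x_k) = f(x_k) for all grid points x_k
    Q = kernel(xs[:, None], xs[None, :])
    return np.linalg.solve(Q, fs)

def interpolant(x, xs, c):
    return kernel(np.asarray(x, dtype=float)[..., None], xs).dot(c)

# hypothetical grid in the spirit of f(x_k) = 1 / sigma(x_k)^2
xs = np.array([84.6, 91.5, 98.3, 105.2])
fs = 1.0 / np.array([2.1, 3.0, 4.4, 7.9]) ** 2
c = fit_coefficients(xs, fs)
assert np.allclose(interpolant(xs, xs, c), fs)  # exact match at the grid points
```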
We plot below the functions with different asymptotic weighting parameters $m
$ obtained using the RKHS extrapolation method, without optimization. All
the functions exactly match the non-parametrically estimated points.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{RKHSAll.jpg}
\end{center}
\caption{Extrapolated Volatility Functions using Different Reproducing
Kernels}
\label{RKHSAll}
\end{figure}
The asymptotic weighting function's parameter $m=9.42$ obtained by
optimization appears in figure \ref{RKHSAll} to be the estimate most
consistent (among all the functions, in any Hilbert space of the form $%
H_{1,m}$, that exactly match the input data) with a ``natural'' extension of
the behavior of $\sigma ^{b}$ to $\mathbb{R}^{+}$. \textit{The power }$%
\alpha =5.21$\textit{\ implies then that LinkedIn stock price is currently
exhibiting a bubble.}
Since there is a large standard error for the volatility estimate at the end
point \$112.065, we remove this point from the grid and repeat our
procedure. Also, the rate of increase of the function between the last two
last points appears large, and we do not want the volatility's behavior to
follow solely from this fact. Hence, we check to see if we can conclude
there is a price bubble based only on the first 4 reliable observation
points. We plot in figure \ref{RKHS2} the function $\sigma ^{b}$ (in blue)
and its extrapolation to $\mathbb{R}^{+}$, $\sigma $ (in red).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.65]{RKHS2.jpg}
\end{center}
\caption{RKHS based Extrapolation of $\protect\sigma ^{b}$}
\label{RKHS2}
\end{figure}
Now $M=4$. With this new grid, we can assume a higher regularity $n=2$ and
we obtain, after optimization, $m=7.8543$. This leads to the power $\alpha
=4.42715$ for the asymptotic behavior of the volatility. Again, although
this power appears to be high given the numerical values $%
(x_{k},f(x_{k}))_{1\leq k\leq 4}$, the coefficients $(c_{i})_{1\leq i\leq 4}$
and hence the constant of proportionality are adjusted to exactly match the
input points. The extrapolated function obtained is the most consistent
(among all the functions, in any $H_{2,m}$, that exactly match the input
data) in terms of extending ``naturally'' the behavior of $\sigma ^{b}$ to $%
\mathbb{R}^{+}$. Again, we can conclude that there is a stock price bubble.
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\RIfM@\expandafter\text@\else\expandafter\mbox\fi{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\@ifundefined{Column}{\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}}{}%
\@ifundefined{qed}{\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}%
}}{}%
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap/c}}}{}%
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}%
\@ifundefined{vvert}{\def\vvert{\Vert}}{}%
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}%
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{}%
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{}%
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{}%
\@ifundefined{note}{\def\note{$^{\dag}}}{}%
\def\LaTeXe{LaTeX2e}
\ifx\fmtname\LaTeXe
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def\nabla{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\arabic{equation}{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\tag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\mathop{\textstyle \int}}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\mathop{\textstyle \oint}}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\displaystyle \int}%
\def\diint{\displaystyle \iint}%
\def\diiint{\displaystyle \iiint}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\mathop{\displaystyle \oint}}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\def\tci@quit{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\tci@quit
\else
\@ifpackageloaded{amsmath}%
{\message{amsmath already loaded}\aftergroup\tci@quit}
{}
\@ifpackageloaded{amstex}%
{\message{amstex already loaded}\aftergroup\tci@quit}
{}
\@ifpackageloaded{amsgen}%
{\message{amsgen already loaded}\aftergroup\tci@quit}
{}
\fi
\egroup
\typeout{TCILATEX defining AMS-like constructs}
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}%
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@}%
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\[email protected]
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}%
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}%
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}%
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}%
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{%
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\tag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
\@ifundefined{tag}{
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
}{}
\makeatother
\endinput
Photoacoustic Imaging (PAI) is an emerging medical imaging modality, which enables the recovery of optical tissue properties with a ``light-in-sound-out'' approach \cite{zackrisson_light_2014}. The key idea is to use the initial pressure distribution $p_0$ to determine physiological tissue properties like blood oxygenation $sO_2$. However, the non-linear effect of the light fluence makes the optical inverse problem ill-posed \cite{yang_deep_2020}. This can potentially lead to ambiguous solutions of the tissue properties. Prior work has addressed related problems with different approaches to uncertainty quantification \cite{tarvainen_bayesian_2013,tick_image_2016,grohl_confidence_2018,godefroy_solving_2020}, yet explicitly representing ambiguities by full posterior distributions has not been attempted in the context of machine learning-based image analysis.
In this work, we address this gap in the literature with conditional invertible neural networks (cINNs) \cite{ardizzone_guided_2019}. In contrast to conventional neural networks, the INN architecture enables the computation of the full posterior density function (rather than a simple point estimate), which naturally enables the encoding of various types of uncertainty, including multiple solutions (modes).
The contribution of this paper is two-fold: (1) We adapt the concept of cINNs to the specific problem of quantifying tissue parameters from PAI data. (2) We demonstrate the value of our approach with two use-cases, namely photoacoustic device design and optimization of photoacoustic image acquisition.
\section{Materials and methods}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{Figures/pose_example.png}
\caption{\textit{In silico} setting illustrating how slight changes in the PAI probe pose can resolve model ambiguity (training on S2b, see sec.~\ref{Experiments}). Left: the posterior corresponding to a pixel of interest features two modes. Right: Owing to an improved acquisition pose, the same pixel features a uni-modal posterior.}
\label{fig:multimodes}
\end{figure}
\subsection{Virtual photoacoustic imaging environment}
The virtual environment that was created for testing the proposed approach to uncertainty quantification is based on a digital PAI device. With it, 3D representations of the optical and acoustic properties of tissue can be generated, which are used to simulate synthetic PAI data for a given probe design, pose and ground truth tissue properties. The data is simulated using the Monte Carlo eXtreme (MCX) framework \cite{fang_monte_2009}. In the implementation for this study, each simulation is performed with $10^7 \ts {\rm photons}$ originating from a pencil-like source and a grid spacing of $0.34 \ts {\rm mm}$. Each volume is simulated at 26 equidistant wavelengths between $700 \ts {\rm nm}$ and $950 \ts {\rm nm}$.
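The photon-transport idea behind such Monte Carlo fluence simulations can be illustrated with a deliberately simplified one-dimensional toy. This is not the MCX implementation: the 1D geometry, the absorption and scattering coefficients, and the isotropic 1D scattering below are illustrative assumptions only.

```python
import math
import random

def simulate_fluence(mu_a, mu_s, depth_bins, bin_size, n_photons, seed=0):
    """Toy 1D Monte Carlo photon transport: step lengths are drawn from an
    exponential distribution (scattering events), and absorption is handled
    by Beer-Lambert weighting. Returns a per-depth-bin fluence proxy."""
    rng = random.Random(seed)
    depth = depth_bins * bin_size
    fluence = [0.0] * depth_bins
    for _ in range(n_photons):
        z, w, direction = 0.0, 1.0, 1.0   # start at the surface, moving inward
        while 0.0 <= z < depth and w > 1e-4:
            step = -math.log(1.0 - rng.random()) / mu_s  # free path to next scatter
            z_new = z + direction * step
            w *= math.exp(-mu_a * step)                  # absorption weighting
            # coarse deposition at the (clamped) midpoint of the step
            z_mid = min(max((z + z_new) / 2.0, 0.0), depth - 1e-9)
            fluence[int(z_mid // bin_size)] += w
            z = z_new
            direction = rng.choice([-1.0, 1.0])          # isotropic 1D scattering
    return [f / n_photons for f in fluence]

phi = simulate_fluence(mu_a=0.5, mu_s=10.0, depth_bins=10, bin_size=0.2,
                       n_photons=2000)
```

As expected from the fluence effect discussed above, the deposited weight decays with depth, which is exactly what makes the optical inverse problem depth-dependent.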
\subsection{Approach to uncertainty quantification} \label{INN}
Our architecture builds upon the cINN architecture proposed in \cite{ardizzone_guided_2019}.
cINNs transform an input $x$ (in our case blood oxygenation) given a conditioning input $y$ (in our case a single-pixel initial pressure spectrum) to a latent variable $z$. Maximum likelihood (ML) training ensures that $z$ is distributed according to a standard normal distribution. At inference time, because of the invertible architecture, we can sample the latent distribution and, given a conditioning input $y$, generate a conditional probability distribution $p(x|y)$.
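The exact invertibility that makes this posterior sampling possible can be seen in a minimal sketch of a single conditional affine coupling step. The GLOW coupling blocks used in cINNs employ learned subnetworks; the scalar linear maps and the fixed weights below are stand-ins chosen purely for illustration.

```python
import math
import random

def coupling_forward(x, y, w):
    """One conditional affine coupling step on x = (x1, x2): x1 passes through
    unchanged, while x2 is scaled and shifted by functions of (x1, y)."""
    x1, x2 = x
    s = math.tanh(w[0] * x1 + w[1] * y)   # log-scale (bounded for stability)
    t = w[2] * x1 + w[3] * y              # translation
    return (x1, x2 * math.exp(s) + t)

def coupling_inverse(z, y, w):
    """Exact inverse: s and t are recomputed from the untouched half z1 = x1
    and the conditioning input y, so no approximation is needed."""
    z1, z2 = z
    s = math.tanh(w[0] * z1 + w[1] * y)
    t = w[2] * z1 + w[3] * y
    return (z1, (z2 - t) * math.exp(-s))

rng = random.Random(1)
w = [rng.uniform(-1.0, 1.0) for _ in range(4)]  # stand-in for a learned subnetwork
x, y = (0.3, -1.2), 0.7                         # (input pair, conditioning input)
z = coupling_forward(x, y, w)
x_rec = coupling_inverse(z, y, w)
```

Because the pass-through half and the conditioning input are available on both sides, the scale and shift can be recomputed exactly, which is why the inverse reconstructs $x$ from $z$ without approximation.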
The specific architecture implemented in this work consists of 20 blocks, each of which comprises a random (but fixed) permutation and a conditional generative flow (GLOW) coupling block \cite{kingma_glow_2018} (two fully connected layers of size 512 and rectified linear unit (ReLU) activations). During training, we apply normally distributed random noise with $\sigma=0.001$ to the normalized input and $\sigma=0.1$ to the conditioning input. The models are trained for 60 epochs with the AdamW optimizer and weight decay of $0.01$. We start with a learning rate of $10^{-3}$ and reduce it by a factor of 10 after epoch 40 and 50.
In order to automate the detection of multi-modal posteriors we introduce a multi-mode score. We perform kernel density estimation on the posterior samples with 21 different bandwidths between $0.01 \ts {\rm p.p.}$ and $0.1 \ts {\rm p.p.}$. The score is then simply the fraction of estimates with more than one maximum relative to all estimates.
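The multi-mode score can be sketched in a few lines of pure Python. The grid resolution and the synthetic posterior samples below are illustrative assumptions; the exact kernel density estimator used for the results may differ in implementation details.

```python
import math

def kde(samples, grid, bw):
    """Gaussian kernel density estimate of `samples`, evaluated on `grid`."""
    c = 1.0 / (len(samples) * bw * math.sqrt(2.0 * math.pi))
    return [c * sum(math.exp(-0.5 * ((g - s) / bw) ** 2) for s in samples)
            for g in grid]

def count_maxima(density):
    return sum(1 for i in range(1, len(density) - 1)
               if density[i] > density[i - 1] and density[i] >= density[i + 1])

def multi_mode_score(samples, bandwidths, grid_points=200):
    """Fraction of bandwidths whose KDE shows more than one local maximum."""
    grid = [i / (grid_points - 1) for i in range(grid_points)]  # sO2 in [0, 1]
    multi = sum(1 for bw in bandwidths
                if count_maxima(kde(samples, grid, bw)) > 1)
    return multi / len(bandwidths)

bws = [0.01 + 0.0045 * i for i in range(21)]   # 21 bandwidths from 0.01 to 0.1
unimodal = [0.5 + 0.002 * (i % 5 - 2) for i in range(50)]
bimodal = ([0.2 + 0.002 * (i % 5 - 2) for i in range(25)] +
           [0.8 + 0.002 * (i % 5 - 2) for i in range(25)])
```

On the synthetic posteriors above, samples concentrated around a single oxygenation value yield a score of 0, while two well-separated clusters are flagged as multi-modal across essentially all bandwidths.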
\subsection{Experiments} \label{Experiments}
The purpose of our experiments was to (1) validate the proposed approach to uncertainty quantification in PAI and to (2) showcase use cases that leverage the posteriors to not only detect and quantify uncertainties but to compensate for them. To this end, we generated four different settings.
\begin{description}
\item[S1: Single vessel, single illumination unit (IU):] Images (probabilistically) generated for this setting comprise a tube of muscle tissue with $2 \ts {\rm cm}$ diameter as background with a blood oxygenation uniformly drawn between $0$ and $1$. In the center, a blood vessel with a radius uniformly drawn between $1 \ts {\rm mm}$ and $3 \ts {\rm mm}$ and oxygenation between 0 and 1 is placed. A single illumination source is used.
\item[S2: Multiple vessels, single IU:] The setting S1 is enhanced by introducing an additional blood vessel randomly placed between the light source and the central vessel of interest. This vessel also has a radius between $1 \ts {\rm mm}$ and $3 \ts {\rm mm}$ and an oxygenation between 0 and 1. Fig.~\ref{fig:multimodes} illustrates the basic setup of the phantoms.
\item[S2b: Multiple vessels, shifted single IU:] This setting is identical to S2, but the scene is illuminated from two additional angles ($\pm$45\textdegree). We used the three different illumination setups as independent samples, leading to a three times larger data set. This setting (S2b) was exclusively used to generate Fig.~\ref{fig:multimodes}, i.\,e.\ to demonstrate the effect of probe position on the resulting posterior.
\item[S3: Multiple vessels, multiple IUs:] The setting uses the same data as S2b, but we concatenate the three spectra from the different illumination setups (thus simulating a complex device with three illumination units/detectors), leading to a conditioning input dimension of $3\cdot26$. Fig.~\ref{fig:histo} gives an overview of the settings S1-S3.
\end{description}
We simulated 2,000 volumes for each of the settings and trained cINN models as described in sec.~\ref{INN} on each of them with 85\% of the data. The remaining 15\% of the data was used for testing.
To validate the accuracy of the posteriors, we computed the calibration curves for scenarios S1-S3 as proposed in \cite{ardizzone_analyzing_2019}.
We further processed the results corresponding to all settings to analyze the capability of our method to reveal ambiguous problems (represented by multiple modes) and to determine the effect of device pose and design.
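The calibration check can be sketched as follows: for each confidence level $q$, count how often the ground truth lies inside the central $q$-credible interval of the posterior samples. The Gaussian toy posteriors below are an assumption used only to exercise the code; a perfectly calibrated model should track the identity.

```python
import random

def central_interval(samples, q):
    """Central q-credible interval from empirical quantiles of the samples."""
    s = sorted(samples)
    lo = s[int((0.5 - q / 2.0) * (len(s) - 1))]
    hi = s[int(round((0.5 + q / 2.0) * (len(s) - 1)))]
    return lo, hi

def calibration_curve(posteriors, truths, levels):
    """For each confidence level q, the fraction of ground-truth values that
    fall inside the central q-interval of their posterior (ideal: equal to q)."""
    curve = []
    for q in levels:
        hits = 0
        for samples, x_true in zip(posteriors, truths):
            lo, hi = central_interval(samples, q)
            hits += int(lo <= x_true <= hi)
        curve.append(hits / len(truths))
    return curve

rng = random.Random(0)
posteriors, truths = [], []
for _ in range(400):  # calibrated toy: truth drawn from the same Gaussian as posterior
    mu = rng.uniform(0.2, 0.8)
    posteriors.append([rng.gauss(mu, 0.05) for _ in range(500)])
    truths.append(rng.gauss(mu, 0.05))
levels = [0.1 * k for k in range(1, 10)]
curve = calibration_curve(posteriors, truths, levels)
```

For the calibrated toy model, the observed fractions stay close to the nominal confidence levels, mirroring the near-identity curves reported for S1-S3.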
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Figures/histograms.png}
\caption{Worst, median and best case with respect to the IQR of S2 for three investigated settings. In contrast to a single vessel scenario (top), ambiguities are likely to occur in a multi-vessel scenario (middle) when using a single light source. These can be compensated for with multiple light sources (bottom).}
\label{fig:histo}
\end{figure}
\section{Results}
As can be seen in Fig.~\ref{fig:cal}, all calibration curves are very close to the identity (median calibration error < 1.5 p.p.). This implies that the width of our posteriors is reliable. For the setting with a single vessel and a single illumination unit (S1), the model is slightly underconfident.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Figures/Cal.png}
\caption{Calibration curves of the posterior distributions of the settings S1-S3 as described in sec.~\ref{Experiments}. Fraction of observations (left) and calibration error (right) as a function of confidence interval on the test set.}
\label{fig:cal}
\end{figure}
In Fig.~\ref{fig:violin} we compare the distribution of IQRs, absolute errors and the multi-mode score for the scenarios S1-S3 described in sec.~\ref{Experiments}. Our results demonstrate that not only the accuracy but also the likelihood for ambiguity of the problem depends crucially on the characteristics of the probe (e.g. number of illumination/detection units). For all three metrics, the performance for the setting with multiple vessels, but only one illumination (S2) is clearly the worst. In particular, this setting includes a non-negligible fraction of multi-modal posteriors.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{Figures/violin.png}
\caption{The violin plots show the interquartile range (IQR) of the posterior distribution on the test set, the absolute error when using the median as estimate and the multi-mode score, introduced in sec.~\ref{INN}. We differentiate between the settings S1-S3 as described in sec.~\ref{Experiments}.}
\label{fig:violin}
\end{figure}
Fig.~\ref{fig:multimodes} and Fig.~\ref{fig:histo} further show that the accuracy in a given pixel depends crucially on the pose and the illumination geometry of the PAI device. Moreover, ambiguity of the inverse problem can be potentially resolved by performing the acquisition from a different position/angle or by using a multiple illumination setting (S3). Our approach could thus serve as a basis for optimizing the measurement process and photoacoustic device design.
\section{Discussion}
To our knowledge, this is the first work exploring the concept of INNs in the context of PAI. Specifically, we have demonstrated the capabilities of cINNs to represent and quantify uncertainties in the context of physiological parameter estimation. Based on our initial experiments, we believe that our approach could also serve as a basis for optimizing PAI probe design and image acquisition.
With regard to device design, this work is similar to that proposed by Adler et al.~\cite{adler_uncertainty-aware_2019} in the context of multispectral optical imaging. However, our work differs in that we use cINNs instead of the original INN architecture, which comes with several major advantages, including (1) no zero-padding needed, leading to smaller network size, (2) maximum likelihood training and (3) no hyperparameters in the loss function.
Our findings with respect to device design are in line with Shao et al.~\cite{shao_estimating_2011} where a multi-illumination setup is suggested to improve image reconstruction. Our initial experiments indicate that our method may help in the optimization of the acquisition process. As a next step, our approach has to be extended such that it not only shows the current ambiguities, but also proposes possible poses to resolve them. This might be achieved through the application of reinforcement learning.
In conclusion, we have demonstrated the potential of cINNs to reconstruct tissue parameters from PAI data while systematically representing and quantifying uncertainties. Future work will focus on translating the work to a real setting.
\ack{This project has received funding from the European Union’s Horizon 2020 research and innovation programme through the ERC starting grant COMBIOSCOPY under grant agreement No. ERC2015-StG-37960. }
\bibliographystyle{bvm2020}
Wireless data caching plays an important role in maintaining the sustainability of future wireless networks by reducing the backhaul rate and the latency for retrieving content from networks without incurring any additional load on costly backhaul links\cite{R1-2,R1-3}. The core idea of caching is to bring content objects closer to the users by allowing the end terminals or helper nodes to cache a subset of popular content files locally.
\subsection{Prior Work}~\label{section:11}
The analysis of capacity scaling laws in large-scale wireless networks has attracted wide attention due to the dramatic growth of communication entities in today's networks. The pioneering work characterizing the capacity scaling law of static ad hoc networks having $n$ randomly distributed source--destination pairs in a unit network area was presented in~\cite{gupta}, in which the per-node throughput of $\Theta\left(\frac{1}{\sqrt{n \log n}}\right)$ was shown to be achievable using a nearest neighbor multihop transmission strategy. There have been further studies on multihop schemes in~\cite{R1-5,R1-6,R1-8}, where the per-node throughput scales far slower than $\Theta(1)$. In addition to the multihop schemes, there has been a steady push to improve the per-node throughput of wireless networks up to a constant scaling by using novel techniques such as networks with node mobility~\cite{algamal,grossglauser}, hierarchical cooperation~\cite{R1-9}, infrastructure support~\cite{R1-16,R1-17}, and directional antennas~\cite{R1-13,R1-15}.
In sharp contrast to the studies on ad hoc network modeling in which sources and destinations are given and fixed, investigating {\em content-centric ad hoc networks} would be quite challenging. As content objects are cached by numerous nodes over a network, finding the nearest content source of each request and scheduling between requests play a vital role in improving the overall network performance. The scaling behavior of content-centric ad hoc networks has received a lot of attention in the literature~\cite{alfano,as_law,jeon,R1-19}. In \cite{as_law,jeon}, throughput scaling laws were analyzed for {\em static} ad hoc networks using multihop communication, which yields a significant performance gain over the single-hop caching scenario~\cite{R1-3, R1-19}. More precisely, a decentralized and random cache allocation strategy along with a local multihop protocol was presented in~\cite{jeon}. A centralized and deterministic cache allocation strategy was presented in~\cite{as_law}, where replicas of each content object are statically determined based on the popularity of each content object. On the other hand, in {\em mobile} ad hoc networks, performance on the throughput and delay under a reshuffling mobility model was analyzed in~\cite{alfano}, where the position of each node is independently determined according to random walks (with an adjustable flight size) which is updated at the beginning of each time slot. It was shown in \cite{alfano} that increasing the mobility of nodes leads to worse performance. Performance on the throughput and delay under a correlated mobility model was investigated in~\cite{liu}, where nodes are partitioned into multiple clusters and the nodes belonging to the same cluster move in a correlated fashion. It was shown in \cite{liu} how correlated mobility affects the network performance. 
In addition, the optimal throughput--delay trade-off in mobile hybrid networks was studied in~\cite{anh} when each request is served by mobile nodes or static base stations (or helper nodes) via multihop transmissions. It was shown in \cite{anh} that highly popular content objects are mainly served by mobile nodes while the rest of the content objects are served by static base stations.
Recently, a different caching framework, termed coded caching \cite{Maddah1,Maddah2, lim, d2d}, has received a lot of attention in content-centric wireless networks. To achieve the global caching gain, the content placement (caching) phase was optimized so that several different demands can be supported simultaneously with a single coded {\em multicast} transmission. Another promising topic is applications of maximum distance separable (MDS)-coded caching, in which MDS-coded subpackets of content objects are stored in local caches and the requested content objects are retrieved using {\em unicast} transmission. It has been shown in~\cite{mds1,mds2, femto} that with some careful placement of MDS-encoded content objects, significant performance improvement can be attained over uncoded caching strategies.
\subsection{Main Contribution}~\label{section:12}
In this paper, we study the order-optimal throughput--delay trade-off in a large-scale content-centric mobile ad hoc network employing {\em subpacketization}, in which each node moves according to the {\em reshuffling mobility model}~\cite{alfano} and one central server has access to the whole file library. We assume a cache-enabled network in which time is divided into slots and each user requests a content object from the library independently at random according to a Zipf popularity distribution. The most distinctive feature of our model compared to previous approaches is that we consider the case when the {\em users' mobility is too fast} to complete the transmission of a content object within a single time slot. Our model is motivated by the growing number of applications involving on-demand high-resolution videos requested by mobile users in future wireless networks. To account for the short time slot duration, we cache each content object in multiple segments (subpackets) at the mobile nodes. We present two caching strategies, uncoded and MDS-coded caching. The main technical contributions of this paper are summarized as follows:
\begin{itemize}
\item We first present a large-scale cache-enabled mobile network framework where the size of each content object is considerably large and thus only a subpacket of a file can be delivered during one time slot.
\item We characterize fundamental trade-offs between throughput and delay for our content-centric mobile network for both uncoded sequential reception and the MDS-coded random reception cases under the reshuffling mobility model.
\item We formulate optimal cache allocation problems (i.e., the optimal content replication strategies) for both uncoded and MDS-coded caching scenarios and characterize the order-optimal solution using Lagrangian optimization.
\item We analyze the order-optimal throughput--delay trade-off for both uncoded and MDS-coded cases and identify different operating regimes with respect to the transmission range and the number of subpackets.
\item We intensively validate our analysis by numerical evaluations including the order-optimal solution to the cache allocation problem and the throughput--delay trade-off.
\item We identify the case where the performance difference between the uncoded and MDS-coded caching strategies becomes prominent with respect to system parameters including the Zipf exponent and the number of subpackets in a content object.
\item We extend our study to another scenario where each node moves according to the random walk mobility model.
\end{itemize}
The main motivation of this work is to alleviate the problematic case in which a network of fast-moving entities cannot be served by the central server, or can only be served in a way that is not cost-effective. For such cases, the idea is to use the cache-aided users as a distributed server for content distribution. This method essentially increases the capacity of the network by exploiting the storage of each mobile node without deploying any additional expensive infrastructure. Under our proposed content-centric network, in addition to the caching gain, we are also able to improve the overall throughput and delay performance since multiple device-to-device (D2D) communications are allowed in a single time slot. This paper is the first attempt to study large-scale content-centric ad hoc networks under a fast mobility model where subpacketization is employed, and thus sheds light on designing a caching framework for such mobility scenarios.
\subsection{Organization}~\label{section:13}
The rest of this paper is organized as follows. In Section II, some prerequisites and the system model are presented. In Section III, the content delivery protocol and reception strategies are presented. In Section IV, the fundamental throughput--delay trade-off is introduced and specialized in terms of scaling laws. The order-optimal throughput--delay trade-offs are derived by introducing the uncoded caching and MDS-coded caching strategies in Sections V and VI, respectively. In Section VII, numerical evaluations are shown to validate our analysis. In Section VIII, our study is extended to the random walk mobility model. Finally, Section IX summarizes the paper with some concluding remarks.
\subsection{Notations}~\label{section:14}
Throughout this paper, $\mathbb{E}[\cdot]$ is the expectation. Unless otherwise stated, all logarithms are assumed to be to the base 2. We use the following asymptotic notation: i) $f(x)= O(g(x))$ means that there exist constants $a$ and $c$ such that $f(x) \leq ag(x)$ for all $x > c$, ii) $f(x) = o(g(x))$ means that $\lim_{x \rightarrow \infty }\frac{f(x)}{g(x)} =0 $, iii) $f(x)=\Omega(g(x))$ if $g(x)= O(f(x))$, iv) $f(x) = \omega(g(x))$ means that $\lim_{x \rightarrow \infty }\frac{g(x)}{f(x)} =0 $, v) $f(x)=\Theta(g(x))$ if $f(x)= O(g(x))$ and $f(x)=\Omega(g(x))$~\cite{bigo}.
\section{Prerequisite and System Model}~\label{section:2}
\subsection{Overview of MDS Coding}~\label{section:21}
Linear coding is among the most popular coding techniques due to its simplicity and performance. The linear coding operation can be summarized as follows. We divide a content file $m$ into $K$ (uncoded) subpackets $\left\{F^{(u)}_{m,1}, F^{(u)}_{m,2}, \cdots, F^{(u)}_{m,K}\right\}$ and transmit them by linearly combining the subpackets with respect to an encoding vector $v = \left\{a_1, \cdots, a_K\right\}$, which is generated over a Galois field $GF(q)$ of size $q$~\cite{gf}. Each encoded subpacket is generated by
\begin{equation}\label{eq:enc}
\mathcal{E}_v =\sum_{j=1}^K a_{j} F^{(u)}_{m,j},
\end{equation}
\noindent where $\mathcal{E}_v$ is the encoded subpacket corresponding to the encoding vector $v$ and $a_j$ is the encoding coefficient for the $j$th subpacket. In \eqref{eq:enc}, the addition and multiplication operations are performed over $GF(q)$. In this work, we consider a special class of linear codes, namely MDS codes~\cite{mds}. Assume that a content file $m$ is divided into $K$ subpackets that are encoded into $r_m$ coded subpackets $\left\{F^{(c)}_{m,1}, \cdots, F^{(c)}_{m,r_m}\right\}$ using a $q$-ary $(r_m,K)$ MDS code. Then, by the property of MDS codes, the reception of any $K$ of the $r_m$ MDS-coded subpackets is sufficient to recover the complete file.
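As a concrete illustration of the MDS property described above, the following Python sketch encodes $K$ subpackets into $r_m$ coded subpackets with a Vandermonde generator over a prime field and recovers the file from an arbitrary subset of $K$ of them. The field size $q=257$, the subpacket values, and the received subset are illustrative assumptions, not part of the system model.

```python
# Sketch of (r, K) MDS coding over a prime field GF(q), as in Section II-A.
# A Vandermonde generator guarantees the MDS property: any K of the r coded
# subpackets suffice to recover the K originals. q = 257 is an illustrative
# prime, so GF(q) arithmetic is plain modular arithmetic.

Q = 257  # illustrative prime field size

def mds_encode(subpackets, r):
    """Encode K subpackets into r coded subpackets: E_v = sum_j a_j F_j,
    with encoding coefficients a_j = v^j taken from a Vandermonde matrix."""
    return [sum(pow(v, j, Q) * f for j, f in enumerate(subpackets)) % Q
            for v in range(1, r + 1)]

def mds_decode(coded, K):
    """Recover the K originals from any K coded subpackets via Gauss-Jordan
    elimination mod Q. `coded` is a list of (v, value) pairs."""
    A = [[pow(v, j, Q) for j in range(K)] + [val] for v, val in coded[:K]]
    for col in range(K):
        piv = next(i for i in range(col, K) if A[i][col])   # pivot search
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], Q - 2, Q)                    # Fermat inverse
        A[col] = [x * inv % Q for x in A[col]]
        for i in range(K):                                  # eliminate column
            if i != col and A[i][col]:
                A[i] = [(x - A[i][col] * y) % Q for x, y in zip(A[i], A[col])]
    return [row[K] for row in A]                            # identity => solution

file_m = [17, 42, 99, 3]                     # K = 4 uncoded subpackets
coded = mds_encode(file_m, r=7)              # r_m = 7 MDS-coded subpackets
received = [(v, coded[v - 1]) for v in (2, 5, 6, 7)]   # any K of the 7
assert mds_decode(received, K=4) == file_m
```

Any $K$ of the $r_m$ coded subpackets yield a nonsingular Vandermonde system over $GF(q)$, which is exactly the defining property of an $(r_m,K)$ MDS code.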
\subsection{System Model}~\label{section:22}
Let us consider a content-centric mobile ad hoc network consisting of $n$ mobile nodes and one central server, where the $n$ mobile nodes are distributed uniformly at random over a network of unit area (i.e., a dense network) and the central server has access to the entire library of size $M=\Theta(n^\beta)$ via an infinite-speed backhaul, where $0<\beta<1$. Time is divided into independent slots $t_1, t_2,\cdots$, and each mobile node is allowed to initiate a request during its allocated time slot. In our network model, the end nodes are assumed to prefetch some of the (popular) content objects into their local caches from the central server while they are indoors. For example, during off-peak times, the central server can initiate the content placement phase and fill the cache of each node. On the other hand, for the case when the actual requests take place, we confine our attention to an outdoor environment where nodes are moving fast. In such cases, since file reception from the central server may not be cost-effective, only D2D communications are utilized for content delivery (e.g., \cite{R1-3}), i.e., the central server does not participate in the delivery phase.
We first adopt the reshuffling model~\cite{alfano} for the nodes' mobility pattern, which assumes that each mobile node changes its position uniformly at random over the network area at the start of each time slot and remains static during the slot. In our content-centric mobile network, each node generates requests for content objects in the library during its allocated time slot. Following the approaches in~\cite{R1-3, alfano, as_law, jeon, R1-19, liu, d2d}, we assume that the size of each content object $m \in \mathcal{M}$ is the same, where $\mathcal{M}= \left\{1, \cdots, M\right\}$. We assume that every node requests its content object independently according to the Zipf distribution~\cite{alfano,anh,milad,zipf}
\begin{equation}\label{eq:zipf}
p^{pop}_m = \frac{m^{-\alpha}}{H_{\alpha}(M)},
\end{equation}
\noindent where $\alpha>0$ is the Zipf exponent, and $H_{\alpha}(M) = \sum_{i=1}^{M} i^{-\alpha}$ is a normalization constant (a partial sum of the Riemann zeta series), which scales as
\begin{equation} \label{eq:H}
H_{\alpha}(M) = \begin{cases}
\Theta\left(1 \right) & \alpha > 1 \\
\Theta\left(\log M \right) & \alpha =1 \\
\Theta\left( M^{1-\alpha} \right) & \alpha <1.
\end{cases}
\end{equation}
The main theme of our study is to understand how to deal with incomplete file transmissions in a mobile network. For example, a user watching a high-resolution video on a mobile device may move away from a source node before the file has been completely transmitted. Simply expanding the time slot to ``fit'' the throughput of the user may not be feasible in such a case since the user is physically moving away, resulting in a lost connection. Our goal is to design strategies that are robust against such situations using the concept of {\em subpacketization} and to analyze their performance. Hence, we assume that each content object is divided into $K=\Theta(n^\gamma)$ subpackets, where $0\!<\!\gamma\!<\!1$ and every subpacket has the same (unit) size such that each requesting node is able to completely download one subpacket from one of its nearest source nodes in one time slot. In a content-centric network, each node is equipped with a local cache to store content objects, and in our work, we assume a practical scenario where each node is equipped with a local cache of the same finite storage capacity $S\!=\!\Theta(K)$, i.e., the cache can store $S$ distinct subpackets\footnote{Our problem formulation can be extended to a more general case having heterogeneous cache sizes by replacing the total caching constraints in~\eqref{eq:Cons1} and~\eqref{eq:Cons1c} by $\sum_{m=1}^M KX_m \le \sum_{i=1}^n S_i$ and $\sum_{m=1}^M r_m \le \sum_{i=1}^n S_i$, respectively, where $S_i$ is the storage capacity of node $i$. The general problems can be solved by following the same lines as those in Sections \ref{section:5} and \ref{section:6}.}. In cache-enabled wireless networks, content delivery can be divided into two stages: the content placement phase and the content delivery phase.
We first describe the placement phase for both uncoded and MDS-coded caching scenarios, which determines the strategy for caching subpackets of content objects in the storage of $n$ nodes.\\
\textbf{Content placement phase for uncoded caching:} Let $X_{m,i}$, $m\in\mathcal{M}$, $i\in\{1,\cdots, K\}:=[1:K]$, denote the number of replicas (to be optimized later based on the popularity of content $m$) of subpacket $i$ of content $m$. Since we will assume that $X_{m,i}$ is the same for all $i \in [1:K]$, we henceforth drop the index $i$ and denote $X_{m,i}$ by $X_{m}$. As in \cite{as_law, alfano}, during the caching phase, the $X_{m}$ replicas of subpacket $i$ of content object $m$ are stored in the caches of $X_{m}$ distinct nodes. In order to have a feasible cache allocation strategy, $\left\{X_{m}\right\}_{m=1}^M$ should satisfy the following constraints:
\begin{align}
\sum_{m=1}^MKX_{m} &\leq Sn, \label{eq:Cons1}\\
1 \leq X_{m} &\leq n. \label{eq:Cons2b}
\end{align}
\noindent Note that the total caching constraint in \eqref{eq:Cons1} is a relaxed version of the individual caching constraints~\cite{milad}, and the constraint in \eqref{eq:Cons2b} ensures that the network contains at least $1$ and at most $n$ copies of each subpacket of each content object.\\
\textbf{Content placement phase for MDS-coded caching:}
For the MDS-coded caching strategy, instead of replicating the subpackets, we encode the $K$ subpackets of each content $m$ into $r_m$ MDS-coded subpackets (where $r_m$ will be optimized later). During the caching phase, the $r_{m}$ encoded subpackets of content object $m$ are stored in the caches of $r_{m}$ distinct nodes. By the property of MDS codes, a node requesting content $m$ only needs to collect and decode any $K$ out of the $r_m$ distinct MDS-coded subpackets to successfully recover the entire content $m$. In order to have a feasible cache allocation strategy, $\left\{r_{m}\right\}_{m=1}^M$ should satisfy the following constraints:
\begin{align}
\sum_{m=1}^Mr_{m} &\leq Sn, \label{eq:Cons1c} \\
r_{m} &\geq K. \label{eq:Cons2cb}
\end{align}
\noindent Note that the constraint in \eqref{eq:Cons2cb} ensures that, for each content $m \in \mathcal{M}$, there exist at least $K$ MDS-coded subpackets in the network so that the content can be recovered by a requesting node via MDS decoding.
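The feasibility conditions \eqref{eq:Cons1}--\eqref{eq:Cons2b} and \eqref{eq:Cons1c}--\eqref{eq:Cons2cb} can be summarized in a short Python sketch; the allocation values below are illustrative and do not correspond to the optimized allocations derived later.

```python
# Sketch of feasibility checks for the cache allocation constraints in
# Section II-B: the total caching budget S*n and the per-content bounds.
# All parameter values are illustrative.

def feasible_uncoded(X, K, S, n):
    """Uncoded: X[m] replicas of each of the K subpackets of content m,
    subject to sum_m K*X_m <= S*n and 1 <= X_m <= n."""
    return sum(K * x for x in X) <= S * n and all(1 <= x <= n for x in X)

def feasible_mds(r, K, S, n):
    """MDS-coded: r[m] coded subpackets of content m, subject to
    sum_m r_m <= S*n and r_m >= K (so decoding is always possible)."""
    return sum(r) <= S * n and all(rm >= K for rm in r)

n, K, S = 1000, 8, 8            # S = Theta(K): each cache holds S subpackets
X = [600, 250, 100, 50]         # more replicas for more popular content
r = [8 * x for x in X]          # same budget spent on coded subpackets
assert feasible_uncoded(X, K, S, n) and feasible_mds(r, K, S, n)
```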
We now move on to the delivery phase, which allows the requested content objects to be delivered from source nodes to requesting nodes over wireless channels (i.e., D2D communications), possibly during peak times. As addressed before, content is assumed to be retrieved in an outdoor environment in which the nodes do not have a reliable connection with the central server due to the fast mobility condition. In the delivery phase, each node downloads its requested content object via {\em single-hop} transmission in its allocated time slots\footnote{We note that under the reshuffling mobility model, the network performance cannot be improved by delivering content over multihop routes~\cite[Section III]{alfano}.}, from one of the nodes storing the requested content object in their caches. The protocol model in~\cite{gupta} is adopted for successful content transmission. According to the protocol model, the content delivery from source node $s$ to requesting node $d$ will be successful if the following conditions hold: 1) $d_{sd}(t_a)\!\leq R$ and 2) $d_{bd}(t_a)\geq(1+\Delta)R$, where $d_{sd}(t_a)$ denotes the Euclidean distance between nodes $s$ and $d$ at a given time slot $t_a$, $d_{bd}(t_a)$ denotes the distance between nodes $b$ and $d$ for every node $b$ that is simultaneously transmitting at time slot $t_a$, $\Delta>0$ is a guard factor, and $R>0$ is the transmission range of each node. We assume $R=\Omega\left(\sqrt{\log n/n}\right)$ and $R=O(1)$, such that each square cell of area $a(n)=\!R^2$ contains at least one node with high probability (whp) (see~\cite{gupta} for details).
When successful transmission occurs, we assume that the total amount of data transferred during the slot is large enough to transfer one subpacket (either uncoded or MDS-coded) of a content from the sender to the receiver\footnote{Unlike our setup, the work in~\cite{algamal} adopted the fluid model to achieve improved performance as the multihop communications become feasible during each slot.}. Nevertheless, in a given time slot, a requesting node can receive no more than one subpacket. Thus, for a requesting node to successfully receive the entire content file, at least $K$ time slots are required. Note that by properly setting the parameter $K$, the size of each subpacket can be flexibly adjusted so that one subpacket would be transmitted or retrieved in one time slot when the above conditions in the protocol model hold.
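To build intuition for the delivery phase, the following Monte-Carlo sketch estimates the per-slot contact probability under the reshuffling model, approximating a successful contact as a replica holder landing in the requesting node's cell of area $a(n)=R^2$ and ignoring interference; all parameters are illustrative.

```python
# Minimal Monte-Carlo sketch of the per-slot contact probability under the
# reshuffling mobility model: every slot all nodes take fresh uniform
# positions, and a request succeeds if some replica holder lands in the
# requester's cell of area a(n) = R^2. The cell abstraction (and ignoring
# the guard-zone interference condition) is an illustrative simplification.
import random

def contact_prob(X, R, trials=50_000, seed=1):
    """Empirical probability that at least one of X replica holders falls
    into the requesting node's R x R cell in a slot."""
    rng = random.Random(seed)
    cells = round(1 / R)          # cells per side (assumes 1/R is an integer)
    hits = 0
    for _ in range(trials):
        req = (rng.randrange(cells), rng.randrange(cells))
        if any((rng.randrange(cells), rng.randrange(cells)) == req
               for _ in range(X)):
            hits += 1
    return hits / trials

X, R = 20, 0.1                    # 20 replicas, transmission range 0.1
emp = contact_prob(X, R)
theory = 1 - (1 - R ** 2) ** X    # each holder hits the cell w.p. a(n) = R^2
assert abs(emp - theory) < 0.01
```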
\subsection{Performance Metrics}~\label{section:23}
In this subsection, we define performance metrics used throughout our paper. We define a {\em scheme} as a sequence of policies, which determines the transmission scheduling in each time slot as well as the cache allocations for all nodes. For a given scheme, the average content transfer delay $D_{avg}(n)$ (expressed in time slots) and the per-node throughput $\lambda(n)$ (expressed in content/slot) for a content-centric mobile ad hoc network are defined as follows.
\begin{definition}[Average Content Transfer Delay $D_{avg}(n)$]~\label{df:davg} Let $D(j,i)$ denote the transfer delay of the $i$th request for any content object by node $j$, which is measured from the moment that the requesting message leaves the requesting node until all the $K$ corresponding subpackets of the content object arrive at the node from the source nodes. Then, the delay over all the content requests for node $j$ is $\limsup_{z\rightarrow\infty}\frac{1}{z} \sum_{i=1}^z D(j,i)$ for a particular realization of the network. In this case, the average content transfer delay $D_{avg}(n)$ of all nodes is defined as
\begin{equation} \label{eq:ddavg}
D_{avg}(n) \overset{\Delta}{=} \mathbb{E}\left[ \frac{1}{n}\sum_{j=1}^n \limsup_{z\rightarrow\infty}\frac{1}{z} \sum_{i=1}^z D(j,i)\right],
\end{equation}
where the expectation is over all network realizations.
\end{definition}
\begin{definition}[Per-Node Throughput $\lambda(n)$]~\label{df:pnt} Let $T(j,\tau)$ denote the total number of requested content objects received by node $j$ during $\tau$ time slots. Note that this could be a random quantity for a given network realization. Then, the per-node throughput $\lambda(n)$ in our cache-enabled mobile network is
\begin{equation} \label{eq:dpnt}
\lambda(n) \overset{\Delta}{=} \mathbb{E}\left[\frac{1}{n}\sum_{j=1}^n \liminf_{\tau\rightarrow\infty}\frac{1}{\tau} T(j,\tau)\right],
\end{equation}
where the expectation is over all network realizations.
\end{definition}
\section{Content Delivery Protocol and Reception Strategies}~\label{section:3}
In this section, we describe the protocol for the content delivery along with the file reception strategies for both uncoded and MDS-coded caching.
\subsection{Content Delivery}~\label{section:31}
In the following, we explain the strategy for content delivery. First, each node generates a request for a subpacket (either uncoded or MDS-coded) of content $m$ according to the Zipf popularity distribution. If the requesting node finds a potential source node within single-hop range, i.e., within a radius of $R$, then it starts retrieving its desired content. Otherwise, it waits until it finds an available source node for the request. Next, it generates requests for the remaining subpackets of content $m$ until it successfully receives $K$ distinct subpackets. Finally, it generates a request for a new content object by following the same procedure as described above.
\subsection{Reception Strategies}~\label{section:32}
In this subsection, we explain the sequential and random content reception strategies for uncoded and MDS-coded caching, respectively. These content reception strategies represent the sequence in which the $K$ subpackets of a desired content object are delivered to the requesting node.
\subsubsection{Sequential Reception of Uncoded Content}~\label{section:321}
We first explain the sequential reception strategy for the uncoded caching case. In the uncoded case, the reception strategy is sequential; that is, all the $K$ subpackets of a content object are delivered in sequence to the requesting node. Fig.~\ref{fig:seq_rec} illustrates the sequential reception strategy for three representative cases. In Fig.~\ref{fig:seq_rec_a}, at time slot $t_a$, a node requests the subpacket $F^{(u)}_{m,1}$ of content $m$ from the nodes within its transmission range $R$. The request is responded to within a time slot if there exists a source node that has $F^{(u)}_{m,1}$ in its cache and falls within the transmission range of the requesting node at time slot $t_a$. In Fig.~\ref{fig:seq_rec_b}, the requesting node requests the subpacket $F^{(u)}_{m,2}$ of content $m$ at time slot $t_b$ and fails to find any source node within its transmission range. Thus, the requesting node will wait, irrespective of the fact that there is a source node within its transmission range storing the subpacket $F^{(u)}_{m,3}$. In Fig.~\ref{fig:seq_rec_c}, the requesting node is still looking for the subpacket $F^{(u)}_{m,2}$, and the request is responded to by a source node that has $F^{(u)}_{m,2}$ in its cache and falls within the transmission range of the requesting node at time slot $t_c$.
\begin{figure}[t]
\centering
\subfigure[Time Slot $t_a$]{ \includegraphics[ height=2.7cm, width= 2.6cm]{Seq_reception_a1.eps} \label{fig:seq_rec_a}}
\subfigure[Time Slot $t_b$]{ \includegraphics[ height=2.7cm, width= 2.6cm]{Seq_reception_b1.eps} \label{fig:seq_rec_b}}
\subfigure[Time Slot $t_c$]{ \includegraphics[ height=2.7cm, width= 2.6cm]{Seq_reception_c1.eps} \label{fig:seq_rec_c}}
\caption{ Content delivery following the sequential reception strategy for an uncoded content, where $F^{(u)}_{m,i}$ is the $i$th uncoded subpacket of content $m$, $Y_m$ is the set of received content's subpackets, and $W_m$ is the set of required subpackets of content $m$.}
\label{fig:seq_rec}
\end{figure}
\subsubsection{Random Reception of MDS-coded Content}~\label{section:322}
In the MDS-coded caching case, file reception is random; that is, the requesting node may receive any $K$ out of the $r_m$ MDS-coded subpackets of a content object in an arbitrary order. Figure~\ref{fig:ran_rec} illustrates the random reception strategy. In Fig.~\ref{fig:ran_rec_a}, at time slot $t_a$, a node requests the subpackets of content $m$ from the nodes within its transmission range $R$. The request is responded to within one time slot by a source node that has $F^{(c)}_{m,2}$ in its cache and falls within the transmission range of the requesting node at time slot $t_a$. In Fig.~\ref{fig:ran_rec_b}, the node requests the remaining subpackets of content $m$, and the request is responded to by a source node that has $F^{(c)}_{m,3}$ in its cache and falls within the transmission range of the requesting node at time slot $t_b$.
\begin{figure}[t]
\centering
\subfigure[Time Slot $t_a$ ] { \includegraphics[height=3.8cm]{Ran_reception_a1.eps} \label{fig:ran_rec_a} }
\subfigure[Time Slot $t_b$ ] { \includegraphics[height=3.8cm]{Ran_reception_b1.eps} \label{fig:ran_rec_b} }
\caption{ Content delivery following the random reception strategy for an MDS-coded content, where $F^{(c)}_{m,j}$ is the $j$th MDS-coded subpacket of content $m$, $C_m$ is the set of all the MDS-coded subpackets of content $m$, $Y_m$ is the set of received content's MDS-coded subpackets, and $W_m$ is the set of required MDS-coded subpackets of content $m$.}
\label{fig:ran_rec}
\end{figure}
Intuitively, the random reception strategy should perform better than the sequential reception strategy. Nonetheless, both schemes play important roles in caching for different applications in practice. For example, the random reception strategy is suitable for the case where content such as videos and documents is first downloaded completely and then viewed offline. On the other hand, for the case where a user is streaming videos online, the random reception strategy will not work since the content must be downloaded sequentially.
We note that a playback buffer~\cite{buffer} that stores a few future subpackets could enhance the quality of a video streaming service, as it enables the next portion of the video, e.g., the $j$th subpacket $F_{m,j}^{(u)}$ of content $m\in\mathcal{M}$, to be played before a requesting user reaches the end of the current subpacket of the video (i.e., $F_{m,j-1}^{(u)}$). In this paper, we will not account for how such playback buffers are deployed and how the content is updated within the playback buffer, which goes beyond our scope\footnote{As long as the playback buffer has limited capacity that is independent of the scaling system parameters, content delivery for online on-demand video streaming would only be possible by sequential reception, which is consistent with current video streaming protocols as discussed in~\cite{R1-19, R1-3}. This is because buffering the subpackets in an arbitrary way does not guarantee seamless video streaming, as the buffers may not contain the next sequential portion of the video that a requesting node is watching.}.
We will see in the next section that these reception strategies play key roles in defining the average content transfer delay $D_{avg}(n)$.
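To preview the effect of the two reception strategies on the delay, the following Python sketch compares the expected number of slots needed to collect $K$ subpackets under a simple geometric contact model (an assumption made here for illustration), for a common cache budget.

```python
# Sketch comparing the two reception strategies by expected delay, assuming
# each per-slot contact succeeds independently with a geometric contact
# model: an uncoded requester needs one specific subpacket (X holders help),
# while an MDS-coded requester that already holds j subpackets is helped by
# any of the r - j remaining ones. a(n), K, X below are illustrative.

def delay_sequential(K, X, a):
    """Expected slots to fetch K uncoded subpackets in order."""
    p = 1 - (1 - a) ** X          # per-slot contact probability
    return K / p                  # K geometric waits with mean 1/p each

def delay_random(K, r, a):
    """Expected slots to collect any K of r MDS-coded subpackets."""
    return sum(1 / (1 - (1 - a) ** (r - j)) for j in range(K))

a, K, X = 0.01, 10, 20
seq = delay_sequential(K, X, a)   # ~ 54.9 slots
ran = delay_random(K, r=K * X, a=a)  # ~ 11.6 slots (same cache budget)
assert ran < seq                  # random reception of coded content wins
```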
\section{Throughput--Delay Trade-off}~\label{section:4}
In this section, we characterize a fundamental throughput--delay trade-off in terms of scaling laws for the content-centric mobile network using the proposed content delivery protocol.
\begin{theorem}\label{th:TD}
Consider nodes generating requests according to the content delivery protocol in Section \ref{section:31}. Then, the throughput--delay trade-off in our proposed cache-enabled mobile network is given by
\begin{align} \label{eq:tdo}
\lambda(n) = \Theta\left(\frac{1}{na(n) D_{avg}(n)}\right),
\end{align}
where $\lambda(n)$ is the per-node throughput, $D_{avg}(n)$ is the average content transfer delay, and $a(n)=R^2$ is the area in which a node can communicate with other nodes.
\end{theorem}
\begin{proof}
The fundamental throughput--delay trade-off for the content-centric network employing the proposed content delivery protocol in Section~\ref{section:31} can be established using the elementary renewal theorem~\cite[Chapter 8]{renewal}. Let $\kappa(\tau, j)$ denote the total number of content objects transferred to requesting node $j$ observed up to $\tau$ time slots when node $j$ is assumed to be an active requester in every time slot. Then, from the fact that the transfer delay $D(j,i)$ of the $i$th request for any content object by node $j$ represents the inter-arrival time, it follows whp that
\begin{align*}
\frac{1}{n}\sum_{j=1}^n \lim_{\tau\rightarrow\infty} \frac{\kappa(\tau,j)}{\tau} = \frac{1}{D_{avg}(n)},
\end{align*}
where $D_{avg}(n)$ is the average content transfer delay over all nodes in \eqref{eq:ddavg}. Since only one node in the transmission range of area $a(n)$ can be active in each time slot, the achievable per-node throughput in \eqref{eq:dpnt} is then expressed as \eqref{eq:tdo}, which completes the proof of Theorem~\ref{th:TD}.
\end{proof}
Theorem~\ref{th:TD} implies that the per-node throughput $\lambda(n)$ can be characterized for a given average content transfer delay $D_{avg}(n)$, or vice versa. Hence, we focus on minimizing $D_{avg}(n)$, which is equivalent to maximizing $\lambda(n)$ for a given $a(n)$. We establish the following lemma, which formulates the average content transfer delay $D_{avg}(n)$ for both the uncoded sequential reception in Section~\ref{section:321} and the MDS-coded random reception in Section~\ref{section:322}.
\begin{lemma}\label{le:otd}
Consider a content-centric mobile network with nodes retrieving their requests according to the content delivery protocol in Section~\ref{section:31}. Given a cache allocation strategy, the average content transfer delay $D_{avg}(n)$ for the uncoded caching case employing the sequential reception strategy in Section~\ref{section:321} is given by
\begin{align} \label{eq:davg_s}
D_{avg}(n) = \Theta \left( \sum_{m=1}^M \frac{K p^{pop}_m}{\min\left( 1, a(n) X_{m} \right)} \right)
\end{align}
\noindent and $D_{avg}(n)$ for the MDS-coded caching case employing the random reception strategy in Section~\ref{section:322} is given by
\begin{align} \label{eq:davg_c}
D_{avg}(n) = \Theta \left( \sum_{m=1}^M \sum_{j=0}^{K-1} \frac{p^{pop}_m}{\min\left( 1, (r_m-j) a(n) \right)} \right).
\end{align}
\end{lemma}
\begin{proof} First, consider the uncoded caching case employing the sequential reception strategy. Given a cache allocation strategy $\left\{X_{m}\right\}_{m=1}^M$, for any requesting mobile node, the transfer delay associated with the $i$th subpacket of content $m\in \mathcal{M}$ is given by the number of time slots it takes for the node to come into contact with another node storing the desired subpacket, which is geometrically distributed with mean $1/p^{seq}_{m,i}$. Here, $p^{seq}_{m,i}$ is the contact probability that a node requesting the $i$th subpacket of content $m \in \mathcal{M}$ falls, in a given time slot, within distance $R$ of a node holding the requested subpacket, which is given by
\begin{align} \label{eq:pmi}
p^{seq}_{m,i}= 1- \left( 1- a(n)\right)^{X_{m}} ~~~ i\in\left[ 1:K\right].
\end{align}
The contact probability $p^{seq}_{m,i}$ is, in order sense, equivalent to $\Theta\left( \min\left( 1, a(n) X_{m} \right)\right)$. Then, the number of time slots required to successfully receive content object $m\in \mathcal{M}$ consisting of $K$ subpackets is given by $\Theta \left(\frac{K}{\min\left( 1, a(n) X_{m} \right)} \right)$. From the fact that each node generates its requests following the same Zipf law, $D_{avg}(n)$ for the content-centric mobile network employing the sequential reception of uncoded content is given by
\begin{equation*} \label{eq:davg_S}
D_{avg}(n) = \Theta \left( \sum_{m=1}^M \frac{Kp^{pop}_m}{\min\left( 1, a(n) X_{m} \right)}\right).
\end{equation*}
Next, we characterize the average content transfer delay $D_{avg}(n)$ for the case of MDS-coded caching employing the random reception strategy. Given a cache allocation strategy $\left\{r_{m}\right\}_{m=1}^M$, the contact probability $p^{ran}_{m,j}$ for the MDS-coded random reception strategy is the probability that a node having pending requests for $K-j$ MDS-coded subpackets of content $m$ falls, in a given time slot, within distance $R$ of a node holding one of the requested MDS-coded subpackets, given that the requesting node has already received $j$ MDS-coded subpackets. Then, $p^{ran}_{m,j}$ is given by
\begin{equation} \label{eq:pmjc}
p^{ran}_{m,j} = 1- \left( 1- a(n)\right)^{(r_m-j)} ~~~j \in\left[ 0:K-1\right].
\end{equation}
The contact probability $p^{ran}_{m,j}$ is, in order sense, equivalent to $\Theta\left(\min\left( 1, (r_m-j) a(n) \right)\right)$. Then, the expected number of time slots required to successfully receive content object $m\in \mathcal{M}$ consisting of $K$ MDS-coded subpackets is given by $\Theta \left( \sum_{j=0}^{K-1} \frac{1}{\min\left( 1, (r_m-j) a(n) \right)} \right)$. Thus, $D_{avg}(n)$ for the content-centric mobile network employing the random reception of MDS-coded content is given by
\begin{equation*} \label{eq:davg_C}
D_{avg}(n) = \Theta \left( \sum_{m=1}^M \sum_{j=0}^{K-1} \frac{p^{pop}_m}{\min\left( 1, (r_m-j) a(n) \right)} \right).
\end{equation*}
This completes the proof of the lemma.
\end{proof}
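The order expressions \eqref{eq:davg_s} and \eqref{eq:davg_c} of Lemma~\ref{le:otd} can be evaluated directly; the following Python sketch does so for an illustrative uniform allocation under a common cache budget (the optimal allocations are derived in Sections~\ref{section:5} and~\ref{section:6}).

```python
# Numerical sketch of the order expressions for D_avg(n) in Lemma 2, for a
# common cache budget. The uniform allocations below are illustrative only;
# the optimized allocations are derived later in the paper.

def davg_uncoded(p_pop, X, K, a):
    """Order expression for uncoded caching with sequential reception."""
    return sum(K * p / min(1, a * X[m]) for m, p in enumerate(p_pop))

def davg_mds(p_pop, r, K, a):
    """Order expression for MDS-coded caching with random reception."""
    return sum(p * sum(1 / min(1, (r[m] - j) * a) for j in range(K))
               for m, p in enumerate(p_pop))

M, K, a, alpha = 50, 10, 0.01, 1.2
H = sum(i ** -alpha for i in range(1, M + 1))
p_pop = [m ** -alpha / H for m in range(1, M + 1)]   # Zipf popularity
X = [20] * M              # 20 replicas of every subpacket of every file
r = [K * 20] * M          # same budget: K*20 coded subpackets per file
assert davg_mds(p_pop, r, K, a) <= davg_uncoded(p_pop, X, K, a)
```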
From Theorem~\ref{th:TD} and Lemma~\ref{le:otd}, the per-node throughput $\lambda(n)$ for the case of uncoded caching can be obtained using~\eqref{eq:tdo} and~\eqref{eq:davg_s}, while the per-node throughput $\lambda(n)$ for the case of MDS-coded caching can be obtained using~\eqref{eq:tdo} and~\eqref{eq:davg_c}. As expected, Lemma~\ref{le:otd} implies that the average content transfer delay $D_{avg}(n)$ for both reception strategies is influenced by the cache allocation strategy. The optimal performance in terms of the minimum average content transfer delay $D_{avg}(n)$ can be obtained by optimally selecting the cache allocation strategy, which is not straightforward due to the caching constraints. Also, note that by Theorem~\ref{th:TD}, selecting the optimal cache allocation strategy that minimizes $D_{avg}(n)$ is equivalent to maximizing $\lambda(n)$ for a given $a(n)$. In the next section, we characterize the minimum average content transfer delay $D_{avg}(n)$ under our network model with subpacketization for uncoded caching by presenting the optimal cache allocation strategy.
\section{Order-Optimal Uncoded Caching in Mobile Networks with Subpacketization }~\label{section:5}
In this section, we characterize the order-optimal average content transfer delay $D_{avg}(n)$ and the corresponding maximum per-node throughput $\lambda(n)$ of the cache-enabled mobile ad hoc network employing subpacketization by selecting the order-optimal cache allocation strategies $\{\hat{X}_m\}_{m=1}^M$. We first introduce our problem formulation in terms of minimizing the average content transfer delay $D_{avg}(n)$ for the uncoded caching following the sequential reception strategy in Section~\ref{section:321}. Then, we solve the optimization problem and present the order-optimal cache allocation strategy under our network model. Finally, we present the minimum $D_{avg}(n)$ and the corresponding maximum $\lambda(n)$ using the order-optimal cache allocation strategy.
\subsection{Problem Formulation}~\label{section:51}
It is observed from Lemma~\ref{le:otd} that the average content transfer delay $D_{avg}(n)$ depends entirely on the cache allocation strategy $\{X_m\}_{m=1}^M$. Among all cache allocation strategies, the optimal one is the one that minimizes $D_{avg}(n)$. Intuitively, there is no need to cache more than $a(n)^{-1}$ replicas of any subpacket of content object $m \in \mathcal{M}$ over the network in the uncoded sequential reception case, due to the term $\min\left( 1, a(n) X_{m} \right)$ in~\eqref{eq:davg_s} of Lemma~\ref{le:otd}. Thus, we modify \eqref{eq:Cons2b} and impose the following individual caching constraints:
\begin{align} \label{eq:Cons2}
1 \leq X_{m} \leq a(n)^{-1}
\end{align}
for all $m \in \mathcal{M}$.
Now, from Lemma~\ref{le:otd} and the caching constraints in \eqref{eq:Cons1} and \eqref{eq:Cons2}, the optimal cache allocation strategy $\{\hat{X}_{m}\}_{m = 1}^M$ for the uncoded sequential reception scenario can thus be the solution to the following optimization problem:
\begin{subequations} \label{eq:op_s}
\begin{align} \label{eq:of_s}
\min_{\left\{X_{m}\right\}_{m \in \mathcal{M}}}\sum_{m=1}^M \frac{Kp^{pop}_{m}}{a(n) X_{m}}
\end{align}
subject to
\begin{align} \label{eq:c1}
\sum_{m=1}^MKX_{m} \leq Sn,
\end{align}
\begin{align} \label{eq:c2}
1 \leq X_{m} \leq a(n)^{-1}.
\end{align}
\end{subequations}
Note that the number of replicas $X_{m}$ of content object $m$ stored at the mobile nodes is an integer variable, which makes the optimization problem~\eqref{eq:op_s} non-convex and thus intractable. However, as far as scaling laws are concerned, the discrete variables $X_{m}$ for $m \in \mathcal{M}$ can be relaxed to real numbers in $[1, \infty)$ so that the objective function in \eqref{eq:op_s} becomes convex and differentiable.
\subsection{Order-Optimal Cache Allocation Strategy}~\label{section:52}
We use the Lagrangian method to solve the problem in~\eqref{eq:op_s}. Before diving into the optimization problem, we will introduce some useful operating regimes. In particular, we divide the entire content domain $\mathcal{M}$ into the following regimes according to content $m\in \mathcal{M}$:
\begin{itemize}
\item Regime $\text{I}^{(u)}$: $X_{m}=\Theta\left( a(n)^{-1} \right)$
\item Regime $\text{II}^{(u)}$: $ X_{m} = o\left( a(n)^{-1} \right)$.
\end{itemize}
\noindent Let $\mathcal{I}^{(u)}_{1}$ and $\mathcal{I}^{(u)}_{2}$ be partitions of $\mathcal{M}$ that consist of content belonging to Regimes I\(^{(u)}\) and II\(^{(u)}\), respectively. The Lagrangian function corresponding to \eqref{eq:op_s} by relaxing the $1 \leq X_{m}$ constraint is given by
\begin{align}\label{eq:LF_s}
&\mathcal{L}\left(\!\left\{X_{m}\right\}_{m \in \mathcal{M}},\delta,\left\{\sigma_m\right\}_{m \in \mathcal{M}}\right)\! =\! \sum_{m=1}^M\!\! \frac{K p^{pop}_{m}}{a(n)X_{m}}\!\nonumber \\ & +\! \delta\left(\sum_{m= 1}^{M}\!\!KX_{m}\!- Sn\!\right) + \sum_{m= 1}^{M}\!\!\sigma_m \left( X_{m}\!-\frac{1}{a(n)}\right),
\end{align}
\noindent where $\sigma_m, \delta \in \mathbb{R}$. The Karush-Kuhn-Tucker (KKT) conditions for \eqref{eq:op_s} are then given by
\begin{align}\label{eq:KKT1_s}
\frac{\partial \mathcal{L}\left(\left\{\hat{X}_{m}\right\}_{m \in \mathcal{M}},\hat{\delta},\left\{\hat{\sigma}_m\right\}_{m \in \mathcal{M}}\right) }{\partial \hat{X}_{m}}= 0,
\end{align}
\begin{align}\label{eq:KKT2_s}
\hat{\sigma}_m \left( \hat{X}_{m}- a(n)^{-1} \right) =0 ,
\end{align}
\begin{align}\label{eq:KKT3_s}
\hat{\delta} \left(\sum_{m= 1}^{M} K\hat{X}_{m} - Sn \right) =0 ,
\end{align}
\begin{align*}
\hat{\delta} \geq 0 ,
\end{align*}
\begin{align*}
\hat{\sigma}_m \geq 0
\end{align*}
\noindent for all $m \in \mathcal{M}$, where $\hat{X}_m, \hat{\delta}$, and $\hat{\sigma}_m$ represent the optimized values. Let the content index $m^{(u)}_1\in \mathcal{I}^{(u)}_{2}$ denote the smallest content index belonging to Regime II\(^{(u)}\). In the following, we introduce a lemma that presents an important characteristic of the optimal cache allocation strategy $\left\{\hat{X}_{m}\right\}_{m=1}^M$ and plays a vital role in solving \eqref{eq:op_s}.
\begin{lemma}\label{le:1_s}
The order-optimal cache allocation strategy denoted by $\left\{\hat{X}_{m}\right\}_{m=1}^M$ in \eqref{eq:op_s} is non-increasing with $m \in \mathcal{M}$.
\end{lemma}
\begin{proof} Deferred to Appendix \ref{AppendixA_s}.
\end{proof}
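To see where the square-root structure of the optimal allocation comes from, consider a content object in Regime II\(^{(u)}\), for which the cap constraint is inactive and hence $\hat{\sigma}_m = 0$ by \eqref{eq:KKT2_s}. The stationarity condition \eqref{eq:KKT1_s} applied to \eqref{eq:LF_s} then reduces to
\begin{align*}
-\frac{K p^{pop}_{m}}{a(n)\hat{X}_{m}^2} + \hat{\delta}K = 0 \quad\Longrightarrow\quad \hat{X}_{m} = \sqrt{\frac{p^{pop}_{m}}{a(n)\hat{\delta}}} \propto \sqrt{p^{pop}_{m}},
\end{align*}
i.e., in the uncapped regime the number of replicas scales with the square root of the popularity, where the multiplier $\hat{\delta}$ is fixed by the total caching constraint \eqref{eq:c1}.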
Lemma \ref{le:1_s} allows us to establish our first main result regarding the order-optimal cache allocation strategy for the uncoded case.
\begin{prop}\label{th:oprep_s}
Consider the content-centric mobile ad hoc network model employing subpacketization and following the uncoded sequential reception strategy in Section~{\em\ref{section:321}}. The order-optimal cache allocation strategy is then given by
\begin{equation} \label{eq:oprep_s}
\hat{X}_m = \begin{cases}
a(n)^{-1} & m \in \left\{1, \cdots, m^{(u)}_1-1 \right\} \\
\frac{\sqrt{p^{pop}_m}}{\sum_{\widetilde{m}= m^{(u)}_1}^M \sqrt{p^{pop}_{\widetilde{m}}} } S^{(u)} & m \in \left\{m^{(u)}_1, \cdots, M \right\}
\end{cases}
\end{equation}
\noindent where $p^{pop}_m$ is given in~\eqref{eq:zipf}, $S^{(u)}= n - (m^{(u)}_1-1) a(n)^{-1}$, and the boundary between Regimes {\em I}\(^{(u)}\) and {\em II}\(^{(u)}\) is defined by content index $m^{(u)}_1$, which is given by
\begin{equation} \label{eq:am1_s}
m^{(u)}_1 =\Theta\left( \min\left\{M, \left(\frac{na(n)}{H_{\frac{\alpha}{2}}(M)}\right)^{2/\alpha} \right\}\right),
\end{equation}
where
\begin{equation} \label{eq:H2}
H_{\frac{\alpha}{2}}\left(M\right) = \begin{cases}
\Theta\left(1 \right) & \alpha > 2 \\
\Theta\left(\log M \right) & \alpha =2 \\
\Theta\left( M^{1-\frac{\alpha}{2}} \right) & \alpha <2.
\end{cases}
\end{equation}
\end{prop}
\begin{proof}
Deferred to Appendix \ref{AppendixD_s}.
\end{proof}
From Proposition~\ref{th:oprep_s}, it is observed that the order-optimal cache allocation strategy is partitioned into two parts. The first part, consisting of the highly popular content with indices $m <m^{(u)}_1$, is replicated $a(n)^{-1}$ times. The rest is the content with indices $m \geq m^{(u)}_1$, for which the order-optimal number of replicas monotonically decreases with $m$. In addition, the value of $m^{(u)}_1$ depends on the choice of $a(n)$ and the Zipf exponent $\alpha$.
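The two-part allocation in \eqref{eq:oprep_s} can be reproduced numerically by a standard cap-and-renormalize iteration: distribute the replica budget in proportion to $\sqrt{p^{pop}_m}$, cap any content exceeding $a(n)^{-1}$, and redistribute the freed budget over the remaining tail. A Python sketch follows (function and variable names are ours; the lower bound $X_m \geq 1$ in \eqref{eq:c2} is omitted for brevity):

```python
import math

def uncoded_allocation(p, total, cap):
    """Cap-and-renormalize sketch of the allocation in eq. (oprep_s):
    X_m proportional to sqrt(p_m), capped at cap = a(n)^{-1}; `total`
    is the replica budget sum_m X_m (i.e., S*n/K)."""
    M = len(p)
    root = [math.sqrt(x) for x in p]
    capped = [False] * M
    X = [0.0] * M
    while True:
        budget = total - cap * sum(capped)
        denom = sum(r for r, c in zip(root, capped) if not c)
        changed = False
        for m in range(M):
            if capped[m]:
                X[m] = cap
            else:
                X[m] = budget * root[m] / denom
                if X[m] > cap:       # content moves into Regime I
                    capped[m] = True
                    changed = True
        if not changed:
            return X
```

Since the capped set can only grow, the iteration terminates after at most $M$ passes; the contents capped at $a(n)^{-1}$ are exactly those with indices $m < m^{(u)}_1$.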
Next, based on our uncoded cache allocation strategy under the total caching constraint in~\eqref{eq:op_s}, we extend the strategy to satisfy the local caching constraints. Based on the solution $\{\hat{X}_{m}\}_{m=1}^M $ in Proposition~\ref{th:oprep_s}, the central server places replicas of the content in the cache of each node according to the replica allocation algorithm in~\cite[Appendix C]{alfano}, in which content objects are considered in sequence and the algorithm is decomposed into $MK$ steps. The design of this algorithm is essentially inspired by the well-known water-filling strategy. The $(m,k)$th step (i.e., the $\left(k+(m-1)K\right)$th step) of the algorithm is responsible for caching the $\left\lceil \hat{X}_m\right\rceil$ replicas of the $k$th subpacket of content $m\in \mathcal{M}$, where $\left\lceil x\right\rceil$ denotes the ceiling function of $x$. More specifically, at the $(m,k)$th step, a set of $\left\lceil \hat{X}_m\right\rceil$ distinct nodes $\mathcal{N}_{m,k}^{(u)}$ is selected and a replica of the $k$th subpacket of content $m$ is assigned to each node in the set $\mathcal{N}^{(u)}_{m,k}$. In the first step (i.e., the $(1,1)$th step), $\left\lceil \hat{X}_1 \right\rceil$ nodes are randomly assigned to the set $\mathcal{N}_{1,1}^{(u)}$. In each subsequent $(m,k)$th step, all nodes are first sorted in ascending order of the total number of subpackets cached by each node since the algorithm was initiated, and then the top $\left\lceil \hat{X}_m \right\rceil$ nodes from the sorted list are assigned to the set $\mathcal{N}_{m,k}^{(u)}$. In other words, preference is given to the nodes with the fewest assigned subpackets to date. If there is a tie in the number of subpackets assigned to nodes' caches after the sorting at a given step, a random selection is made among the tied nodes. These steps are repeated $MK$ times until all the replicas of the content are assigned.
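The placement procedure described above can be sketched in a few lines of Python; this is a simplified rendering of the algorithm (tie-breaking by node index replaces the random tie-break, and the variable names are ours):

```python
def place_replicas(copies, n_nodes):
    """Load-balancing placement: process the MK (content, subpacket)
    steps in sequence; at each step assign the required number of
    replicas to the currently least-loaded nodes.
    copies[t] = ceil(X_m) for the t-th (m, k) step."""
    load = [0] * n_nodes                  # subpackets cached per node
    cache = [[] for _ in range(n_nodes)]  # which steps each node stores
    for step, c in enumerate(copies):
        # the c least-loaded nodes (ties broken by node index here)
        chosen = sorted(range(n_nodes), key=lambda i: load[i])[:c]
        for i in chosen:
            load[i] += 1
            cache[i].append(step)
    return load, cache
```

Because each step tops up the least-loaded nodes, the per-node loads stay within one subpacket of each other (provided each step requires at most $n$ nodes), which is the load balance underlying the factor-of-2 bound in Remark~\ref{R:1}.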
\begin{remark}\label{R:1}
Since $\hat{X}_m \geq 1$ implies $\left\lceil \hat{X}_m \right\rceil \leq 2\hat{X}_m$, we have $\sum_{m=1}^M K\left\lceil \hat{X}_m \right\rceil \leq 2 \sum_{m=1}^M K \hat{X}_m \leq 2Sn$. Hence, as long as the cache of each node is filled with replicas according to the above replica allocation algorithm, the proposed order-optimal cache allocation strategy in Proposition~\ref{th:oprep_s} can be extended to satisfy the property that the number of subpackets stored by each node (i.e., the storage capacity per node) is bounded by $2S$, which is given by $\Theta(K)$. That is, our cache allocation strategy in Proposition~\ref{th:oprep_s} fulfills the local cache size constraints within a factor of 2.
\end{remark}
In the next subsection, we characterize the optimized minimum average content transfer delay $D_{avg}(n)$ by adopting the order-optimal cache allocation strategy presented in Proposition~\ref{th:oprep_s} and analyze the impact of some key parameters, $K$, $M$, $a(n)$, and $\alpha$ on the order-optimal performance.
\subsection{Order-Optimal Performance}~\label{section:54}
In this subsection, we compute the minimum average content transfer delay $D_{avg}(n)$ using the order-optimal cache allocation strategy obtained in Proposition~\ref{th:oprep_s}.
\begin{theorem}\label{th:delay_s} Consider a content-centric mobile ad hoc network model with subpacketization adopting the order-optimal cache allocation strategy $\{\hat{X}_{m}\}_{m=1}^M$ in~\eqref{eq:oprep_s} and following the uncoded sequential reception strategy. Then, the minimum average content transfer delay $D_{avg}(n)$ is given by
\smallskip
\begin{equation} \label{eq:delay_s2}
D_{avg}(n) = \Theta\left(\max\left\{K, \frac{K\left(H_{\frac{\alpha}{2}}(M)\right)^2}{na(n)H_{\alpha}(M)} \right\} \right),
\end{equation}
\noindent where $K$ is the number of subpackets of each content, $a(n)$ is the area in which a node can communicate with other nodes, and $H_\alpha(M)$ and $H_\frac{\alpha}{2}(M)$ are given in \eqref{eq:H} and \eqref{eq:H2}, respectively.
\end{theorem}
\smallskip
\begin{proof}
Deferred to Appendix \ref{AppendixE_s}.
\end{proof}
From Theorems~\ref{th:TD} and~\ref{th:delay_s}, the maximum achievable per-node throughput $\lambda(n)$ is given by
\begin{equation}\label{eq:pnt_s}
\lambda(n)= \Theta\left(\min\left\{\frac{1}{na(n)K} , \frac{H_{\alpha}(M)}{K\left(H_{\frac{\alpha}{2}}(M)\right)^2} \right\}\right).
\end{equation}
If the content follows the Zipf distribution with exponent $\alpha > 2$, then the best delay $D_{avg}(n)=\Theta(K)$ and the corresponding throughput $\lambda(n)=\Theta\left(\frac{1}{na(n)K}\right)$ are achieved. When $\alpha \leq 2$, the minimum delay $D_{avg}(n)$ and the corresponding throughput $\lambda(n)$ start to scale with $a(n)$, $K$, and $M$. In the next section, we characterize the minimum average content transfer delay $D_{avg}(n)$ and the corresponding per-node throughput $\lambda(n)$ under our network model for the MDS-coded caching case by presenting the order-optimal cache allocation strategy.
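The growth regimes of the generalized harmonic sums entering \eqref{eq:delay_s2} (via \eqref{eq:H} and \eqref{eq:H2}) are easy to check at finite sizes. In the following Python sketch, the exponent $s$ plays the role of $\alpha$ or $\alpha/2$:

```python
def harmonic(M, s):
    """Generalized harmonic number H_s(M) = sum_{m=1}^{M} m^{-s}."""
    return sum(m ** (-s) for m in range(1, M + 1))
```

Consistent with \eqref{eq:H2}, the sum is bounded for $s>1$, grows logarithmically at $s=1$, and grows as $\Theta(M^{1-s})$ (with leading constant $1/(1-s)$) for $s<1$.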
\section{Order-Optimal MDS-coded Caching in Mobile Networks with Subpacketization }~\label{section:6}
In this section, we propose the order-optimal MDS-coded cache allocation strategies $\{\hat{r}_m\}_{m=1}^M$ to characterize the order-optimal average content transfer delay $D_{avg}(n)$ and the corresponding maximum per-node throughput $\lambda(n)$ of the cache-enabled mobile ad hoc network employing subpacketization. We first introduce our problem formulation in terms of minimizing $D_{avg}(n)$ for the MDS-coded caching following the random reception strategy in Section~\ref{section:322}. Then, we solve the optimization problem and propose the order-optimal cache allocation strategy $\left\{\hat{r}_m\right\}_{m=1}^M$ under our network model. Finally, we present the minimum $D_{avg}(n)$ and the corresponding maximum $\lambda(n)$ using the order-optimal cache allocation strategy.
\subsection{Problem Formulation}~\label{section:61}
It can be seen that there is no need to cache more than $a(n)^{-1}+K$ MDS-coded subpackets of content object $m \in \mathcal{M}$ over the network for the MDS-coded random reception case due to the term $\min\left( 1, (r_m-j) a(n) \right)$ in~\eqref{eq:davg_c} of Lemma~\ref{le:otd}. Thus, we modify \eqref{eq:Cons2cb} and impose the following individual caching constraints:
\begin{equation} \label{eq:Cons2c}
K \leq r_{m} \leq a(n)^{-1}+K
\end{equation}
for all $m \in \mathcal{M}$. Now, from Lemma~\ref{le:otd} and the caching constraints in \eqref{eq:Cons1c} and \eqref{eq:Cons2c}, the optimal cache allocation strategy $\left\{\hat{r}_{m}\right\}_{m = 1}^M$ for the MDS-coded random reception scenario can thus be the solution to the following optimization problem:
\begin{subequations} \label{eq:opc}
\begin{equation} \label{eq:ofc}
\min_{\left\{r_{m}\right\}_{m \in \mathcal{M}}}\sum_{m=1}^M \sum_{j=0}^{K-1} \frac{p^{pop}_{m}}{\min\left( 1, (r_m-j) a(n) \right)}
\end{equation}
subject to
\begin{equation} \label{eq:c1c}
\sum_{m=1}^Mr_{m} \leq Sn,
\end{equation}
\begin{equation} \label{eq:c2c}
K \leq r_{m} \leq a(n)^{-1}+K.
\end{equation}
\end{subequations}
As in the uncoded caching case, we relax the discrete variables $r_{m}$ for $m \in \mathcal{M}$ to real numbers in $[K, \infty)$ so that the objective function in \eqref{eq:opc} becomes convex and differentiable.
\subsection{Order-Optimal Cache Allocation Strategy}~\label{section:62}
The objective function in \eqref{eq:ofc} contains a $\min$ function in the denominator, which makes the optimization problem intractable. Thus, we first simplify the objective function in \eqref{eq:ofc} and then solve the simplified optimization problem to obtain the order-optimal cache allocation strategy.
\subsubsection{Simplifying Objective Function}~\label{section:621} We simplify the objective function in \eqref{eq:ofc} by dividing the entire content domain $\mathcal{M}$ into the following three regimes:
\begin{itemize}
\item Regime $\text{I}^{(c)}$: $r_{m}=\Omega\left( a(n)^{-1} \right)$
\item Regime $\text{II}^{(c)}$: $r_{m} = o\left( a(n)^{-1} \right)$ and $\Omega\left( K^{1+\epsilon}\right)$
\item Regime $\text{III}^{(c)}$: $r_{m} = o\left( K^{1+\epsilon}\right)$ and $\Omega(K)$,
\end{itemize}
\noindent where $\epsilon > 0$ is an arbitrarily small constant. Let $\mathcal{I}^{(c)}_{1}$, $\mathcal{I}^{(c)}_{2}$, and $\mathcal{I}^{(c)}_{3}$ be partitions of $\mathcal{M}$ consisting of content objects belonging to Regimes I\(^{(c)}\), II\(^{(c)}\), and III\(^{(c)}\), respectively. We now characterize the transfer delay for each content $m \in \mathcal{M}$ according to the three regimes in order to simplify the objective function in \eqref{eq:ofc}. \\
\noindent \textbf{Transfer Delay for Content \(m\in\mathcal{I}^{(c)}_{1}\):}
In Regime I\(^{(c)}\), let $q_m$ be the integer such that $0 \leq q_m \leq K-1$, $(r_m-q_m) a(n) \geq 1$, and $(r_m-q_m -1)a(n)<1$. Now, the transfer delay for each content $m \in \mathcal{I}^{(c)}_{1}$ is given by
\begin{align*}
\sum_{j=0}^{K-1} &\frac{1}{\min\left( 1,(r_m-j)a(n) \right)}\\&= \sum_{j=0}^{q_m}1 +\sum_{j=q_m+1}^{K-1} \frac{1 }{(r_m-j) a(n) }\\& = (q_m+1 ) + \frac{1}{a(n)}\log \left(\frac{r_m-q_m-1}{r_m-K}\right),
\end{align*}
\noindent where the second equality holds in order sense from the logarithmic asymptotics of the partial harmonic sum. By the definition of $q_m$, we have $r_m-q_m-1= \Theta \left( a(n)^{-1}\right)$, which gives us
\begin{align} \label{eq:dm_r1}
\sum_{j=0}^{K-1} &\frac{1}{\min\left( 1, (r_m-j) a(n) \right)} \nonumber
\\&= r_m- a(n)^{-1} + a(n)^{-1} \log\left(\!\frac{\!a(n)^{-1}}{r_m - K} \right).
\end{align}
\noindent Let $z=\!\left(\!r_m\!-K\!-a(n)^{-1}\!\right)/a(n)^{-1}$. Then, it follows that $\log\left(\frac{a(n)^{-1}}{r_m-K}\!\right)=-\log\left(1+z\right)$ and $ \log\left(1+ z\right) = z + O\left(z^2\right)$ due to $z=o(1)$ in Regime I\(^{(c)}\). This finally results in
\begin{equation} \label{eq:davgc1}
\sum_{j=0}^{K-1} \frac{1}{\min\left( 1, (r_m-j) a(n) \right)} = \Theta\left(K\right) \hspace{0.5cm}\textrm{for } m \in \mathcal{I}^{(c)}_{1}.
\end{equation}
\noindent \textbf{Transfer Delay for Content \(m\in\mathcal{I}^{(c)}_{2}\):}
In Regime II\(^{(c)}\), the transfer delay for each content $m \in \mathcal{I}^{(c)}_{2}$ is given by
\begin{align*}
\sum_{j=0}^{K-1} \frac{1}{\min\left( 1, (r_m-j) a(n) \right)} &= \sum_{j=0}^{K-1} \frac{1}{ (r_m-j) a(n) }
\\& = \frac{1}{a(n)} \log\left(\frac{r_m}{r_m-K} \right).
\end{align*}
\noindent Let $z= K/r_m$. Then, it follows that $\log\left(\frac{ r_m }{r_m - K} \right)=-\log\left(1-z\right)$ and $\log\left(1-z\right)= -z + O\left(z^2\right)$ due to $z=o(1)$ in Regime II\(^{(c)}\). This finally results in
\begin{equation} \label{eq:davgc2}
\sum_{j=0}^{K-1}\!\! \frac{1}{\min\left( 1, (r_m-j) a(n) \right)}= \Theta\left(\frac{K}{a(n)r_m}\!\right) \hspace{0.3cm} \textrm{for } m \in \mathcal{I}^{(c)}_{2}.
\end{equation}
\noindent \textbf{Transfer Delay for Content \(m\in\mathcal{I}^{(c)}_{3}\):}
In Regime III\(^{(c)}\), the transfer delay for each content $m \in \mathcal{I}^{(c)}_{3}$ is given by
\begin{align}\label{eq:dm_r3}
\sum_{j=0}^{K-1} \frac{1}{\min\left( 1, (r_m-j) a(n) \right)} &= \sum_{j=0}^{K-1} \frac{1 }{(r_m-j) a(n) } \nonumber
\\ &= \frac{1}{a(n)} \left(\! \log r_m- \log(r_m-K) \!\right).
\end{align}
\noindent In this regime, we have $r_{m} = o\left( K^{1+\epsilon}\right)$ and $\Omega(K)$ for an arbitrarily small $\epsilon>0$. Thus, it follows that $\log r_m - \log(r_m-K) = \Theta(\log r_m )$ $=$ $\Theta(\log K)$, which results in
\begin{align} \label{eq:davgc3}
\sum_{j=0}^{K-1} \frac{1}{\min\left( 1, (r_m-j) a(n) \right)} = \Theta\left(\frac{\log K}{a(n)} \right) \hspace{0.3cm} \textrm{for } m \in \mathcal{I}^{(c)}_{3}.
\end{align}
Now, using \eqref{eq:davgc1}, \eqref{eq:davgc2}, and \eqref{eq:davgc3}, we can establish the following optimization problem, which is equivalent in order sense to the original problem in \eqref{eq:opc}:
\begin{align}\label{eq:opsc}
\min_{\left\{r_{m}\right\}_{m \in \mathcal{M}}}\!\!\left(\!\sum_{m \in \mathcal{I}^{(c)}_{1}}\!\!\!\!Kp^{pop}_{m}\!+\!\!\!\!\sum_{m\in \mathcal{I}^{(c)}_{2}}\!\!\!\frac{p^{pop}_{m}K}{a(n)r_m}\!+\!\!\!\sum_{m \in \mathcal{I}^{(c)}_{3}}\!\!\!\!\frac{p^{pop}_{m}\log K}{a(n)}\!\!\right)
\end{align}
subject to
\begin{align*}
\sum_{m=1}^Mr_{m} \leq Sn,
\end{align*}
\begin{align*}
K \leq r_{m} \leq a(n)^{-1}+K.
\end{align*}
\subsubsection{Solving the Simplified Optimization Problem}~\label{section:622}
The Lagrangian function corresponding to \eqref{eq:opsc} is given by
\begin{align*} \label{eq:LFc}
& \mathcal{L}\left(\left\{r_{m}\right\}_{m \in \mathcal{M}},\delta,\left\{\sigma_m\right\}_{m \in \mathcal{M}}, \left\{\mu_m\right\}_{m \in \mathcal{M}}\right) = \sum_{m \in \mathcal{I}^{(c)}_{1}} K p^{pop}_{m} \\& +\sum_{m \in \mathcal{I}^{(c)}_{2}} \frac{p^{pop}_{m}K}{a(n)r_m}+ \!\!\sum_{m \in \mathcal{I}^{(c)}_{3}}\frac{p^{pop}_{m}\log K}{a(n)}\!\! + \delta \left(\sum_{m= 1}^{M} r_{m} - Sn \right) \\ & + \sum_{m= 1}^{M} \sigma_m \left( K- r_{m} \right) + \sum_{m= 1}^{M} \mu_m \left( r_{m} - a(n)^{-1}-K\right),
\end{align*}
\noindent where $\mu_m, \sigma_m, \delta \in \mathbb{R}$. The KKT conditions for \eqref{eq:opsc} are then given by
\begin{equation}\label{eq:KKT1_c}
\frac{\partial \mathcal{L}\left(\left\{\hat{r}_{m}\right\}_{m \in \mathcal{M}},\hat{\delta}, \left\{\hat{\mu}_m \right\}_{m \in \mathcal{M}}, \left\{\hat{\sigma}_m\right\}_{m \in \mathcal{M}}\right) }{\partial \hat{r}_{m}}= 0,
\end{equation}
\begin{equation}\label{eq:KKT2_c}
\hat{\sigma}_m \left( K- \hat{r}_{m} \right) =0 ,
\end{equation}
\begin{equation}\label{eq:KKT6_c}
\hat{\mu}_m \left( \hat{r}_{m} - a(n)^{-1}- K\right) =0 ,
\end{equation}
\begin{equation}\label{eq:KKT3_c}
\hat{\delta} \left(\sum_{m= 1}^{M} \hat{r}_{m} - Sn \right) =0 ,
\end{equation}
\begin{equation*}\label{eq:KKT4_c}
\hat{\delta} \geq 0 ,
\end{equation*}
\begin{equation*}\label{eq:KKT5_c}
\hat{\sigma}_m \geq 0 ,
\end{equation*}
\begin{equation*}\label{eq:KKT7_c}
\hat{\mu}_m \geq 0
\end{equation*}
\noindent for all $m \in \mathcal{M}$, where $\hat{r}_m, \hat{\delta}$, $\hat{\mu}_m$, and $\hat{\sigma}_m$ represent the optimized values. Let the content indices $m^{(c)}_1\in \mathcal{I}^{(c)}_{2}$ and $m^{(c)}_2\in \mathcal{I}^{(c)}_{3}$ denote the smallest content indices belonging to Regimes II\(^{(c)}\) and III\(^{(c)}\), respectively. In the following, we introduce a lemma that presents an important characteristic of the optimal cache allocation strategy $\left\{\hat{r}_{m}\right\}_{m=1}^M$.
\begin{lemma}\label{le:1_c}
The optimal cache allocation strategy denoted by $\left\{\hat{r}_{m}\right\}_{m=1}^M$ in \eqref{eq:opsc} is non-increasing with $m \in \mathcal{M}$.
\end{lemma}
\begin{proof}
Deferred to Appendix \ref{AppendixA_c}.
\end{proof}
Lemma \ref{le:1_c} allows us to establish the second main result regarding the optimal cache allocation strategy for the MDS-coded caching scenario.
\begin{prop}\label{th:oprep_c}
Consider the content-centric mobile ad hoc network model employing subpacketization and following the MDS-coded random reception strategy in Section~{\em\ref{section:322}}. The order-optimal cache allocation strategy is given by
\begin{equation} \label{eq:oprep_c}
\hat{r}_m = \begin{cases}
a(n)^{-1} & m \in \left\{1, \cdots, m^{(c)}_1-1 \right\} \\
\frac{\sqrt{p^{pop}_m}}{\sum_{\widetilde{m}= m^{(c)}_1}^{m^{(c)}_2-1} \sqrt{p^{pop}_{\widetilde{m}}} } S^{(c)} & m \in \left\{m^{(c)}_1, \cdots, m^{(c)}_2-1 \right\} \\
K & m \in \left\{m^{(c)}_2, \cdots, M \right\},
\end{cases}
\end{equation}
where $p^{pop}_m$ is given in \eqref{eq:zipf}, $ S^{(c)}= Sn - (m^{(c)}_1-1)a(n)^{-1}- (M-m^{(c)}_2+1)K $, and the boundaries between any two regimes are defined by the content indices $m^{(c)}_1$ and $m^{(c)}_2$, which are given by
\begin{equation} \label{eq:am2_c}
m^{(c)}_2\! =\!\! \begin{cases}
\Theta\left(\min\left\{M,(n-M)^{\frac{2}{\alpha}} \right\}\right) &\!\! \alpha > 2 \\
\Theta\left( \min\left\{M, (n-M)\left(\frac{a(n)^{-1}}{K}\right)^{\frac{2}{\alpha}-1} \right\}\right) & \!\!\alpha \leq 2
\end{cases}
\end{equation}
\noindent and
\begin{equation} \label{eq:am1_c}
m^{(c)}_1 = \begin{cases}
\Theta\left(\min\left\{M,\left(\frac{K(n-M)}{a(n)^{-1}}\right)^{\frac{2}{\alpha}}\!\!\right\}\right) &\!\! \alpha > 2 \\
\Theta\left( \min\left\{M, (n-M)Ka(n) \right\} \right) & \alpha \leq 2,
\end{cases}
\end{equation}
respectively.
\end{prop}
\begin{proof}
Deferred to Appendix \ref{AppendixD_c}.
\end{proof}
From Proposition~\ref{th:oprep_c}, the order-optimal cache allocation strategy is partitioned into three parts, and the content indices $m^{(c)}_1$ and $m^{(c)}_2$ are specified as functions of the key parameters $K$, $M$, $a(n)$, and $\alpha$. As in the uncoded caching scenario, our MDS-coded cache allocation strategy under the total caching constraint in \eqref{eq:opc} can be extended to satisfy the local caching constraints when the replica allocation algorithm in Section~\ref{section:52} is employed, in which, for each content $m \in \mathcal{M}$, $\left\lceil \hat{r}_m\right\rceil$ MDS-coded subpackets are cached instead of $\left\lceil \hat{X}_m\right\rceil$ replicas. By the same argument as in Remark~\ref{R:1}, the local cache size constraints hold within a factor of 2.
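As in the uncoded case, the three-part allocation in \eqref{eq:oprep_c} can be reproduced numerically by clipping a $\sqrt{p^{pop}_m}$-proportional assignment to the interval $[K, a(n)^{-1}]$ (order-equivalent to the cap $a(n)^{-1}+K$ in \eqref{eq:Cons2c}) and redistributing the budget until the clipped solution is feasible. A Python sketch with our own naming:

```python
import math

def coded_allocation(p, total, cap, K):
    """Clip-and-renormalize sketch of eq. (oprep_c): r_m proportional
    to sqrt(p_m) on the middle regime, clipped to [K, cap] with
    cap = a(n)^{-1}; `total` is the subpacket budget sum_m r_m <= S*n."""
    M = len(p)
    root = [math.sqrt(x) for x in p]
    state = [None] * M   # None = free, 'cap' = Regime I, 'floor' = Regime III
    while True:
        budget = total - cap * state.count('cap') - K * state.count('floor')
        denom = sum(r for r, s in zip(root, state) if s is None)
        changed = False
        alloc = [0.0] * M
        for m in range(M):
            if state[m] == 'cap':
                alloc[m] = cap
            elif state[m] == 'floor':
                alloc[m] = float(K)
            else:
                x = budget * root[m] / denom
                if x > cap:
                    state[m], changed = 'cap', True
                elif x < K:
                    state[m], changed = 'floor', True
                else:
                    alloc[m] = x
        if not changed:
            return alloc
```

Since a content object never leaves the capped or floored set once it enters, the iteration terminates after at most $M$ passes; the contents clipped to $a(n)^{-1}$ and to $K$ correspond to the head $m < m^{(c)}_1$ and the tail $m \geq m^{(c)}_2$, respectively.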
In the next subsection, we characterize the optimized minimum average content transfer delay $D_{avg}(n)$ by adopting the order-optimal cache allocation strategy presented in Proposition~\ref{th:oprep_c} and also analyze the impact of key parameters $K$, $M$, $a(n)$, and $\alpha$ on the order-optimal performance.
\subsection{Order-Optimal Performance}~\label{section:63}
In this subsection, we compute the minimum average content transfer delay $D_{avg}(n)$ using the order-optimal cache allocation strategy obtained in Proposition~\ref{th:oprep_c}.
\begin{theorem}\label{th:delay_c} Consider a content-centric mobile ad hoc network model with subpacketization adopting the optimal cache allocation strategy $\left\{\hat{r}_{m}\right\}_{m=1}^M$ in~\eqref{eq:oprep_c} and following the MDS-coded random reception strategy. Then, the minimum average content transfer delay $D_{avg}(n)$ is given by
\smallskip
\begin{align}
D_{avg}(n)=\begin{cases}
\Theta\left(K\right) & m^{(c)}_1=\Theta(M) \\
\Theta\left(\max\left\{K,\frac{\left(H_\frac{\alpha}{2}(M)\right)^2}{H_\alpha(M) na(n)}\right\}\right) & m^{(c)}_2=\Theta(M) \textrm{ and } m^{(c)}_1=o(M) \\
\Theta\left(\max\left\{K,\frac{a(n)^{-1}\left(H_\frac{\alpha}{2}(m^{(c)}_2)\right)^2}{H_\alpha(M) (n-M)},\frac{\log K}{a(n)}\right\}\right) & m^{(c)}_2=o(M),
\end{cases} \label{eq:delay_c2}
\end{align}
\noindent where $K$ is the number of subpackets of each content, $a(n)$ is the area in which a node can communicate with other nodes, and $H_\alpha(M)$ and $H_\frac{\alpha}{2}(M)$ are given in \eqref{eq:H} and \eqref{eq:H2}, respectively.
\end{theorem}
\smallskip
\begin{proof}
Deferred to Appendix \ref{AppendixE_c}.
\end{proof}
From Theorems~\ref{th:TD} and~\ref{th:delay_c}, the maximum achievable per-node throughput $\lambda(n)$ is given by
\begin{equation} \label{eq:pnt_c}
\lambda(n)=\begin{cases}
\Theta\left(\frac{1}{na(n)K}\right) & m^{(c)}_1=\Theta(M) \\
\Theta\left(\min\left\{\frac{1}{na(n)K},\frac{H_\alpha(M)}{\left(H_\frac{\alpha}{2}(M)\right)^2}\right\}\right) & m^{(c)}_2=\Theta(M) \textrm{ and } m^{(c)}_1=o(M) \\
\Theta\left(\min\left\{\frac{1}{na(n)K},\frac{H_\alpha(M) (n-M)}{n\left(H_\frac{\alpha}{2}(m^{(c)}_2)\right)^2},\frac{1}{n\log K}\right\}\right) & m^{(c)}_2=o(M).
\end{cases}
\end{equation}
As in the uncoded caching case, the average content transfer delay $D_{avg}(n)$ and the per-node throughput $\lambda(n)$ for MDS-coded caching scale with $a(n)$, $K$, $M$, and $\alpha$. To validate the analytical results obtained in Sections~\ref{section:5} and~\ref{section:6}, we perform extensive numerical evaluation in the next section.
\section{Numerical Evaluation and Performance Comparison}~\label{section:7}
In this section, we perform extensive computer simulations with finite system parameters $a(n)$, $K$, $M$, and $\alpha$ to obtain the numerical solutions to the optimization problems in \eqref{eq:op_s} and \eqref{eq:opsc}. We compare the numerical evaluations with the analytical results presented in Sections~\ref{section:5} and~\ref{section:6} to validate our analysis. We first validate the order-optimal cache allocation strategies presented in \eqref{eq:oprep_s} and \eqref{eq:oprep_c} and highlight the impact of the system parameters according to the operating regimes. Then, we compare the order-optimal performance in terms of the average content transfer delay $D_{avg}(n)$ for the uncoded and MDS-coded caching scenarios.
\subsection{Order-Optimal Cache Allocation Strategy}~\label{section:71}
Figure~\ref{fig:caching_uncoded} illustrates the optimal caching strategy for the uncoded caching case employing sequential reception. We can observe the consistency between the analytical results in Fig.~\ref{fig:Aseq_rep}, obtained using Proposition \ref{th:oprep_s}, and the results obtained by numerically solving the problem in~\eqref{eq:op_s} in Fig.~\ref{fig:Nseq_rep} for $M=250$, $K=20$, and $n=30000$. We can see how the optimal number of replicas $\hat{X}_m$ behaves for different values of the area $a(n)$ and the Zipf exponent $\alpha$ (i.e., values corresponding to their respective operating regimes), as depicted in Fig.~\ref{fig:caching_uncoded}.
When $\alpha=0.5$, the boundary between Regimes I\(^{(u)}\) and II\(^{(u)}\) is given by $m^{(u)}_1=\Theta\left(\frac{(na(n))^4}{M^3}\right)$. In this case, if $a(n)= \Theta\left(\log n/n\right)$, then the optimal number of replicas $\hat{X}_m$ is monotonically decreasing with a slope of $\alpha/2$, i.e., the caching strategy operates in Regime II\(^{(u)}\). When we increase $a(n)$ $($e.g., $a(n)= \Theta\left(M^{0.8}/n\right))$, the caching strategy operates in both Regimes I\(^{(u)}\) and II\(^{(u)}\). On the other hand, when $\alpha=2$, the boundary between the two regimes is given by $m^{(u)}_1=\Theta\left(\frac{na(n)}{\log M}\right)$. In this case, the range of Regime I\(^{(u)}\) tends to be wider than in the case of $\alpha=0.5$, as shown in Fig.~\ref{fig:caching_uncoded}.
\begin{figure}[t]
\centering
\subfigure[Analytical results]{\includegraphics[width=0.48\linewidth]{Ana_Seq_caching1.eps} \label{fig:Aseq_rep}}
\subfigure[Numerical results]{\includegraphics[width=0.48\linewidth]{Num_Seq_caching1.eps} \label{fig:Nseq_rep}}
\caption{ Optimal cache allocation strategy versus content object $m$ for the uncoded caching case employing sequential reception.}
\label{fig:caching_uncoded}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Analytical results]{\includegraphics[width=0.48\linewidth]{Ana_Cod_caching1.eps} \label{fig:Aran_rep}}
\subfigure[Numerical results]{\includegraphics[width=0.48\linewidth]{Num_Cod_caching1.eps} \label{fig:Nran_rep}}
\caption{ Optimal cache allocation strategy versus content object $m$ for the MDS-coded caching case employing random reception.}
\label{fig:caching_coded}
\end{figure}
In Fig.~\ref{fig:caching_coded}, the optimal caching strategy for the MDS-coded caching case employing random reception is illustrated, where the analytical results obtained by Proposition \ref{th:oprep_c} are depicted in Fig.~\ref{fig:Aran_rep}. The results obtained by numerically solving the problem in~\eqref{eq:opsc} are also shown in Fig.~\ref{fig:Nran_rep} for $M=250$, $K=3$, and $n=30000$. As in the uncoded caching case, we can see how the optimal number of MDS-coded subpackets $\hat{r}_m$ behaves for different values of the area $a(n)$ and the Zipf exponent $\alpha$ (i.e., values corresponding to their respective operating regimes), as depicted in Fig.~\ref{fig:caching_coded}.
From Propositions \ref{th:oprep_s} and \ref{th:oprep_c}, an important observation is that for given system parameters, the range of Regime I\(^{(c)}\) (the MDS-coded caching case) tends to scale $K$ times wider than that of Regime I\(^{(u)}\) (the uncoded caching case).
\subsection{Order-Optimal Performance}~\label{section:72}
In Fig.~\ref{fig:del_area}, we illustrate how the optimal average content transfer delay $D_{avg}(n)$ behaves for different values of the area $a(n)$ and the Zipf exponent $\alpha$. We can observe the consistency between the analytical results in Fig.~\ref{fig:Adel_area}, obtained using Theorems~\ref{th:delay_s} and~\ref{th:delay_c}, and the results obtained by numerically solving the problems in~\eqref{eq:op_s} and~\eqref{eq:opsc} in Fig.~\ref{fig:Ndel_area}, respectively, for $M=250$, $K=20$, and $n=30,000$. When $\alpha=3$, the average content transfer delay of $D_{avg}(n)= \Theta(K)$ is achieved for both the uncoded and MDS-coded caching cases, which is the minimum that we can hope for as long as $a(n)=\Omega(\log n/n)$. The performance difference between the uncoded and the MDS-coded caching scenarios becomes prominent when $\alpha < 2$, as shown in the figure. For $\alpha=1.5$, the average delay $D_{avg}(n)$ is given by $\Theta\left(\max\left\{K, \frac{KM^{0.5}}{na(n)}\right\}\right)$ and $\Theta\left(\max\left\{K,\frac{M^{0.5}}{na(n)}\right\}\right)$ for the uncoded and the MDS-coded caching cases, respectively. Moreover, we have $D_{avg}(n) =\Theta(K)$ when $a(n)$ scales as $\Omega\left(\sqrt{M}/n\right)$ and as $\Omega\left(\sqrt{M}/nK\right)$ for the uncoded and MDS-coded caching cases, respectively. Similarly, for $\alpha=0.5$, it follows that $D_{avg}(n) =\Theta(K)$ when $a(n)$ scales as $\Omega(M/n)$ and as $\Omega(M/nK)$ for the uncoded and MDS-coded caching cases, respectively. For $\alpha<2$, based on the above arguments and from Theorem~\ref{th:TD}, the per-node throughput $\lambda(n)$ for the MDS-coded caching case scales $K$ times larger than that for the uncoded caching case while attaining the order-optimal delay $D_{avg}(n) =\Theta(K)$.
\begin{figure}[t]
\centering
\subfigure[Analytical results]{\includegraphics[width=0.48\linewidth]{Ana_delay_area1.eps} \label{fig:Adel_area}}
\subfigure[Numerical results]{\includegraphics[width=0.48\linewidth]{Num_delay_area1.eps} \label{fig:Ndel_area}}
\caption{The average content transfer delay $D_{avg}(n)$ versus cell area $a(n)$.}
\label{fig:del_area}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Analytical results]{\includegraphics[width=0.48\linewidth]{Ana_delay_content1.eps} \label{fig:Adel_content}}
\subfigure[Numerical results]{\includegraphics[width=0.48\linewidth]{Num_delay_content1.eps} \label{fig:Ndel_content}}
\caption{The average content transfer delay $D_{avg}(n)$ versus the number of content objects $M$.}
\label{fig:del_content}
\end{figure}
Figure~\ref{fig:del_content} illustrates how the optimal average content transfer delay $D_{avg}(n)$ behaves for different values of the number of content objects $M$ and the Zipf exponent $\alpha$. Figure~\ref{fig:Adel_content} shows the analytical results obtained from Theorems~\ref{th:delay_s} and~\ref{th:delay_c}, and Fig.~\ref{fig:Ndel_content} shows the results obtained by numerically solving the problems in~\eqref{eq:op_s} and~\eqref{eq:opsc} for $K=5$, $a(n)= \log n/n$, and $n=30,000$. The performance difference between the uncoded and the MDS-coded caching cases again becomes prominent when $\alpha < 2$. For $\alpha=1.25$, the average delay $D_{avg}(n)$ is given by $\Theta\left(\max\left\{K, \frac{KM^{0.75}}{\log n}\right\}\right)$ and $\Theta\left(\max\left\{K,\frac{M^{0.75}}{\log n}\right\}\right)$ for the uncoded and the MDS-coded caching cases, respectively. Hence, we have $D_{avg}(n) =\Theta(K)$ when $M$ scales as $O\left(\left(\log n\right)^\frac{4}{3}\right)$ and as $O\left(\left(K \log n\right)^\frac{4}{3}\right)$ for the uncoded and the MDS-coded caching cases, respectively. Similarly, for $\alpha=0.5$, it follows that $D_{avg}(n) =\Theta(K)$ when $M$ scales as $O(\log n)$ and as $O(K \log n)$, respectively.
\section{Extension to the Random Walk Mobility Model}~\label{section:8}
In this section, we extend our study to another scenario where each node moves independently according to the random walk mobility model studied in~\cite{alfano, rw, algamal}. In this mobility model, the position $d(t)$ of a node at time slot $t$ is updated by $d(t)= d(t-1)+ y_t$, where $y_t$ is a sequence of independent and identically distributed (i.i.d.) random variables that represent a node's flight vector with an average flight length $L= \mathbb{E}\left[\left\|y_t\right\|\right]$. Here, $L$ is assumed to scale as the transmission range $R$ (i.e., $R=\Theta(L)$) as in~\cite{rw, algamal}\footnote{Note that when the transmission range $R$ scales slower than the flight length $L$ (i.e., $R=o(L)$), one can achieve the same results as those for $R=\Theta(L)$ based on~\cite[Lemma 7]{alfano}. The single-hop scenario is known to be appropriate for $R=o(L)$, where higher throughput can be achieved compared to the case using multihop relaying protocols~\cite[Section IV]{alfano}.}. Likewise, we adopt single-hop-based content delivery that does not employ any relaying strategy. Thus, a requesting node can successfully retrieve its desired content only if a potential source node is within the transmission range $R= \Theta(L)$ in a given time slot. Otherwise, it keeps moving until it finds an available source node for the request. The content delivery protocol and reception strategies essentially follow the same lines as those in Section~\ref{section:3}. We state the following lemma, introduced in~\cite{alfano}, in terms of our notation for completeness.
\begin{lemma}[\!\protect{\cite[Lemma 7]{alfano}}]\label{le:tavg} Consider two arbitrary nodes that are uniformly distributed over a region of unit area at time $t=0$ and assume that each node moves independently according to the random walk model with average flight length $L$. Then, the average first hitting time $T_{avg}$ required such that the distance between the nodes is less than or equal to $R=\Theta(L)$ is given by
\begin{align} \label{eq:Tavg}
T_{avg} = O\left(\frac{\log n}{R^2} \right) \textrm{\upshape and }\Omega\left(\frac{1}{R^2}\right).
\end{align}
\end{lemma}
Note that both upper and lower bounds on the average first hitting time $T_{avg}$ are of the same order within a factor of $\log n$. From Lemma \ref{le:tavg}, we establish the following lemma, which formulates the average content transfer delay $D_{avg}(n)$ for both the uncoded sequential reception in Section~\ref{section:321} and the MDS-coded random reception in Section~\ref{section:322} under the random walk mobility model.
\begin{lemma}\label{le:otdRW}
Consider a content-centric mobile network in which each node moves according to the random walk mobility model with average flight length $L$ and retrieves its requests according to the content delivery protocol in Section~\ref{section:31}. The average content transfer delay $D_{avg}(n)$ for the uncoded caching case employing the sequential reception strategy in Section~\ref{section:321} is given by
\begin{align} \label{eq:davg_SRW}
D_{avg}(n) &\hspace{0.13cm}= O\left(\sum\limits_{m=1}^{M} \frac{Kp^{pop}_m}{\min\left(1, \frac{a(n)X_{m}}{\log n}\right)} \right) \nonumber
\\ & \textrm{\upshape and } \Omega\left(\sum\limits_{m=1}^{M} \frac{Kp^{pop}_m}{\min\left(1, a(n)X_{m}\right)}\right)
\end{align}
\noindent and $D_{avg}(n)$ for the MDS-coded caching case employing the random reception strategy in Section~\ref{section:322} is given by
\begin{align} \label{eq:davg_CRW}
D_{avg}(n) &\hspace{0.13cm}= O\left(\sum\limits_{m=1}^M \sum\limits_{j=0}^{K-1} \frac{p^{pop}_m}{\min\left( 1, \frac{(r_m-j)a(n)}{\log n}\right)} \right) \nonumber \\ & \textrm{\upshape and } \Omega\left(\sum\limits_{m=1}^M \sum\limits_{j=0}^{K-1} \frac{p^{pop}_m}{\min\left( 1, (r_m-j)a(n)\right)}\right).
\end{align}
\end{lemma}
\begin{proof} We first consider the uncoded caching case employing the sequential reception strategy. Let $p^{seq}_{m,i}$ be the contact probability that a node requesting the $i$th subpacket of content $m \in \mathcal{M}$ falls within distance $R$ of a node holding the requested subpacket in a given time slot. Then, by employing the cache allocation strategy $\left\{X_{m}\right\}_{m=1}^M$ and using Lemma \ref{le:tavg}, the contact probability $p^{seq}_{m,i}$ is order-equivalent to $\min\left( 1, \frac{X_{m}}{T_{avg}} \right)$. Hence, the number of time slots required to successfully receive content object $m\in \mathcal{M}$, which consists of $K$ subpackets, is given by $\Theta \left(\frac{K}{\min\left( 1, \frac{X_{m}}{T_{avg}} \right)} \right)$. Thus, using~\eqref{eq:Tavg}, the average content transfer delay $D_{avg}(n)$ for the content-centric mobile network employing the uncoded sequential reception strategy and following the random walk mobility model is given by~\eqref{eq:davg_SRW}. \\
Next, we characterize the average content transfer delay $D_{avg}(n)$ for the MDS-coded caching case employing the random reception strategy. Let $p^{ran}_{m,j}$ be the contact probability that a node having pending requests for $K-j$ MDS-coded subpackets of content $m$ falls within distance $R$ of a node holding one of the requested MDS-coded subpackets in a given time slot, where the requesting node is assumed to have already received $j$ MDS-coded subpackets. Then, by employing the cache allocation strategy $\left\{r_{m}\right\}_{m=1}^M$ and from Lemma~\ref{le:tavg}, the contact probability $p^{ran}_{m,j}$ is order-equivalent to $\min\left( 1, \frac{r_m-j}{T_{avg}} \right)$. Furthermore, the expected number of time slots required to successfully receive content object $m\in \mathcal{M}$, consisting of $K$ MDS-coded subpackets, is given by $\Theta \left( \sum_{j=0}^{K-1} \frac{1}{\min\left( 1, \frac{r_m-j}{T_{avg}} \right)} \right)$. Thus, using~\eqref{eq:Tavg}, $D_{avg}(n)$ for the MDS-coded caching case employing the random reception strategy and following the random walk mobility model is given by~\eqref{eq:davg_CRW}. This completes the proof of the lemma.
\end{proof}
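As a numerical illustration of Lemma \ref{le:otdRW} (not part of the proof), the following Python sketch evaluates the upper bounds in~\eqref{eq:davg_SRW} and~\eqref{eq:davg_CRW} for a Zipf popularity distribution. The cache allocations passed in below are arbitrary toy values chosen for the example, not the optimal allocations of the theorems, and all function names are ours.

```python
def zipf(M, alpha):
    # Zipf popularity p^pop_m proportional to m^(-alpha), m = 1..M
    w = [m**(-alpha) for m in range(1, M + 1)]
    s = sum(w)
    return [x / s for x in w]

def davg_uncoded_ub(K, a, log_n, p, X):
    # Upper bound of Eq. (davg_SRW): sum_m K p_m / min(1, a X_m / log n)
    return sum(K * pm / min(1.0, a * Xm / log_n) for pm, Xm in zip(p, X))

def davg_mds_ub(K, a, log_n, p, r):
    # Upper bound of Eq. (davg_CRW):
    # sum_m p_m sum_{j=0}^{K-1} 1 / min(1, (r_m - j) a / log n)
    return sum(pm * sum(1.0 / min(1.0, (rm - j) * a / log_n)
                        for j in range(K))
               for pm, rm in zip(p, r))

p = zipf(3, 1.0)                                      # [6/11, 3/11, 2/11]
print(davg_uncoded_ub(2, 1.0, 1.0, p, [2, 1, 0.5]))   # about 26/11
print(davg_mds_ub(2, 1.0, 1.0, p, [3, 2, 2]))         # about 2
```

Once the replicas $r_m - j$ (or the allocations $X_m$) are large enough that the $\min$ clamps at one, every remaining request costs a constant number of slots per subpacket, which is why both bounds bottom out at $\Theta(K)$.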
Under the random walk mobility model, we now turn to analyzing the main results. By comparing Lemmas~\ref{le:otd} and \ref{le:otdRW}, we observe that the average content transfer delay $D_{avg}(n)$ of the random walk mobility model scales as that of the reshuffling mobility model within a factor of $\log n$. Hence, it is straightforward to achieve essentially the same optimal cache allocation strategies and order-optimal throughput--delay trade-offs for both uncoded and MDS-coded caching scenarios as those in the reshuffling mobility model within a polylogarithmic factor (refer to~\eqref{eq:oprep_s},~\eqref{eq:delay_s2},~\eqref{eq:pnt_s},~\eqref{eq:oprep_c},~\eqref{eq:delay_c2}, and~\eqref{eq:pnt_c} for comparison). This implies that as long as the single-hop-based content delivery protocol is adopted, the random walk mobility model does not fundamentally change the results attained from the reshuffling mobility model.
\section{Concluding Remarks}
This paper investigated the utility of subpacketization in a content-centric mobile ad hoc network, where each mobile node, equipped with finite-size cache space, moves according to the reshuffling mobility model and only a subpacket of a content object consisting of $K$ subpackets can be delivered during one time slot due to the fast mobility condition. The fundamental trade-offs between throughput and delay under our network model were first established by adopting single-hop-based content delivery. Order-optimal caching strategies in terms of throughput--delay trade-offs were then presented for both the sequential reception strategy for uncoded caching and the random reception strategy for MDS-coded caching. In addition, our analytical results were comprehensively validated by numerical evaluation. It was found that, for $\alpha<2$, the MDS-coded caching strategy has a significant performance gain over the uncoded caching case. More precisely, it was shown that the per-node throughput for MDS-coded caching scales $K$ times faster than that of uncoded caching when the delay is fixed to its minimum. Moreover, for the MDS-coded caching strategy, the best throughput and delay performance is achieved if $K$ scales faster than $M$. It was also shown that adopting the random walk mobility model does not essentially change our main results.
An interesting direction for further research is to characterize the optimal throughput--delay trade-off in mobile hybrid networks employing subpacketization, where both mobile nodes and static helper nodes are able to cache a subset of content objects with different capabilities. Another potential avenue for future research in this area is to analyze the optimal throughput--delay trade-off under mobility models that better reflect human mobility patterns in outdoor settings (e.g., the random waypoint and L\'evy walk mobility models).
\section{Introduction}\label{s:introduction}
Observations of the solar corona during total eclipses reveal a striking ray-like view of the middle and outer solar corona \citep[see e.g.][]{loucif_solar_1989}. Many rays present a characteristic shape: a bulge or helmet-like feature close to the Sun, narrowing to a very thin stalk further out \citep{koutchmy_coronal_1992}. These very bright narrow rays, called helmet streamers, trace out the global magnetic field configuration of the Sun. The brightness inferred from the solar corona in white-light observations is mainly the result of Thomson scattering of photospheric light on free electrons in the corona \citep[e.g.][]{inhester2016}. The bright streamers thus indicate where regions of high plasma density can be found. Typically, helmet streamers are large-scale structures found overlying active regions and filament channels, which often lie above a photospheric neutral line coinciding with the heliospheric current sheet \citep[see e.g.][]{newkirk_structure_1967, koutchmy_three_1971, zhukov_origin_2008}.
Observations of streamers have much improved since the 1970s with the launch of space-borne coronagraphs. A very large improvement came with the Large Angle Spectroscopic Coronagraph aboard the \textit{Solar and Heliospheric Observatory} \citep[LASCO aboard \textit{SOHO}; see][]{brueckner_large_1995}. A number of models for the electron density distributions were developed based on the high-resolution observations of the total brightness of the white-light corona from the LASCO coronagraphs \citep{koomen_shape_1998, wang_dynamical_2002, gibson_three-dimensional_2003, saez_3-dimensional_2005, saez_three-dimensional_2007, morgan_empirical_2007}. Typically, only one vantage point was used in combination with the solar rotation. This implies that only slowly-varying features can be correctly captured. Consequently, the focus of modeling has mostly been on the large-scale corona which has more long-lived features. However, \citet{eselevich_investigation_1999} and \citet{thernisien_electron_2006} have shown that the streamer belt clearly has many fine ray-like structures. Moreover, even large structures like helmet streamers can be disrupted by coronal mass ejections (CMEs) and thus only exist for a short time.
Most recently, we have the addition of the twin spacecraft (A and B) of the \textit{Solar Terrestrial Relations Observatory} (\textit{STEREO}) mission \citep{kaiser_stereo_2008}. There are two coronagraphs, COR1 and COR2, aboard each spacecraft. They are part of the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package \citep{howard_sun_2008}. The combination of coronagraph observations from different vantage points gives the possibility of viewing the coronal structure from different angles, since the STEREO~A (B) spacecraft moves in an orbit around the Sun ahead (behind) the Earth, respectively, while SOHO remains in the L1 point between the Earth and the Sun. The different viewing angles create ideal opportunities for three-dimensional (3D) reconstructions of coronal structures.
The 3D reconstruction of coronal features typically is a two-part problem: on the one hand, one has the geometric structure to be inferred with a correct estimation of projection effects, on the other hand, one needs to perform the inversion of the perceived brightness to electron density. The problem of reconstruction can be approached with different methods, each with their own assumptions and limitations. Tomography is a valuable inverse method to make reconstructions of the electron density in the large-scale corona and the streamer belt, though it typically needs continuous observations for multiple days \citep[see e.g.][]{vasquez_validation_2008, frazin_three-dimensional_2010, vasquez_whi_2011, aschwanden_solar_2011}. More recently, \citet{morgan2015} and \citet{morgan2019} have developed a novel tomography method to reconstruct qualitative global coronal density maps at a specific height in the inner corona. By identifying the same feature in at least two different views, triangulation methods or tie-point reconstructions can provide the location of coronal features in 3D. This technique has already been demonstrated for coronal loops \citep{aschwanden_first_2008-1} and CMEs \citep{mierla_3d_2009}. By incorporating stereoscopic images from different wavelengths, the electron density was derived by \citet{aschwanden_first_2008}. \citet{minnaert_continuous_1930} and \citet{van_de_hulst_electron_1950} laid out the foundations for the forward modeling technique to estimate the electron density radial profile in coronal streamers and coronal holes. Usually based on physical assumptions, a specific model is chosen to represent the 3D geometry of the feature. This reduces the inverse problem to the determination of the free parameters in the model. 
\citet{gibson_three-dimensional_2003}, \citet{thernisien_modeling_2006} and \citet{thernisien_forward_2009} have expanded this technique to make three-dimensional models of the solar corona, to work with non-spherically symmetric structures, and to use it for dynamical events such as CMEs.
In this paper, we investigate the 3D structure of a coronal streamer, observed by the white-light coronagraphs SOHO/LASCO and STEREO/COR2 from two vantage points located close to the quadrature. In Section~\ref{s:data}, we describe the different observations used to determine the location of the streamer and how we pre-processed the data. We present our forward modeling method of fitting a simple slab model to the streamer with a multivariate minimization technique in Section~\ref{s:fitting}. In Section~\ref{s:discussion}, we discuss and compare our results with each other and the observations, and with other electron density models. We present our conclusions in Section~\ref{s:conclusions}.
\begin{figure*}
\centering
\plottwo{no_point_filter_annotate.pdf}{c2c3_2603_annotated.pdf}
\caption{STEREO~A/COR2 and SOHO/LASCO images showing respectively the edge-on and face-on views of the streamer on April 30, 2011. The streamer is located above the south-east limb in the COR2 image and above the south limb in the LASCO image. The right panel is a composite image of the C2 and C3 coronagraph field of view (FOV) and goes out to $30\ R_{\sun}$. The FOV of COR2 is $15\ R_{\sun}$ (left panel).}
\label{fig:views}
\end{figure*}
\section{Data}\label{s:data}
\begin{figure}
\centering
\plotone{stereo_orbit.pdf}
\caption{Positions of STEREO~A and B, and Earth for 2011-04-30 16:24 UT (courtesy STEREO Science Center, \url{https://stereo-ssc.nascom.nasa.gov/cgi-bin/make_where_gif}).} \label{fig:stereo_orbit}
\end{figure}
On April 30, 2011 a clear streamer structure was observed above the south-east limb by the COR2 coronagraph aboard STEREO~A (see Figure \ref{fig:views}, left panel). At the time of observation the STEREO~A spacecraft and Earth were separated by an angle of 91.415\degr{}, as can be seen in Figure \ref{fig:stereo_orbit}. Since SOHO is in the vicinity of the Sun-Earth Lagrange L1 point, the separation angle between STEREO~A and SOHO was very close to 90\degr{} as well. Since STEREO B at this time was almost exactly opposite of STEREO~A with respect to the Sun, the view from COR2 B is redundant and not used in this paper. The choice of COR2 A was made, since COR2 A has a better signal-to-noise ratio than COR2 B. The streamer structure is therefore observed by two spacecraft carrying coronagraphs nearly in quadrature, which makes this situation very well suited for three-dimensional reconstructions.
\begin{figure*}
\centering
\plottwo{cor2_bw_extr.pdf}{c2c3bw3_11_extr.pdf}
\caption{STEREO~A/COR2 (left) and SOHO/LASCO C2+C3 (right) zoomed images showing respectively the edge-on and face-on views of the streamer on April 30, 2011. R, A1, A2, A3, and A4 are the radial and arc-shaped lines along which brightness profiles are extracted for the fitting procedure.}
\label{fig:extractions}
\end{figure*}
The view from COR2 shows a narrow streamer stalk around a latitude of -39\degr{}, located in the south-east quadrant. It presents the typical image of a coronal streamer, which we will henceforth call the \textit{edge-on} view (Figure~\ref{fig:extractions}, left panel). With the LASCO C2 and C3 coronagraphs, we have the view rotated by approximately 90\degr{}, which we will refer to as the \textit{face-on} view. In the LASCO face-on view, the streamer has a fan-like structure with radially extended regions of higher and lower brightness; see the right panels in Figures~\ref{fig:views} and~\ref{fig:extractions}. The density along the azimuthal direction is clearly not uniform.
We use both unpolarized and polarized images to reconstruct the coronal electron density. The first set of images used in this study was taken at 16:24:00 UT by COR2 using so-called ``double'' (i.e. total brightness) exposure \citep[see][]{howard_sun_2008}, at 16:23:04 UT by LASCO C2 and at 16:17:04 UT by LASCO C3. These are total brightness images. We also use a set of polarized brightness (\textit{pB}) images, namely the \textit{pB} image sequence from COR2 at 16:08:15 UT, 16:08:45 UT and 16:09:15 UT, and the \textit{pB} image sequence for LASCO C2 at 14:54:08 UT, 14:57:58 UT and 15:01:48 UT.
\subsection{Background removal}\label{s:calibrating}
As a first step, the data needs to be correctly pre-processed and calibrated to separate the K corona from the F corona and straylight. We need calibrated data in units of mean solar brightness (MSB), since our model will be fitted to intensity values, which correspond in turn to the line-of-sight integration of electron density values.
We use the total brightness data, prepared using the standard procedures \verb|reduce_level_1.pro| and \verb|secchi_prep.pro| in SolarSoft\footnote{\url{http://www.lmsal.com/solarsoft/}} and the appropriate subroutines for the \textit{pB} images. The pre-processing procedures most importantly subtract the offset bias, correct for exposure time, and calibrate the data to obtain physical units (MSB). For the total brightness images, we also need to remove the background that contains the straylight and the F-corona. To remove the background from the total brightness images, we use a monthly minimum image. We follow techniques implemented earlier for SOHO/LASCO and STEREO/COR1 \citep{morrill_calibration_2006, thompson_background_2010}. For each day in a 28-day period around our chosen date, we select at random 40 images from the available image sets. For the 28 sets of 40 images, the images are processed, as described above. However, they are not yet calibrated to physical units. Then we find the median brightness in each pixel over the 40 images. This gives us 28 daily median images. Next, we take the minimum brightness in each pixel from the 28 median images. Finally, this monthly minimum background image is subtracted and the resulting image is calibrated to units of MSB. The background image will also contain the fraction of the K corona that remains constant during the full rotation. Subtracting this background from our total brightness images gives us a reasonable approximation of the dynamic K corona intensity, but will inevitably underestimate the total K corona brightness, and thus the electron density derived from our method.
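The median-then-minimum construction described above can be sketched in a few lines of Python/NumPy. The function names below are ours and the sketch assumes co-aligned image stacks; the actual processing uses the SolarSoft routines mentioned in the text.

```python
import numpy as np

def monthly_min_background(daily_stacks):
    """Background from 28 daily stacks of ~40 co-aligned images each
    (already bias-subtracted and exposure-corrected, not yet in MSB).

    Step 1: per day, take the pixel-wise median over the ~40 images.
    Step 2: take the pixel-wise minimum over the 28 daily medians.
    """
    daily_medians = [np.median(stack, axis=0) for stack in daily_stacks]
    return np.min(np.stack(daily_medians), axis=0)

def k_corona_msb(total_brightness, background, calib):
    # Subtract the background, then calibrate to mean solar brightness (MSB).
    return (total_brightness - background) * calib
```

Because the background image also contains the rotation-invariant part of the K corona, the result of this subtraction is a lower bound on the true K-corona brightness, as noted above.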
Electron density inversion methods have also been widely used with polarized brightness (\textit{pB}) images, since the F corona is unpolarized below approximately 5~$R_{\sun}$ \citep{quemerais_two-dimensional_2002}. The K corona can thus be directly inferred from \textit{pB} images in the regions below 5~$R_{\sun}$. Above this height the polarization of the F corona cannot be neglected \citep{koutchmy_f-corona_1985, mann_solar_1992}, so using a \textit{pB}-based method overestimates the electron density in this region. We present the results using a full set of \textit{pB} images in Section~\ref{s:pbparameters} below. In Section~\ref{s:comparisontwomodels}, we compare the results of the two input methods.
An important argument in favor of using the total brightness data is that the cadence of total brightness images is often significantly higher than for \textit{pB} images for the white-light coronagraphs used here. For example, LASCO C2 takes only a few polarized image sequences per day, while the cadence of unpolarized images is typically 12--20 minutes. If our model would be used as a direct density input to study more dynamical events, we need the highest cadence available to capture as many features as possible. Examples of transient events that could be studied are streamer blow-out CMEs \citep[see e.g.][]{vourlidas_streamer-blowout_2018} and streamer waves. Streamer waves are propagating waves along a coronal streamer, typically caused by a CME hitting and displacing the streamer \citep{chen_streamer_2010, feng_streamer_2011}.
\subsection{Position of the streamer}\label{s:position}
\begin{figure*}
\centering
\plotone{synoptic_map.pdf}
\caption{Synoptic map for CR 2109 made at 5.0~$R_{\sun}$ above the east limb from COR2 A data images (from \url{https://secchi.nrl.navy.mil/synomaps/}). Black and white vertical lines are due to missing or bad data. The COR2 image in Figure~\ref{fig:views} corresponds to the Carrington longitude of 118\degr{} at the east limb.}\label{fig:synopticmap}
\end{figure*}
In order to determine the position and geometry of the streamer in 3D space, we use the common method of comparison of coronagraphic synoptic maps with the extrapolated coronal magnetic field \citep[e.g.][]{wang_evolution_2000, saez_3-dimensional_2005, saez_three-dimensional_2007, zhukov_origin_2008}. In the COR2 A synoptic map, shown in Figure~\ref{fig:synopticmap} for Carrington rotation 2109 (CR 2109), the streamer structure spans about 40\degr{} in longitude at a latitude of around -40\degr{}, between Carrington longitudes 150\degr{} and 110\degr{}. The characteristic curvature at the edges of the streamer track towards the pole is due to the projection effect appearing during the rotation of the Sun \citep{wang_evolution_2000}. To get a more accurate latitude of the narrow streamer, we extract a circular profile from the edge-on view at 5.0 $R_{\sun}$ (shown as curve A1 in Figure~\ref{fig:extractions}) and locate the peak of brightness in this profile. This gives us the location of the central streamer axis at -39\degr{} latitude at the east limb, or a position angle (P.A.) of 129\degr{}, where the solar north coincides with a P.A. of 0\degr{}. The radial line R on Figure~\ref{fig:extractions} shows the streamer axis according to this method.
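The peak-finding step used to locate the streamer axis can be sketched as follows. This is a simplified nearest-pixel version; the image orientation convention (solar north up, east to the left) and all names are our assumptions for the example.

```python
import numpy as np

def arc_profile(image, cx, cy, r_pix, pa_deg):
    """Brightness along an arc of radius r_pix (pixels) around (cx, cy).
    Position angle is measured from solar north counterclockwise through
    east; with north up and east to the left, PA = 90 deg points left."""
    pa = np.deg2rad(np.asarray(pa_deg))
    cols = np.round(cx - r_pix * np.sin(pa)).astype(int)
    rows = np.round(cy - r_pix * np.cos(pa)).astype(int)
    return image[rows, cols]

def streamer_axis(image, cx, cy, r_pix, pa_lo=90.0, pa_hi=180.0, n=181):
    """Locate the brightness peak along the arc in the south-east quadrant;
    return its position angle and the east-limb latitude (lat = 90 deg - PA)."""
    pas = np.linspace(pa_lo, pa_hi, n)
    prof = arc_profile(image, cx, cy, r_pix, pas)
    pa_peak = pas[int(np.argmax(prof))]
    return pa_peak, 90.0 - pa_peak
```

Applied to the COR2 arc at 5~$R_{\sun}$ (curve A1 in Figure~\ref{fig:extractions}), this procedure yields the values quoted in the text: P.A. $\approx 129\degr$, i.e. a latitude of $-39\degr$ at the east limb.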
\begin{figure*}
\centering
\plotone{wso_map.pdf}
\caption{WSO source surface field map from the PFSS radial model at $2.5\ R_{\sun}$ for CR 2109 (from April 12 to May 9, 2011). The part of the neutral line in the southern hemisphere at the latitude around 50\degr{} and between longitudes 150\degr{} and 110\degr{} corresponds to the studied streamer. (from \url{http://wso.stanford.edu/synsourcel.html})} \label{fig:WSOmap}
\end{figure*}
Figure~\ref{fig:WSOmap} shows the potential field source surface (PFSS) map of the extrapolated coronal magnetic field at 2.5~$R_{\sun}$ from the Wilcox Solar Observatory (WSO) photospheric magnetogram. In the southern hemisphere, between around 110\degr{} and 150\degr{} longitude, the neutral line remains parallel to the equator at a constant latitude of about 50\degr{}. Since STEREO and SOHO are separated by about 90\degr{}, we can infer from the source surface map that the narrow streamer seen in the COR2 view corresponds to this flat part of the source surface neutral line seen edge-on. This indicates that the streamer is centered around a current sheet in the corona, and thus is a true helmet streamer as opposed to a pseudo-streamer \citep{2007ApJ...658.1340W}. The latitude of the part of the neutral line parallel to the equator (around 50\degr{}) does not completely agree with the latitude we get from the COR2 synoptic map and the brightness peak in the circular profile around 40\degr{}. This difference of only 10\degr{} is probably due to the inaccuracy of the PFSS model in the determination of the location of the neutral line \citep[see e.g.][]{zhukov_origin_2008}. We will come back to this issue later on in this paper.
We thus approximate the 3D configuration of our streamer by a slab-like structure, having an angular extent of 40\degr{}, essentially in Carrington longitude, in the face-on view centered around the central axis in the southern hemisphere, at a position angle of 129\degr{} in the edge-on view.
\section{Fitting the observations with the model}\label{s:fitting}
Using observations from several vantage points, a realistic reconstruction of the 3D configuration can be made for coronal streamers. We develop a forward model based on plausible assumptions about the 3D streamer structure derived in Section~\ref{s:position}. Our model is based upon the model presented by \citet{thernisien_electron_2006}. It has already been determined that a slab model is a reasonable approximation of the 3D structure of a streamer \citep{guhathakurta_large-scale_1996,vibert_streamer_1997,thernisien_electron_2006}. The major improvement in this work is that we for the first time use truly simultaneous views from different vantage points. Previous works had to use the solar rotation to obtain the face-on and edge-on view, or only compare simulated and observed synoptic maps.
The narrow streamer will be modeled as a plasma slab, located at the position we derived above. A great advantage of this kind of model is that we can work with an orthogonal set of parameters. That is, we can separate the radial, azimuthal, and latitudinal model parameters and fit them to the observations independently. This makes the electron density derivation much more straightforward and computationally less expensive than e.g. tomographic inversion methods \citep[e.g.][]{vasquez_validation_2008}.
\subsection{The slab model}\label{s:model}
The slab model considered for our streamer is built up from the multiplication of three functions. They describe respectively the radial, transverse and face-on profiles for the electron density $n_{\mathrm{e}}$. It can be described by the following equation \citep{thernisien_electron_2006}:
\begin{equation}
n_{\mathrm{e}}(r, \alpha, \theta) = n_{\mathrm{e, radial}}(r)n_{\mathrm{e, shape}}(r, \theta)n_{\mathrm{e, face}}(\alpha),
\end{equation}
where $r$ is the radial distance (in solar radii), $\alpha$ is the azimuthal angle in the plane of the slab, and $\theta$ is the latitudinal angle between the plane of the slab and the line connecting a point in the corona to the Sun center, see Figure~4 in \citet{thernisien_electron_2006}. The three $n_{\mathrm{e}}$ functions (radial, shape, and face) from \citet{thernisien_electron_2006}, in the order they appear in the fitting process, are repeated here for convenience.
The shape function $n_{\mathrm{e, shape}}$ is specified by
\begin{align}\label{eq:shape}
&n_{\mathrm{e, shape}} (r, \theta) = \nonumber \\
&\begin{cases}
\exp\left( -\dfrac{\theta^2}{\theta_1(r)}\right) , &\left\vert\theta\right\vert < \dfrac{\theta_1(r)}{2\theta_2(r)},\\
\exp\left( -\left\vert\dfrac{\theta}{\theta_2(r)}\right\vert + \dfrac{\theta_1(r)}{4 \theta_2(r)^2} \right) , &\left\vert\theta\right\vert \geq \dfrac{\theta_1(r)}{2\theta_2(r)}.
\end{cases}
\end{align}
This function represents how the thickness of the slab varies with the radial distance, i.e., how the shape of the narrow streamer, seen edge-on, changes. The 10 parameters to be determined are identified within the $\theta_1$ and $\theta_2$ functions, which are given by
\begin{subequations}
\begin{align}
\theta_1(r) &= \sum_{i=0}^4 b_i r^{-i},\\
\theta_2(r) &= \sum_{i=0}^4 c_i r^{-i},
\end{align}
\end{subequations}
where we have to find $b_i$ and $c_i$ in the optimization.
The radial dependence of the electron density is given by the $n_{\mathrm{e, radial}}$ function. It is given by the following equation:
\begin{equation}
n_{\mathrm{e, radial}}(r) = \sum_{i=1}^4 a_i r^{-i},
\end{equation}
and has four coefficients that need to be fitted to the observations, in contrast to the five coefficients for the polynomials that we have to find for both the $\theta_1$ and $\theta_2$ function.
Finally, the face-on modulation is expressed by simply inserting the normalized circular profile at 3 $R_{\sun}$, taken from the face-on brightness along the angular extent of the slab. This is different from the modulation implemented in \citet{thernisien_electron_2006}, where a linearization between two brightness profiles along the face-on view was used. The $n_{\mathrm{e,face}}$ function will enhance or reduce the electron density in the plane of the slab (along the current sheet), in order to reproduce the structure with a number of radially extended rays that we observe in the face-on view (see Figure~\ref{fig:views}, right panel). With this approach, we consider all the ray-like structures that are observed to be in the plasma slab and assume that they are radial features.
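For reference, the slab model defined above can be transcribed directly into code. The sketch below follows the equations term by term; the face-on factor is passed in as a callable (e.g. an interpolation of the normalized circular profile at 3~$R_{\sun}$), and all function names are ours.

```python
import numpy as np

def theta_poly(coeffs, r):
    # theta_1(r) or theta_2(r) = sum_{i=0}^{4} coeff_i * r^{-i}
    return sum(ci * r**(-i) for i, ci in enumerate(coeffs))

def ne_shape(r, theta, b, c):
    # Piecewise Gaussian/exponential latitudinal profile (Eq. 2); the two
    # branches join continuously at |theta| = theta_1 / (2 theta_2).
    t1, t2 = theta_poly(b, r), theta_poly(c, r)
    if abs(theta) < t1 / (2.0 * t2):
        return np.exp(-theta**2 / t1)
    return np.exp(-abs(theta) / t2 + t1 / (4.0 * t2**2))

def ne_radial(r, a):
    # n_e,radial(r) = sum_{i=1}^{4} a_i r^{-i}
    return sum(ai * r**(-i) for i, ai in enumerate(a, start=1))

def ne_slab(r, alpha, theta, a, b, c, face):
    # Total electron density: product of the three factors.
    return ne_radial(r, a) * ne_shape(r, theta, b, c) * face(alpha)
```

Note that at the branch point $|\theta| = \theta_1/(2\theta_2)$ both expressions in the shape function reduce to $\exp\left(-\theta_1/(4\theta_2^2)\right)$, so the profile is continuous by construction.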
\subsection{Fitting the total brightness images}\label{s:parameters}
We start the fitting process of the slab model using the set of total brightness images, pre-processed as outlined in Section~\ref{s:calibrating}.
First, the shape function is fitted to arc-shaped profiles taken from the edge-on observations. We have used seventeen arc-shaped profiles taken at different heights in the COR2 field of view (FOV), ranging from 3 to 10~$R_{\sun}$ with intervals of half a solar radius and at 11 and 12~$R_{\sun}$. Curve A1 shown in Figure~\ref{fig:extractions} is an example of such an arc-shaped profile at 5~$R_{\sun}$. The bright structure at the right of the streamer that can be seen in the left panel of Figure~\ref{fig:views} around the latitude of 50\degr{} must be excluded from the data in the fitting process, since it does not contribute to the streamer profile. The profiles are normalized so that they can be fitted directly to the $n_{\mathrm{e, shape}}$ function. By doing the fit directly with the electron density function, we neglect any projection effects in this viewing direction. This approach is reasonable since the streamer slab is oriented orthogonal to the plane-of-the-sky. The coefficients $b_i$ and $c_i$, corresponding to the functions $\theta_1$ and $\theta_2$, are determined with an MPFIT minimization technique on a $\chi^2$ criterion \citep{markwardt_non-linear_2009}. The profiles at all heights are fitted simultaneously. An example of the fit at 5~$R_{\sun}$ can be seen in Figure \ref{fig:5Rsun}. The black line shows the normalized profile extracted from the COR2 A observations along the line A1 in Figure~\ref{fig:extractions}. The red line represents the $n_{\mathrm{e,shape}}$ function calculated on the base of the optimized parameters. The shape function can be directly compared with normalized observations since it is dimensionless, as can be seen in Equation~\ref{eq:shape}. The optimized shape function matches the observations reasonably well.
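The simultaneous fit over all heights amounts to minimizing one stacked residual vector; a minimal Python sketch of such a residual function (we used MPFIT in practice; \texttt{scipy.optimize.least\_squares} is an analogous minimizer, and all names below are ours):

```python
import numpy as np

def shape_residuals(params, heights, theta_grids, profiles):
    """Stacked residuals of the shape function against the normalized
    edge-on profiles at all heights simultaneously; params holds the
    five b_i followed by the five c_i coefficients."""
    b, c = params[:5], params[5:]
    res = []
    for r, theta, prof in zip(heights, theta_grids, profiles):
        t1 = sum(bi * r ** (-i) for i, bi in enumerate(b))
        t2 = sum(ci * r ** (-i) for i, ci in enumerate(c))
        cut = t1 / (2.0 * t2)
        model = np.where(
            np.abs(theta) < cut,
            np.exp(-theta ** 2 / t1),
            np.exp(-np.abs(theta / t2) + t1 / (4.0 * t2 ** 2)),
        )
        res.append(model - prof)
    return np.concatenate(res)

# e.g. scipy.optimize.least_squares(shape_residuals, x0,
#                                   args=(heights, theta_grids, profiles))
```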
\begin{figure}
\centering
\epsscale{1.1}
\plotone{r5_1803_2.pdf}
\caption{Normalized profiles of the brightness of the streamer viewed edge-on at 5 $R_{\sun}$ in the STEREO~A/COR2 data (along the line A1 in Figure~\ref{fig:extractions}) and in the model. The black line represents the observed brightness. The red line is the $n_{\mathrm{e,shape}}$ function, calculated using the optimized parameters. In this and other figures, the position angle is measured with respect to the position angle of 129\degr{}, which was determined to be the position of the central axis of the streamer slab in Section~\ref{s:position}.}
\label{fig:5Rsun}
\end{figure}
In Figure~\ref{fig:7Rsun} we show the shape function and observed coronal brightness profile at 7~$R_{\sun}$ along the line A4 in Figure~\ref{fig:extractions}. In the definition of the shape function, it is assumed that the axis of the streamer in the edge-on view is radial, due to the symmetry in the $\theta$ coordinate. We see however in Figure~\ref{fig:7Rsun} that the center of the streamer at 7~$R_{\sun}$ is not in the same location as the one that we derived through fitting the peak of brightness at 5~$R_{\sun}$. The shape of the streamer reasonably agrees with the observations if we shift the $n_{\mathrm{e, shape}}$ function by -1.34\degr{} to match the center with the peak in brightness of the observations (Figure~\ref{fig:7Rsun}, right panel). To compensate for the offset of the streamer axis, we determine the peak of brightness at each height chosen for the fitting of the shape function. Then we take the average of the peak location as the fixed location of the streamer axis, around which we assume the streamer to be symmetrical. The new P.A. of the streamer axis for the fitting procedure is 128.17\degr{}, as can be seen by the slight offset of the peak of $n_{\mathrm{e, shape}}$ to the center of the plot in Figure~\ref{fig:5Rsun}.
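The re-centering procedure reduces to averaging the brightness-peak locations over the heights used in the fit; a minimal sketch (the grids and profile arrays are hypothetical):

```python
import numpy as np

def streamer_axis_pa(pa_grid, profiles):
    """Fixed symmetry axis of the model: the mean position angle of
    the brightness peak over all heights used in the shape fit."""
    peaks = [pa_grid[np.argmax(prof)] for prof in profiles]
    return float(np.mean(peaks))
```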
Since the brightness peak at larger heights is located closer to the equator, the streamer appears to be bent towards the equator. This indicates that we are dealing with a non-radial streamer, which our model unfortunately cannot reproduce. Non-radiality can arise when the magnetic field in the outer corona contains, in addition to the dipole, one or more higher order harmonic components of comparable strength \citep{wang_nonradial_1996}. The current sheet bends in latitude because the different multipoles, each with its own particular neutral-line topology, decrease with the radial distance at different rates. This could also explain the difference between the streamer latitude at which we locate the peak of brightness in an arc-shaped profile (see Section~\ref{s:position}) and the location of the neutral line on the source surface field map. PFSS extrapolation, where the field lines are required to be purely radial beyond the source surface, cannot reproduce such non-radial streamers.
\begin{figure*}
\centering
\plottwo{r7_1803_2.pdf}{r7_shifted_1803_2.pdf}
\caption{Normalized profiles of the brightness of the streamer viewed edge-on at 7 $R_{\sun}$ in the STEREO~A/COR2 data (along the line A4 in Figure~\ref{fig:extractions}) and in the model. The black line represents the observed brightness. The red line is the $n_{\mathrm{e,shape}}$ function, calculated using the optimized parameters. On the left, $n_{\mathrm{e,shape}}$ is centered at the streamer axis as determined at 5 $R_{\sun}$ (see section \ref{s:position}), on the right the center of $n_{\mathrm{e,shape}}$ is shifted by -1.34\degr{} in the P.A. to match the peak in brightness at 7 $R_{\sun}$.}
\label{fig:7Rsun}
\end{figure*}
Next, we continue with the independent fit in the radial direction by using one radial profile along the streamer in the edge-on view at P.A. 129\degr{} (line R in Figure~\ref{fig:extractions}) and one arc-shaped profile along the face-on view at 3~$R_{\sun}$ (curve A2 in Figure~\ref{fig:extractions}) to determine the parameters $a_i$ of the $n_{\mathrm{e, radial}}$ function. Since the radial brightness profile spans a few orders of magnitude in the COR2 field of view, we take its logarithm before implementing it in a $\chi^2$ criterion on which we perform a minimization. We use the SCRaytrace raytracing software available in SolarSoft \citep{2004AGUFMSH21B0404T} to create a synthetic view of our density model by integrating along the line-of-sight, corresponding to the coronagraph used in the observation. The electron density inversion used in this software is based upon the method presented in \citet{hayes_deriving_2001}: the Thomson-scattered brightness is related to the electron density, and this relation is inverted along radial profiles to obtain the forward models. We then obtain the reconstructed profiles by extracting the same profiles that we chose in the observations from these forward modeled views. The following criterion is then minimized:
\begin{equation}
\boldsymbol{\hat{a}} = \operatorname*{arg\,min}_{\boldsymbol{a}} \sum \left[ \log (B) - \log (\tilde{B}(\boldsymbol{a})) \right]^{2} ,
\end{equation}
where $B$ is the radial brightness profile of the edge-on image, and $\tilde{B}$ is the reconstructed radial brightness profile of the forward modeled view; $\boldsymbol{a}$ refers to the vector of the coefficients $a_i$.
Again, the minimization is done with the multivariate MPFIT algorithm. The result of the fit of the radial profile can be seen in Figure~\ref{fig:radial} (solid lines). The radial profile of the modeled brightness is in good agreement with the COR2 A observations.
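Because the brightness spans several orders of magnitude, the criterion compares logarithms; as a one-line sketch (names are ours):

```python
import numpy as np

def log_brightness_cost(B_obs, B_model):
    """Sum of squared residuals between the logarithms of the observed
    and forward-modeled radial brightness profiles."""
    return float(np.sum((np.log(B_obs) - np.log(B_model)) ** 2))
```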
\begin{figure}
\centering
\epsscale{1.1}
\plotone{bkg3_radial.pdf}
\caption{Radial profiles of the K corona brightness along the streamer viewed edge-on by COR2 aboard STEREO~A, derived from the total brightness, background subtracted image. The position angle of the profile is 129\degr{} in the STEREO~A/COR2 image. The black and red lines correspond to the data and the model fitted with total brightness images, respectively.}
\label{fig:radial}
\end{figure}
\begin{figure}
\centering
\epsscale{1.1}
\plotone{bkg3_azimuthal.pdf}
\caption{Face-on (azimuthal) profiles of the brightness of the streamer at 3 (solid lines) and 11~$R_{\sun}$ (dashed lines) in the face-on view (curves A2 and A3 in Figure~\ref{fig:extractions}). The black and red brightness profiles are related to the total brightness, background subtracted LASCO observations and the forward model calculated using the optimized $n_{\mathrm{e}}$ to the total brightness images, respectively.}
\label{fig:azimuthal}
\end{figure}
We also compare the azimuthal profiles from the LASCO observations (arcs A2 and A3 in Figure~\ref{fig:extractions}) to reconstructed azimuthal brightness profiles from corresponding forward modeled views of the model. The observed brightness enhancements and depletions near the center of the streamer slab are quite nicely reproduced by our model at the height of our chosen profile, as can be seen in Figure~\ref{fig:azimuthal}. At the edges, the features seem to become smeared out to comply with the condition that the density must go to zero outside of the streamer slab. In this respect, our model behaves similarly to the model by \citet{thernisien_electron_2006}, see their Figure~10. Even though the absolute values of intensity are only incorporated in the model through the radial profile of the COR2 view, we get a very good match for the absolute values of intensity in the LASCO C2 and C3 field of view, as can be seen in the profiles extracted from the model at 3 and 11~$R_{\sun}$. At 11~$R_{\sun}$, not all the variations in the profile correspond to variations that can be seen in the input profile at 3~$R_{\sun}$, so we do not capture all the variations at heights other than the one of our chosen profile. This indicates that not all the ray-like structures in the LASCO view are perfectly radial.
\begin{deluxetable}{c c c c c c }[b]
\tablecaption{Summary of the fitting parameters for the set of total brightness images \label{tbl:overview}}
\tablehead{\colhead{} & \multicolumn{5}{c}{Subscript} \\
\colhead{Parameter} & \colhead{0} & \colhead{1} & \colhead{2} & \colhead{3} & \colhead{4} }
\startdata
$a_i$ & \ldots & 1.736e5 & -2.999e4 & 5.762e6 & 9.699e7 \\
$b_i$ & -87.79 & 1105. & -4785. & 8035. & -4967. \\
$c_i$ & 3.927 & -3.623 & 136.4 & -1064. & 2409.
\enddata
\end{deluxetable}
A summary of the values for the different parameters used in our optimization process can be found in Table~\ref{tbl:overview}.
\subsection{Fitting the polarized brightness images}\label{s:pbparameters}
We now repeat the process described in the previous section, but for the set of \textit{pB} images from LASCO~C2 and COR2. Again, we first fit the shape function to seventeen normalized arc-shaped profiles taken at different heights in the edge-on view. Then, we continue with the fit in the radial direction using one radial profile along the streamer in the edge-on view at P.A. 129\degr{} (line R in Figure~\ref{fig:extractions}) and one arc-shaped profile along the face-on view at 3~$R_{\sun}$ (curve A2 in Figure~\ref{fig:extractions}). For the minimization however, this time we calculate the \textit{pB} forward modeled profiles, and fit these to the observed \textit{pB} profiles. A summary of the values for the different parameters resulting from this minimization can be found in Table~\ref{tbl:overviewpb}.
To compare this model to the observations, we calculated the forward modeled views of \edit1{\added{both}} the \textit{pB} \edit1{\added{and the total brightness from the model with the \textit{pB} images as input,}} and extracted the corresponding profiles. Figure~\ref{fig:radial_pb} shows the observed and forward modeled \textit{pB} profiles along the radial line R (as seen in Figure~\ref{fig:extractions}), as the cyan and blue line respectively. \edit1{\added{The red line corresponds to the total brightness profile obtained with the model from \textit{pB} images, and the black line is again the observed profile from the total brightness, background subtracted image.}} We find that our \edit1{\added{\textit{pB} profile from the}} model fits the \edit1{\added{observed \textit{pB}}} radial profiles very well. \edit1{\added{We provide the corresponding four curves for the azimuthal view (along arc A2 in Figure~\ref{fig:extractions}) in Figure~\ref{fig:azimuthal_pb}, that is the forward modeled total brightness and \textit{pB} profile from the model with \textit{pB} images as input, the observed profile from the \textit{pB} image, and the observed profile from the total brightness, background subtracted image.}} For the azimuthal profile at 3~$R_{\sun}$, Figure~\ref{fig:azimuthal_pb} shows that there is a significant difference in contrast between the small-scale structures in the \edit1{\added{observed}} total and polarized brightness profiles. For the total brightness\edit1{\added{, background subtracted}} profile, we see clear enhancements and depletions, but they are much less pronounced in the profile from the \textit{pB} observations. In general, the signal-to-noise ratio in the \textit{pB} images is lower than that in the total brightness images, which could explain why these variations are less pronounced in the \textit{pB} image. 
In addition, in both Figure~\ref{fig:radial_pb} and Figure~\ref{fig:azimuthal_pb} the observed \textit{pB} curve is slightly above the total brightness curve, which means that probably too much of the K corona was removed during the background subtraction (see Section~\ref{s:calibrating}). From Figure~\ref{fig:azimuthal_pb}, it is also clear that the fit of the forward modeled \textit{pB} profile is significantly lower than the observed \textit{pB} profile in LASCO C2, although the modeled total brightness curve roughly matches the background subtracted data. \edit1{\deleted{We will discuss this discrepancy in the next section.}}
\begin{deluxetable}{c c c c c c }[b]
\tablecaption{Summary of the fitting parameters for the set of \textit{pB} images \label{tbl:overviewpb}}
\tablehead{\colhead{} & \multicolumn{5}{c}{Subscript} \\
\colhead{Parameter} & \colhead{0} & \colhead{1} & \colhead{2} & \colhead{3} & \colhead{4} }
\startdata
$a_i$ & \ldots & 6.866e5 & -1.047e7 & 7.557e7 & -7.696e7 \\
$b_i$ & -73.61 & 841.8 & -2710. & 1956. & 1827. \\
$c_i$ & -1.120 & 128.9 & -1105. & 3473. & -3302.
\enddata
\end{deluxetable}
\begin{figure}
\centering
\epsscale{1.1}
\plotone{pol_radial.pdf}
\caption{Radial profiles of the K corona brightness along the streamer viewed edge-on by COR2 aboard STEREO~A. Data is derived from the total brightness, background subtracted image (black) and the \textit{pB} image (cyan). The red and blue lines correspond respectively to the total and polarized brightness calculated with the model, fitted with \textit{pB} images. The position angle of the profile is 129\degr{} in the STEREO~A/COR2 image.}
\label{fig:radial_pb}
\end{figure}
\begin{figure}
\centering
\epsscale{1.1}
\plotone{pol_azimuthal.pdf}
\caption{Face-on (azimuthal) profiles of the brightness of the streamer at 3~$R_{\sun}$ in the face-on view (curve A2 in Figure~\ref{fig:extractions}). Data is derived from the total brightness, background subtracted image (black) and the \textit{pB} image (cyan) taken by SOHO/LASCO~C2. The red and blue lines correspond respectively to the total and polarized brightness calculated with the model, fitted with \textit{pB} images.}
\label{fig:azimuthal_pb}
\end{figure}
\begin{figure}
\centering
\epsscale{1.1}
\plotone{azimuthal_gmodel.pdf}
\caption{Azimuthal profiles of the streamer brightness at 3~$R_{\sun}$ in the face-on view (curve A2 in Figure~\ref{fig:extractions}). Data is derived from the SOHO/LASCO~C2 total brightness, background subtracted image (black) and the \textit{pB} image (cyan). These curves are the same as in Figure~\ref{fig:azimuthal_pb}. The red and blue lines correspond respectively to the total and polarized brightness calculated with the model, to which a polar density from \citet{guhathakurta_large-scale_1996} is added and total brightness images are fitted.}
\label{fig:azimuthal_g}
\end{figure}
\edit1{\added{This discrepancy could be partly explained by the high sensitivity of LASCO C2 \textit{pB} observations to small changes in the calibration factors. \cite{morgan2015} argued that the calibration factors should be modified slightly in comparison to the standard SolarSoft calibration factors that we used in the present study. This could lead to a noticeable decrease of the observed \textit{pB}, especially above the poles. The calibration factor modification would lead to a decrease of the observed \textit{pB} in Figure~\ref{fig:azimuthal_pb} since the arc A2 for which we extract the azimuthal profiles is situated above the south pole in the field of view of LASCO C2.}}
\edit1{\deleted{Since the results of our two models are consistent with each other and with previous results, we believe that the origin of a poor correspondence of the observed and modeled \textit{pB} profiles in Figure~\ref{fig:azimuthal_pb} must be searched elsewhere. }\replaced{A}{Another} plausible explanation \added{for this discrepancy} is that there is a static K corona component, comparable to the intensity values in the LASCO field of view, which contributes to the \textit{pB} images in the LASCO field of view. If this additional component is located around the south pole (i.e. close to the plane of the sky in LASCO images), then it would provide a significant contribution to the LASCO C2 image. However, this component would not be significant in the COR2 field of view, since it would be located out of the plane of the sky at the lines of sight passing through the streamer, and the intensity of the streamer is much larger there. This static component is a real part of the K corona, but it gets subtracted with our background removal procedure. To test this hypothesis, we added an additional global density to our model from the total brightness images, using the model by \citet{guhathakurta_large-scale_1996}. In their equation (10), we replaced the current sheet density term $N_{\rm cs}(r)$ with our streamer model and implemented the polar density term $N_{\rm p}(r)$ as listed in their Table 3. The calculations using this combined streamer+pole density model show that the results for the COR2 field of view have almost not changed and the resulting $pB$ curve is very close to the blue curve in Figure~\ref{fig:radial_pb}. However, in the LASCO field of view, the calculated \textit{pB} has increased significantly and now matches the observed values much more closely, as can be seen in Figure~\ref{fig:azimuthal_g}. We can also see that this added density in the polar region smoothens the variations present in the azimuthal profile. 
This static density component could thus also explain the lower contrast of the ray-like structures in the \textit{pB} observations. }
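Schematically, the combined model simply adds the polar term of \citet{guhathakurta_large-scale_1996} to our fitted streamer density in place of their current-sheet term $N_{\rm cs}$; a sketch with placeholder callables (we do not reproduce their Table~3 coefficients here):

```python
def combined_density(r, theta, phi, streamer_ne, polar_ne):
    """Global density: the fitted streamer slab model in place of the
    current-sheet term N_cs, plus the polar term N_p(r)."""
    return streamer_ne(r, theta, phi) + polar_ne(r)
```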
\edit1{
This combined model is used here only for illustration as we aim only at constructing a local density model for the streamer, and not creating a global coronal electron density model to fit the full observed image. For such a global model, one would need to fit also the static component of the K corona. \edit1{\replaced{However, it is impossible to separate correctly the K corona}{Separating the}} static components \edit1{\added{of the K corona}} and the ray-like structures in the streamer \edit1{\added{is difficult}} with the currently available data \edit1{\added{and would require a more advanced model, which goes beyond the scope of the present work. }}}
\section{Discussion}\label{s:discussion}
\subsection{Comparison of fitting with total and polarized brightness images}\label{s:comparisontwomodels}
With the profiles shown in Figure~\ref{fig:radial_pb} and Figure~\ref{fig:azimuthal_pb}, we can first compare the total brightness observations with the polarized brightness observations. Below 5~$R_{\sun}$ in the COR2 field of view, we see that the observed radial \textit{pB} profile (cyan line in Figure~\ref{fig:radial_pb}) falls slightly above the observed total brightness profile. As noted before, this is probably due to subtracting too much of the K corona. Above this height, we know that the F corona becomes polarized, and therefore starts contributing to the \textit{pB}. Since we still subtract the static F corona in the total brightness image through the background subtraction, the total brightness profile falls significantly below the \textit{pB} profile. From the total brightness and \textit{pB} profiles generated by the model (red and blue curves in Figure~\ref{fig:radial_pb}), we can see that in this particular streamer configuration, the total brightness from only the K corona is expected to lie very close to the \textit{pB}, and thus all modeled and observed profiles fall very close to each other below 5~$R_{\sun}$. The profiles at 3~$R_{\sun}$ in the LASCO C2 face-on view tell a different story, as shown in Figure~\ref{fig:azimuthal_pb}. The observed profile from the total brightness image with the background subtracted (black) is much lower than the observed profile from the \textit{pB} image (cyan). This is contrary to what one would expect, since polarized brightness should always be lower than the total brightness, as can be seen from the forward modeled total and polarized brightness profiles (red and blue curves in Figure~\ref{fig:azimuthal_pb}). We also notice that the polarization degree in the LASCO view is lower than that in the COR2 view. This is explained by the orientation of our streamer.
\edit1{\added{In the field of view of COR2, the streamer density is strongly concentrated close to the plane of the sky which causes the Thomson-scattered emission to be more polarized.}} In the LASCO view \edit1{\added{however}}, the density is mostly concentrated out of the plane of the sky, and thus is less polarized. \edit1{\added{As the distance from the plane of the sky increases, the polarized brightness decreases faster than the total brightness \citep{deforest2013}, which explains the difference in the polarization degree in COR2 and LASCO views of the modeled streamer. }}
\begin{figure*}
\centering
\epsscale{0.7}
\plotone{electron_density_full_R3.pdf}
\caption{Coronal electron density profiles along the radial direction derived using our streamer model from total brightness and \textit{pB} images (blue and magenta, respectively) compared to the values obtained by \citet{thernisien_electron_2006} (orange). The solid (dashed) lines correspond to the maximum (minimum) density values in the plasma slab. The shaded zone presents the range of the density profiles in the bright ray-like structures visible in the face-on view.}\label{fig:electron_density}
\end{figure*}
\begin{figure}
\centering
\plotone{scatterplot_final.pdf}
\caption{Scatterplot of the electron density resulting from two models: fitting to total brightness images versus fitting to \textit{pB} images. Points are colored according to their distance from the solar surface. A line illustrating the same values in both models is shown in blue for reference.}\label{fig:scatterplot}
\end{figure}
Next, we compare the densities derived from both fittings, to see how well the two models correspond to each other. Figure~\ref{fig:electron_density} shows the comparison between the radial profiles of our electron density derived from the total brightness (blue) and \textit{pB} (magenta) images, and those obtained by \citet{thernisien_electron_2006} (orange). The solid and dashed lines show the maximum and minimum electron density profiles of each modeled streamer, respectively. The electron density profiles range between the maximum and minimum values, as highlighted by the shaded zones. For the model derived from the total brightness images, this corresponds to a maximum density contrast of the bright ray-like structures of about a factor of 3 with respect to the background streamer density. Since the brightness variation in the face-on profile in the \textit{pB} images is lower, the maximum density contrast for the model from the \textit{pB} images is also lower, at about 1.5. Below 5~$R_{\sun}$, there is a very good correspondence between the two density models. Above 5~$R_{\sun}$, we see that the model from \textit{pB} images gives higher densities than the one from the total brightness images, which is to be expected from the observations. In Figure~\ref{fig:scatterplot}, we show a scatterplot of the two density cubes, where each point is colored according to its distance from the solar surface. This plot shows a very high correlation between the density cubes derived from the two models, which is also indicated by the correlation coefficient of 0.96. We also see that the correspondence between the two density cubes improves closer to the solar surface. Farther out from the Sun, the density values in the cube derived from the \textit{pB} images are too high in comparison with those in the cube derived from the total brightness images.
This can again be attributed to the F corona becoming polarized higher up in the corona, and thus adding to the density model from the \textit{pB} images. We can conclude that our two models are very consistent with each other regarding the resulting values for the electron density.
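The quoted correlation is the Pearson coefficient computed voxel by voxel between the two flattened density cubes; a minimal sketch:

```python
import numpy as np

def cube_correlation(cube_a, cube_b):
    """Pearson correlation coefficient between two electron-density
    cubes, compared voxel by voxel."""
    return float(np.corrcoef(np.ravel(cube_a), np.ravel(cube_b))[0, 1])
```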
Figure~\ref{fig:electron_density} shows that our results are also consistent with the density values found previously by \citet{thernisien_electron_2006}. In their paper, a comparison was also made between their model and previous density models \citep{saito_study_1977, leblanc_tracing_1998, quemerais_two-dimensional_2002}, demonstrating a good agreement with them. \edit1{\added{Since the results of our two models are consistent with each other and with previous results, we believe that the origin of the poor correspondence of the observed and modeled \textit{pB} profiles in Figure~\ref{fig:azimuthal_pb} must be sought elsewhere, as explained in Section~\ref{s:pbparameters}.}}
\subsection{3D rendering of the density and forward modeled views}
Figure \ref{fig:3D} presents the slab in an isometric view of the 3D density cube obtained from the fitting to the total brightness images. This figure gives a better understanding of how the slab looks in three dimensions. The edge-on view is in fact a superposition of all the different ray-like features that form the slab structure. The rays exhibit a rate of radial brightness decay similar to that seen in the edge-on view.
Figure \ref{fig:3D} also clearly illustrates our assumption, namely that all the rays visible in the face-on view are situated inside the streamer slab. In reality, some of the radial features seen in the face-on view may be polar plumes, or the quasi-radial density enhancement that we can see slightly southward of the streamer in the edge-on view (Figure~\ref{fig:views}, left panel), or other structures in the solar corona.
\begin{figure}
\centering
\epsscale{1.1}
\plotone{3dfinal_crop.pdf}
\caption{Three-dimensional rendering of the density cube resulting from our parameter fits to the streamer model. The density cube ranges from -15 to 15~$R_{\sun}$ in all three directions, which corresponds to the FOV of COR2. In each direction we have 512 pixels, which is half of the resolution that was used for the COR2 observations that serve as input to the model (a lower resolution was chosen to make the rendering reasonably faster). The coloring gives a representation of the scaling of the density in a logarithmic scale, but was capped to enhance visibility of different ray-like features in the plane of the slab. The orb in the center surrounds the Sun and has a radius of 2.5~$R_{\sun}$.}\label{fig:3D}
\end{figure}
In Figure~\ref{fig:comparison} we show the forward modeled views of the streamer from which we obtained the reconstructed profiles for the fitting to the total brightness images. They present the density cube as it would be seen by a coronagraph from two vantage points in quadrature. The central bright feature in the edge-on view resembles the observed streamer rather well. The main difference between the model and the observations is the slight non-radiality of the streamer. The face-on views are harder to compare, due to the low signal-to-noise ratio of the observations. Nevertheless, some common features can clearly be distinguished. The bright structure to the south of our streamer in the edge-on view probably corresponds to the bright structure just outside of our slab on the right in the face-on view. The bright structures to the right of our slab in the LASCO view are too bright to lie in the plane of the slab, and must be closer to the plane of the sky. From the COR2 synoptic map (Figure~\ref{fig:synopticmap}), we can also see that there is a second streamer structure crossing the plane of the sky before our streamer (situated around Carrington longitudes 150-180\degr{}), which probably corresponds to this feature. This streamer is significantly out of the plane of the sky on the day of our observations (April 30, 2011; see Figure~\ref{fig:views}).
\begin{figure*}
\centering
\plotone{compare_fm.pdf}
\caption{Forward modeled views of the obtained 3D density cube as they would appear observed by COR2 A (top) and LASCO C2 (bottom) coronagraphs (left), compared to the corresponding observations (right).}
\label{fig:comparison}
\end{figure*}
In Section~\ref{s:position}, we located the streamer slab following the neutral line in the PFSS extrapolation map. The high variability in our density model, with a number of ray-like structures along the slab, indicates that the plasma sheet is not a smooth layer centered at the current sheet. This may have implications for the magnetic field structure inside the streamer, which needs to be explained theoretically. \citet{wang2} proposed that the streamer plasma sheet is filled with coronal material due to interchange reconnection with closed magnetic loops at the streamer cusp. If this reconnection is not uniform in space and time, different parts of the plasma sheet may be filled with plasma at different times, producing a pattern of ray-like structures similar to that reported in our work. The ray-like structure would then be intrinsically dynamic. The interchange reconnection at the streamer cusp was modeled by \citet{higginson}, but considering only the magnetic (not density) structure.
It is difficult to confirm the occurrence of this process observationally with the current instrumentation. There is a significant gap between the fields of view of externally occulted coronagraphs (like COR2) and EUV imagers (like the Extreme UltraViolet Imager aboard STEREO) that is covered only by internally occulted coronagraphs (like STEREO/COR1), which are prone to high straylight and do not allow observations of fine coronal structures at sufficient resolution. We checked the data taken by the MLSO/Mk4 coronagraph \citep[that has the field of view connecting those of LASCO C2 and disk EUV imagers, see e.g.][]{mk4} but could not identify any structures corresponding to the rays we observed in the C2 field of view. Future missions like the ASPIICS coronagraph aboard PROBA-3 \citep{proba32, proba33, proba3} will fill this observational gap and have the potential to improve our knowledge of how these ray-like structures relate to the typical cusp structure that helmet streamers have in the low corona.
\section{Conclusions}\label{s:conclusions}
We have analyzed the 3D structure of a coronal streamer using a forward model based on plausible assumptions about the large-scale 3D geometry. The streamer is represented by a dense plasma slab with a radial density decrease and azimuthal fine structure. We fitted this model to both total and polarized brightness data of a streamer, observed by STEREO~A/COR2 and SOHO/LASCO C2 and C3 coronagraphs, while the STEREO and SOHO spacecraft were in quadrature. Our model can reproduce the observations reasonably well, as shown by the forward model calculations of the streamer images in the two different views and the fits to the observed brightness profiles. The two sets of fittings (one to the total brightness background-subtracted images, and one to the \textit{pB} images) are consistent with each other, and we also obtained a good agreement with earlier streamer density models. The assumption that coronal streamers are purely radial features is not fully correct, which could be improved by expanding the model to allow for non-radial streamers.
The model sheds light on the fine three-dimensional structure of the electron density distribution in coronal streamers. The ray-like structures in the slab model are necessary to reproduce the observations in the face-on view. We found variations up to a factor of 3 between the radial profiles of the electron density of brighter and darker structures in the streamer.
Using total brightness images from which we subtracted a minimum background is not an ideal method for separating the K and F corona. Nevertheless, we have shown that below approximately 5~$R_{\sun}$ the model obtained from the total brightness images corresponds very well to the model obtained from the \textit{pB} images, which demonstrates that minimum-background subtraction is a serviceable way to obtain density values inside coronal streamers. The derived densities are within a factor of 3 of the densities obtained from the model using \textit{pB} data as input. Above 5~$R_{\sun}$, however, polarized brightness data are no longer suitable due to the polarization of the F corona. A better understanding of the F corona in this region would certainly improve the K and F corona separation, and refine methods for the inversion of total and polarized brightness images into electron density.
The density model developed here can be used as a background density model for modeling of, e.g., streamer wave events. Nevertheless, the method should be further expanded to cases where the available coronagraphs are not in quadrature, so that it can be applied to specific events found during the epoch of STEREO observations. \edit1{\added{Further development of the model could go towards combining it with more global coronal models (e.g. those derived from tomography). At the end of Section~\ref{s:pbparameters}, we gave an illustration of how the model better describes the observations when it is implemented in a simple global coronal density model. This could be explored further by optimizing the static density component to fit the observations, and by using more advanced models for the static density component. This way, our model could locally provide finer details in a global coronal density model. An advanced combined model could then shed light on the role of the fine structure of streamers in the global corona, and how it relates to the slow solar wind formation.}}
\acknowledgments
The authors thank the anonymous referee for the valuable comments which led to significant improvements of this paper. ANZ thanks the European Space Agency (ESA) and the Belgian Federal Science Policy Office (BELSPO) for their support in the framework of the PRODEX Programme. TVD was supported by the GOA-2015-014 (KU~Leuven) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 724326). We thank A. Thernisien, M. Mierla and R. Colaninno for useful discussions. The SECCHI data used here were produced by an international consortium of the Naval Research Laboratory (USA), Lockheed Martin Solar and Astrophysics Lab (USA), NASA Goddard Space Flight Center (USA), Rutherford Appleton Laboratory (UK), University of Birmingham (UK), Max-Planck-Institut for Solar System Research (Germany), Centre Spatiale de Li\`ege (Belgium), Institut d'Optique Th\'eorique et Appliqu\'ee (France), Institut d'Astrophysique Spatiale (France). The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut for Sonnensystemforschung (Germany), Laboratoire d'Astrophysique de Marseille (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. Wilcox Solar Observatory data used in this study were obtained via the web site \url{http://wso.stanford.edu} at 2018/11/16 07:41:31 PST, courtesy of J.T. Hoeksema. The Wilcox Solar Observatory is currently supported by NASA.
\bibliographystyle{aasjournal}
\section{Introduction}
\label{intro}
Magnetic fields play an important role in supernova remnants
(F\"urst \& Reich\ \cite{FR04}), in the interstellar medium of
galaxies (Beck et al.\ \cite{B96}; Beck \cite{B04}), in the intergalactic
medium of clusters (Carilli \& Taylor\ \cite{CT02}), and in the jets and
lobes of active galaxies (Rees\ \cite{R85}; Blandford\ \cite{BL01}). The
relative importance among the other competing forces can be estimated by
comparing the corresponding energy densities or pressures.
For most extragalactic objects
measurements of the magnetic field strength are based on radio
synchrotron emission and Faraday rotation measures (RM) of the
polarized radio emission (see Heiles\ \cite{H76} and Verschuur\
\cite{V79} for reviews of the observational methods).
Total synchrotron emission traces the total field in the sky plane,
polarized synchrotron emission the regular field component.
The regular field can be coherent (i.e. preserving its direction within the
region of observation) or incoherent (i.e. frequently reversing its
direction).
Determination of the field strength from the synchrotron intensity needs
information about the number density of the cosmic ray electrons, e.g.
via their $\gamma$-ray bremsstrahlung emission or X-ray emission by
inverse Compton scattering. If such data are lacking, an assumption
about the relation between cosmic ray electrons and magnetic fields
has to be made. The most commonly used approaches are:
\begin{itemize}
\item Minimum total energy density ($\epsilon_\mathrm{tot} =
\epsilon_\mathrm{CR} + \epsilon_\mathrm{B}= \min$)
\item Equipartition between the total energy densities of cosmic rays
and that of the magnetic field ($\epsilon_\mathrm{B} = \epsilon_\mathrm{CR}$)
\item Pressure equilibrium ($\epsilon_\mathrm{B} = {1\over3} \epsilon_\mathrm{CR}$)
\end{itemize}
The minimum-energy and equipartition estimates give very similar results and are
often used synonymously. The idea is that cosmic ray particles and magnetic fields
are strongly coupled and exchange energy until equilibrium is reached. Deviations
from equilibrium occur if, for example, the injection of particles or the generation
of magnetic fields is restricted to a few sources or to a limited period.
The mean distance between sources and their mean lifetime define the smallest
scales in space and time over which equipartition can be assumed to hold.
The minimum-energy assumption was first proposed by Burbidge (\cite{B56})
and applied to the optical synchrotron emission of the jet in M87. Since
then the validity of this method has been discussed in the literature.
Duric (\cite{D90}) argued that any major deviation from equipartition
would be in conflict with radio observations of spiral galaxies.
The azimuthally averaged equipartition strength of the field in the Milky Way
and its radial variation (Berkhuijsen, in Beck\ \cite{B01}) agree well with
the model of Strong et al. (\cite{S00}, their Fig.~6) based on radio continuum and
$\gamma$-ray surveys and cosmic ray data near the sun.
On the other hand, Chi \& Wolfendale (\cite{CW93}) claimed significant
deviations from equipartition conditions in the Magellanic Clouds.
Pohl (\cite{P93b}) replied that the standard proton-to-electron
ratio used in the equipartition formula may be smaller in the
Magellanic Clouds compared to the Milky Way.
Equipartition estimates of field strengths were determined in many spiral galaxies
(Beck\ \cite{B00}). The mean strength of the total field ranges from a
few $\mu$G in galaxies with a low star-formation rate to $\simeq30~\mu$G
in grand-design galaxies like M~51. The mean total field strength of a
sample of Shapley-Ames galaxies is $9~\mu$G (Niklas\ \cite{N97}, see
Sect.~\ref{spiral}). The relation between the total field strength and the star
formation rate is deduced from the radio-infrared correlation
(Helou \& Bicay\ \cite{HB93}; Niklas \& Beck\ \cite{NB97}).
The total field is strongest in the spiral
arms, but the strength of the large-scale regular field in most galaxies
is highest in interarm regions (Beck\ \cite{B00}, \cite{B01}).
In the radio lobes of strong FRII radio galaxies the equipartition estimates
of the field strength are 20--100~$\mu$G, in hot spots 100--600~$\mu$G,
and 0.5--1~$\mu$G in the intracluster medium of galaxy clusters and in
relics (see Sect.~\ref{clusters}). In these estimates
relativistic protons were assumed to be absent (``absolute minimum energy'')
or to contribute to the total energy density similarly to the electrons.
The ratio of radio synchrotron to inverse Compton X-ray intensities can be
used as another estimate of the field strength (e.g. Carilli \& Taylor\
\cite{CT02}). In most radio lobes the two estimates are similar,
but there are also significant deviations where ``out of equipartition''
conditions have been claimed (see Sect.~\ref{inverse}).
\emph{Faraday rotation measures} RM are sensitive to the coherent regular
field component along the line of sight and to the density of thermal electrons
which can be derived from thermal emission or pulsar dispersion measures DM.
The ratio $RM/DM$ is a widely used measure of coherent field strengths in
the Milky Way. The derived values are lower than the equipartition
estimates from polarized intensity (Beck et al.\ \cite{B+03}).
In galaxy clusters, observations of fluctuations in RM are
used to estimate total field strengths. In the Coma cluster such data
indicate fields one order of magnitude stronger than the equipartition value
(Giovannini et al.\ \cite{G+93}; Feretti et al.\ \cite{F+95}; Fusco-Femiano
et al.\ \cite{F+99}; Govoni \& Feretti\ \cite{GF04}). Strong total fields have
also been claimed from observations of rotation measures in many other clusters
(Carilli \& Taylor\ \cite{CT02}).
All methods to estimate field strengths are subject to bias. Firstly,
a positive correlation between field strength and cosmic ray electron density
on small scales, which are unresolved by the telescope beam or occur
along the line of sight, leads to an \emph{overestimate} of the equipartition
strengths of the total and regular field components (Beck et al.\ \cite{B+03}).
Furthermore, synchrotron intensity is biased towards regions of strong
fields, so that the equipartition estimates are higher than the average
field strength (Sect.~\ref{inverse}).
Field strengths based on Faraday rotation measures may also
be \emph{overestimated} if small-scale fluctuations in magnetic field and
in thermal gas density are positively correlated. Finally, Newman et al.
(\cite{N+02}) and Murgia et al. (\cite{M+04}) pointed out that field
estimates based on Faraday rotation measures are likely to be too high
by a factor of a few if there is a spectrum of turbulence scales.
In this paper we show that there are inherent problems with the classical
equipartition / minimum-energy formula, especially when computing its
parameters from observable quantities.
We present a revised formula which is based on integration of the
cosmic ray energy spectrum rather than the radio frequency spectrum
and discuss the limitations of its application.
\section{The classical minimum-energy formula}
\label{textbook}
The classical textbook minimum-energy formula is based on a value for the
total cosmic ray energy density which is determined by integrating the radio
spectrum from $\nu_\mathrm{\min}$ to $\nu_\mathrm{\max}$,
usually from 10~MHz to 10~GHz, which is the typical range
accessible to observations.
Particles and magnetic fields are assumed to fill the source
homogeneously, and the field is assumed to be completely
tangled (no preferred direction). Then the field minimum-energy
strength $B_{\min}$ is quoted as (e.g. Pacholcyk\ \cite{P70};
Longair\ \cite{L94}, p.~292; Rohlfs \& Wilson\ \cite{RW96}):
\begin{equation}
B_\mathrm{class} = \big(\, 6\, \pi\, G\, ({\cal K}+1)\, L_{\nu}\, / V\, \big)^{2/7}
\label{book}
\end{equation}
where $G$ is a product of several functions varying with
$\nu_\mathrm{\min}$, $\nu_\mathrm{\max}$
and synchrotron spectral index $\alpha$ (see Eq.~(\ref{bmin3}) for details).
$\cal K$ is the ratio of the total energy of cosmic ray
nuclei to that of the synchrotron emitting electrons + positrons. $L_{\nu}$ is the
synchrotron luminosity at frequency $\nu$, $V$ is the source's volume, and
$L_{\nu}/V$ is the synchrotron emissivity.
The resulting magnetic energy density $\epsilon_\mathrm{B}$ is 3/4 of
the cosmic ray energy density $\epsilon_\mathrm{CR}$, so that $B_\mathrm{class}$
is 8\% smaller than the equipartition field strength $B_\mathrm{eq,class}$:
\begin{displaymath}
B_\mathrm{class} / B_\mathrm{eq,class} = (3/4)^{2/7}
\end{displaymath}
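As a quick numerical check (an illustrative sketch added here, not part of the original derivation), the fixed exponent $2/7$ of the classical formula gives:

```python
# Classical ratio B_class / B_eq,class = (3/4)**(2/7), quoted as ~8% below B_eq.
ratio = (3.0 / 4.0) ** (2.0 / 7.0)
print(f"B_class / B_eq,class = {ratio:.4f}  (~{(1 - ratio) * 100:.0f}% below)")
```

This reproduces the 8\% difference quoted above.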
The problems with the classical formula are the following:
\medskip
(1) The radio emissivity $L_{\nu}/V$ is the average over the source's volume.
Since radio intensity, in the case of equipartition between magnetic fields
and cosmic rays, varies with about the fourth power of the total field
strength, it emerges mainly from the regions with the highest field strengths,
so that $B_{\min}$ is larger than the volume-averaged field strength if the
local field strength $B$ is inhomogeneous within the source.
This problem can be reduced by replacing $L_{\nu}/V$ by $I_{\nu}/l$,
where $I_{\nu}$ is the \emph{local} synchrotron intensity (surface brightness)
and $l$ is the pathlength through the emitting medium (see Eq.~(\ref{bmin3})).
\bigskip
(2) A fixed integration interval $\nu_\mathrm{\min}$ to $\nu_\mathrm{\max}$ is used
in Eq.~(\ref{book}). The critical frequency of synchrotron emission of
an electron with energy $E$ in a magnetic field of strength $B$ with a
component $B_{\perp}$ perpendicular to the velocity vector is
(Lang\ \cite{L99}, p.~29):
\begin{equation}
\nu_\mathrm{crit} \, = \, c_1 \, E^2 \, B_{\perp} \, = \, 16.08~\mbox{MHz} \, (E/\mbox{GeV})^2 \,
(B_{\perp}/\mu\mbox{G})
\label{freq}
\end{equation}
where $c_1=3 e / ( 4 \pi {m_\mathrm{e}}^3 c^5 )$. $e$ is the elementary charge,
$m_\mathrm{e}$ the electron mass, and $c$ the velocity of light.
Note that the synchrotron emission of a single electron is maximum at the
frequency $\nu_\mathrm{max}=0.29\nu_\mathrm{crit}$ (Longair\ \cite{L94}, p.~246), but
for a continuous energy spectrum of electrons the maximum contribution at a given
frequency is emitted by electrons which are a factor of almost two lower in
energy (Webber et al.\ \cite{W80}). As a result, Eq.~(\ref{freq}) can be used
for the \emph{maximum} synchrotron emission from a spectrum of electrons around $E$.
The standard integration interval of 10~MHz--10~GHz corresponds to an interval
[$E_1$ -- $E_2$] in the electron energy spectrum of 800~MeV--25~GeV in a
$1\,\mu$G field, 250~MeV--8~GeV in a $10\,\mu$G field, or to 80~MeV--2.5~GeV in
a $100\,\mu$G field.
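These energy intervals follow directly from inverting Eq.~(\ref{freq}); the following short sketch (illustrative only, using the constant $16.08$~MHz from Eq.~(\ref{freq})) reproduces them:

```python
import math

def electron_energy_gev(nu_mhz, b_perp_ug):
    """Invert Eq. (freq): nu_crit = 16.08 MHz (E/GeV)^2 (B_perp/uG),
    giving the electron energy emitting maximally near frequency nu."""
    return math.sqrt(nu_mhz / (16.08 * b_perp_ug))

# The standard 10 MHz -- 10 GHz integration window in 1, 10 and 100 uG fields:
for b in (1.0, 10.0, 100.0):
    lo = electron_energy_gev(10.0, b)    # 10 MHz
    hi = electron_energy_gev(1.0e4, b)   # 10 GHz
    print(f"B = {b:5.0f} uG: {lo * 1e3:5.0f} MeV -- {hi:5.1f} GeV")
```

The printed intervals match the values quoted above (e.g. roughly 800~MeV--25~GeV for $1~\mu$G).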
The integrated cosmic ray energy $\epsilon_\mathrm{CR}$ is proportional to
[$E_1^{1-2\alpha} - E_2^{1-2\alpha}$] where $\alpha$ is the synchrotron
spectral index (see Eq.~(\ref{energy1})), and $E_1$ and $E_2$ are fixed
integration limits.
Replacing $E_1$ and $E_2$ by $\nu_\mathrm{\min}$ and $\nu_\mathrm{\max}$
in the classical minimum-energy formula
via Eq.~(\ref{freq}) introduces an additional term depending
on the magnetic field strength. As a consequence, the total energy
$\epsilon_\mathrm{tot}$ depends on a constant power of $B$ (see Longair\ \cite{L94},
p.~292) and the wrong derivative $d \epsilon_\mathrm{tot} / d B$
leads to the constant exponent 2/7 in Eq.~(\ref{book}).
The classical minimum-energy formula is {\bf formally incorrect}.
\bigskip
(3) $\cal K$ in Eq.~(\ref{book}) is the ratio of the \emph{total}
energy density of cosmic ray nuclei to that of the electrons (+positrons).
Knowledge of $\cal K$ would require measurements of the spectra of the main
cosmic ray species over the whole energy range, especially at low particle
energies which, for steep spectra, contribute mostly to the total energy.
Instead, the total energy of cosmic ray electrons is approximated in the classical
formula by integrating the radio spectrum between fixed frequency limits,
and the energy of the cosmic ray nuclei is assumed to scale with $\cal K$.
This classical procedure is subject to major uncertainties. Firstly,
the observable radio spectrum only traces the spectrum of cosmic ray electrons
over a small energy range (Eq.~(\ref{freq})). Secondly, the ratio $\cal K$ of
total energies may differ from that in the observable energy range.
What would be needed in the classical formula is the energy ratio $\cal K'$
in the limited energy range traced by the observed synchrotron emission. In case of
energy losses (Sect.~\ref{ratio}) $\cal K'$ may deviate strongly from $\cal K$.
As the other input numbers of the classical formula are generally known with
sufficient accuracy, the uncertainty in $\cal K$ is the reason why the
equipartition / minimum energy field strengths are regarded as crude estimates
and the field estimates are often given with a scaling of $({\cal K}+1)^{2/7}$.
\bigskip
We propose to use, instead of the energy ratio $\cal K$, the
\emph{ratio of number densities} {\bf K} of cosmic ray protons and electrons per
particle energy interval within the energy range traced by the observed
synchrotron emission. Measurements of the local Galactic cosmic rays near the sun
(see Appendix) yield ${\mathrm{\bf K_0}} \simeq100$ at a few GeV, which is the
energy range relevant for radio synchrotron emission. This value is consistent
with the predictions from Fermi shock acceleration and hadronic interaction models
(see Table~\ref{table1}). At lower and higher energies, however, {\bf K} may vary
with particle energy (see Sect.~\ref{ratio}).
The observed energy spectrum of cosmic rays is the result of balance
between injection, propagation and energy losses.
Interactions with matter and radiation are different for protons and electrons,
so that the shape of their energy spectra are generally different
(Pohl\ \cite{P93a}; Lieu et al.\ \cite{LI99}; Schlickeiser\ \cite{S02}).
At low energies (typically below a few 100~MeV) the dominating loss of
cosmic ray protons and electrons is ionization of the neutral gas and/or
Coulomb interaction with the ionized gas.
At energies beyond 1~GeV the dominating loss of protons (and other nucleons) are
inelastic collisions with atoms and molecules, producing pions and secondary
electrons. The spectral index of the stationary energy spectrum is not changed.
The dominating loss mechanism for electrons is nonthermal bremsstrahlung
producing low-energy $\gamma$-rays (Schlickeiser\ \cite{S02}, p.~100).
At even higher energies (beyond a few GeV) the electrons suffer from synchrotron
and inverse Compton losses (see Sect.~\ref{ratio}). Furthermore, the spectra of all
cosmic ray species in galaxies may be steepened if particle diffusion (escape)
is energy-dependent. As a result of energy loss processes, the cosmic ray electron
spectrum is not proportional to the proton spectrum, so that {\bf K}
varies with energy.
Only in a relativistic electron/positron plasma as discussed for jets and
lobes of radio galaxies, where cosmic ray protons are absent, ${\cal K}=0$ and
${\mathrm{\bf K}}=0$ are valid at all energies (see Sects.~\ref{positrons}
and \ref{clusters}).
\bigskip
In this paper we present an easily applicable formula with two input
parameters from observations, synchrotron intensity and spectral index.
We discuss the energy/frequency range where the revised formula can be
applied because a reliable and constant value for the proton-electron number
density ratio ${\mathrm{\bf K_0}}$ can be adopted.
As a result, a more accurate estimate
of the equipartition field strength is possible than in the classical approach.
\bigskip
Pfrommer \& En{\ss}lin (\cite{PE04b}) give two formulae, for the classical
minimum-energy criterion and for the hadronic interaction model applied
to two galaxy clusters, taking into account (2) and (3) discussed above.
Their formula (6.2) for the classical case includes luminosity,
cluster volume, the proton-to-electron energy density ratio and the
lower cut-off energy of the electron spectrum.
\section{The revised formula}
The equipartition and minimum-energy procedures need the total energy of
cosmic rays, to be derived from the observed
radio synchrotron spectrum. An accurate treatment needs to account
for all energy loss processes which affect the energy spectrum of
nucleons and electrons, as discussed in detail by Pohl (\cite{P93a}).
This method, however, requires additional knowledge of the distributions of
gas density and radiation field and is applicable only to a few
well-studied objects.
In this paper we derive a revised formula for the simple case that the number
density ratio ${\mathrm{\bf K_0}}$ of protons and electrons is \emph{constant}
which is valid in a limited range of particle energies. We further
assume that the cosmic rays are accelerated by electromagnetic processes
which generate power laws in momentum, and that the
same total numbers of protons and electrons are accelerated.
The total energy is dominated by the protons. As the proton energy spectrum
flattens below the fixed proton rest energy $E_\mathrm{p}$, the total
cosmic ray energy can be computed easily.
The details are presented in the Appendix.
\subsection{Revised equipartition formula for the total field}
\label{revisedeq}
From the total cosmic ray energy density $\epsilon_\mathrm{CR}$ as a function
of field strength $B$ (Eq.~(\ref{energy7}), see Appendix) and assuming
$\epsilon_\mathrm{CR} = \epsilon_\mathrm{B} = B_\mathrm{eq}^2 /8\pi$ we get:
\begin{eqnarray}
B_\mathrm{eq} & = & \left\{ \, 4\pi (2\alpha+1)\, ({\mathrm{\bf K_0}}+1)\, I_{\nu}\,\,
E_\mathrm{p}^{1-2\alpha}\,\, (\nu/2 c_1)^{\alpha} \right. \nonumber \\
& &\left. \big/ \,\, \big[ (2\alpha-1)\, c_2(\alpha)\, l\, c_4(i) \, \big]
\right\}^{1/(\alpha+3)}
\label{beq}
\end{eqnarray}
$I_{\nu}$ is the synchrotron intensity at frequency $\nu$ and $\alpha$ the
synchrotron spectral index. ${\mathrm{\bf K_0}}$ is the constant ratio of
the number densities
of protons and electrons in the energy range where their spectra are proportional
(see Sect.~\ref{ratio}). $E_\mathrm{p}$ is the proton rest energy.
$c_1$, $c_2$ and $c_4$ are constants. $c_2$ and $c_4$ depend on $\alpha$ and
the magnetic field's inclination, respectively (see the Appendix for
details).
$I_{\nu}$ and $\alpha$ can be determined from observations, while the
proton-to-electron ratio ${\mathrm{\bf K_0}}$ and pathlength $l$
have to be assumed. If the synchrotron sources
have a volume filling factor $f$, $l$ has to be replaced by $l \times f$
in order to obtain $B_\mathrm{eq}$ within the source. $B_\mathrm{eq}$
depends only weakly on the source's distance via the dependence on $l$.
We have restricted the discussion in this paper to nearby sources.
For redshifts $z > 0$, correction terms are required which are given e.g.
in Govoni \& Feretti (\cite{GF04}).
Eq.~(\ref{beq}) yields field strengths which are larger by 7\%
for $\alpha=0.6$, larger by 2\% for $\alpha=0.75$, and smaller by 8\% for
$\alpha=1.0$ compared to the results obtained with the earlier version of the
revised equipartition formula (e.g. Krause et al.\ \cite{K84}; Beck\ \cite{B91};
Niklas\ \cite{N97}; Thierbach et al.\ \cite{T03}). The reason is that a
simplified version of Eq.~(\ref{energy4}) was used previously, with
integration only from $E_1=300$~MeV to $E_2 \rightarrow \infty$.
However, the differences to Eq.~(\ref{beq}) are small, smaller than the
typical errors caused by uncertainties in the input values, so that the
values published by the Bonn group previously are still valid.
In case of equilibrium between magnetic and cosmic ray \emph{pressures},
the field estimate (Eq.~(\ref{beq})) has to be reduced by the factor
$3^{-1/(\alpha+3)}$.
A formula similar to Eq.~(\ref{beq}) has been derived by Brunetti et al.
(\cite{B97}). However, Eq.~(A3) in Brunetti et al., which includes the
lower energy cutoff of the cosmic ray electron spectrum, is not applicable
in the case of dominating protons.
\subsection{Revised minimum energy formula for the total field}
\label{revisedmin}
From Eq.~(\ref{energy7}) (Appendix) and $d\epsilon_\mathrm{tot} / dB \, = \, 0$ we get:
\begin{eqnarray}
B_\mathrm{min} & = & \left\{ \, 2\pi (2\alpha+1)\, (\alpha+1)\, ({\mathrm{\bf K_0}}+1)\, I_{\nu}\,\,
E_\mathrm{p}^{1-2\alpha} \right. \nonumber \\
& & \left. \times (\nu/2 c_1)^{\alpha}\,\, \big/
\big[ (2\alpha-1)\, c_2(\alpha)\, l\, c_4(i) \big] \right\}^{1/(\alpha+3)}
\label{bmin1}
\end{eqnarray}
The ratio of minimum-energy magnetic and cosmic ray energy densities is:
\begin{displaymath}
\epsilon_\mathrm{B} / \epsilon_\mathrm{CR} = (\alpha+1)/2
\end{displaymath}
Hence, the ratio of minimum-energy (\ref{bmin1}) and equipartition (\ref{beq}) field
strengths is not constant, as in the classical case (\ref{book}), but depends
on the synchrotron spectral index $\alpha$:
\begin{displaymath}
B_\mathrm{min} / B_\mathrm{eq} = \left[ (\alpha+1)/2\right]^{1/(\alpha+3)}
\end{displaymath}
For $\alpha=1$ the revised formula gives identical results for the
equipartition and minimum energy conditions.
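The $\alpha$-dependence of this ratio is easy to tabulate (an illustrative sketch of the relation derived above):

```python
def bmin_over_beq(alpha):
    """B_min / B_eq = [(alpha + 1)/2]**(1/(alpha + 3)), from Eqs. (bmin1), (beq)."""
    return ((alpha + 1.0) / 2.0) ** (1.0 / (alpha + 3.0))

for a in (0.5, 0.6, 0.75, 1.0):
    print(f"alpha = {a:.2f}:  B_min / B_eq = {bmin_over_beq(a):.3f}")
```

The ratio rises monotonically from about 0.92 at $\alpha=0.5$ to exactly 1 at $\alpha=1$, confirming that the two conditions coincide for $\alpha=1$.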
The revised formula is not valid for $\alpha\le0.5$ ($\gamma\le2$)
because the second integral in Eqs.~(\ref{energy1}) and (\ref{energy4}) diverges
for $E_2\rightarrow\infty$. Such flat injection spectra are
observed in a few young supernova remnants and can be explained by
Fermi acceleration operating in a strongly magnetized (low-$\beta$) plasma
(Schlickeiser \& F\"urst\ \cite{SF89}).
In the diffuse medium of galaxies, radio lobes and clusters, $\alpha\le0.5$ is
observed only at low frequencies and indicates strong energy losses of the
electrons due to ionization and/or Coulomb interaction (Pohl\ \cite{P93a};
Sarazin\ \cite{S99}); in this regime the formula cannot be used.
Strong shocks in a non-relativistic $\beta\ge1$ plasma may generate injection
spectra with $\alpha=0.5$ (Schlickeiser \& F\"urst\ \cite{SF89}) where
the total cosmic ray energy is computed according to Eq.~(\ref{energy6}) and the
minimum energy formula has to be modified accordingly:
\begin{eqnarray}
B_\mathrm{min} & = & \left\{4 \pi (\alpha+1) ({\mathrm{\bf K_0}}+1) I_{\nu} E_\mathrm{p}^{1-2\alpha}
\left[{1\over2} + \mathrm{ln}(E_2/E_\mathrm{p}) \right] \right. \nonumber \\
& & \left. \times \,\, (\nu/2 c_1)^{\alpha}\,\, \big/\,\,
\big[ c_2(\alpha)\, l\, c_4(i) \big] \right\}^{1/(\alpha+3)}
\label{bmin2}
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{bmin.fig1.eps}
\caption{Ratio between the revised minimum-energy field strength
$B_\mathrm{min}$ (Eq.~(\ref{bmin1})) and the classical value $B_\mathrm{class}$
(Eq.~(\ref{bmin3}), assuming $\nu_\mathrm{min}=10$~MHz and $\nu_\mathrm{max}=$10~GHz)
as a function of the revised field strength $B_\mathrm{min}$ and
the synchrotron spectral index $\alpha$. A proton-to-electron number
density ratio of ${\mathrm{\bf K_0}}=100$ was adopted.
}
\label{fig1}
\end{figure}
Fig.~\ref{fig1} shows the ratio $q$ between the revised minimum-energy field
strength $B_\mathrm{min}$ (Eq.~(\ref{bmin2})) and the classical value
$B_\mathrm{class}$ (Eq.~(\ref{bmin3})), using ratios of ${\mathrm{\bf K_0}}=100$
and ${\cal K}=100$, respectively. For $\alpha\simeq0.5$ ($\gamma\simeq2$) the ratio
is almost constant and about 2. For larger values of $\alpha$, the ratio
becomes a function of the field strength $B_\mathrm{min}$. For weak fields
(below a few $\mu$G) and values of $\alpha$ between $\simeq0.6$ and $\simeq1$
the revised value differs insignificantly (with respect to the typical
errors of 20\% -- 30\%) from the classical one.
For flat radio spectra ($0.5< \alpha <0.6$) the classical estimate is too low
because the fixed upper limit $\nu_\mathrm{max}=10$~GHz
used for integrating the electron energy excludes the high-energy
part of the cosmic ray spectrum carrying a significant fraction of the total energy.
On the other hand, the classical estimate is too high for steep radio spectra
($\alpha > 0.7$) if the field strength is $> 10~\mu$G. Here the lower integration
limit $\nu_\mathrm{min}=10$~MHz corresponds to energies $<250$~MeV which is
below the lower break energy $E_\mathrm{p}=938$~MeV of the proton spectrum
(see Sect.~\ref{ratio}), so that the total cosmic ray energy is
overestimated.
The ratio $B_\mathrm{min}/B_\mathrm{class}$ depends weakly on
${\mathrm{\bf K_0}}$ ($q\propto {\mathrm{\bf K_0}}^{\,(1/(\alpha+3))\, - 2/7}$)
and on the frequency limits used for the classical formula.
For example, $q$ is about 10\% smaller for $\alpha=0.51$ when
increasing $\nu_\mathrm{max}$ from 10~GHz to 100~GHz.
\subsection{Relativistic electron/positron pair plasma}
\label{positrons}
Jets of radio galaxies may be composed of electrons and positrons
(Bicknell et al.\ \cite{BW01}). Shock acceleration acting on a relativistic
electron/positron pair plasma generates a power law in momentum which leads
to a break in the energy spectrum at
$E_\mathrm{e}=511.00$~keV = $8.1871\cdot 10^{-7}$~erg,
but this break is not observable in the radio range because low-frequency
radio spectra are flattened by ionization and/or Coulomb losses causing
a break at $E_\mathrm{i}$ ($E_\mathrm{i} > E_\mathrm{e}$) in the stationary
energy spectrum (see Sect.~\ref{ratio}).
The revised formulae (\ref{beq}) or (\ref{bmin1}) can be applied to an
electron/positron plasma by using ${\mathrm{\bf K_0}}=0$ and replacing
the lower energy break $E_\mathrm{p}$ by $E_\mathrm{i}$.
If, however, synchrotron or inverse Compton
loss is significant beyond $E_\mathrm{syn}$, the total cosmic ray energy
has to be computed by integration over the observed spectrum, not
according to Eq.~(\ref{energy7}).
The ratio of the revised field strength in a relativistic electron/positron
plasma $B_\mathrm{min,e}$ (${\mathrm{\bf K_0}}=0$) to that in a
proton-dominated plasma $B_\mathrm{min}$ (using Eq.~(\ref{bellratio}) and
assuming $({\mathrm{\bf K_0}}+1)\simeq {\mathrm{\bf K_0}}$) is:
\begin{eqnarray}
B_\mathrm{min,e} / B_\mathrm{min}\,\, & = &
(E_\mathrm{p}/E_\mathrm{e})^{-\alpha_0/(\alpha+3)} \nonumber \\
& & \times \,\, (E_\mathrm{p}/E_\mathrm{i})^{(2\alpha-1)/(\alpha+3)}
\label{bratio}
\end{eqnarray}
where $\alpha_0$ is the spectral index of the injection synchrotron spectrum.
$B_\mathrm{min,e}/B_\mathrm{min}$ varies only weakly with
spectral index $\alpha$ and lower energy break $E_\mathrm{i}$.
For a wide range in $\alpha_0$ and $\alpha$ (0.5--0.75) and in $E_\mathrm{i}$
(10--100~MeV), $B_\mathrm{min,e}/B_\mathrm{min}\simeq 1/3$.
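Evaluating Eq.~(\ref{bratio}) numerically confirms the quoted value of roughly $1/3$ (a sketch; the sample values of $\alpha$ and $E_\mathrm{i}$ are illustrative choices within the quoted ranges):

```python
E_P = 938.272  # proton rest energy, MeV
E_E = 0.511    # electron rest energy, MeV

def pair_to_proton_ratio(alpha0, alpha, e_i_mev):
    """B_min,e / B_min from Eq. (bratio), assuming (K_0 + 1) ~ K_0."""
    f1 = (E_P / E_E) ** (-alpha0 / (alpha + 3.0))
    f2 = (E_P / e_i_mev) ** ((2.0 * alpha - 1.0) / (alpha + 3.0))
    return f1 * f2

for alpha, e_i in ((0.5, 10.0), (0.6, 50.0), (0.75, 100.0)):
    r = pair_to_proton_ratio(alpha, alpha, e_i)
    print(f"alpha = {alpha:.2f}, E_i = {e_i:5.0f} MeV:  B_min,e/B_min = {r:.2f}")
```

All sampled combinations fall close to $1/3$, illustrating the weak dependence on $\alpha$ and $E_\mathrm{i}$.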
\subsection{Proton-to-electron ratio and energy losses}
\label{ratio}
The ratio {\bf K}{\rm (E)} of the proton-to-electron number densities per
particle energy interval
($n_\mathrm{p}/n_\mathrm{e}$) depends on the acceleration process, the propagation
and the energy losses of protons and electrons. In the range of particle
energies $E_\mathrm{p} < E < E_\mathrm{syn}$, where losses are small or affect
protons and electrons in the same way (``thin target'', see below),
${\mathrm{\bf K}} = {\mathrm{\bf K_0}}$ is constant.
For electromagnetic acceleration mechanisms which generate a power law in
momentum, ${\mathrm{\bf K_0}}$ depends only on the transition energies from
the non-relativistic to the relativistic regime for protons and electrons,
$E_\mathrm{p}$ and $E_\mathrm{e}$, and on the particle injection spectral index
$\gamma_0$ (Eq.~(\ref{number2}); see also Bell\ \cite{B78};
Schlickeiser\ \cite{S02}, p.~472):
\begin{eqnarray}
{\mathrm{\bf K_0}} & = & n_\mathrm{p,0} / n_\mathrm{e,0}\,\,
= (E_\mathrm{p}/E_\mathrm{e})^{(\gamma_0-1)/2} \nonumber \\
& = & (E_\mathrm{p}/E_\mathrm{e})^{\alpha_0}\,\,\,\,\,\,\,
(E_\mathrm{p} < E < E_\mathrm{syn}, \, \mathrm{thin \, target})
\label{bellratio}
\end{eqnarray}
where $\gamma_0$ is the spectral index of the injection cosmic ray spectrum.
$\gamma_0$ is related to the shock compression ratio $r$ via
$\gamma_0=(r+2)/(r-1)$ (see Appendix).
For $\gamma_0\simeq2.2$, as expected from acceleration in supernova remnants,
we get ${\mathrm{\bf K_0}}\simeq100$, consistent with the local Galactic
cosmic ray data near the sun at particle energies of a few GeV (see Appendix).
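As a numerical check, inserting the proton and electron rest-mass
energies $E_\mathrm{p}=938$~MeV and $E_\mathrm{e}=511$~keV into
Eq.~(\ref{bellratio}) with $\gamma_0=2.2$ (i.e. $\alpha_0=0.6$) gives:
\begin{displaymath}
{\mathrm{\bf K_0}} = (938\,\mbox{MeV}/0.511\,\mbox{MeV})^{0.6}
\simeq 1836^{0.6} \simeq 90 ,
\end{displaymath}
close to the canonical value of 100.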
\begin{figure}
\centering
\includegraphics[bb = 21 16 476 634,angle=270,width=0.45\textwidth]{bmin.fig2.eps}
\caption{Variation of the intrinsic ratio of proton-to-electron number densities
${\mathrm{\bf K_0}}$ (thick line) and of the intrinsic ratio of total energy
densities $\cal K$ (dotted line) with the shock compression ratio $r$.
Energy losses are assumed to be negligible.
}
\label{fig2}
\end{figure}
\begin{center}
\begin{table*}
\caption{\label{table1} Cosmic-ray proton/electron number density ratio
${\mathrm{\bf K_0}}$
(``thin target'') and the ratio $\cal K$ of total energy densities
for various injection mechanisms}
\begin{tabular}{@{}lll@{}}
\hline
CR origin & ${\mathrm{\bf K_0}}$ & $\cal K$ \\
\hline
\\
Fermi shock acceleration (strong shocks, non-relativistic gas) & 40 -- 100 & 40 -- 20\\
Secondary electrons & 100 -- 300 & $\gg1$\\
Turbulence (resonant scattering of Alfv\'en waves) & $\simeq100$ & $\simeq20$\\
Pair plasma & 0 & 0\\[3pt]
\hline
\end{tabular}
\end{table*}
\end{center}
Eqs.~(\ref{energy2}) and (\ref{bellratio}) are used to compute the intrinsic
ratio $\cal K$ of \emph{total} energies of protons and electrons (for negligible
energy losses):
\begin{equation}
{\cal K} = (E_\mathrm{p}/E_\mathrm{e})^{(3-\gamma_0)/2}
\label{classratio}
\end{equation}
Fig.~\ref{fig2} shows the variation of ${\mathrm{\bf K_0}}$ and $\cal K$ with the
compression ratio $r$ in the shock. Table~\ref{table1} gives values of
${\mathrm{\bf K_0}}$ and $\cal K$ for various injection processes.
For strong shocks in non-relativistic gas ($r=4$, $\gamma_0=2$)
${\cal K}={\mathrm{\bf K_0}}\simeq40$. For steeper intrinsic spectra
(weaker shocks) $\cal K$ decreases, while ${\mathrm{\bf K_0}}$ increases
(Fig.~\ref{fig2}). This demonstrates why the classical formula using $\cal K$
is of little practical use: The number density ratio ${\mathrm{\bf K_0}}$,
in the particle energy range relevant for the observed synchrotron intensity,
is \emph{not} proportional to the energy ratio $\cal K$.
In galaxy clusters secondary electrons are generated by hadronic interactions
between cosmic ray protons and nuclei of the hot gas and may contribute to synchrotron
emission (Dennison\ \cite{D80}). The spectral index $\gamma_\mathrm{e}$ of the
electron spectrum is larger (steeper) than that of the protons $\gamma_\mathrm{p}$,
depending on the model (Pfrommer \& En{\ss}lin\ \cite{PE04a}).
As only a small fraction of the proton energy can be converted into secondary
electrons, the ratio of total proton/electron energies is ${\cal K}\gg 1$.
Assuming balance between injection and
losses, Dennison (\cite{D80}) estimated ${\cal K}=5\,[9 + (B/\mu\mbox{G})^2]$.
Typical expected values for the number density ratio in galaxy clusters are
${\mathrm{\bf K_0}}=100$ -- 300 above particle energies of a few GeV
(Pfrommer \& En{\ss}lin\ \cite{PE04b}).
\bigskip
Eq.~(\ref{bellratio}) is valid (i.e. ${\mathrm{\bf K_0}}$ is constant) in the
energy range $E_\mathrm{p} < E < E_\mathrm{syn}$ if the spectral indices
of the electrons and protons are the same. This can be achieved if
energy losses are negligible, or if
the timescale for cosmic ray \emph{escape loss} is smaller than that
for electron loss by nonthermal bremsstrahlung, which is the case for
low interstellar or intergalactic gas densities (``thin target'', Pohl\ \cite{P93a}),
so that nonthermal bremsstrahlung is unimportant for the stationary electron
spectrum. Energy-dependent escape steepens all cosmic ray spectra in
the same way, $\gamma_\mathrm{e}=\gamma_\mathrm{p}=\gamma_0+\Delta\gamma$,
where $\Delta\gamma\simeq0.6$ and $-\Delta\gamma$ is the exponent of the
energy-dependent escape time $t_\mathrm{esc}\propto E^{-\Delta\gamma}$
(see Appendix), so that the revised formula
for equipartition or minimum energy can be applied.
If, however, the gas density is high or the escape time is large, the object
is a ``thick target'' for electrons, so that \emph{nonthermal bremsstrahlung
loss} dominates the electron spectrum at GeV energies. The slopes of the
electron and the radio synchrotron spectra are the same as for the injection
spectra ($\gamma_0$ and $\alpha_0$, respectively). As the loss rate
increases linearly with gas density $n$, the proton-to-electron ratio
increases with gas density.
If the proton spectrum beyond $E_\mathrm{p}$ is steepened by energy-dependent
escape ($t_\mathrm{esc} \propto E^{-\Delta\gamma}$), e.g. in galaxies, the ratio
{\bf K}{\rm (E)} also depends on energy:
\begin{equation}
{\mathrm{\bf K}(E)} \,\, \propto \, n \, E^{-\Delta\gamma} \,\,\,\,\,\,\,
(E_\mathrm{p} < E < E_\mathrm{syn}, \, \mathrm{thick \, target})
\label{ratio2}
\end{equation}
The result is that the ratio {\bf K}{\rm (E)} varies over the whole range
of observable energies,
so that \emph{the revised formula for energy equipartition or minimum energy
should not be applied}. The classical formula faces a similar
problem as both the synchrotron intensity and the total energy ratio are
affected by bremsstrahlung loss, but the effect on the ratio $\cal K$ of
total energies is hardly predictable.
\bigskip
We will now discuss the ratio {\bf K}{\rm (E)} for low and high electron
energies (outside the energy range $E_\mathrm{p} < E < E_\mathrm{syn}$).
In a proton-dominated (${\mathrm{\bf K_0}} \gg 1$) cosmic ray plasma,
the lower energy limit
in Eq.~(\ref{bellratio}) or (\ref{ratio2}) is set by the proton spectrum
which flattens below $E_\mathrm{p}=938$~MeV, while the electron spectrum
flattens below $E_\mathrm{e}=511$~keV (Eq.~(\ref{bell})).
For $E < E_\mathrm{p}$ the proton and electron spectra are not proportional,
so that {\bf K}{\rm (E)} is not constant:
\begin{equation}
{\mathrm{\bf K}(E)} = (E/E_\mathrm{e})^{\alpha_0} \,\,\,\,\,\, (E_\mathrm{e} < E < E_\mathrm{p})
\label{kvar1}
\end{equation}
\emph{The revised formula should not be applied} for $E < E_\mathrm{p}$.
In a 10~$\mu$G magnetic field, electrons with $E < E_\mathrm{p}$ are observed at
frequencies below about 140~MHz (Eq.~(\ref{freq})).
The classical formula is also affected if radio observations at low
frequencies are used. Even in a weak 1~$\mu$G magnetic field, the minimum
allowed frequency corresponding to $E_\mathrm{p}$ is still higher than
the standard lower frequency limit of 10~MHz used in the classical formula
(Eq.~(\ref{book})). The standard integration range
of $\ge$10~MHz would trace an appropriate part of the energy spectrum
only if the field strength is below 0.7~$\mu$G. However, synchrotron
emission from such weak fields is too faint to be detected with present-day
radio telescopes. Hence, the lower frequency limit of 10~MHz used
in the classical formula (Eq.~(\ref{book})) is generally too low
and leads to an \emph{overestimate} of the field strength.
This bias increases with increasing field strength (see Fig.~\ref{fig1}).
The ratio $\cal K$ of total energies cannot be applied to
low-frequency radio data.
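These frequency limits follow from the characteristic synchrotron
frequency of Eq.~(\ref{freq}),
$\nu/\mbox{MHz} \simeq 16.08\,(B_\perp/\mu\mbox{G})\,(E/\mbox{GeV})^2$:
electrons with $E = E_\mathrm{p} = 938$~MeV radiate at
\begin{displaymath}
\nu \simeq 16.08\,\mbox{MHz} \times 10 \times 0.938^2 \simeq 140\,\mbox{MHz}
\end{displaymath}
in a 10~$\mu$G field, while only for fields below $0.7~\mu$G does the
corresponding frequency drop to the standard lower limit of 10~MHz.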
At low energies, energy losses modify the proton and electron spectra
(Pohl\ \cite{P93a}) and the effective {\bf K}{\rm (E)} is different from
that according to Eq.~(\ref{kvar1}). The break in the electron spectrum by
$\Delta\gamma_\mathrm{e}=-1$ (flattening) due to \emph{ionization and/or
Coulomb loss} depends on gas density (see Eq.~(1) in
Brunetti et al.\ \cite{B97}) and occurs at a few 100~MeV in the ISM of
galaxies and at a few tens MeV in radio lobes and clusters. These energies
correspond to radio frequencies which are generally below the observable
range, so that this effect is irrelevant for equipartition estimates in
a proton-dominated relativistic plasma.
\bigskip
$E_\mathrm{syn}$ in Eq.~(\ref{bellratio}) is the upper energy break where
\emph{synchrotron or inverse Compton loss} of the electrons becomes dominant.
Inverse Compton loss has the same energy and frequency dependence
as synchrotron loss; both are of similar importance if the energy
density of the radiation field and the magnetic energy density are
comparable. Inverse Compton loss from the CMB background dominates if the
field strength is below $3.25\times(1+z)^2~\mu$G.
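This threshold follows from equating the magnetic energy density with
that of the CMB radiation field,
$u_\mathrm{CMB} \simeq 4.2\times10^{-13}\,(1+z)^4~\mbox{erg\,cm}^{-3}$:
\begin{displaymath}
\frac{B^2}{8\pi} = u_\mathrm{CMB} \quad \Rightarrow \quad
B = (8\pi u_\mathrm{CMB})^{1/2} \simeq 3.25\times(1+z)^2~\mu\mbox{G}.
\end{displaymath}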
For $E_\mathrm{e} > E_\mathrm{syn}$ the spectral index steepens by
$\Delta\gamma_\mathrm{e}=1$, observable as a smooth steepening of the
synchrotron spectrum by $\Delta\alpha = 0.5$. In the revised formula for
energy equipartition or minimum energy one should {\bf not} use data at
radio frequencies
corresponding to the electron energy range $E > E_\mathrm{syn}$, where
their energy spectrum is not proportional to the proton spectrum and the
ratio {\bf K} is a function increasing with $E$
(Pohl\ \cite{P93a}; Lieu et al.\ \cite{LI99}). A simplified estimate is:
\begin{equation}
{\mathrm{\bf K}(E)} = {\mathrm{\bf K_0}} (E/E_\mathrm{syn}) \,\,\,\,\,\, (E > E_\mathrm{syn})
\label{kvar2}
\end{equation}
Using instead ${\mathrm{\bf K_0}}$ in the revised formula leads to an
\emph{underestimate} of the field strength.
Again, the classical formula has a similar problem because the
ratio $\cal K$ of total energies also increases in the case of strong synchrotron
or inverse Compton loss.
If the field strength varies along the line of sight or within the volume
observed by the telescope beam, synchrotron loss may lead to an anticorrelation
between field strength and cosmic ray electron density, so that the equipartition
field is underestimated further (Beck et al.\ \cite{B+03}).
Synchrotron loss is significant in sources with strong magnetic fields like
young supernova remnants, starburst galaxies and radio lobes (Sects.~\ref{starburst}
and \ref{clusters}), and also in galaxies away from the acceleration sites of
cosmic rays, for example in interarm regions, in the outer disk
and in the halo (Sect.~\ref{spiral}).
The various bias effects are summarized in Table~\ref{table2}.
\begin{center}
\begin{table*}
\caption{\label{table2} Bias of equipartition field estimates.
{\bf K}{\rm (E)} is the cosmic ray proton/electron ratio, where
${\mathrm{\bf K_0}}$ denotes a value which does not vary with energy}
\begin{tabular}{@{}lllll@{}}
\hline
Method &Cosmic-ray composition &Bias effect &Field strength &Reference\\
\hline
\\
Classical &$p^{+}+e^{-}$ (${\mathrm{\bf K_0}}\simeq100$) &Integration over frequency &Underestimate ($\alpha<0.6$) &This paper\\
& & &Overestimate ($\alpha>0.7$) &(Fig.~\ref{fig1})\\[5pt]
Classical &$e^{-} + e^{+}$ (${\mathrm{\bf K_0}}=0$) &Fixed frequency range &Underestimate (weak fields) &This paper\\
& &(e.g. 10~MHz--10~GHz) &Overestimate (strong fields) & \\[5pt]
Classical+revised&$p^{+}+e^{-}$ ({\bf K}{\rm (E)}$>100$) &Synchrotron/IC/ &Underestimate &This paper\\
& & bremsstrahlung losses\\[5pt]
Classical+revised& any &Field fluctuations without/&Overestimate (weak fields)/ &Beck et al.(\cite{B+03})\\
& &with synchrotron loss &Underestimate (strong fields) & \\[3pt]
\hline
\end{tabular}
\end{table*}
\end{center}
\subsection{Equipartition and minimum-energy estimates of the regular field}
Knowing the equipartition or minimum-energy estimate of the total field strength,
the equipartition or minimum-energy estimate of the regular field strength
$B_\mathrm{reg,\perp}$ in the sky plane can be
computed from the observed degree of polarization $p$ of the synchrotron emission:
\begin{equation}
p \,\, = \,\, p_0 \, (1 \, + \, \frac{7}{3} q^2) \,\,
/ \,\, (1 \, + \, 3 q^2 \, + \, \frac{10}{9} q^4)
\end{equation}
where $p_0$ is the intrinsic degree of polarization ($p_0=(3-3\alpha)/(5-3\alpha)$)
and $q$ is the ratio of the isotropic turbulent field $B_\mathrm{turb}$
to the regular field component $B_\mathrm{reg,\perp}$ in the sky plane
(Beck et al.\ \cite{B+03}). For the case of a dominating turbulent field ($q\gg1$),
$p\simeq 2.1 \, p_0 \, q^{-2}$.
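This limit follows from keeping only the leading powers of $q$ in
numerator and denominator:
\begin{displaymath}
p \,\rightarrow\, p_0\,\frac{(7/3)\,q^2}{(10/9)\,q^4}
= \frac{21}{10}\,p_0\,q^{-2},
\end{displaymath}
so that the field ratio can be estimated directly from the observed
degree of polarization via $q \simeq (2.1\,p_0/p)^{1/2}$.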
\section{Discussion and examples}
\subsection{Weak fields: spiral galaxies}
\label{spiral}
The Milky Way and most spiral galaxies have steep radio spectra in the frequency
range of a few GHz which indicates that the energy spectra of their cosmic ray
electrons are steepened by escape loss (see Appendix). Galaxies are
``thin targets'' for cosmic ray electrons, except for massive spiral arms and
starburst regions (see Sect.~\ref{starburst}).
The revised formula can be applied, using the part of the radio spectrum below
the break frequency $\nu_\mathrm{syn}$ beyond which synchrotron or inverse Compton losses
become important. The upper energy limit $E_\mathrm{syn}$ corresponding to
$\nu_\mathrm{syn}$ can be estimated as follows.
The synchrotron lifetime $t_\mathrm{syn}$ for electrons
of energy $E$ is (Lang\ \cite{L99}, p.~32):
\begin{eqnarray}
t_\mathrm{syn} & = & 8.35\cdot 10^9\, \mbox{yr} \left/ \left[ (B_\perp /\mu \mbox{G})^2
\,\, (E_\mathrm{syn}/\mbox{GeV}) \, \right] \right. \nonumber \\
& = & 1.06\cdot 10^9\, \mbox{yr} \left/ \left[ (B_\perp /\mu \mbox{G})^{1.5}
\,\, (\nu_\mathrm{syn}/\mbox{GHz})^{0.5} \, \right] \right.
\label{tsyn}
\end{eqnarray}
Note that the constant is different from that used in most papers
(e.g. Carilli et al.\ \cite{C+91}; Mack et al.\ \cite{M+98}; Feretti et al.\ \cite{F+98}).
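The second form of Eq.~(\ref{tsyn}) follows from the first by
eliminating the electron energy with the characteristic synchrotron
frequency, $\nu/\mbox{MHz} \simeq 16.08\,(B_\perp/\mu\mbox{G})\,(E/\mbox{GeV})^2$:
\begin{displaymath}
E_\mathrm{syn}/\mbox{GeV} \simeq 7.89\,
\left[ (\nu_\mathrm{syn}/\mbox{GHz}) \left/ (B_\perp/\mu\mbox{G}) \right. \right]^{1/2},
\end{displaymath}
and $8.35\times10^9 / 7.89 \simeq 1.06\times10^9$.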
The escape time $t_\mathrm{esc}$ in the Milky Way is $\simeq 10^7$~yr
at non-relativistic and mildly relativistic energies
(Engelmann et al.\ \cite{E+90}; Schlickeiser\ \cite{S02}, p.~439),
so that the break in the radio frequency spectrum is expected at:
\begin{equation}
\nu_\mathrm{syn} \approx 10\, \mbox{GHz} / (B_\perp / 10 \mu\mbox{G})^3
\label{Esyn}
\end{equation}
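Eq.~(\ref{Esyn}) follows from setting
$t_\mathrm{syn} = t_\mathrm{esc} \simeq 10^7$~yr in the second form of
Eq.~(\ref{tsyn}):
\begin{displaymath}
\nu_\mathrm{syn} = \left( \frac{1.06\times10^9\,\mbox{yr}}{t_\mathrm{esc}} \right)^2
(B_\perp/\mu\mbox{G})^{-3}~\mbox{GHz}
\simeq 1.1\times10^4~\mbox{GHz}\,(B_\perp/\mu\mbox{G})^{-3}.
\end{displaymath}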
For the typical strength of ISM magnetic fields of 10~$\mu$G (Niklas\ \cite{N95})
we get $\nu_\mathrm{syn} \simeq 10$~GHz. At higher frequencies the synchrotron
intensity $I_{\nu}$ and the spectral index $\alpha$ can be used neither
for the revised formula nor for the classical formula. If the
resulting field strength is larger than 10~$\mu$G, like in grand-design
spiral galaxies, the useful frequency range shifts to even lower frequencies.
Furthermore, the thermal contribution to the total radio intensity increases
with increasing frequency and has to be subtracted.
In galaxies with a high star-formation rate, galactic winds may form.
The escape time may then be shorter than in the Milky Way and $\nu_\mathrm{syn}$
is larger than according to Eq.~(\ref{Esyn}). A signature of
galactic winds is that, at low frequencies, the local radio spectrum in
edge-on galaxies remains flat up to large heights above the disk
(Lerche \& Schlickeiser\ \cite{LS82}). In the Milky Way, the radio spectral
index distribution is consistent with a galactic wind (Reich \& Reich\
\cite{RR88}).
Pohl (\cite{P93a}) argued that in the spiral galaxy M~51 the spectral index of the
total radio emission (integrated over the whole galaxy)
is $\alpha\simeq0.5$ ($\gamma_0\simeq2.0$) at low frequencies,
but a frequency break at about 1.4~GHz due to energy losses causes a steepening
of almost the whole observable radio spectrum. As a result, the classical
minimum-energy field strength in the spiral galaxy M~51 is too high.
However, the compilation of all data by Niklas (\cite{N95}) does not
indicate a significant steepening of the radio spectrum of M~51 until
25~GHz, but a flattening due to thermal emission. In his sample of
spiral galaxies, Niklas (\cite{N95}) found only very few cases of spectral
steepening. Hence, escape seems to be the dominant loss process
in the spectra of integrated radio emission of galaxies between about
100~MHz and 10~GHz. (This does not hold for spectra of the \emph{local}
emission, see below.)
Fitt \& Alexander (\cite{FA93}) used the 1.49~GHz radio fluxes of
146 spiral and irregular galaxies to derive minimum-energy field strengths
with the classical formula, simply assuming a constant spectral index of
$\alpha=0.75$, a negligible thermal fraction and the standard frequency limits.
Their distribution peaks around 11~$\mu$G with a standard deviation
$\sigma$ of 4~$\mu$G (for a ratio of total energies $\cal K$ of 100).
Niklas (\cite{N95}) observed a sample of spiral galaxies in radio continuum
at 10.55~GHz and compiled all available data at other frequencies.
Based on the spectra of the integrated radio emission, Niklas et al. (\cite{N97})
could separate the synchrotron from the thermal emission for 74 galaxies.
The mean thermal fraction at 1~GHz is 8\%, and the mean synchrotron
spectral index is $\alpha=0.83$ with a standard deviation of 0.13.
Hence, the average spectrum of the extragalactic cosmic ray electrons
(and probably also that of the protons)
at particle energies of a few GeV has the same spectral index
($\gamma\simeq2.7$) as that in the Milky Way. Niklas (\cite{N95}) derived
galaxy-averaged field strengths according to the classical formula
(\ref{bmin3}) and to a revised formula similar to Eq.~(\ref{bmin1}), approximating
integration (\ref{energy4}) by one integral from $E_1=300$~MeV to infinity.
His results (for ${\mathrm{\bf K_0}}=100$) are similar for the two cases,
as expected from
Fig.~\ref{fig1}. The mean is $9~\mu$G (with a standard deviation
$\sigma=3~\mu$G) both for the revised formula and for the classical case.
The distribution of field strengths is more symmetrical for the
revised case.
However, application of the equipartition formula to regions within spiral
galaxies or in their radio halos needs special care. For example, the
``ring'' of M~31 emitting in radio continuum is formed by cosmic ray
electrons, while the magnetic field extends to regions inside and
outside the ring, as indicated by Faraday rotation data (Han et al.\
\cite{han+98}). Leaving the star-formation regions, cosmic ray electrons
rapidly lose their energy due to synchrotron loss. Hence,
{\bf K}{\rm (E)}$>100$, so that the equipartition formula underestimates
the field strength inside and outside the ring. The same argument holds
for the outer disk and the halo of galaxies, far away from the acceleration
sites of cosmic rays.
In galaxies with strong magnetic fields like M~51, synchrotron loss may
dominate already at a few 100~pc from the acceleration sites, e.g. in
interarm regions, as indicated by spectral steepening between the spiral
arms (Fletcher et al.\ \cite{fletcher+05}). Hence, the interarm field
strengths given by Fletcher et al. (\cite{fletcher+04}) are underestimates.
\subsection{Strong fields: young supernova remnants and starburst galaxies}
\label{starburst}
The classical equipartition formula overestimates the magnetic field strength
in objects with proton-dominated cosmic rays and with strong fields
(Fig.~\ref{fig1}), like massive spiral arms,
young supernova remnants or starburst galaxies.
Here the lower frequency limit is too low and hence overestimates the
total cosmic ray energy (see Sect.~\ref{ratio}).
In a 100~$\mu$G field, for example, the energy of an electron radiating
at 10~MHz is only 80~MeV, where the corresponding proton number in any
medium is strongly reduced ($E \ll E_\mathrm{p}$) and the proton-to-electron
number density ratio {\bf K}{\rm (E)} may drop to values even smaller than 1.
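The quoted electron energy follows from inverting the characteristic
synchrotron frequency relation for $B_\perp = 100~\mu$G and $\nu = 10$~MHz:
\begin{displaymath}
E \simeq \left( \frac{10\,\mbox{MHz}}{16.08\,\mbox{MHz} \times 100} \right)^{1/2}
\mbox{GeV} \simeq 80~\mbox{MeV}.
\end{displaymath}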
Hence, many values for the field strength quoted in the literature are
strong overestimates. For example, Allen \& Kronberg (\cite{AK98})
integrated the radio spectrum from 10~MHz to 100~GHz of young supernova
remnants in the galaxy M82 and derived field strengths of a few mG for
${\cal K}=0$, scaling with $({\cal K}+1)^{2/7}$. The revised formula gives
values which are between 4 and 50$\times$ smaller, depending on spectral
index $\alpha$. P\'erez-Torres et al. (\cite{P+02}) derived a minimum-energy
field strength in SN~1986J ($\alpha=0.69$) of about
13~mG$\times({\cal K}+1)^{2/7}$ which, according to our revised formula,
is too high by about one order of magnitude.
On the other hand, the equipartition strength of the flat-spectrum SN~1993J
($\alpha=0.51$) of 38~mG (Chandra et al.\ \cite{CRB04}) is too low; the
revised formula gives $\simeq100$~mG$\times({\cal K}+1)^{2/7}$
which fits better to the field strength derived from the synchrotron break
energy (by inserting the SN age as $t_\mathrm{syn}$ in Eq.~(\ref{tsyn})).
Note that the cosmic ray electron distribution in young supernova remnants is not
in a stationary state, so that energy losses modify the spectrum in a
different way than discussed in Sect.~\ref{ratio}. The observed spectrum
is that of injection, possibly modified by synchrotron loss, so that the
revised formula can be applied.
Hunt et al. (\cite{H04}) discussed the radio emission of the blue compact
dwarf galaxy SBS~0335-052. Applying the classical formula (assuming ${\cal K}=40$
and a frequency range of 10~MHz--100~GHz) these authors obtained an
equipartition field of $\simeq0.8$~mG, while the value from our revised
formula is about 30~$\mu$G which fits much better to other starburst
galaxies like M82 (Klein et al.\ \cite{K88}) or the Antennae
(Chy\.zy \& Beck\ \cite{CB04}) or blue compact
dwarf galaxies (Deeg et al.\ \cite{D+93}).
The same discrepancy may arise in galactic nuclei with starbursts.
Beck et al. (\cite{B+05}) derived field strengths of $\simeq60~\mu$G
in the central starburst regions of the barred galaxies NGC~1097
and NGC~1365, while the classical formula would give much larger values.
However, starburst galaxies and regions of high star formation rate in the
central regions and massive spiral arms of galaxies have high gas densities
and mostly flat radio spectra ($\alpha=0.4$--0.7), so that they probably are
``thick targets'' for the cosmic ray electrons. If so, the equipartition
estimate is too low, and the correct value can be computed only by
constructing a model of gas density and cosmic ray propagation.
\subsection{Radio lobes and galaxy clusters}
\label{clusters}
The classical equipartition field strengths are 20--100~$\mu$G in the radio lobes
of strong FRII radio galaxies (Brunetti et al.\ \cite{B97}), 100--600~$\mu$G in
hot spots (Carilli et al.\ \cite{C+91}; Harris et al.\ \cite{H00}; Wilson et al.\ \cite{W00};
Wilson et al.\ \cite{W01}; Brunetti et al.\ \cite{BB01}; Hardcastle\ \cite{H01};
Hardcastle et al.\ \cite{HB01}, \cite{H02}), and 0.5--1~$\mu$G
in the diffuse gas of galaxy clusters and relics (Feretti \& Giovannini\ \cite{FG96}).
In all these cases it was assumed that relativistic protons are absent
(${\cal K}=0$, ``absolute minimum energy'') or contribute
similarly to the total energy density as the electrons (${\cal K}=1$).
According to Fig.~\ref{fig2}, ${\cal K}\le1$ could also be the result of
a weak shock. However, particle acceleration at low Mach numbers is inefficient
(Bogdan \& V\"olk\ \cite{BV83}; V\"olk et al.\ \cite{V88}), so that its
contribution to the total cosmic ray population is negligible.
If ${\cal K}=0$, the radio spectrum reflects the energy spectrum of the total cosmic
rays, and the equipartition formula
gives reliable results, supposing that the integration limits are set properly
(Sect.~\ref{positrons}).
Integration over the range of Lorentz factors of the electrons/positrons
according to the breaks as observed in the radio spectrum of sources
leads, in the case of weak magnetic fields, to
slightly larger field strengths than those obtained from the classical formula
with fixed frequency limits (Brunetti et al.\ \cite{B97}).
However, the lower break energy $E_\mathrm{i}$ in the electron/positron
energy spectrum needed for the equipartition estimate
may correspond to a radio frequency which is too low to be observed.
For example, electrons with the minimum Lorentz factor of 50 assumed for
the lobes of 3C219 (Brunetti et al.\ \cite{BC99}) in a 10~$\mu$G field
radiate at 100~kHz, a frequency much below the observable range.
Here the frequency limit of $\nu_\mathrm{min}=10$~MHz assumed in the classical
method is too high, and hence the field strength is \emph{underestimated},
by 1.5$\times$ and 2$\times$ for $\alpha=0.8$ and $\alpha=1$, respectively,
or by an even larger factor if the minimum Lorentz factor is smaller than 50.
If the fields are strong, the break energy $E_\mathrm{i}$
of the electron spectrum is observable in the radio spectrum. Hardcastle et al.
(\cite{H02}) determined typical minimum Lorentz factors of 1000 from the
breaks observed at 0.5--1~GHz in hot spots of radio lobes and estimated
field strengths of 100--200~$\mu$G (assuming equipartition with
${\mathrm{\bf K_0}}=0$).
In such cases, the classical formula gives \emph{overestimates} because
$\nu_\mathrm{min}=10$~MHz is too small.
The occurrence of electron/positron plasmas (${\cal K}=0$) in astronomical objects
is under debate. A relativistic electron/positron plasma was discussed for
the jets (Bicknell et al.\ \cite{BW01}). However,
observational evidence tends to favour an
electron/proton plasma (Leahy et al.\ \cite{LGT01}). Even if jets do
eject electrons and positrons into a cluster with the same energy spectrum,
these particles will not survive long enough to generate significant
radio emission from the intergalactic medium, so that (re)acceleration by
intergalactic shocks (Blandford \& Ostriker\ \cite{BO78}; Sarazin\ \cite{S99};
Gabici \& Blasi\ \cite{GB03}),
by interaction with MHD waves (Schlickeiser \& Miller\ \cite{SM98};
Brunetti et al.\ \cite{BSF01}; Fujita et al.\ \cite{FTS03}), or by reconnection
(Hanasz \& Lesch\ \cite{HL03}), or production of secondary electrons
(Pfrommer \& En{\ss}lin\ \cite{PE04a}) is necessary.
\bigskip
If, however, the contribution of relativistic \emph{protons} to the total cosmic
ray energy is dominant (${\cal K}\gg1$), as predicted by all electromagnetic
acceleration models (Table~\ref{table1}),
the equipartition field strength increases with respect to that
for ${\cal K}=0$. In the hot spots of radio lobes
(Wilson et al.\ \cite{W01}; Hardcastle\ \cite{H01}; Hardcastle et al.\ \cite{HB01},
\cite{H02}) stronger equipartition fields would fit much better to the high field
strengths derived from Faraday rotation measures
(Feretti et al.\ \cite{F+95}; Fusco-Femiano et al.\ \cite{F+99}).
\bigskip
Another reason for systematic field underestimates are energy losses.
The electron spectrum is a result of the balance between acceleration and
energy losses. Ionization loss dominates at low energies; at high energies
synchrotron loss dominates in clusters with magnetic fields stronger than
a few $\mu$G while inverse Compton loss due to CMB photons dominates in
clusters with weaker fields (Sarazin\ \cite{S99}).
$E > E_\mathrm{syn}$ is the energy range where synchrotron and/or
inverse Compton losses are strong and the electron spectrum
is steeper than the proton spectrum ($\gamma_\mathrm{e}=\gamma_\mathrm{p}+1$).
The proton-to-electron number density ratio {\bf K} increases with energy
according to Eq.~(\ref{kvar2}). In this case, the revised formula
\emph{underestimates} the field strength. For example,
using a value for {\bf K} of 100 instead of 1000 would underestimate the
equipartition field strength by a factor of about 2.
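This factor follows from the scaling
$B \propto ({\mathrm{\bf K}}+1)^{1/(\alpha+3)}$ of the revised formula:
for $\alpha=0.8$,
\begin{displaymath}
\left( \frac{1001}{101} \right)^{1/3.8} \simeq 10^{0.26} \simeq 1.8 .
\end{displaymath}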
The classical formula has a similar problem.
Synchrotron and inverse Compton losses steepen the radio spectra.
The radio spectra of most radio lobes and hot spots
are well approximated by power laws with spectral indices of
$\alpha\simeq0.7$--0.8 in the frequency range 0.4--15~GHz,
with a spectral break to $\alpha > 1$ between the radio and X-ray regimes.
Meisenheimer et al. (\cite{M89}) fitted the spectra of five bright
hot spots with a low-frequency spectral index of $\alpha\simeq0.5$ with
smooth breaks which start to steepen the spectra already beyond a few GHz.
The spectral index of the diffuse radio emission from the Coma cluster is
$\alpha\simeq 1.0$ below 100~MHz and strongly steepens at higher frequencies
(Fusco-Femiano et al.\ \cite{F+99}; Thierbach et al.\ \cite{T03}),
which is a clear sign of energy loss. The (revised) equipartition field
strength in the Coma cluster is 0.6~$\mu$G$\times({\mathrm{\bf K}}+1)^{1/(\alpha+3)}$
(Thierbach et al.\ \cite{T03}), which gives $\simeq4~\mu$G for $\alpha=0.8$
and ${\mathrm{\bf K}}=1000$. This value is not far from that derived from
Faraday rotation data (Feretti et al.\ \cite{F+95}).
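Numerically, the quoted value follows as:
\begin{displaymath}
B_\mathrm{eq} \simeq 0.6~\mu\mbox{G} \times 1001^{1/3.8}
\simeq 0.6 \times 6.2~\mu\mbox{G} \simeq 4~\mu\mbox{G}.
\end{displaymath}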
Pfrommer \& En{\ss}lin (\cite{PE04b}) modeled the radio spectrum of the Coma
and Perseus clusters assuming equilibrium between injection of secondary electrons
(hadronic interaction) and energy losses. Their self-consistent minimum-energy
formula, which does not require an assumption about $\cal K$ or
{\bf K}, gives slightly larger
field strengths for the Coma and Perseus clusters compared with the classical
formula. However, this model cannot account for the radial steepening of the radio
spectra observed in both clusters.
\bigskip
Our revision of total field strengths may have significant effects on estimates of
the age of electron populations based on the synchrotron break frequency in the radio
spectrum (Eq.~(\ref{tsyn})). Application of the classical formula for objects with
strong fields may significantly overestimate the field (Fig.~\ref{fig1}) and hence
underestimate the electron lifetime. More serious is the assumption of a
proton/electron total energy ratio ${\cal K}=1$ in most papers (Carilli et al.\ \cite{C+91};
Mack et al.\ \cite{M+98}; Feretti et al.\ \cite{F+98}; Parma et al.\ \cite{P+99})
which is not supported by any acceleration model (see above).
${\mathrm{\bf K_0}}\simeq100$ in the revised formula would
increase field strength by $3.5\times$ and hence decrease the synchrotron age by
$6.5\times$. The discrepancy between dynamical and synchrotron ages found by
Parma et al. (\cite{P+99}) is increased. This problem needs to be re-investigated.
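The factor of $6.5$ follows from the second form of Eq.~(\ref{tsyn}),
which gives $t_\mathrm{syn} \propto B_\perp^{-1.5}$ at fixed observing
frequency:
\begin{displaymath}
3.5^{1.5} \simeq 6.5 .
\end{displaymath}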
\subsection{Comparison with field estimates from the ratio of synchrotron to
inverse Compton X-ray intensities} \subsectionmark{Comparison with field
estimates ...}
\label{inverse}
The same cosmic ray electrons emitting radio synchrotron emission also produce
X-ray emission via the inverse Compton effect. The ratio of radio to
X-ray intensities can be used as a measure of field strength
(Harris \& Grindlay\ \cite{HG79}; Govoni \& Feretti\ \cite{GF04}).
Comparison with the equipartition estimates reveals three cases:
\bigskip
(1) A high equipartition strength is needed to achieve a consistent
picture.
The lobes of two low-power radio galaxies reveal an apparent deficit
in X-ray emission compared with radio emission when assuming no protons
(${\cal K}=0$)
(Croston et al.\ \cite{C03}). Relativistic protons with 300--500~times more
energy than the electrons would increase the equipartition field strength
from $\simeq3$~$\mu$G to $\simeq15$~$\mu$G and reduce the number density
of electrons, which would explain the weak X-ray emission and also ensure
pressure balance between the radio lobes and the external medium.
\bigskip
(2) A low equipartition strength is needed to achieve a consistent
picture.
The equipartition values (assuming ${\cal K}=0$ or 1)
are similar to those derived from X-ray emission by
inverse Compton scattering of the background photons in most radio lobes
(Brunetti et al.\ \cite{B97}) and in the Coma cluster
(Fusco-Femiano et al.\ \cite{F+99}, see also Table~3 in Govoni \& Feretti\
\cite{GF04}).
In most hot spots the equipartition values are also similar to those
derived from X-ray emission by inverse Compton scattering of the
synchrotron photons (``synchrotron self-Compton emission''), e.g.
in Cyg~A (Harris et al.\ \cite{H94}; Wilson et al.\ \cite{W00}),
3C123 (Hardcastle et al.\ \cite{HB01}), 3C196 (Hardcastle \cite{H01}),
3C295 (Harris et al.\ \cite{H00}), in 3C263 and in 3C330
(Hardcastle et al.\ \cite{H02}).
If, however, ${\cal K}\gg1$, as predicted by acceleration models
(Table~\ref{table1}), the field estimates increase and fit better to the
Faraday rotation data (see above), but contradict the X-ray data in
these objects which then are ``too bright'' in X-rays.
\bigskip
(3) The equipartition strength is always too high.
Some objects are ``too bright'' in X-rays compared with the energy density of the
cosmic ray electrons derived from the equipartition assumption with ${\cal K}=0$.
In other words, the equipartition values even for ${\cal K}=0$ are already
several times larger than those allowed from the ratio of radio/X-ray intensities.
This is the case in the lobes of PKS~1343-601
(Tashiro et al.\ \cite{T98}) and 3C219 (Brunetti et al.\ \cite{BC99}),
and in the hot spots of Pictor~A (Wilson et al.\ \cite{W01}), 3C351
(Brunetti et al.\ \cite{BB01}) and 3C351 (Hardcastle et al.\ \cite{H02}).
Dominating protons (${\cal K}\gg1$) would even increase the
discrepancy.
The correction introduced for strong fields by our revised equipartition formula
(Fig.~\ref{fig1}) cannot solve the problem.
For example, in the bright core of the western hot spot of Pictor A, Wilson et al.
(\cite{W01}) found the largest discrepancy (more than 10) between the field
strength derived from the classical equipartition formula
($\simeq470~\mu$G, assuming ${\cal K}=1$) and that from the radio/X-ray
intensity ratio ($\simeq33~\mu$G). Our revised formula (for ${\mathrm{\bf K_0}}=1$)
reduces the equipartition value only to $\simeq350~\mu$G, not
enough to remove the discrepancy. If cosmic ray protons dominate, the equipartition
field strength \emph{increases} to $\simeq1~$mG (for ${\mathrm{\bf K_0}}=100$
and $\alpha=0.8$), so that the discrepancy between the field estimates
increases further.
Carilli \& Taylor (\cite{CT02}) discussed possible solutions.
Firstly, most of the X-ray emission may not be of inverse Compton, but of
thermal origin, which should be tested with further observations.
Secondly, an anisotropic pitch angle distribution of the cosmic ray
electrons could weaken the synchrotron relative to the inverse Compton
emission. Thirdly, in magnetic fields of about $1~\mu$G the observed
synchrotron spectrum traces electrons of larger energies than those
emitting the inverse Compton spectra. If the electron spectrum
steepens with energy (e.g. due to energy losses), the radio emission is
reduced at high energies.
Another possible (though improbable) explanation of the discrepancy is that
equipartition between magnetic fields and cosmic rays is violated. An increase
in electron density by typically $n'_\mathrm{e}/n_\mathrm{e,eq}\simeq5$ is
required to match the excessive X-ray intensities in the objects listed above.
The corresponding decrease in field strength for a fixed radio synchrotron
intensity and $\alpha\simeq0.8$ is
$B'/B_\mathrm{eq}=(n'_\mathrm{e}/n_\mathrm{e,eq})^{-1/(\alpha+1)}\simeq0.4$, and
the energy density ratio $q$ between cosmic rays and magnetic fields
increases by $q=(n'_\mathrm{e}/n_\mathrm{e,eq})^{(\alpha+3)/(\alpha+1)}\simeq30$.
Such an imbalance between particle and field energies is unstable
and would cause a rapid outflow of cosmic rays.
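The scalings quoted above are easy to check numerically. The following minimal sketch simply evaluates the two power laws for the illustrative values used in the text ($n'_\mathrm{e}/n_\mathrm{e,eq}\simeq5$, $\alpha=0.8$):

```python
# Departure from equipartition needed to explain the excess X-ray
# intensity, at fixed radio synchrotron intensity. Illustrative values
# taken from the text: n'_e/n_e,eq = 5 and spectral index alpha = 0.8.
alpha = 0.8
n_ratio = 5.0  # required increase in electron density

# Decrease in field strength at fixed synchrotron intensity
B_ratio = n_ratio ** (-1.0 / (alpha + 1.0))
# Increase of the cosmic-ray / magnetic-field energy density ratio
q = n_ratio ** ((alpha + 3.0) / (alpha + 1.0))

print(f"B'/B_eq = {B_ratio:.2f}")  # ~0.4
print(f"q       = {q:.0f}")        # ~30
```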
Finally, the magnetic field may be concentrated in filaments.
Intracluster magnetic fields can be amplified by shocks in merging clusters
(Roettiger et al.\ \cite{RSB99}). The ratio of
synchrotron to X-ray intensities from the same cosmic ray electron spectrum is
$I_\mathrm{syn}/I_\mathrm{X} \, \propto \, \langle n_\mathrm{e} B^{1+\alpha}\rangle / \langle n_\mathrm{e}\rangle$.
If small-scale fluctuations in $n_\mathrm{e}$ and in $B$ are uncorrelated,
$I_\mathrm{syn}/I_\mathrm{X} \, \propto \, \langle B^2\rangle \, = \, \langle B\rangle^2/f_\mathrm{B}$,
where $f_\mathrm{B}$ is the effective volume filling factor of the magnetic
field, so that the
field strength estimate from the radio/X-ray intensity ratio varies with
$f_\mathrm{B}^{-0.5}$. To explain a 10$\times$ discrepancy between the field estimates,
a filling factor of $10^{-2}$ is required. The enhanced magnetic field in the
filaments leads to synchrotron loss of the electrons and hence to an
anticorrelation between $n_\mathrm{e}$ and $B$. The X-ray emission may be
biased towards regions of weak fields, while synchrotron intensity is biased
towards regions of strong fields. This explanation appears plausible
and deserves further investigation.
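The required filling factor follows directly from the stated $f_\mathrm{B}^{-0.5}$ scaling of the field estimate; a one-line sketch (the factor of 10 is the discrepancy quoted above):

```python
# The field estimate from the radio/X-ray intensity ratio scales as
# f_B^(-1/2), so a discrepancy factor d between two field estimates
# requires a volume filling factor f_B = d^(-2).
d = 10.0          # discrepancy between the field estimates
f_B = d ** -2.0   # required volume filling factor of the field
print(f"f_B = {f_B:.0e}")
```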
\section{Summary}
Assuming that cosmic rays are generated by electromagnetic shock acceleration
producing the same total numbers of relativistic protons and electrons
with power laws in momentum, we showed that the ratio of total energies of protons
and electrons $\cal K$ is about 40 for strong shocks and \emph{decreases} with
decreasing shock strength. The ratio {\bf K} of \emph{number densities}
of protons and electrons per energy interval for particle energies $E\ge1$~GeV is
also about 40 for strong shocks and \emph{increases} with decreasing shock strength.
Both ratios further depend on the various energy loss processes for protons and
electrons.
The classical equipartition or minimum-energy estimate of the total magnetic
field strength from radio synchrotron intensities is based on the ratio $\cal K$
of the total energies of protons and electrons, a number
which is hardly known because the proton and electron spectra have not been measured
over the whole energy range. We propose a revised formula which instead uses the
number density ratio {\bf K} in the energy interval relevant for synchrotron
emission. This ratio can be adopted from the value observed in the local Galactic
cosmic rays,
or it can be estimated for the energy range where energy losses are negligible or
particle escape loss dominates (``thin target''), so that {\bf K} is constant
(${\mathrm{\bf K_0}}$).
Furthermore, the classical equipartition / minimum-energy estimate is
incorrect because the cosmic ray energy density is determined by integration over a
fixed interval in the radio spectrum, which introduces an implicit dependence
on the field strength. Applying our revised formula, the field strengths for a
proton-dominated relativistic plasma are larger by up to 2$\times$ for flat
radio spectra (synchrotron spectral index $\alpha < 0.6$), but smaller by
up to 10$\times$ for steep radio spectra ($\alpha > 0.7$) and for total field
strengths $B>10~\mu$G (Fig.~\ref{fig1} and Table~\ref{table2}).
The revised formula can be applied if energy losses are negligible or if
escape is the dominant loss process for cosmic ray electrons at GeV energies.
The classical field estimates for young supernova remnants
are probably too large by a factor of several.
The average field strengths for spiral galaxies remain almost unchanged.
For objects containing dense gas (``thick targets'') where energy loss of
cosmic ray electrons by nonthermal bremsstrahlung is significant, e.g. in
starburst galaxies, massive spiral arms, or starburst regions within
galaxies, neither the revised nor the classical estimate can be applied.
Equipartition values for radio lobes and galaxy clusters are usually
computed assuming a constant cosmic ray proton/electron energy ratio of ${\cal K}=0$
(i.e. pure electron/positron plasma) or ${\cal K}=1$ (i.e. the same contribution of
protons and electrons to the total cosmic ray energy). However, all current models
of cosmic ray origin predict that protons are dominant (${\cal K}\gg1$,
${\mathrm{\bf K_0}}\simeq100$--300), so that the field estimate is too low by
a factor $({\mathrm{\bf K_0}}+1)^{1/(\alpha+3)}$.
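The size of this underestimate is easy to quantify. A sketch, using the range of ${\mathrm{\bf K_0}}$ quoted above and an assumed representative spectral index $\alpha=0.8$:

```python
# Underestimate of the equipartition field strength when protons are
# neglected: the field estimate is too low by (K_0 + 1)^(1/(alpha+3)).
# K_0 values follow the range quoted in the text; alpha = 0.8 assumed.
alpha = 0.8
for K0 in (1.0, 100.0, 300.0):
    factor = (K0 + 1.0) ** (1.0 / (alpha + 3.0))
    print(f"K_0 = {K0:5.0f}: field estimate too low by {factor:.2f}x")
```

For ${\mathrm{\bf K_0}}\simeq100$ this amounts to a factor of about 3.4.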
Furthermore, the radio spectra of radio lobes and clusters indicate that
synchrotron or inverse Compton loss of the cosmic ray electrons is significant,
so that $\cal K$ and {\bf K} and hence the field strength increase further.
The revised, stronger fields in clusters are consistent
with the results of Faraday rotation observations.
In the case of strong fields in radio lobes, the discrepancy with the much lower
field estimates from the radio/X-ray intensity ratio in several hot spots cannot
be resolved and requires alternative explanations, e.g. a concentration of the
field in small filaments with a low volume filling factor, or a thermal
origin of the X-ray emission.
Our code {\sc BFIELD} to compute the strengths of the total and regular fields
from total and polarized synchrotron intensities,
allowing various assumptions as discussed in this paper, is available from
{\em www.mpifr-bonn.mpg.de/staff/mkrause}.
\begin{acknowledgements}
We wish to thank Elly M. Berkhuijsen, Luigina Feretti, Martin Pohl, Wolfgang Reich,
and Anvar Shukurov for many useful discussions, and especially Reinhard Schlickeiser
for his patience to explain to us the labyrinth of cosmic ray loss processes.
Leslie Hunt is acknowledged for encouraging us to publish this paper.
We are grateful to our anonymous referee for helping to make the paper clearer.
\end{acknowledgements}
\section{Introduction}\label{sec:intro}
The L\"uscher approach~\cite{Luescher-torus} has become a standard tool to study
hadron-hadron scattering processes on the lattice. The use of this approach in
case of elastic scattering is conceptually straightforward: besides technical
complications, caused by partial-wave mixing, each measured energy level at a
given volume uniquely determines the value of the elastic phase shift at the
same energy.
In the presence of multiple channels, the extraction of the scattering phase
becomes more involved. In case when only two-particle coupled channels appear,
one can make use of the coupled-channel L\"uscher
equation~\cite{Lage-KN,Lage-scalar,He,Sharpe,Briceno-multi,Liu,PengGuo-multi}
and fit a simple pole parameterization for the multi-channel $K$-matrix
elements to the measured energy spectrum in the finite
volume~\cite{Wilson-pieta}. A more sophisticated parameterization of the
$K$-matrix elements, which is applicable in a wider range of the energies, can
be obtained using unitarized chiral perturbation theory
(ChPT)~\cite{oset,Doring-scalar,oset-in-a-finite-volume}. This approach has been
successfully applied, e.g., in Ref.~\cite{Wilson-rho} to analyze
coupled-channel $\pi\pi-K\bar K$ $P$-wave scattering and to study the properties
of the $\rho$-meson. Note the difference to the one-channel case: here, one has
to determine several $K$-matrix elements (unknowns) from a single measurement
of a finite-volume energy level. Hence, using some kind of
(phenomenology-inspired) parameterizations of the multi-channel $K$-matrix
elements becomes inevitable in practical applications.
In case when some of the inelastic channels contain three or more particles, the
situation is far more complicated. Despite the recent progress in the
formulation of the theoretical
framework~\cite{Polejaeva,Briceno-3,Sharpe-3,Rios}, it is still too cumbersome
to be directly used in the analysis of the data. Moreover, the problem of the
choice of the parameterization for three-particle scattering might become more
difficult (and lead to even larger theoretical uncertainties) than in
two-particle scattering.
From the above discussion it is clear that a straightforward extension of the
L\"uscher approach through the inclusion of more channels has its limits that
are reached rather quickly. On the other hand, many interesting systems, which
are already studied on the lattice, may decay into multiple channels. In our
opinion, the present situation warrants a rethinking of the paradigm. One may
for example explore the possibility to analyze the lattice data without
explicitly resolving the scattering into each coupled channel separately. Such
detailed information is usually not needed in practice. Instead, in the
continuum scattering problem, the effect of inelastic channels could be
included in the so-called optical potential~\cite{Feshbach,Kerman:1959fr}, whose
imaginary part is non-zero due to the presence of the open inelastic channels.
In many cases, it would be sufficient to learn how one extracts the real and
imaginary parts of the optical potential from the lattice data, without
resorting to the multi-channel L\"uscher approach. In the present paper, we
propose such a method, which heavily relies on the use of twisted
boundary
conditions~\cite{Bedaque,Sachrajda,rest-twisted,Chen,Agadjanov-twisted}. Due to
this, the method has its own limitations, but there exist certain systems,
where it could in principle be applied. In particular, we have the following
systems in mind:
\begin{itemize}
\item
The scattering in the coupled-channel $\pi\eta-K\bar K$
system in the vicinity of the $K\bar K$ threshold and the $a_0(980)$ resonance.
\item
The spectrum and decays of the $XYZ$ states; namely,
$Z_c(3900)^\pm$ that couples to the channels $J/\psi\pi^\pm$, $h_c\pi^\pm$ and
$(D\bar D^*)^\pm$ (this system was recently studied in Ref.~\cite{Padmanath})
or the $Z_c(4025)$ that couples to the $D^*\bar D^*$ and $h_c\pi$ channels
(see, e.g., Ref.~\cite{ChuanLiu}).
\end{itemize}
There certainly exist other systems where this method can be used. It should
also be stressed that systems in which partial twisting
(i.e., twisting only the valence quarks) can be carried out are of particular
interest -- for an obvious reason. All examples listed above belong to
this class. In general, the partial twisting can always be carried out when the
annihilation diagrams are absent. In the presence of annihilation diagrams, each
particular case should be analyzed separately, invoking the methods of effective
field theories in a finite volume~\cite{Agadjanov-twisted}. The present paper
contains an example of such an analysis.
Further, note that there exists an alternative method for the extraction of
hadron-hadron interaction potentials from the measured
Bethe-Salpeter wave functions on the Euclidean lattice. This method goes under
the name of the HAL QCD approach and its essentials are explained in
Refs.~\cite{HAL-essentials,HAL-derivatives}. Most interesting in the present
context is the claim that the HAL QCD approach can be extended to the
multi-channel systems, including the channels that contain three and
more-particles~\cite{HAL-multi}. It should also be pointed out
that this approach has already been used to study various systems on the
lattice, including the analysis of coupled-channel baryon-baryon
scattering (see, e.g., Ref.~\cite{HAL-applications}). It would be interesting to
compare our method with the HAL QCD approach.
The layout of the present paper is as follows. In Sect.~\ref{sec:complex_plane}
we discuss the theoretical framework for the extraction of the real and
imaginary parts of the optical potential and provide an illustration of the
method with synthetic data, generated by using unitarized ChPT. Further, in
Sect.~\ref{sec:realisticpseudophase}, we discuss the role of twisted boundary
conditions for measuring the optical potential. Namely, the possibility of
imposing partially twisted boundary conditions is explored in
Sect.~\ref{sec:partialtwisting}. Here, we also discuss the possibility of
imposing the different boundary conditions on the quark and antiquark fields.
The analysis of synthetic data, including an error analysis, is presented in
Sect.~\ref{sec:simulation}. Finally, Sect.~\ref{sec:concl} contains our
conclusions.
\section{Optical potential in the L\"uscher approach}
\label{sec:complex_plane}
\subsection{Multichannel potential, projection operators}
In the continuum scattering theory, the inelastic channels can be effectively
included in the so-called optical potential by using the Feshbach projection
operator technique~\cite{Feshbach}. Namely, let us start from the multi-channel
$T$-matrix which obeys the Lippmann-Schwinger equation
\begin{align}\label{eq:LS} T=V+VG_0T\,. \end{align} Here, $V$ is the potential
and $G_0=(E-H_0)^{-1}$ denotes the free Green's function with $E$ the total
energy in the center-of-mass system. The quantities $T,V,G_0$ are all $N\times
N$ matrices in channel space.
In case when only two-particle intermediate states are present, using
dimensional regularization together with the threshold expansion, it can be
shown that the Lippmann-Schwinger equation~\eqref{eq:LS} after
partial-wave expansion reduces to an algebraic matrix equation (see, e.g.,
Ref.~\cite{Lage-dist}). With the proper choice of normalization, the matrix
$G_0(E)$ in this case takes the form
\begin{align}
G_0(E)&=\mbox{diag}\,(ip_1(E),\cdots,ip_n(E))\,,
\end{align}
where $p_k(E)$ denotes the magnitude of the center-of-mass three-momentum, i.e.,
\begin{align}\label{eq:pcms}
p_k(E)&=\frac{1}
{2E}\sqrt{\left(E^2-\left(m_1^{(k)}+m_2^{(k)}\right)^2\right)
\left(E^2-\left(m_1^{(k)}-m_2^{(k)}\right)^2\right)}
\end{align}
and $m_{1,2}^{(k)}$ are the masses of particles in the $k^{\rm th}$ scattering
channel. Hence, if dimensional regularization is used in case of two-particle
channels, the potential $V$ coincides with the multi-channel $K$-matrix. The
latter quantity can always be defined, irrespective of the regularization used.
Our final results are, of course, independent of the choice of a particular
regularization.
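Eq.~\eqref{eq:pcms} is straightforward to implement; a minimal sketch (the numerical masses and energy below are purely illustrative):

```python
import math

def p_cms(E, m1, m2):
    """Magnitude of the center-of-mass three-momentum, Eq. (pcms)."""
    s = E * E
    kallen = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)
    if kallen < 0.0:
        raise ValueError("channel is closed: E below threshold")
    return math.sqrt(kallen) / (2.0 * E)

# Check: for equal masses the formula reduces to p = sqrt(E^2/4 - m^2).
print(p_cms(1.2, 0.5, 0.5))
```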
Suppose further that we focus on the scattering in a given two-particle channel.
Let us introduce the projection operators $P$ and $Q=1-P$, which project on this
channel and on the rest, respectively. In
the following, we refer to them as the primary (index $P$) and the secondary
(index $Q$) channels. The secondary channels may contain an arbitrary number of
particles. It is then straightforward to show that the quantity $T_P(E)=PT(E)P$
obeys the following single-channel Lippmann-Schwinger equation
\begin{align}
T_P(E)&=W(E)+W(E)G_P(E)T_P(E)\,,
\end{align}
where
\begin{align}
&W(E)=P\biggl(V+VQ\frac{1}{E-H_0-QVQ}QV\biggr)P \quad
\text{ and }\quad G_P(E)=PG_0(E)P\,.
\end{align}
It is easily seen that, while $V$ is Hermitian, $W(E)$ above the secondary
threshold(s) is not. The imaginary part of $W(E)$ is expressed through the
transition amplitudes into the secondary channels
\begin{align}
W(E)-W^\dagger(E)&=-2\pi i\, PT_Q^\dagger(E)Q\,\delta(E-H_0)\,QT_Q(E)P\,,
\end{align}
where
\begin{align}
T_Q(E)&=V+VG_Q(E)T_Q(E) \quad\text{ and }\quad G_Q(E)=QG_0(E)Q\,.
\end{align}
For illustration, let us consider scattering in the $\pi \eta - K\bar K$
coupled channels. Let $K\bar K$ and $\pi\eta$ be the primary and secondary
channels, respectively. Then, the formulae for the S-wave scattering take the
following form (we suppress the partial-wave indices for brevity):
\begin{align}
T_{K\bar K\to K\bar K}(E)=W(E)+ip_{K\bar K}\,W(E)T_{K\bar K\to K\bar K}(E)\,.
\end{align}
Here,
\begin{align}
W(E)=V_{K\bar K\to K\bar K}+\frac{ip_{\pi\eta}V_{K\bar K\to \pi\eta}^2}{1-ip_{\pi\eta}V_{\pi\eta\to \pi\eta}}\, ,
\end{align}
$p_{K\bar K}$, $p_{\pi\eta}$ denote the magnitude of the relative three-momenta
in the center-of-mass frame in the respective channel, as given in
Eq.~\eqref{eq:pcms}.
It is often useful to introduce the so-called $M$-matrix $M=V^{-1}$. In terms of
this quantity, the above formula can be rewritten in the following
form:
\begin{eqnarray}
W^{-1}(E)=M_{K\bar K\to K\bar K}
-\frac{M_{K\bar K\to \pi\eta}^2}{M_{\pi\eta\to\pi\eta}-ip_{\pi\eta}}\, .
\label{winv}
\end{eqnarray}
Using the latter form is justified when a resonance exists near the elastic
threshold that shows up as a pole of $V$ on the real axis. In contrast, the
quantity $M$ is smooth in this case and can be Taylor-expanded near threshold.
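Eq.~(\ref{winv}) can be coded directly; a minimal sketch with purely illustrative (constant) $M$-matrix elements, which in practice would be smooth functions of the energy:

```python
# Inverse optical potential W^{-1}(E) above the pi-eta threshold,
# Eq. (winv). All numerical values below are illustrative only.
def w_inv(m_kk, m_keta, m_etaeta, p_pieta):
    """W^{-1} = M_{KK} - M_{K,pi eta}^2 / (M_{pi eta} - i p_{pi eta})."""
    return m_kk - m_keta ** 2 / (m_etaeta - 1j * p_pieta)

val = w_inv(m_kk=2.0, m_keta=1.5, m_etaeta=0.5, p_pieta=0.3)
# Above the open pi-eta threshold, W^{-1} acquires a negative imaginary
# part, reflecting the flux lost into the secondary channel.
print(val)
```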
In a finite volume, one may define a counterpart of the scattering amplitude
$T_{K\bar K\to K\bar K}(E)$. Imposing, e.g., periodic boundary conditions leads
to the modification of the loop functions
(for simplicity, we restrict ourselves to the S-waves from here on and
neglect partial wave mixing)
\begin{eqnarray}
ip_k\to \frac{2}{\sqrt{\pi} L}\,Z_{00}(1;q_k^2) \quad\text{for}\quad
q_k=\frac{p_kL}{2\pi}\, ,
\end{eqnarray}
whereas the potential $V$ remains unchanged up to exponentially suppressed
corrections. In the above expressions, $L$ is the size of the cubic box and
$Z_{00}$ denotes the L\"uscher zeta-function.
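The zeta-function itself is defined only after regularization. The sketch below assumes the standard spherical-cutoff prescription, $Z_{00}(1;q^2)=\frac{1}{\sqrt{4\pi}}\lim_{\Lambda\to\infty}\bigl[\sum_{|\mathbf n|<\Lambda}(\mathbf n^2-q^2)^{-1}-4\pi\Lambda\bigr]$; it converges slowly, and accelerated representations are used in actual applications:

```python
import math

def z00(qsq, cutoff):
    """Naive estimate of the Luescher zeta-function Z_00(1; q^2).

    Spherical-cutoff regularization (an assumption of this sketch):
    sum 1/(n^2 - q^2) over integer vectors with |n| < cutoff and
    subtract the divergent piece 4*pi*cutoff. Slowly convergent.
    """
    c = int(cutoff)
    total = 0.0
    for nx in range(-c, c + 1):
        for ny in range(-c, c + 1):
            for nz in range(-c, c + 1):
                nsq = nx * nx + ny * ny + nz * nz
                if nsq < cutoff * cutoff:
                    total += 1.0 / (nsq - qsq)
    return (total - 4.0 * math.pi * cutoff) / math.sqrt(4.0 * math.pi)

# Near a free-particle level (q^2 -> 1) the function is dominated by
# the six |n|^2 = 1 terms and grows like 6/(1 - q^2):
print(z00(0.99, 12.0))
```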
The energy levels of a system in a finite volume coincide with the poles of the
modified scattering amplitude. The position of these poles is determined from
the secular equation
\begin{eqnarray}\label{eq:secular}
\biggl(M_{K\bar K\to K\bar K}-\frac{2}{\sqrt{\pi}L}\,
Z_{00}(1;q_{K\bar K}^2)\biggr)
\biggl(M_{\pi\eta\to\pi\eta}-\frac{2}{\sqrt{\pi}L}\,Z_{00}(1;q_{\pi\eta}^2)
\biggr)
-M_{K\bar K\to\pi\eta}^2=0\, .
\end{eqnarray}
The positions of these poles on the real axis are the quantities that are
measured on the lattice.
\subsection{Continuation to the complex energy plane}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{WLinv_Winv.pdf}
\caption{The real and imaginary parts of the quantity $W^{-1}(E)$, as well as
its finite-volume counterpart $W_L^{-1}(E)$ for $L=5M_\pi^{-1}$.}
\label{fig:pseudophase-exact}\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{PerfektFit.pdf}
\end{center}
\caption{Fit of the function specified in Eq.~\eqref{eq:realaxis} to the
quantity $W_L^{-1}(E)$ for $L=5M_\pi^{-1}$ and uniformly distributed values of energy $E$.}
\label{fig:pseudophase-exact-fit}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{oscillations.pdf}
\end{center}
\caption{The real and imaginary parts of the quantity $\hat W_L^{-1}(E+i\varepsilon)$
for $\varepsilon=0.02~\mbox{GeV}$ (solid black lines) and
$\varepsilon=0.05~\mbox{GeV}$ (dashed blue lines)
versus the real and imaginary parts of the infinite-volume counterpart
$W^{-1}(E)$ (dotted red lines). All quantities are given in units of GeV.}
\label{fig:oscillations}
\end{figure}
The main question, which we are trying to answer, can now be formulated as
follows: Is it possible to extract the real and imaginary parts of $W(E)$ from
the measurements performed on the lattice? We expect that the answer exists and
is positive, for the following reason. Let us imagine that all scattering
experiments in Nature are performed in a very large hall with certain boundary
conditions imposed on its walls. It is {\it a priori} clear that
nothing could change in the interpretation of the results of this experiment, if
the walls are moved to infinity. Consequently, there {\it should} exist a
consistent definition of the infinite-volume limit in a finite-volume theory
that yields all quantities defined within the scattering theory in the
continuum. Since the optical potential is one of these, there should exist a
quantity defined in a finite volume, which coincides with the optical potential
in the infinite-volume limit.
In order to find out, which quantity corresponds to the optical potential in a
finite volume and how the infinite-volume limit should be performed, let us
follow the same pattern as in the infinite volume. Namely, we apply the
one-channel L\"uscher equation for the analysis of data, instead of the
two-channel one. As a result, we get:
\begin{eqnarray}\label{eq:WL}
W_L^{-1}(E):=
\frac{2}{\sqrt{\pi}L}\,Z_{00}(1;q_{K\bar K}^2)=
M_{K\bar K\to K\bar K}
-\frac{M_{K\bar K\to \pi\eta}^2}{M_{\pi\eta\to\pi\eta}
-\frac{2}{\sqrt{\pi}L}\,Z_{00}(1;q_{\pi\eta}^2)}\, .
\end{eqnarray}
The left-hand side of this equation is measured on the lattice at fixed values
of $p_{K\bar K}$, corresponding to the discrete energy levels in a finite
volume. Methods to measure $W_L^{-1}$ are discussed in
Sect.~\ref{sec:realisticpseudophase}. The quantity on the right-hand side is
proportional to the cotangent of the so-called {\it pseudophase}, defined as
the phase extracted with the one-channel L\"uscher
equation~\cite{Lage-KN,Lage-scalar}. It coincides with the usual scattering
phase in the absence of secondary channels.
Fig.~\ref{fig:pseudophase-exact} shows the real and imaginary parts of the
quantity $W^{-1}(E)$ that is constructed by using a simple parameterization of
the two-channel $T$-matrix, based on unitarized ChPT (see Ref.~\cite{Oller}).
For comparison, the finite-volume counterpart $W_L^{-1}(E)$, which is defined
by Eq.~(\ref{eq:WL}), is also shown. If the secondary channels were absent,
$W^{-1}(E)$ would be real and equal to $W_L^{-1}(E)$, up to exponentially
suppressed contributions. Fig.~\ref{fig:pseudophase-exact} clearly demonstrates
the effect of neglecting the secondary channels. While the ``true'' function
$W^{-1}(E)$ is a smooth (and complex) function of energy, the (real) function
$W_L^{-1}(E)$ has a tower of poles and zeros. The (simple) zeros of
$W_L^{-1}(E)$ (poles of $W_L(E)$) emerge, when $E$ coincides with one of the
energy levels in the interacting $\pi\eta$ system. The background, obtained by
subtracting all simple poles, is a smooth function of $E$. It should be
stressed that this statement remains valid even in the presence of multiple
secondary channels, some of which may contain three or more particles. The only
singularities that emerge in general are the simple poles that can be traced
back to the eigenvalues of the total Hamiltonian restricted to the subspace of
the secondary states\footnote{Strictly speaking, this argument applies only to
$W_L(E)$. However, assuming the absence of accidental multiple zeros in
$W_L(E)$, one may extend this argument to $W_L^{-1}(E)$.}.
It is important to note that, if $L$ tends to infinity, the optical potential
does not have a well-defined limit at a given energy. As the energy levels in
the secondary channel(s) condense towards the threshold, the quantity
$W_L^{-1}(E)$ at a fixed $E$ oscillates from $-\infty$ to $+\infty$. Thus, the
question arises, how the quantity $W^{-1}(E)$ can be obtained in the
infinite-volume limit.
It should be pointed out that this question has been already addressed in the
literature in the past. In this respect, we find Ref.~\cite{DeWitt} most useful.
In this paper it is pointed out that, in order to give a correct causal
description of the scattering process, one should consider adiabatic switching
of the interaction. This is equivalent to attaching an infinitesimal imaginary
part $E\to E+i\varepsilon$ to the energy. Further, as argued in
Ref.~\cite{DeWitt}, the limits $L\to\infty$ and $\varepsilon\to 0$ are not
interchangeable. A correct infinite-volume limit is obtained, when $L\to\infty$
is performed first (see Ref.~\cite{finite-t} for a more detailed discussion of
this issue). Physically, this statement is clear. The quantity $\varepsilon$
defines the available energy resolution, and the distance between the
neighboring energy levels tends to zero for $L\to\infty$. If this distance
becomes smaller than the energy resolution, the discrete levels merge into a cut
and the infinite-volume limit is achieved. It is also clear, why the
infinite-volume limit does not exist on the real axis: $\varepsilon=0$
corresponds to an infinitely sharp resolution and the cut is never observed.
The above qualitative discussion can be related to L\"uscher's regular summation
theorem~\cite{Luescher-regular}. On the real axis above threshold, the
zeta-function $Z_{00}(1;q_{\pi\eta}^2)$ in Eq.~(\ref{eq:WL}) does not have a
well-defined limit. Assume, however, that the energy $E$ gets a small positive
imaginary part, $E\to E+i\varepsilon$. The variable $q_{\pi\eta}^2$ also becomes
imaginary:
\begin{eqnarray}
q_{\pi\eta}^2\to
q_{\pi\eta}^2+\frac{i\varepsilon E}{2}\,\biggl(\frac{L}{2\pi}\biggr)^2
\biggl(1-\frac{(M_\eta^2-M_\pi^2)^2}{E^4}\biggr)=
q_{\pi\eta}^2+i\varepsilon'\, .
\end{eqnarray}
It is immediately seen that above threshold, $E>M_\eta+M_\pi$, the quantity
$\varepsilon'$ is strictly positive. Now, for real energies $E$, the nearest
singularity is located at the distance $\varepsilon$ from the real axis, so
the regular summation theorem can be applied. It can be straightforwardly
verified that the remainder term in this theorem vanishes as
$\exp(-\varepsilon'L)$ (modulo powers of $L$), when $L\to\infty$.
The above argumentation can be readily extended to the cases when intermediate
states contain any number of particles. Consider a generic loop diagram in the
effective field theory where these particles appear as internal lines. It is
most convenient to use old-fashioned time-ordered perturbation theory, where
the integrand contains the energy denominator ${(E+i\varepsilon-w_1({\bf p}_1)
-\ldots -w_n({\bf p}_n))^{-1}}$. Here, $w_i({\bf p}_i)\, ,~i=1,\ldots,n$ stand
for the (real) energies of the individual particles in the intermediate state.
It is clear that, if $\varepsilon\neq 0$, the denominator never vanishes, and
the regular summation theorem can be applied. The remainder, as in the
two-particle case, vanishes exponentially when $\varepsilon\neq 0$.
The analytic continuation into the complex plane can be done as follows.
Suppose one can measure the quantity $W_L^{-1}(E)$ on the real axis. Bearing in
mind the above discussion, one may fit this function by a sum of simple poles
plus a regular background.
Fig.~\ref{fig:pseudophase-exact-fit} shows the result of such a fit, which was
performed by using the function
\begin{eqnarray}\label{eq:realaxis}
\hat W_L^{-1}(E)=\sum_i\frac{Z_i}{E-Y_i}+D_0+D_1E+D_2E^2+D_3E^3\,
\end{eqnarray}
to fit a sample of the exact $W_L^{-1}$ without errors. The exact values of the
fit parameters are not listed here, since Fig.~\ref{fig:pseudophase-exact-fit}
is shown for illustrative purposes only. In the actual numerical simulation of
Sect.~\ref{sec:simulation}, the order of the polynomial is varied.
The continuation into the complex plane is trivial: one uses
Eq.~(\ref{eq:realaxis}) with fixed values of $Z_i,~Y_i,~D_i$ and makes the
substitution $E\to E+i\varepsilon$. The real and imaginary parts of the
quantity $\hat W_L^{-1}(E+i\varepsilon)$ for $\varepsilon=0.02~\mbox{GeV}$ and
$\varepsilon=0.05~\mbox{GeV}$ are shown in Fig.~\ref{fig:oscillations}. For
comparison, the real and imaginary parts of the infinite-volume counterpart
$W^{-1}(E)$ are also shown. As can be seen, the finite-volume ``optical potential''
oscillates around the true one, and the magnitude of these oscillations grows
when $\varepsilon$ becomes smaller. On the other hand, the artifacts
caused by a finite $\varepsilon$ grow when $\varepsilon$ becomes large.
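The effect of the $i\varepsilon$ prescription on the fitted representation, Eq.~(\ref{eq:realaxis}), can be illustrated directly. In the sketch below, the pole positions, residues and polynomial coefficients are purely illustrative numbers, not fit results:

```python
# Analytic continuation of the pole-plus-polynomial representation,
# Eq. (realaxis): E -> E + i*eps. All parameter values are illustrative.
Y = [1.05, 1.12, 1.20]       # finite-volume pi-eta levels (GeV), assumed
Z = [0.08, 0.10, 0.09]       # residues, assumed
D = [0.3, -0.2, 0.05, 0.0]   # polynomial coefficients D_0..D_3, assumed

def w_l_inv(E):
    """Pole-plus-polynomial representation evaluated at complex E."""
    val = sum(z / (E - y) for z, y in zip(Z, Y))
    val += sum(d * E ** j for j, d in enumerate(D))
    return val

# Directly on top of a pole, the oscillation amplitude is set by Z_i/eps:
# a larger eps smooths the finite-volume structure more strongly.
for eps in (0.02, 0.05):
    print(eps, abs(w_l_inv(Y[0] + 1j * eps).imag))
```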
\subsection{Infinite-Volume Extrapolation}\label{sec:smearing}
From the above discussion it is clear that, performing the limit $L\to\infty$
for a fixed $\varepsilon$, and then taking $\varepsilon\to 0$, the
infinite-volume limit is restored from $\hat W_L^{-1}(E+i\varepsilon)$. For the
actual extraction on the lattice, however, taking the large-volume limit is
hardly feasible. An alternative to this procedure is to ``smooth'' the
oscillations arising from Eq.~(\ref{eq:realaxis}) when it is evaluated at complex
energies at finite $L$ and $\varepsilon$. This allows one to perform the
extraction of the optical potential with reasonable accuracy even at
rather small values of $L$. As the true optical potential is known in the
present study, the validity of this procedure can be tested. We would like to
stress that the value $LM_\pi=5$ used in this study is rather small and thus
not completely beyond reach.
In the present section we test two different algorithms for smoothing the
quantity $\hat W_L^{-1}(E+i\varepsilon)$. In both cases, the result is called
$\hat W^{-1}$, i.e., the estimate of the true infinite-volume potential
$W^{-1}$. The final results of the numerical studies, presented in
Sect.~\ref{sec:simulation} are evaluated with both methods.
\subsubsection*{Parametric method}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{nmax_dependence.PDF}
\hspace*{0.2cm}
\includegraphics[width=0.48\textwidth]{finalCC.PDF}
\end{center}
\caption{
{\bf Left}: The $\chi^2$ as a function of the degree of the fit polynomial,
$n_{\rm max}$. While the $\chi^2$ of the unconstrained fits (gray squares)
monotonically decreases, a finite penalty factor of $\lambda=\hat\lambda_{\rm
opt}=0.2$ for $P_2$ stabilizes the result (red triangles). {\bf Right}: Cross
validation. The $\chi^2$ of the fits to the training set according to
Eq.~(\ref{chi22}) are shown with gray squares; the $\chi^2_V$ of these fits,
evaluated for the test/validation set, are indicated with red triangles; the
$\chi_t^2$ of these fits evaluated for the (unknown) true optical potential
according to Eq.~(\ref{truechi}) are displayed with blue circles. The minimum of
the $\chi_V^2(\lambda)$ of the test/validation set (red) estimates the penalty
factor $\hat\lambda_{\rm opt}\sim 0.15-0.2$ which is very close to the truly
optimal $\lambda_{\rm opt}\sim 0.2-0.3$ (blue). The absolute and relative scales
of the different $\chi^2$'s are irrelevant.
}
\label{fig:LASSO1}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{smooth_poly_penalty_re.PDF}
\hspace*{0.2cm}
\includegraphics[width=0.48\textwidth]{smooth_poly_penalty_im.PDF}
\end{center}
\caption{Real and imaginary parts of the optical potential. The thick dashed
(red) lines show the true optical potential $W^{-1}$. The thick solid (black)
lines show the reconstructed potential $\hat W^{-1}$ with $\hat\lambda_{\rm
opt}=0.2$. The thin lines show a largely under-constrained result (thin solid,
oscillating lines) with $\lambda=0.05$ and a largely over-constrained result
(thin dashed lines) with $\lambda=1$.}
\label{fig:LASSO2}
\end{figure}
The basic idea of this method is to fit the optical potential $\hat
W_L^{-1}(E+i\varepsilon)$ from Eq.~\eqref{eq:realaxis} at complex energies in
the whole energy range with a suitable Ansatz. Model selection is performed
with LASSO regularization (as explained in detail later) in combination with
cross validation. Such methods have the advantage that basic properties of the
optical potential, like the Schwarz reflection principle and threshold behavior,
can be built in explicitly. In our problem, this is particularly simple because
the only non-analyticity is given by the branch point at the $\pi\eta$
threshold. In more complex problems, additional non-analyticities like resonance
poles or complex branch points from multi-channel states~\cite{Doring:2009yv,
Ceci:2011ae} have to be included in the parameterization. Yet, all these
non-analyticities are situated on other than the first Riemann sheet. The
parametric and non-parametric methods proposed here use an extrapolation from
finite, but positive $\varepsilon$ to $\varepsilon\to 0$, i.e., an
extrapolation performed on the first Riemann sheet that is analytic by
causality.
A suitable yet sufficiently general parameterization of the optical potential
is given by
\begin{align}
\hat W^{-1}(E)=\sum_{j=0}^{n_{\rm max}}
\left[\left(a_j+i \,b_j\,p_{\pi\eta}\right)(E-E_0)^j\right]
\label{ansatz}
\end{align}
with real parameters $a_j,\,b_j$. The only non-analyticity of $\hat W^{-1}$ is
given by the cusp function $i\,p_{\pi\eta}$, evaluated at the complex energy $E$
(see Eq.~\eqref{winv}), that is therefore explicitly included in the Ansatz; the
rest is then analytic and can be expanded in a power series around a real $E_0$
chosen in the center of the considered energy region, in order to reduce
correlations among fit parameters (the actual value of $E_0$ is irrelevant).
To perform the effective infinite-volume extrapolation through smoothing, we
consider the minimization of the $\chi^2$,
\begin{align}\label{chi22}
\chi^2=\sum_{k=1}^m\frac{\left|\hat W^{-1}(E_k)-\hat W_L^{-1}(E_k)\right|^2}{\sigma_k^2}+P_i(a_j,\,b_j)\,,
\end{align}
where $P_i$ are penalty functions specified below. The absolute scale of the
$\chi^2$ is irrelevant. The quantity $\hat W_L^{-1}$ is fitted by sampling at
the complex energies $E_k=E_{\rm min}+k\,\delta E+i\varepsilon$
($\varepsilon=0.05$~GeV) over the considered energy range $E_{\rm min}\leq E\leq
E_{\rm max}$ with a step $\delta E=10$~MeV, and assigning an arbitrary error of
$\sigma_k=\sigma=1$~GeV. Note that in cross validation (to be specified later),
the position of the minimal $\chi^2$ determines the size of the penalty, i.e.,
the size of $\sigma$ is irrelevant. The infinite-volume optical potential is
then obtained by simply evaluating $\hat W^{-1}$ at real energies, i.e., setting
$\varepsilon=0$.
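Since the Ansatz~(\ref{ansatz}) is linear in the real parameters $a_j,\,b_j$, the unpenalized part of this fit reduces to a linear least-squares problem once the complex residuals are split into their real and imaginary parts. The following Python sketch illustrates this setup; the meson masses and the momentum function are illustrative stand-ins for the actual $\pi\eta$ kinematics, and the sampled values of $\hat W_L^{-1}(E_k)$ are assumed to be given.

```python
import numpy as np

M_PI, M_ETA = 0.138, 0.548        # GeV, illustrative meson masses

def p_pieta(E):
    """pi-eta CM momentum at complex energy E (first-sheet square root)."""
    E = np.asarray(E, dtype=complex)
    return np.sqrt((E**2 - (M_PI + M_ETA)**2)
                   * (E**2 - (M_PI - M_ETA)**2)) / (2 * E)

def design_matrix(E, E0, n_max):
    """Columns of the Ansatz (a_j + i b_j p_pieta)(E - E0)^j."""
    powers = np.array([(E - E0)**j for j in range(n_max + 1)]).T
    return np.hstack([powers, 1j * p_pieta(E)[:, None] * powers])

def fit_unpenalized(E, WL_inv_vals, E0, n_max):
    """Least squares for the real a_j, b_j; the complex residuals are
    stacked into their real and imaginary parts (penalty P_i = 0)."""
    A = design_matrix(E, E0, n_max)
    A_stacked = np.vstack([A.real, A.imag])
    y_stacked = np.concatenate([WL_inv_vals.real, WL_inv_vals.imag])
    params, *_ = np.linalg.lstsq(A_stacked, y_stacked, rcond=None)
    return params      # a_0..a_nmax followed by b_0..b_nmax
```

Evaluating the fitted Ansatz at real energies, i.e., setting $\varepsilon=0$, then yields the estimate $\hat W^{-1}$.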
If we assume for the moment that the penalty function $P_i$ in Eq.~\eqref{chi22}
is zero, then it is clear that the minimized $\chi^2$ is a monotonically
decreasing function of the degree of the fit polynomial $n_{\rm max}$. This is
demonstrated by the gray squares in Fig.~\ref{fig:LASSO1} (left panel).
Apparently, the fit first stabilizes for $n_{\rm max}=3-6$, which might lead to
the wrong conclusion that optimal smoothing has been obtained. Then, for
higher $n_{\rm max}$, another plateau is reached at $n_{\rm max}=7-9$ and then
another one for $n_{\rm max}=10-14$. Thus, without an additional criterion, one
cannot decide which $n_{\rm max}$ is optimal.
In general, for a small $n_{\rm max}$, the smoothing will be too aggressive
(large $\chi^2$), while for too large values of $n_{\rm max}$ the fit will start
following the oscillations (Fig.~\ref{fig:oscillations}), resulting in a low
$\chi^2$ but missing the point of smearing the optical potential. These two
extreme cases are illustrated in Fig.~\ref{fig:LASSO2} with the thin dashed and
thin solid lines, respectively\footnote{These curves are derived in a similar
but slightly different context (see below); nevertheless, they serve as a good
illustration of the statement made here.}. There is obviously a {\it
sweet spot} for $n_{\rm max}$. Model selection refers to the process of
determining this spot as outlined in the following.
Model selection for the fit~(\ref{ansatz}) is formally introduced through a
penalty $P(a_j,b_j)$ imposed on the fit parameters. The penalty is formulated
using the LASSO method developed by Tibshirani in 1996~\cite{Tib0}. See also
Refs.~\cite{Tib1, Tib2} for an introduction into the topic. The LASSO method has
been recently applied in hadronic physics for the purpose of amplitude
selection~\cite{Guegan:2015mea}.
A natural choice to suppress oscillations is to penalize the modulus of the
second derivative~\cite{Tib1},
\begin{align}\label{p1}
P_1(a_j,\,b_j)=\lambda^4\,
\int\limits_{E_{\rm min}+i\varepsilon}^{E_{\rm max}+i\varepsilon} dE\,
\left|\frac{\partial^2 \hat W^{-1}(E)}{\partial E^2}\right| \,,
\end{align}
where the integral is performed along a straight line in the complex plane.
Another choice is to penalize only the polynomial part of the Ansatz
(\ref{ansatz}), i.e., removing the $p_{\pi\eta}$ factor that has an inherently
large second derivative at the $\pi\eta$ threshold,
\begin{align}\label{p2}
P_2(a_j,\,b_j)=\lambda^4\,
\int\limits_{E_{\rm min}+i\varepsilon}^{E_{\rm max}+i\varepsilon} dE\,
\left(\left|\frac{\partial^2}{\partial E^2}
\sum_{j=0}^{n_{\rm max}}
a_j(E-E_0)^j\right|+\left|\frac{\partial^2}{\partial E^2}
\sum_{j=0}^{n_{\rm max}} b_j(E-E_0)^j\right|\right) \,.
\end{align}
Raising $\lambda$ to the fourth power yields a clearer graphical representation
of the penalty factor in the subsequent plots. Imposing a
penalty, the decrease of $\chi^2$ with $n_{\rm max}$ is eventually stabilized,
as shown by the red triangles in Fig.~\ref{fig:LASSO1} (left panel) for some yet
to be determined value of $\lambda$. Clearly, the minimized $\chi^2$ from
Eq.~\eqref{chi22} is a monotonically increasing function of $\lambda$ as
demonstrated by the gray squares in Fig.~\ref{fig:LASSO1} (right panel) for the
penalty function $P_2$.
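Numerically, $P_2$ amounts to integrating the moduli of the second derivatives of the two polynomial parts along the line ${\rm Im}\,E=\varepsilon$. A minimal sketch with a simple mean-value quadrature (the function names and the quadrature rule are our choices, not part of the method) could read:

```python
import numpy as np

def second_deriv(coeffs, E, E0):
    """d^2/dE^2 of the polynomial sum_j c_j (E - E0)^j."""
    return sum(j * (j - 1) * c * (E - E0)**(j - 2)
               for j, c in enumerate(coeffs) if j >= 2)

def penalty_P2(a, b, lam, E0, E_min, E_max, eps, n_grid=201):
    """Discretized Eq. (p2): lam^4 times the line integral of the moduli of
    the second derivatives of the a- and b-polynomials along Im E = eps."""
    E = np.linspace(E_min, E_max, n_grid) + 1j * eps
    integrand = (np.abs(second_deriv(a, E, E0))
                 + np.abs(second_deriv(b, E, E0)))
    return lam**4 * np.mean(integrand) * (E_max - E_min)
```

The penalty $P_1$ of Eq.~\eqref{p1} is obtained analogously by differentiating the full Ansatz, including the cusp factor, instead of the polynomial parts alone.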
The fitted data ($\varepsilon=0.05$~GeV) form the so-called {\it training
set}~\cite{Tib0}. The main idea of cross validation to determine the sweet spot
of $\lambda$ is as follows (for more formal definitions and $k$-fold cross
validation, see Refs.~\cite{Tib0,Tib1,Tib2}): after a random division of a given
data set into {\it training} and {\it test/validation} sets, the fit obtained
from the training set is used to evaluate its $\chi^2$ with respect to the
test/validation set, called $\chi^2_V$ in the following (without changing fit
parameters and setting $P_i=0$). For too large values of $\lambda$, both values
of $\chi^2$ from training and from test/validation sets will be large. For too
small $\lambda$, the fit to the training set is too unconstrained and sensitive
to unwanted random properties such as fluctuations in the training data.
However, those unwanted random properties are different in the validation set,
leading to a {\it worse} $\chi^2_V$ for too small $\lambda$. It is then clear
that $\chi^2_V(\lambda)$ exhibits a minimum at the sweet spot
$\lambda=\hat\lambda_{\rm opt}$.
Here, we cannot meaningfully divide the data set randomly. Instead, we have to
look for data, for which the physical property (infinite-volume optical
potential) is unchanged, but the unphysical property (oscillations from
finite-volume poles) is changed. This is naturally given by $\hat W_L^{-1}$ but
evaluated for a substantially different value of $\varepsilon$ (we choose
$\varepsilon=0.15$~GeV). The analytic form of Eq.~(\ref{ansatz}) ensures
that the infinite-volume optical potential can be analytically continued to
different values of $\varepsilon$, and only the unwanted finite-volume
oscillations are different for different $\varepsilon$. Indeed, as indicated
with the red triangles in Fig.~\ref{fig:LASSO1} (right panel), $\chi^2_V$
exhibits a clear minimum at $\lambda=\hat\lambda_{\rm opt}\sim 0.2$. The
potential dependence of this value on the chosen $\varepsilon$ is discussed
below.
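The cross-validation loop itself is easy to script. In the sketch below, the cusp factor of the Ansatz is omitted for brevity, so the model is a plain polynomial; \texttt{fit\_penalized} minimizes the $\chi^2$ of Eq.~(\ref{chi22}) for a given $\lambda$, and \texttt{choose\_lambda} scans a grid of $\lambda$ values and returns the one minimizing $\chi^2_V$ on the validation set (all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def poly_eval(coeffs, E, E0):
    """Plain polynomial model; the cusp factor is omitted for brevity."""
    return sum(c * (E - E0)**j for j, c in enumerate(coeffs))

def fit_penalized(E, y, lam, E0, n_coef, penalty):
    """Minimize the data term of Eq. (chi22) plus a lambda-dependent penalty."""
    def chi2(c):
        resid = poly_eval(c, E, E0) - y
        return np.sum(np.abs(resid)**2) + penalty(c, lam)
    res = minimize(chi2, np.zeros(n_coef), method="Powell",
                   options={"xtol": 1e-10, "ftol": 1e-10, "maxiter": 10000})
    return res.x

def choose_lambda(lams, E_tr, y_tr, E_va, y_va, E0, n_coef, penalty):
    """Cross validation: for each lambda, fit the training set and evaluate
    chi^2_V on the validation set (penalty switched off); return the best."""
    chi2_V = []
    for lam in lams:
        c = fit_penalized(E_tr, y_tr, lam, E0, n_coef, penalty)
        chi2_V.append(np.sum(np.abs(poly_eval(c, E_va, E_va * 0 + E0) - y_va)**2))
    chi2_V = np.array(chi2_V)
    return lams[int(np.argmin(chi2_V))], chi2_V
```

The penalty is passed as a callable so that $P_1$, $P_2$, or other variants can be swapped in; in a realistic application the Ansatz of Eq.~(\ref{ansatz}) replaces the plain polynomial.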
Furthermore, in this example, we know the underlying optical potential and can
simply determine the (generally unknown) truly optimal value for $\lambda$,
$\lambda_{\rm opt}$ by evaluating the $\chi^2$ of the estimate of the optical
potential, $\hat W^{-1}$, with respect to the true optical potential on the real
axis, $W^{-1}$,
\begin{align}\label{truechi}
\chi^2_t(\lambda)=\sum_{k=1}^m\frac{\left|\hat W^{-1}({\rm Re}\,E_k)
-W^{-1}({\rm Re}\,E_k)\right|^2}{\sigma^2} \,.
\end{align}
Note that the quantity $\chi^2_t(\lambda)$ (implicitly) depends on $\lambda$,
because the quantity $\hat W^{-1}({\rm Re}\,E_k)$ was determined at a fixed
value of $\lambda$. The quantity $\chi^2_t$ is shown with the blue filled
circles in Fig.~\ref{fig:LASSO1} (right panel). Its minimum at
$\lambda=\lambda_{\rm opt}$ is very close to the minimum of the validation
$\chi^2_V$ at $\lambda=\hat\lambda_{\rm opt}$, demonstrating that cross
validation~\cite{Tib1} is indeed capable of estimating the optimal penalty in
our case.
Instead of using the penalty function $P_2$, one can also choose $P_1$, see
Eqs.~\eqref{p1} and \eqref{p2}. The estimated $\hat \lambda_{\rm opt}$ given by
the minimum of $\chi^2_V$ will, of course, change. But, again, it was checked
that the new $\hat \lambda_{\rm opt}$ is very close to the new $\lambda_{\rm
opt}$ given by the minimum of $\chi^2_t$. Similarly, we have checked other forms
of penalization, with the same findings: imposing penalty on the third
derivative, variation of the value of $\varepsilon$ for the training set, and
variation of the value of $\varepsilon$ for the test/validation set. The only
restriction is that the $\varepsilon$ of the test/validation set has to be
chosen sufficiently larger than $\varepsilon$ of the training set for a minimum
in $\chi^2_V$ to emerge --- if the two $\varepsilon$'s are too similar, the
oscillations are too similar and no minimum in $\chi^2_V$ is obtained. Also,
$n_{\rm max}$ has to be chosen high enough so that, at a given $\varepsilon$ for
the training set, the fit is capable of fitting oscillations (for small
$\lambda$) which is a prerequisite for a minimum in $\chi^2_V$ to appear. In all
simulations we have chosen $n_{\rm max}=18$ although $n_{\rm max}\sim 7$ would
suffice as the left panel of Fig.~\ref{fig:LASSO1} shows.
For the initially considered case, using $P_2$ for the penalty,
$\varepsilon=0.05$~GeV for the training set, and $\varepsilon=0.15$~GeV
for the test/validation set, the resulting optical potential is shown in
Fig.~\ref{fig:LASSO2} with the thick black solid lines. For comparison, the true
optical potential is shown with the thick red (dashed) lines. The optical
potential is well reconstructed over the entire energy range. At the $\pi\eta$
threshold, the reconstructed potential reproduces the square-root behavior due
to the explicit factor $p_{\pi\eta}$ in the parameterization~(\ref{ansatz}). The
reconstructed potential explicitly fulfills the Schwarz reflection principle and
its imaginary part is zero below threshold. At the highest energies, small
oscillations become visible originating from the upper limit of the fitted
region at $E_{\rm max}=1.7$~GeV. Here, the smoothing algorithm, which is an
averaging in energy, simply has no information on $\hat W_L^{-1}$ beyond $E_{\rm
max}$. Note that in the numerical simulation of the next section, which uses
re-sampling techniques and realistic error bars, these small oscillations
themselves average out over the Monte-Carlo ensemble, simply resulting in a
widened, but smooth, error band at the highest energies.
For illustration, we also show in Fig.~\ref{fig:LASSO2} a largely
under-constrained result (too small $\lambda$, thin solid lines) in which the
oscillations from the finite-volume poles in $\hat W_L^{-1}$ survive. The
opposite case, i.e., an over-constrained fit with too large $\lambda$, is shown
with the thin dashed lines exhibiting too large of a penalization on the second
derivative.
\subsubsection*{Non-parametric method}
The advantage of non-parametric methods lies in their blindness to analytic
structures which, however, also means that the threshold behavior and the
Schwarz reflection principle cannot be implemented easily. As a particular
method, we utilize an approach commonly used in image processing applications,
known as Gaussian smearing. The basic idea of
Gaussian smearing is quite simple: for a given set of uniformly distributed
data, any data point is replaced by a linear combination of its neighboring data
points (within a given radius $r$), with individual weights, $w(x)$ given by
\begin{align}\label{eq:GAUSS}
w(x)\propto e^{-\frac{x^2}{2\sigma_0^2}}\,.
\end{align}
Here, $x$ denotes the distance of an individual point from the central one, and
$\sigma_0$ the standard deviation. Typically, the latter is tied to the smearing
radius by $\sigma_0=r/2$. Therefore, the only undetermined quantity is the
smearing radius $r$.
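For uniformly spaced data, this replacement is a discrete convolution with a normalized Gaussian kernel. A minimal sketch (the treatment of the edges, here a simple repetition of the boundary values, is our choice):

```python
import numpy as np

def gaussian_smear(values, dE, r):
    """Replace each point by the Gaussian-weighted average (Eq. (eq:GAUSS))
    of its neighbours within radius r, with sigma_0 = r / 2."""
    sigma0 = r / 2.0
    n_half = int(round(r / dE))                   # neighbours within the radius
    x = np.arange(-n_half, n_half + 1) * dE
    w = np.exp(-x**2 / (2.0 * sigma0**2))
    w /= w.sum()                                  # normalize the weights
    padded = np.pad(values, n_half, mode="edge")  # repeat boundary values
    return np.convolve(padded, w, mode="valid")
```

Applied separately to the real and imaginary parts of $\hat W_L^{-1}$ on a uniform energy grid, this suppresses structures narrower than $r$ while leaving slower variations essentially intact.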
The general prescription to determine the smearing radius should rely on the
properties of the original data only. Recall that the latter is determined by
the function $\hat W_L^{-1}$ in Eq.~\eqref{eq:realaxis}, which splits up into a
real and an imaginary set, when evaluated at the energy $E+i\varepsilon$ for a
fixed $\varepsilon>0$ and uniformly distributed values of $E$. Therefore, after
the fits to the (synthetic) lattice data are performed, the scale of the
structures to be smeared is determined by the distance between two poles, see
Fig.~\ref{fig:twisted-fits}. Of course, since the poles are not distributed
uniformly over the whole energy range, one could argue in favor of using
different values of $r$ for different energies. It is also clear that the
constraint on the standard deviation, $\sigma_0=r/2$, affects the result of the
smoothing.
However, in order not to over-complicate the procedure, in the following we
choose the smearing radius to be twice as large as the typical (average)
distance between two poles. If the radius is much larger than this, physical
information (i.e. the functional form of the optical potential) will be smeared
out. If, however, the radius is much smaller than this value, then the
(unphysical) oscillations will remain, preventing the reconstruction of the
underlying optical potential. The situation is in fact very similar to the
under- and over-constrained results, discussed in the previous section for the
too small and too large values of $\lambda$.
After the parameters of the smearing kernel \eqref{eq:GAUSS} are fixed, the
method is applied to the sets of real and imaginary parts of $\hat W_L^{-1}$ at
a fixed $\varepsilon>0$. Then the procedure is repeated, each time assuming a
slightly smaller value of $\varepsilon$ than before. In the final step, a simple
(polynomial) extrapolation is performed to real energies, i.e.
$\varepsilon=0$, to obtain the final result of this procedure, namely $\hat
W^{-1}(E)$.
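This final $\varepsilon\to 0$ step can be sketched as a per-energy polynomial fit in $\varepsilon$, evaluated at $\varepsilon=0$; here the smeared values at the different $\varepsilon$'s are assumed to be collected in a matrix with one row per value of $\varepsilon$:

```python
import numpy as np

def extrapolate_eps_to_zero(eps_values, smeared, deg=2):
    """Polynomial extrapolation eps -> 0.  'smeared' has shape
    (n_eps, n_energies); each energy column is fitted independently, and
    the constant term gives the eps = 0 value."""
    coeffs = np.polyfit(np.asarray(eps_values), np.asarray(smeared), deg)
    return coeffs[-1]          # polynomial value at eps = 0, per energy
```

The degree of the extrapolating polynomial is a choice; it should stay well below the number of available $\varepsilon$ values.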
\medskip
In this section, we have demonstrated that the real and imaginary parts of the
optical potential can be reconstructed from the pseudophase measured on the
lattice for real energies, $W_L^{-1}$, if the analytic continuation into the
complex plane is performed. Two distinct methods are presented to smear the
oscillations which emerge from the analytic continuation, and to recover the
optical potential for real energies. It remains to be seen how the pseudophase
can be measured in practice. This issue will be considered in
Sect.~\ref{sec:realisticpseudophase}, where a realistic numerical simulation will
be carried out as well.
\section{Reconstruction of the optical potential}
\label{sec:realisticpseudophase}
The quantity $W_L^{-1}(E)$, which is used to extract the optical potential,
along with the energy $E$, depends on other external parameters, say, on
the box size $L$, boundary conditions, etc. In the fit to $W_L^{-1}(E)$, the
values of these parameters have to be fixed. Otherwise, for example, the
position of the poles in $W_L^{-1}(E)$ will be volume-dependent and a fit is not
possible. Hence, we are quite restricted in the ability to scan the variable
$E$: the knob, which tunes $E$, must leave all other parameters in the
pseudophase intact.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Points-1MeV188p50sL5.pdf}
\end{center}
\caption{
Subset (75 sets) of the re-sampled lattice data, where each type of marker
symbol shows a set of $189$ energy eigenvalues, randomly distributed with
$\Delta E=1$~MeV around the central energy eigenvalues extracted from
Eq.~\eqref{eq:WL} imposing twisted boundary conditions. The gray dashed line
shows the actual amplitude $W_L^{-1}(E)$ to guide the eye.}
\label{fig:twisted-sets}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth,trim= 0.1cm 3cm 0.1cm 3cm]
{DoWeNeedPolesBelowKKbar.pdf}
\end{center}
\caption{Comparison of different scenarios with respect to the number of poles
reconstructed below the primary threshold. The curves were produced by using
the parameters of the perfect fit from Sect.~\ref{sec:complex_plane},
but neglecting a certain number of poles below the $K\bar K$ threshold.}
\label{fig:doweneedpoles}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{FITSreal_1MeV188p75sL5.pdf}
\end{center}
\caption{A subset (75 sets) of the fits of Eq.~\eqref{eq:WL} to the
synthetic lattice data as described in the main text. Different curves represent
fits to different sets of re-sampled synthetic lattice data corresponding to the notation
of Fig.~\ref{fig:twisted-sets}. The gray dashed line shows the
actual amplitude $W_L^{-1}(E)$ to guide the eye.}
\label{fig:twisted-fits}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{POLY-1MeV500s188p28lambda3th.pdf}
\includegraphics[width=\linewidth]{GAUSS-1MeV500s188p28lambda3th.pdf}
\end{center}
\caption{Results of the smearing and extrapolation to real energies using
parametric method (top) and Gaussian smearing (bottom). The full lines show the
average of the re-sampling of all sets, whereas the darker (lighter)
bands show the corresponding 1 (2) $\sigma$ error bands. The exact infinite
volume solution is shown by the dashed lines for comparison.}
\label{fig:final1MeV}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{POLY-2MeV300s188p28lambda3th.pdf}
\includegraphics[width=\linewidth]{POLY-3MeV300s188p28lambda3th.pdf}
\end{center}
\caption{Results of the smearing and extrapolation to real energies using
parametric method for synthetic lattice data with $\Delta E=2$~MeV (top) and
$\Delta E=3$~MeV (bottom). The full lines show the average of the
re-sampling of all sets, whereas the darker (lighter) bands show the
corresponding 1 (2) $\sigma$ error bands. The exact infinite volume solution is
shown by the dashed lines for comparison.}
\label{fig:final23MeV}
\end{figure}
\subsection{Partially twisted boundary conditions}\label{sec:partialtwisting}
In certain systems, there indeed exists a possibility to scan the energy within
a given range in this manner. It is provided by the use of twisted boundary
conditions and can be realized, e.g., in the coupled-channel $\pi\eta-K\bar K$
scattering. Namely, as was discussed in
Refs.~\cite{Lage-scalar,Agadjanov-twisted}, in this system it is possible to
apply (partially) twisted boundary conditions so that, when the twisting angle
is changed continuously, the $K\bar K$ threshold moves, whereas the $\pi\eta$
threshold stays intact. This can be achieved, for example, by twisting the light
$u,d$ quarks by the same angle and leaving the $s$-quark with periodic boundary
conditions. This will lead to the modification of the secular
equation~\eqref{eq:secular}, replacing $Z_{00}(1;q_{K\bar K}^2)$ by
\begin{equation} Z_{00}^\theta(1;q_{K\bar
K}^2)=\frac{1}{\sqrt{4\pi}}\,\sum_{{\bf n}\in{\mathbb Z}^3}\frac{1}{\left({\bf
n}+\boldsymbol{\theta}/2\pi\right)^2-q_{K\bar K}^2}\, . \end{equation}
The expression for $W_L^{-1}(E)$ remains the same and does not contain the
twisting angle $\boldsymbol{\theta}$.
The method can be used to study the isospin $I=1$ scattering in the
$\pi\eta-K\bar K$ system. As shown in Ref.~\cite{Agadjanov-twisted}, despite the
presence of the annihilation diagrams, the partial twisting in this case is
equivalent to the full twisting, if the light quarks are twisted, whereas
twisting of the $s$-quark does not lead to an observable effect. As a rule of
thumb, one expects that the partial twisting of a given quark will be equivalent
to full twisting, only if this quark line goes through the diagram without being
annihilated (of course, a rigorous proof of this statement should follow by
using effective field theory methods~\cite{Agadjanov-twisted}). In our case, we
could choose to work with the state with maximal projection of the isospin, say
$I=1,I_3=1$. This state contains one $u$-quark and one $\bar d$-quark, which
cannot be annihilated. Choosing the same twisting angle for both quarks, the
system stays in the center-of-mass frame and the pseudophase becomes independent
from the twisting angle, as required. From the above discussion it is also
clear that using our method for the extraction of the optical potential in the
channel with isospin $I=0$ implies the use of full twisting instead
of partial twisting.
The same trick can be used to study the $Z_c(3900)$ and $Z_c(4025)$ states,
which both have isospin $I=1$. Twisting $u$- and $d$-quarks by the same angle,
the $D$- and $D^*$-mesons will get additional momenta proportional to the
twisting angle, whereas the $J/\psi$, $h_c$ and $\pi$-mesons will not.
Consequently, one may choose the channels containing the $D$ and $D^*$ mesons as
the primary ones (in our nomenclature) and regard every other channel as
secondary. For this choice, the pseudophase will not depend on the twisting
angle.
Last but not least, an unconventional twisting procedure was used in the study
of the $J/\psi\phi$ scattering from $Y(4140)$ decays~\cite{Sasaki}. Namely,
in that work the $c$- and $s$-quarks were twisted by the angles
$\boldsymbol\theta$ and $-\boldsymbol\theta$, respectively, whereas their
Hermitean conjugates $\bar c$, $\bar s$ were subject to periodic boundary
conditions. Although in the particular case of $J/\psi\phi$ scattering this
twisting cannot be used for the extraction of the optical potential, one cannot
exclude the possibility that this kind of twisting could be applied to other
systems for this purpose. For this reason, we consider this case of
(unconventional) twisting in detail in App.~\ref{app:twisting}.
\subsection{Analysis of synthetic data}\label{sec:simulation}
In the following, we shall reconstruct the optical potential from a synthetic
lattice data set generated by the chiral unitary approach of
Ref.~\cite{Oller}. Twisted boundary conditions are applied as described
above, and the box size is taken to be $L=5M_\pi^{-1}$. In the first stage of
our analysis we have observed that more than 100 energy eigenvalues are required
to extract the potential in the considered, and quite wide, energy range from $E=2M_K$ to $E=1.7$~GeV. To produce the synthetic data, we consider the following set
of six different twisting angles
\begin{align}
\boldsymbol{\theta}=
\begin{pmatrix}
0\\0\\0
\end{pmatrix},~\begin{pmatrix}
0\\ 0\\ \pi
\end{pmatrix},~\begin{pmatrix}
0\\ \pi\\ \pi
\end{pmatrix},~\begin{pmatrix}
\pi\\ \pi\\ \pi
\end{pmatrix},~\begin{pmatrix}
0\\ 0\\ \pi/2
\end{pmatrix},~\begin{pmatrix}
0\\ \pi/2\\ \pi/2
\end{pmatrix}.
\label{thetas}
\end{align}
For these values, $Z_{00}^\theta(1;q_{K\bar K}^2)$ has the smallest
number of poles. This requirement is important when the energy eigenvalues are
measured with finite accuracy, since in the proximity of its poles the function
$Z_{00}^\theta(1;q_{K\bar K}^2)$ will exhibit a very large
uncertainty. Solving Eq.~\eqref{eq:secular} with $Z_{00}(1;q_{K\bar K}^2)$
replaced by $Z_{00}^\theta(1;q_{K\bar K}^2)$ for each of the aforementioned
angles we were able to extract $186$ energy eigenvalues above and $3$ below the
$K\bar K$ threshold. Further, in any realistic lattice simulation, the
eigenvalues will be known only up to a finite precision. To check the
feasibility of the proposed method, it is important to account for this error,
$\Delta E$, and to see how this uncertainty\footnote{Since higher excited levels
are harder to measure, the uncertainty will presumably increase with the energy.
However, in this first study we will assume constant values for $\Delta E$.} is
reflected in the final result as studied with re-sampling techniques in the
following. Therefore, we start from a sufficiently large number ($\sim 1000$) of
re-sampled lattice data sets, normally distributed around the
($189$) synthetic eigenvalues with a standard deviation of $\Delta E$. An example of 75
synthetic lattice data sets with $\Delta E=1$~MeV is presented in
Fig.~\ref{fig:twisted-sets}.
In the next step, we determine the parameters of Eq.~\eqref{eq:realaxis} for
each of these sets. Prior to doing so, we have to clarify several questions:
\begin{itemize}
\item \textbf{Range of applicability.}
Below the $K\bar K$ threshold, the function $Z_{00}^\theta(1;q_{K\bar K}^2)$
does not depend on $\boldsymbol{\theta}$ up to exponentially suppressed
contributions. Therefore, only a limited number of energy eigenvalues can be
determined. A reliable extraction of positions and residua of all four lowest
poles is not possible because the twisting cannot generate the necessary scan
of $W_L^{-1}$ in this energy region. This means that, on the one hand, this
approach does not allow one to extract the optical potential below the primary
($K\bar K$) threshold. On the other hand, it is crucial to recall that, due to
the smearing applied in the complex energy plane, this failure will distort the
real and especially the imaginary parts of the reconstructed $\hat W^{-1}(E)$. This
is demonstrated in Fig.~\ref{fig:doweneedpoles}, which was produced by using the
test parameters of the perfect fit from the last section, but neglecting a
certain number of poles below the $K\bar K$ threshold. It is seen that the
imaginary part of $\hat W^{-1}$ at the primary threshold deviates by about
$50\%$, if no poles are considered below this threshold. However, already the
inclusion of the first pole below the primary threshold improves the
description drastically. Therefore, all poles above as well as the one below the
primary threshold should be considered in the fit to the (synthetic) lattice
data. Note also that if the secondary channels open above the primary channel,
none of these complications arise.
\item \textbf{Number of poles - starting values.}
We found that, for sufficiently many eigenvalues and $\Delta E$ of the
order of several MeV, the number of poles above the primary threshold to be
fitted can be determined by searching for a rapid sign change of
$Z_{00}^\theta(1;q_{K\bar K}^2)$. The corresponding energy eigenvalues serve us
as limits on the pole positions, while the residua are allowed to vary freely.
\item \textbf{Highest order of the polynomial part.}
In principle, the order of the polynomial part of
Eq.~(\ref{eq:realaxis}) is not restricted a priori. We have tested explicitly
that adding terms of fourth or fifth order in energy to the fit function yields
only a small change of the reconstructed potential. This part may be further
formalized by conducting combined $\chi^2$- and $F$-tests on the $\chi^2$
defined below.
\item \textbf{Definition of $\chi^2$.}
The uncertainty of the (synthetic) lattice data is given by $\Delta E$ only.
Therefore, a proper definition of $\chi^2_{\rm d.o.f.}$ should account for the
difference between the measured $\{E_i|i=1,...,N\}$ and fitted eigenvalues
$\{E^{f}_i|i=1,...,N\}$ compared to $\Delta E$ for all $N$ data points. The
$E_i^f$ eigenvalues are defined as the solutions of the following equation
\begin{align}
\frac{2}{\sqrt{\pi}L}\,Z_{00}^\theta(1;q^2_{K\bar K}(E))
=\sum_j\frac{Z_j}{E-Y_j}+D_0+D_1E+D_2E^2+D_3E^3\,,
\end{align}
which is technically very intricate. The problem of finding such solutions can
be circumvented by expanding both sides of the latter equation in powers of
$(E_i^f-E_i)$ around $E_i$ for each $i=1,...,N$. Up to next-to-leading order in
this expansion, the correct quantity to minimize reads
\begin{align}\label{chi2dof}
\chi^2_{\rm d.o.f.} = \frac{1}{N-n}\sum_{i=1}^N \frac{1}{\Delta E^2}
\left(\frac{\hat W_L^{-1}(E)-Z_{00}^{\theta_i}(1;q^2_{K\bar K}(E))}
{\left(Z_{00}^{\theta_i}(1;q^2_{K\bar K}(E))\right)'-\left(\hat W_L^{-1}(E)
\right)'}\right)^2_{E=E_i} \,,
\end{align}
where $n$ is the number of free parameters and $\boldsymbol{\theta}_i$ is the
twisting angle corresponding to the energy eigenvalue $E_i$. Note that the
$\chi^2$ in Eq.~(\ref{chi2dof}) differs from the usual definition by a
correction factor in the denominator, given by the difference of the derivatives
of the L\"uscher and the fit function.
\end{itemize}
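A direct numerical transcription of Eq.~(\ref{chi2dof}) is straightforward once the (twisted) L\"uscher function and the fit function are available as callables; in the sketch below both are generic placeholders, and the derivatives are taken by central finite differences:

```python
import numpy as np

def chi2_dof(E_levels, Z_funcs, W_fit, dE, n_free, h=1e-6):
    """Eq. (chi2dof): for each measured level E_i, the residual between the
    fit and the (twisted) Luescher function is divided by the difference of
    their derivatives, then normalized by dE^2 and the number of d.o.f."""
    def d(f, E):                          # central finite difference
        return (f(E + h) - f(E - h)) / (2.0 * h)
    total = 0.0
    for E, Z in zip(E_levels, Z_funcs):   # Z belongs to that level's theta_i
        num = W_fit(E) - Z(E)
        den = d(Z, E) - d(W_fit, E)
        total += (num / den)**2
    return total / (dE**2 * (len(E_levels) - n_free))
```

Analytic derivatives can of course be substituted where available; the finite-difference step is only an illustrative default.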
For every member of the data sets, each consisting of 188 energy eigenvalues
(186 above and 2 below threshold), we perform a fit, minimizing $\chi^2_{\rm
d.o.f.}$ given in Eq.~\eqref{chi2dof}. Note that the two closest energy eigenvalues
below the $K\bar K$ threshold, which are included in the fit, are assigned a
weight factor of 6, because they are measured at every value of
$\boldsymbol{\theta}$ of Eq.~(\ref{thetas}) and do not depend on its
value up to exponentially suppressed contributions. Further, the number of free
parameters $n$ is set to $32$, consisting of 13(1) pole positions and 13(1)
residua above(below) $K\bar K$ threshold, as well as 4 parameters in the
polynomial part. The minimization is performed by using the Minuit2 (5.34.14)
library from Ref.~\cite{MINUIT}. A representative subset (75 synthetic lattice
data sets) of the results of the fits is shown in Fig.~\ref{fig:twisted-fits}.
It is seen that the data are described fairly well by all fits in a large energy
region starting above the $K\bar K$ threshold. At and below this threshold,
there is a much larger spread of the fit curves describing the data. In
particular, the pole at $\sim0.9$~GeV is not fixed very precisely, which is
quite natural, given the small number of synthetic data points in this
energy region.
For each of the above fits we proceed as described in
Sect.~\ref{sec:complex_plane}. First, the function $\hat W_L^{-1}(E)$ is
evaluated at the complex energies. Second, using the Gaussian smearing as well
as the parametric method discussed in
Sect.~\ref{sec:smearing}, the real and imaginary parts of the potential are
smoothened. The penalty factor $\lambda=0.28$ (see App.~\ref{app:reallambda})
and the smearing radius $r=0.2$~GeV are used in these methods, respectively.
Finally, for every energy, we calculate the average and the standard deviation
$\sigma$. The result of this procedure is presented in Fig.~\ref{fig:final1MeV}.
It is seen that both smearing methods yield very similar results. Overall, the
exact solution (the dashed line) in the considered energy region lies within 1
or 2 sigma bands around the reconstructed potential. The error band appears to
be comfortably narrow, but becomes broader around the $K\bar K$ threshold and
$E_{\rm max}=1.7$~GeV. This effect is a natural consequence of the missing
information outside the energy region, which influences the prediction within
the energy region via smearing during the intermediate steps of the potential
reconstruction.
Furthermore, we have repeated the whole procedure of synthetic lattice data
generation, fitting and recovering of the optical potential for higher
uncertainty of the energy eigenvalues, $\Delta E=2$~MeV and $\Delta E =3$~MeV.
The results are presented in Fig.~\ref{fig:final23MeV} and show that the error
bars grow roughly linearly with $\Delta E$ and that the real part of the
reconstructed amplitude remains quite stable. The imaginary part is more
sensitive to the value of $\Delta E$. Further, at even higher values of
$\Delta E\sim 10$~MeV, the fit is not reliable anymore and the imaginary part
becomes very small.
\section{Conclusions}\label{sec:concl}
\begin{itemize}
\item[i)]
In the present paper, we formulate a framework for the extraction of the
complex-valued optical potential, which describes hadron-hadron scattering in
the presence of the inelastic channels, from the energy
spectrum of lattice QCD. The optical potential defined in the present article
is obtained by using the causal prescription $E\to E+i\varepsilon$ for
the continuation into the complex energy plane. It converges to the ``true''
optical potential in the limit $L\to\infty$, $\varepsilon\to 0$. A
demonstration of the effectiveness of the method has been carried out by
utilizing synthetic data.
\item[ii)]
The approach requires the precise measurement of the whole tower of the energy
levels in a given interval. The optical
potential is then obtained through averaging over all these levels.
\item[iii)]
Moreover, the applicability of this approach critically depends on our ability to
take the lattice data at neighboring energies without changing the interaction
parameters in the secondary channels. This can be achieved, e.g., by using
(partially) twisted boundary conditions that affect the primary
channel only. In the paper, we consider several systems
where the method can be applied. It is remarkable that some candidates for the
QCD exotica are also among these systems.
We would like to emphasize that the use of twisted boundary
conditions is only a tool, which is used to perform a continuous energy scan of
a certain interval. Whatever method is used to measure the dependence of the
pseudophase on energy (all other parameters fixed), our approach, based on the
analytic continuation into the complex plane, could be immediately applied.
\item[iv)]
The approach could be most useful for analyzing systems in which the
inelastic channels contain three or more particles. Whereas
direct methods based on the use of multi-particle scattering equations in a
finite volume will necessarily be cumbersome and hard to use, nothing changes
if our approach is applied. The reason for this is that, in the case of an
intermediate state with any number of particles, the single poles are the only
singularities in any Green's function in a finite volume.
\end{itemize}
\section*{Acknowledgments}
We would like to thank S. Aoki, G. Schierholz and C. Urbach for helpful
discussions. Financial support by the Deutsche Forschungsgemeinschaft (CRC 110,
``Symmetries and the Emergence of Structure in QCD''), the Volkswagenstiftung
under contract no. 86260, the National Science Foundation (CAREER grant No.
1452055, PIF grant No. 1415459), GWU (startup grant), The Chinese
Academy of Sciences (CAS) President's International Fellowship Initiative
(PIFI) (Grant No. 2015VMA076), and by the Bonn-Cologne Graduate School of
Physics and Astronomy is gratefully acknowledged.
\section{Introduction}
Throughout this paper we work over the complex number field $\mathbb{C}$.
A \textit{smoothing} of a surface $X$ is a flat family $\mathfrak{X}\to\mathfrak{D}$
over a unit disk $0\in\mathfrak{D}\subset\mathbb{C}$ such that the fiber $\mathfrak{X}_0$
is isomorphic to $X$ and the general fiber is smooth.
In this situation $X$ can be considered as a degeneration of a fiber $\mathfrak{X}_t$, $0\neq t\in\mathfrak{D}$.
A smoothing is said to be $\mathbb{Q}$-\textit{Gorenstein} if so the total family $\mathfrak{X}$ is.
Throughout this paper a \textit{del Pezzo surface} means a normal projective
surface whose anticanonical divisor is $\mathbb{Q}$-Cartier and ample.
We study $\mathbb{Q}$-Gorenstein smoothings of del Pezzo surfaces with log canonical singularities.
This is interesting for applications to birational geometry and the minimal model program
(see e.g. \cite{Mori-Prokhorov-2008d}, \cite{Prokhorov-e-QFano7}) as well as to moduli problems \cite{Kollar-ShB-1988}, \cite{Hacking2004}.
Smoothings of del Pezzo surfaces with log terminal singularities were considered in \cite{Manetti-1991},
\cite{Hacking-Prokhorov-2010}, \cite{Prokhorov-degenerations-del-Pezzo}.
\begin{theorem}
\label{theorem-main}
Let $X$ be a del Pezzo surface with only log canonical singularities and $\uprho(X)=1$.
Assume that $X$ admits a $\mathbb{Q}$-Gorenstein smoothing and there exists at least one non-log terminal point $(o\in X)$.
Let $\eta : Y\to X$ be the minimal resolution. Then there is a rational curve fibration
$\varphi : Y\to T$ over a smooth curve $T$ such that exactly one component $C_1$ of the
$\eta$-exceptional divisor dominates $T$;
it is a section of $\varphi$, and its discrepancy equals $-1$.
Moreover, $o$ is the only non-log terminal singularity and singularities of $X$
outside $o$ are at worst Du Val of type \type {A}. The surface
$X$ and singular fibers of $\varphi$ are described in the table below.
All the cases except possibly for
\ref{types-I=2-2I2-IV} with $5\le n\le 8$ and~\ref{types-I=2-2IV} with $5\le n\le 10$ occur.
\end{theorem}
\par\medskip\noindent
\scalebox{0.9}{
\setlength{\extrarowheight}{6pt}
\begin{tabularx}{\textwidth}{c|c|c|c|c|c|c}
&\multicolumn2{c|}{singularities }&\multirow2{*}{$\uprho(Y)$ }&
\multirow2{*}{$K_X^2$ }&
\multirow2{*}{singular}&
\multirow2{*}
{condition}
\\
\cline{2-3}
& $(o\in X)$&$X\setminus\{o\}$&&&fibers of $\varphi$&on $n$
\\
\hline
\nr{}
\label{types-simple-elliptic}
& $\mathrm{Ell}_n$& $\emptyset$& $2$& $n$& $\emptyset$ & $n\le 9$
\\
\nr{}
\label{types-I=2-4-I2}
& $[n; [2]^4]$& $4\operatorname{A}_1$& $10$& $n-2$& $4\mathrm{(I_2)}$ & $3\le n\le6$
\\
\nr{}
\label{types-I=2-2I2-IV}
&$[n,2,2; [2]^4]$& $2\operatorname{A}_1$& $10$& $n-2$& $2\mathrm{(I_2)}\mathrm{(II)}$ & $3\le n\le 8$
\\
\nr{}
\label{types-I=2-2IV}
& $[2,2,n,2,2; [2]^4]$&$\emptyset$& $10$& $n-2$& $2\mathrm{(II)}$ & $3\le n\le10 $
\\
\nr{}
\label{types-I=3}&
$[n; [3]^3]$&$3\operatorname{A}_2$& $11$& $n-1$& $3\mathrm{(I_3)}$ & $2,3,4$
\\
\nr{}
\label{types-I=4-I22I4}&
$[n; [2],[4]^2]$ & $\operatorname{A}_1$, $2\operatorname{A}_3$& $12$& $n-1$ & $\mathrm{(I_2)}2\mathrm{(I_4)}$ & $2,3$
\\
\nr{}
\label{types-I=6}&
$[2; [2],[3],[6]]$&$\operatorname{A}_1$, $\operatorname{A}_2$, $\operatorname{A}_5$& $13$& $1$& $\mathrm{(I_2)}\mathrm{(I_3)}\mathrm{(I_6)}$ &
\end{tabularx}
}
\par\medskip\noindent
For a precise description of the surfaces that occur in our classification we refer to Sect.
\ref{section-fibrations}.
To show the existence of $\mathbb{Q}$-Gorenstein smoothings we use unobstructedness
of deformations (see Proposition~\ref{no-obstructions}) and
local investigation of $\mathbb{Q}$-Gorenstein smoothability
of log canonical singularities:
\begin{theorem}
\label{Theorem-Q-smoothings}
Let $(X\ni P)$ be a strictly log canonical surface singularity of index $I>1$ admitting a
$\mathbb{Q}$-Gorenstein smoothing. Then it belongs to one of the following types:
\par\medskip\noindent\setlength{\extrarowheight}{1pt}
\begin{tabularx}{\textwidth}{l|l|l|l|l|l}
\hline
&$I$ & $(X\ni P)$&{\rm condition}& $\upmu_P$&$-\operatorname{K}^2$
\\\hline
\nrrr
\label{smoothing-2}&$2$& $[n_1,\dots, n_s; [2]^4]$& $\sum (n_i-3)\le 3$& $4-\sum (n_i-3)$& $\sum (n_i-2)$\\
\nrrr
\label{smoothing-3}&$3$ &$[n; [3],[3],[3]]$& $n=2,3,4$& $4-n$&$n$\\
\nrrr
\label{smoothing-4}&$4$ & $[n; [2],[4],[4]]$& $n=2,3$& $3-n$&$n+1$\\
\nrrr
\label{smoothing-6}&$6$ &$[2; [2],[3],[6]]$&&$0$&$4$
\end{tabularx}
where $\upmu_P$ is the Milnor number of the smoothing.
$\mathbb{Q}$-Gorenstein smoothings exist in cases~\ref{smoothing-3},
\ref{smoothing-4},~\ref{smoothing-6}, as well as in the case~\ref{smoothing-2}
for singularities of types $[n; [2]^4]$ with $n\le 6$,
$[n_1,\dots, n_s; [2]^4]$ with $\sum (n_i-2)\le 2$, $[4,3; [2]^4]$, and $[3,3,3; [2]^4]$.
In all other cases the existence of $\mathbb{Q}$-Gorenstein smoothings is unknown.
\end{theorem}
Smoothability of log canonical singularities of index $1$ was studied earlier
(see e.g. \cite[Ex. 6.4]{Looijenga-Wahl-1986}, \cite[Corollary 5.12]{Wahl1980}).
As a by-product we construct essentially canonical threefold
singularities of indices $5$ and $6$.
We say that a canonical singularity $(\mathfrak{X}\ni o)$ is \textit{essentially canonical}
if there exists a crepant divisor with center $o$.
V.~Shokurov conjectured that essentially canonical singularities of
given dimension have bounded indices.
This is well-known in dimension two: canonical surface singularities are
Du Val and their index equals $1$.
Shokurov's conjecture was proved
in dimension three by M. Kawakita \cite{Kawakita-index}.
More precisely, he proved that the index of an essentially canonical
threefold singularity is at most $6$.
The following theorem supplements Kawakita's result.
\begin{theorem}
\label{theorem-main-index}
For any $1\le I\le 6$ there exists
a three-dimensional essentially
canonical singularity of index $I$.
\end{theorem}
In fact, our result is new only for $I=5$ and $6$: \cite{Hayakawa-Takeuchi-1987}
classified threefold canonical hyperquotient singularities,
and among them there are examples satisfying the conditions of
our theorem with $I \le 4$.
Theorem~\ref{theorem-main-index} together with \cite{Kawakita-index} gives the following
\begin{theorem}
Let $\mathfrak I$ be the set of indices of three-dimensional essentially
canonical singularities. Then
\begin{equation*}
\mathfrak I=\{1,2,3,4,5,6\}.
\end{equation*}
\end{theorem}
The paper is organized as follows.
Sect.~\ref{sect-lc} is preliminary. In Sect.~\ref{sect-smoothings}
we obtain necessary conditions for $\mathbb{Q}$-Gorenstein smoothability
of two-dimensional log canonical singularities.
In Sect.~\ref{section-Examples} we construct examples of $\mathbb{Q}$-Gorenstein smoothings.
Theorem~\ref{theorem-main-index} will be proved in Sect.
\ref{sect-Indices}.
In Sect.~\ref{sect-Noether}
we collect important results on del Pezzo surfaces
admitting $\mathbb{Q}$-Gorenstein smoothings. The main birational construction
for the proof of Theorem~\ref{theorem-main}
is outlined in Sect.~\ref{section-Del-Pezzo-surfaces},
which will be considered in Sect.
\ref{section-fibrations} and
\ref{section-del-pezzo}.
\par\medskip\noindent
\textbf{Acknowledgments. }
I thank Brendan Hassett whose questions encouraged me to write up
my computations. The questions were asked during Simons Symposia ``Geometry Over Nonclosed Fields, 2016''.
I am grateful to the organizers of this activity for the invitation and creative atmosphere.
I also would like to thank the referee for careful reading and numerous helpful comments and suggestions.
\section{Log canonical singularities}\label{sect-lc}
For basic definitions and terminology of the minimal model program,
we refer to \cite{Kollar-Mori-1988} or \cite{Utah}.
\begin{case}
\label{notation-singularities}
Let $(X\ni o)$ be a log canonical surface singularity.
The \textit{index} of $(X\ni o)$ is the smallest positive integer $I$
such that $IK_X$ is Cartier.
We say that $(X\ni o)$ is \textit{strictly log canonical}
if it is log canonical but not log terminal.
\end{case}
\begin{definition}
A normal Gorenstein surface singularity is said to be \textit{simple
elliptic} if the exceptional divisor of the minimal resolution is a smooth elliptic
curve. We say that a simple
elliptic singularity is of type $\mathrm{Ell}_n$ if the self-intersection of
the exceptional divisor equals $-n$.
A normal Gorenstein surface singularity is called a \textit{cusp} if the
exceptional divisor of the minimal resolution is a cycle of smooth rational curves
or a rational nodal curve.
\end{definition}
\begin{case}
We recall the notation for weighted graphs.
Let $(X\ni o)$ be a rational surface singularity, let
$\eta:Y\to X$ be its minimal resolution, and let
$E=\sum E_i$ be the exceptional divisor.
Let $\Gamma=\Gamma(X,o)$ be the dual graph of $(X\ni o)$,
that is, $\Gamma$ is a weighted graph whose
vertices correspond to exceptional prime divisors $E_i$
and edges join vertices meeting each other.
In the usual way we attach to each vertex $E_i$ the number $-E_i^2$.
Typically, we omit $2$ if $-E_i^2=2$.
If $(X\ni o)$ is a cyclic quotient singularity of type $\frac 1r (1,q)$, $\gcd(r,q)=1$,
then the graph $\Gamma$ is a chain:
\begin{equation}
\label{equation-chain}
\xy
\xymatrix@C=38pt{
&\underset{n_1}\circ\ar@{-}[r]&\underset{n_2}\circ\ar@{-}[r]
&\cdots\ar@{-}[r]&\underset{n_k}\circ
}
\endxy
\end{equation}
We denote it by $[n_1,\dots,n_k]=\langle r,\, q\rangle$. The numbers $n_i$ are determined by the expression
of $r/q$ as a continued fraction \cite{Brieskorn-1967-1968}.
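Explicitly, $[n_1,\dots,n_k]=\langle r,\, q\rangle$ means that the $n_i$ arise from the
Hirzebruch--Jung expansion
\begin{equation*}
\frac rq=n_1-\frac{1}{n_2-\frac{1}{\;\ddots\;-\frac{1}{n_k}}},\qquad n_i\ge 2.
\end{equation*}
For example, $\langle 7,\, 3\rangle=[3,2,2]$ because $7/3=3-1/(2-1/2)$.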
For positive integers $n$, $r_i$, $q_i$,\, $\gcd(r_i,q_i)=1$, $i=1,\dots,s$, the symbol
\begin{equation*}
\langle n; r_1,\dots, r_s;\, q_1,\dots, q_s\rangle
\end{equation*}
denotes the following graph
\begin{equation*}
\xy
\xymatrix@C=38pt{
\langle r_2,\, q_2\rangle&\cdots&\langle r_{s-1},\, q_{s-1}\rangle
\\
\langle r_1,\, q_1\rangle&\underset{n}\circ\ar@{-}[r]\ar@{-}[ul]\ar@{-}[ur]\ar@{-}[l]&\langle r_s,\, q_s\rangle
}
\endxy
\end{equation*}
For short, we will omit the $q_i$'s: $\langle n; r_1,\dots, r_s\rangle$.
If $\langle r_i,\, q_i\rangle=[n_{i,1},n_{i,2},\dots]$, then we also denote
\begin{equation*}
\langle n; r_1,\dots, r_s;\, q_1,\dots, q_s\rangle=
[n; [n_{1,1},n_{1,2},\dots],\dots, [n_{s,1},n_{s,2},\dots]].
\end{equation*}
For example,
$\langle n; 3,3,3; 1,1, 2\rangle=[n; [3], [3], [2,2]]$ is the graph:
\begin{equation*}
\xy
\xymatrix@R=3pt{
&\overset3\circ&
\\
\underset3\circ&\underset{n}\circ\ar@{-}[r]\ar@{-}[u]\ar@{-}[l]&\underset{}\circ\ar@{-}[r]&\underset{}\circ
}
\endxy
\end{equation*}
The graph
\begin{equation*}
\xy
\xymatrix@R=3pt{
\circ&&&&\circ
\\
&\underset{n_1}\circ\ar@{-}[r]\ar@{-}[lu]\ar@{-}[ld]&\cdots\ar@{-}[r]&\underset{n_s}\circ\ar@{-}[ru]\ar@{-}[rd]
\\
\circ&&&&\circ
}
\endxy
\end{equation*}
will be denoted by $[n_1,\dots,n_s; [2]^4]$.
\end{case}
\begin{theorem}[{\cite[\S 9]{Kawamata-1988-crep}}]
\label{theorem-classification-lc-singularities}
Let $(X\ni o)$ be a strictly log canonical surface singularity of index $I$.
Then one of the following holds:
\begin{enumerate}
\item
\label{Theorem-simple-elliptic-cusp=I=1}
$I=1$ if and only if $(X\ni o)$ is either a simple elliptic singularity or a
cusp,
\item
$I=2$ if and only if $\Gamma(X,o)$ is of type $[n_1,\dots,n_s; [2]^4]$, $s\ge 1$,
\item
$I=3$ if and only if $\Gamma(X,o)$ is of type $\langle n; 3,3,3\rangle$,
\item
$I=4$ if and only if $\Gamma(X,o)$ is of type $\langle n; 2,4,4\rangle$,
\item
$I=6$ if and only if $\Gamma(X,o)$ is of type $\langle n; 2,3,6\rangle$.
\end{enumerate}
\end{theorem}
\begin{scorollary}
A strictly log canonical surface singularity
is not rational if and only if it is of index $1$.
\end{scorollary}
\begin{case}
Let $(X\ni o)$ be a strictly log canonical surface singularity of index $I$,
let
$\eta: Y\to X$ be its minimal resolution, and let $E=\sum E_i$
be the exceptional divisor. Let us contract all the components of $E$
with discrepancies $>-1$:
\begin{equation}
\label{equation-min-resolution}
\eta: Y \overset{\tilde \eta}\longrightarrow \tilde X \overset{\sigma}\longrightarrow X.
\end{equation}
Let $\tilde C=\sum \tilde C_i:= \tilde \eta_* E$ be the $\sigma$-exceptional divisor.
Then the pair $(\tilde X,\tilde C)$ has only divisorial log terminal singularities (dlt)
and the following relation holds
\begin{equation*}
K_{\tilde X}= \sigma^* K_X-\tilde C.
\end{equation*}
The extraction $\sigma: \tilde X\to X$ is called the \textit{dlt modification} of $(X\ni o)$.
\end{case}
\begin{scorollary}[see {\cite[\S 9]{Kawamata-1988-crep}}, {\cite[\S 3]{Utah}},
{\cite[\S 4.1]{Kollar-Mori-1988}}, {\cite[\S 6.1]{Prokhorov-2001}}]
\label{proposition-classification-lc-singularities}
In the above notation one of the following holds:
\begin{enumerate}
\item
$I=1$, $\tilde X=Y$ is smooth, and $(X\ni o)$ is either a simple elliptic
or a cusp singularity;
\item
$I=2$, $\tilde C=\sum_{i=1}^s \tilde C_i$ is a chain of smooth rational curves
meeting transversely at smooth points of $\tilde X$ so that $\tilde C_i\cdot \tilde C_{i+1}=1$, and
the singular locus of $\tilde X$ consists of two Du Val points
of type \type{A_1} lying on $\tilde C_1$ and two Du Val points
of type \type{A_1} lying on $\tilde C_s$ \textup(the
case $s=1$ is also possible and then $\tilde C=\tilde C_1$
is a smooth rational curve containing four Du Val points of type \type{A_1}\textup);
\item
$I=3$, $4$, or $6$, $\tilde C$ is a smooth rational curve, the pair $(\tilde X,\tilde C)$
has only purely log terminal singularities \textup(plt\textup), and the singular locus of $\tilde X$ consists of
three cyclic quotient singularities of types $\frac 1{r_i}(1, q_i)$, $\gcd(r_i, q_i)=1$ with
$\sum 1/r_i=1$. In this case $I=\operatorname{lcm} (r_1,r_2,r_3)$.
\end{enumerate}
\end{scorollary}
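Note that, for integers $r_i\ge 2$, the condition $\sum 1/r_i=1$ appearing above leaves
only three possibilities:
\begin{equation*}
\frac13+\frac13+\frac13=\frac12+\frac14+\frac14=\frac12+\frac13+\frac16=1,
\end{equation*}
with $\operatorname{lcm}(r_1,r_2,r_3)=3$, $4$, $6$, respectively, in agreement with
Theorem~\ref{theorem-classification-lc-singularities}.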
\begin{case}\label{index-one-cover}
Let $(X\ni o)$ be a log canonical singularity of index $I$
(of arbitrary dimension).
Recall (see e.g. \cite[Definition 5.19]{Kollar-Mori-1988})
that the \textit{index one cover} of $(X\ni o)$ is a
finite morphism $\pi:X^\sharp\to X$, where
\begin{equation*}
X^\sharp:=\operatorname{Spec}\left(\bigoplus _{i=0}^{I-1}\OOO_X(-iK_X)\right).
\end{equation*}
Then $X^\sharp$ is irreducible, $o^\sharp=\pi^{-1}(o)$ is one point,
$\pi$ is \'etale over $X\setminus\operatorname{Sing}(X)$, and
$K_{X^\sharp}=\pi^*K_X$ is Cartier.
In this situation, $(X^\sharp\ni o^\sharp)$ is a log canonical singularity of index $1$.
Moreover, if $(X\ni o)$ is log terminal (resp. canonical, terminal), then
so is the singularity $(X^\sharp\ni o^\sharp)$.
\end{case}
\begin{scorollary}
A strictly log canonical surface singularity of index $I>1$ is a quotient
of a simple elliptic or
cusp singularity $(X^\sharp\ni o^\sharp)$ by a cyclic group ${\boldsymbol{\mu}}_I$
of order $I=2$, $3$, $4$ or $6$ whose action on $X^\sharp\setminus\{o^\sharp\}$ is free.
\end{scorollary}
\begin{construction}[see {\cite[Proof of Theorem 9.6]{Kawamata-1988-crep}}]
\label{construction-index-one-cove-log-canonical}
Let $(X\ni o)$ be a strictly log canonical surface singularity of index $I>1$,
let $\pi: (X^\sharp\ni o^\sharp)\to (X\ni o)$ be the index one cover,
and let $\tilde \sigma: (\tilde X^\sharp\supset \tilde C^\sharp)\to (X^\sharp\ni o^\sharp)$
be the minimal resolution.
The action of ${\boldsymbol{\mu}}_I$ lifts to $\tilde X^{\sharp}$ so that
the induced action on $\OOO_{\tilde X^{\sharp}}(K_{\tilde X^{\sharp}}+ \tilde C^{\sharp})=
\tilde \sigma^*\OOO_{X^\sharp}(K_{X^\sharp})$
and $H^0(\tilde C^{\sharp}, \OOO_{\tilde C^{\sharp}}(K_{\tilde C^{\sharp}}))$
is faithful. Let $(\tilde X\supset \tilde C):= (\tilde X^\sharp\supset \tilde C^\sharp)/{\boldsymbol{\mu}}_I$.
Thus we obtain the following diagram
\begin{equation}
\label{equation-diagram-resolution}\vcenter{
\xy
\xymatrix{
\tilde X^{\sharp}\ar[r]^{\tilde \pi}\ar[d]^{\tilde \sigma}&\tilde X\ar[d]^{\sigma}
\\
X^{\sharp}\ar[r]^{\pi}&X
}
\endxy}
\end{equation}
Here $\sigma: (\tilde X\supset \tilde C)\to (X\ni o)$
is the dlt modification.
\end{construction}
The following definition can be given in arbitrary dimension.
For simplicity we state it only for dimension two, which is
sufficient for our needs.
\begin{case}{\bf Adjunction.}
\label{different}
Let $X$ be a normal surface and $D$ be an effective $\mathbb{Q}$-divisor on $X$.
Write $D=C+B$, where $C$ is a reduced divisor on $X$, $B$ is effective,
and $C$ and $B$ have no common component.
Let $\nu:C'\to C$ be the normalization of $C$.
One can construct an effective $\mathbb{Q}$-divisor $\operatorname{Diff}_C(B)$
on $C'$, called the \textit{different}, as follows;
see \cite[Chap. 16]{Utah} or \cite[\S 3]{Shokurov-1992-e-ba} for details. Take
a resolution of singularities $f:X'\to X$ such that the
proper transform $C'$ of $C$ on $X'$ is also smooth.
Clearly, $C'$ is nothing but the normalization of the curve $C$.
Let $B'$ be the proper transform of $B$ on $X'$.
One can find an
exceptional $\mathbb{Q}$-divisor $A$ on $X'$ such that $K_{X'}+C'+B'\equiv_fA$. The different
$\operatorname{Diff}_C(B)$ is defined as the $\mathbb{Q}$-divisor $(B'-A)|_{ C'}$. Then $\operatorname{Diff}_C(B)$ is effective
and it satisfies the equality (adjunction formula)
\begin{equation}
\label{equation-adjunction}
K_{C'}+\operatorname{Diff}_C(B)=\nu^* (K_X+C+B)|_C.
\end{equation}
\end{case}
\begin{stheorem}[Inversion of Adjunction {\cite{Shokurov-1992-e-ba}}, {\cite{Kawakita2007}}]
\label{Inversion-of-Adjunction}
The pair $(X, C+B)$ is lc \textup(resp. plt\textup) near $C$ if and only if
the pair $(C',\operatorname{Diff}_C(B))$ is lc \textup(resp. klt\textup).
\end{stheorem}
\begin{sproposition}
Let $(X\ni P)$ be a surface singularity and let $P\in C \subset X$ be
an effective reduced divisor such that the pair $(X,C)$ is plt.
Then $(P\in C \subset X)$ is analytically isomorphic to
\[
\left(0\in \{x_1\text{-axis}\}\subset \mathbb{C}^2\right)/{\boldsymbol{\mu}}_r(1,q),\qquad \gcd(r,q)=1.
\]
In particular, $C$ is smooth at $P$ and $\operatorname{Diff}_C(0)=(1-1/r)P$.
The dual graph of the minimal resolution of $(X\ni P)$ is a chain
\eqref{equation-chain} and the proper transform of $C$ is attached to
one of its ends.
\end{sproposition}
\section{$\mathbb{Q}$-Gorenstein smoothings of log canonical singularities}
\label{sect-smoothings}
In this section we prove the classificational part of Theorem~\ref{Theorem-Q-smoothings}.
\begin{notation}
Let $(X\ni P)$ be a normal surface singularity, let $\eta:Y\to X$ be the minimal resolution and
let $E=\sum E_i$ be the exceptional divisor. Write
\begin{equation}
\label{equation-codiscrepancy}
K_{Y}=\eta ^*K_X-\Delta,
\end{equation}
where $\Delta$ is an effective $\mathbb{Q}$-divisor with $\operatorname{Supp}(\Delta)=\operatorname{Supp}(E)$.
Thus one can define the self-intersection $\operatorname{K}_{(X,P)}^2:=\Delta^2$ which is a well-defined natural invariant.
We usually write $\operatorname{K}^2$ instead of $\operatorname{K}_{(X,P)}^2$ if no confusion is likely.
The value $\operatorname{K}^2$ is non-positive and it equals zero if and only if $(X\ni P)$ is a Du Val point.
\begin{itemize}
\item We denote by $\varsigma_P$ the number of exceptional divisors over $P$.
\end{itemize}
\end{notation}
\begin{lemma}
\label{lemma-canonical}
Let $(X\ni P)$ be a normal surface singularity and let
$\mathfrak{X}\to \mathfrak{D}$ be its $\mathbb{Q}$-Gorenstein smoothing.
If $(X\ni P)$ is log terminal,
then the pair $(\mathfrak{X},X)$ is plt and the singularity $(\mathfrak{X}\ni P)$
is terminal.
If $(X\ni P)$ is log canonical,
then the pair $(\mathfrak{X},X)$ is lc and the singularity $(\mathfrak{X}\ni P)$
is isolated canonical.
\end{lemma}
\begin{proof}
By the higher-dimensional version of the inversion of adjunction (see \cite[Th. 5.50]{Kollar-Mori-1988},
\cite{Kawakita2007} and Theorem~\ref{Inversion-of-Adjunction})
the singularity $(X\ni P)$ is log terminal \textup(resp. log canonical\textup)
if and only if the pair $(\mathfrak{X},X)$ is plt \textup(resp. lc\textup) at $P$.
Since $X$ is a Cartier divisor on $\mathfrak{X}$, the assertion follows.
\end{proof}
\begin{lemma}[{\cite[Proposition 6.2.8]{Kollar1991a}}]
\label{lemma-K2-integer}
Let $(X\ni P)$ be a rational surface singularity.
If $(X\ni P)$ admits a $\mathbb{Q}$-Gorenstein smoothing, then $\operatorname{K}^2$ is an integer.
\end{lemma}
\begin{theorem}[{\cite[Proposition 3.10]{Kollar-ShB-1988}},
{\cite[Proposition 5.9]{Looijenga-Wahl-1986}}]
\label{classification-T-singularities}
Let $(X\ni P)$ be a log terminal surface singularity.
The following are equivalent:
\begin{enumerate}
\item
$(X\ni P)$ admits a $\mathbb{Q}$-Gorenstein smoothing,
\item
$\operatorname{K}^2\in\mathbb{Z}$,
\item
\label{classification-T-singularities-3}
$(X\ni P)$
is either Du Val or a cyclic quotient singularity of the form $\frac{1}{m}(q_1,q_2)$
with
\begin{equation*}
(q_1+q_2)^2\equiv 0\mod m,\qquad \gcd(m,q_i)=1.
\end{equation*}
\end{enumerate}
\end{theorem}
A log terminal singularity satisfying the equivalent conditions above is called a
\textit{$\operatorname{T}$-singularity}.
\begin{sremark}[see {\cite{Kollar-ShB-1988}}]
It easily follows from~\ref{classification-T-singularities-3} that
any non-Du Val singularity of type $\operatorname{T}$ can be written in the form
\begin{equation*}\textstyle
\frac1{dn^2}(1,\, dna-1),
\end{equation*}
where $d$, $n$, $a$ are positive integers with $\gcd(a,n)=1$.
\end{sremark}
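For example, the cyclic quotient singularity $\frac14(1,1)$ is a $\operatorname{T}$-singularity:
indeed,
\begin{equation*}
(q_1+q_2)^2=(1+1)^2=4\equiv 0\mod 4,\qquad
\textstyle\frac14(1,1)=\frac1{dn^2}(1,\, dna-1)\quad\text{with } d=a=1,\ n=2,
\end{equation*}
while the exceptional divisor of the minimal resolution is a single $(-4)$-curve $E$
with $\Delta=\frac12 E$, so that $\operatorname{K}^2=\Delta^2=-1\in\mathbb{Z}$.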
Below we describe log canonical singularities with
integral $\operatorname{K}^2$. Note, however, that in general the condition $\operatorname{K}^2\in\mathbb{Z}$ is necessary but not sufficient
for the existence of $\mathbb{Q}$-Gorenstein smoothing (cf. Theorem~\ref{Theorem-Q-smoothings} and Proposition
\ref{Proposition-computation-K2} (DV)).
\begin{proposition}
\label{Proposition-computation-K2}
Let $(X\ni P)$ be a rational strictly log canonical surface singularity. Then
in the notation of Theorem \xref{theorem-classification-lc-singularities}
the invariant $\operatorname{K}^2$ is integral if and only if
$X$ is either of type $[n_1,\dots, n_s; [2]^4]$ or
of type $\langle n; r_1,r_2,r_3; \varepsilon,\varepsilon,\varepsilon\rangle$,
where $(r_1,r_2,r_3)=(3,3,3)$, $(2,4,4)$ or $(2,3,6)$ and $\varepsilon=1$ or $-1$.
Moreover, we have:
\begin{itemize}
\item[(DV)]
if $X$ is of type $[n_1,\dots, n_s; [2]^4]$ or $\langle n; r_1,r_2,r_3; -1,-1,-1\rangle$,
then
\begin{equation*}
-\operatorname{K}^2=n-2,
\end{equation*}
where in the case
$[n_1,\dots, n_s; [2]^4]$, we put $n:=\sum (n_i-2)+2$;
\item[(nDV)]
if $X$ is of type $\langle n; r_1,r_2,r_3; 1,1,1\rangle$, then
\begin{equation*}
-\operatorname{K}^2=n-9+\sum r_i.
\end{equation*}
\end{itemize}
\end{proposition}
For the proof we need the following lemma.
\begin{slemma}
\label{Lemma-computation-K2}
Let $V$ be a smooth surface and let $C, E_1,\dots,E_m\subset V$ be
proper smooth rational curves on $V$ whose configuration is a chain:
\begin{equation*}
\xy
\xymatrix@R=3pt{
\underset{C}\circ&\underset{E_m}\circ\ar@{-}[r]\ar@{-}[l]&\underset{}
\cdots\ar@{-}[r]&\underset{E_1}\circ
}
\endxy
\end{equation*}
Let $D=C+\sum\alpha_i E_i$ be a $\mathbb{Q}$-divisor such that $(K_V+D)\cdot E_j=0$ for all $j$.
\begin{enumerate}
\item
If all the $E_i$'s are $(-2)$-curves, then $D^2-C^2=m/(m+1)$.
\item
If $m=1$ and $E_1^2=-r$, then $D^2-C^2=(r-1)(3-r)/r$.
\end{enumerate}
\end{slemma}
\begin{proof}
Assume that $E_i^2=-2$ for all $i$.
It is easy to check that
$D=C+\sum_{i=1}^m\frac i{m+1} E_i$.
Then
\begin{multline*}
D^2-C^2=\frac {2m}{m+1}+\left(\sum_{i=1}^m\frac i{m+1} E_i\right)^2=
\\
=\frac {2m}{m+1}+
\frac2{(m+1)^2}\left(-\sum_{i=1}^m i^2+\sum_{i=1}^{m-1} i(i+1)\right)
=\frac {m}{m+1}.
\end{multline*}
Now let $m=1$ and $E_1^2=-r$. Then $D=C+\frac {r-1}r E_1$.
Hence
\begin{equation*}
D^2-C^2=\frac{2(r-1)}r-\frac{(r-1)^2}{r}=\frac{(r-1)(3-r)}r.\qedhere
\end{equation*}
\end{proof}
\begin{proof}[Proof of Proposition \xref{Proposition-computation-K2}]
Let $\Delta$ be as in \eqref{equation-codiscrepancy}
and let $C:=\lfloor\Delta\rfloor$.
Write $\Delta=C+\sum\Delta_i$, where $\Delta_i$ are effective connected
$\mathbb{Q}$-divisors. By Lemma~\ref{Lemma-computation-K2} we have
\begin{equation*}
\delta_i:=\Bigl(\left(C+\Delta_i\right) ^2-C^2\Bigr)=
\begin{cases}
1-\frac1{r_i}&\text{if $\Delta_i$ is of type $\frac 1{r_i}(1,-1)$,}
\\
4-r_i-\frac3{r_i}&\text{if $\Delta_i$ is of type $\frac 1{r_i}(1,1)$.}
\end{cases}
\end{equation*}
Then
\begin{equation*}
\operatorname{K}^2=\left(C+\sum\Delta_i\right) ^2=C^2+\sum\delta_i.
\end{equation*}
If $(X\ni P)$ is of type $[n_1,\dots,n_s; [2]^4]$,
then
\begin{equation*}
\operatorname{K}^2=C^2+2=-\sum (n_i-2).
\end{equation*}
Assume that $C$ is irreducible and $(X\ni P)$ is of type $\langle n; r_1,r_2,r_3\rangle$,
where $\sum 1/r_i=1$.
If all the $\operatorname{Supp}(\Delta_i)$'s are Du Val chains, then
\begin{equation*}
\operatorname{K}^2=C^2+\sum\left(1-\textstyle{\frac{1}{r_i}}\right)=-n+2.
\end{equation*}
If $(X\ni P)$ is of type
$\langle n; r_1,r_2,r_3;1,1,1\rangle$, then
\begin{equation*}
\operatorname{K}^2=C^2+\sum\left(4-r_i-\textstyle{\frac3{r_i}}\right)=-n+9-\sum r_i.
\end{equation*}
It remains to consider the ``mixed'' case. Assume for example that $(X\ni P)$ is of type
$\langle n; 3,3,3\rangle$.
Then $\delta_i\in\{0,\, 2/3\}$.
Since $\sum\delta_i$ is an integer, the only possibility is
$\delta_1=\delta_2=\delta_3$, i.e. all the chains
$\Delta_i$ are of the same type. The cases $\langle n; 2,4,4\rangle$
and $\langle n; 2,3,6\rangle$ are considered similarly.
\end{proof}
\begin{scorollary}
\label{canonical-cover-Z2}
Let $(X\ni P)$ be a strictly log canonical surface singularity of index $I\ge 2$
admitting a $\mathbb{Q}$-Gorenstein smoothing. Let $(X^{\sharp}\ni P^{\sharp})\to (X\ni P)$
be the index one cover. Then
\begin{equation*}
-\operatorname{K}^2_{(X^{\sharp}\ni P^{\sharp})}=
\begin{cases}
I(n-2)&\text{in the case \type{(DV)},}
\\
I(n-1)&\text{in the case \type{(nDV)}.}
\end{cases}
\end{equation*}
\end{scorollary}
\begin{proof}
Let us consider the \type{(nDV)} case.
We use the notation of \eqref{equation-min-resolution}
and \eqref{equation-diagram-resolution}. Let $E_1$, $E_2$, $E_3$ be the
$\tilde\eta$-exceptional divisors. Then
\[
K_{\tilde X}=\sigma^*K_X-\tilde C,\quad
K_Y=\eta^*K_X-\Delta=\tilde \eta^*K_{\tilde X}-\textstyle{\sum \frac {r_i-2}{r_i}E_i}.
\]
Therefore,
\[
\Delta=\tilde \eta^* \tilde C+ \textstyle{\sum \frac {r_i-2}{r_i}E_i},
\quad
\Delta^2=\tilde C^2- \textstyle{\sum \left(r_i-4+\frac{4}{r_i}\right)},
\]
\[
-\tilde C^2= n+3\textstyle{-\sum \frac{4}{r_i}}=n-1,
\quad -\operatorname{K}^2_{(X^{\sharp}\ni P^{\sharp})}=-I\tilde C^2=I(n-1).
\qedhere
\]
\end{proof}
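For instance, for the index~$I=6$ singularity of type $[2; [2],[3],[6]]$ from
Theorem~\ref{Theorem-Q-smoothings} one is in case \type{(nDV)} with $n=2$, so that
\begin{equation*}
-\operatorname{K}^2_{(X^{\sharp}\ni P^{\sharp})}=6\cdot(2-1)=6,
\end{equation*}
and hence, by Remark~\ref{canonical-cover-mult} below, the multiplicity and the
embedding dimension of the index one cover both equal $6$.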
\begin{sremark}
\label{canonical-cover-mult}
In the above notation
we have (see e.g. \cite[Theorem 4.57]{Kollar-Mori-1988})
\begin{eqnarray*}
\operatorname{mult} (X^{\sharp}\ni P^{\sharp})&=&\max \left(2,-\operatorname{K}^2_{(X^{\sharp}\ni P^{\sharp})}\right),
\\
\operatorname{emb}\dim (X^{\sharp}\ni P^{\sharp})&=&\max \left(3,-\operatorname{K}^2_{(X^{\sharp}\ni P^{\sharp})}\right).
\end{eqnarray*}
\end{sremark}
The following proposition is the key point in the proof
of Theorem~\ref{Theorem-Q-smoothings}.
\begin{proposition}
\label{proposition-nDV}
Let $(X\ni P)$ be a strictly log canonical rational surface singularity of index $I\ge 3$ admitting
a $\mathbb{Q}$-Gorenstein smoothing.
Then $(X\ni P)$ is of type $[n; [r_1],[r_2],[r_3]]$.
\end{proposition}
\begin{proof}
By Lemma~\ref{lemma-K2-integer} the number
$\operatorname{K}^2$ is integral and by Proposition \xref{Proposition-computation-K2}
$(X\ni P)$ is either of type \type{nDV} or of type \type{DV}.
Assume that $(X\ni P)$ is of type \type{DV}.
\begin{case}
\label{new-label}
Let $\mathfrak{f}: \mathfrak{X}\to\mathfrak{D}$ be a $\mathbb{Q}$-Gorenstein smoothing.
By Lemma~\ref{lemma-canonical} the pair $(\mathfrak{X},X)$ is log canonical
and $(P\in \mathfrak{X})$ is an isolated canonical singularity.
Let $\pi: (\mathfrak{X}^\sharp\ni P^\sharp) \to (\mathfrak{X}\ni P)$ be the index one cover (see~\ref{index-one-cover})
and let $X^\sharp:=\pi^*X$. Then $X^\sharp$ is a Cartier divisor on $\mathfrak{X}^\sharp$,
the singularity $(\mathfrak{X}^\sharp\ni P^\sharp)$ is canonical (of index $1$), and the pair
$(\mathfrak{X}^\sharp, X^\sharp)$ is lc. Moreover, $\mathfrak{X}^\sharp$ is CM, hence $X^\sharp$ is normal, and
the canonical divisor $K_{X^\sharp}$
is Cartier. Therefore, $\pi$ induces the index one cover $\pi_X: (X^\sharp\ni P^\sharp) \to (X\ni P)$.
In particular, the index of $(P\in \mathfrak{X})$ equals $I$.
Since $I\ge 3$, the singularity $(X^\sharp\ni P^\sharp)$ is simple elliptic and
the dlt modification coincides with the minimal resolution.
\end{case}
\begin{case}
First we consider the case where $(P\in \mathfrak{X})$ is \textit{terminal}.
Below we essentially use the classification of terminal singularities (see e.g. \cite{Reid-YPG1987}).
In our case, $(\mathfrak{X}^\sharp\ni P^\sharp)$ is either smooth or an isolated cDV singularity.
In particular,
\begin{equation*}
\operatorname{emb}\dim (X^{\sharp}\ni P^{\sharp})\le \operatorname{emb}\dim (\mathfrak{X}^{\sharp}\ni P^{\sharp})\le 4.
\end{equation*}
By our assumption $(X\ni P)$ is of type \type{DV}. So,
by Corollary~\ref{canonical-cover-Z2} and Remark~\ref{canonical-cover-mult}
\begin{equation}
\label{equation-emb-dim-I}
\operatorname{emb}\dim (X^{\sharp}\ni P^{\sharp})= I(n-2).
\end{equation}
If $\operatorname{emb}\dim (\mathfrak{X}^{\sharp}\ni P^{\sharp})=3$, i.e. $(\mathfrak{X}^{\sharp}\ni P^{\sharp})$
is smooth, then $\operatorname{emb}\dim (X^{\sharp}\ni P^{\sharp})=3$,
$\operatorname{mult}{(X^{\sharp}\ni P^{\sharp})}=3$, and $I=n=3$.
In this case $(\mathfrak{X}\ni P)$ is a cyclic quotient
singularity of type $\frac 13(1,1,-1)$ \cite{Reid-YPG1987}.
We may assume that $(\mathfrak{X}^{\sharp},P^\sharp)=(\mathbb{C}^3,0)$ and $X^{\sharp}$
is given by an invariant equation $\psi(x_1,x_2,x_3)=0$
with $\operatorname{mult}_{0}\psi=3$. Since $(X^{\sharp}\ni P^{\sharp})$ is a simple
elliptic singularity, the cubic part $\psi_3$ of $\psi$
defines a smooth elliptic curve on $\mathbb{P}^2$. Hence we can write
$\psi_3= x_3^3 + \tau(x_1,x_2)$, where $\tau(x_1,x_2)$
is a cubic homogeneous polynomial without multiple factors.
The minimal resolution $\tilde X^\sharp \to X^\sharp$ is the blowup of the origin.
In the affine chart $\{x_2\neq 0\}$ the surface $\tilde X^\sharp$ is given by the
equation $\tau(x_1',1)+x_3^{\prime 3}+x_2'(\cdots)=0$ and the action
of ${\boldsymbol{\mu}}_3$ is given by the weights $(0,1,1)$. Then it is easy to see that
$\tilde X$ has three singular points of type $\frac13(1,1)$. This contradicts our assumption.
Thus we may assume that $\operatorname{emb}\dim (\mathfrak{X}^{\sharp}\ni P^{\sharp})=4$, i.e.
$(\mathfrak{X}^{\sharp}\ni P^{\sharp})$ is a hypersurface singularity. Then $I= 4$ by \eqref{equation-emb-dim-I}.
We may assume that $(\mathfrak{X}^{\sharp}\ni P^{\sharp})\subset (\mathbb{C}^4\ni 0)$ is a hypersurface
given by an equation $\phi(x_1,\dots,x_4)=0$ with $\operatorname{mult}_0\phi=2$
and $X^{\sharp}$ is cut out by an invariant equation $\psi(x_1,\dots,x_4)=0$.
Furthermore, we may assume that $x_1,\dots,x_4$ are semi-invariants with ${\boldsymbol{\mu}}_4$-weights $(1,1,-1,b)$,
where $b=0$ or $2$ (see \cite{Reid-YPG1987}).
Consider the case $\operatorname{mult}_0\psi=1$. Since $\psi$ is invariant, we have
$\psi= x_4+(\text{higher degree terms})$ and $b=0$.
In this case the only quadratic invariants are $x_1x_3$, $x_2x_3$, and $x_4^2$.
Thus $\phi_2$ is a linear combination of $x_1x_3$, $x_2x_3$, $x_4^2$.
Since $I=4$ and $b=0$, by the classification of terminal singularities
$\phi$ contains either $x_1x_3$ or $x_2x_3$ (see \cite{Reid-YPG1987}). Then, eliminating $x_4$, we see that
$(X^\sharp\ni P^\sharp)$ is a hypersurface singularity
whose equation has quadratic part of rank $\ge 2$.
In this case $(X^\sharp\ni P^\sharp)$ is a Du Val singularity of type \type{A_n}, a contradiction.
Now let $\operatorname{mult}_0\psi>1$. Then
\begin{equation*}
\operatorname{emb}\dim (X^{\sharp}\ni P^{\sharp})=-\operatorname{K}^2_{(X^{\sharp}\ni P^{\sharp})}
=\operatorname{mult}(X^{\sharp}\ni P^{\sharp})=4=I
\end{equation*}
(see Remark~\ref{canonical-cover-mult}).
According to \cite[Theorem 4.57]{Kollar-Mori-1988} the curve given by quadratic parts
of $\phi$ and $\psi$ in the projectivization
$\mathbb{P}(T_{P^{\sharp},\mathfrak{X}^{\sharp}})$
of the
tangent space is a smooth elliptic curve.
According to the classification \cite{Reid-YPG1987} there are two cases.
\subsection*{Case: $b=0$ and $\phi$ is an invariant.}
In this case, as above, $\phi_2$ and $\psi_2$ are linear combinations of $x_1x_3$, $x_2x_3$, $x_4^2$
and so $\{\phi_2=\psi_2=0\}$ cannot be smooth, a contradiction.
\subsection*{Case: $b=2$ and $\phi$ is a semi-invariant of weight $2$.}
Then, up to linear coordinate change of $x_1$ and $x_2$, we can write
\begin{equation*}
\phi_2=a_1 x_1x_2+a_2 x_1^2+a_3 x_2^2+a_4 x_3^2,\qquad
\psi_2=b_1x_1x_3+b_2x_2x_3+b_3 x_4^2.
\end{equation*}
Since $\phi_2=\psi_2=0$ defines a smooth curve,
$a_1 x_1x_2+a_2 x_1^2+a_3 x_2^2$ has no multiple factors, so up to linear coordinate change of $x_1$ and $x_2$
we may assume that
$\phi_2=x_1x_2+ x_3^2$.
Similarly, $b_1,\, b_2,\, b_3\neq 0$. Then easy computations (see e.g.\ \cite[7.7.1]{Kollar-Mori-1992}) show that
$(X^\sharp\ni P^\sharp)$ is a singularity of type $[2;[2],[4]^2]$.
This contradicts our assumption.
\end{case}
\begin{case}
Now we assume that $(P\in \mathfrak{X})$ is \textit{strictly canonical}.
Let $\upgamma : \tilde \mathfrak{X}\to \mathfrak{X}$ be the \textit{crepant blowup} of $(P\in \mathfrak{X})$.
By definition $\tilde \mathfrak{X}$ has only $\mathbb{Q}$-factorial terminal singularities
and $K_{\tilde \mathfrak{X}}=\upgamma^* K_{\mathfrak{X}}$.
Let $E=\sum E_i$ be the exceptional divisor and let $\tilde X$
be the proper transform of $X$. Since the pair $(\mathfrak{X},X)$ is log canonical,
we can write
\begin{equation}
\label{equation-discrepancies-deformation-space}
K_{\tilde \mathfrak{X}}+\tilde X+E=\upgamma^* (K_{\mathfrak{X}}+X),\qquad \upgamma^* X=\tilde X+E.
\end{equation}
The pair $(\tilde \mathfrak{X}, \tilde X+E)$ is log canonical
and $\tilde \mathfrak{X}$ has isolated singularities, so $\tilde X+E$ has generically
normal crossings along $\tilde X\cap E$. Hence $C:=\tilde X\cap E$ is a reduced curve.
By the adjunction we have
\begin{equation*}
K_{\tilde X}+C= (K_{\tilde \mathfrak{X}}+\tilde X+E)|_{\tilde X}= \upgamma^* (K_{\mathfrak{X}}+X)|_{\tilde X}
=\upgamma_{\tilde X}^* K_{X}.
\end{equation*}
Thus $\upgamma_{\tilde X}: \tilde X\to X$ is a dlt modification of $(X\ni P)$.
Since $I\ge 3$, there is only one divisor over $P\in X$ with discrepancy $-1$.
Hence this divisor coincides with $C$ and so $C$ is irreducible and smooth.
In particular, $\tilde X$ meets only one component of $E$.
\begin{claim*}
Let $Q\in \tilde \mathfrak{X}$ be a point at which $E$ is not Cartier.
Then in a neighborhood of $Q$ we have $\tilde X\sim K_{\tilde \mathfrak{X}}$.
In particular, $Q\in C$.
\end{claim*}
\begin{proof}
We are going to apply the results of \cite{Kawakita-index}.
The extraction $\upgamma: \tilde \mathfrak{X}\to \mathfrak{X}$ can be decomposed in a sequence
of elementary crepant blowups
\begin{equation*}
\upgamma_i: \mathfrak{X}_{i+1} \longrightarrow \mathfrak{X}_{i}, \quad i=0,\dots, N-1,
\end{equation*}
where $\mathfrak{X}_0=\mathfrak{X}$, $\mathfrak{X}_{N}=\tilde \mathfrak{X}$, for $i=1,\dots, N$
each $\mathfrak{X}_{i}$ has only $\mathbb{Q}$-factorial canonical singularities,
and the $\upgamma_i$-exceptional divisor $E_{i+1,i}$ is irreducible.
\cite{Kawakita-index} defined a divisor $F$ with $\operatorname{Supp}(F)=E$ on $\mathfrak{X}_{N}=\tilde \mathfrak{X}$
inductively: $F_1=E_{1,0}$ on $\mathfrak{X}_1$ and $F_{i+1}= \lceil \upgamma_i^*F_i\rceil$.
In our case, by \eqref{equation-discrepancies-deformation-space} the divisor $F$ is reduced, i.e.
$F=E$. Then by \cite[Theorem 4.2]{Kawakita-index} we have $E\sim -K_{\tilde \mathfrak{X}}$
near $Q$. Since $\tilde X+E$ is Cartier,
$\tilde X\sim K_{\tilde \mathfrak{X}}$
near $Q$.
\end{proof}
\begin{claim*}
The singular locus of $\tilde \mathfrak{X}$ near $C$ consists of three
cyclic quotient singularities $P_1$, $P_2$, $P_3$
of types $\frac 1{r_i}(1,-1, b_i)$, where $\gcd (b_i, r_i)=1$ and
$(r_1,r_2,r_3)=(3,3,3)$, $(2,4,4)$, and $(2,3,6)$ in cases $I=3$, $4$, $6$,
respectively.
\end{claim*}
\begin{proof}
Let $P_1,\, P_2,\, P_3\in C$ be singular points
of $\tilde X$.
Since $C=\tilde X\cap E$ is smooth, $E$ is not Cartier at $P_i$'s.
Hence $P_1,\, P_2,\, P_3\in \tilde \mathfrak{X}$ are (terminal) non-Gorenstein points.
Now the
assertion follows by \cite[Theorem 4.2]{Kawakita-index}.
\end{proof}
Therefore,
$P_i\in \tilde X$ is a point of index $r_i/\gcd(2,r_i)$. Hence the
singularities of $\tilde X$ are of types $\frac 1{r_i}(1,1)$.
This proves Proposition~\ref{proposition-nDV}.\qedhere
\end{case}
\end{proof}
\begin{case}
Let $(X\ni P)$ be a normal surface singularity admitting a $\mathbb{Q}$-Gorenstein smoothing
$\mathfrak{f}: \mathfrak{X}\to\mathfrak{D}$. Let $M_P$ be the Milnor fiber of $\mathfrak{f}$. Thus,
$(M_P,\partial M_P )$ is a smooth 4-manifold with boundary.
Denote by $\upmu_P = \operatorname{b}_2 (M_P)$ the Milnor number of the smoothing.
In our case we have (see \cite{Greuel-Steenbrink-1983})
\begin{equation}
\label{equation-Milnor-fiber-muP}
\operatorname{b}_1 (M_P) = 0,\qquad \operatorname{Eu}(M_P) = 1 + \upmu_ P.
\end{equation}
\end{case}
\begin{sproposition}[cf. {\cite[\S 2.3]{Hacking-Prokhorov-2010}}]
\label{proposition-computation-muP}
Let $(X\ni P)$ be a rational surface singularity.
Assume that $(X\ni P)$ admits a $\mathbb{Q}$-Gorenstein smoothing. Then
for the Milnor number
$\upmu_P$ we have
\begin{equation}
\label{equation-computation-muP}
\upmu_P=\operatorname{K}_{(X,P)}^2+\varsigma_P.
\end{equation}
\end{sproposition}
\begin{proof}
Obviously, $\operatorname{K}_{(X,P)}^2+\varsigma_P$ depends only on the analytic type of the singularity $(X\ni P)$.
According to \cite[Appendix]{Looijenga1986}, for $(X\ni P)$ there exists a projective surface
$Z$ with a unique singularity isomorphic to $(X\ni P)$ and a $\mathbb{Q}$-Gorenstein smoothing
$\mathfrak Z/ (\mathfrak T\ni 0)$.
Let $\eta:Y\to Z$ be the minimal resolution.
Write
\begin{equation*}
K_{Y}=\eta^* K_Z-\Delta,\qquad
K_{Y}^2=K_Z^2+\Delta^2.
\end{equation*}
Let $Z'$ be the general fiber of $\mathfrak Z/\mathfrak T$. Since
\begin{equation*}
\operatorname{Eu}(Y)=\operatorname{Eu}(Z)+\varsigma_P,\qquad
\chi(\OOO_{Y})=\chi(\OOO_Z),
\end{equation*}
by Noether's formula we have
\begin{multline*}
0=K_Y^2+\operatorname{Eu}(Y)-12\chi(\OOO_Z)=K_Z^2+\Delta^2+\operatorname{Eu}(Z)+\varsigma_P-12\chi(\OOO_{Z'})=
\\
=\Delta^2+\varsigma_P+\operatorname{Eu}(Z)+K_{Z'}^2-12\chi(\OOO_{Z'})=
\Delta^2+\varsigma_P+\operatorname{Eu}(Z)-\operatorname{Eu}(Z').
\end{multline*}
By \eqref{equation-Milnor-fiber-muP} we have $\operatorname{Eu}(Z')=\operatorname{Eu}(Z)+\upmu_P$,
and so $\upmu_P=\Delta^2+\varsigma_P$.
\end{proof}
\begin{scorollary}[see {\cite[Proposition 13]{Manetti-1991}}]
\label{Corollary-computation-muP-T}
If $(X\ni P)$ is a $\operatorname{T}$-singularity of type $\frac1{dm^2}(1, dma-1)$, then
\begin{equation}
\label{equation-computation-muP-T}
\upmu_P=d-1,\qquad-\operatorname{K}^2=\varsigma_P-d+1.
\end{equation}
\end{scorollary}
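The identity \eqref{equation-computation-muP-T} is easy to verify in small cases by a direct computation on the minimal resolution. The script below (our own illustrative check, not part of the argument) computes the Hirzebruch--Jung expansion of $dm^2/(dma-1)$, solves the adjunction equations $\Delta\cdot E_j=2-b_j$ on the chain of $(-b_j)$-curves for the discrepancy divisor $\Delta=\sum a_jE_j$, and checks $\upmu_P=\Delta^2+\varsigma_P=d-1$, in accordance with \eqref{equation-computation-muP}.

```python
from fractions import Fraction

def hj_expansion(n, q):
    """Hirzebruch-Jung continued fraction n/q = [b1,...,bs]."""
    bs = []
    while q > 0:
        b = -(-n // q)              # ceil(n/q)
        bs.append(b)
        n, q = q, b * q - n
    return bs

def delta_squared(bs):
    """Solve Delta.E_j = 2 - b_j on the chain of (-b_j)-curves; return Delta^2."""
    s = len(bs)
    # tridiagonal system: -b_j a_j + a_{j-1} + a_{j+1} = 2 - b_j
    A = [[Fraction(0)] * s for _ in range(s)]
    rhs = [Fraction(2 - b) for b in bs]
    for j in range(s):
        A[j][j] = Fraction(-bs[j])
        if j > 0:
            A[j][j - 1] = Fraction(1)
        if j < s - 1:
            A[j][j + 1] = Fraction(1)
    # exact Gauss-Jordan elimination over the rationals
    for j in range(s):
        piv = next(i for i in range(j, s) if A[i][j] != 0)
        A[j], A[piv] = A[piv], A[j]
        rhs[j], rhs[piv] = rhs[piv], rhs[j]
        for i in range(s):
            if i != j and A[i][j] != 0:
                f = A[i][j] / A[j][j]
                rhs[i] -= f * rhs[j]
                for k in range(s):
                    A[i][k] -= f * A[j][k]
    a = [rhs[j] / A[j][j] for j in range(s)]
    # Delta^2 = sum_j a_j (Delta . E_j) = sum_j a_j (2 - b_j)
    return sum(a[j] * (2 - bs[j]) for j in range(s))

def milnor_number(d, m, a):
    """mu_P = Delta^2 + sigma_P for the T-singularity 1/(d m^2)(1, d m a - 1)."""
    bs = hj_expansion(d * m * m, d * m * a - 1)
    return delta_squared(bs) + len(bs)
```

For instance, for $\frac14(1,1)$ (that is, $d=1$, $m=2$, $a=1$) the chain is $[4]$, so $\Delta^2=-1$, $\varsigma_P=1$, and $\upmu_P=0=d-1$, as expected.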
Proposition~\ref{proposition-computation-muP} implies the following.
\begin{scorollary}
\label{Corollary-mu-2}
Let $(X\ni P)$ be a strictly log canonical surface singularity of index $I>1$
admitting a $\mathbb{Q}$-Gorenstein smoothing. Then
\begin{equation}
\label{equation-computation-muP-nG}
\upmu_P=
\begin{cases}
4-\sum (n_i-3)&\text{in the case $(\mathrm{DV})$ with $I=2$,}
\\
13-n-\sum r_i&\text{in the case $(\mathrm{nDV})$.}
\end{cases}
\end{equation}
\end{scorollary}
\begin{proof}[Proof of the classification part of Theorem \xref{Theorem-Q-smoothings}]
Let
\[
\pi:(X^{\sharp}\ni P^{\sharp})\to (X\ni P)
\]
be the index one cover.
A $\mathbb{Q}$-Gorenstein smoothing of $(X\ni P)$ is induced by an equivariant
smoothing of $(X^{\sharp}\ni P^{\sharp})$ (cf.~\ref{new-label}).
In particular, $(X^{\sharp}\ni P^{\sharp})$ is smoothable.
Assume that $(X\ni P)$ is of type $[n_1,\dots,n_s;[2]^4]$ with $s>1$. Then
$(X^{\sharp}\ni P^{\sharp})$ is a cusp singularity. By \cite[Th. 5.6]{Wahl-1981} its smoothability implies
\begin{equation*}
\operatorname{mult} (X^{\sharp}\ni P^{\sharp})\le\varsigma_{P^{\sharp}}+9.
\end{equation*}
Since $\varsigma_{P^{\sharp}}=2\varsigma_{P}-10$,
by Corollary~\ref{canonical-cover-Z2} and Remark~\ref{canonical-cover-mult}
we have
\begin{equation*}
2\sum\left(n_i-2\right)\le 2\varsigma_{P}-1,\quad\sum\left(n_i-3\right)\le 3,
\end{equation*}
where the second inequality follows from the first because $\varsigma_{P}=s+4$ and $\sum(n_i-2)$ is an integer.
In the case where $(X\ni P)$ is of type $[n;[2]^4]$ the singularity $(X^{\sharp}\ni P^{\sharp})$ is
simple elliptic. Then $\operatorname{mult} (X^{\sharp}\ni P^{\sharp})\le 9$ (see e.g. \cite[Ex. 6.4]{Looijenga-Wahl-1986}).
Hence $n\le 6$.
In the case where $(X\ni P)$ is of type $[n;[r_1], [r_2], [r_3]]$
the assertion follows from Corollary~\ref{Corollary-mu-2} because $\upmu_P\ge 0$.
\end{proof}
The existence of $\mathbb{Q}$-Gorenstein smoothings follows from
examples and discussions in the next two sections.
\section{Examples of $\mathbb{Q}$-Gorenstein smoothings}
\label{section-Examples}
\begin{proposition}[{\cite[Cor. 19]{Stevens1991a}}]
A rational surface singularity of index $2$ and multiplicity $4$ admits a $\mathbb{Q}$-Gorenstein
smoothing.
\end{proposition}
Recall that for any rational surface singularity $(X\ni P)$
one has
\[
\operatorname{mult} (X\ni P)=-\mathcal{Z}^2,
\]
where $\mathcal{Z}$ is the fundamental cycle on the minimal resolution
(see \cite[Cor. 6]{Artin-1966}).
\begin{slemma}
Let $(X\ni P)$ be a log canonical surface singularity of type
$[n_1,\dots,n_s; [2]^4]$.
Then
\begin{equation*}
-\mathcal{Z}^2=\max \bigl(4, 2+\sum (n_i-2)\bigr)=\max\bigl(4, 2-\operatorname{K}^2\bigr).
\end{equation*}
\end{slemma}
\begin{proof}
If either $s\ge 2$ and $n_1,\, n_s\ge 3$ or $s=1$ and $n_1\ge 4$, then
$\mathcal{Z}=\lceil\Delta\rceil$ and so $\mathcal{Z}^2=\Delta^2-2=-2-\sum (n_i-2)$
by Proposition~\ref{Proposition-computation-K2}. If $\sum (n_i-2)=1$,
then $\mathcal{Z}=2\Delta$ and so
$\mathcal{Z}^2=4\Delta^2=-4$.
\end{proof}
\begin{scorollary}
A log canonical singularity of type $[n_1,\dots, n_s; [2]^4]$
with $\sum (n_i-2)\le 2$ admits a $\mathbb{Q}$-Gorenstein
smoothing.
\end{scorollary}
Let us consider explicit examples.
\begin{sexample}
\label{example-singularity-index-2-m}
Let $\mathfrak{X}=\mathbb{C}^3/{\boldsymbol{\mu}}_2(1,1,1)$ and
\begin{equation*}
\mathfrak{f}: \mathfrak{X} \to \mathbb{C},\quad (x_1,x_2,x_3) \mapsto x_1^2+\left(x_2^2+c_1 x_3^{2k}\right)
\left(x_3^2+c_2 x_2^{2m}\right),
\end{equation*}
where $k, m\ge 1$ and $c_1,\, c_2$ are constants.
The central fiber $X=\mathfrak{X}_0$ is a log canonical singularity
of type
\begin{equation*}
[\underbrace{2,\dots,2}_{k-1},3,\underbrace{2,\dots,2}_{m-1}; [2]^4].
\end{equation*}
Indeed, the $\frac12(1,1,1)$-blowup $X'\to X\ni 0$ has irreducible exceptional divisor.
If $k,m \ge 3$, then
the singular locus of $X'$ consists of two
Du Val singularities of types \type{D_{k+1}} and \type{D_{m+1}}. Other cases are similar.
\end{sexample}
\begin{sexample}
\label{example-2}
Let ${\boldsymbol{\mu}}_2$ act on $\mathbb{C}^4_{x_1,\dots,x_4}$ diagonally
with weights $(1,1,1,0)$ and let $\phi(x_1,\dots,x_4)$ and $\psi(x_1,\dots,x_4)$
be invariants such that $\operatorname{mult}_0\phi=\operatorname{mult}_0\psi=2$ and the quadratic parts
$\phi_{(2)}$, $\psi_{(2)}$ define a smooth elliptic curve in $\mathbb{P}^3$.
Let $\mathfrak{X}:=\{\phi=0\}/{\boldsymbol{\mu}}_2(1,1,1,0)$.
Consider the family
\begin{equation*}
\mathfrak{f}: \mathfrak{X} \longrightarrow\mathbb{C},\quad (x_1,\dots,x_4)\longmapsto \psi.
\end{equation*}
The central fiber $X=\mathfrak{X}_0$ is a log canonical singularity
of type $[4; [2]^4]$.
\end{sexample}
\begin{sproposition}[{\cite[Ex. 4.2]{deJong-vanStraten-1992}}]
Singularities of types $[5; [2]^4]$,
$[4,3; [2]^4]$, and $[3,3,3; [2]^4]$ admit $\mathbb{Q}$-Gorenstein smoothings.
\end{sproposition}
Now consider singularities of index $>2$.
\begin{example}[cf. {\cite[6.7.1]{Kollar-Mori-1992}}]
\label{example-singularity-index-3}
Let $\mathfrak{X}=\mathbb{C}^3/{\boldsymbol{\mu}}_3(1,1,2)$ and
\begin{equation*}
\mathfrak{f}: (x_1,x_2,x_3) \longmapsto x_1^3+x_2^3+x_3^3.
\end{equation*}
The central fiber $X=\mathfrak{X}_0$ is a log canonical singularity
of type $[2; [3]^3]$.
\end{example}
\begin{example}
\label{example-singularity-index-3-canonical}
Let $\mathfrak{X}=\mathbb{C}^3/{\boldsymbol{\mu}}_9(1,4,7)$ and
\begin{equation*}
\mathfrak{f}: (x_1,x_2,x_3) \longmapsto x_1x_2^2+x_2x_3^2+x_3x_1^2.
\end{equation*}
The central fiber $X=\mathfrak{X}_0$ is a log canonical singularity of type $[4; [3]^3]$.
The total space has a canonical singularity at the origin.
\end{example}
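As an elementary sanity check (ours, not part of the text), one can verify that the defining polynomials in Examples~\ref{example-singularity-index-3} and~\ref{example-singularity-index-3-canonical} are indeed invariant: every monomial must have weight $\equiv 0$ modulo the order of the group.

```python
def monomial_weight(exponents, weights, order):
    """Weight of the monomial x^e under the diagonal mu_order-action with the given weights."""
    return sum(e * w for e, w in zip(exponents, weights)) % order

# x1^3 + x2^3 + x3^3 under mu_3(1,1,2): weights 3, 3, 6, all divisible by 3
for exps in [(3, 0, 0), (0, 3, 0), (0, 0, 3)]:
    assert monomial_weight(exps, (1, 1, 2), 3) == 0

# x1*x2^2 + x2*x3^2 + x3*x1^2 under mu_9(1,4,7): weights 9, 18, 9, all divisible by 9
for exps in [(1, 2, 0), (0, 1, 2), (2, 0, 1)]:
    assert monomial_weight(exps, (1, 4, 7), 9) == 0
```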
\begin{example}[cf. {\cite[7.7.1]{Kollar-Mori-1992}}]
\label{example-4}
\label{example-singularity-index-4}
Let
\begin{equation*}
\mathfrak{X}=\{x_1x_2+x_3^2+x_4^{2k+1}=0\}/{\boldsymbol{\mu}}_4(1,1,-1,2),\qquad k\ge 1.
\end{equation*}
Consider the family
\begin{equation*}
\mathfrak{f}: \mathfrak{X} \longrightarrow\mathbb{C},\quad
(x_1,\dots,x_4)\longmapsto x_4^2+x_3(x_1+x_2)+\psi_{\ge 3} (x_1,\dots,x_4),
\end{equation*}
where $\psi_{\ge 3}$ is an invariant with $\operatorname{mult}(\psi_{\ge 3})\ge 3$.
The central fiber $X=\mathfrak{X}_0$ is a log canonical singularity
of type $[2; [2], [4]^2]$.
The singularity of the total space is terminal of type $\mathrm{cAx/4}$.
\end{example}
\begin{example}
\label{example-singularity-index-4-canonical}
Let $\mathfrak{X}:=\{x_1x_2+x_3^2+x_4^2=0\}/ {\boldsymbol{\mu}}_8(1,5,3,7)$.
Consider the family
\begin{equation*}
\mathfrak{f}: \mathfrak{X} \longrightarrow\mathbb{C},\quad (x_1,\dots,x_4)\longmapsto x_1x_4+x_2x_3.
\end{equation*}
The central fiber $X=\mathfrak{X}_0$ is a log canonical singularity
of type $[3; [2], [4]^2]$.
The singularity of the total
space $\mathfrak{X}$ is canonical \cite{Hayakawa-Takeuchi-1987}.
\end{example}
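Similarly, in Examples~\ref{example-4} and~\ref{example-singularity-index-4-canonical} one checks by hand (or with the short script below, an illustrative verification of ours) that the ambient equation is a semi-invariant of a fixed weight while the equation of the family is an invariant.

```python
def weight(exps, wts, order):
    """Weight mod `order` of the monomial x^exps under the diagonal action with weights `wts`."""
    return sum(e * w for e, w in zip(exps, wts)) % order

# mu_4(1,1,-1,2): ambient x1*x2 + x3^2 + x4^(2k+1), fiber x4^2 + x3*(x1 + x2)
w4 = (1, 1, -1, 2)
for k in (1, 2, 3):
    phi = [(1, 1, 0, 0), (0, 0, 2, 0), (0, 0, 0, 2 * k + 1)]
    assert {weight(e, w4, 4) for e in phi} == {2}     # semi-invariant of weight 2
psi = [(0, 0, 0, 2), (1, 0, 1, 0), (0, 1, 1, 0)]
assert {weight(e, w4, 4) for e in psi} == {0}         # invariant

# mu_8(1,5,3,7): ambient x1*x2 + x3^2 + x4^2, fiber x1*x4 + x2*x3
w8 = (1, 5, 3, 7)
assert {weight(e, w8, 8) for e in [(1, 1, 0, 0), (0, 0, 2, 0), (0, 0, 0, 2)]} == {6}
assert {weight(e, w8, 8) for e in [(1, 0, 0, 1), (0, 1, 1, 0)]} == {0}
```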
More examples of $\mathbb{Q}$-Gorenstein smoothings will be given in the next section.
\section{Indices of canonical singularities}
\label{sect-Indices}
\begin{notation}
Let $S=S_d\subset \mathbb{P}^d$ be a smooth del Pezzo surface of degree $d\ge 3$.
Let $Z$ be the affine cone over $S$ and let $z\in Z$ be its vertex.
Let $\delta: \tilde Z\to Z$ be the blowup along the maximal ideal of $z$ and let $\tilde S\subset \tilde Z$
be the exceptional divisor.
The affine variety $Z$ can be viewed as the spectrum of the anti-canonical graded algebra:
\begin{equation*}
Z=\operatorname{Spec} R(-K_S), \qquad R(-K_S):= \bigoplus_{n\ge 0} H^0(S, \OOO_S(-nK_S))
\end{equation*}
and the variety $\tilde Z$ can be viewed as the total space $\operatorname{Tot}(\mathscr{L})$
of the line bundle $\mathscr{L}:=\OOO_S(K_S)$. Here $\tilde S$ is the negative section.
Denote by $\gamma: \tilde Z \to S$ the natural projection.
\end{notation}
\begin{lemma}
\label{lemma-crepant}
The map $\delta$ is a crepant morphism and
$(Z\ni z)$ is a canonical singularity.
\end{lemma}
\begin{proof}
Write $K_{\tilde Z}=\delta^* K_Z+a \tilde S$.
Then
\begin{equation*}
K_{\tilde S}= (K_{\tilde Z}+\tilde S) |_{\tilde S}= (a+1)\tilde S|_{\tilde S}.
\end{equation*}
Under the natural identification $S=\tilde S$ one has
$\OOO_{\tilde S}(K_{\tilde S})\simeq \OOO_{S}(-1)\simeq \OOO_{\tilde S}(\tilde S)$. Hence, $a=0$.
\end{proof}
\begin{construction}
Assume that $S$ admits an action $\varsigma: G\to \operatorname{Aut}(S)$ of a finite group $G$.
The action naturally extends to
an action on the algebra $R(-K_S)$, the cone $Z$, and its blowup $\tilde Z$.
We assume that
\begin{enumerate}
\renewcommand\labelenumi{{\rm (\Alph{enumi})}}
\renewcommand\theenumi{{\rm (\Alph{enumi})}}
\item
\label{condition-1}
$G\simeq {\boldsymbol{\mu}}_I$ is a cyclic group of order $I$,
\item
\label{condition-2}
the action $G$ on $S$ is free in codimension one, and
\item
\label{condition-3}
the quotient
$S/G$ has only Du Val singularities.
\end{enumerate}
Let $G_P$ be the stabilizer of a point $P\in S$. Since $\mathscr{L}=\OOO_S(K_S)$,
the fiber $\mathscr{L}_P$ of $\gamma: \tilde Z=\operatorname{Tot}(\mathscr{L})\to S$ is naturally identified with
$\wedge ^2 T_{P,S}^\vee$, where $T_{P,S}$ is the tangent space to $S$ at $P$.
By our assumptions~\ref{condition-2} and~\ref{condition-3}, in suitable analytic coordinates $x_1,x_2$ near $P$, the action
of $G_P$ is given by
\begin{equation}
\label{equation-DuVal-action}
(x_1,\, x_2) \longmapsto (\zeta_{I_P}^{b_P}\cdot x_1,\, \zeta_{I_P}^{-b_P}\cdot x_2),
\end{equation}
where $\zeta_{I_P}$ is a primitive $I_P$-th root of unity,
$\gcd(I_P, b_P)=1$, and
$I_P$ is the order of $G_P$. Therefore,
the action of $G_P$ on $\mathscr{L}_P\simeq \wedge ^2 T_{P,S}^\vee$
is trivial. Let $\tilde P:= \mathscr{L}_P\cap \tilde S$.
The algebra $R(-K_S)$ also admits a natural $\mathbb{C}^*$-action
compatible with the grading. Thus $\gamma: \tilde Z\to S$ is a
$\mathbb{C}^*$-equivariant $\mathbb A^1$-bundle, where the $\mathbb{C}^*$-action on $S$ is trivial
and
the induced action $\lambda: \mathbb{C}^*\to \operatorname{Aut} (\tilde Z)$ is just
multiplication in fibers. Fix an embedding $G={\boldsymbol{\mu}}_I\subset \mathbb{C}^*$.
Then two actions $\varsigma$ and $\lambda$ commute and so we can
define a new action of $G$ on $\tilde Z$ by
\begin{equation}
\label{equation-index-character}
\varsigma'(\upalpha)=\lambda (\upalpha) \varsigma(\upalpha), \qquad \upalpha\in G.
\end{equation}
Take local coordinates
$x_1,x_2,x_3$ in a neighborhood of $\tilde P\in \tilde Z$ compatible with the decomposition
$T_{\tilde P, \tilde Z}=T_{\tilde P, \tilde S}\oplus T_{\tilde P, \mathscr{L}_P}$ of the tangent space
and \eqref{equation-DuVal-action}.
Then
the action of $G_P$ is given by
\begin{equation}
\label{equation-terminal-action}
(x_1, x_2, x_3) \longmapsto (\zeta_{I_P}^{b_P}\cdot x_1, \zeta_{I_P}^{-b_P}\cdot x_2,
\zeta_{I_P}^{a_P}\cdot x_3),\quad \gcd(a_P,I_P)=1.
\end{equation}
\begin{claim}
\label{claim-canonical-terminal}
The quotient
$\tilde \mathfrak{X}:=\tilde Z/ \varsigma'(G)$ has only terminal singularities.
\end{claim}
\begin{proof}
All the points of $\tilde Z$ with non-trivial stabilizers lie on the negative section $\tilde S$.
The image of such a point $\tilde P$ on $\tilde \mathfrak{X}$ is
a cyclic quotient singularity of type $\frac 1{I_P}(b_P,-b_P, a_P)$ by \eqref{equation-terminal-action}.
\end{proof}
By the universal property of quotients,
there is a contraction $\varphi: \tilde \mathfrak{X}\to \mathfrak{X}$ contracting $E$ to a point, say $o$,
where $\mathfrak{X}:=Z/G$ and $E:=\tilde S/G$.
Thus we have the following diagram:
\begin{equation}
\label{equation-diagram-square}
\vcenter{
\xymatrix@C=10pt
{
&&\tilde S \ar@{}[r]|-*[@]{\subset}\ar[d] & \tilde Z\ar[d]^\delta\ar[rr]\ar @/^/ [dlll]_(.6){\gamma}|(.43)\hole&
& \tilde \mathfrak{X}\ar[d]^{\varphi}\ar@{}[r]|-*[@]{\supset}& E \ar[d]
\\
S&&z\ar@{}[r]|-*[@]{\in} &Z\ar[rr]^{\pi} && \mathfrak{X}\ar@{}[r]|-*[@]{\ni}& o
}}
\end{equation}
\end{construction}
\begin{proposition}
\label{proposition-main-construction}
$(\mathfrak{X}\ni o)$ is an isolated canonical non-terminal singularity of index $|G|$.
\end{proposition}
\begin{proof}
Since the action $\varsigma'$ is free in codimension one,
the contraction $\varphi$ is crepant by Lemma~\ref{lemma-crepant}. The index of $(\mathfrak{X}\ni o)$
is equal to the l.c.m.
of $|G_P|$ for $P\in S$. On the other hand, by the holomorphic Lefschetz fixed point formula
$G$ has a fixed point on $S$. Hence, $G=G_P$ for some $P$.
\end{proof}
\begin{case}
\label{examples-index}
Now we construct explicit examples of del Pezzo surfaces
with cyclic group actions satisfying the conditions~\ref{condition-1}-\ref{condition-3}.
\end{case}
\begin{sexample}
\label{example-deg=6}
Recall that a del Pezzo surface $S$ of degree $6$ is
unique up to isomorphism and can be given in $\mathbb{P}^1_{u_0:u_1}\times \mathbb{P}^1_{v_0: v_1}\times \mathbb{P}^1_{w_0: w_1}$
by the equation
\begin{equation*}
u_1v_1w_1 =u_0v_0w_0.
\end{equation*}
Let $\upalpha\in \operatorname{Aut}(S)$ be the following element of order $6$:
\begin{equation*}
\upalpha: (u_0:u_1;\, v_0:v_1;\, w_0: w_1) \longmapsto (v_1:v_0;\, w_1:w_0;\, u_1:u_0).
\end{equation*}
Points with non-trivial stabilizers belong to one of three orbits
and representatives are the following:
\begin{itemize}
\item
$P=(1:1;\ 1:1;\ 1:1)$,\quad $|G_{P}|=6$,
\item
$Q=(1:\zeta_3;\ 1:\zeta_3;\ 1:\zeta_3)$,\quad $|G_{Q}|=3$,
\item
$R=(1:1;\ 1:-1;\ 1:-1)$,\quad $|G_{R}|=2$.
\end{itemize}
It is easy to check that they give us Du Val points of type
\type{A_5}, \type{A_2}, \type{A_1}, respectively.
\end{sexample}
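The stabilizer orders above can be double-checked numerically (an illustrative script of ours, not part of the example; a point of $(\mathbb{P}^1)^3$ is normalized by the first nonzero coordinate in each factor).

```python
import cmath

z3 = cmath.exp(2j * cmath.pi / 3)

def alpha(p):
    """The order-6 automorphism (u0:u1; v0:v1; w0:w1) -> (v1:v0; w1:w0; u1:u0)."""
    (u0, u1), (v0, v1), (w0, w1) = p
    return [(v1, v0), (w1, w0), (u1, u0)]

def norm_pair(a, b):
    c = a if abs(a) > 1e-12 else b
    return (a / c, b / c)

def eq(p, q):
    """Projective equality in each P^1 factor."""
    return all(abs(x - y) < 1e-9
               for s, t in zip(p, q)
               for x, y in zip(norm_pair(*s), norm_pair(*t)))

def on_S(p):
    """Membership in the del Pezzo surface u1*v1*w1 = u0*v0*w0."""
    (u0, u1), (v0, v1), (w0, w1) = p
    return abs(u1 * v1 * w1 - u0 * v0 * w0) < 1e-9

def stabilizer_order(p):
    """Order of the stabilizer of p in the cyclic group <alpha> of order 6."""
    q = p
    for n in range(1, 7):
        q = alpha(q)
        if eq(q, p):
            return 6 // n

P = [(1, 1), (1, 1), (1, 1)]
Q = [(1, z3), (1, z3), (1, z3)]
R = [(1, 1), (1, -1), (1, -1)]
for pt in (P, Q, R):
    assert on_S(pt)
assert [stabilizer_order(pt) for pt in (P, Q, R)] == [6, 3, 2]
```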
\begin{sexample}
\label{example-deg=5}
A del Pezzo surface $S$ of degree $5$ is obtained by blowing up four
points $P_1$, $P_2$, $P_3$, $P_4$ on $\mathbb{P}^2$ in general position.
We may assume that $P_1 = (1: 0: 0)$, $P_2 = (0: 1: 0)$, $P_3 = (0: 0: 1)$, $P_4 = (1: 1: 1)$.
Consider the following Cremona transformation:
\begin{equation*}
\upalpha: (u_0 : u_1 : u_2) \longmapsto (u_0(u_2 - u_1): u_2(u_0 - u_1): u_0 u_2).
\end{equation*}
It is easy to check that $\upalpha^5=\operatorname{id}$ and
the indeterminacy points are exactly $P_1$, $P_2$, $P_3$.
Thus $\upalpha$ lifts to an element $\upalpha\in \operatorname{Aut}(S)$ of order $5$.
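One can confirm these two assertions numerically (an illustrative check of ours with exact rational arithmetic, not part of the proof): the fifth iterate of the map returns a sample point to itself projectively, the three coordinate points are indeterminacy points, and $P_4=(1:1:1)$ is not (it maps to $P_3$).

```python
from fractions import Fraction

def alpha(u):
    """The Cremona map (u0:u1:u2) -> (u0(u2-u1) : u2(u0-u1) : u0 u2)."""
    u0, u1, u2 = u
    return (u0 * (u2 - u1), u2 * (u0 - u1), u0 * u2)

def normalize(u):
    """Divide by the first nonzero coordinate (projective normalization)."""
    c = next(x for x in u if x != 0)
    return tuple(Fraction(x) / c for x in u)

# alpha^5 = id at a sample point
p = (Fraction(1), Fraction(2), Fraction(3))
q = p
for _ in range(5):
    q = alpha(q)
assert normalize(q) == normalize(p)

# P1, P2, P3 are indeterminacy points; P4 = (1:1:1) is not
for pt in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    assert alpha(pt) == (0, 0, 0)
assert alpha((1, 1, 1)) == (0, 0, 1)   # P4 maps to P3
```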
\begin{claim*}
Let $\upalpha\in \operatorname{Aut}(S)$ be any element of order $5$.
Then $\upalpha$ has only isolated
fixed points and the singular locus of the quotient $S/\langle \upalpha\rangle$
consists of two Du Val points of type \type{A_4}.
\end{claim*}
\begin{proof}
For the characteristic polynomial of $\upalpha$ on $\operatorname{Pic}(S)$
there is only one possibility: $t^5-1$. Therefore, the eigenvalues of $\upalpha$ are
$1,\zeta_5,\dots, \zeta_5^4$. This implies that every invariant curve is linearly proportional (in $\operatorname{Pic}(S)$) to
$-K_S$. In particular, this curve must be an ample divisor.
Assume that there is a curve of fixed points.
By the above such a curve is ample and hence meets every line.
Since on $S$ there are at most two lines passing through a fixed point,
all the lines must be invariant. In this case $\upalpha$
acts on $S$ identically, a contradiction.
Thus the action of $\upalpha$ on $S$ is free in codimension one.
By the topological Lefschetz fixed point formula
$\upalpha$ has exactly two fixed points, say $Q_1$ and $Q_2$.
We may assume that the action of $\upalpha$ in suitable local coordinates near $Q_1$ and $Q_2$
is diagonal:
\begin{equation*}
(x_1,x_2) \longmapsto (\zeta_5^r x_1, \zeta_5^k x_2),\qquad
(y_1,y_2) \longmapsto (\zeta_5^l y_1, \zeta_5^m y_2),
\end{equation*}
where $r$, $k$, $l$, $m$ are not divisible by $5$.
Then by the holomorphic Lefschetz fixed point formula
\begin{equation*}
1= (1-\zeta_5^r)^{-1}(1-\zeta_5^k)^{-1} + (1-\zeta_5^l)^{-1}(1-\zeta_5^m)^{-1}.
\end{equation*}
Easy computations with cyclotomics show that up to permutations and modulo $5$
there is only one possibility: $r=1$, $k=4$, $l=2$, $m=3$.
This means that the quotient has only Du Val singularities of type \type {A_4}.
\end{proof}
\end{sexample}
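The final "computations with cyclotomics" can be reproduced mechanically: enumerating all weight vectors $(r,k,l,m)\in\{1,\dots,4\}^4$ and testing the holomorphic Lefschetz identity numerically shows that, as an unordered pair of unordered pairs, $\{\{1,4\},\{2,3\}\}$ is indeed the only solution (a brute-force check of ours, not part of the proof).

```python
import cmath
from itertools import product

z = cmath.exp(2j * cmath.pi / 5)   # primitive fifth root of unity

def lefschetz_sum(r, k, l, m):
    """Right-hand side of the holomorphic Lefschetz fixed point formula for two fixed points."""
    return 1 / ((1 - z**r) * (1 - z**k)) + 1 / ((1 - z**l) * (1 - z**m))

solutions = {
    frozenset([frozenset([r, k]), frozenset([l, m])])
    for r, k, l, m in product(range(1, 5), repeat=4)
    if abs(lefschetz_sum(r, k, l, m) - 1) < 1e-9
}
# the only solution, up to permutations, is r=1, k=4, l=2, m=3
assert solutions == {frozenset([frozenset([1, 4]), frozenset([2, 3])])}
```

This matches the exact computation: with $s_1=(1-\zeta_5)(1-\zeta_5^4)=(5-\sqrt5)/2$ and $s_2=(1-\zeta_5^2)(1-\zeta_5^3)=(5+\sqrt5)/2$ one has $s_1s_2=5$ and $1/s_1+1/s_2=(s_1+s_2)/(s_1s_2)=1$.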
\begin{sexample}
\label{example-P2}
Let ${\boldsymbol{\mu}}_3$ act on $S=\mathbb{P}^2$ diagonally with weights $(0,1,2)$.
The quotient has three Du Val singularities of type \type{A_2}.
\end{sexample}
\begin{sexample}
\label{example-P1P1}
Let ${\boldsymbol{\mu}}_4$ act on $S=\mathbb{P}^1_{u_0:u_1}\times \mathbb{P}^1_{v_0: v_1}$ by
\begin{equation*}
(u_0:u_1;\ v_0: v_1) \longmapsto (v_0: v_1;\ u_1:u_0 ).
\end{equation*}
The quotient has three Du Val singularities of types \type{A_1}, \type{A_3}, \type{A_3}.
\end{sexample}
Note that in all examples above the group generated by $\upalpha^n$
also satisfies the conditions~\ref{condition-1}-\ref{condition-3}.
We summarize the above information in the following table.
Together with Proposition~\ref{proposition-main-construction} this proves Theorem~\ref{theorem-main-index}.
\par\medskip\noindent
\setlength{\extrarowheight}{6pt}
\begin{tabularx}{\textwidth}{c|c|c|c|c|l}
No. & $K_S^2$ & Ref. & $G$ & $I$ & \multicolumn1c{$\operatorname{Sing}(\tilde \mathfrak{X})$}
\\
\hline
\nrr
\label{smoothing-index=6}& $6$&~\ref{example-deg=6}& $\langle\upalpha\rangle$ & $6$ &
$\frac16(1,-1,1)$, $\frac13(1,-1,1)$, $\frac12(1,1,1)$
\\
\nrr
\label{smoothing-index=5}&$5$&~\ref{example-deg=5}& $\langle\upalpha\rangle$ & $5$ &
$\frac15(1,-1,1)$, $\frac15(2,-3,1)$
\\
\nrr
\label{smoothing-index=4}&$8$&~\ref{example-P1P1}& $\langle\upalpha\rangle$ & $4$ &
$2\times \frac14(1,-1,1)$, $\frac12(1,1,1)$
\\
\nrr
\label{smoothing-index=3-deg=6}& $6$&\ref{example-deg=6}& $\langle\upalpha^2\rangle$ & $3$ &
$3\times \frac13(1,-1,1)$
\\
\nrr
\label{smoothing-index=3-deg=9}& $9$&\ref{example-P2}& $\langle\upalpha\rangle$ & $3$ &
$3\times \frac13(1,-1,1)$
\\
\nrr
\label{smoothing-index=2-deg=6}& $6$&\ref{example-deg=6}& $\langle\upalpha^3\rangle$ & $2$ &
$4\times \frac12(1,1,1)$
\\
\nrr
\label{smoothing-index=2-deg=8}& $8$&\ref{example-P1P1}& $\langle\upalpha^2\rangle$ & $2$ &
$4\times \frac12(1,1,1)$
\end{tabularx}
\par\medskip\noindent
Note that our table agrees with the corresponding one in \cite{Kawakita-index}.
Now we apply the above technique to construct examples of $\mathbb{Q}$-Gorenstein smoothings.
\begin{theorem}
\label{theorem-Q-smmoothings-6}
Let $(X\ni o)$ be a surface log canonical singularity
of one of the following types
\begin{equation*}
[2; [2],[3],[6]],\hspace{5pt} [3; [2],[4],[4]],\hspace{5pt}
[n; [3]^3],\hspace{3pt} n=3,4,\hspace{5pt} [n; [2]^4],\hspace{3pt} n=5,6.
\end{equation*}
Then $(X\ni o)$ admits a $\mathbb{Q}$-Gorenstein smoothing.
\end{theorem}
\begin{slemma}
\label{lemma-index}
In the notation of \eqref{equation-diagram-square},
let $C\subset S$ be a smooth elliptic $G$-invariant curve such that
$C\sim -K_S$. Assume that $C$
passes through all the points with non-trivial stabilizers.
Let $\tilde X^\sharp:= \gamma^{-1}(C)$, $X^\sharp:= \delta(\tilde X^\sharp)$,
and $X:=\pi(X^\sharp)$.
Then the singularity $(X\ni o)$ is log canonical of index $|G|$. Moreover,
replacing $\lambda$ with $\lambda^{-1}$ if necessary we may assume that
$X$ is a Cartier divisor on $\mathfrak{X}$.
\end{slemma}
\begin{proof}
Put $\tilde X:= \tilde X^\sharp/G$. Since the divisor $\tilde X^\sharp+\tilde S$
is trivial on $\tilde S$, the contraction $\delta$ is log crepant with respect
to $K_{\tilde Z}+\tilde X^\sharp+\tilde S$ and so $\varphi$ is with respect to
$K_{\tilde \mathfrak{X}}+\tilde X+E$. By construction $X^\sharp$ is a cone over the
elliptic curve $C$ and $X=X^\sharp/G$. Therefore, $(X\ni o)$ is a log canonical
singularity. Comparing with~\ref{construction-index-one-cove-log-canonical} we
see that the index of $(X\ni o)$ equals $|G|$. We claim that $\tilde X+E$ is a
Cartier divisor on $\tilde \mathfrak{X}$. Identify $C$ with $\tilde C:=\gamma^{-1}(C)\cap \tilde
S=\tilde S\cap \tilde X^\sharp$.
Let $\omega\in H^0(C,\OOO_C(K_C))$ be a nowhere vanishing holomorphic $1$-form on $C$
and let $\upalpha$ be a generator of $G$. Since $\dim H^0(C,\OOO_C(K_C))=1$ and $G$
has a fixed point on $C$, the action of $G$ on $H^0(C,\OOO_C(K_C))$ is faithful and
we can write $\upalpha^*\omega=\zeta_I \omega$, where $\zeta_I$ is a suitable primitive
$I$-th root of unity.
Pick a point $\tilde P\in \tilde Z$ with
non-trivial stabilizer $G_P$ of order $I_P$. By our assumptions $\tilde P\in \tilde C$. Take
semi-invariant local coordinates $x_1,x_2,x_3$ as in \eqref{equation-terminal-action}.
Moreover, we can take them so that $x_1$ is a local coordinate along $C$.
Then we can write $\omega=\varpi d x_1$, where $\varpi$ is an invertible
holomorphic function in a neighborhood of $\tilde P$. Hence, $\varpi$ is
$G_P$-invariant and the generator $\upalpha^{I/I_P}$ of $G_P$ acts on $x_1$ via $\zeta_I^{I/I_P}$.
Thus, by
\eqref{equation-terminal-action}, the action near $\tilde P$ has the form $\frac
1{I_P}(1,-1,a_P)$.
Since $G$ faithfully acts on $C$ with a fixed point, $I_P=2$, $3$, $4$ or $6$.
Since $\gcd(a_P,I_P)=1$, we
have $a_P\in \{\pm 1\}$.
Then by \eqref{equation-index-character} replacing $\lambda$ with $\lambda^{-1}$
we may assume that $a_P=1$. In our coordinates the local equation of $\tilde
S$ is $x_3=0$ and the local equation of $\tilde X^\sharp$ is $x_2=0$. Now it is
easy to see that the local equation $x_2x_3=0$ of $\tilde S+\tilde X^\sharp$ is
$G_P$-invariant. Therefore, $\tilde X+E$ is Cartier. Since it is
$\varphi$-trivial, the divisor $X=\varphi_*(\tilde X+E)$ on $\mathfrak{X}$ is Cartier as
well.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{theorem-Q-smmoothings-6}}]
It is sufficient to embed $X$ into a canonical threefold singularity $(\mathfrak{X}\ni o)$
as a Cartier divisor. Let $(X^\sharp \ni o^\sharp)\to (X\ni o)$ be the index one
cover. Then $(X^\sharp \ni o^\sharp)$ is a simple elliptic singularity (see
\ref{index-one-cover}). In the notation of Examples~\ref{examples-index}
consider the following ${\boldsymbol{\mu}}_I$-invariant elliptic curve $C\subset S$:
\par\medskip\noindent
\scalebox{1}{
\setlength{\extrarowheight}{6pt}
\begin{tabularx}{\textwidth}{@{}ll@{}}
\small{\ref{smoothing-index=6}\ref{smoothing-index=3-deg=6}}&
$\zeta_3(u_0w_1-u_1w_0)(v_0+v_1)+ (u_0v_1-u_1v_0)(w_0+w_1)$
\\
\small{\ref{smoothing-index=4}}&
$(u_1^2-u_0^2)v_0v_1+\zeta_4u_0u_1(v_1^2-v_0^2)$
\\
\small{\ref{smoothing-index=3-deg=9}}&
$u_0^2u_1+ u_1^2u_2+ u_2^2 u_0$
\\
\small{\ref{smoothing-index=2-deg=6}}&
$c_1(u_0w_1-u_1w_0)( v_0 +v_1)+ c_2(u_0 v_1-u_1 v_0) (w_0+ w_1)$
\\
\small{\ref{smoothing-index=2-deg=8}}&
$c_1( u_0^2 v_0^2{-}u_1^2 v_1^2)+c_2v_0 v_1( u_0^2 {-}u_1^2 )+ c_3 (u_0^2 v_1^2{-}u_1^2 v_0^2)+c_5u_0 u_1 (v_0^2{-} v_1^2)$
\end{tabularx}
}
\par\medskip\noindent
where $c_i$'s are constants and $\zeta_n$ is a primitive $n$-th root of unity.
Then we apply Lemma~\ref{lemma-index}.
\end{proof}
\section{Noether's formula}
\label{sect-Noether}
\begin{proposition}[{\cite{Hacking-Prokhorov-2010}}]
Let $X$ be a projective rational surface with only rational singularities.
Assume that every singularity of $X$ admits a $\mathbb{Q}$-Gorenstein smoothing. Then
\begin{equation}
\label{equation-Noether-formula}
K_X^2+\uprho(X)+\sum_{P\in X}\upmu_P=10.
\end{equation}
\end{proposition}
\begin{proof}
Let $\eta:Y\to X$ be the minimal resolution.
Since $X$ has only rational singularities, we have
\begin{equation*}
\operatorname{Eu}(Y)=\operatorname{Eu}(X)+\sum _P\varsigma_P,\qquad
\chi(\OOO_{Y})=\chi(\OOO_X).
\end{equation*}
Further, we can write
\begin{equation*}
K_{Y}=\eta^* K_X-\sum_P\Delta_P,\qquad
K_{Y}^2=K_X^2+\sum_P\Delta_P^2.
\end{equation*}
By the usual Noether formula for smooth surfaces
\begin{equation*}
\label{equation-Noether-formula-general-proof}
12\chi(\OOO_X)=K_{Y}^2+\operatorname{Eu}(Y)=
K_X^2+\operatorname{Eu}(X)+\sum_P (\Delta_P^2+\varsigma_P).
\end{equation*}
Now the assertion follows from \eqref{equation-computation-muP}.
\end{proof}
\begin{case}
Let $X$ be an arbitrary normal projective surface,
let $\eta:Y\to X$ be the minimal resolution, and let $D$ be a Weil divisor on $X$.
Write $\eta^*D=D_Y+D^\bullet$, where $D_Y$ is the proper transform of $D$
and $D^{\bullet}$ is the exceptional part of $\eta^*D$.
Define the following number
\begin{equation}
\label{equation-def-cP}
c_X(D)=-\textstyle{\frac12}\langle D^{\bullet}\rangle\cdot (\lfloor\eta^*D\rfloor-K_{Y }).
\end{equation}
\end{case}
\begin{sproposition}[{\cite[\S 1]{Blache1995}}]
\label{proposition-RR}
In the above notation we have
\begin{equation}
\label{equation-proposition-RR}
\chi(\OOO_X(D))=\textstyle{\frac12} D\cdot (D-K_X)+\chi(\OOO_X)+c_X(D)+c'_X(D),
\end{equation}
where
\begin{equation*}
c'_X(D):=
h^0(R^1\eta_*\OOO_{Y}(\lfloor\eta^*D\rfloor))
-h^0(R^1\eta_*\OOO_{Y}).
\end{equation*}
\end{sproposition}
\begin{sremark}
Note that $c_X(D)$ can be computed locally:
\begin{equation*}
c_{X}(D)=\sum_{P\in X} c_{P,X}(D),
\end{equation*}
where $c_{P,X}(D)$ is defined by the formula \eqref{equation-def-cP}
for each germ $(X\ni P)$.
\end{sremark}
\begin{slemma}
Let $(X\ni P)$ be a rational log canonical surface singularity.
Then
\begin{equation*}
c_{P,X}(-K_X)=\Delta^2-\lceil\Delta\rceil^2-3,
\end{equation*}
where, as usual, $\Delta$ is defined by $K_Y=\eta^*K_X-\Delta$.
\end{slemma}
\begin{proof}
Put $D:=-K_X$ and write
\begin{equation*}
\eta^*D=-K_{Y}-\Delta,\qquad\langle D^{\bullet}\rangle=\langle-\Delta\rangle=\lceil\Delta\rceil-\Delta,
\end{equation*}
\begin{equation*}
\lfloor\eta^*D\rfloor-K_{Y }=
-2K_{Y}-\lceil\Delta\rceil=-2\eta^*K_X+2\Delta-\lceil\Delta\rceil.
\end{equation*}
Therefore,
\begin{equation*}
c_{P,X}(D)=\textstyle{\frac12}(\Delta-\lceil\Delta\rceil)\cdot (-2\eta^*K_X+2\Delta-\lceil\Delta\rceil)
=\textstyle{\frac12}(\lceil\Delta\rceil-\Delta)\cdot (\lceil\Delta\rceil-2\Delta).
\end{equation*}
Since $(X\ni P)$ is a rational singularity, we have
\begin{equation*}
-2=2p_a(\lceil\Delta\rceil)-2=(\lceil\Delta\rceil-\Delta)\cdot\lceil\Delta\rceil,\quad
\lceil\Delta\rceil^2+2=\Delta\cdot\lceil\Delta\rceil
\end{equation*}
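Indeed, expanding the last expression for $c_{P,X}(D)$ and substituting
$\Delta\cdot\lceil\Delta\rceil=\lceil\Delta\rceil^2+2$ gives
\begin{equation*}
c_{P,X}(D)=\textstyle{\frac12}\bigl(\lceil\Delta\rceil^2-3\,\Delta\cdot\lceil\Delta\rceil+2\Delta^2\bigr)
=\textstyle{\frac12}\bigl(2\Delta^2-2\lceil\Delta\rceil^2-6\bigr)
=\Delta^2-\lceil\Delta\rceil^2-3,
\end{equation*}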
and the equality follows.
\end{proof}
\begin{scorollary}
Let $(X\ni P)$ be a rational log canonical surface singularity
such that $\operatorname{K}^2$ is integral.
Then
\begin{equation}
\label{equation-cP}
c_{P,X}(-K_X)=
\begin{cases}
-1&\text{in the case $(\mathrm{DV})$,}
\\
\phantom{-}0&\text{if $(X\ni P)$ is log terminal} \\& \text{or in the case $(\mathrm{nDV})$.}
\end{cases}
\end{equation}
\end{scorollary}
\begin{proof}
Let us consider the $(\mathrm{nDV})$ case (other cases are similar). By
Proposition~\ref{Proposition-computation-K2} we have $-\Delta^2=n-9+\sum r_i$.
On the other hand, $\lceil\Delta\rceil^2=-n + 6 -\sum r_i$. Hence,
$c_{P,X}(-K_X)=0$ as claimed.
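Explicitly,
\begin{equation*}
c_{P,X}(-K_X)=\Delta^2-\lceil\Delta\rceil^2-3=
\Bigl(9-n-\sum r_i\Bigr)-\Bigl(6-n-\sum r_i\Bigr)-3=0.
\end{equation*}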
\end{proof}
\begin{scorollary}
\label{lemma-RR}
Let $X$ be a del Pezzo surface with log canonical rational singularities and
$\uprho(X)=1$. Assume that
for any singularity of $X$ the invariant $\operatorname{K}^2$
is integral. Then
$H^i(X,\OOO_X)=0$ for $i>0$ and $\dim |-K_X|\ge K_X^2-1$.
\end{scorollary}
\begin{proof}
By the Serre duality $H^2(X,\OOO_X)=H^0(X,K_X)=0$.
If the singularities of $X$ are rational, then the Albanese map
is a well defined morphism $\operatorname{alb}:X\to\operatorname{Alb}(X)$.
Since $\uprho(X)=1$, we have $\dim\operatorname{Alb}(X)=0$ and so $H^1(X,\OOO_X)=0$.
The last inequality follows from \eqref{equation-proposition-RR} because
$c'_X(-K_X)\ge 0$ and $c_X(-K_X)\ge -1$ (see \eqref{equation-cP}).
\end{proof}
\section{Del Pezzo surfaces}
\label{section-Del-Pezzo-surfaces}
\begin{assumption}
\label{Assumptions}
From now on let $X$ be a del Pezzo surface satisfying the
following conditions:
\begin{enumerate}
\item
the singularities of $X$ are log canonical and
$X$ has at least one non-log terminal point $o\in X$,
\item
$X$ admits a $\mathbb{Q}$-Gorenstein smoothing,
\item
$\uprho(X)=1$.
\end{enumerate}
\end{assumption}
\begin{lemma}
\label{lemma-RR-and-one-point}
In the above assumptions the following hold:
\begin{enumerate}
\item \label{lemma-RR-and-one-point1}
$\dim |-K_X|>0$,
\item \label{lemma-RR-and-one-point2}
$X$ has exactly one non-log terminal point.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{lemma-RR-and-one-point1} is implied by semicontinuity (cf. \cite[Theorem 4]{Manetti-1991}).
\ref{lemma-RR-and-one-point2}
follows from Shokurov's connectedness theorem
\cite[Lemma 5.7]{Shokurov-1992-e-ba}, \cite[Th. 17.4]{Utah}.
\end{proof}
\begin{construction}
\label{construction-main}
Let $\sigma:\tilde X\to X$ be a dlt modification and let
\begin{equation*}
\tilde C=\sum_{i=1}^{s}\tilde C_i=\sigma^{-1}(o)
\end{equation*}
be the exceptional divisor.
Thus $\uprho(\tilde X)=s+1$.
For some large $k$ the divisor $-kK_X$ is very ample.
Let $H\in |-kK_X|$ be a general member and let $\Theta:=\frac 1k H$.
Then $K_X+\Theta\equiv 0$ and the pair $(X,\Theta)$ is lc at $o$ and klt outside $o$.
We can write
\begin{equation}
\label{equation-crepant-formula}
K_{\tilde X}+\tilde C=\sigma^*K_X,
\qquad
K_{\tilde X}+\tilde\Theta+\tilde C=\sigma^*(K_X+\Theta),
\end{equation}
where $\tilde \Theta$ is the proper transform of $\Theta$ on $\tilde X$.
Clearly $\tilde C\cap\operatorname{Supp}(\tilde\Theta)=\emptyset$ and $\tilde \Theta$ is nef and big.
Note also that $K_{\tilde X}$ is $\sigma$-nef.
\end{construction}
\begin{scase}
\label{case-I=6-tildeX}
Let $D\in |-K_X|$ be a member such that $o\in\operatorname{Supp}(D)$.
This holds automatically for any member $D\in |-K_X|$
if $I>1$ because $-K_X$ is not Cartier at
$o$ in this case.
In general, such a member exists by Lemma~\ref{lemma-RR-and-one-point}\ref{lemma-RR-and-one-point1}.
We have
\begin{equation}
\label{equation-I=6-tildeX}
K_{\tilde X}+\sum m_i\tilde C_i+\tilde D\sim 0,\quad m_i\ge 2\quad \forall i.
\end{equation}
\end{scase}
\begin{case}
We distinguish two cases that will be treated in Sect.~\ref{section-fibrations}
and~\ref{section-del-pezzo} respectively:
\begin{enumerate}
\renewcommand\labelenumi{{\rm (\Alph{enumi})}}
\renewcommand\theenumi{{\rm (\Alph{enumi})}}
\item
\label{division-fibrations}
there exists a fibration $\tilde X\to T$ over a smooth curve,
\item
\label{division-del-pezzo}
$\tilde X$ has no dominant morphism to a curve.
\end{enumerate}
\end{case}
Note that the divisor $-(K_{\tilde X}+\tilde C)$ is nef and big.
Therefore, in the case \ref{division-fibrations} the generic fiber
of the fibration $\tilde X\to T$ is a smooth rational curve.
To show the existence of $\mathbb{Q}$-Gorenstein smoothings we use unobstructedness
of deformations:
\begin{proposition}[{\cite[Proposition 3.1]{Hacking-Prokhorov-2010}}]
\label{no-obstructions}
Let $Y$ be a projective surface with log canonical singularities
such that $-K_Y$ is big. Then there are no local-to-global
obstructions to deformations of $Y$.
In particular, if the singularities of $Y$ admit $\mathbb{Q}$-Gorenstein smoothings, then
the surface $Y$ admits a $\mathbb{Q}$-Gorenstein smoothing.
\end{proposition}
However, in some cases the corresponding smoothings can be constructed explicitly:
\begin{sexample}
Consider the hypersurface $X\subset\mathbb{P}(1,1,2,3)$ given by $z^2=y\phi_4(x_1,x_2)$.
Then $X$ is a del Pezzo surface with $K_X^2=1$.
The singular locus of $X$ consists of
the point $(0:0:1:0)$ of type $[3; [2]^4]$
and four points
$\{z=y=\phi_4(x_1,x_2)=0\}$ of type $\operatorname{A}_1$.
Therefore, $X$ is of type~\ref{types-I=2-4-I2} with $n=3$.
\end{sexample}
\begin{sexample}
Consider the hypersurface $X\subset\mathbb{P}(1,1,2,3)$ given by $(x_1^3-x_2^3)z+y^3=0$.
Then $X$ is a del Pezzo surface with $K_X^2=1$. The singular locus of $X$ consists of
the point $(0:0:0:1)$ of type $[2; [3]^3]$
and three points
$(1:\zeta_3^k:0:0)$, $k=0,1,2$ of type $\operatorname{A}_2$. Therefore,
$X$ is of type~\ref{types-I=3}
with $n=2$.
\end{sexample}
\section{Proof of Theorem~\ref{theorem-main}: Fibrations}
\label{section-fibrations}
In this section we consider the case~\ref{division-fibrations} of~\ref{construction-main}.
First we describe quickly the singular fibers that occur in our classification.
\begin{case}
\label{List-A}
Let $Y$ be a smooth surface and let $Y\to T$ be a rational curve fibration.
Let $\Sigma\subset Y$ be a section and let $F$ be a singular fiber.
We say that $F$ is of type
\type{(I_k)} or \type{(II)}
if its dual graph has the following form,
where $\square$ corresponds to $\Sigma$ and $\bullet$ corresponds to a $(-1)$-curve:
\begin{equation*}
\vcenter{
\xy
\xymatrix"M"{
\square\ar@{-}[r]&
\overset{k}\circ\ar@{-}[r]&\bullet\ar@{-}[r]&\circ\ar@{-}[r]&\cdots\ar@{-}[r]&\circ&
}
\POS"M1,4"."M1,6"!C*\frm{_\}},-U*---!D\txt{$\scriptstyle{k-1}$}
\endxy
\vspace{16pt}
}
\leqno{\mathrm{(I_k)}}
\end{equation*}
\begin{equation*}
\vcenter{
\xy
\xymatrix@R=5pt{
&&&\circ\ar@{-}[r]&\bullet
\\
\square\ar@{-}[r]&\circ\ar@{-}[r]&\circ\ar@{-}[rd]\ar@{-}[ru]
\\
&&&\circ
}
\endxy
\vspace{16pt}
}
\leqno{\mathrm{(II)}}
\end{equation*}
Assume that $Y$ has only fibers of these types
\type{(I_k)} or \type{(II)}.
Let $Y\to \bar X$ be the contraction of all curves in fibers
having self-intersections less than $-1$, i.e. corresponding to white vertices.
Then $\uprho(\bar X)=2$ and $\bar X$ has a contraction $\theta: \bar X\to T$.
\begin{sremark}
Let $\bar C\subset \bar X$ be the image of $\Sigma$.
Assume that $\bar X$ is projective, $\bar C^2<0$, i.e. $\bar C$ is contractible,
and $(K_{\bar X}+\bar C)\cdot\bar C=0$.
For a general fiber $F$ of $\theta$ we have $(K_{\bar X}+\bar C)\cdot F=-1$.
Therefore, $-(K_{\bar X}+\bar C)$ is nef.
Now let $\bar X\to X$ be the contraction of $\bar C$. Then $X$ is a del Pezzo surface with
$\uprho(X)=1$.
\end{sremark}
\end{case}
\begin{case}
Recall that we use the notation of~\ref{Assumptions} and~\ref{construction-main}. In this section
we assume that $\tilde X$ has a rational curve fibration $\tilde X\to T$, where $T$ is a smooth curve
(the case~\ref{division-fibrations}).
Since $\uprho(X)=1$, the curve $\tilde C$ is not contained in the fibers.
A general fiber $\tilde F\subset\tilde X$ is a smooth rational curve.
By the adjunction formula $K_{\tilde X}\cdot\tilde F=-2$. By \eqref{equation-I=6-tildeX}
we have $\tilde F\cdot\sum m_i\tilde C_i=2$ and so $\tilde F\cdot\tilde D=0$.
Hence there exists exactly one component of $\tilde C$, say $\tilde C_1$, such that
$\tilde F\cdot\tilde C_1=1$, $m_1=2$, and for $i\neq 1$ we have
$\tilde F\cdot\tilde C_i=0$. This means that
the divisor $\tilde D$ and the components $\tilde C_i$ with $i\neq 1$ are contained in the fibers
and $\tilde C_1$ is a section of the fibration $\tilde X\to T$.
Let us contract all the vertical components of $\tilde C$, i.e. the components $\tilde C_i$
with $i\neq 1$.
We get the following diagram
\begin{equation*}
\xymatrix@C=80pt{
\tilde X\ar[d]_{\sigma}\ar[r]^{\nu}
&\bar X\ar[d]^{\theta}\ar[dl]
\\
X&T
}
\end{equation*}
Let $\bar C:=\nu_*\tilde C=\nu_*\tilde C_1$, $\bar\Theta=\nu_*\tilde\Theta$,
and $\bar D=\nu_*\tilde D$.
By \eqref{equation-crepant-formula} and \eqref{equation-I=6-tildeX}
we have
\begin{equation}
\label{equation-I=6-barX-fibration}
K_{\bar X}+\bar C+\bar\Theta\equiv 0,\qquad
K_{\bar X}+2\bar C+\bar D\sim 0.
\end{equation}
Moreover, the pair $(\bar X,\bar C+\bar\Theta)$ is lc and if $I>1$, then $\dim |\bar D|>0$.
\end{case}
\begin{lemma}[cf. {\cite{Fujisawa1995}}]
\label{lemma-elliptic}
If the singularity $(X\ni o)$ is not rational, then
$T$ is an elliptic curve, $\tilde X\simeq \bar X$ is smooth,
and $X$ is a generalized cone over $T$.
\end{lemma}
\begin{proof}
By Theorem~\ref{theorem-classification-lc-singularities}\ref{Theorem-simple-elliptic-cusp=I=1} the surface $\tilde X$ is smooth along $\tilde C$.
Since $\tilde C_1$ is a section, we have $\tilde C_1\simeq T$ and $\tilde C$ cannot be a combinatorial
cycle of smooth rational curves.
Hence both $\tilde C_1$ and $T$ are smooth elliptic curves.
Then $\tilde C=\tilde C_1$ and
$\uprho(\tilde X)=\uprho(X)+1=2$.
Hence any fiber $\tilde F$ of the fibration $\tilde X\to T$ is irreducible.
Since $\tilde F\cdot\tilde C_1=1$, no fiber is multiple.
This means that $\tilde X\to T$ is a smooth morphism.
Therefore, $\tilde X$ is a geometrically ruled surface over an elliptic curve.
\end{proof}
From now on we assume that the singularities of $X$ are rational.
In this case, $T\simeq\mathbb{P}^1$ and $\dim |\bar D|\ge \dim |-K_X|>0$ (see~\ref{case-I=6-tildeX}
and Lemma~\ref{lemma-RR-and-one-point}).
\begin{lemma}
Let $\bar F$ be a degenerate fiber \textup(with reduced structure\textup).
Then the dual graph of $\bar F$ has one of the forms described in \xref{List-A}:
\par\smallskip
\type{(I_k)} with $k=2$, $3$, $4$ or $6$, or \type{(II)}.
\end{lemma}
\begin{proof}
Let $\bar P:=\bar C\cap\bar F$.
Since $-(K_{\bar X}+\bar C+\bar F)$ is $\theta$-ample,
the pair $({\bar X},\bar C+\bar F)$ is plt outside $\bar C$ by Shokurov's connectedness theorem.
Let $m$ be the multiplicity of $\bar F$.
Since $\bar C$ is a section of $\theta$, we have $\bar C\cdot\bar F=1/m<1$ and so
the point $\bar P\in\bar X$ is singular.
If the pair $(\bar X,\bar F)$ is plt at $\bar P$, then $\bar X$ has
on $\bar F$ two singular points and these points are of types
$\frac 1n (1,q)$ and $\frac 1n (1,-q)$ (see e.g. \cite[Th. 7.1.12]{Prokhorov-2001}).
We may assume that $\bar P\in\bar X$ is of type $\frac 1n (1,q)$.
In this case, $m=n$ and the pair $({\bar X},\bar C+\bar F)$ is lc at $\bar P$
because $\bar C\cdot\bar F=1/n$.
By Theorem~\ref{Theorem-Q-smoothings}
we have $n=2$, $3$, $4$, or $6$ and $q=1$.
We get the case \type{(I_k)}.
From now on we assume that $(\bar X,\bar F)$ is not plt at $\bar P$.
In particular, $(\bar X\ni\bar P)$ is not of type $\frac 1n(1,1)$.
Then again by Theorem~\ref{Theorem-Q-smoothings} the singularity $(o\in X)$
is of type $[n_1,\dots, n_s; [2]^4]$. Hence the part of the dual graph of $F$
attached to $C_1$ has the form
\begin{equation*}
\vcenter{
\xy
\xymatrix@R=5pt{
&&&&\circ
\\
\underset{\bar C}\square\ar@{-}[r]&\overset{n_1}\circ\ar@{-}[r]&\cdots\ar@{-}[r]&\overset{n_k}\circ\ar@{-}[ru]\ar@{-}[rd]&
\\
&&&&\circ
}
\endxy
}
\end{equation*}
where $k\ge 1$.
Then $K_{\bar X}+\bar C$ is of index $2$ at $\bar P$ (see \cite[Prop. 16.6]{Utah}).
Since $(K_{\bar X}+\bar C)\cdot m\bar F=-1$, the number
$2(K_{\bar X}+\bar C)\cdot \bar F=-2/m$
must be an integer.
Therefore, $m=2$. Assume that $\bar X$ has a singular point $\bar Q$
on $\bar F\setminus\{\bar P\}$. We can write $\operatorname{Diff}_{\bar F}(0)=\alpha_1\bar P+\alpha_2\bar Q$,
where $\alpha_1\ge 1$ (by the inversion of adjunction) and $\alpha_2\ge 1/2$. Then $\operatorname{Diff}_{\bar F}(\bar C)=\alpha_1'\bar P+\alpha_2\bar Q$,
where $\alpha_1'=\alpha_1+\bar F\cdot\bar C\ge 3/2$. On the other hand, the divisor
\[
-(K_{\bar F}+\operatorname{Diff}_{\bar F}(\bar C))=-(K_{\bar X}+\bar F+\bar C)|_{\bar F}
\]
is ample. Hence, $\deg\operatorname{Diff}_{\bar F}(\bar C)<2$, a contradiction.
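Indeed, the two points give $\deg\operatorname{Diff}_{\bar F}(\bar C)=\alpha_1'+\alpha_2\ge \tfrac32+\tfrac12=2$.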
Thus $\bar P$ is the only singular point of $\bar X$ on $\bar F$.
We claim that $\bullet$ is attached to
one of the $(-2)$-curves at the end of the graph.
Indeed, assume that
the dual graph of $F$
has the form
\begin{equation*}
\vcenter{
\xy
\xymatrix@R=5pt{
&&&&&&\circ
\\
\underset{\bar C}\square\ar@{-}[r]&\overset{n_1}\circ\ar@{-}[r]&\cdots\ar@{-}[r]&\overset{n_i}\circ\ar@{-}[r]&\cdots\ar@{-}[r]&\overset{n_k}\circ\ar@{-}[ru]\ar@{-}[rd]&
\\
&&&\bullet\ar@{-}[u]&&&\circ
}
\endxy
}
\end{equation*}
where $1\le i\le k$. Clearly, $n_i=2$.
Contracting the $(-1)$-curve $\bullet$ we obtain the following graph
\begin{equation*}
\vcenter{
\xy
\xymatrix@R=5pt{
&&&&&&\circ
\\
\underset{\bar C}\square\ar@{-}[r]&\overset{n_1}\circ\ar@{-}[r]&\cdots\ar@{-}[r]&\bullet\ar@{-}[r]&\cdots\ar@{-}[r]&\overset{n_k}\circ\ar@{-}[ru]\ar@{-}[rd]&
\\
&&&&&&\circ
}
\endxy
}
\end{equation*}
Continuing the process, on each step we have a configuration of the same type
and finally we get the dual graph
\begin{equation*}
\vcenter{
\xy
\xymatrix@R=5pt{
&&&&&\circ
\\
\underset{\bar C}\square\ar@{-}[r]&
\overset{n_1'}\circ\ar@{-}[r]&\cdots\ar@{-}[r]&\overset{n_{j}'}\circ\ar@{-}[r]&
\bullet\ar@{-}[ru]\ar@{-}[rd]&
\\
&&&&&\circ
}
\endxy
}
\end{equation*}
where $j\ge 0$. Then the next contraction gives us a configuration which is not a
simple normal crossing divisor. The contradiction proves our claim.
Similar arguments show that
$n_k=n_{k-1}=2$ and $k=2$,
i.e. we get the case \type{(II)}.
\end{proof}
\begin{proof}[Proof of Theorem \xref{theorem-main} in the case \xref{construction-main}\xref{division-fibrations}]
If all the fibers are smooth, then by Lemma~\ref{lemma-elliptic} we have the case~\ref{types-simple-elliptic}.
If there exists a fiber of type \type{(I_k)} with $k>2$, then
$I>2$ and by Theorem~\ref{Theorem-Q-smoothings} we have cases~\ref{types-I=3},~\ref{types-I=4-I22I4},~\ref{types-I=6}.
If all the fibers are of types \type{(I_2)} or \type{(II)}, then $I=2$
and we have cases~\ref{types-I=2-4-I2},~\ref{types-I=2-2I2-IV},~\ref{types-I=2-2IV}.
The computation of $K_X^2$ follows from \eqref{equation-Noether-formula}
and \eqref{equation-computation-muP-nG}.
\end{proof}
\section{Proof of Theorem~\ref{theorem-main}: Birational contractions}
\label{section-del-pezzo}
\begin{case}
In this section we assume that $\tilde X$ has no dominant morphism to a curve
(case~\ref{construction-main}\ref{division-del-pezzo}). It will be shown that this case does not occur.
Run the $K_{\tilde X}$-MMP on $\tilde X$.
Since $-K_{\tilde X}$ is big, on the last step we get a Mori fiber space $\bar X\to T$
and by our assumption $T$
cannot be a curve. Hence $T$ is a point and $\bar X$ is a del Pezzo surface with
$\uprho(\bar X)=1$.
Moreover, the singularities of $\bar X$ are log terminal and so $\bar X\not\simeq X$.
Thus we get the following diagram
\begin{equation*}
\xymatrix@R=0.7pc{
&\tilde X\ar[dl]_{\sigma}\ar[rd]^{\nu}
\\
X\ar@{-->}[rr]&&\bar X
}
\end{equation*}
Put $\bar C:=\nu_* \tilde C$ and $\bar C_i:=\nu_* \tilde C_i$.
By \eqref{equation-I=6-tildeX} we have
\begin{equation}
\label{equation-I=6-barX}
K_{\bar X}+\sum m_i\bar C_i+\bar D\sim 0,\qquad m_i\ge 2.
\end{equation}
Since $\uprho(X)=\uprho(\bar X)$ and $\tilde C$ is the
$\sigma$-exceptional divisor, the whole $\tilde C$ cannot be
contracted by $\nu$.
\end{case}
\begin{lemma}
\label{lemma-fiber-meets-C}
Any fiber $\nu^{-1}(\bar P)$ of positive dimension meets $\tilde C$.
\end{lemma}
\begin{proof}
Since $\bar X$ is normal, $\nu^{-1}(\bar P)$ is a connected contractible effective divisor.
Since all the components of $\tilde C$ are $K_{\tilde X}$-non-negative,
$\nu^{-1}(\bar P)\not\subset\tilde C$.
Since $\uprho(X)=1$, we have $\nu^{-1}(\bar P)\cap\tilde C\neq \emptyset$.
\end{proof}
\begin{lemma}
\label{lemma-fiber-meets-C-1}
If $\nu$ is not an isomorphism over $\bar P$, then $(\bar X,\bar C)$ is plt at $\bar P$.
In particular, $\bar C$ is smooth at $\bar P$.
\end{lemma}
\begin{proof}
Since $K_{\tilde X}+\tilde C+\tilde\Theta\equiv 0$,
the pair $(\bar X,\bar C+\bar\Theta)$ is lc.
By the above lemma there exists a component $\tilde E$ of $\nu^{-1}(\bar P)$
meeting $\tilde C$.
By Kodaira's lemma the divisor $\tilde\Theta-\sum\alpha_i\tilde C_i$ is ample for
some $\alpha_i>0$.
Hence $\tilde E$ meets $\tilde\Theta$ and so $\operatorname{Supp} (\bar\Theta)$ contains $\bar P$.
Therefore, $(\bar X,\bar C)$ is plt at $\bar P$.
\end{proof}
\begin{scorollary}
\label{corollary-dlt}
$(\bar X,\bar C)$ is dlt.
\end{scorollary}
\begin{lemma}
\label{lemma-barC-irrducible}
\begin{enumerate}
\item
\label{lemma-barC-irrducible-1}
$\bar C$ is an irreducible smooth rational curve;
\item
\label{lemma-barC-irrducible-2}
$\bar X$ has at most two singular points on $\bar C$;
\item
\label{lemma-barC-irrducible-3}
the singularities of $X$ are rational \textup{(see also \cite[Corollary 1.9]{Fujisawa1995})}.
\end{enumerate}
\end{lemma}
\begin{proof}
\ref{lemma-barC-irrducible-1}
Let $\bar C_1\subset\bar C$ be any component meeting $\bar D$
and let $\bar C':=\bar C-\bar C_1$.
Assume that $\bar C'\neq 0$. By~\ref{corollary-dlt} any point
$\bar P\in\bar C_1\cap\bar C'$ is a smooth point of
$\bar X$.
Hence $\operatorname{Diff}_{\bar C_1}(\bar C')$ contains $\bar P$ with positive integral
coefficient and $\deg\operatorname{Diff}_{\bar C_1}(\bar D+\bar C')\ge 2$ because
$\operatorname{Supp} (\bar D)\cap\bar C\neq\emptyset$.
On the other hand, $-(K_{\bar X}+\bar C+\bar D)$ is ample
by \eqref{equation-I=6-barX}. This contradicts the adjunction formula. Thus $\bar C$ is irreducible.
Again by the adjunction
\begin{equation*}
\deg K_{\bar C}+\deg\operatorname{Diff}_{\bar C}(0)<0.
\end{equation*}
Hence, $p_a(\bar C)=0$.
\ref{lemma-barC-irrducible-2}
Assume that $\bar X$ is singular at $\bar P_1,\dots,\bar P_N\in\bar C$.
Write
\begin{equation*}
\operatorname{Diff}_{\bar C}(0)=\sum_{i=1}^N\left(1-\textstyle{\frac 1{b_i}}\right)\bar P_i
\end{equation*}
for some $b_i\ge 2$.
The coefficient of $\operatorname{Diff}_{\bar C}(\bar D)$ at points of the intersection
$\operatorname{Supp}(\bar D)\cap\bar C$ is at least $1$.
Since $\operatorname{Supp} (\bar D)\cap\bar C\neq\emptyset$, we have $N\le 2$.
\ref{lemma-barC-irrducible-3}
If $(X\ni o)$ is a non-rational singularity, then $p_a(\tilde C)=1$ and $\tilde X$ is smooth along $\tilde C$.
Hence $p_a(\bar C)\ge1$. This contradicts~\ref{lemma-barC-irrducible-1}.
\end{proof}
\begin{lemma}
\label{lemma-contractions-K1}
Let $\varphi:S\to S'$ be a birational Mori contraction of surfaces with log terminal singularities
and let $E\subset S$ be the exceptional divisor.
Then $-K_S\cdot E\le 1$ and the equality holds if and only if the singularities of $S$
along $E$ are at worst Du Val.
\end{lemma}
\begin{proof}
Let $\psi:S^{\min}\to S$ be the minimal resolution and let $\tilde E\subset S^{\min}$
be the proper transform of $E$.
Write $K_{S^{\min}}=\psi^*K_S-\Delta$.
Since $K_{S^{\min}}\cdot\psi^* E<0$, the divisor $K_{S^{\min}}$ is not nef over $S'$.
Hence, $K_{S^{\min}}\cdot\tilde E=-1$ and so
$-K_S\cdot E+\tilde E\cdot\Delta=1$.
\end{proof}
\begin{lemma}
\label{lemma-E}
Let $\nu':\tilde X\to X'$ be the first extremal contraction in $\nu$
and let $\tilde E$ be its exceptional divisor.
Then $\tilde E\not\subset\tilde C$. Moreover,
$\tilde E\cap\tilde C$ is a singular point of $\tilde X$
and smooth point of $\tilde C$.
\end{lemma}
\begin{proof}
Since $\uprho(X)=1$, \ $\tilde E\cap \tilde C\neq\emptyset$.
Since $K_{\tilde X}$ is $\sigma $-nef, $\tilde E\not\subset\tilde C$.
Since $\bar C$ is a smooth rational curve, $\tilde E$ meets $\tilde C$ at a single point,
say $\tilde P$. Further, $\sigma (\tilde E)$ meets $\operatorname{Supp}(\Theta)$ outside $o$. Hence, $\tilde\Theta\cdot\tilde E>0$.
By Lemma~\ref{lemma-contractions-K1}\ $K_{\tilde X}\cdot\tilde E\ge -1$. Since
$K_{\tilde X}+\tilde C+\tilde\Theta\equiv 0$,
we have $\tilde C\cdot\tilde E<1$.
Hence $\tilde C\cap\tilde E$ is a singular point of $\tilde X$.
Since $(\tilde X,\tilde C)$ is dlt, $\tilde C\cap\tilde E$ is a smooth point of $\tilde C$
(see e.g. \cite[16.6]{Utah}).
\end{proof}
\begin{proposition}
\label{proposition-tilde-C-irreducible}
$\uprho(\tilde X)=2$ and $\tilde C$ is irreducible.
Moreover, $\bar X$ has exactly two singular points on $\bar C$ and $I>2$.
\end{proposition}
\begin{proof}
Assume the contrary, i.e. that $\tilde C$ is reducible.
By Lemma~\ref{lemma-barC-irrducible} the curve $\bar C$
is irreducible. Let $s$ be the number of components of $\tilde C$.
So, $\uprho(\tilde X)=s+1$. Hence $\nu$ contracts $s-1$ components of $\tilde C$
and exactly one divisor, say $\tilde E$ such that $\tilde E\not\subset\tilde C$.
By Lemma~\ref{lemma-E} the curve $\tilde E$ is contracted on the first step. Note that $\tilde C$ is a chain
$\tilde C_1+\cdots+\tilde C_s$, where both $\tilde C_1$ and $\tilde C_s$ contain
two points of type $\operatorname{A}_1$ and the middle curves $\tilde C_2$,\dots, $\tilde C_{s-1}$
are contained in the smooth locus. By Lemma~\ref{lemma-E}
we may assume that $\tilde E$ meets $\tilde C_1$.
Then $\nu$ contracts $\tilde C_1$,\dots, $\tilde C_{s-1}$.
However $\tilde C_s$ contains two points of type $\operatorname{A}_1$
and it is not contracted. Thus $\bar X$ has two singular points of type $\operatorname{A}_1$
on $\bar C$. Again by Lemma~\ref{lemma-barC-irrducible} the surface
$\bar X$ has no other singular points on $\bar C$.
In particular, $2\bar C$ is Cartier, $\bar X$ has only singularities of type $\operatorname{T}$,
and $K_{\bar X}^2$ is an integer.
On the other hand, we have $-K_{\bar X}=m\bar C+\bar D$, $m\ge 2$.
By the adjunction formula
\begin{equation*}
-1=\deg(K_{\bar C}+\operatorname{Diff}_{\bar C}(0))=(K_{\bar X}+\bar C)\cdot\bar C=
-\bar D\cdot\bar C-(m-1)\bar C^2.
\end{equation*}
This gives us $\bar D\cdot\bar C=\bar C^2=1/2$, $m=2$,
and $K_{\bar X}^2=9/2$, a contradiction.
Finally, by Lemmas~\ref{lemma-barC-irrducible} and~\ref{lemma-E} the surface
$\tilde X$ (resp. $\bar X$) has exactly three (resp. two) singular points on $\tilde C$.
\end{proof}
By Theorem~\ref{Theorem-Q-smoothings} the surface $\bar X$ has at least one non-Du Val singularity
lying on $\bar C$. Thus
Theorem~\ref{theorem-main} is implied by the following.
\begin{proposition}
\label{lemma-DuVal-singularities-C}
$\bar X$ has only Du Val singularities on $\bar C$.
\end{proposition}
\begin{proof}
Assume that the singularities of $\bar X$
at points lying on $\bar C$ are of types $\frac 1{n_1} (1,1)$
and $\frac 1{n_2} (1,1)$ with $n_1\ge n_2$ and $n_1>2$.
In this case near $\bar C$ the divisor $H:=-(K_{\bar X}+2\bar C)$ is Cartier.
By the adjunction formula
\begin{equation*}
K_{\bar C}+\operatorname{Diff}_{\bar C}(0)=(K_{\bar X}+\bar C)|_{\bar C}
=-(H+\bar C)|_{\bar C}.
\end{equation*}
Hence,
\begin{equation*}
\deg\operatorname{Diff}_{\bar C}(0)< 2-H\cdot\bar C\le 1.
\end{equation*}
In particular, $\bar X$ has at most one singular point on $\bar C$,
a contradiction.
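Explicitly, since $n_1>2$ and $n_2\ge 2$, the two points would contribute
\begin{equation*}
\deg\operatorname{Diff}_{\bar C}(0)=\Bigl(1-\frac{1}{n_1}\Bigr)+\Bigl(1-\frac{1}{n_2}\Bigr)\ge\frac23+\frac12>1,
\end{equation*}
which is incompatible with the displayed inequality.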
\end{proof}
\section{Introduction}
Discovering governing equations of a physical system from data is a central challenge in a variety of areas of science and engineering. These governing equations are usually represented by partial differential equations (PDEs). For the first scenario, where we know all the terms of the PDE and only need to infer unknown coefficients from data, many effective methods have been proposed. For example, we can enforce physics-based constraints to train neural networks that learn the underlying physics~\citep{pang2019fpinns, zhang2019quantifying, chen2020physics, yazdani2020systems, rao2020physics, wu2018physics, qian2020lift, lu2021deepxde}. In the second scenario, where we do not know all the PDE terms but have prior knowledge of all possible candidate terms, several approaches have also been developed recently~\citep{brunton2016discovering, rudy2017data, chen2020deep}.
In a general setup, discovering PDEs from data alone, without any prior knowledge, is much more difficult. To address this challenge, we observe that in most practical cases it is sufficient, instead of discovering the PDE in an explicit form, to have a surrogate model of the PDE solution operator that can predict PDE solutions repeatedly for different conditions (e.g., initial conditions). Very recently, several approaches have been developed to learn PDE solution operators by using neural networks such as DeepONet~\citep{lu2019deeponet} and the Fourier neural operator~\citep{li2020fourier}. However, these approaches require large amounts of data to train the networks.
In this work, we propose a novel approach to learn PDE solution operators from only one data point, i.e., one-shot learning. To our knowledge, the use of one-shot learning in this space is very limited. \citet{wang2019effective} and \citet{fang2017feature} used few-shot learning to learn PDEs and applied it to face recognition. Our method leverages the locality of PDEs and uses neural networks to learn the system governed by the PDE on a small computational domain. Then, for a new PDE condition, a fixed-point iterative algorithm is proposed to couple all local domains and find the PDE solution. Our method exhibits strong generalization properties.
Moreover, the one-shot local learning approach trains very fast and extends to multi-dimensional, linear, and nonlinear PDEs. In this paper, we describe the approach in detail and demonstrate it on different PDEs for a range of conditions.
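To make the coupling idea concrete, consider a toy case in which the local map is known in closed form rather than learned (this sketch is our own illustration, not the method of the paper): for the 1D Poisson equation $u''=f$ on an equispaced grid with zero boundary values, the exact local relation at an interior node is $u_i=(u_{i-1}+u_{i+1}-h^2f_i)/2$, and sweeping it over all nodes is a fixed-point (Jacobi) iteration that couples the local domains for any new forcing $f$.

```python
import numpy as np

def solve_by_local_iteration(f, n_iter=200_000, tol=1e-12):
    """Couple the exact local maps of u'' = f, u(0) = u(1) = 0,
    by fixed-point (Jacobi) iteration on an equispaced grid."""
    n = len(f)                       # number of interior nodes
    h = 1.0 / (n + 1)
    u = np.zeros(n + 2)              # boundary nodes stay at 0
    for _ in range(n_iter):
        u_new = u.copy()
        # exact local relation: u_i = (u_{i-1} + u_{i+1} - h^2 f_i) / 2
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:] - h**2 * f)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# New condition: f(x) = -pi^2 sin(pi x), with exact solution u(x) = sin(pi x)
n = 49
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
f = -np.pi**2 * np.sin(np.pi * x)
u = solve_by_local_iteration(f)
err = np.max(np.abs(u[1:-1] - np.sin(np.pi * x)))
print(err < 1e-3)   # only the second-order discretization error remains
```

When the local map is learned by a neural network instead of known analytically, the same iteration plays the role of the coupling algorithm described above.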
\section{Methods}
We first introduce the problem setup of learning solution operators of partial differential equations (PDEs) and then present our one-shot learning method.
\subsection{Learning solution operators of PDEs}
We consider a physical system governed by a PDE defined on a spatio-temporal domain $\Omega \subset \mathbb{R}^d$:
\begin{equation*}
\mathcal{F}[u(\mathbf{x}); f(\mathbf{x})] = 0, \quad \mathbf{x}=(x_1, x_2, \dots, x_d) \in \Omega
\end{equation*}
with suitable initial and boundary conditions. $u(\mathbf{x})$ is the solution of the PDE and $f(\mathbf{x})$ is a forcing term. The solution $u$ depends on $f$, and thus we define the solution operator as
$\mathcal{G}: f(\mathbf{x}) \mapsto u(\mathbf{x})$.
For nonlinear PDEs, $\mathcal{G}$ is a nonlinear operator.
In many problems, the PDE of a physical system is unknown or computationally expensive to solve, and instead, sparse data representing the physical system is available. Specifically, we consider a dataset $\mathcal{T}=\{(f_i,u_i)\}_{i=1}^{|\mathcal{T}|}$, where $(f_i,u_i)$ is the $i$-th data point and $u_i=\mathcal{G}(f_i)$ is the PDE solution for $f_i$. Our goal is to learn $\mathcal{G}$ from the training dataset $\mathcal{T}$, such that for a new $f$, we can predict the corresponding solution $u=\mathcal{G}(f)$. When $\mathcal{T}$ is sufficiently large, we can learn $\mathcal{G}$ straightforwardly by using neural networks whose input and output are $f$ and $u$, respectively. Many networks have been proposed in this manner, such as DeepONet~\citep{lu2019deeponet} and the Fourier neural operator~\citep{li2020fourier}.
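For concreteness, the following minimal sketch (our own illustration, not taken from these references) learns the solution operator of a discretized 1D Poisson problem from many input--output pairs; plain ridge regression stands in for the neural network, which is adequate here because this particular operator is linear:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth solver for u'' = f on [0, 1] with u(0) = u(1) = 0
# (second-order finite differences on n interior nodes).
n = 32
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

def solution_operator(f):
    return np.linalg.solve(A, f)          # u = G(f)

# Training set T = {(f_i, u_i)}: note how many pairs this route consumes.
F = rng.normal(size=(200, n))
U = np.stack([solution_operator(f) for f in F])

# "Learn" G: ridge regression from discretized f to u.
lam = 1e-8
W = np.linalg.solve(F.T @ F + lam * np.eye(n), F.T @ U)

# Predict the solution for a new, unseen forcing.
x = np.linspace(h, 1.0 - h, n)
f_new = np.sin(3.0 * np.pi * x)
u_true = solution_operator(f_new)
rel_err = np.linalg.norm(f_new @ W - u_true) / np.linalg.norm(u_true)
print(rel_err < 1e-6)
```

A genuine neural operator (DeepONet, Fourier neural operator) replaces the linear map $W$ when $\mathcal{G}$ is nonlinear, but the data requirement, many pairs $(f_i,u_i)$, is the point of contrast with the one-shot setting considered next.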
In this study, we consider an extreme scenario where we have only one data point for training, i.e., one-shot learning with $|\mathcal{T}|=1$, and we let $\mathcal{T}=\{(f_\mathcal{T}, u_\mathcal{T})\}$. Learning from only one data point is impossible in general; here, we assume that $\mathcal{T}$ is not given in advance and that we can select $f_\mathcal{T}$. In addition, instead of learning $\mathcal{G}$ for the entire input space, we only predict the solution for $f$ in a neighborhood of some $f_0$, for which we know the solution $u_0 = \mathcal{G}(f_0)$.
\subsection{One-shot learning based on locality}
To overcome the difficulty of training a machine learning model on only one data point, we exploit the fact that derivatives and PDEs are defined locally, i.e., the same PDE is satisfied in an arbitrarily small domain inside $\Omega$. In our method, we partition the entire domain $\Omega$ into many small domains, i.e., a mesh of $\Omega$. In this study, we only consider a structured equispaced grid to demonstrate our method, but the method can be easily extended to unstructured meshes.
\begin{wrapfigure}{r}{0.23\textwidth}
\centering
\vspace{-1em}
\includegraphics[width=3cm]{mesh.png}
\vspace{-1ex}
\caption{\textbf{PDE in a local domain $\tilde{\Omega}$.} A neural network $\tilde{\mathcal{G}}$ is trained to learn the mapping from all or some of $u(\mathbf{x})$ and $f(\mathbf{x})$ for $\mathbf{x} \in \tilde{\Omega}$ to $u(\mathbf{x}^*)$.}
\label{fig:mesh}
\end{wrapfigure}
\paragraph{Learning the local solution operator via a neural network.}
To demonstrate the idea, we consider a mesh node at the location $\mathbf{x}^*$ (the red node in Fig.~\ref{fig:mesh}). In order to predict the solution $u(\mathbf{x}^*)$, instead of considering the entire computational domain, we only consider a small domain $\tilde{\Omega}$ surrounding $\mathbf{x}^*$ with several mesh cells. If we know the solution $u$ at the boundary of $\tilde{\Omega}$ ($\partial\tilde{\Omega}$) and $f$ within $\tilde{\Omega}$, then $u(\mathbf{x}^*)$ is determined by the PDE. Here, we use a neural network to represent this relationship
$\tilde{\mathcal{G}}: \{u(\mathbf{x}): \mathbf{x} \in \partial\tilde{\Omega} \} \cup \{f(\mathbf{x}): \mathbf{x} \in \tilde{\Omega} \} \mapsto u(\mathbf{x}^*).$
In addition, considering the flexibility of neural networks, we may use other local information as network inputs. For example, we can only use the value of $f$ at $\mathbf{x}^*$ and add more solutions inside $\tilde{\Omega}$ as network inputs:
$\tilde{\mathcal{G}}: \{u(\mathbf{x}): \mathbf{x} \in \tilde{\Omega} \text{ and } \mathbf{x}\neq\mathbf{x}^* \} \cup \{f(\mathbf{x}^*) \} \mapsto u(\mathbf{x}^*)$.
The size and shape of $\tilde{\Omega}$ are also hyperparameters to be chosen. We will compare the performance of several different choices of network inputs in our numerical experiments.
Because the network learns on a small local domain, by traversing the entire domain $\Omega$ we can generate many input-output pairs for training the network, which makes it possible to learn a network from only one PDE solution. In addition, to make the network generalizable to different $f$, the selection of $f_\mathcal{T}$ in $\mathcal{T}$ is also important. In this study, we choose $f_\mathcal{T}$ to be uniformly random between $-1$ and $1$ on each mesh node, i.e., $f_\mathcal{T}(\mathbf{x})$ is sampled from $U(-1, 1)$. The reason is that if $f_\mathcal{T}$ is smooth, then the training data points in adjacent meshes become similar, and thus the ``effective'' number of training data points is small, while a randomly sampled $f_\mathcal{T}$ induces a more ``diverse'' training dataset. The choice of a random $f_{\mathcal{T}}$ is critical for the learning, as we will elaborate in the numerical examples.
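For the 1D three-node stencil used later in the Poisson example, this construction can be sketched as follows (a minimal NumPy illustration; the finite-difference solve stands in for whatever classical solver produced the single data point, and the grid size is an illustrative choice):

```python
import numpy as np

# One-shot dataset construction: a single random forcing f_T and its
# solution u_T on an equispaced grid yield many local training pairs.
n, h = 101, 0.01                        # nodes on [0, 1], step h = 0.01
rng = np.random.default_rng(0)
f_T = rng.uniform(-1.0, 1.0, n)         # f_T(x_i) sampled from U(-1, 1)

# Solve u'' = f_T with u(0) = u(1) = 0 by finite differences
# (standing in for the classical solver that produced the data point).
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2
u_T = np.zeros(n)
u_T[1:-1] = np.linalg.solve(A, f_T[1:-1])

# Traverse the grid: each interior node gives one input-output pair
# [u(x_{i-1}), u(x_{i+1}), f(x_i)] -> u(x_i).
X = np.stack([u_T[:-2], u_T[2:], f_T[1:-1]], axis=1)
y = u_T[1:-1]
print(X.shape, y.shape)                 # (99, 3) (99,)
```

A single global solution thus supplies 99 local training pairs on this grid, which is what makes one-shot training feasible.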
\paragraph{Prediction via a fixed-point iteration.}
For a new $f = f_0 + \Delta{f}$, we cannot directly predict its solution $u$ using the trained network, because the network inputs include the solution to be predicted. Here, we propose an iterative algorithm to find the solution (Algorithm~\ref{alg:predict}). Because $f$ is close to $f_0$, we use $u_0$ as the initial guess of $u$; then, in each iteration, we apply the trained network to the current solution to obtain a new solution. This algorithm can be viewed as a fixed-point iteration. When the iteration converges, $u$ and $f$ are consistent with respect to the local operator $\tilde{\mathcal{G}}$, and thus the current $u$ is the solution of our PDE.
\begin{algorithm}[H]
Initiate: $u(\mathbf{x}) \leftarrow u_{0}(\mathbf{x})$ for all $\mathbf{x} \in \Omega$ \\
\While{$u$ has not converged}{
\For{$\mathbf{x} \in \Omega$}{
$\hat{u}(\mathbf{x}) \leftarrow \tilde{\mathcal{G}}(\text{the inputs of }u \text{ and } f \text{ in } \tilde{\Omega})$
}
Update: $u(\mathbf{x}) \leftarrow \hat{u}(\mathbf{x})$ for all $\mathbf{x} \in \Omega$
}
\caption{Predicting the solution $u=\mathcal{G}(f)$ for a new $f$.}
\label{alg:predict}
\end{algorithm}
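As a concrete sketch of Algorithm~\ref{alg:predict} for the 1D Poisson problem below, with the exact second-order stencil standing in for the trained local operator $\tilde{\mathcal{G}}$ (an assumption made to keep the sketch self-contained; a fixed iteration count replaces the convergence check):

```python
import numpy as np

# Fixed-point prediction (cf. Algorithm 1). G_tilde below is a stand-in
# for the trained local operator: the exact second-order stencil that
# the linear network recovers for the Poisson equation.
def G_tilde(u_left, u_right, f_center, h=0.01):
    return 0.5 * u_left + 0.5 * u_right - 0.5 * h**2 * f_center

def predict(f, u0, n_iter=20000):
    u = u0.copy()                        # initial guess: the known u_0
    for _ in range(n_iter):              # fixed count for simplicity
        u_new = u.copy()                 # boundary values stay fixed
        u_new[1:-1] = G_tilde(u[:-2], u[2:], f[1:-1])  # sweep all nodes
        u = u_new                        # synchronous update
    return u

x = np.linspace(0.0, 1.0, 101)
f = np.sin(2 * np.pi * x) + 0.05                   # new forcing f0 + df
u0 = -np.sin(2 * np.pi * x) / (2 * np.pi) ** 2     # known solution for f0
u = predict(f, u0)
```

With this exact stencil the iteration converges to the finite-difference solution; with a trained $\tilde{\mathcal{G}}$ the same loop applies, with the network call in place of the stencil.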
\section{Results}
In this section, we show the effectiveness of our proposed method for a few problems, and compare the accuracy and convergence rate of different choices of the local solution operator $\tilde{\mathcal{G}}$.
\subsection{Poisson equation}
We first consider a pedagogical example of a one-dimensional Poisson equation
$\Delta u = f(x), \quad x \in [0, 1]$,
with the zero Dirichlet boundary condition $u(0)=u(1)=0$, and the solution operator is $\mathcal{G}: f\mapsto u$.
We consider an equispaced grid with a step size $h = 0.01$, and choose the simplest local operator as $\tilde{\mathcal{G}}: [u(x_{i-1}),u(x_{i+1}),f(x_i)] \mapsto u(x_i)$ with $x_i=ih$, i.e., $\tilde{\Omega}$ only has three nodes and we use $u(x_{i-1})$, $u(x_{i+1})$ and $f(x_i)$ to predict $u(x_i)$. We use a single-layer linear network for $\tilde{\mathcal{G}}$, and the training dataset $\mathcal{T}$ only has one data point $(f_\mathcal{T}, \mathcal{G}(f_\mathcal{T}))$, where $f_\mathcal{T}(x_i)$ is sampled from $U(-1,1)$. One example of $\mathcal{T}$ is shown in Figs.~\ref{fig:Poisson}A and B, and we use the finite difference method (FDM) to compute the numerical solution.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{Poisson}
\vspace{-2em}
\caption{\textbf{Learning the Poisson equation.} (\textbf{A}) A random $f_\mathcal{T}$ and (\textbf{B}) the corresponding solution $u_\mathcal{T}$ used for training. (\textbf{C}) The initial guess $u_0$ has an $L^2$ relative error of 25\% compared to the reference solution. (\textbf{D}) Red solid line: the $L^2$ relative error decreases exponentially during the iteration when the network is trained on a random $f$. Blue dashed line: the error remains large when the network is trained on $f_0$.}
\label{fig:Poisson}
\end{figure}
After the network is trained, we use the iterative Algorithm~\ref{alg:predict} to predict the solution for $f=\sin(2\pi x)+0.05$. We assume we have the solution $u_0=-\frac{1}{4\pi^2}\sin(2\pi x)$ for $f_0=\sin(2\pi x)$, i.e., $\Delta f=0.05$, and $u_0$ is used as the initial guess of the iteration; this guess has an $L^2$ relative error of 25\% compared to the reference solution (Fig.~\ref{fig:Poisson}C). During the iteration, the error of the solution decreases exponentially with respect to the number of iterations (Fig.~\ref{fig:Poisson}D, red solid line), and it requires about 6000 iterations to reach 1\% error.
We notice that the learned linear network is $u(x_i) = \tilde{\mathcal{G}}(u(x_{i-1}),u(x_{i+1}),f(x_i)) = [u(x_{i-1}),u(x_{i+1}),f(x_i)] \cdot [0.5, 0.5, -5\times 10^{-5}] + 10^{-9}$, which is the second-order finite difference scheme for the Poisson equation. However, as we will show in other examples, the network does not learn a finite difference scheme in general, especially for nonlinear PDEs or a local domain $\tilde{\Omega}$ with more nodes. To show the necessity of using a random $f_\mathcal{T}$ for training, we also use the smooth $f_0$ and $u_0$ to train the network, and the error can only decrease to 15\% (Fig.~\ref{fig:Poisson}D blue dashed line). Although both $f_\mathcal{T}(x)$ and $f_0(x)$ span from $-1$ to $1$, the noisy $f_\mathcal{T}$ generates a diverse training set and thus improves the generalizability of the neural network.
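The recovery of the finite-difference stencil can be checked directly: the reference solution satisfies $u(x_i) = \tfrac{1}{2}u(x_{i-1}) + \tfrac{1}{2}u(x_{i+1}) - \tfrac{h^2}{2}f(x_i)$ exactly, so a least-squares fit over all local stencils returns these weights, $[0.5, 0.5, -5\times 10^{-5}]$ for $h=0.01$ (a NumPy check; the closed-form linear solve replaces the gradient-based training of the linear network):

```python
import numpy as np

n, h = 101, 0.01
rng = np.random.default_rng(1)
f = rng.uniform(-1.0, 1.0, n)
# Finite-difference solution of u'' = f with u(0) = u(1) = 0.
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, f[1:-1])

# Linear "network" fitted in closed form over all local stencils.
X = np.stack([u[:-2], u[2:], f[1:-1]], axis=1)
w, *_ = np.linalg.lstsq(X, u[1:-1], rcond=None)
# w is close to [0.5, 0.5, -5e-05], i.e. the finite-difference scheme.
```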
\subsection{Nonlinear diffusion-reaction equation}
We consider a nonlinear diffusion-reaction equation with a source term $f(x)$:
$\frac{\partial{u}}{\partial{t}} = D \frac{\partial^2 u}{\partial x^2} + k u^2 + f(x), \quad x \in [0,1], t \in [0, 1],$
with zero initial and boundary conditions, where $D=0.01$ is the diffusion coefficient, and $k=0.01$ is the reaction rate. The solution operator is the mapping from $f(x)$ to $u(x, t)$. We use a grid with $\Delta x=\Delta t=0.01$. To generate the training dataset, $f_\mathcal{T}(x)$ is randomly sampled from $U(0,1)$ (Fig.~\ref{fig:random_f_solution}A), and the reference solution (Fig.~\ref{fig:random_f_solution}B) is obtained from finite differences with the Crank-Nicolson method.
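A sketch of such a reference solver (Crank--Nicolson for the diffusion term; here the weak reaction term $ku^2$ is treated explicitly, which is one simple choice and not necessarily the exact treatment used for the reference solutions in this paper):

```python
import numpy as np

# Reference solver for u_t = D u_xx + k u^2 + f(x) with zero initial
# and boundary conditions: Crank-Nicolson for diffusion, with the
# weak reaction term (k = 0.01) treated explicitly.
D, k = 0.01, 0.01
nx, nt, dx, dt = 101, 101, 0.01, 0.01   # grids for x, t in [0, 1]

def solve_dr(f):
    r = D * dt / dx**2
    m = nx - 2                           # number of interior nodes
    off = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    A = np.diag((1 + r) * np.ones(m)) - 0.5 * r * off   # implicit half
    B = np.diag((1 - r) * np.ones(m)) + 0.5 * r * off   # explicit half
    u = np.zeros((nt, nx))               # u[0] = 0: zero initial condition
    for j in range(1, nt):
        rhs = B @ u[j - 1, 1:-1] + dt * (k * u[j - 1, 1:-1] ** 2 + f[1:-1])
        u[j, 1:-1] = np.linalg.solve(A, rhs)
    return u

rng = np.random.default_rng(0)
f_T = rng.uniform(0.0, 1.0, nx)          # random forcing f_T ~ U(0, 1)
u_T = solve_dr(f_T)                      # the one-shot training solution
```

Each space-time node of $u_T$ then supplies one local training pair for $\tilde{\mathcal{G}}_1$ or $\tilde{\mathcal{G}}_2$, exactly as in the Poisson example.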
\begin{wrapfigure}{r}{0.47\textwidth}
\centering
\vspace{-2em}
\includegraphics[width=7cm]{dr_domain.png}
\vspace{-2em}
\caption{\textbf{Local domains $\tilde{\Omega}$ of the diffusion-reaction equation.} (\textbf{A}) Domain with 4 nodes. (\textbf{B}) Domain with 6 nodes.}
\vspace{-2em}
\label{fig:dr_domain}
\end{wrapfigure}
In this example, we consider different choices of the local domain $\tilde{\Omega}$ and the local solution operator $\tilde{\mathcal{G}}$. For the target $u_{i}^j$ at $(i\Delta x, j\Delta t)$, the simplest choice of $\tilde{\Omega}$ is $\{(i,j-1), (i-1,j),(i,j),(i+1,j) \}$ (Fig.~\ref{fig:dr_domain}A), and then the simplest choice of $\tilde{\mathcal{G}}$ is
$\tilde{\mathcal{G}}_1: [u_{i}^{j-1}, u_{i-1}^j, u_{i+1}^j, f_i^j] \mapsto u_i^j.$
We also consider a larger domain of 6 nodes as shown in (Fig.~\ref{fig:dr_domain}B), and choose
$\tilde{\mathcal{G}}_2: [u_{i-1}^{j-1}, u_{i}^{j-1}, u_{i+1}^{j-1}, u_{i-1}^j, u_{i+1}^j, f_i^j] \mapsto u_i^j$.
After we train the networks $\tilde{\mathcal{G}}_1$ and $\tilde{\mathcal{G}}_2$, we predict the solution for $f$ in the neighborhood of $f_0= 0.9\sin(2\pi x)$, and the solution $u_0 = \mathcal{G}(f_0)$ is used as the initial guess of the iteration. Instead of using a constant $\Delta f$, we randomly sample $\Delta f$ from a mean-zero Gaussian random field (GRF): $\Delta f \sim \mathcal{GP}(0, k(x_1, x_2))$, where the covariance kernel $k(x_1,x_2)=\sigma^2 \exp(-\|x_1-x_2\|^2/2l^2)$ has a standard deviation $\sigma$ and a correlation length $l$. Examples of some random $f$ with different $\sigma$ and $l$ are shown in Fig.~\ref{fig:f_example}.
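Sampling $\Delta f$ from this GRF amounts to a Cholesky factorization of the kernel matrix (a NumPy sketch; the tiny jitter added to the diagonal is a standard numerical-stability device, not part of the model):

```python
import numpy as np

# Sampling Delta f from a mean-zero GRF with the squared-exponential
# kernel k(x1, x2) = sigma^2 * exp(-|x1 - x2|^2 / (2 l^2)).
def sample_grf(x, sigma, length, n_samples, rng):
    d = x[:, None] - x[None, :]
    K = sigma**2 * np.exp(-d**2 / (2.0 * length**2))
    K += 1e-10 * np.eye(len(x))          # tiny jitter for stability
    L = np.linalg.cholesky(K)            # K = L L^T
    return (L @ rng.standard_normal((len(x), n_samples))).T

x = np.linspace(0.0, 1.0, 101)
rng = np.random.default_rng(0)
df = sample_grf(x, sigma=0.1, length=0.1, n_samples=100, rng=rng)
f_new = 0.9 * np.sin(2 * np.pi * x) + df   # 100 test inputs f = f0 + df
```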
\begin{figure}[htbp]
\centering
\includegraphics[height=3cm]{figs/L2_error_G1_G2.png}
\vspace{-1em}
\caption{\textbf{Learning the diffusion-reaction equation.} (\textbf{A}) $L^2$ relative errors of $\tilde{\mathcal{G}}_1$ and $\tilde{\mathcal{G}}_2$ tested on 100 random $\Delta f$ sampled from GRF of $\sigma=0.1$ and $l=0.1$. (\textbf{B}) $\sigma=0.3$. (\textbf{C}) $\sigma=1$. The shaded band is one standard deviation.}
\label{fig:dr_error}
\end{figure}
We first test $\tilde{\mathcal{G}}_1$ and $\tilde{\mathcal{G}}_2$ on 100 random $\Delta f$ sampled from a GRF with $\sigma=0.1$ and $l=0.1$. The $L^2$ relative error of $\tilde{\mathcal{G}}_1$ is more than one order of magnitude larger than the error of $\tilde{\mathcal{G}}_2$ (Fig.~\ref{fig:dr_error}A).
If we increase the magnitude of $\Delta f$ to $\sigma=0.3$ or 1, $\tilde{\mathcal{G}}_2$ always performs better than $\tilde{\mathcal{G}}_1$ (Figs.~\ref{fig:dr_error}B and C). We note that when $\Delta f$ is sampled from a GRF with $\sigma=1$, $f$ and $f_0$ are significantly different (Fig.~\ref{fig:f_example}C), but $\tilde{\mathcal{G}}_2$ can still achieve an $L^2$ relative error of 0.3\% in about 200 iterations, which demonstrates the robustness and generalizability of our proposed method.
\section{Conclusion}
In this study, we propose, to the best of our knowledge, the first one-shot method to learn solution operators from only one PDE solution.
In the future, we will carry out further validation on different types of PDEs, extend the method to unstructured meshes, and improve Algorithm~\ref{alg:predict} for faster convergence and better computational efficiency.
\label{Sec:Intro}
\input{sect_1.tex}
\section{Single-top Processes at the LHC}
\label{Sec:Processes}
\input{sect_2.tex}
\section{Calculation}
\label{Sec:Calc}
\input{sect_3.tex}
\section{Results}
\label{Sec:Results}
\input{sect_4.tex}
\section{Conclusions}
\label{Sec:Conclusions}
\input{sect_5.tex}
\section*{Acknowledgements}
\input{acknowledgements.tex}
\input{bibl.tex}
\input{figures.tex}
\end{document}
\subsection{LHC at 7 TeV}
The total cross section and asymmetries for t- and s-channel are given in Tabs.~\ref{tab:7TeV-t}
and~\ref{tab:7TeV-s} respectively.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|}
\hline
& $\sigma$(pb) & L-R Asymmetry (1-$\delta$) \\
\hline
Born & 35.44 & $\delta$=4.7327$\times$10$^{-7}$ \\
\hline
LS2 & 35.29 ($K$=0.996) & $\delta$=4.7515$\times$10$^{-7}$ (1-$K$=1.87$\times$10$^{-9}$) \\
SPS1 & 35.33 ($K$=0.997) & $\delta$=4.7353$\times$10$^{-7}$ (1-$K$=2.6$\times$10$^{-10}$) \\
\hline
\end{tabular}
\caption{Numerical results for t-channel production at 7 TeV.}
\label{tab:7TeV-t}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $\sigma$(pb) & L-R Asymmetry & F-B Asymmetry \\
\hline
Born & 2.061 & 0.6777 & 0.598049 \\
\hline
LS2 & 2.086 ($K$=1.0121) & 0.6796 ($K$=1.0028) & 0.598018 ($K$=0.999948) \\
SPS1 & 2.062 ($K$=1.0005) & 0.6783 ($K$=1.0009) & 0.598046 ($K$=0.999995) \\
\hline
\end{tabular}
\caption{Numerical results for s-channel production at 7 TeV.}
\label{tab:7TeV-s}
\end{table}
As can be seen, the NLO corrections to the total cross sections in the two channels are opposite in sign: in the t-channel case the one-loop corrections reduce the cross section, while in the s-channel there is an enhancement; however, the corrections are 1\% or less in both benchmarks considered (in line with the results from the mSUGRA scans).
The main difference between the two channels is of course the value of the cross section, $\sigma$. Focusing on the t-channel, which has the larger $\sigma$, we find that, with an integrated luminosity of 1 fb$^{-1}$, the LHC will produce $\approx$35000 events per year; therefore, the detection of any deviation from the SM prediction due to the SUSY EW and QCD one-loop corrections, at least in the early stages of the experiment, will indeed be very challenging.
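The event estimate and the quoted $K$-factors follow from simple arithmetic, $N = \sigma \mathcal{L}$ and $K = \sigma_{\rm NLO}/\sigma_{\rm Born}$ (a quick numerical check of the numbers in Tabs.~\ref{tab:7TeV-t} and~\ref{tab:7TeV-s}; no acceptance or efficiency factors are included):

```python
# Event yield from cross section and integrated luminosity: N = sigma * L.
sigma_t_born = 35.44            # pb, t-channel Born cross section at 7 TeV
lumi = 1.0e3                    # 1 fb^-1 expressed in pb^-1
n_events = sigma_t_born * lumi  # ~3.5e4 events, as quoted in the text

# K-factor: ratio of the one-loop-corrected to the Born cross section.
sigma_s_born, sigma_s_ls2 = 2.061, 2.086   # pb, s-channel at 7 TeV
k_factor = sigma_s_ls2 / sigma_s_born      # ~1.0121, as in the table
```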
The corrections to $A_{\rm{LR}}$ also differ between the two channels but, unfortunately, they are much larger in the s-channel, which, as we have just verified, has NLO corrections well beyond observability at 7 TeV. The same observability considerations apply to $A_{\rm{FB}}$ in the s-channel.
\subsection{LHC at 14 TeV}
\subsubsection{The t-channel case}
The total cross section and asymmetries for this production mechanism are given in Tab.~\ref{tab:14TeV-t}.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|}
\hline
& $\sigma$(pb) & L-R Asymmetry (1-$\delta$) \\
\hline
Born & 122.5 & $\delta$=4.3491$\times$10$^{-7}$ \\
\hline
LS2 & 122.0 ($K$=0.996) & $\delta$=4.3668$\times$10$^{-7}$ (1-$K$=1.77$\times$10$^{-9}$) \\
SPS1 & 122.1 ($K$=0.997) & $\delta$=4.3516$\times$10$^{-7}$ (1-$K$=2.5$\times$10$^{-10}$) \\
\hline
\end{tabular}
\caption{Numerical results for t-channel production at 14 TeV.}
\label{tab:14TeV-t}
\end{table}
In this case the total cross section is significantly larger than in the 7 TeV case, and therefore we expect that much more information can be extracted from experimental data. The $K$-factor for the integrated cross section is admittedly small, less than 1\% for both benchmarks, but, with the expected luminosity of 10 fb$^{-1}$, single-top production through this t-channel process will yield $\sim$1M events; it should therefore be possible to detect a discrepancy in the expected number of SM events that could be interpreted as a hint of NP and thus boost the search for a SUSY resonance. As far as $A_{\rm LR}$ is concerned, the top quark will emerge almost completely left-polarised, yet the detection of a one-loop induced discrepancy in the asymmetry will again be unlikely, the correction being of one part in a billion or less.
The analysis of differential observables is now mandatory to better understand the origin of the
corrections to the integrated cross section and Left-Right asymmetry.
In Fig.~\ref{fig:sigma-t-channel-14TeV} we have collected our results for the differential distributions of the cross section. The corrections are always bigger in the LS2 scenario than in the SPS1a case, and in the transverse momentum distribution they can reach the $-10\%$ level at high values of the top transverse momentum $p_t$. However, the differential cross section is quite small in this limit, of the order of 0.01 pb/TeV, and therefore observing large one-loop corrections for events with high top transverse momentum, while possible, will be quite difficult. The one-loop corrections in the other differential distributions are always quite small, below the 1\% level.
In Fig.~\ref{fig:ALR-t-channel-14TeV} the one-loop corrections to the differential distributions for
the Left-Right asymmetry are shown. Again, the corrections are generally bigger in the LS2
scenario than in the SPS1a case and can reach significant values for high top $p_t$, of $-8\%$ or so. The $p_t$ distribution has another
interesting feature: the asymmetry is bigger in the high top $p_t$ region, where, however, one-loop
corrections are much less than 1\%.
\subsubsection{The s-channel case}
The total cross section and asymmetries for this production mechanism are given in Tab.~\ref{tab:14TeV-s}.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $\sigma$(pb) & L-R Asymmetry & F-B Asymmetry \\
\hline
Born & 5.138 & 0.6861 & 0.53665 \\
\hline
LS2 & 5.206 ($K$=1.0132) & 0.6883 ($K$=1.0032) & 0.53695 ($K$=1.00056) \\
SPS1 & 5.145 ($K$=1.0014) & 0.6869 ($K$=1.0012) & 0.53676 ($K$=1.00020) \\
\hline
\end{tabular}
\caption{Numerical results for s-channel production at 14 TeV.}
\label{tab:14TeV-s}
\end{table}
Single-top events from the s-channel at the LHC will be significantly more numerous than at the Tevatron, which will allow a much more precise analysis of the process and of the deviations in related observables due to new physics.
The inclusive one-loop corrections we have obtained are at most $\sim$1\% in the LS2 scenario, but this may be enough to allow a small observable difference in the expected number of events. In contrast, the Left-Right asymmetry receives NLO corrections of the order of 0.1\% in both scenarios considered (here, unlike the general case, the effect of pure SUSY corrections is almost completely negligible). The same result holds for the one-loop corrections to the Forward-Backward asymmetry: the $K$-factor is very small and the effect is practically undetectable, the s-channel cross section being of a few pb.
The differential distributions for the cross section are shown in
Fig.~\ref{fig:sigma-s-channel-14TeV}. In the s-channel case there are significant differences
between the two scenarios considered: the corrections for LS2 are higher than those for SPS1a in
the regions where the contributions to the total cross section are bigger, and this is the reason
for the big difference in the $K$-factors for the integrated cross section in the two scenarios.
The same feature can be observed for the Left-Right asymmetry distributions,
Fig.~\ref{fig:ALR-s-channel-14TeV}, where, however, the enhancement of the one-loop corrections from
the LS2 scenario is milder and therefore the differences in the integrated asymmetry are smaller.
The corrections to differential quantities in the Forward-Backward asymmetry are qualitatively and quantitatively very similar to those for the Left-Right asymmetry, as can be seen in Fig.~\ref{fig:AFB-s-channel-14TeV}; thus the same comments as above apply in this case.
Considering all the differential distributions for the s-channel process, we notice that they do not differ much between the two benchmarks and, as a general feature, the corrections reach their maximum in the invariant mass distribution for $M_{\rm inv}\gtrsim 700$~GeV, in the transverse momentum distribution for $p_T\gtrsim 300$~GeV, and in the top rapidity for $y_t \gtrsim 1.5$.\\
To better understand the origin of the corrections and why there are peaks at certain values of $M_{\rm inv}$, $p_T$ and $y_t$, we have computed (here, limited to the s-channel) the SUSY EW and QCD corrections
for the cross section separately and plotted the related differential quantities in
Figs.~\ref{fig:EWQCD-s-channel-LS2} and~\ref{fig:EWQCD-s-channel-SPS1} for the two benchmark scenarios. It is possible to see that
the corrections are dominated by
the SUSY QCD contribution, while the EW counterpart
provides small
contributions to the total corrections. In both benchmark scenarios, however, the EW contribution shows
peculiar features in the invariant mass distribution, given by peaks and troughs in the correction,
and a closer inspection of these threshold effects reveals that they are located in correspondence with $M_{\chi_i^\pm}^2$. Therefore, charginos could play an interesting role in the determination of
SUSY effects in s-channel single top production processes. Going back to the dominant QCD
contribution, the only parameter entering the QCD correction alone is the gluino mass ($m_{\tilde g} = 607$~GeV in SPS1a and $m_{\tilde g} = 392$~GeV in LS2); thus we can argue that the one-loop corrections are mostly sensitive to this parameter.
Smaller gluino masses shift the SUSY QCD peak in the corrections towards regions where the differential distributions have higher values, and therefore the integrated quantities will be affected more by the NLO corrections. This means that observing larger pure SUSY one-loop corrections would point towards scenarios with a lighter gluino, and vice versa.\\
To conclude this section, we are aware that recent measurements have slightly changed the value of the top mass~\cite{topmass}. Nevertheless, we have verified that any change (within a reasonable range) in this value has practically no effect on the one-loop corrections, as can be seen in Fig.~\ref{fig:topmass}.
The last two decades have witnessed the great success of the concept of topology applied to condensed matter physics, which not only enriches our knowledge of quantum phases of matter but also enables unique technical applications based on topological materials. In mathematics, topology is used to classify geometric shapes; for example, two shapes are topologically inequivalent if they cannot be continuously deformed into each other. Similarly, if the wave function of a quantum many-body system cannot be adiabatically connected to its trivial atomic limit, the system is classified as topologically nontrivial and characterized by the emergence of robust boundary states \cite{Hasan:2010aa,Qi:2011aa}. The integer quantum Hall insulator is a paradigmatic example of topologically nontrivial systems \cite{Klitzing:1980aa}, where the precisely quantized Hall conductance has a topological origin characterized by the Thouless-Kohmoto-Nightingale-de Nijs number or Chern number \cite{Thouless:1982aa}. In 2005, Kane and Mele found that spin-orbit coupling can convert graphene from a semimetal into a quantum spin Hall insulator \cite{Kane:2005aa}, a spinful version of Haldane's model \cite{Haldane:1988aa}. This discovery triggered widespread exploration of topological insulators in condensed matter systems. To date, the topological concept has been generalized to various electronic systems, such as insulators \cite{Hasan:2010aa,Qi:2011aa,Chiu:2016aa}, metals/semimetals \cite{Burkov:2016aa,Bansil:2016aa,Armitage:2018aa}, and superconductors \cite{Qi:2011aa,Alicea:2012aa,Sato:2017aa}, where the presence of a quasiparticle band gap, direct, indirect, or curved, is indispensable for defining the topology. The establishment of databases of topological materials \cite{Bradlyn:2017aa,Tang:2019aa,Vergniory:2019aa,Vergniory:2021aa} has further completed the band theory of solids.
Beyond the electronic systems, topological states have been unveiled in photonic crystals \cite{Lu:2014aa,Ozawa:2019aa}, phononic systems \cite{Ma:2019aa,Liu:2020aa}, and non-Hermitian systems \cite{Yao:2018aa,Gong:2018aa,Bergholtz:2021aa}. Moreover, the concept of topology has been recently extended to higher orders \cite{Schindler:2018aa,Khalaf:2018aa,Kim:2020aa,Xie:2021aa}, keeping this field attractive.
In analogy to insulators, if the wave function of a superconductor cannot be adiabatically connected to the trivial Bose-Einstein condensate of Cooper pairs, the superconductor is classified as topologically nontrivial \cite{Qi:2011aa}. The corresponding boundary state of a topological superconductor (TSC) is a superposition of particle and hole states, and therefore can be utilized to realize Majorana fermion \cite{Qi:2011aa,Alicea:2012aa}. Majorana fermion is its own antiparticle named after Ettore Majorana who deduced the real solutions of the Dirac equation \cite{Majorana:2008aa}. In 2000, Read and Green demonstrated that a two-dimensional (2D) spinless chiral $p$-wave superconductor can harbor Majorana zero mode (MZM) around a quantized magnetic vortex and chiral Majorana edge modes at the boundaries \cite{Read:2000aa}. The 1D counterpart of a $p$-wave superconductor or the Kitaev chain was shown to support unpaired Majorana fermions at the two ends of the chain \cite{Kitaev:2001aa}. The operator for exchanging two MZMs bounded by magnetic vortices possesses a necessary form of irreducible unitary matrix \cite{Read:2000aa,Ivanov:2001aa}, implying MZMs are subject to non-Abelian statistics \cite{Nayak:2008aa}. Because an ordinary fermion can be formally decomposed into two Majorana fermions, a pair of well separated Majorana fermions can be utilized to characterize a quantum state. Such a non-local way of information storage has an exceptional advantage of being immune to local noises, making Majorana fermion an ideal building block for realizing fault-tolerant quantum computing \cite{Nayak:2008aa,Sarma:2015aa}.
Motivated by the promising technical applications of Majorana fermions, tremendous efforts have been devoted to realizing TSC in realistic material systems \cite{Qi:2011aa,Alicea:2012aa,Leijnse:2012aa,Beenakker:2013aa,Stanescu:2013aa,Elliott:2015aa,Sato:2017aa,Li:2019aa,sharma:2022aa}. The topological property of a superconductor is characterized by the phase winding of the order parameter around its Fermi surface \cite{Qi:2011aa,Alicea:2012aa}, which suggests that odd-parity spin-triplet superconductors are a promising platform for realizing TSC \cite{Ivanov:2001aa}. To date, few materials show signatures of spin-triplet pairing, and various artificial schemes have been proposed to realize TSC. Depending on the way nontrivial phase winding is introduced into the superconducting order parameter, recipes for realizing TSC can be classified into three distinct categories: (i) real-space superconducting proximity effect-induced TSC; (ii) reciprocal-space superconducting proximity effect-induced TSC; (iii) intrinsic TSC.
The scheme of realizing TSC via real-space proximity effect is first proposed by Fu and Kane in 2008 \cite{Fu:2008aa}. In their proposal, a heterostructure consisting of a 3D topological insulator and a conventional $s$-wave superconductor is shown to be an effective chiral $p$-wave superconductor. This idea has been experimentally explored in the Bi$_2$Te$_3$/NbSe$_2$ heterostructure \cite{Xu:2014aa,Xu:2015aa,Sun:2016aa}, where signature of Majorana fermion was revealed in the spin-polarized scanning tunneling spectroscopy measured around a magnetic vortex \cite{Sun:2016aa}. This real-space proximity scheme has been generalized to heterostructure comprised of a 2D or 1D semiconductor with spin-orbit coupling proximity coupled to spin-singlet superconductors \cite{Sau:2010aa,Lutchyn:2010aa,Oreg:2010aa,Alicea:2011aa,Zhang:2013aa,Lutchyn:2018aa,Prada:2020aa}.
For reciprocal-space proximity or band proximity effect-induced TSC, typical examples are doped 3D topological insulators \cite{Hor:2010aa,Sasaki:2011aa,Levy:2013aa,Liu:2015aa} and iron-based superconductors \cite{Xu:2016aa,Shi:2017aa,Zhang:2017aa,Zhang:2018ab,Liu:2018aa,Hao:2018aa,Zhang:2019aa,Kong:2019aa,Wang:2020aa,Li:2022aa}, where superconductivity arising from the bulk bands is proximity coupled to the topological surface states, mimicking effective 2D chiral $p$-wave superconductors. Intrinsic TSC means that superconductivity and nontrivial topology arise from the same electronic states of the same material \cite{Elliott:2015aa,Sato:2017aa,sharma:2022aa}. There are increasing numbers of materials showing signatures of intrinsic topological superconductivity, such as Sr$_2$RuO$_4$ \cite{Maeno:1994aa}, UTe$_2$ \cite{Jiao:2020aa}, Pb$_3$Bi \cite{Qin:2009aa}. In addition to the three primary schemes, second-order TSCs are also proposed in 3D and 2D systems \cite{Wang:2018ab,Wang:2018ac,Hsu:2018aa,Zhu:2019aa,Wu:2020aa}, which are characterized by Majorana hinge and corner modes, respectively.
Since TSC is uniquely characterized by Majorana boundary states, the following measurements are frequently involved in experimental demonstration of TSC. (i) Detection of zero bias conductance peaks (ZBCPs). The MZM is expected to emerge around a magnetic vortex core in a TSC, giving rise to a conductance peak in the scanning tunneling spectroscopy at zero bias voltage \cite{Law:2009aa,Flensberg:2010aa,Xu:2015aa,Sun:2016aa}. Nevertheless, the emergence of ZBCP only serves as a preliminary screening in pursuing TSC because other effects, such as in-gap Caroli-de Gennes-Matricon states induced by material imperfection \cite{Caroli:1994aa}, can also result in a conductance peak in the scanning tunneling spectroscopy. Although the MZM-induced ZBCP has been theoretically shown to exhibit a quantized value of $2e^2/h$, the experimental detection of this quantum effect remains highly controversial with discouraging or encouraging reports \cite{Zhang:2018aa,Chen:2019aa,Zhu:2020aa}. (ii) Measurements of thermal conductivity and spin current. Due to its superconducting nature, Majorana boundary states are hard to be directly detected via conventional electric transport measurements. However, the thermally insulating bulk state of a TSC makes it possible to evidence Majorana boundary states via thermal transport \cite{Wang:2011aa,Nomura:2012aa,Shiozaki:2013aa}. In particular, the thermal Hall conductance of a TSC has been shown to exhibit a quantized value, manifesting as quantized thermal Hall effect \cite{Beenakker:2016aa}. Moreover, a spin current associated with heat current can be detected if the boundary states possess spin polarization \cite{Sato:2017aa}. (iii) Measurements of anomalous Josephson effect. The current-phase relationship in a Josephson junction that involves TSC has been proposed to exhibit anomalous behaviors \cite{Beenakker:2013aa,Sato:2017aa}. 
For example, a junction between two 1D TSCs that contain a pair of Majorana fermions leads to the ``fractional'' Josephson effect \cite{Kwon:2003aa,Fu:2009aa,Lutchyn:2010aa}, characterized by $4\pi$-periodicity in the supercurrent phase difference \cite{Kitaev:2001aa,Jiang:2011aa}. Such an exotic behavior has been recognized as an important signature of topological superconductivity \cite{Rokhinson:2012aa,Jose:2012aa,Lee:2014ab,Wang:2018ad}. (iv) Angle-resolved photoemission spectroscopy (ARPES). For the real-space/reciprocal-space proximity effect-induced TSC, the Dirac-cone-type surface/edge states are expected to be fully gapped, which can be directly observed via high-resolution ARPES \cite{Wang:2012ab,Zhang:2018ab}. (v) Measurements of spin texture. The surface/edge current of an intrinsic spin-triplet TSC is one of its key characteristics \cite{Kallin:2016aa}.
In this review, we summarize recent progress in the pursuit of intrinsic TSCs, focusing on 2D candidates and their normal-state band topology. Intrinsic TSC is commonly associated with unconventional pairing that is sensitive to the quality of the hosting material. Recent advances in fabrication techniques make it possible to realize highly crystalline 2D superconductors, providing the necessary conditions for realizing and studying intrinsic topological superconductivity. In addition to topological properties, 2D superconductors exhibit many other intriguing properties, such as oscillation of the critical temperature due to the quantum size effect \cite{Guo:2004aa,Eom:2006aa,Qin:2009aa}, the Berezinskii--Kosterlitz--Thouless transition \cite{Berezinskii:1971aa,Berezinskii:1972aa,Kosterlitz:1972aa}, tunable superconductor-metal-insulator quantum phase transitions \cite{Reyren:2007aa,Caviglia:2008aa,Saito:2015aa}, and in-plane critical magnetic fields far beyond the Pauli limit \cite{Lu:2015aa,Saito:2016aa,Xi:2016aa}. These properties have been thoroughly reviewed in ref.~\cite{Saito:2016ab}; the present review complements ref.~\cite{Saito:2016ab} to give a fuller account of the properties of 2D superconductors. The rest of this review is organized as follows. Sec.~\ref{sec:proximity} briefly introduces the basic ideas and some representative examples of realizing TSC via real- and reciprocal-space superconducting proximity effects. Sec.~\ref{sec:intrinsic} reviews candidate materials that harbor intrinsic topological superconductivity. Sec.~\ref{sec:coexist} summarizes recently discovered 2D materials in which superconductivity and band topology coexist. Sec.~\ref{sec:tunable} discusses three dominant tuning schemes, namely strain, gating, and ferroelectricity, for acquiring tunable TSC. Sec.~\ref{sec:conclusion} concludes with a perspective on the application of TSC and Majorana fermions in quantum computation.
\section{TSC induced by proximity effect }
\label{sec:proximity}
The topological property of a superconductor is characterized by the phase winding of superconducting order parameter around the Fermi surface \cite{Qi:2011aa,Alicea:2012aa}.
Within mean-field theory, the superconducting order parameter is defined as
\begin{equation}
\Delta(\bm{k}) = -\sum_{\bm{k}'} V(\bm{k},\bm{k}') \langle c_{-\bm{k}'} c_{\bm{k}'}\rangle,
\label{eq:gapequation}
\end{equation}
where $V(\bm{k},\bm{k}')$ denotes the interaction matrix, $c_{\bm{k}}$ ($c^{\dagger}_{\bm{k}}$) is the electron annihilation (creation) operator, and spin and other degrees of freedom are implied. It is obvious from Eq.~(\ref{eq:gapequation}) that both the interaction matrix $V(\bm{k},\bm{k}')$ and the single-particle wave functions contribute to the phase of $\Delta(\bm{k})$. For a proximity effect-induced TSC, the pairing glue and the phase winding of $\Delta(\bm{k})$ originate from different, yet proximity-coupled, electronic states. In this section, we briefly review some representative examples of proximity effect-induced TSC, further distinguishing the proximity effects into real-space and reciprocal-space ones.
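To make the phase winding concrete, the following minimal numerical sketch (an illustration added here, not taken from the cited works) verifies that a chiral $p$-wave order parameter $\Delta(\bm{k}) = \Delta_0 (k_x + i k_y)$ accumulates a $2\pi$ phase, i.e., a winding number of one, around a circular Fermi surface:

```python
import numpy as np

# Chiral p-wave order parameter Delta(k) = Delta0 * (kx + i*ky);
# track its phase along a circular Fermi surface of radius kF.
delta0, kF = 1.0, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 401)
kx, ky = kF * np.cos(theta), kF * np.sin(theta)
delta = delta0 * (kx + 1j * ky)

# Accumulated (unwrapped) phase after one loop around the Fermi surface.
phase = np.unwrap(np.angle(delta))
winding = (phase[-1] - phase[0]) / (2.0 * np.pi)
print(round(winding))  # 1: the order parameter winds by 2*pi
```

The same diagnostic applied to an $s$-wave gap (constant $\Delta$) would return zero, which is the sense in which the phase winding distinguishes topological from trivial pairing.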
\subsection{Real-space proximity effect-induced TSC}
\label{sec:real-proximity}
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{Fig1.pdf}
\caption{(Color online) (a) Schematic of a 3D topological insulator/$s$-wave superconductor heterostructure. (b) Proximity effect-induced superconducting gap on the Dirac-type topological surface state. (c) A single magnetic vortex measured by zero-bias STS on the surface of Bi$_2$Te$_3$/NbSe$_2$ heterostructure. (d) Line cut along the direction marked in (c), where the zero-bias peak at the vortex center is a signature of a MZM. (c) and (d) are reprinted from ref.~\cite{Xu:2015aa} with permission from American Physical Society.}
\label{fig:figure1}
\end{figure}
In 2008, Fu and Kane showed that an effective chiral $p$-wave superconductor can be realized at the interface between a 3D topological insulator and an $s$-wave superconductor \cite{Fu:2008aa}, as schematically depicted in Fig.~\ref{fig:figure1}(a). Superconductivity is induced in the topological surface states via the proximity effect, where the $2\pi$ phase winding of the order parameter around the Fermi surface arises from the inherent wave function effect of the Dirac-like bands shown in Fig.~\ref{fig:figure1}(b). The authors further demonstrated that a quantum magnetic vortex can harbor a MZM \cite{Fu:2008aa}. Motivated by this idea, Bi$_2$Se$_3$ thin films have been successfully grown on the conventional $s$-wave superconductor NbSe$_2$, and the coexistence of topological surface states and superconductivity has been observed on the surface of this heterostructure \cite{Wang:2012ab}. Figs.~\ref{fig:figure1}(c)-(d) show scanning tunneling microscopy/spectroscopy (STM/STS) measurements of a single magnetic vortex on the surface of Bi$_2$Te$_3$/NbSe$_2$ \cite{Xu:2014aa,Xu:2015aa}, where the symmetric zero-bias peak around the vortex center is a signature of a MZM \cite{Xu:2015aa}. Spin-selective Andreev reflections are detected at the zero-bias peaks using spin-polarized STM \cite{Sun:2016aa}, consistent with the theoretical prediction of the tunneling spectrum around a MZM \cite{He:2014aa}.
Similar ideas have been extended to heterostructures comprised of a semiconductor with spin-orbit coupling and a conventional superconductor \cite{Sau:2010aa,Lutchyn:2010aa,Oreg:2010aa}. Here a Zeeman field is essential to break time-reversal symmetry, leaving a single Fermi surface within the energy window of the pairing potential \cite{Alicea:2012aa,Sato:2017aa}. The Zeeman field can be introduced into the heterostructure via the magnetic proximity effect \cite{Sau:2010aa}, magnetic dopants \cite{Liu:2009aa,Yu:2010aa,Zhu:2011aa,Qin:2014aa}, or an external magnetic field \cite{Alicea:2010aa,Lutchyn:2010aa,Lutchyn:2011aa}. The last approach invokes a large Land\'e $g$ factor of the semiconductor \cite{Alicea:2010aa}. Because 2D superconductivity is easily destroyed by the orbital effect of perpendicular magnetic fields, 1D semiconductor/superconductor heterostructures that mimic the Kitaev chain have attracted more attention than their 2D counterparts \cite{Alicea:2011aa,Lutchyn:2018aa,Prada:2020aa}. Signatures of TSC and Majorana fermions have been unveiled in heterostructures consisting of conventional superconductors proximity coupled to 1D spin-orbit coupled semiconductor nanowires \cite{Mourik:2012aa,Rokhinson:2012aa,Das:2012aa,Albrecht:2016aa,Deng:2016aa,Nichele:2017aa,Lutchyn:2018aa}, or to magnetic atomic chains or islands \cite{Nadj-Perge:2014aa,Ruby:2015aa,Menard:2017aa,Palacio-Morales:2019aa,Kezilebieke:2020aa}. However, the experimental demonstration of Majorana fermions is far from definitive, in view of the recent controversies surrounding the existence of MZMs in nanowire/superconductor heterostructures \cite{Zhang:2018aa} and chiral Majorana edge modes in quantum anomalous Hall insulator-superconductor devices \cite{He:2017aa,Kayyalha:2020aa}.
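The Kitaev-chain physics invoked above can be illustrated with a self-contained numerical sketch (a textbook toy model with assumed parameters $t = \Delta = 1$, not a model of the experimental systems): diagonalizing the BdG Hamiltonian of an open chain shows a pair of near-zero-energy Majorana modes in the topological phase $|\mu| < 2t$ and a fully gapped spectrum otherwise.

```python
import numpy as np

def kitaev_spectrum(n, mu, t=1.0, delta=1.0):
    """Sorted |E| of the BdG spectrum of an open Kitaev chain with n sites."""
    h = np.zeros((2 * n, 2 * n))
    for i in range(n):
        h[i, i] = -mu            # particle block: chemical potential
        h[n + i, n + i] = mu     # hole block: -h^T
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = -t          # hopping (particles)
        h[n + i, n + i + 1] = h[n + i + 1, n + i] = t  # hopping (holes)
        # Antisymmetric p-wave pairing block: Delta_{i,i+1} = -Delta_{i+1,i}
        h[i, n + i + 1] = h[n + i + 1, i] = delta
        h[i + 1, n + i] = h[n + i, i + 1] = -delta
    return np.sort(np.abs(np.linalg.eigvalsh(h)))

# Topological phase (|mu| < 2t): a Majorana pair pinned near E = 0.
e_topo = kitaev_spectrum(40, mu=0.5)
# Trivial phase (|mu| > 2t): the spectrum is fully gapped.
e_triv = kitaev_spectrum(40, mu=3.0)
print(e_topo[0] < 1e-6, e_triv[0] > 0.1)  # True True
```

The exponentially small splitting of the zero modes reflects the overlap of the two Majorana end states, which decays with chain length.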
\subsection{Reciprocal-space proximity effect-induced TSC}
\label{subsec:reciprocal_proximity}
Integrating the essential ingredients of a topological insulator/superconductor heterostructure, namely Dirac-type surface states and superconductivity, into a single material constitutes the scheme of reciprocal-space (or band) proximity effect-induced TSC. This idea has been widely explored by doping carriers into topological insulators \cite{Hor:2010aa,Sasaki:2011aa,Levy:2013aa,Liu:2015aa}. For example, as shown in Figs.~\ref{fig:figure2}(a)-(b), Cu$_x$Bi$_2$Se$_3$ exhibits superconductivity below $T_c \sim 3.8$ K for $0.12\le x \le 0.15$ \cite{Hor:2010aa}. In this mild doping regime, ARPES measurements show that the topological surface states survive up to the Fermi level \cite{Wray:2010aa}. By invoking the proximity effect between the superconducting bulk states and the topological surface states, this material serves as a promising platform for exploiting topological superconductivity and Majorana fermions \cite{Sasaki:2011aa,Hsieh:2012aa,Bay:2012aa,Levy:2013aa,Kirshenbaum:2013aa,Asaba:2017aa}. Although the origin of the ZBCP observed in point-contact measurements is still under active debate \cite{Sasaki:2011aa,Kirzhner:2012aa,Levy:2013aa}, nuclear magnetic resonance (NMR) measurements of the Knight shift \cite{Matano:2016aa} and specific heat measurements \cite{Yonezawa:2017aa} show that the spin-rotational symmetry is spontaneously broken below $T_c$ in Cu$_x$Bi$_2$Se$_3$, suggesting spin-triplet pairing and topological superconductivity.
A similar band proximity effect has been studied in iron-based superconductors \cite{Xu:2016aa,Shi:2017aa,Zhang:2017aa,Zhang:2018ab,Liu:2018aa,Hao:2018aa,Zhang:2019aa,Kong:2019aa,Wang:2020aa}. A typical example is FeTe$_{1-x}$Se$_x$, whose crystal structure is shown in Fig.~\ref{fig:figure2}(c). This material possesses topologically nontrivial band structures characterized by Dirac-type surface states for $x \sim 0.5$ \cite{Wang:2015aa,Wu:2016aa,Xu:2016aa}. Spin-resolved ARPES measurements on FeTe$_{0.55}$Se$_{0.45}$ reveal an isotropic superconducting gap with a size of $\Delta \sim 1.8$ meV in the spin-momentum-locked surface states, caused by the band proximity effect \cite{Zhang:2018ab}, as schematically depicted in Fig.~\ref{fig:figure2}(d), strongly indicating topological superconductivity. Moreover, a robust ZBCP is detected by STS inside a surface magnetic vortex core \cite{Wang:2018aa,Machida:2019aa}, with a tunneling conductance close to the conductance quantum of $2e^2/h$ \cite{Zhu:2020aa}. A more complete review of realizing TSC and Majorana fermions in iron-based superconductors can be found in a separate review within this volume \cite{Ding:2022aa}. In view of their relatively high critical temperatures, iron-based superconductors serve as a superior condensed-matter platform for realizing and manipulating Majorana fermions.
\begin{figure}[H]
\centering
\includegraphics[scale=1]{Fig2.pdf}
\caption{(Color online) (a) Crystal structure of Cu$_x$Bi$_2$Se$_3$ realized by intercalating Cu atoms between quintuple layers of Bi$_2$Se$_3$. (b) Resistivity measured for $x = 0.12$, where the superconducting transition occurs at $T_c= 3.8$ K. An upper critical field of $\mu_0 H_{c2} \sim 1$ T is observed. (c) Crystal structure and Brillouin zone (BZ) of Fe(Te, Se). (d) Schematic of the proximity effect between superconducting bulk states and topological surface states. (a) and (b) are reprinted from ref.~\cite{Hor:2010aa} with permission from American Physical Society. (c) and (d) are reprinted from ref.~\cite{Zhang:2018ab} with permission from AAAS.}
\label{fig:figure2}
\end{figure}
\section{Candidate systems for intrinsic topological superconductivity}
\label{sec:intrinsic}
Superconductivity and the nontrivial topology of an intrinsic TSC emerge from the same electronic states. Intrinsic TSC is usually associated with unconventional pairing arising from strong electronic correlations. As illustrated by Eq.~(\ref{eq:gapequation}), if the many-electron correlation-dressed interaction matrix $V(\bm{k},\bm{k}')$ has an odd-parity structure with respect to $\bm{k}$ or $\bm{k}'$, the resulting superconducting state prefers spin-triplet pairing and $\Delta(\bm{k})$ possesses a $2\pi$ phase winding around the Fermi surface. Because candidates for intrinsic TSC are limited, in this section we review several representative examples ranging from 3D to 2D materials.
\subsection{Sr$_2$RuO$_4$}
Superconductivity was discovered in Sr$_2$RuO$_4$ in 1994 by Maeno \textit{et al} \cite{Maeno:1994aa}. Although this material shares a similar tetragonal crystal structure with high transition temperature (high-$T_c$) Cu-based superconductors, Sr$_2$RuO$_4$ exhibits a much lower critical temperature of $T_c \sim 1.5$ K \cite{Mackenzie:1998aa,Mackenzie:1998ab}. The normal state of Sr$_2$RuO$_4$ does not show long-range magnetic order. Nevertheless, strong ferromagnetic fluctuations are observed at low temperatures, likely favoring spin-triplet superconductivity \cite{Rice:1995aa,Mazin:1997aa}. Signatures of odd-parity spin-triplet pairing in Sr$_2$RuO$_4$ have been revealed in various types of experimental measurements, including: (i) NMR measurements show that the Knight shift remains constant \cite{Ishida:1998aa} or increases slightly \cite{Ishida:2015aa} (similar to the A phase of $^3$He \cite{Leggett:1975aa}) across $T_c$, suggesting that the spin susceptibility survives in the superconducting state, consistent with spin-triplet pairing; (ii) Spin relaxation is detected in muon spin rotation ($\mu$SR) measurements below $T_c$ \cite{Luke:1998aa}, indicating an internal magnetic field that breaks the time-reversal symmetry of the superconducting state; (iii) A nonzero magneto-optic Kerr rotation in the superconducting state of Sr$_2$RuO$_4$ is further evidence of time-reversal symmetry breaking \cite{Xia:2006aa}; (iv) Phase-sensitive measurements reveal that the zero-field critical current of a $90$-degree corner Josephson junction lies between the maximum and minimum critical currents achieved at finite magnetic fields \cite{Nelson:2004aa}, consistent with $p$-wave pairing; (v) Signatures of half-quantized vortices are observed in Sr$_2$RuO$_4$ in the presence of an oblique magnetic field \cite{Jang:2011aa}. 
These experimental observations singled out spin-triplet chiral $p$-wave pairing as the most likely pairing symmetry of superconducting Sr$_2$RuO$_4$. In particular, quasi-particle tunneling spectroscopy measurements found signatures of edge states in Sr$_2$RuO$_4$ below $T_c$, which were further shown to be consistent with chiral $p$-wave superconductivity \cite{Kashiwaya:2011aa}. Although these earlier experimental observations suggested that Sr$_2$RuO$_4$ is a chiral $p$-wave spin-triplet superconductor, the recently observed reduction of the Knight shift in the superconducting states of strained and unstrained Sr$_2$RuO$_4$ \cite{Pustogow:2019aa} casts serious doubt on the previously believed odd-parity spin-triplet pairing. In fact, the precise pairing symmetry of Sr$_2$RuO$_4$ is still highly debated. Recent thermodynamic \cite{Benhabib:2021aa}, ultrasound \cite{Ghosh:2021aa}, and muon spin relaxation \cite{Grinenko:2021aa} measurements showed signatures of a two-component order parameter in Sr$_2$RuO$_4$. Based on these experimentally revealed constraints, various even-parity spin-singlet pairing scenarios \cite{Gingras:2019aa,Romer:2019aa,Roising:2019aa,Suh:2020aa,Kivelson:2020aa} have also been proposed to understand the exotic superconducting properties of this material.
\subsection{Heavy-fermion material UTe$_2$}
Because spin-triplet pairing is commonly mediated by ferromagnetic fluctuations, heavy-fermion systems that contain $f$ electrons and possess both strong electronic correlations and magnetism are promising candidates for odd-parity spin-triplet superconductors. A typical heavy-fermion family that exhibits superconductivity is the uranium-based compounds, such as UPt$_3$ \cite{Stewart:1984aa}, UGe$_2$ \cite{Saxena:2000aa}, URhGe \cite{Aoki:2001aa} and UCoGe \cite{Huy:2007aa}. Various experimental signatures show the coexistence of superconductivity and ferromagnetic fluctuations in these materials \cite{Aoki:2019aa}. Recently, a superconducting transition temperature of $T_c\sim 1.6$ K has been observed in UTe$_2$ \cite{Ran:2019aa}, a new member of the uranium-based heavy-fermion family. Within the phase diagram of uranium-based heavy-fermion superconductors, the higher $T_c$ observed in UTe$_2$ compared with other family members is likely related to the fact that UTe$_2$ lies closer to the critical point of ferromagnetic fluctuations \cite{Ran:2019aa}.
Several types of experimental measurements show signatures of intrinsic topological superconductivity in UTe$_2$: (i) A temperature-independent NMR Knight shift across $T_c$ \cite{Ran:2019aa}; (ii) Extremely large anisotropic upper critical magnetic fields well in excess of the Pauli limit \cite{Aoki:2019ab}; (iii) Low-temperature magnetic behavior that manifests as a quantum critical ferromagnet with strong magnetic fluctuations \cite{Ran:2019aa}; (iv) A re-entrant superconducting phase at magnetic fields beyond 65 T \cite{Ran:2019ab}; (v) STS signatures of chiral in-gap states at step edges, resembling the Majorana boundary states of a TSC \cite{Jiao:2020aa}. Taken together, these experimental observations suggest that UTe$_2$ is a promising intrinsic spin-triplet chiral $p$-wave TSC.
\subsection{Monolayer Pb$_{3}$Bi alloys}
\label{subsec:PbBiRG}
As discussed at the beginning of Sec.~\ref{sec:proximity}, the nontrivial phase winding of the superconducting order parameter can arise either from the geometric phase of the single-particle wave functions or from many-electron correlation effects. For systems containing both effects, it is therefore essential to explore the interplay between them and how they mutually influence each other in realizing intrinsic TSC. In 2019, Qin \textit{et al} proposed that a monolayer of Pb$_3$Bi grown on Ge(111) can harbor unconventional chiral $p$-wave superconductivity by taking advantage of the interplay between electron correlation and geometric phase \cite{Qin:2019aa}.
Before this theoretical proposal, superconductivity had been observed in ultrathin Pb films consisting of two atomic layers \cite{Qin:2009aa}, and later in a single atomic Pb layer \cite{Zhang:2010aa}. Compared with the 3D bulk material, electronic screening in these 2D layers is dramatically reduced due to dimensional reduction, leading to an enhancement of electronic correlation. Moreover, the strong spin-orbit coupling of monolayer Pb$_3$Bi and the substrate-induced inversion symmetry breaking result in a non-vanishing geometric phase of the normal electronic states \cite{Qin:2019aa}.
The band structure of Pb$_3$Bi/Ge(111) is shown in Fig.~\ref{fig:figure3}(a), where an extremely large Rashba-type spin-orbit coupling splitting and a saddle-like band structure emerge around the Fermi level. The density of states exhibits a type-II van Hove singularity (VHS) \cite{Yao:2015aa} emerging from the Rashba-split band characterized by a non-vanishing geometric phase, which further contributes a phase term to the electron-electron interaction \cite{Shi:2019aa}.
Within the saddle-point patch approximation, an effective interacting model is developed. As depicted in Fig.~\ref{fig:figure3}(b), $\gamma_4$ is dressed by a phase factor $\phi_4$ originating from the geometric phase of the wave function. A set of coupled complex renormalization group (RG) flow equations is derived as \cite{Qin:2019aa}
\begin{equation}
\begin{aligned}
\frac{ \text{d} \gamma_1}{ \text{d}y}&=2(\eta-y)\gamma_1^2 -8\gamma_2\gamma_3-4y |\gamma_4|^2,\\
\frac{ \text{d} \gamma_2}{ \text{d}y}&= (\eta'_1-\eta_1-2)\gamma_2^2-4\gamma_1\gamma_3-2 \gamma_3^2+\eta'_1 |\gamma_4|^2,\\
\frac{ \text{d} \gamma_3}{ \text{d}y}&= -4(\gamma_1\gamma_2+\gamma_2\gamma_3)+(\eta'_1-\eta_1) \gamma_3^2+\eta'_1 |\gamma_4|^2, \\
\frac{ \text{d} \gamma_4}{ \text{d}y}&=-4y \gamma_1\gamma_4+2y (\gamma_4^*)^2+2 \eta'_1(\gamma_2+\gamma_3)\gamma_4,
\end{aligned}
\label{eq:equation3}
\end{equation}
where the relevant coefficients can be found in ref.~\cite{Qin:2019aa}. Solving these differential equations shows that a pairing instability occurs at a critical temperature $T_c \sim 4\Lambda e^{-y_c}$, where $\Lambda$ is the ultraviolet energy cutoff and $y_c$ is the critical RG flow time.
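The exponential dependence of $T_c$ on the critical flow time can be illustrated with a toy one-coupling analogue of such a runaway flow (a deliberately simplified sketch with arbitrary units, not the full coupled system of Eq.~(\ref{eq:equation3}), whose coefficients are given in ref.~\cite{Qin:2019aa}): for $d\gamma/dy = \gamma^2$, the coupling diverges at $y_c = 1/\gamma(0)$, and $T_c \sim 4\Lambda e^{-y_c}$ follows directly.

```python
import numpy as np

# Toy one-coupling runaway RG flow, d(gamma)/dy = gamma^2, which diverges
# at y_c = 1/gamma(0); the coupled flow of the full problem behaves
# analogously near its pairing instability.
gamma0, lam = 0.2, 1.0     # initial coupling and UV cutoff (arbitrary units)
y, gamma, dy = 0.0, gamma0, 1e-4
while gamma < 1e6:         # integrate until the coupling diverges
    gamma += gamma**2 * dy
    y += dy
y_c = y                    # numerical critical flow time, ~ 1/gamma0 = 5
tc = 4.0 * lam * np.exp(-y_c)   # T_c ~ 4*Lambda*exp(-y_c)
print(abs(y_c - 1.0 / gamma0) < 0.05)  # True: matches the analytic y_c
```

The weaker the bare coupling $\gamma_0$, the longer the flow time $y_c$ and the exponentially smaller the resulting $T_c$, which is why the minima of $y_c$ at the stable fixed points discussed below translate into an enhanced critical temperature.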
As shown in Figs.~\ref{fig:figure3}(c)-(d), the renormalized geometric phase flows to three stable fixed points at $\pm 2\pi/3$ and $0$, preferring ($p_x\pm ip_y$)-wave and $f$-wave pairings, respectively. Given the robustness of the stable fixed points, two-thirds of the phase diagram shown in Fig.~\ref{fig:figure3}(d) can harbor chiral $p$-wave superconductivity. The pairing mechanism can arise either from electron-phonon interaction or from repulsive electron-electron interaction. Moreover, the critical RG flow time $y_c$ shown in Fig.~\ref{fig:figure3}(c) exhibits minima at the stable fixed points, suggesting an enhanced superconducting critical temperature. Overall, monolayer Pb$_3$Bi serves as a promising material for realizing 2D intrinsic chiral $p$-wave TSC.
\begin{figure}[H]
\centering
\includegraphics[scale=0.44]{Fig3.pdf}
\caption{(Color online) (a) Band structure and density of states of Pb$_3$Bi/Ge(111). The saddle point of the Rashba-split band is marked by a red arrow, and a type-II VHS emerges at $\sim$ 0.1 eV below the Fermi level. (b) Four distinct electron-electron scattering channels. A single monopole is schematically depicted at the BZ center, giving phase $\phi_4$ to $\gamma_{4}$. (c) RG flow of $\phi_4$ with stable fixed points highlighted by red dots. (d) Superconducting phase diagram spanned by $\gamma_4$. Red lines highlight the three stable fixed points that favor chiral $p_x \pm ip_y$- and $f$-wave pairing states. (a)-(d) are reprinted from ref. \cite{Qin:2019aa} with permission from Springer Nature.}
\label{fig:figure3}
\end{figure}
\subsection{Graphene-based systems}
Graphene is a single atomic layer of graphite, consisting of carbon atoms arranged on a honeycomb lattice. The low-energy band structure of graphene possesses a linear dispersion associated with Dirac-like electronic excitations \cite{Castro-Neto:2009aa}. By doping graphene to the VHS, it was proposed that chiral $d$-wave superconductivity can emerge from repulsive electron-electron interaction \cite{Nandkishore:2012aa}. In experiments, it is hard to reach the VHS of graphene through conventional electric gating. Superconductivity has been observed in monolayer graphene heavily doped with alkali adatoms, where both conventional electron-phonon interaction and pure electron-electron interaction have been invoked to explain the observed superconductivity \cite{McChesney:2010aa,Profeta:2012aa,Ludbrook:2015aa}.
By stacking two sheets of graphene with a relative twisting angle of $\sim$$1.1^{\circ}$ (the magic angle), Bistritzer and MacDonald found that the corresponding moir\'e mini bands are flat near the Fermi level \cite{Bistritzer:2011aa}, leading to VHS in the density of states. The quenched kinetic energy in magic-angle twisted bilayer graphene (MATBG) strongly enhances the electronic correlation. In 2018, Cao \textit{et al} discovered correlated insulating states at half filling in MATBG \cite{Cao:2018ab} and superconductivity upon slight doping away from half filling \cite{Cao:2018aa}. The phase diagram of MATBG resembles that of the Cu-based high-$T_c$ superconductors, where superconductivity emerges upon doping away from a Mott insulator \cite{Dagotto:1994aa}. In contrast to the cuprates, MATBG has the exceptional property of a low carrier density that is tunable via external electrostatic gating, making it an ideal platform for systematically studying strong correlation effects \cite{Yankowitz:2019aa,Lu:2019aa,Balents:2020aa,Stepanov:2020aa,Saito:2020aa,Wong:2020aa,Zondiner:2020aa,Liu:2021aa,Stepanov:2021aa,Saito:2021aa,Choi:2021aa,Choi:2021ab,Jaoui:2022aa}. To date, the underlying mechanism of superconductivity in MATBG remains under active debate; both conventional electron-phonon interaction-mediated $s$-wave pairing \cite{Wu:2018aa,Choi:2018aa,Lian:2019aa,Angeli:2019aa,Lewandowski:2021aa} and strong electron-electron interaction-driven unconventional pairing \cite{Guo:2018aa,Dodaro:2018aa,Liu:2018ab,Isobe:2018aa,Kennes:2018aa,Sherkunov:2018aa,Xie:2019aa,Kerelsky:2019aa,Choi:2019aa,Jiang:2019aa,Gonzalez:2019aa,Roy:2019aa,Huang:2019aa,Ray:2019aa,You:2019aa,Khalaf:2020aa,Qin:2021aa,Cea:2021aa,Huang:2021aa} have been theoretically proposed. In particular, theoretical studies have shown that superconductivity in MATBG is topologically nontrivial \cite{Xu:2018aa,Fidrysiak:2018aa,Chew:2021aa}.
Mirror symmetric magic-angle twisted trilayer graphene (MATTG) consists of three layers of graphene stacked in a mirror symmetric configuration with twisted top and bottom layers \cite{Park:2021aa}. MATTG possesses nearly identical flat bands to MATBG at a magic angle that differs from $1.1^{\circ}$ by a factor of $\sqrt{2}$ \cite{Khalaf:2019aa,Mora:2019aa,Carr:2020aa,Zhu:2020ab,Lei:2020aa}. The two systems share similar phase diagrams upon varying carrier density, including the patterns of broken flavor symmetries and the emergence of superconductivity away from the correlated insulating states \cite{Park:2021aa,Hao:2021aa}.
A remarkable difference between the MATBG and MATTG superconductors is their response to in-plane magnetic fields. The critical in-plane magnetic field of MATBG is compatible with the Pauli paramagnetic limit \cite{Cao:2021ab}, whereas superconductivity in MATTG survives to in-plane magnetic fields well in excess of this limit \cite{Cao:2021aa}. These distinct in-plane critical magnetic fields of MATBG and MATTG can be well explained by invoking spin-triplet pairing in both systems \cite{Qin:2021ab}, where the orbital effect that breaks Cooper pairing in MATBG is quenched by the inherent mirror symmetry of MATTG \cite{Qin:2021ab,Lake:2021aa}. These studies suggest that twisted graphene multilayers are promising candidates for harboring spin-triplet superconductivity.
Superconductivity was recently observed in electrically gated untwisted rhombohedral trilayer graphene (RTG) \cite{Zhou:2021aa} and Bernal-stacked bilayer graphene \cite{Zhou:2022aa}. RTG exhibits two superconducting domes upon varying the carrier density. The higher-$T_c$ dome emerges from a paramagnetic normal state with an annular Fermi sea \cite{Zhou:2021aa,Zhou:2021ab}, which has been shown to be able to harbor unconventional chiral $p$- or $d$-wave superconductivity \cite{Chatterjee:2021aa,Ghazaryan:2021aa,Cea:2022aa,Qin:2022aa}. The other superconducting dome emerges from a spin-polarized, valley-unpolarized half-metallic state, and survives to in-plane magnetic fields far beyond the Pauli limit, indicating spin-triplet pairing \cite{Zhou:2021aa}. In Bernal bilayer graphene, the superconducting phase transition occurs only at finite in-plane magnetic fields \cite{Zhou:2022aa}, an essential character of spin-triplet superconductivity. Overall, the (nearly) flat bands of twisted and untwisted graphene multilayers strongly enhance their electronic correlations, providing fertile ground for unconventional superconductivity and making them candidate systems for realizing 2D intrinsic TSC.
\section{Coexistence of superconductivity and topological bands}
\label{sec:coexist}
In this section, we review several recently proposed and/or discovered 2D materials showing coexistence of superconductivity and topological bands. Based on the principle of reciprocal-space proximity effect discussed in Sec.~\ref{subsec:reciprocal_proximity}, these materials are candidates for hosting 2D or 1D topological superconducting states.
\subsection{Monolayer Pb$_{3}$Bi }
In Sec.~\ref{subsec:PbBiRG}, monolayer Pb$_{3}$Bi grown on Ge(111) was shown to be an intrinsic 2D TSC at VHS band filling. By tuning the Fermi level to the Dirac point at the BZ corner, this material was also proposed to realize TSC via the band proximity effect, where superconductivity and topology arise from different electronic bands \cite{Sun:2021aa}. An effective tight-binding model was developed to describe the electronic structure of Pb$_3$Bi/Ge(111) \cite{Sun:2021aa}. A comparison of Fig.~\ref{fig:figure3}(a) and Fig.~\ref{fig:figure4}(a) shows that the Rashba bands obtained from density functional theory calculations are well reproduced by the model, including the giant Rashba-type splitting, the type-II VHS, and the Dirac-like bands around the BZ corner. By invoking conventional electron-phonon interaction-mediated spin-singlet pairing, as illustrated in Fig.~\ref{fig:figure4}(b), a phase transition from a topologically trivial superconducting state with Chern number $C = 0$ to a topological superconducting state with $C = -2$ occurs when the Bogoliubov--de Gennes (BdG) quasiparticle band gap closes. It was further shown that the $C = -2$ state is protected by mirror symmetry \cite{Sun:2021aa}. The lower panel of Fig.~\ref{fig:figure4}(c) shows the BdG quasiparticle bands of the ribbon structure depicted in the upper panel of Fig.~\ref{fig:figure4}(c). There are two chiral Majorana edge modes propagating along the same direction at each edge, consistent with the Chern number calculation.
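The Chern-number diagnosis of a superconducting phase transition can be reproduced for a generic two-band BdG model. The following sketch (a textbook lattice chiral $p$-wave toy model with assumed parameters, not the actual Pb$_3$Bi Hamiltonian of ref.~\cite{Sun:2021aa}) evaluates the Chern number of the lower quasiparticle band with the Fukui-Hatsugai-Suzuki lattice method and distinguishes the topological ($|C| = 1$) and trivial ($C = 0$) regimes of the chemical potential:

```python
import numpy as np

def chern_number(mu, t=1.0, delta=1.0, n=60):
    """Chern number of the lower band of H(k) = d(k).sigma with
    d = (Delta sin kx, Delta sin ky, -2t(cos kx + cos ky) - mu),
    computed via the Fukui-Hatsugai-Suzuki lattice method."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            h = (delta * np.sin(kx) * sx + delta * np.sin(ky) * sy
                 + (-2.0 * t * (np.cos(kx) + np.cos(ky)) - mu) * sz)
            u[i, j] = np.linalg.eigh(h)[1][:, 0]   # lower-band eigenvector
    # Gauge-invariant link variables; sum the Berry flux over all plaquettes.
    c = 0.0
    for i in range(n):
        for j in range(n):
            u1 = np.vdot(u[i, j], u[(i + 1) % n, j])
            u2 = np.vdot(u[(i + 1) % n, j], u[(i + 1) % n, (j + 1) % n])
            u3 = np.vdot(u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n])
            u4 = np.vdot(u[i, (j + 1) % n], u[i, j])
            c += np.angle(u1 * u2 * u3 * u4)
    return int(round(c / (2.0 * np.pi)))

# Topological for |mu| < 4t (mu != 0), trivial for |mu| > 4t.
print(abs(chern_number(mu=-2.0)), chern_number(mu=-5.0))  # 1 0
```

By the bulk-boundary correspondence, a phase with $|C| = N$ hosts $N$ chiral Majorana edge modes per edge, which is exactly the counting used for the $C = -2$ ribbon spectrum above.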
\begin{figure}[H]
\centering
\includegraphics[scale=0.47]{Fig4.pdf}
\caption{(Color online) (a) Rashba bands calculated from the tight-binding model. (b) BdG quasiparticle band gap (color scale) versus Zeeman field $V_z$ and chemical potential $\mu$ with pairing potential $\Delta = 0.3$ meV. (c) Quasiparticle bands of the ribbon structure depicted in the upper panel, where $\Delta = 0.3$ meV and $V_z = 1.5$ meV. Oscillation of chiral Majorana edge modes is caused by finite size effects \cite{Zhou:2008aa,Wada:2011aa}. (a)-(c) are reprinted from ref.~\cite{Sun:2021aa} with permission from American Physical Society.}
\label{fig:figure4}
\end{figure}
Experimental realization of TSC and chiral Majorana fermions in monolayer Pb$_{3}$Bi is highly promising for the following reasons: (i) Both the VHS and the K-point Dirac cone are close to the Fermi level and accessible via chemical doping or ionic liquid gating \cite{Ye:2010aa}; (ii) A pairing potential of magnitude $\Delta = 0.3$ meV is easily obtained, because the experimentally observed superconducting gaps of Pb thin films and related systems range from 0.3 meV to 1 meV \cite{Qin:2009aa,Zhang:2010aa,Sekihara:2013aa,Yamada:2013aa,Nam:2016aa}; (iii) The large Land\'e $g$-factor makes it possible to convert the system into a TSC by applying a small external magnetic field that does not completely suppress superconductivity. For example, taking $g\sim100$ \cite{Lei:2018aa}, the topological phase transition occurs at $V_z \sim \Delta$, amounting to $B_{\bot} = 0.1 $ T. This value is smaller than the critical magnetic field $B_{c\bot}$ of Pb thin films, which ranges from 0.15 T to 1 T \cite{Zhang:2010aa,Yamada:2013aa}. Overall, monolayer Pb$_3$Bi alloy is an appealing platform for realizing 2D TSC.
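The field estimate in (iii) can be checked with a two-line calculation (using the common convention $V_z = \tfrac{1}{2} g \mu_B B$ for the Zeeman energy; the numbers below simply reproduce the estimate quoted above):

```python
# Perpendicular field needed to reach the topological transition V_z ~ Delta,
# assuming the Zeeman energy convention V_z = (1/2) * g * mu_B * B.
mu_B = 5.7883818060e-5   # Bohr magneton in eV/T
g = 100.0                # Lande g-factor quoted in the text
delta = 0.3e-3           # pairing potential in eV (0.3 meV)
b_perp = 2.0 * delta / (g * mu_B)
print(round(b_perp, 2))  # 0.1 T, below the 0.15-1 T critical fields of Pb films
```

The margin between this transition field and the critical field of the superconductor is what makes the proposal experimentally viable: the field is strong enough to drive the topological transition yet weak enough to preserve superconductivity.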
In addition to the superconducting state, the structural, electronic, and topological properties of Pb$_3$Bi/Ge(111) have been systematically studied using first-principles calculations \cite{Li:2020aa}. Three different structural phases, labeled T$_1$, H$_3$ and T$_4$, were shown to be energetically nearly degenerate. Moreover, the electronic structures of the H$_3$ and T$_4$ phases are topologically nontrivial and can harbor the quantum spin Hall effect \cite{Li:2020aa}.
\subsection{2D CoX (X = As, Sb, Bi) }
Discoveries of high-$T_c$ superconductivity in Cu- and Fe-based superconductors have generated tremendous interest over the last decades \cite{Damascelli:2003aa,Paglione:2010aa,Si:2016aa}. Among the Fe-based superconductors, bulk FeSe possesses a relatively simple crystal structure and exhibits a critical temperature $T_c \sim 8 $ K at ambient pressure \cite{Hsu:2008aa}, which can be enhanced to 37 K under high pressure \cite{Medvedev:2009aa}. Importantly, it provides an elemental building block, namely the FeSe monolayer, which can be placed on suitable substrates \cite{Wang:2012aa,Peng:2014aa,Ding:2016aa} or stacked into superlattice structures \cite{Guo:2010aa,Lu:2015ab}, resulting in substantially enhanced $T_c$'s of up to tens of kelvin. Extensive studies have been carried out to search for potential high-$T_c$ superconductors beyond the Cu- and Fe-based families. A comprehensive project screened over 1000 candidate materials, but no high-$T_c$ superconducting material was identified beyond the Cu- and Fe-based families \cite{Hosono:2015aa}. In those earlier searches, only systems with intrinsically layered structures were considered. More recent studies have revealed that 2D materials with no layered bulk phase can also be stabilized in the monolayer or few-layer regime \cite{Zhu:2017aa,Lucking:2018aa}, effectively broadening the space of 2D materials as candidate high-$T_c$ superconductors.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Fig5.pdf}
\caption{(Color online) (a) and (b) Schematic atomic structures of NiAs-type bulk CoSb and PbO-type monolayered CoSb. (c) and (d) Electronic structures (left panel) and density of states (right panel) of freestanding CoSb and FeSe monolayers. (e) Most stable structure of CoSb monolayer on the SrTiO$_3$(001) substrate, where the Sb atoms sit right above the Ti atoms. (f) Four commonly considered magnetic configurations of CoSb/STO or FeSe/STO (top panel). Magnetic moment $M$ (red) and relative energy $\Delta E$ (blue) of the four magnetic configurations versus the on-site Hubbard $U$ for the two systems. This figure is reprinted from ref.~\cite{Ding:2020aa} with permission from American Physical Society.}
\label{fig:figure5}
\end{figure}
In 2020, Ding \textit{et al} \cite{Ding:2020aa} identified monolayered CoSb as an attractive candidate for harboring high-$T_c$ superconductivity. Their prediction was initially guided by the isovalency rule, namely, keeping the same number of valence electrons as FeSe but tuning other physical factors. Although the bulk structure of CoSb is hexagonal and non-layered, as shown in Fig.~\ref{fig:figure5}(a), its freestanding monolayer can be stabilized in the tetragonal structure depicted in Fig.~\ref{fig:figure5}(b). Particularly, monolayer CoSb and FeSe share an identical crystal structure and similar electronic bands, as illustrated in Figs.~\ref{fig:figure5}(c) and (d). For monolayer FeSe-based superconductors, the dominant pairing mechanism is still under active debate, with antiferromagnetic spin fluctuations \cite{Scalapino:2012aa,Chubukov:2012aa}, electron-phonon coupling (EPC) \cite{Lee:2014aa,Gerber:2017aa,Zhang:2019ab}, and the cooperative effect of the two frequently invoked \cite{Lee:2014aa,Lee:2018aa}. By employing the EPC strength as an indicator for superconductivity, Ding \textit{et al} \cite{Ding:2020aa} found that the freestanding CoSb monolayer ($T_c = 0.9$ K) possesses a higher critical temperature than the freestanding FeSe monolayer ($T_c = 0.5$ K). In analogy to FeSe, monolayer CoSb can be supported on SrTiO$_3$(001) [CoSb/STO, Fig.~\ref{fig:figure5}(e)], offering a promising alternative platform for realizing high-$T_c$ superconductivity under the regulation of the substrate, including significant charge transfer \cite{He:2013aa,Tan:2013aa} and the penetration of the longitudinal optical phonons of STO into the overlayers \cite{Zhang:2016aa}. In contrast to FeSe/STO, CoSb/STO shows a much weaker tendency toward developing magnetization, and exhibits completely different magnetic properties, as shown in Fig.~\ref{fig:figure5}(f). 
In addition, other cobalt pnictides, such as CoAs and CoBi monolayers, have also been demonstrated as candidate 2D high-$T_c$ superconductors, even though their bulk phases are likewise non-layered \cite{Gao:2021aa}. These findings offer attractive new candidate systems beyond the well-known highly crystalline 2D superconductors \cite{Saito:2016ab}.
Motivated by the theoretical prediction \cite{Ding:2020aa}, several experimental groups have made efforts to study this system, with preliminary confirmative findings. Specifically, Xue and collaborators fabricated CoSb films in an orthorhombic structure on STO and observed a symmetric gap around the Fermi level with coherence peaks at 7 meV, as well as a diamagnetic transition at 14 K, indicative of superconductivity \cite{Ding:2019aa}. CoSb$_{1-x}$ nanoribbons with quasi-one-dimensional stripes on STO were also fabricated, showing signatures of a Tomonaga-Luttinger liquid state \cite{Lou:2021aa}.
Beyond superconductivity, the normal-state band topology of CoX monolayers has been systematically studied \cite{Gao:2021aa}. First-principles calculations of the band structures of freestanding CoX monolayers are shown in Figs.~\ref{fig:figure6}(a)-(c). In the absence of SOC, the local gaps around $\Gamma$ point between the conduction band minimum (CBM) and valence band maximum (VBM) are 20, 108, and 517 meV for CoAs, CoSb, and CoBi, respectively. In the presence of SOC, band inversion occurs at $\Gamma$ point for CoAs [Fig.~\ref{fig:figure6}(a)] and CoBi [Fig.~\ref{fig:figure6}(b)], indicating that both systems are topologically nontrivial, as also confirmed by the calculated topological invariant $Z_2 = 1$. In contrast, the SOC in CoSb [Fig.~\ref{fig:figure6}(c)] is not strong enough to close and reopen the band gap. Nevertheless, a moderate tensile biaxial strain of 0.7\% can reduce the band gap to 59 meV, which can then be closed and reopened by the SOC [Fig.~\ref{fig:figure6}(d)], driving the system into a topologically nontrivial phase. The topological properties of CoX/STO have also been investigated. It turns out that each CoX monolayer possesses nontrivial band topology under the lattice constant of the STO substrate, and that the influence of the STO bands on the topology of CoX is negligible. These results suggest that CoX/STO systems are topologically nontrivial, as characterized by the odd $Z_2$ invariants and robust edge states shown in Fig.~\ref{fig:figure6}(e).
Since CoX/STO systems are able to harbor both high-$T_c$ superconductivity and nontrivial band topology, by further invoking the reciprocal-space proximity effect between the 2D bulk superconducting states and the topological edge states, these systems are promising new candidates for realizing 1D topological superconductivity.
\begin{figure}[H]
\centering
\includegraphics[scale=0.57]{Fig6.pdf}
\caption{(Color online) (a)-(d) The left panels are band structures of freestanding CoAs, CoBi, CoSb monolayers and CoSb monolayer with tensile biaxial strain of 0.7\%, respectively. These results are calculated with SOC. The right panels show the enlarged and orbital-resolved band structures within the regimes marked by solid red boxes in the left panels. The top and bottom panels plot the results calculated without and with SOC, respectively. The radii of the red, blue, and green dots indicate the spectral weights of different $d$ orbitals of Co atoms. The orange dashed lines in (a)-(d) correspond to the curved chemical potentials. (e) Topological edge states (TESs) of CoX/STO along the [100] edge. The warmer colors denote higher local density of states, and the blue regions denote the bulk band gaps. The Fermi levels are all set at zero. (a)-(e) are modified after ref.~\cite{Gao:2021aa} with permission from American Chemical Society.}
\label{fig:figure6}
\end{figure}
In addition to CoX systems, there are also other 2D candidate systems that are highly likely to support the coexistence of superconductivity and nontrivial band topology. One typical example is PdTe$_2$ grown on the STO substrate, where robust superconductivity was observed down to the bilayer thickness of PdTe$_2$ \cite{Liu:2018ac}. Nevertheless, experimental confirmation of nontrivial band topology in PdTe$_2$/STO is challenging because the predicted topological edge states turn out to be heavily overlapped with the 2D bulk states \cite{Liu:2018ac}. Another example is monolayer W$_2$N$_3$, which can be mechanically exfoliated from its van der Waals bulk counterpart. Based on calculations within the fully anisotropic Migdal-Eliashberg formalism, this material was shown to exhibit superconductivity below $T_c\sim21$ K, associated with a zero-temperature superconducting gap of $\sim$ 5 meV \cite{You:2021aa,Campi:2021aa}. The topological helical edge states emerge at 0.5 eV above the Fermi level, where superconductivity was shown to persist \cite{You:2021aa,Campi:2021aa}.
\subsection{Stanene atomic layers}
Stanene was proposed to be a large-gap quantum spin Hall insulator \cite{Xu:2013aa}, which has stimulated extensive efforts to synthesize stanene films under diverse conditions and to characterize their emergent properties. Monolayered stanene was first successfully fabricated on the Bi$_2$Te$_3$(111) substrate by molecular beam epitaxy \cite{Zhu:2015aa}, and later also on other substrates, such as Sb(111) \cite{Gou:2017aa} and InSb(111) \cite{Xu:2018ab}. Such monolayered stanene films, compressively strained by the substrates, were shown to exhibit only a trivial band structure. An advance in realizing nontrivial band topology was achieved by growing stanene on the Cu(111)-($2\times 2$) surface, where an inverted band order was observed in the ultraflat yet metastable stanene films \cite{Deng:2018aa}. Indications of edge states of monolayered stanene grown on the InSb(111)-($3\times 3$) substrate have been reported, even though the films contain pronounced defects \cite{Zheng:2019aa}. An unexpected high-buckled $\sqrt{3}\times \sqrt{3}$ stanene grown on the Bi(111) substrate has also shown a gap feature near the Fermi level together with edge states, which were further theoretically confirmed to be topologically nontrivial \cite{Song:2021aa}. On a separate front, superconductivity of few-layer stanene grown on PbTe(111) has been detected using transport measurements \cite{Liao:2018aa}, and more intriguingly, type-II Ising pairing (which will be described later) has been proposed to interpret the observed unusually large in-plane upper critical fields beyond the Pauli paramagnetic limit \cite{Wang:2019aa,Falson:2020aa}. These exciting developments demonstrate that stanene can serve as an ideal platform for studying the interplay between nontrivial band topology and superconductivity, two central ingredients that can be further explored for the realization of 2D topological superconductivity by invoking the band proximity effect.
A major obstacle severely limiting the exploration of stanene is that the overall quality of such stanene films is still far from satisfactory, such as containing multi-domains \cite{Zhu:2015aa} or pronounced defects \cite{Zheng:2019aa}. A recent theoretical work reported that a stanene monolayer obeys different atomistic mechanisms when grown on different Bi$_2$Te$_3$(111)-based substrates, and in particular, the Bi(111)-bilayer precovered Bi$_2$Te$_3$(111) substrate was predicted to strongly favor the growth of single crystalline stanene \cite{Zhang:2018ac}. This prediction has been largely supported by the latest experimental demonstration of high-quality few-layer stanene grown on the Bi(111) substrate \cite{Zhao:2022aa}. As shown in Figs.~\ref{fig:figure7}(a)-(c), one- to five-layer stanene films with high quality were successfully fabricated on the Bi(111) films that were first grown on a silicon wafer, where the stanene films are stable at room temperature. The topmost surfaces of such stanene films are saturated by hydrogen atoms, similar to those in previous studies \cite{Zhu:2015aa,Zang:2018aa}. During the growth of each layer of stanene, the most important processes are low-temperature deposition of Sn atoms and sufficient surface passivation of the films by the residual hydrogen. The systematic first-principles calculations further reveal that the surface passivation of the growth front is essential in achieving layer-by-layer growth of the high-quality stanene films, with the hydrogen functioning as a surfactant \cite{Copel:1989aa,Zhang:1997aa}. Specifically, as shown in Fig.~\ref{fig:figure7}(d), the second-order difference of the formation energy of an N-layer stanene film ($E^{''}_{s}(N)$) with hydrogen passivation is equal to or larger than zero, indicating that the film is stable; in contrast, $E^{''}_{s}(N)$ without hydrogen passivation is negative, indicating that the film is unstable.
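The stability criterion above can be stated compactly: writing $E_s(N)$ for the formation energy of an $N$-layer film, $E''_s(N) = E_s(N+1) + E_s(N-1) - 2E_s(N) \geq 0$ means the $N$-layer film is stable against disproportionating into thinner and thicker regions. A minimal sketch with hypothetical placeholder energies (not the DFT values of the stanene/Bi(111) study):

```python
# Second-order difference test for layer-by-layer film stability:
# E''(N) = E(N+1) + E(N-1) - 2*E(N) >= 0 means an N-layer film is stable
# against disproportionating into (N-1)- and (N+1)-layer regions.
# The formation energies below are hypothetical placeholders, NOT the DFT
# values of ref. [Zhao:2022aa].
def second_difference(energies, n):
    """E''(n) for a dict mapping layer number -> formation energy (eV)."""
    return energies[n + 1] + energies[n - 1] - 2.0 * energies[n]

# Hypothetical formation energies (eV) for 0- to 5-layer films:
e_form = {0: 0.00, 1: -0.50, 2: -0.95, 3: -1.38, 4: -1.80, 5: -2.21}

for n in range(1, 5):
    d2 = second_difference(e_form, n)
    print(f"N={n}: E'' = {d2:+.2f} eV -> {'stable' if d2 >= 0 else 'unstable'}")
```

With the convex (downward-curving) placeholder sequence above, every thickness tests stable, mimicking the hydrogen-passivated case; a concave sequence would flip the verdicts, as in the unpassivated case.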
\begin{figure}[H]
\centering
\includegraphics[scale=0.57]{Fig7.pdf}
\caption{(Color online) Coexistence of the robust edge states and superconductivity in one- to five-layer stanene films grown on Bi(111). (a) Schematics of a freestanding monolayer stanene (left) and a sample structure of four-layer stanene/Bi(111)/Si with the top surface of stanene passivated by hydrogen (right). (b) Topography of the first-layer stanene films on Bi(111), with a profile shown at the bottom depicting the height along the black dotted line. Inset: atomically resolved image taken on the stanene film. (c) Topography of multilayer stanene ($\sim$ 2.5 layers) on Bi(111), with the corresponding height profile shown below. (d) The second-order differences of the formation energies of N-layer stanene films as a function of the thickness of the stanene layers. (e) dI/dV mappings of a fourth-layer stanene island at different energies. (f) dI/dV mappings of different-layered stanene islands taken at the energy of the respective bulk dip minimum. (g) Layer-dependent superconducting gap of stanene films. (b)-(g) are from ref.~\cite{Zhao:2022aa} with permission from American Physical Society.}
\label{fig:figure7}
\end{figure}
With these high-quality stanene films, exotic properties have been further observed experimentally, including the long-sought edge states and superconductivity. First, the \textit{in situ} STM/STS measurements identified the enhanced intensity of local density of states at two different zigzag edges of the stanene islands with different thicknesses, signifying the existence of the robust edge states, as shown in Figs.~\ref{fig:figure7}(e) and (f). First-principles calculations further show that all these films on Bi(111) have well-defined continuous gaps across the whole BZ with the inclusion of SOC, yielding an extraordinarily robust nontrivial topological invariant $Z_2$ that is impervious to the layer thickness (see Table \ref{tab1}). The physical origin of the robust nontrivial topology is the interfacial coupling with the Bi(111) substrate. Qualitatively, the Bi(111) substrate, with inherently strong SOC, is able to promote the nontrivial topology in the few-layer stanene via effective proximity effects. Furthermore, possible origins of topologically trivial edge states, such as dangling bonds or H-passivation at the edges, can be excluded by a comparative experiment of growing stanene on Bi$_2$Te$_3$. Next, clear superconducting gaps were detected on the stanene islands where the robust edge states were also observed. Fig.~\ref{fig:figure7}(g) shows the layer-dependent superconductivity of the stanene films measured at 400 mK, exhibiting a wider and deeper superconducting gap for a thicker layer. In particular, the existence of a superconducting gap in monolayer stanene was observed for the first time, which might be enabled by the enhanced charge transfer from the Bi(111) substrate. Overall, the coexistence of nontrivial topology and intrinsic superconductivity renders stanene a promising candidate for realizing 2D topological superconductivity in a simple single-element system. 
As aforementioned, such topological superconductivity originates from band proximity effect that has been experimentally demonstrated in several 3D systems \cite{Yin:2015aa,Zhang:2018ab,Wang:2018aa,Wang:2020aa,Zhang:2019aa,Liu:2018aa}.
\begin{table}[H]
\footnotesize
\caption{$Z_2$ topological invariants of different-layered stanene films under different conditions. The table is adapted from ref.~\cite{Zhang:2018ac} with permission from American Physical Society.}
\label{tab1}
\doublerulesep 0.025pt \tabcolsep 4pt
\begin{tabular}{ l c c c c c}
\toprule
~~~~~Conditions & 1-layer & 2-layer & 3-layer & 4-layer & 5-layer \\
\hline
w/o Bi ~~~~~~~~w/o H & 1 & 0 &1 &0 &1 \\
w/o Bi ~~~~~~~~w/ H & 0 & 0 &1 &1 &1 \\
w/ Bi ~~~~~~~~~~w/o H & 1 & 1 &1 &1 &1 \\
\begin{tabular}{@{} l @{} l}w/ Bi ~~~~~~~~~ w/ H \\ (experimental condition) \end{tabular}
& 1 & 1 &1 &1 &1 \\
\bottomrule
\end{tabular}
\end{table}
In order to realize topological superconductivity in these systems that harbor both superconductivity and topological bands via reciprocal-space or band proximity effect, one common crucial condition is the emergence of Dirac-cone-like electronic bands within the energy window of the superconducting gap. Similarly, this condition should also be fulfilled in other systems, such as monolayered Fe(Te, Se), gated WTe$_2$ monolayer, and the IrTe$_2$/In$_2$Se$_3$ heterobilayer, where the coexistence of superconductivity and topologically nontrivial bands is controllable by using proper tuning knobs, as detailed in the next section.
\section{Tunability}
\label{sec:tunable}
Various external tuning approaches have been extensively studied to modulate the physical properties of materials. The structural flexibility of 2D materials offers more opportunities for exploring the tuning impacts on their physical properties, including superconductivity, nontrivial band topology, coexistence of the two, and ultimately topological superconductivity. In this section, we will briefly review some of the latest advances in property tuning toward topological superconductivity, covering strain, gating, and ferroelectricity as the tuning knobs.
\subsection{Strain}
\label{sec:strain}
Strain is an effective way to introduce superconductivity in 2D materials by altering their electronic properties via changes in lattice structure. For example, $T_c \sim 3$ K was realized in the topological insulator Bi$_2$Te$_3$ between 3 and 6 GPa \cite{Zhang:2011aa}, and pressure-driven superconductivity and suppressed magnetoresistance were observed in WTe$_2$ \cite{Pan:2015aa,Kang:2015aa}. Furthermore, strain can induce a topological phase transition by narrowing the band gap through strain-enhanced crystal field splitting. When the band gap is decreased to a critical value or even closed, the SOC can further hybridize the relevant bands and open an inverted gap at high-symmetry point(s) of the BZ, producing a topological phase transition. Through the band proximity effect mentioned before, the coexistence of superconductivity and nontrivial band topology renders the system a potential TSC.
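The gap-closing-and-reopening picture above can be illustrated with a generic two-band toy model (our own illustrative sketch, not the DFT Hamiltonian of any system discussed in this review): $H(k) = (m - bk^2)\sigma_z + \lambda k\,\sigma_x$, where "strain" tunes the mass $m$ and $\lambda$ plays the role of the SOC-induced hybridization. The gap at $k=0$ is $2|m|$: it closes as $m$ passes through zero and reopens with inverted band character.

```python
# Generic two-band illustration of a strain-driven band inversion (a toy
# model, NOT the actual band structure of CoSb, NbSe2, or any other system
# discussed here):
#   H(k) = (m - b*k^2) * sigma_z + lam * k * sigma_x,
# with spectrum E(k) = +/- sqrt((m - b*k^2)^2 + (lam*k)^2), so the gap at
# k = 0 is 2|m|; the lam term keeps the gap open away from the transition.
import math

def gap_at(k, m, b=1.0, lam=0.5):
    """Energy gap 2|E(k)| of the two-band model at momentum k."""
    return 2.0 * math.hypot(m - b * k * k, lam * k)

for m in (0.4, 0.0, -0.4):  # "strain" sweeps the mass m through zero
    label = "inverted" if m < 0 else ("closed" if m == 0 else "trivial")
    print(f"m = {m:+.1f}: gap at k=0 is {gap_at(0.0, m):.2f} ({label})")
```

This is the same mechanism invoked for strained CoSb above and for pressurized NbSe$_2$ below, where the mass term is supplied by the strain-tuned crystal field splitting.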
One compelling example is the FeSe-based system. It has been shown that monolayer FeSe exhibits a surprisingly high $T_c$ over $\sim 65$ K when grown on an STO substrate \cite{Wang:2012aa}. For such a system, it has been further proposed theoretically that the electronic structure can be tuned by the effective strain originating from a proper substrate, developing emergent topological properties on top of its intrinsic superconductivity and pointing to the feasibility of realizing high-$T_c$ topological superconductors \cite{Hao:2014aa}. A few recent studies provided complementary experimental evidence for the potential existence of topological superconductivity in monolayered FeSe systems \cite{Wang:2016aa,Shi:2017aa,Zhang:2017aa}. In one study, robust electronic states along the edges of an FeSe monolayer on STO were identified by both STM and ARPES measurements, with the topological nature of the hosting FeSe nanoribbons confirmed by first-principles theory for a metastable state with chequerboard antiferromagnetic configuration \cite{Wang:2016aa}. In another, systematic spectroscopic indications support the occurrence of a topological quantum phase transition of FeTe$_{1-x}$Se$_x$/STO from the normal to the topological state, induced by an increasing concentration of Te as a substitutional dopant for Se \cite{Shi:2017aa}. Here, aside from the stronger SOC effects associated with Te, the substitutional doping can also be viewed as applying a chemical strain, since Te possesses a larger atomic radius than Se.
Beyond FeSe-based superconductors, impurity-assisted vortex states in LiFeAs have also been shown to exhibit MZMs \cite{Kong:2021aa}, attesting to the nontrivial nature of these superconducting systems. Furthermore, as a compelling example of strain-based tunability, a very recent study reported that surface strain can alter the charge density waves (CDWs) appearing on the surfaces of LiFeAs, and such CDWs can in turn regulate the spatial distribution of the vortex lattice that harbors the MZMs \cite{Li:2022aa}. More coverage of this line of advances can be found in a separate review within this volume \cite{Ding:2022aa}.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{Fig8.pdf}
\caption{(Color online) Band structures of 1$H$-NbSe$_2$ at (a) 0 GPa, (b) 3.0 GPa, and (e) Te doping concentration $x$ = 1.0 without (upper panel) and with (lower panel) the SOC. Calculated $Z_2$ invariants for 1$H$-NbSe$_2$ at different (c) pressures and (f) Te contents. Edge states (ES) of the semi-infinite slab along an Se-terminated Nb-zigzag edge at (d) 3.0 GPa and (g) $x$ = 1.0, with the Dirac nature at the $\mathrm{\bar{X}}$ point as highlighted in the inset. In-plane upper critical field $H^{||}_{c2}$ as a function of the transition temperature $T$ at (h) 2.6 GPa and (i) $x$ = 1.0. The red dots in (h) represent the experimental data extracted from \cite{Xi:2016aa}. Panels are modified from \cite{Li:2022ab} with permission from American Chemical Society.}
\label{fig:figure8}
\end{figure*}
Going even further, beyond the Fe-based systems, a theoretical study has proposed that monolayer NbSe$_2$ can be converted into an Ising superconductor with nontrivial band topology via physical or chemical pressuring \cite{Li:2022ab}. The notion of Ising superconductivity was proposed recently in transition metal dichalcogenide (TMD) thin-film systems such as MoS$_2$ and NbSe$_2$ that possess strong SOC and inversion symmetry breaking \cite{Lu:2015aa,Xi:2016aa}. Specifically, monolayer NbSe$_2$ was experimentally observed to exhibit Ising superconductivity \cite{Xi:2016aa}, signified by the surprisingly high in-plane upper critical field well above the Pauli paramagnetic limit \cite{Clogston:1962aa}. In such systems, the effective Zeeman field is generated by the broken inversion symmetry, which allows the inherently strong SOC of the systems to lock the spins of the electrons moving in-plane into the out-of-plane directions. When strain is applied, a topological phase transition can be induced in monolayer NbSe$_2$, rendering the system a promising candidate for realizing topological Ising superconductivity \cite{Li:2022ab}, as detailed below.
First, with increasing hydrostatic pressure, the Nb-$d$ and Se-$p$ bands of monolayer NbSe$_2$ within the 1$H$ phase (1$H$-NbSe$_2$) move closer to each other, and then touch and cross, especially along the $\Gamma$-M path in the first BZ [see Figs.~\ref{fig:figure8}(a) and ~\ref{fig:figure8}(b)]. With the SOC included, the crossing bands open a gap, associated with an explicit band inversion around the M point [see Fig.~\ref{fig:figure8}(b)]. The corresponding topological invariants $Z_2$ are summarized in Fig.~\ref{fig:figure8}(c), confirming that 1$H$-NbSe$_2$ is in the topologically nontrivial phase with $Z_2$ = 1 at 2.5 GPa or higher. Secondly, via substitutional doping of Se by Te, a chemical pressure is applied to 1$H$-NbSe$_2$, leading to a similar band evolution [see Fig.~\ref{fig:figure8}(e)]. As shown in Fig.~\ref{fig:figure8}(f), when $x$ $\geq$ 0.8, the NbSe$_{2-x}$Te$_x$ system hosts a topologically nontrivial phase with $Z_2$ = 1. For both pressuring approaches, as another manifestation of the nontrivial topology in the band structures, the edge states of a semi-infinite slab along an Se-terminated Nb-zigzag edge of NbSe$_2$ at 3.0 GPa and NbSe$_{2-x}$Te$_x$ with $x$ = 1.0 are shown in Figs.~\ref{fig:figure8}(d) and ~\ref{fig:figure8}(g), respectively. Finally, the $T_c$’s of 1$H$-NbSe$_2$ under different pressures and at the Te doping concentration of $x$ = 1.0 were further estimated, showing relatively small variations from that of pristine 1$H$-NbSe$_2$. In particular, the Ising pairing nature has been explicitly enhanced, as indicated by the significantly enhanced in-plane upper critical fields [$H^{||}_{c2}$, see Figs.~\ref{fig:figure8}(h) and ~\ref{fig:figure8}(i)], and also as confirmed by the preliminary experimental results of NbSe$_{2-x}$Te$_x$ flakes for a broad range of Te concentrations \cite{Li:2022ab}.
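The Pauli (Clogston-Chandrasekhar) limit against which these in-plane critical fields are benchmarked follows from simple constants: equating the condensation energy with the Zeeman energy of a spin-singlet pair gives $B_P = \Delta_0/(\sqrt{2}\mu_B) \approx 1.86\,T_c$ tesla per kelvin for a weak-coupling BCS gap $\Delta_0 = 1.764\,k_B T_c$. A back-of-envelope evaluation (with an illustrative $T_c$ of a few kelvin, not a value taken from the cited experiments):

```python
# Clogston-Chandrasekhar (Pauli) limit: B_P = Delta_0 / (sqrt(2) * mu_B),
# with the weak-coupling BCS gap Delta_0 = 1.764 * k_B * T_c. This yields
# B_P ~ 1.86 T per kelvin of T_c; Ising-protected in-plane critical fields
# can exceed this severalfold. The T_c below is only illustrative.
import math

K_B = 8.6173e-5   # Boltzmann constant, eV/K
MU_B = 5.7884e-5  # Bohr magneton, eV/T

def pauli_limit(t_c):
    """Pauli-limiting field in tesla for transition temperature t_c in K."""
    delta0 = 1.764 * K_B * t_c
    return delta0 / (math.sqrt(2.0) * MU_B)

t_c = 3.0  # illustrative T_c of a few kelvin
print(f"B_P ~ {pauli_limit(t_c):.1f} T  (~{pauli_limit(1.0):.2f} T per K)")
```

Measured $H^{||}_{c2}$ values well above this scale are the spectroscopic fingerprint of Ising spin-orbit protection discussed above.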
\subsection{Gating}
\label{sec:gating}
In condensed matter physics, gating has long been a well-established approach to modulate carrier densities and various corresponding physical properties. One major step forward surrounding this traditional approach was the recent development of ionic liquid or solid gating, which allows much higher carrier densities to be reached \cite{Ono:2009aa} and has enabled discoveries of emergent physical phenomena such as superconductivity in MoS$_2$ \cite{Ye:2012aa}. The enabling power of such an innovative gating approach has also been exemplified by the discovery of Ising superconductivity in MoS$_2$ \cite{Lu:2015aa}, a subject briefly reviewed in the preceding subsection.
In Sec.~\ref{sec:strain}, we discussed at length how strain can be used as an effective tuning knob to drive superconducting systems into the topologically nontrivial regime. The application of an external electric field as a form of gating has also been demonstrated to induce topological phase transitions \cite{Kim:2012aa}; here, however, gating is mainly invoked as a means to convert topologically nontrivial yet non-superconducting systems into the superconducting regime. One representative line of studies concerns layered TMD systems. It was first predicted that such systems, including MoS$_2$ and WTe$_2$, can be quantum spin Hall insulators under proper conditions or in proper structural phases \cite{Qian:2014aa}, and robust edge states suggestive of the nontrivial topology have been observed experimentally for WTe$_2$ \cite{Fei:2017aa,Tang:2017aa}. On the other hand, as insulators, such systems are non-superconducting. To induce superconductivity, proper charge doping into the systems is indispensable, as successfully demonstrated for WTe$_2$ via gating \cite{Sajadi:2018aa,Fatemi:2018aa}. It should be noted that, even though the coexistence of nontrivial topology and superconductivity remains to be achieved, the fact that the same material platform of WTe$_2$ encompasses both essential properties of topological superconductivity offers new opportunities for further investigations. In this regard, it is also worth noting that a WTe$_2$ monolayer stabilized on the $s$-wave superconductor NbSe$_2$ can exhibit superconductivity with the robust edge states preserved \cite{Lupke:2020aa}, but the approach is conceptually similar to that of proximity-induced topological superconductivity reviewed earlier in Sec.~\ref{sec:real-proximity} (e.g., Bi$_2$Te$_3$/NbSe$_2$ \cite{Xu:2014aa,Xu:2015aa}), rather than effective gating as emphasized here.
\subsection{Ferroelectricity}
\label{sec:ferroelectricity}
As a nonvolatile, reversible, and thus more desirable approach, ferroelectric effects can also be exploited to modulate the superconductivity of an overlayer, given that a ferroelectric material harbors switchable polarization upon application of a voltage pulse. For some 3D conventional superconductors, such ferroelectric tuning of superconductivity has been achieved by forming heterostructures with ferroelectric films, such as significant $T_c$ modulations in Pb(Zr$_x$Ti$_{1-x}$)O$_3$/GdBa$_2$Cu$_3$O$_{7-x}$ \cite{Ahn:1999aa} and BiFeO$_3$/YBa$_2$Cu$_3$O$_{7-x}$ \cite{Crassous:2011aa}, and a complete switching of a superconducting transition in Nb-doped SrTiO$_3$ with Pb(Zr, Ti)O$_3$ as the ferroelectric overlayer \cite{Takahashi:2006aa}. However, studies of ferroelectrically tuned superconductivity in 2D systems have been very rare so far. In fact, the concurrent discoveries of 2D ferroelectric materials \cite{Liu:2016aa,Chang:2016aa,Ding:2017aa,Xiao:2018aa,Xue:2018aa,Yuan:2019aa} and 2D superconducting materials \cite{Xi:2016aa,Xu:2015ab,Yoshida:2018aa,Song:2021ab} should offer unprecedented opportunities for the exploration of ferroelectrically tuned superconductivity and related devices, especially given the atomically sharp interfacial quality of such van der Waals heterostructures \cite{Li:2020ac,Zhou:2021ac}.
Recently, ferroelectric switching of topological states has also been proposed theoretically in van der Waals heterostructures of 2D trivial semiconducting/insulating and ferroelectric materials, such as Bi(111) bilayer/In$_2$Se$_3$ \cite{Bai:2020aa}, $\beta$-phase antimonene/In$_2$Se$_3$ \cite{Zhang:2021aa}, CuI/In$_2$Se$_3$ \cite{Marrazzo:2022aa}, and In$_2$Te$_3$/In$_2$Se$_3$ \cite{Huang:2022aa}. In these systems, the opposite polarization states are associated with different topological phases. Such topological switching results from two aspects: one is that the ferroelectric polarization can change the band alignments, band hybridizations, and charge transfer between the ferroelectric and trivial materials in the heterostructures. The other is that the strong SOC of the heavy elements in the trivial layers can invert the bands and induce nontrivial band topology for only one polarization direction.
Given the strong couplings between topological/superconducting and ferroelectric states, it is highly feasible to simultaneously tune superconductivity and band topology in 2D heterostructures using nonvolatile ferroelectric control, eventually realizing topological superconductivity. A very recent theoretical study has targeted this goal, presenting simultaneously tunable $T_c$ and band topology in a heterobilayer of superconducting IrTe$_2$ and ferroelectric In$_2$Se$_3$ monolayers \cite{Chen:2022aa}. The $T_c$ of the heterobilayer is shown to depend on the In$_2$Se$_3$ polarization, with the higher $T_c$ attributed to enhanced interlayer electron-phonon coupling when the polarization is downward. Meanwhile, the band topology is also switched from trivial to nontrivial as the polarization is reversed from upward to downward. Such 2D superconductor/ferroelectric heterostructures, with the coexistence of superconductivity and nontrivial band topology, provide highly appealing candidates for realizing topological superconductivity, and this reversible and nonvolatile approach also offers promising new opportunities for detecting, manipulating, and ultimately braiding Majorana fermions.
\section{Conclusions and perspectives}
\label{sec:conclusion}
In this review, we have attempted to summarize some of the main developments surrounding 2D crystalline superconductors that possess either intrinsic $p$-wave pairing or nontrivial band topology. The selection of the contents has been made with the excellent review of Iwasa and collaborators \cite{Saito:2016ab} as the starting point, with some of our own findings subjectively, and hopefully proportionally, incorporated.
We have first introduced a classification of the generic topological superconductivity reached through three different conceptual schemes: (i) real-space superconducting proximity effect-induced TSC; (ii) reciprocal-space superconducting proximity effect-induced TSC; and (iii) intrinsic TSC. Whereas the first scheme has so far been most extensively explored, the other two remain to be fully substantiated and developed, including their subtle yet intrinsic differences.
For intrinsic or $p$-wave superconductors, we have reviewed the four candidate systems, old and new, that have been explored in the field, including Sr$_2$RuO$_4$, UTe$_2$, and graphene-based systems. In particular, among the 2D systems, we have predicted a Pb$_3$Bi alloyed system properly stabilized on a Ge(111) substrate to be an appealing candidate for realizing intrinsic topological superconductivity.
For superconductivity with coexisting nontrivial band topology, we have reviewed the developments surrounding TMD monolayered systems as well as some of the candidates we identified. Notably, CoX (X = As, Sb, Bi) systems have been predicted to be stable in monolayered structural form; these systems may not only serve as new platforms for realizing high-$T_c$ superconductivity on STO, but may also harbor nontrivial band topology and robust edge states. Furthermore, we have shown experimentally that few-layered stanene films grown on Bi(111) with strong SOC are superconducting and possess robust edge states that can be attributed to their nontrivial band topology.
In the pursuit of topological superconductivity, one widely recognized crucial aspect is the tunability of such systems and properties. We have attempted to outline three of the tuning knobs for acquiring one or both of the essential ingredients of TSC: strain, gating, and ferroelectricity. In particular, strain has very recently been demonstrated as an effective means to regulate the spatial distribution and mutual interaction of a MZM lattice \cite{Li:2022aa}. Furthermore, we optimistically expect that reversible and nonvolatile ferroelectric tunability will play a major role in gaining precise control and manipulation of MZMs. Irrespective of which tuning scheme dominates, or of their combination, the 2D nature of these systems will always offer superior advantages.
A further concern is what properties a TSC should possess in order to be favorable for potential applications in fault-tolerant quantum information processing. The first important feature is tunability; as discussed in Sec.~\ref{sec:tunable}, tunability plays a powerful role in achieving topological superconductivity and in regulating the distribution of the MZMs as well as their mutual interactions. The second is a combination of large superconducting gaps and high transition temperatures, which makes the TSC systems more robust against temperature fluctuations and enables relatively easy manipulation of the MZMs. The third is a short coherence length, which allows the MZMs to survive in high magnetic fields, reduces the probability of the MZMs being pinned by impurities, and avoids fusion of the MZMs due to interacting magnetic vortices. Collectively, such merits will help to broaden the space for manipulating MZM-based qubits.
At present, discoveries of new TSC systems and unambiguous identification of MZMs remain the forefront challenges and advancing directions of the field. Given this status quo, it is premature and somewhat impractical to elaborate excessively on MZM braiding and non-Abelian statistics. Nevertheless, as outlined in the preceding section, the various enabling and complementary tuning capabilities being developed in the field will undoubtedly optimize and even maximize our chances of achieving these earnestly sought objectives step by step, casting the first ray of sunlight on the dream era of topological quantum computing.
\begin{acknowledgments}
We thank many of our collaborators who have contributed to the main findings highlighted in this review, especially Dr. Jinfeng Jia, Dr. Changgan Zeng, Dr. Bing Xiang, and Dr. Chenxiao Zhao on the experimental side, and Dr. Wenjun Ding, Dr. Leiqiang Li, and Dr. Jianyong Chen on the theory side. This work was supported in part by the National Natural Science Foundation of China (Grant No. 11634011 and No. 11974323), the National Key R\&D Program of China (Grant No. 2017YFA0303500), the Anhui Initiative in Quantum Information Technologies (Grant No. AHY170000), the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB30000000), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302800).
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
Dark matter (DM) is an unrevealed component of the matter in the Universe whose existence is widely supported by a broad set of observations \citep{Bertone2018a}.
For decades, many theoretical candidates have been considered for particle DM, of which two representative examples are ultralight axions ($M_{\chi}$ $\ll$ 1\,eV) and weakly interacting massive particles (WIMPs; $M_{\chi} \sim {\cal O}({\rm GeV}-{\rm TeV})$).
Both candidates have been hunted for with state-of-the-art experiments and observatories, and although these searches will continue to achieve important milestones---for example the long sought-after Higgsino may soon be within reach~\citep{Rinchiuso:2020skh,Dessert:2022evk}---so far the program has been unsuccessful \citep[for the latest reviews, see e.g.,][]{Gaskins2016, Boveia2018, Tao2020}.
The longstanding lack of a DM signal detection has driven theorists to look for DM candidates beyond the conventional parameter space.
One such candidate is ultra-heavy DM (UHDM; 10\,TeV $\lesssim M_{\chi} \lesssim m_{\rm pl} \approx 10^{19}$\,GeV).
Depending on the cosmological scenario and beyond the Standard Model (SM) theory that predicts the UHDM, the abundance and properties can vary \citep[for a broad outline, see][]{snowmass2022}; e.g., WIMPzilla \citep{Kolb1999} and Gluequark DM \citep{Contino2019}.
In addition to unexplored UHDM candidates, there are models that extend the WIMP mass range beyond $\sim$10\,TeV \citep[e.g.,][]{Harling2014, Baldes2017, Cirelli2019, Bhatia2021}.
Yet there exists a general upper limit on the WIMP mass, known as the unitarity limit, which requires $M_{\chi} \lesssim 194$\,TeV \citep{Griest:1989wd,Smirnov:2019ngs}.
This bound arises as the standard WIMP paradigm is associated with a thermal relic cosmology.
In this scenario, in the early Universe, the DM and SM particles are in thermal equilibrium.
As the Universe expands and cools, the DM departs from equilibrium and its abundance is rapidly depleted by annihilations, until the expansion eventually shuts this process off and the relic abundance freezes out.
The key parameter in this scenario is the DM annihilation cross section, which for point-like particles annihilating to SM states must scale as $M_{\chi}^{-2}$ by dimensional analysis.
As the mass increases, the cross section generally decreases. If it becomes too small, then the DM will be insufficiently depleted by the time it freezes out, and too much DM will remain to be consistent with the observed cosmological density.
Ultimately, as unitarity dictates that the cross section cannot be made arbitrarily large, this constraint translates into the stated upper bound on the DM mass.
While there is an attractive simplicity to the thermal-relic cosmology so described, as soon as we allow even minimal departures from it, the unitarity bound can be violated, allowing for the possibility that DM with even higher masses could be annihilating in the present day Universe.
For example, instead of annihilating directly to SM states, the DM could produce a metastable dark state which itself decays to the SM.
As shown by \cite{Berlin2016}, if this dark state lives long enough to dominate the energy density of the Universe, its decays to the SM will then dilute the DM density, avoiding the overproduction otherwise associated with heavy thermal DM, and allowing masses up to 100\,PeV to be obtained.
PeV-scale thermal DM can also be achieved if the DM is a composite state, rather than a point-like particle.
Exactly such a scenario was considered by \cite{Harigaya:2016nlg}, where DM with a large radius arose from a model of a strongly coupled confining theory in the dark sector. The lightest baryon in the theory plays the role of DM, which annihilates through a portal coupling to eventually produce SM states.
Such a scenario can evade the unitarity bound as the annihilation cross section is no longer guaranteed to scale as $M_{\chi}^{-2}$; it can instead now be determined by the geometric size of the composite DM.
Indeed, we will see that such composite DM scenarios are broadly the models that can be probed using the observational strategies considered in this work.
The self-annihilations which play a role in setting DM abundance in the early Universe can also be active today, producing an observable flux of stable SM particles such as $e^{\pm}$, $\nu_{e, \mu, \tau}$, and $\gamma$-rays, as well as unstable quarks, leptons, and bosons whose interaction processes can produce secondary $\gamma$-rays.
The full energy spectrum at production can be estimated with Monte Carlo (MC) simulations of the underlying particle physics.
For this purpose, {\tt PYTHIA} is the most widely used program, providing an accurate prompt DM spectrum up to ${\cal O}(10)$ TeV \citep{pythia}, and is a central ingredient in the widely used PPPC4DMID~\citep{Cirelli:2010xx,Ciafaloni:2010ti}.
However, {\tt PYTHIA} is not appropriate for studying UHDM in general, as it omits many of the interactions in the full unbroken SM that become important as the UHDM mass becomes much larger than the electroweak scale.
An alternative approach was introduced in \cite{HDM}, which computed the prompt DM spectrum from 1 TeV up to the Planck scale, the so-called {\tt HDMSpectrum}.\footnote{The results are publicly available at \url{https://github.com/nickrodd/HDMSpectra}.}
To do so, the authors of that work mapped the calculation of the DM spectrum to the computation of fragmentation functions, which can then be computed with DGLAP evolution in a manner that includes all relevant SM interactions, providing a better characterization of the prompt UHDM spectrum \citep[see][for a discussion of earlier approaches to compute DM spectra]{HDM}.
When $\gamma$-rays are produced from DM annihilation throughout the Universe, they can propagate to the Earth and be detected.
After considering the propagation effects,\footnote{For DM searches with galaxies in the Local Group, any galactic absorption by the starlight, infrared photons, and/or cosmic microwave background can be ignored due to its relatively small contribution \citep[$<$20\% at $\mathcal{O}$(100) TeV;][]{Esmaili2015}.
We note that while the UHDM mass range considered extends to 30 PeV, detected photons with energies above 100 TeV are not considered.}
the $\gamma$-ray flux at the Earth from DM annihilation can be described as
\begin{equation}\label{eq:dm_flux}
\frac{dF(E, \hat{n})}{dE d\Omega} = \frac{\langle\sigma v\rangle}{8\pi M_{\chi}^2}\frac{dN_{\gamma}(E)}{dE}\int_{\rm los}dl\, \rho^2(l\hat{n}),
\end{equation}
where $\langle\sigma v\rangle$ is the velocity-averaged annihilation cross section.
The prompt energy spectrum, $dN_{\gamma}(E)/dE$, depends on the DM annihilation channel and is determined from the HDM spectrum; $\rho(l\hat{n})$ is the DM density along the line of sight (los).
Even though the DM annihilation process can occur anywhere that DM is present, the DM signature from DM-rich regions will be brighter.
For instance, dwarf spheroidal galaxies (dSphs) in the Local Group are one of the best targets for DM study because of their high mass-to-light ratio (implying high DM density; e.g., $M/L \sim 3400 M_{\sun}/L_{\sun}$ for Segue 1; \citealp{Simon2011}), close proximity, and absence of bright nearby background sources.
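The pieces of Eq.~\eqref{eq:dm_flux} combine straightforwardly in code. The sketch below (Python) evaluates the differential flux for a toy prompt spectrum and an assumed $J$-factor; the spectral shape, cross section, and $J$-factor values here are illustrative placeholders, not the HDMSpectra inputs used in this work.

```python
import numpy as np

def dm_flux(E, sigma_v, M_chi, dNdE, J):
    """Differential gamma-ray flux of Eq. (1), with the ROI solid angle
    folded into the J-factor.

    E, M_chi in GeV; sigma_v in cm^3/s; J in GeV^2 cm^-5 sr;
    dNdE(E) is the prompt spectrum per annihilation (GeV^-1).
    """
    return sigma_v / (8.0 * np.pi * M_chi**2) * dNdE(E) * J

# Toy prompt spectrum, vanishing at E = M_chi (illustrative only):
M_chi = 1e5  # 100 TeV in GeV
toy_dNdE = lambda E: np.where(E < M_chi, (1.0 / E) * (1.0 - E / M_chi) ** 3, 0.0)

E = np.logspace(3, 5, 200)  # 1-100 TeV
flux = dm_flux(E, sigma_v=1e-23, M_chi=M_chi, dNdE=toy_dNdE, J=1e18)
```

The flux falls steeply with energy for this spectrum and cuts off at $E = M_{\chi}$, as it must for a two-body annihilation.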
The $\gamma$-rays that could be arriving at Earth from DM annihilations would be detectable with $\gamma$-ray space telescopes and ground-based observatories, enabling indirect searches for DM.
The self-annihilation of UHDM can produce $\gamma$-rays from around a TeV to above a PeV, containing the energy band in which the ground-based $\gamma$-ray observatories have better sensitivity than space-based instruments.
There are two classes of ground-based Very-High-Energy (VHE; $>$100 GeV) $\gamma$-ray observatories: Imaging Atmospheric Cherenkov Telescope arrays (IACTs) and Extended Air Shower arrays (EAS).
IACTs use reflecting dishes and fast cameras (generally based on photomultiplier tubes, or PMTs) to image the Cherenkov light emitted by the air showers that TeV $\gamma$-rays trigger as they interact with Earth's atmosphere.
Current-generation EAS arrays are made of water tanks, in which optical detectors (generally PMTs) directly detect the Cherenkov radiation from charged air-shower particles. Both types of instrument can reconstruct TeV $\gamma$-rays~\citep{doi:10.1146/annurev-nucl-102014-022036}.
Both have been used for indirect DM searches, with a particular focus on searches for electroweak-scale WIMPs \citep[e.g.,][]{dm_magic, dm_veritas, dm_hess, dm_hawc, dm_magic2, hawc_dm_halo}.
In addition to those $\gamma$-ray observatories, neutrino observatories have also searched for an indirect DM signal \citep[e.g.,][]{icecube_dm, Albert2022}.
In this paper, we explore the feasibility of detecting a UHDM annihilation signature from dSphs with current and future ground-based VHE $\gamma$-ray observatories.
To this end, we use only publicly available resources.
Also, we compute expected upper limits (ULs) for a UHDM particle with a mass from 30 TeV to 30 PeV, assuming that the UHDM signal is not detected.
We take Segue 1, one of the local classical dSphs, as our benchmark target, because it has been widely used for indirect DM searches, making it possible to place our results in the context of the existing limits at lower masses \citep[e.g.,][]{dm_veritas, dm_magic}. Furthermore, it has good visibility (in terms of zenith angle of observation) for all of the instruments discussed in this work.
We consider three instruments: the Very Energetic Radiation Imaging Telescope Array System (VERITAS; IACT), the Cherenkov Telescope Array (CTA; IACT), and the High-Altitude Water Cherenkov Observatory (HAWC; EAS array).
For VERITAS and HAWC, we do not access the official instrument response functions (IRFs)\footnote{The IRFs describe the mapping between the true and detected flux, primarily consisting of the effective area, point spread function, and energy dispersion matrix, each of which will differ between experiments.} and/or observed background spectra, but rather make reasonable assumptions based on publicly available information, and introduce a VERITAS-like and a HAWC-like instrument.
The remaining discussion is organized as follows.
In Section~\ref{sec:theory}, we present the theoretical motivations for UHDM searches, with a particular focus on the experimentally accessible parameter space.
The data acquisition and processing for each instrument is detailed in Sec.~\ref{sec:data}, with the methods used to calculate the projected sensitivity and ULs for each instrument outlined in Sec.~\ref{sec:method}.
We present our results in Sec.~\ref{sec:result}, and the studies on the systematic and statistical uncertainties are discussed in Sec.~\ref{sec:discussion}.
Our conclusions are reserved for Sec.~\ref{sec:summary}.
\vspace{0.3in}
\section{Theoretical Motivation}\label{sec:theory}
Theoretical arguments for DM have often downplayed the ultraheavy mass regime.
The prejudice against heavier masses arises from the so-called unitarity limit of \cite{Griest:1989wd}, which is based on the following ``bottom-up'' argument.
The naive expectation is that DM annihilation rates for point-like particles will scale as $\langle \sigma v \rangle \sim C/M_{\chi}^2$, where $M_{\chi}$ is the particle mass, and $C$ is a dimensionless parameter.
For a thermal relic, this cross section is what depletes the DM abundance away from its equilibrium value once the temperature of the Universe drops below $M_{\chi}$, and so we expect $\Omega_{\chi} \propto 1/\langle \sigma v \rangle$.
Accordingly, for too-large $M_{\chi}$, DM cannot destroy itself with enough vigor, and the Universe overcloses.
One can boost the size of $C$, but only up to an amount allowed by unitarity.
DM as a {\it simple} self-annihilating thermal relic is only possible for masses up to $\sim$194 TeV \citep{Smirnov:2019ngs}.
We show this upper limit in Fig.~\ref{fig:lim}; 194 TeV is an updated value of the conservative bound from \cite{Griest:1989wd} (those authors used $\Omega_\chi h^2 = 1$, as opposed to the current measurement of $\Omega_\chi h^2 = 0.12$ given by \citealp{Planck:2018vyg}).
To derive $M_{\chi} \lesssim 194~{\rm TeV}$, one assumes that the annihilation rate saturates the unitarity limit ($\langle \sigma v \rangle \propto 1/v$; cf. Eq.~\ref{eq:unitlim} with $J=0$) for the entire relevant history of the DM.
A rate that scales inversely with velocity is typically found only at low velocities and in the presence of a long-range force, as in the celebrated case of Sommerfeld enhancement.
As discussed below, it is difficult to model-build a scenario where the cross section is maximally large, but where the DM continues to behave as a simple elementary particle.
Typically, bound state and compositeness effects will enter in this limit.
For such reasons, in \cite{Griest:1989wd}, the authors felt the above cross-section scaling was overly conservative.
Instead, they assumed that the cross section was dominantly $S$-wave ($\langle \sigma v \rangle \propto v^0$) but with a maximum value still set by unitarity (as given in Eq.~\ref{eq:unitlim}).
Using this, and assuming $\Omega_{\chi} h^2 = 1$, they derived the well-known upper limit of 340 TeV.
Repeating their calculation for $\Omega_{\chi} h^2 = 0.12$, the bound is reduced to $M_{\chi} \lesssim 116~{\rm TeV}$.
Nevertheless, we will adopt the more conservative value of 194 TeV in our results.
It involves the fewest assumptions about the early Universe, but amounts to assuming that DM finds a way to annihilate at the limiting cross-section value throughout the era that set its relic abundance.
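The scale of the unitarity mass bound can be recovered with a rough parametric estimate: demand that the saturated $S$-wave cross section $4\pi/(M_{\chi}^2 v)$ at freeze-out equal the thermal-relic value. The sketch below (Python) does this with an assumed freeze-out velocity $v \sim 0.3$; it is an order-of-magnitude check only, not the full freeze-out computation that yields the 116--194 TeV values quoted above.

```python
import numpy as np

# (hbar*c)^2 * c converts a cross section times velocity from GeV^-2
# (natural units, v in units of c) to cm^3/s:
GEV2_TO_CM3_S = 3.894e-28 * 2.998e10

v_fo = 0.3               # assumed typical relative velocity at freeze-out
sigma_v_relic = 2.4e-26  # cm^3/s, thermal-relic value (Steigman et al. 2012)

# Set 4*pi/(M^2 v_fo), converted to cm^3/s, equal to sigma_v_relic and
# solve for the mass at which unitarity saturation is just sufficient:
M_max_GeV = np.sqrt(4.0 * np.pi * GEV2_TO_CM3_S / (v_fo * sigma_v_relic))
```

The estimate lands at roughly $10^2$ TeV, in line with the more careful bounds discussed in the text.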
The presence of additional structure in either the DM particles themselves or the final states they capture into can weaken even this conservative limit, though.
For example, if capture into bound states is possible, then selection rules can open up annihilation channels into higher partial waves.
The total relic abundance of DM is necessarily set by the sum over all channels, but each partial wave respects the limit from unitarity,
\begin{equation}
\sigma_J \leq \frac{4\pi\, (2J+1)}{M_\chi^2 v_{\rm rel}^2}.
\label{eq:unitlim}
\end{equation}
As discussed in \cite{Bottaro:2021snn}, even for the straightforward scenario of thermal relics that are just multiplets of the electroweak group SU(2)$_L$, this allows DM consistent with unitarity up to $\sim$325 TeV.
It would seem uncontroversial to analyze the full regime that allows this simple scenario.
To relax the bound further, note that, as mentioned above, the unitarity limit of roughly one hundred TeV assumes a point-like particle; this was explicitly recognized already in the classic 1990 reference on the matter \citep{Griest:1989wd}.
If, however, DM is a composite particle, then the relevant dimensionful scale that sets the annihilation rate can be its geometric size, $R$, which may be much larger than its Compton wavelength $\sim 1/M_{\chi}$.
It is thus possible to realize a thermal-relic scenario for masses $\gg$ 100 TeV ({\it e.g.,} the example of \citealp{Harigaya:2016nlg} discussed above).\footnote{Alternatively, to get to very high masses, one can decouple the DM abundance from its annihilation rate. In this approach, one forfeits the WIMP-miracle in favor of an alternate cosmological history.
As an example, some other particle could populate the Universe, which ultimately decays to the correct quantity of DM ({\it cf.}~\citealp{Carney:2022gse} for discussion and references).
If DM is non-thermal, then additional structure is needed for detection. One straightforward possibility is to construct DM that is cosmologically stable, but decays with an observable rate ({\it e.g.}~\citealp{Kolb1999}).}
For pointing telescopes like VERITAS, HESS, or CTA to have a discovery advantage, one needs a scenario, like compositeness, with non-negligible DM annihilation, since the resulting flux will scale like $\rho^2$.
Bound-state particles with a heavy constituent, whether obtained as thermal relics or by a more complicated cosmology, provide a means to get annihilation rates $\langle \sigma v \rangle \, \gg \, C_{\rm unitary}/M_{\chi}^2$, where $C_{\rm unitary}$ is the largest factor consistent with quantum mechanics in a single partial wave.
One may therefore consider this as a generalization of the ``sum over partial waves'' loophole we first mentioned in the bound-state capture scenario.
As we see in Fig.~\ref{fig:lim}, there is a large region of parameter space beyond the point-like unitarity limit.
Furthermore, we project that the limits from CTA exceed those from HAWC out to several PeV, and are primed for testing these models.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\linewidth]{figures/TheorySpace.pdf}
\caption{A comparison of our estimated limits for annihilation to $t\bar{t}$ against various theoretical benchmarks.
%
The black solid curve refers to the standard thermal-relic cross section ($2.4\times10^{-26}$ cm$^3$/s;~\citealp{Steigman:2012nb}), and the region shaded in gray is the conventional parameter space associated with a point-like thermal relic.
%
For Segue 1, the $J=0$ partial-wave unitarity limit on a point-like annihilation cross section is shown in orange---irrespective of the early Universe cosmology, the point-like particles can only annihilate at a rate below this.
%
Composite states are not so restricted, however, and can annihilate up to the various composite unitarity bounds.
%
For a detailed discussion, see Sec.~\ref{sec:theory}.}
\label{fig:lim}
\end{figure}
The generic possibility of a geometric cross section for composite particles can be seen with atomic (anti)hydrogen, as pointed out in \cite{Geller:2018biy}, whose arguments we briefly recap.
In a hydrogen-antihydrogen collision, an interaction with a geometric cross section is the ``rearrangement'' reaction, which produces a protonium ($p \bar p$) $+$ positronium ($e^+ e^-$) final state.
Partial-wave by partial-wave, unitarity is naturally respected.
However, summing over all allowed angular momenta gives
\begin{equation}
\sigma \, \sim \, \sum_{J = 0}^{J_{\rm max}} \sigma_J \, \sim \, \frac{4\pi}{k_i^2} \sum_{J = 0}^{J_{\rm max}} (2J + 1) \,\sim\, \frac{4\pi}{k_i^2} J_{\rm max}^2 \,\sim\, 4\pi R^2,
\label{eq:unitsc}
\end{equation}
where $k_i$ is the initial momentum, $R$ is the size of the particle, and $J_{\rm max}$ is set by angular momentum conservation and the classical value $(k_i \, R)$.\footnote{For this parametric estimate, we are taking $J_{\rm max} \sim L_{\rm max} \sim k_i \, R$. Strictly, $k_i \, R$ is bounding the orbital angular momentum in the collision.
Also, Eq.~\eqref{eq:unitsc} assumes a kinetic energy, $E_i = k_i^2/2M_{\chi}$ comparable or larger than the incoming particle's binding energy, $E_b$.
If $E_i \ll E_b$, then only the $S$-wave will contribute and the cross section $\sigma \sim R/k_i$.
Since this involves just a single partial wave, we therefore cannot use a sum with many terms to exceed the point-particle unitarity limit.}
Importantly, a parametric enhancement in the cross section has been achieved by saturating each partial-wave bound up to $J_{\rm max}$.
Whatever partial-wave protonium is captured into, it will ultimately decay down the spectroscopic ladder until reaching the lowest-allowed-energy state, at which point it annihilates.
For a generic scenario with the dark sector charged under the SM, the entire process of capture, decay, and annihilation is prompt on observational timescales.
An ultraheavy dark-hydrogen thus provides a proof of concept for a ``detection-through-annihilation'' scenario.
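The parametric counting in Eq.~\eqref{eq:unitsc} is easy to verify numerically: summing the saturated partial-wave cross sections up to $J_{\rm max}$ indeed approaches the geometric value $4\pi R^2$. A minimal sketch (Python, natural units), taking $J_{\rm max} = k_i R$ as in the parametric estimate:

```python
import numpy as np

def geometric_sigma(k_i, R):
    """Sum of saturated partial-wave cross sections up to J_max = k_i * R.

    Implements sigma ~ (4*pi/k_i^2) * sum_{J=0}^{J_max} (2J+1), the
    parametric estimate of Eq. (4); k_i and 1/R in the same (natural) units.
    """
    J_max = int(k_i * R)
    return 4.0 * np.pi / k_i**2 * sum(2 * J + 1 for J in range(J_max + 1))

# For k_i * R >> 1 the sum approaches the geometric result 4*pi*R^2:
sigma = geometric_sigma(k_i=100.0, R=1.0)
ratio = sigma / (4.0 * np.pi * 1.0**2)
```

Since $\sum_{J=0}^{J_{\rm max}} (2J+1) = (J_{\rm max}+1)^2$, the ratio tends to unity as $k_i R \to \infty$, confirming the $4\pi R^2$ scaling.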
The argument for geometric scaling generalizes, though, to include states bound by strong dynamics \citep{Kang:2006yd,Jacoby:2007nw}.
Thus, DM may be more like an ultraheavy $B$-meson \citep[as studied by][]{Geller:2018biy}, or a gluequark \citep[adjoint fermion with color neutralized by cloud of dark gluons;][]{Contino2019}, heavy-light baryon \citep{Harigaya:2016nlg}, {\it etc.}
For a complete scenario, one would necessarily need an explanation for why these heavy-constituent composites came to be the DM with the right abundance.
Nonetheless, the physics behind their ability to annihilate with an effective rate far above the point-particle unitarity limit is straightforward.
Therefore, models with dynamics not-too-different from the SM can realize annihilating particle DM all the way to the Planck scale, and should be tested.
With the above in mind, in Fig.~\ref{fig:lim}, we outline basic theoretical aspects of the parameter space we will consider \citep[\textit{cf.}][]{ANTARES:2022aoa}.
Firstly, we see that the majority of the mass range probed is above the conventional unitarity limit.
Next, the curve we label ``Partial-Wave Unitarity'' represents the largest present-day annihilation cross section consistent with the same point-particle unitarity constraints that, when applied in the early Universe, constrain $M_{\chi} \lesssim 194$ TeV.
In particular, we require $\langle \sigma v \rangle \leq 4\pi/(M_{\chi}^2 v_{\rm rel})$, where we take $v_{\rm rel} \sim 2 \times 10^{-5}$ as an approximate value for the average velocity between DM particles in nearby dwarf galaxies \citep{Martinez:2010xn,McGaugh:2021tyj}.\footnote{We note that the location of the Partial-Wave Unitarity bound strongly depends on the system observed.
A search for DM annihilation within the Milky Way, for instance, would depend on a higher relative velocity, $v_{\rm rel} \sim 10^{-3}$, given the larger mass of our galaxy as compared to its satellites.
This would lower the Partial-Wave Unitarity curve shown in Fig.~\ref{fig:lim} by roughly two orders of magnitude.}
Composite states can readily evade this bound, although as shown by \cite{Griest:1989wd}, even these systems eventually hit a ``Composite Unitarity'' bound, $\langle \sigma v \rangle \leq 4\pi(1+M_{\chi} v_{\rm rel} R)^2/(M_{\chi}^2 v_{\rm rel})$, which for large masses reduces to the result in Eq.~\eqref{eq:unitsc}.
We show this result for different values of $R$ in Fig.~\ref{fig:lim}, and note that for $M_{\chi} \ll R^{-1}$, these results reduce to the point-like unitarity limit.
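A minimal numerical comparison of the point-like and composite bounds at dwarf-galaxy velocities can be written as below (Python); the 10 PeV mass and $R = (100~{\rm MeV})^{-1}$ radius are illustrative choices, not values tied to a specific model in this work.

```python
import numpy as np

# (hbar*c)^2 * c: converts sigma*v from GeV^-2 (natural units) to cm^3/s
GEV2_TO_CM3_S = 3.894e-28 * 2.998e10

def composite_unitarity(M_chi_GeV, v_rel, R_GeVinv=0.0):
    """Maximum <sigma v> for a composite state of radius R (in GeV^-1):

        <sigma v> <= 4*pi*(1 + M*v_rel*R)^2 / (M^2 * v_rel),

    returned in cm^3/s.  R = 0 recovers the point-like partial-wave limit.
    """
    numer = 4.0 * np.pi * (1.0 + M_chi_GeV * v_rel * R_GeVinv) ** 2
    return numer / (M_chi_GeV**2 * v_rel) * GEV2_TO_CM3_S

v = 2e-5  # typical DM relative velocity in nearby dwarfs (as in the text)
point_like = composite_unitarity(1e7, v)               # 10 PeV, R = 0
extended = composite_unitarity(1e7, v, R_GeVinv=10.0)  # R = (100 MeV)^-1
```

Even a modest geometric size opens up orders of magnitude of cross section above the point-like limit at these masses, which is the parameter space targeted in Fig.~\ref{fig:lim}.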
\vspace{0.3in}
\section{Data reduction} \label{sec:data}
\subsection{VERITAS-like instrument}\label{sec:veritas}
VERITAS is an array of four imaging atmospheric Cherenkov telescopes located in Arizona, USA \citep{VERITAS}. One of the VERITAS scientific programs is to search for indirect DM signals from astrophysical objects such as dSphs and the Milky Way Galactic Center \citep{Zitzer2017}. Since it has a similar sensitivity to other IACT observatories like MAGIC and HESS \citep{Park2015, Aleksic2016, Aharonian2006}, we adopt VERITAS as representative of current-generation IACTs.
For our analysis, we take the published IRFs and observed ON and OFF region\footnote{The ON region is defined as the area centered on a target. The OFF region is one or more areas containing no known $\gamma$-ray sources, used for estimating the isotropic-diffuse background rate.} counts from \cite{dm_veritas}. The size of the ON region was 0.03 deg$^2$, and the OFF region was defined by the crescent background method \citep{Zitzer2013}. The relative exposure time between the ON and OFF regions ($\alpha$) was 0.131. From 92.0 hrs of Segue 1 observations, the numbers of observed events in the ON ($N_{\rm on}$) and OFF ($N_{\rm off}$) regions were 15895 and 120826, respectively. We introduce a reference instrument, denoted ``VERITAS-like,'' whose observables are limited to total $N_{\rm on}$, total $N_{\rm off}$, and $\alpha$ (see App.~\ref{sec:check} for the comparison between VERITAS and VERITAS-like constraints on the DM annihilation cross section). In addition, we scale down the $N_{\rm on}$ and $N_{\rm off}$ values to a nominal observation time of 50 hours.
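As a consistency check on these published counts, one can evaluate the ON-region excess and its significance with the standard Li \& Ma (1983, Eq.~17) formula. The sketch below (Python) is our own illustration using the numbers quoted above; it is not part of the VERITAS analysis chain.

```python
import numpy as np

# Published Segue 1 counts for the VERITAS-like instrument (92.0 h):
N_on, N_off, alpha = 15895, 120826, 0.131

# Background-subtracted excess in the ON region:
excess = N_on - alpha * N_off

# Li & Ma (1983, Eq. 17) significance of the excess:
N_tot = N_on + N_off
S = np.sqrt(2.0) * np.sqrt(
    N_on * np.log((1.0 + alpha) / alpha * N_on / N_tot)
    + N_off * np.log((1.0 + alpha) * N_off / N_tot)
)
```

The excess of a few tens of counts corresponds to a significance well below $2\sigma$, consistent with the non-detection assumed throughout this work.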
\vspace{0.1in}
\subsection{CTA}\label{sec:cta}
CTA is the next-generation ground-based IACT array, which is expected to have about 10 times better point-source sensitivity when compared with the current IACT observatories, in addition to a broader sensitive energy range, stretching from 20 GeV to 300 TeV, and two to five times better energy and angular resolution \citep{Bernlohr2013}.
The observatory will be made up of two arrays, providing full-sky coverage: one in the northern hemisphere (CTA-North; La Palma in Spain) and the other in the southern hemisphere (CTA-South; Atacama Desert in Chile). CTA will be equipped with tens of telescopes. In this study, we consider the CTA-North array, from which our target, Segue 1, can be observed. CTA will broaden our understanding of the extreme Universe, including the nature of DM \citep{CTA}, and will be able to probe long-predicted, but so-far untested candidates like Higgsino DM~\citep{Rinchiuso:2020skh}.
The CTA IRFs and background distributions as a function of energy, as well as official analysis tools,\footnote{{\tt Gammapy}, \url{https://gammapy.org/}} are publicly available \citep{CTA_IRFs, Deil2017}.
We assume the alpha configuration (\textit{prod5 v0.1}). In the alpha configuration, the CTA-North array consists of 4 Large-Sized Telescopes (LSTs) and 9 Medium-Sized Telescopes (MSTs).\footnote{\url{https://www.cta-observatory.org/science/ctao-performance/}} To compare with the VERITAS-like instrument, we use the same observation conditions; the size of the ON-region is set to 0.03 deg$^2$ with $\alpha$ of 0.131.
\vspace{0.1in}
\subsection{HAWC-like instrument}\label{sec:hawc}
HAWC, located at Sierra Negra, Mexico, is a $\gamma$-ray and cosmic-ray observatory. The instrument consists of 300 water tanks, each containing about $1.9\times10^5$ L of water instrumented with four photomultiplier tubes (PMTs). After applying $\gamma$/hadron separation cuts, observed $\gamma$-ray events are divided into analysis bins ($\mathcal{B}_{\rm hit}$) based on the fraction of the number of PMT hits. HAWC observes two-thirds of the sky on a daily basis and has found many previously undetected VHE sources \citep{Albert2020}. In addition, the collaboration has studied 15 dSphs within the instrument's field of view to search for DM annihilation and decay signatures \citep{dm_hawc,HAWC:2017udy}.
The IRFs and observed background spectrum for Segue 1 are not publicly available, so we introduce a ``HAWC-like'' reference instrument based on reasonable assumptions. A dataset including 507 days of observations of the Crab Nebula is publicly available \citep{Abeysekara2017},\footnote{\url{https://data.hawc-observatory.org/datasets/crab_data/index.php}} and the declination angle (Dec.) of the Crab Nebula is not significantly different from that of Segue 1 ($\Delta$Dec.~$\approx$ 6 degrees). Since Dec.~is expected to be one of the key factors determining the shape of the IRFs and background rate, we assume that the background rate and IRFs should be similar for observations of Segue 1 and the Crab Nebula (see App.~\ref{sec:check} for the comparison between HAWC and HAWC-like constraints on the DM annihilation cross section). With the help of the Multi-Mission Maximum Likelihood framework \citep[{\tt 3ML}; ][]{Vianello2015}, we acquire the IRFs and background rate for each $\mathcal{B}_{\rm hit}$ (total of 9 bins) as used in \cite{Abeysekara2017}. We set the radius of an ON region to 0.2 degrees, and the background is calculated from a circular region with a 3-degree radius around the Crab Nebula, providing $\alpha$ of 0.04/9 ($\sim$ 0.004).
\vspace{0.3in}
\section{Analysis methods}\label{sec:method}
\subsection{Ingredients for estimating UHDM signal}
\begin{figure*}[t!]
\centering
\subfigure{\includegraphics[width=0.45\linewidth]{Exp-Counts.pdf}}\hspace{0.5cm}
\subfigure{\includegraphics[width=0.45\linewidth]{Rel-Flux.pdf}}
\caption{The number of expected $\gamma$-ray events (left) and relative ratio between observable and total $\gamma$-ray energy flux (right).
%
The expected counts are computed assuming an effective area of $10^{10}$ cm$^2$, 50 hours of exposure time, a $J$-factor of 10$^{18}$ GeV$^2$/cm$^5 \,\cdot\,$sr, and that $\langle \sigma v \rangle = 10^{-23}$ cm$^3$/s.
%
The observable energy flux is defined as the integrated $\gamma$-ray energy flux up to 100 TeV, and for reference in the black dashed curve we show a value of 10\%. The portion of the observable UHDM signal from $M_{\chi} >$ 100 TeV decreases progressively as $M_{\chi}$ increases.
%
The various line styles refer to the classes of annihilation channel: charged leptons (solid), quarks (dashed), gauge bosons (dotted), and $\nu_{e}\bar{\nu}_{e}$ (dashed-dotted).}
\label{fig:ratio}
\end{figure*}
To compute the $\gamma$-ray annihilation flux at the Earth, given in Eq.~\eqref{eq:dm_flux}, we need two ingredients: the photon spectrum for each DM annihilation channel and the DM density profile of the selected target, Segue 1.
As stated, we use the {\tt HDMSpectrum} \citep{HDM} to calculate the expected DM signal because it provides an accurate spectrum for the full mass range we consider.
The annihilation of UHDM produces $\gamma$-rays of energies equal to or less than $M_{\chi}$.
We compute the fraction of the produced energy flux ($F \propto \int E \frac{dN}{dE} dE$) that is observable and the number of expected $\gamma$-ray events ($N \propto \int \frac{dN}{dE} dE$); i.e., the energy flux and $\gamma$-ray counts distribution within the energy band of the current and future VHE $\gamma$-ray observatories ($E \leq$ 100 TeV).
In this work, we consider nine annihilation channels: three charged leptons ($e^{+}e^{-}$, $\mu^{+}\mu^{-}$, and $\tau^{+}\tau^{-}$), two heavy quarks ($t\bar{t}$ and $b\bar{b}$), three gauge bosons ($W^{+}W^{-}$, $ZZ$, and $\gamma\gamma$), and one neutrino ($\nu_e \bar{\nu}_e$).
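To make the two integrals concrete, the sketch below evaluates both fractions for a toy power-law spectrum $dN/dE \propto E^{-1.5}$ truncated at $E = M_{\chi}$; the spectral index, the 0.1 TeV lower bound, and the 100 TeV threshold are illustrative assumptions, not the channel-dependent {\tt HDMSpectrum} shapes used in our analysis.

```python
import math

E_MAX_OBS = 100.0   # TeV; upper edge of the band assumed observable in the text
E_MIN = 0.1         # TeV; illustrative lower integration bound (an assumption)

def _logspace_integral(f, e_lo, e_hi, n_steps=4000):
    """Trapezoidal integral of f(E) on a log-spaced grid."""
    if e_hi <= e_lo:
        return 0.0
    xs = [math.exp(math.log(e_lo) + (math.log(e_hi) - math.log(e_lo)) * i / n_steps)
          for i in range(n_steps + 1)]
    ys = [f(x) for x in xs]
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) for i in range(n_steps))

def observable_fractions(m_chi_tev, index=1.5):
    """Fractions of counts (integral of dN/dE) and of energy flux (integral of
    E dN/dE) that fall below E_MAX_OBS, for a toy spectrum dN/dE ~ E**-index
    truncated at E = m_chi_tev."""
    e_cut = min(E_MAX_OBS, m_chi_tev)
    dnde = lambda e: e ** (-index)
    ednde = lambda e: e * dnde(e)
    n_frac = (_logspace_integral(dnde, E_MIN, e_cut)
              / _logspace_integral(dnde, E_MIN, m_chi_tev))
    f_frac = (_logspace_integral(ednde, E_MIN, e_cut)
              / _logspace_integral(ednde, E_MIN, m_chi_tev))
    return n_frac, f_frac
```

Consistent with Fig.~\ref{fig:ratio}, the counts fraction for such a soft spectrum remains close to unity well after the energy-flux fraction has started to fall.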
For the DM density profile, we take a generalized version of the Navarro–Frenk–White (NFW) profile, which is a function of five parameters \citep{Hernquist1990, Zhao1996, GS2015},
\begin{equation} \label{eq:dm_profile}
\rho(r) = \frac{\rho_{s}}{(r/r_s)^{\gamma}[1+(r/r_s)^{\alpha}]^{(\beta-\gamma)/\alpha}},
\end{equation}
where the choice of ($\alpha$, $\beta$, $\gamma$) = (1, 3, 1) recovers the original NFW profile \citep{NFW1997}, and $r_s$ is the scale radius of the DM halo.
The so-called $J$-factor is defined as the integral of the squared DM density along the los within a region of interest (roi),
\begin{equation}
J = \int_{\rm roi}d\Omega \int_{\rm los}dl\, \rho^2(l\hat{n}).
\end{equation}
The set of five NFW parameters ($\alpha$, $\beta$, $\gamma$, $\rho_s$, and $r_s$) is obtained by fitting the observed kinematic data of the dSphs. Limited data produces large uncertainties in estimates of the $J$-factor, which propagate as a systematic uncertainty when estimating the DM cross section (see Sec.~\ref{sec:discussion}). In a thorough study, \cite{GS2015} obtained a number of parameter sets that adequately describe the data. Among more than 6000 sets for Segue 1, we take one that approximates the median of the $J$-factor (see Table~\ref{tab:nfw}).
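As a consistency check of Eq.~\eqref{eq:dm_profile}, the snippet below transcribes the generalized profile and verifies that the choice ($\alpha$, $\beta$, $\gamma$) = (1, 3, 1) recovers the original NFW form; the Segue 1 numbers are those of Table~\ref{tab:nfw}. The full $J$-factor integration additionally requires the distance to Segue 1 and unit conversions, and is omitted here.

```python
# Parameters from Table 1 (tab:nfw) for Segue 1
SEGUE1 = dict(rho_s=5.1e-3, r_s=2.2e4, alpha=1.48, beta=8.04, gamma=0.83)

def rho_gnfw(r, rho_s, r_s, alpha, beta, gamma):
    """Generalized NFW density (Eq. dm_profile); r and r_s in pc,
    rho_s in Msun/pc^3."""
    x = r / r_s
    return rho_s / (x ** gamma * (1.0 + x ** alpha) ** ((beta - gamma) / alpha))

def rho_nfw(r, rho_s, r_s):
    """Original NFW profile, i.e. the (alpha, beta, gamma) = (1, 3, 1) case."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)
```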
Fig.~\ref{fig:ratio} shows the expected number of $\gamma$-ray photons under the conditions stated below (left panel) and the ratio of observable energy flux to total energy flux (right panel) for the nine annihilation channels. For the expected counts distribution, we assume that the effective area is $10^{10}$ cm$^2$, the exposure time is 50 hours, the $J$-factor is 10$^{18}$ GeV$^2$/cm$^5 \,\cdot\,$sr, and the DM cross section is 10$^{-23}$ cm$^3$/s.
This result implies that the current and future observatories, whose sensitive energy ranges extend to 100 TeV, can observe a large portion of the produced $\gamma$-rays and/or energy flux from the UHDM annihilation, up to $M_{\chi}$ of a few PeV.
For the $\gamma \gamma$ channel, the majority of the energy remains in the sharp spectral feature at $E_\gamma \sim M_{\chi}$, and so the energy flux ratio sharply drops once the mass is above 100 TeV and the continuum component becomes dominant. This sharp decrease is not clearly visible in the expected count level because the emission at $E_\gamma \sim M_{\chi}$ produces only about 10\% of the total counts in the high-mass regime.
\begin{table}[h!]
\centering
\begin{tabular}{c c c c c c c c}
\hline\hline
$\rho_s$ & $r_s$ & $\alpha$ & $\beta$ & $\gamma$ & $\theta_{\rm max}$ & $J(\theta_{\rm max})$ \\
$[$ \(M_\odot\)$/{\rm pc}^3$ ] & [ pc ] & & & & [ deg ] & [ GeV$^2$/cm$^5 \,\cdot\,$sr ] \\ \hline
$5.1\times10^{-3}$& $2.2\times10^4$ & 1.48 & 8.04 & 0.83 & 0.35 & $2.5\times10^{19}$\\
\hline\hline
\end{tabular}
\caption{The selected parameter set of the generalized NFW profile for Segue 1. The maximum angular distance, $\theta_{\rm max}$, is given by the location of the furthest member star, which is an estimate of the size of Segue 1.}\label{tab:nfw}
\end{table}
\vspace{0.1in}
\subsection{Projected sensitivity curves}\label{sec:excess}
To explore the feasibility of detection, we compare expected $\gamma$-ray counts from UHDM self-annihilation to background counts. The number of expected signal counts ($N_s$) is obtained by forward-folding Eq.~\eqref{eq:dm_flux} with IRFs,
\begin{equation}\label{eq:dm_signal}
N_s = \int d\Omega\, dE'\, dE\, \frac{dF(E', \hat{n})}{dE'\, d\Omega} R(E, \Omega|E', \Omega'),
\end{equation}
where unprimed and primed quantities represent observed (strictly speaking, reconstructed) and true quantities, respectively. The function $R(E, \Omega|E', \Omega')$ refers to an IRF consisting of three sub-functions: effective area, energy bias, and point spread function. Assuming that the number of ON region events is $N_{\rm on} = N_s+\alpha N_{\rm off}$, we calculate the significance of the UHDM signal by using the so-called Li \& Ma significance \citep[$\mathcal{S}$;][]{Li1983},
\begin{equation}
\mathcal{S} = \sqrt{2} \left\{ N_{\rm on} \ln \left[ \frac{1+\alpha}{\alpha} \left( \frac{N_{\rm on}}{N_{\rm on}+N_{\rm off}} \right) \right] + N_{\rm off} \ln \left[ (1+\alpha) \left( \frac{N_{\rm off}}{N_{\rm on}+N_{\rm off}} \right) \right] \right\}^{1/2}.
\end{equation}
Finally, for each annihilation channel, we find a set of values of $M_{\chi}$ and $\langle\sigma v\rangle$ for which $\mathcal{S}$ = 5\,$\sigma$.
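The significance formula above can be transcribed directly; the signed convention for deficits ($N_{\rm on} < \alpha N_{\rm off}$) in the sketch below is a convenience addition, not part of the formula.

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983) significance of the ON-region excess over background.
    Returns a negative value for a deficit (a convenience convention)."""
    frac_on = n_on / (n_on + n_off)
    frac_off = n_off / (n_on + n_off)
    s2 = 2.0 * (n_on * math.log((1.0 + alpha) / alpha * frac_on)
                + n_off * math.log((1.0 + alpha) * frac_off))
    sign = 1.0 if n_on >= alpha * n_off else -1.0
    return sign * math.sqrt(max(s2, 0.0))
```

For the HAWC-like configuration above ($\alpha \approx 0.004$), an ON region containing exactly $\alpha N_{\rm off}$ counts gives $\mathcal{S}=0$ by construction.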
\subsection{Expected upper limit curves}\label{sec:uls}
To estimate an UL on the UHDM annihilation cross section for a given $M_\chi$, we perform a maximum likelihood estimation (MLE). Since we cannot access the energy distribution of background events for the VERITAS-like instrument, we use a simple likelihood analysis based on the total $N_{\rm on}$ and $N_{\rm off}$ counts, $\mathcal{L}(\langle\sigma v\rangle; b|D)$, constructed from two Poisson distributions,
\begin{equation}
\begin{aligned}
\mathcal{L} &= \mathcal{P}_{\rm pois} \left( N_s + \alpha b; N_{\rm on} \right) \times \mathcal{P}_{\rm pois}(b; N_{\rm off})\\
&= \frac{ \left( N_s + \alpha b \right) ^{N_{\rm on} } e^{-(N_s+\alpha b)}}{N_{\rm on}!}\frac{b^{N_{\rm off}}e^{-b}}{N_{\rm off}!},
\end{aligned}
\end{equation}
where the nuisance parameter $b$ represents the expected background rate. This likelihood function is expected to be less sensitive compared to a full likelihood function incorporating event-wise energy information, especially at high masses, as it does not utilize any features present in the DM spectrum; see \cite{Aleksic2012} for full discussion of this hindrance. For CTA and the HAWC-like instrument, we perform a binned likelihood analysis,
\begin{equation}
\mathcal{L} = \prod_i \frac{ \left( N_{s, i} + \alpha b_i \right) ^{N_{{\rm on}, i} } e^{-(N_{s, i}+\alpha b_i)}}{N_{{\rm on}, i}!}\frac{b_i^{N_{{\rm off}, i}}e^{-b_i}}{N_{{\rm off}, i}!}.
\end{equation}
We calculate an expected UL with the assumption that an ON region does not contain any signal from UHDM self-annihilation but only Poisson fluctuation around $\alpha \times N_{\rm off}$; i.e., we can randomly sample $N_{\rm on}$ from the Poisson distribution of $\alpha N_{\rm off}$. For the binned likelihood analysis, we can apply the Poisson fluctuation to each background bin to get the binned ON-region data. With the synthesized ON-region data, we perform MLE analysis and calculate an UL on the DM cross section for a given $M_{\chi}$. Throughout this paper, UL refers to the one-sided 95\% confidence interval, which is obtained from the profile likelihood ($\Delta \ln\mathcal{L} = 1.35$). We repeat the process of calculating an expected limit to get the median or the containment band for the 95\% UL.
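The procedure above can be sketched at the counts level as follows. This is a simplification relative to our actual pipeline: the conversion from signal counts to $\langle\sigma v\rangle$ (forward folding of Eq.~\eqref{eq:dm_flux} with the IRFs) is omitted, the background is treated as a single bin, and a simple Knuth-style Poisson sampler is assumed.

```python
import math
import random

def _bhat(s, n_on, n_off, alpha):
    """Conditional MLE of the background nuisance b for a fixed signal s
    (positive root of the quadratic obtained from d lnL / db = 0)."""
    if s <= 0.0:
        return (n_on + n_off) / (1.0 + alpha)
    a = alpha * (1.0 + alpha)
    b = (1.0 + alpha) * s - alpha * (n_on + n_off)
    return (-b + math.sqrt(b * b + 4.0 * a * n_off * s)) / (2.0 * a)

def profile_lnl(s, n_on, n_off, alpha):
    """Profile log-likelihood (factorial constants dropped)."""
    bh = _bhat(s, n_on, n_off, alpha)
    mu_on = s + alpha * bh
    lnl = -mu_on - bh
    if n_on > 0:
        lnl += n_on * math.log(mu_on)
    if n_off > 0:
        lnl += n_off * math.log(bh)
    return lnl

def upper_limit_counts(n_on, n_off, alpha, delta=1.35):
    """One-sided 95% CL upper limit on the signal counts s
    (Delta lnL = 1.35 on the profile likelihood, restricted to s >= 0)."""
    s_hat = max(0.0, n_on - alpha * n_off)       # unconditional MLE of s
    target = profile_lnl(s_hat, n_on, n_off, alpha) - delta
    lo, hi = s_hat, s_hat + 10.0 * math.sqrt(n_on + 1.0) + 10.0
    while profile_lnl(hi, n_on, n_off, alpha) > target:
        hi *= 2.0
    for _ in range(100):                          # bisection on the falling branch
        mid = 0.5 * (lo + hi)
        if profile_lnl(mid, n_on, n_off, alpha) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def expected_ul(n_off, alpha, n_real=100, seed=1):
    """Median expected UL: N_on is Poisson-fluctuated around alpha*N_off,
    i.e. no injected signal (Knuth sampler, fine for modest rates)."""
    rng = random.Random(seed)
    def poisson(lam):
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    uls = sorted(upper_limit_counts(poisson(alpha * n_off), n_off, alpha)
                 for _ in range(n_real))
    return uls[n_real // 2]
```

Converting the resulting counts limit into a limit on $\langle\sigma v\rangle$ would divide by the IRF-folded signal acceptance, which is not modeled here.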
\vspace{0.3in}
\section{Results} \label{sec:result}
\begin{figure*}[t!]
\centering
\subfigure{\includegraphics[width=0.32\linewidth]{./figures/Discovery-Veritas.pdf}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figures/Discovery-CTA.pdf}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figures/Discovery-HAWC.pdf}}
\caption{Sensitivity curves for the nine UHDM annihilation channels for a VERITAS-like instrument (50 hrs; left panel), CTA-North (50 hrs; middle panel), and a HAWC-like instrument (507 days; right panel). Each curve corresponds to a set of parameters ($M_{\chi}$ and $\langle\sigma v\rangle$), producing a 5\,$\sigma$ signal excess (Sec.~\ref{sec:excess}). Line styles are as in Fig.~\ref{fig:ratio}.}
\label{fig:sensitivity}
\end{figure*}
Here, we present two sets of analysis results: sensitivity curves and expected ULs, as functions of the UHDM particle mass. Since above a few tens of PeV the energy flux ratio for all annihilation channels is less than 10\% (Fig.~\ref{fig:ratio}), we perform the analyses for UHDM masses from 30 TeV up to 30 PeV. Note that all of the following results are based on assumed exposure times of 50 hours for the VERITAS-like instrument and CTA-North, and 507 days for the HAWC-like instrument.
Figure~\ref{fig:sensitivity} shows the sensitivity curves for nine UHDM annihilation channels ($e^{+}e^{-}$, $\mu^{+}\mu^{-}$, $\tau^{+}\tau^{-}$, $t\bar{t}$, $b\bar{b}$, $W^{+}W^{-}$, $ZZ$, $\gamma\gamma$\footnote{Note that for the $\gamma\gamma$ channel, we use a different mass binning so that the lower bound of the sensitivity and upper limit curves differs from that of the other channels. This choice is based on the fact that the delta component in the $\gamma\gamma$ annihilation can be fully addressed only when the mass binning matches the binning of the energy bias matrix ($M_\chi = E_\gamma$).}, and $\nu_e \bar{\nu}_e$) with VERITAS-like (50 hrs; left panel), CTA-North (50 hrs; middle panel), and HAWC-like (507 days; right panel) instruments. Considering the annihilation of an UHDM particle with $M_{\chi}$ of 1 PeV via the $\tau^{+}\tau^{-}$ channel, a HAWC-like instrument is likely to reach $\mathcal{S}$ of 5\,$\sigma$ with the smallest cross section; specifically, a VERITAS-like instrument is expected to detect UHDM for a cross section of $\sim 5\times10^{-19}~{\rm cm}^3/{\rm s}$, CTA-North for $\sim 4\times10^{-19}~{\rm cm}^3/{\rm s}$, and a HAWC-like instrument for $\sim 1\times10^{-19}~{\rm cm}^3/{\rm s}$. However, this sensitivity depends on the annihilation channel and the UHDM mass, not to mention the exposure time. For example, for $M_{\chi}$ of 100 TeV, CTA-North shows, in general, a better sensitivity compared to the other instruments. For the $\gamma \gamma$ channel, a discontinuity in the sensitivity lines can be seen because, as explained earlier, the line-like contribution ($E_\gamma \sim M_\chi$) falls outside the sensitive energy range.
Next, we estimate the ULs on the UHDM annihilation cross section as a function of UHDM particle mass for the same annihilation channels for the three instruments (Fig.~\ref{fig:uls}). The curves represent the median value from 100 realizations generated at each mass. With the assumed observation conditions (e.g., livetime), CTA-North shows the most constraining ULs at lower masses ($M_{\chi} < 1$ PeV), whereas a HAWC-like instrument provides more stringent ULs at higher masses. Note that the UL on the DM cross section is expected to decrease as we increase the exposure time, $\langle\sigma v\rangle_{\rm UL} \propto 1/\sqrt{t}$. As expected from the relative sensitivity between VERITAS and CTA-North, the UL curves from CTA-North are about 10 times lower than those from a VERITAS-like instrument.
In the case of the $\gamma\gamma$ annihilation channel, a discontinuity in the UL curve is again observed at 100 TeV, most strongly for CTA-North. In contrast to the VERITAS-like instrument, CTA-North can perform the full binned likelihood analysis by comparing the signal and background energy distributions, which lowers the UL curve (see App.~\ref{sec:check}). Note that in the case of the $\gamma\gamma$ annihilation channel, the two distributions differ clearly compared to those of other channels. In the case of the HAWC-like instrument, the energy dispersion matrix for the highest energy bin is relatively broad, which smooths out the discontinuity.
\begin{figure*}[t!]
\centering
\subfigure{\includegraphics[width=0.32\linewidth]{./figures/UL-VERITAS.pdf}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figures/UL-CTA.pdf}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figures/UL-HAWC.pdf}}
\caption{Expected 95\,\% UL curves of UHDM cross section for the nine UHDM annihilation channels, obtained for a VERITAS-like instrument (50 hrs; left panel), CTA-North (50 hrs; middle panel), and a HAWC-like instrument (507 days; right panel). An expected UL is the median of 100 realizations obtained from the profile likelihood, assuming no signal excess in an ON region (Sec.~\ref{sec:uls}). Again, the line styles follow Fig.~\ref{fig:ratio}.}
\label{fig:uls}
\end{figure*}
\vspace{0.3in}
\section{Discussion of statistical and systematic uncertainties} \label{sec:discussion}
Here we briefly discuss the impact of statistical and systematic uncertainties on the presented UL curves. For these studies, we consider a single annihilation channel ($t\bar{t}$) for simplicity, although the results are representative of what we expect for the additional channels.
\begin{figure}[t!]
\centering
\subfigure{\includegraphics[width=0.45\linewidth]{Stat-Err.pdf}}\hspace{0.5cm}
\subfigure{\includegraphics[width=0.45\linewidth]{Sys-Err.pdf}}
\caption{{\it Left:} Statistical uncertainty on the expected 95\% limits. Each uncertainty band is obtained from 10$^{4}$ realizations for the $t\bar{t}$ annihilation channel.
%
{\it Right:} Systematic uncertainty on the same expected limits, resulting from uncertainties in the $J$-factor estimation. Each uncertainty band is obtained from 10$^{4}$ realizations for the $t\bar{t}$ annihilation channel.
%
In both panels, the shaded regions show the 68\% containment, and the dashed lines the 95\% containment.}
\label{fig:statsys_err}
\end{figure}
Due to the Poisson fluctuation in the observed counts, statistical uncertainty is inevitable. For this study, we compute the 68\% containment band of expected UL curves for a large number of MC realizations (10,000), using the method described in Sec.~\ref{sec:uls}.
Figure~\ref{fig:statsys_err} shows the statistical uncertainty band for 68\% (shaded region) and 95\% (dashed lines) containment. This figure implies that the Poisson fluctuation can result in 45--55\% statistical uncertainty (at the 1$\sigma$ level) across all masses for the three instruments: VERITAS-like ($\sim$45\%), CTA-North ($\sim$53\%), and HAWC-like ($\sim$54\%).
A major systematic uncertainty, beyond that inherent in IRFs, is the present uncertainty in the DM density profile assumed for Segue 1.
A DM density profile estimated from insufficient and possibly inaccurate kinematic observations will inevitably have a large uncertainty.
Assumptions and approximations made in the modeling---for instance, the assumption of an NFW profile with exact spherical symmetry---can also lead to systematic uncertainties.
In addition, the stellar sample selection when fitting the DM density profile affects the $J$-factor significantly, such that any ambiguity in the sample selection, possibly due to contamination from foreground stars or stellar streams, can overestimate the $J$-factor.
The magnitudes of the systematic uncertainties are different from dSph to dSph, and depend on the definition of the DM density profile. For further discussion on this uncertainty, see \cite{Bonnivard2015a, Bonnivard2015b}.
As mentioned earlier, \cite{GS2015} provide more than 6000 viable parameter sets for Segue 1. In this work, we use these parameter sets to estimate the systematic uncertainty on an expected UL curve due to uncertainty on the $J$-profile: we compute $10^{4}$ expected UL curves by randomly sampling the parameter set. Note that in this study, we do not include the Poisson fluctuation of the simulated ON region counts; i.e., $N_{{\rm on}, i}$ is equal to $\alpha N_{{\rm off}, i}$. Finally, we take ULs corresponding to the 68\% and 95\% containment for each mass (Fig.~\ref{fig:statsys_err}). This figure implies that, for Segue 1, the $J$-factor can increase or decrease an UL curve by a factor of 2 (1\,$\sigma$ level) across all masses, regardless of instrumental properties, at a level comparable to the statistical uncertainties seen in Fig.~\ref{fig:statsys_err}. Note that \cite{Bonnivard2016} claimed that the $J$-factor may be overestimated by about two orders of magnitude due to the stellar sample selection bias. However, the accurate prediction of the Segue 1 $J$-profile is beyond the scope of this paper.
\vspace{0.3in}
\section{Summary and Outlook}\label{sec:summary}
In this work, we have explored the potential of current and future $\gamma$-ray observatories to extend the search for DM beyond the unitarity bound.
Our results allow one to determine whether discovery of an UHDM candidate of a given mass and annihilation cross section is within reach. Furthermore, we provide an estimate of the constraints that can be derived on the UHDM annihilation cross section by current and future $\gamma$-ray observatories, assuming a non-detection.
Returning to Fig.~\ref{fig:lim}, we can place our obtained limits in the context of theoretical constraints on the allowed annihilation cross section of UHDM. All instruments considered can probe realistic cross sections for composite UHDM particles whose annihilation respects partial-wave unitarity. For the given exposure times (50 hours for CTA-North and a VERITAS-like instrument, and 507 days for a HAWC-like instrument), CTA-North is projected to provide the most constraining limits, probing scales down to $R = (10~{\rm GeV})^{-1} $ for UHDM with a mass around 300~TeV. At higher masses, above 1~PeV, HAWC-like limits become the most constraining, reaching scales around $R = (1~{\rm GeV})^{-1}$ at 10~PeV. The VERITAS-like limits, while less constraining, are worse than those of CTA-North or a HAWC-like instrument by less than or equal to an order of magnitude for the entire mass range (with a slight advantage over the HAWC-like instrument at masses below 100~TeV).
This work draws attention to the exploration of DM beyond the conventional parameter range. The results we have derived are indicative, using reasonable assumptions about the data and IRFs for current-generation instruments, as well as realistic exposure times for current and future instruments. We hope that this work illustrates the interest and feasibility of searches for UHDM with the current-generation $\gamma$-ray instruments, and the value of considering such searches for future observatories. The phase space that can be probed, in terms of DM particle mass and annihilation cross section, is a relevant one for models predicting composite UHDM. This parameter space is currently unconstrained, but could be probed with archival datasets from current-generation $\gamma$-ray instruments, including HAWC, VERITAS, and other IACTs.
\vspace{0.1in}
\begin{acknowledgments}
{\it Acknowledgments.}
Our work benefited from discussions with Michael Geller, Diego Redigolo, and Juri Smirnov.
We would like to thank Alex Geringer-Sameth, Savvas M. Koushiappas, and Matthew Walker for providing the parameter sets for the $J$-factors.
This research has made use of the CTA instrument response functions provided by the CTA Consortium and Observatory; see \url{https://www.cta-observatory.org/science/cta-performance/} (version prod5 v0.1; [citation]) for more details.
D. Tak and E. Pueschel acknowledge the Young Investigators Program of the Helmholtz Association, and additionally acknowledge support from DESY, a member of the Helmholtz Association HGF. M. Baumgart is supported by the DOE (HEP) Award DE-SC0019470.
\end{acknowledgments}
\bibliographystyle{aasjournal}
\subsection{Binary Rewriting}
\label{sec:binary-rewriting}
\textit{SnapFuzz}\xspace implements a load-time binary rewriting subsystem that dynamically
intercepts both the loader's and the target's functionalities in order to monitor and modify all external behaviours of the target application.
Applications interact with the external world via \emph{system calls}, such as \code{read()} and \code{write()} in Linux, which provide various OS services.
As an optimization, Linux provides some services via \emph{vDSO (virtual Dynamic Shared Object)} calls. The vDSO is essentially a small shared library injected by the kernel into every application in order to provide fast access to some services. For instance, \emph{gettimeofday()} typically uses a vDSO call on Linux.
The main goal of the binary rewriting component of \textit{SnapFuzz}\xspace is to intercept all the system calls and vDSO calls issued by the application being fuzzed, and redirect them to a \emph{system call handler}. The rest of this subsection presents how this interception is realised and can be skipped by readers less interested in the technical details involved.
Binary rewriting in \textit{SnapFuzz}\xspace employs two major components:
1)~the rewriter module, which scans the code for specific functions, vDSO and system call assembly opcodes, and redirects them to the plugin module, and
2)~the plugin module where \textit{SnapFuzz}\xspace resides.
\subsubsection{Rewriter}
\label{subsec:rewriter}
\textit{SnapFuzz}\xspace is an ordinary dynamically linked executable that is provided with a path to a target application together with the arguments to invoke it with.
When \textit{SnapFuzz}\xspace is launched, the expected sequence of events of a standard Linux operating system takes place, with the first step being the dynamic loader loading \textit{SnapFuzz}\xspace and its dependencies into memory.
When \textit{SnapFuzz}\xspace starts executing, it inspects the target's ELF binary to obtain information about its interpreter, which in our implementation is always the standard Linux \textit{ld} loader.
\textit{SnapFuzz}\xspace then scans the loader code for system call assembly opcodes and some special functions in order to instruct the loader to load the \textit{SnapFuzz}\xspace plugin.
In particular, the rewriter:
(1)~intercepts the dynamic scanning of the loader in order to append the \textit{SnapFuzz}\xspace plugin shared object as a dependency, and
(2)~intercepts the initialisation order of the shared libraries in order to prepend the \textit{SnapFuzz}\xspace plugin initialisation code (in the \textit{.preinit\_array}).
After the \textit{SnapFuzz}\xspace rewriter finishes rewriting the loader, execution is passed to the rewritten loader in order to load the target application and its library dependencies.
At this stage, all system calls and some specific loader functions are monitored.
As the normal execution of the loader progresses, \textit{SnapFuzz}\xspace intercepts its \code{mmap} system calls used to load libraries into memory, and scans these libraries in order to recursively rewrite their system calls and redirect them to the \textit{SnapFuzz}\xspace plugin.
The \textit{SnapFuzz}\xspace rewriter is based on the open-source load-time binary rewriter SaBRe~\cite{sabre}.
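Conceptually, the scan looks for occurrences of the two-byte x86-64 \code{syscall} instruction encoding (\code{0f 05}); the sketch below illustrates this idea on a raw byte buffer. Note that the real rewriter (SaBRe) disassembles the code rather than pattern-matching bytes, since the same two bytes can also appear inside the operands of unrelated instructions.

```python
SYSCALL_OPCODE = b"\x0f\x05"  # x86-64 `syscall` instruction encoding

def find_syscall_sites(code: bytes):
    """Return the byte offsets of every `syscall` opcode pattern in a code
    blob. Illustrative only: a production rewriter disassembles the code to
    reject false positives and to plan the detour to the handler."""
    sites, start = [], 0
    while True:
        idx = code.find(SYSCALL_OPCODE, start)
        if idx < 0:
            return sites
        sites.append(idx)
        start = idx + 1
```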
\subsubsection{Plugin}
The \textit{SnapFuzz}\xspace plugin implements a simple API that exposes three important functions:
(1)~an initialisation function that will be the first code to run after loading is done (this function is prepended dynamically to the \textit{.preinit\_array} of our target's ELF binary),
(2)~a function to which all system calls will be redirected, which we call the \textit{system call handler}, and
(3)~a similar function to which all vDSO calls will be redirected, the \textit{vDSO handler}.
After the loader completes, execution is passed to the target application, which will start by executing \textit{SnapFuzz}\xspace's initialisation function.
Per the ELF specification, execution starts from the function pointers of \textit{.preinit\_array}.
This is a common ELF feature used by LLVM sanitizers to initialise various internal data structures early, such as the shadow memory~\cite{ASan,MSan}.
\textit{SnapFuzz}\xspace uses the same mechanism to initialise its subsystems, such as its in-memory filesystem, before execution starts.
After the initialisation phase of the plugin, control is passed back to the target application and normal execution is resumed.
At this stage, the \textit{SnapFuzz}\xspace plugin is only executed when the target is about to issue a system call or a vDSO call.
When this happens, the plugin checks if the call should be intercepted, and if so, it further redirects it to the appropriate handlers, after which control is returned back to the target.
The \textit{SnapFuzz}\xspace plugin is also responsible for handling and guarding against recursive system calls and vDSO calls.
For example, the plugin itself is allowed to issue system calls through the use of the target's \textit{libc}.
To achieve this, \textit{SnapFuzz}\xspace's plugin guards every jump from the target application to the plugin with a thread-local flag.
\subsection{In-memory filesystem}
\label{sec:in-mem-fs}
As mentioned before, \textit{SnapFuzz}\xspace redirects all file operations to use a custom in-memory filesystem.
This reduces the overhead of reading and writing from a storage medium, and eliminates the need for manually-written clean-up scripts to be run after each fuzzing operation, as the clean-up can be done automatically by simply discarding the in-memory state.
\textit{SnapFuzz}\xspace implements a lightweight in-memory filesystem, which uses two distinct mechanisms, one for files and the other for directories.
For files, \textit{SnapFuzz}\xspace's in-memory filesystem uses the recent \code{memfd\_create()} system call, introduced in Linux in 2015~\cite{memfd_create}.
This system call creates an anonymous file and returns a file descriptor that refers to it.
The file behaves like a regular file, but lives in memory.
Under this scheme, \textit{SnapFuzz}\xspace only needs to specially handle system calls that initiate interactions with a file through a pathname (like the \code{open} and \code{mmap} system calls).
All other system calls that handle file descriptors are compatible by default with the file descriptors returned by \code{memfd\_create}.
When a target application opens a file, the default behavior of \textit{SnapFuzz}\xspace is to
check if this file is a regular file (e.g. device files are ignored), and if
so, create an in-memory file descriptor and copy the whole contents
of the file in the memory address space of the target application through the
memory file descriptor.
\textit{SnapFuzz}\xspace keeps track of pathnames in order to avoid reloading the same file twice. This is not only a performance optimization but also a correctness requirement, as the application might have changed the contents of the file in memory.
Implementing an in-memory filesystem from scratch, with anonymous mappings created through \textit{mmap} and all I/O system calls rewritten into function calls that operate on the in-memory files, would be even more efficient than the current \textit{SnapFuzz}\xspace implementation, which still issues regular system calls and thus pays the associated context-switch overhead.
But developing such a subsystem that is compatible with the large diversity of system call options available on Linux is a laborious and difficult task, which is why we have opted for a \code{memfd\_create}-based approach.
For directories, \textit{SnapFuzz}\xspace takes advantage of the \textit{Libsqlfs} library~\cite{libsqlfs}, which implements a POSIX-style file system on top of the SQLite database and allows applications to have access to a full read/write filesystem with its own file and directory hierarchy.
\textit{Libsqlfs} simplifies the emulation of a real filesystem with directories and permissions.
\textit{SnapFuzz}\xspace uses \textit{Libsqlfs} for directories only, as we observed better performance for files via \textit{memfd\_create}.
With the in-memory filesystem in-place, \textit{SnapFuzz}\xspace can already implement two important optimizations: a smart deferred forkserver (\S\ref{sec:deferred-forkserver}) and an efficient state reset (\S\ref{sec:state-reset}).
\subsection{Smart deferred forkserver}
\label{sec:deferred-forkserver}
As discussed in \S\ref{sec:afl}, the deferred forkserver can offer great performance benefits by avoiding initialisation overheads in the target.
Such overheads include loading the shared libraries of a binary, parsing configuration files and cryptographic initialisation routines.
Unfortunately, for the deferred forkserver to be used, the user needs to manually modify the source code of the target.
Furthermore, the deferred forkserver cannot be used after the target has created threads, child processes, temporary files, network sockets, offset-sensitive file descriptors, or shared-state resources, so the user has to carefully decide where to place it: do it too early and optimisation opportunities are missed, do it too late and correctness is affected.
\textit{SnapFuzz}\xspace makes two important improvements to the deferred forkserver: first, it makes it possible to defer it much further than usually possible with \textit{AFL}\xspace's architecture, and second, it does so automatically, without any need for manual source modifications. This is made possible through \textit{SnapFuzz}\xspace's binary rewriting mechanism together with its in-memory filesystem and custom network communication mechanism.
There are two reasons for which \textit{SnapFuzz}\xspace can place the forkserver after many system calls which normally would have caused problems: (1)~its use of an in-memory filesystem in the case of file operations, as it transforms external side effects into in-memory changes;
and (2)~its custom network communication mechanism which allows it to skip network setup calls such as \code{socket} and \code{accept} (see \S\ref{sec:domain-sockets}).
Via binary rewriting, \textit{SnapFuzz}\xspace intercepts each system call, and places the forkserver just before it encounters either a system call that spawns new threads (\code{clone}, \code{fork}), or one used to receive input from a client.
The reason \textit{SnapFuzz}\xspace still has to stop before the application spawns new threads is that the forkserver relies on \code{fork} to spawn new instances to be fuzzed, and \code{fork} cannot reconstruct existing threads---in Linux, forking a multi-threaded application creates a process with a single thread~\cite{fork}.
As a possible mitigation, we tried to combine \textit{SnapFuzz}\xspace and the \textit{pthsem / GNU pth} library~\cite{pthsem}---a green threading library that provides non-preemptive priority-based scheduling, with the green threads executing inside an event-driven framework---but the performance overhead was too high.
In particular, we used \textit{pthsem} with LightFTP\xspace, as this application has to execute two \code{clone} system calls before it accepts input.
With \textit{pthsem} support, \textit{SnapFuzz}\xspace's forkserver can skip these two \code{clone} calls, as well as 37 additional system calls, as now \textit{SnapFuzz}\xspace can place the forkserver just before LightFTP\xspace is ready to accept input.
However, despite this gain, the overall performance was 10\% lower than in the version of \textit{SnapFuzz}\xspace without \textit{pthsem}, due to the overhead of this library.
Ideally, \textit{SnapFuzz}\xspace should implement a lightweight thread reconstruction mechanism to recreate all dead threads, but this is left as future work.
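The single-surviving-thread behaviour of \code{fork} that motivates this restriction can be demonstrated with a short experiment (CPython on Linux; illustrative only, not \textit{SnapFuzz}\xspace code):

```python
import os
import threading

def forked_thread_count():
    """Fork while a background thread is alive and report the thread counts
    seen by parent and child; on Linux, only the forking thread survives."""
    stop = threading.Event()
    worker = threading.Thread(target=stop.wait, daemon=True)
    worker.start()
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: the worker thread was not duplicated by fork().
        os.write(w, bytes([threading.active_count()]))
        os._exit(0)
    os.close(w)
    child_threads = os.read(r, 1)[0]
    os.close(r)
    os.waitpid(pid, 0)
    parent_threads = threading.active_count()
    stop.set()
    worker.join()
    return parent_threads, child_threads
```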
\subsection{Efficient state reset}
\label{sec:state-reset}
To use \textit{AFLNet}\xspace, users typically have to write a clean-up script to reset the application state after each iteration.
For instance, LightFTP\xspace under \textit{AFLNet}\xspace requires a Bash script to be invoked in every iteration in order to clean up any directories or files that have been created in the previous iteration.
Under \textit{SnapFuzz}\xspace, there is no need for such a cleanup script, which simplifies the test harness construction, and improves performance by avoiding the invocation of the cleanup script.
In the simplest case where \textit{AFL}\xspace checkpoints the target application before \code{main}, no filesystem modifications have happened at the point where the forkserver is placed.
So when a fuzzing iteration has finished, the target application process just exits and the OS discards its memory, which includes any in-memory filesystem modifications made during fuzzing.
Then, when the forkserver spawns a new instance of the target application, the filesystem is brought back to a state where all initial files of the actual filesystem are unmodified.
The situation is more complicated when the deferred forkserver is placed after the target application has already created some files.
When the forkserver creates a new instance to be fuzzed, the Linux kernel shares the memory pages associated with the newly-created in-memory files between the new instance and the forkserver.
This is problematic for \textit{SnapFuzz}\xspace, as now any modifications to the in-memory files by the fuzzed application instance will persist even after the instance finishes execution.
So in the next iteration, when the forkserver creates a new instance, this new instance will inherit those modifications too.
\textit{SnapFuzz}\xspace solves this issue as follows.
First, note that \textit{SnapFuzz}\xspace knows whether the application is executing before or after the forkserver's checkpoint, as it intercepts all system calls, including \code{fork}.
While the target application executes before the forkserver's checkpoint, \textit{SnapFuzz}\xspace allows all file interactions to be handled normally.
When a new instance is requested from the forkserver, \textit{SnapFuzz}\xspace recreates in the new instance all in-memory files registered in the in-memory filesystem and copies all their contents by using the efficient \code{sendfile} system call once per in-memory file.
\subsection{UNIX domain sockets}
\label{sec:domain-sockets}
\textit{AFLNet}\xspace uses standard Internet sockets (TCP/IP and UDP/IP) to communicate with the target and send it fuzzed inputs.
The Internet socket stack includes functionality---such as calculating checksums of packets, inserting headers, routing---which is unnecessary when fuzzing applications on a single machine.
To eliminate this overhead, \textit{SnapFuzz}\xspace replaces Internet sockets with UNIX domain sockets.
More specifically, \textit{SnapFuzz}\xspace uses Sequenced Packets sockets (\code{SOCK\_SEQPACKET}).
This configuration offers performance benefits and also simplifies the implementation.
Sequenced Packets are quite similar to TCP, providing a sequenced, reliable, two-way connection-based data transmission path for datagrams.
The difference is that Sequenced Packets require the consumer (in our case the \textit{SnapFuzz}\xspace plugin running inside the target application) to read an entire packet with each input system call.
This atomicity of network communications simplifies corner cases where the target application might read only parts of the fuzzer's input due to scheduling or other delays.
By contrast, \textit{AFLNet}\xspace handles this issue by exposing manually defined knobs for introducing delays between network communications.
Our modified version of \textit{AFLNet}\xspace creates a socketpair of UNIX domain sockets with the Sequenced Packets type, and passes one end to the forkserver, which later passes it to the \textit{SnapFuzz}\xspace plugin.
The \textit{SnapFuzz}\xspace plugin initiates a handshake with the modified \textit{AFLNet}\xspace, after which \textit{AFLNet}\xspace is ready to submit generated inputs to the target or consume responses.
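The message-boundary guarantee described above can be illustrated with a short, self-contained Python sketch (illustrative only: \textit{SnapFuzz}\xspace operates at the system call level via binary rewriting, and the FTP-style payloads here are invented):

```python
import socket

# socketpair() of type SOCK_SEQPACKET, as described above: sequenced,
# reliable and connection-based, but with preserved message boundaries.
fuzzer_end, plugin_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)

# The fuzzer writes two fuzzed messages back to back...
fuzzer_end.send(b"USER anonymous\r\n")
fuzzer_end.send(b"PASS guest\r\n")

# ...and the target side reads exactly one packet per recv(), even with
# a buffer large enough to hold both: no short reads, no coalescing.
first = plugin_end.recv(4096)   # b"USER anonymous\r\n"
second = plugin_end.recv(4096)  # b"PASS guest\r\n"

fuzzer_end.close()
plugin_end.close()
```

With a TCP byte stream, the same two `send` calls could be delivered as one, two or more chunks, which is exactly the corner case that forces \textit{AFLNet}\xspace to rely on delays.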
Translating network communications from Internet sockets to UNIX domain sockets is not trivial, as \textit{SnapFuzz}\xspace needs to support the two main IP protocols, TCP and UDP, which establish network communication in slightly different ways.
In addition, \textit{SnapFuzz}\xspace also needs to support the different synchronous and asynchronous network communication system calls like \code{poll}, \code{epoll} and \code{select}.
For the TCP family, the \code{socket} system call creates a TCP/IP socket and returns a file descriptor which is then passed to \code{bind}, \code{listen} and finally to \code{accept}, before the system is ready to send or receive any data.
\textit{SnapFuzz}\xspace monitors this sequence of events on the target and when the \code{accept} system call is detected, it returns the UNIX domain socket file descriptor from the forkserver.
\textit{SnapFuzz}\xspace does not interfere with the \code{socket} system call and intentionally allows it to execute normally, in order to avoid complications with target applications that perform advanced configuration on the base socket.
This strategy is similar to the one used by the in-memory file system via the \code{memfd\_create} system call (\S\ref{sec:in-mem-fs}) in order to provide compatibility by default.
The UDP family is handled similarly, except that \textit{SnapFuzz}\xspace monitors for a \code{bind} system call instead of an \code{accept} before returning the forkserver's UNIX domain socket.
\textit{SnapFuzz}\xspace also needs to handle asynchronous network communications, because the base socket will never be notified of incoming connections.
\textit{SnapFuzz}\xspace intercepts all asynchronous system calls, and the first time the base socket is passed to one of them, such as \code{poll}, it reports the socket as ready.
This informs the target application that an incoming connection is waiting to be handled, prompting it to execute an \code{accept} system call.
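To make the control flow concrete, here is a deliberately simplified, hypothetical model of this interception logic in Python (the class, method names and file descriptor numbers are all invented for illustration; \textit{SnapFuzz}\xspace implements this inside its system call handler):

```python
# Hypothetical model of the handler: socket()/bind()/listen() run
# normally; accept() is answered with the pre-connected UNIX domain
# socket from the forkserver; poll() on the base socket always reports
# it as readable so the target proceeds to call accept().
class SocketInterceptor:
    def __init__(self, forkserver_fd):
        self.forkserver_fd = forkserver_fd
        self.notified = set()  # base sockets already reported as ready

    def on_poll(self, base_fd):
        # Pretend a connection is pending on the base socket; the real
        # connection already exists on the UNIX domain socket.
        self.notified.add(base_fd)
        return "POLLIN"

    def on_accept(self, base_fd):
        # Hand back the already-established UNIX domain socket instead
        # of waiting for a real TCP client.
        return self.forkserver_fd

interceptor = SocketInterceptor(forkserver_fd=7)  # fd 7: invented
event = interceptor.on_poll(base_fd=3)      # target sees a "pending" connection
client_fd = interceptor.on_accept(base_fd=3)
```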
In summary, \textit{SnapFuzz}\xspace's use of UNIX domain sockets provides two key advantages: it simplifies the construction of test harnesses, which no longer need to specify fragile delays between network communications; and it speeds up fuzzing by eliminating these delays and the unnecessary overhead of Internet sockets.
\subsection{Binary Rewriting}
\label{sec:binary-rewriting}
\textit{SnapFuzz}\xspace implements a load-time binary rewriting subsystem that dynamically
intercepts both the OS loader's and the target's functionalities in order to monitor and modify all external behaviours of the target application.
Applications interact with the external world via \emph{system calls}, such as \code{read()} and \code{write()} in Linux, which provide various OS services.
As an optimization, Linux provides some services via \emph{vDSO (virtual Dynamic Shared Object)} calls.
The vDSO is essentially a small shared library injected by the kernel into every application in order to provide fast access to some services; for instance, \code{gettimeofday()} typically uses a vDSO call on Linux.
The main goal of the binary rewriting component of \textit{SnapFuzz}\xspace is to intercept all the system calls and vDSO calls issued by the application being fuzzed, and redirect them to a \emph{system call handler}. \S\ref{sec:binary-rewriting-impl} presents the implementation details.
By intercepting the target application's interactions with its outside environment at this level of granularity, \textit{SnapFuzz}\xspace can significantly increase fuzzing throughput and eliminate the need for custom delays and scripts, as we discuss in the next subsections.
\subsection{\textit{SnapFuzz}\xspace Network Fuzzing Protocol: Eliminating Communication Delays}
\label{sec:snapfuzz-protocol}
\begin{figure}
\centering
\resizebox{1.00\columnwidth}{!}{
\includegraphics[trim=0 0 0 20, clip]{imgs/snapfuzz-prot.pdf}
}
\caption{Messages exchanged for each \texttt{recv} and \texttt{send}.}
\captionsetup{justification=centering}
\label{fig:sf-prot}
\end{figure}
Network applications often implement multistep protocols with multiple requests and replies per session.
One of \textit{AFLNet}\xspace's main contributions is to infer the network protocol starting from a set of recorded message exchanges.
However, \textit{AFLNet}\xspace cannot guarantee that during a certain fuzzing iteration the target will indeed respect the protocol.
Deviations might be possible for instance due to a partly-incorrect protocol being inferred, bugs in the target application, or most commonly due to the target not being ready to send or receive a certain message.
Therefore, \textit{AFLNet}\xspace performs several checks and adds several user-specified delays to ensure communication is in sync with the protocol.
These communication delays, which can significantly degrade the fuzzing throughput, are:
\begin{enumerate}[leftmargin=*]
\item A delay to allow the server to initialise before \textit{AFLNet}\xspace attempts to communicate.
\item A delay specifying how long to wait before concluding that no responses are forthcoming and instead try to send more information, and
\item A delay specifying how long to wait after each packet is sent or received.
\end{enumerate}
These delays are necessary, as otherwise the OS kernel will reject packets that come too fast while the target is not ready, and \textit{AFLNet}\xspace will desynchronize from its state machine.
But they cause a lot of time to be wasted, essentially because \textit{AFLNet}\xspace does not know whether the target is ready to send or receive information.
\textit{SnapFuzz}\xspace overcomes this challenge through a simple but effective network fuzzing protocol.
The protocol keeps track of the next action of the target, and notifies \textit{AFLNet}\xspace about it.
Figure~\ref{fig:sf-prot} shows the messages exchanged between \textit{SnapFuzz}\xspace and \textit{AFLNet}\xspace on each \code{recv} (for receiving data) and \code{send} (for sending data) system calls.
Essentially, to avoid the need for the communication delays discussed above, \textit{SnapFuzz}\xspace informs \textit{AFLNet}\xspace when the target is about to issue a \code{recv} or a \code{send}.
This is performed by introducing an additional \textit{control socket} (implemented via an efficient UNIX domain socket), which is used as a send-only channel from the \textit{SnapFuzz}\xspace plugin to \textit{AFLNet}\xspace.
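A minimal Python sketch of this idea follows (the one-byte \code{R}/\code{S} encoding is an assumption made for illustration; the actual \textit{SnapFuzz}\xspace wire format may differ):

```python
import socket

# The control socket: a send-only channel from the SnapFuzz plugin to
# AFLNet, telling the fuzzer which network direction the target will
# take next, so the fuzzer never needs timeout-based guessing.
aflnet_end, plugin_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)

def announce_next(syscall_name):
    # Called from the syscall handler just before recv/send executes.
    plugin_end.send(b"R" if syscall_name == "recv" else b"S")

# Target is about to block in recv(): the fuzzer learns it is safe to
# write the next fuzzed message immediately.
announce_next("recv")
first_hint = aflnet_end.recv(1)

# Target is about to send(): the fuzzer learns it should read a response.
announce_next("send")
second_hint = aflnet_end.recv(1)

aflnet_end.close()
plugin_end.close()
```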
The \textit{SnapFuzz}\xspace network fuzzing protocol additionally implements the following two optimisations:
\medskip \noindent
\textbf{UNIX Domain Sockets.} The standard Internet sockets (TCP/IP and UDP/IP) used by \textit{AFLNet}\xspace to communicate with the target and send it fuzzed inputs are unnecessarily slow.
As observed before~\cite{multifuzz}, replacing them with UNIX domain sockets can lead to significant performance speed-ups.
We discuss how this is achieved in \S\ref{sec:domain-sockets}.
\medskip \noindent
\textbf{Efficient Server Termination.}
Network servers usually run in a loop.
This loop is terminated either via a special protocol-specific keyword or an OS signal.
Since \textit{AFLNet}\xspace cannot guarantee that each fuzzing iteration will finish via a termination keyword, if the target does not terminate, it sends it a \code{SIGTERM} signal and waits for it to terminate.
Signal delivery is slow and also servers might take a long time to properly terminate execution.
In the context of fuzzing, proper termination is not so important, while fuzzing throughput is.
\textit{SnapFuzz}\xspace implements a simple mechanism to terminate the server: when it receives an empty string, it infers that the fuzzer has no more inputs to provide and the application is instantly killed.
This obviously has the downside that it could miss bugs in the termination routines, but these could be tested separately.
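The termination convention can be modelled in a few lines of Python, under the assumption that a zero-length packet is the ``no more inputs'' signal described above (the immediate kill itself is only modelled as a boolean decision here, rather than an actual exit):

```python
import socket

# Zero-length packet from the fuzzer = "no more inputs": the handler
# terminates the target instantly instead of running its shutdown path.
fuzzer_end, target_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)

def should_terminate(payload):
    # In the real system this decision would be followed by an
    # immediate exit; here we only model the decision.
    return len(payload) == 0

fuzzer_end.send(b"")         # zero-length SEQPACKET datagram
target_end.settimeout(5)     # avoid blocking forever in this sketch
msg = target_end.recv(4096)
terminate = should_terminate(msg)

fuzzer_end.close()
target_end.close()
```

Note that Sequenced Packets make this convention unambiguous: a zero-length datagram is a real, delivered packet, distinct from a closed connection in a TCP byte stream.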
In summary, the \textit{SnapFuzz}\xspace network fuzzing protocol improves fuzzing performance (significantly, as shown in the evaluation) and simplifies fuzzing harness construction by eliminating the need to manually specify three different communication delays.
\subsection{Efficient State Reset}
\label{sec:state-reset}
\textit{AFLNet}\xspace users typically have to write a clean-up script to reset the application state after each fuzzing iteration.
For instance, LightFTP\xspace under \textit{AFLNet}\xspace requires a script that cleans up any directories or files that have been created in the previous iteration.
Under \textit{SnapFuzz}\xspace, there is no need for such a clean-up script, which simplifies the test harness construction, and improves performance by avoiding the invocation of the clean-up script.
\textit{SnapFuzz}\xspace solves this challenge by employing an in-memory filesystem.
Using the in-memory filesystem \code{tmpfs} under UNIX is a well-known optimisation in the context of fuzzing.\footnote{\url{https://www.cipherdyne.org/blog/2014/12/ram-disks-and-saving-your-ssd-from-afl-fuzzing.html}}\textsuperscript{,}\footnote{\url{https://medium.com/@dhiraj_mishra/fuzzing-vim-53d7cf9b5561}}\textsuperscript{,}\footnote{\url{https://www.cis.upenn.edu/~sga001/classes/cis331f19/hws/hw1.pdf}}
\textit{SnapFuzz}\xspace uses an in-memory filesystem both for efficiency and for removing the need for clean-up scripts involving filesystem state.
However, we are not using \code{tmpfs}, but a custom in-memory filesystem that uses the \code{memfd\_create} system call for files and the \textit{Libsqlfs} library for directories (see \S\ref{sec:in-mem-fs} for details).
This allows us to quickly duplicate state after forking, as explained below.
In the simplest case where \textit{AFL}\xspace checkpoints the target application before \code{main}, no filesystem modifications have happened at the point where the forkserver is placed.
So when a fuzzing iteration has finished, the target application process just exits and the OS discards its memory, which includes any in-memory filesystem modifications made during fuzzing.
Then, when the forkserver spawns a new instance of the target application, the filesystem is brought back to a state where all initial files of the actual filesystem are unmodified.
The situation is more complicated when the deferred forkserver is placed after the target application has already created some files.
In our implementation, which is based on \code{memfd\_create}, when the forkserver creates a new instance to be fuzzed, the Linux kernel shares the memory pages associated with the newly-created in-memory files between the new instance and the forkserver.
Note that using \code{tmpfs} would not solve this issue---as far as we know, there is no way to duplicate a \code{tmpfs} filesystem in a copy-on-write way.
This sharing of pages between the new instance and the forkserver is problematic, as now any modifications to the in-memory files by the fuzzed application instance will persist even after the instance finishes execution.
So in the next iteration, when the forkserver creates a new instance, this new instance will inherit those modifications too.
\textit{SnapFuzz}\xspace solves this issue as follows.
First, note that \textit{SnapFuzz}\xspace knows whether the application is executing before or after the forkserver's checkpoint, as it intercepts all system calls, including \code{fork}.
While the target application executes before the forkserver's checkpoint, \textit{SnapFuzz}\xspace allows all file interactions to be handled normally.
When a new instance is requested from the forkserver, \textit{SnapFuzz}\xspace recreates in the new instance all in-memory files registered in the in-memory filesystem and copies all their contents by using the efficient \code{sendfile} system call once per in-memory file.
\subsection{Smart Deferred Forkserver}
\label{sec:deferred-forkserver}
As discussed in \S\ref{sec:afl}, the deferred forkserver can offer great performance benefits by avoiding initialisation overheads in the target.
Such overheads include loading the shared libraries used by the target, parsing configuration files and cryptographic initialisation routines.
Unfortunately, for the deferred forkserver to be used, the user needs to manually modify the source code of the target.
Furthermore, the deferred forkserver cannot be used after the target has created threads, child processes, temporary files, network sockets, offset-sensitive file descriptors, or shared-state resources, so the user has to carefully decide where to place it: do it too early and optimisation opportunities are missed, do it too late and correctness is affected.
\textit{SnapFuzz}\xspace makes two important improvements to the deferred forkserver: first, it makes it possible to defer it much further than usually possible with \textit{AFL}\xspace's architecture, and second, it does so automatically, without any need for manual source modifications.
The two components which enable \textit{SnapFuzz}\xspace to place the forkserver after many system calls which normally would have caused problems are:
(1)~its custom network fuzzing protocol which allows it to skip network setup calls such as \code{socket} and \code{accept} (\S\ref{sec:snapfuzz-protocol}) and
(2)~its in-memory filesystem, which transforms filesystem operations into in-memory changes (\S\ref{sec:state-reset}).
Via binary rewriting, \textit{SnapFuzz}\xspace intercepts each system call, and places the forkserver just before it encounters either a system call that spawns new threads (\code{clone}, \code{fork}), or one used to receive input from a client.
The reason \textit{SnapFuzz}\xspace still has to stop before the application spawns new threads is that the forkserver relies on \code{fork} to spawn new instances to be fuzzed, and \code{fork} cannot reconstruct existing threads---in Linux, forking a multi-threaded application creates a process with a single thread~\cite{fork}.
As a possible mitigation, we tried to combine \textit{SnapFuzz}\xspace and the \textit{pthsem / GNU pth} library~\cite{pthsem}---a green threading library that provides non-preemptive priority-based scheduling, with the green threads executing inside an event-driven framework---but the performance overhead was too high.
In particular, we used \textit{pthsem} with LightFTP\xspace, as this application has to execute two \code{clone} system calls before it accepts input.
With \textit{pthsem} support, \textit{SnapFuzz}\xspace's forkserver can skip these two \code{clone} calls, as well as 37 additional system calls, as now \textit{SnapFuzz}\xspace can place the forkserver just before LightFTP\xspace is ready to accept input.
However, despite this gain, the overall performance was 10\% lower than in the version of \textit{SnapFuzz}\xspace without \textit{pthsem}, due to the overhead of this library.
Ideally, \textit{SnapFuzz}\xspace should implement a lightweight thread reconstruction mechanism to recreate all dead threads, but this is left as future work.
\subsection{Additional Binary Rewriting-enabled Optimizations}
\label{sec:extras}
In this section, we discuss several additional optimizations performed by \textit{SnapFuzz}\xspace, which are enabled by its binary rewriting-based architecture.
They concern developer-added delays, writes to \code{stdout/stderr}, signal propagation, and CPU affinity, and highlight the versatility of \textit{SnapFuzz}\xspace's approach in addressing a variety of challenges and inefficiencies when fuzzing network applications.
\subsubsection{Eliminating developer-added delays}
\label{sec:eliminate-delays}
Occasionally, network applications add sleeps or timeouts in order to avoid high CPU utilisation when they poll for new connections or data.
\textit{SnapFuzz}\xspace removes these delays via binary rewriting, making those calls use a more aggressive polling model.
We also noticed that in some cases application developers deliberately choose to add sleeps in order to wait for various events.
For example, LightFTP\xspace adds a one second sleep in order to wait for all its threads to terminate.
This might be fine in a production environment, but during a fuzzing campaign such a delay is unnecessary and expensive.
\textit{SnapFuzz}\xspace completely skips such sleeps by intercepting and then not issuing this family of system calls at all.
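The effect of skipping the sleep family can be sketched as follows (a toy model: the dispatch happens in Python rather than in a binary-rewriting system call handler, and the function names are invented):

```python
import time

# Sleep-family calls are recognised and never issued; the handler
# returns success immediately, as if the sleep had completed.
SKIPPED_SYSCALLS = {"nanosleep", "clock_nanosleep"}

def handle_syscall(name, issue_real_call):
    if name in SKIPPED_SYSCALLS:
        return 0  # pretend the sleep finished instantly
    return issue_real_call()

start = time.monotonic()
# LightFTP-style one-second shutdown sleep: intercepted, never executed.
rc = handle_syscall("nanosleep", lambda: time.sleep(1.0) or 0)
elapsed = time.monotonic() - start
```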
\subsubsection{Avoiding \code{stdout/stderr} writes}
By default, \textit{AFL}\xspace redirects \code{stdout} and \code{stderr} to \code{/dev/null}.
This is much more performant than actually writing to a file or any other medium, as the kernel optimizes those operations aggressively.
\textit{SnapFuzz}\xspace goes one step further and saves additional time by completely skipping any system call that targets \code{stdout} or \code{stderr}.
\subsubsection{Signal Propagation}
\label{sec:signals}
Some applications use a multi-process rather than a multi-threaded concurrency model.
In this case, if a subprocess crashes with a segfault, the signal might not be propagated properly to the forkserver, and the crash may be missed.
We encountered this case with the Dcmqrscp\xspace server (\S\ref{sec:dcmqrscp}), where a genuine new bug manifested, but \textit{AFLNet}\xspace was unable to detect it because the main process of Dcmqrscp\xspace never checked the exit status of its child processes.
As \textit{SnapFuzz}\xspace has full control of the system calls of the target, whenever a process is about to exit, it checks the exit status of its child processes too.
If an error is detected, it is raised to the forkserver.
\subsubsection{Smart affinity}
\label{sec:pinning}
\textit{AFL}\xspace assumes that its targets are single-threaded and thus tries to pin the fuzzer and the target to two free CPUs.
Unfortunately, there is no mechanism to handle multi-threaded applications, other than just turning off \textit{AFL}\xspace's pinning mechanism.
\textit{SnapFuzz}\xspace can detect when a new thread or process is about to be spawned as both \code{clone} and \code{fork} system calls are intercepted.
This creates the opportunity for \textit{SnapFuzz}\xspace to take control of thread scheduling by pinning threads and processes to available CPUs.
\textit{SnapFuzz}\xspace implements a very simple algorithm that pins every newly created thread or process to the next available CPU.
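This round-robin policy amounts to a few lines; the sketch below pins the calling process itself for demonstration purposes, using Linux's \code{sched\_setaffinity} via Python's \code{os} module (the class name is invented):

```python
import os

# Pin every newly observed thread/process to the next available CPU.
class RoundRobinPinner:
    def __init__(self, cpus=None):
        # Default to the CPUs this process is currently allowed to use.
        self.cpus = sorted(cpus if cpus is not None else os.sched_getaffinity(0))
        self.counter = 0

    def pin(self, pid):
        cpu = self.cpus[self.counter % len(self.cpus)]
        self.counter += 1
        os.sched_setaffinity(pid, {cpu})  # pid 0 = the calling process
        return cpu

pinner = RoundRobinPinner()
chosen = pinner.pin(0)  # pin ourselves to the first CPU in the list
```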
\subsection{Methodology}
\label{sec:methodology}
Since \textit{SnapFuzz}\xspace's contribution is in increasing the fuzzing throughput, our main comparison metric is the number of fuzzing iterations per second.
Note that each fuzzing iteration may include multiple message exchanges between the fuzzer and the target.
A fuzzing campaign consists of a given number of fuzzing iterations.
During a fuzzing campaign, the fuzzer's speed may vary across iterations, sometimes significantly, due to different code executed by the target.
To ensure a meaningful comparison between \textit{SnapFuzz}\xspace and \textit{AFLNet}\xspace, rather than fixing a time budget and counting the number of iterations performed by each, we instead fix the number of iterations and measure the execution time of each system.
We monitored standard fuzzing metrics including bug count, coverage, stability, path and cycles completed, to make sure that the \textit{SnapFuzz}\xspace and \textit{AFLNet}\xspace campaigns have the same (or very similar) behaviour.
We chose to run each target for one million iterations to simulate realistic \textit{AFLNet}\xspace fuzzing campaigns (ranging from approximately 16 to 36 hours).
We repeated the execution of each campaign 10 times.
For bug finding, we left \textit{SnapFuzz}\xspace to run for 24 hours, three times for each benchmark.
We then accumulated all discovered crashes in a single repository.
To uniquely categorise the crashes found, we recompiled all benchmarks under \textit{ASan}\xspace and \textit{UBSan}\xspace, and then grouped the crashing inputs based on the reports from the sanitizers.
\subsection{Experimental Setup}
All of our experiments were conducted on a 3.0 GHz AMD EPYC 7302P 16-Core CPU and 128 GB RAM running 64-bit Ubuntu 18.04 LTS (kernel version 4.15.0-162) with an SSD disk.
Note that using a slower HDD instead of an SSD would likely lead to larger gains for \textit{SnapFuzz}\xspace's in-memory filesystem component.
\textit{SnapFuzz}\xspace is built on top of \textit{AFLNet}\xspace revision \code{0f51f9e} from January 2021 and SaBRe revision \code{7a94f83}.
The servers tested and their workloads were taken from the \textit{AFLNet}\xspace paper and repository at the revision mentioned above.
We used the default configurations proposed by \textit{AFLNet}\xspace for all benchmarks, with a couple of exceptions.
For the Dcmqrscp\xspace server, two changes were required: 1)~we had to include a Bash clean-up script to reset the state of a data directory of the server, and 2)~we had to add a 5-millisecond wait time between requests, as we observed \textit{AFLNet}\xspace desynchronising from its target.
These changes further emphasise the fact that the clean-up scripts and delays that users need to specify when building a fuzzing harness are fragile and may need adjustment when using different machines, thus \textit{SnapFuzz}\xspace's ability to eliminate their need is important.
In TinyDTLS\xspace we decided to decrease the inter-request wait time from 30 to 2 milliseconds, as we noticed the \textit{AFLNet}\xspace performance was seriously suffering due to this large delay.
Again, this shows that choosing the right values for these time delays is difficult.
\subsection{Summary of Results}
\begin{table}[t]
\centering
\caption{Time (in minutes) to complete one million fuzzing iterations in AFLNet vs SnapFuzz.}
\captionsetup{justification=centering}
\label{tbl:summary}
\begin{tabular}{l|r|r|r}
& \textit{AFLNet}\xspace & \textit{SnapFuzz}\xspace & Speedup \\ \hline
Dcmqrscp\xspace & 1055\xspace & 127\xspace & 8.4x\xspace \\ \hline
Dnsmasq\xspace & 917\xspace & 30\xspace & 30.6x\xspace \\ \hline
TinyDTLS\xspace & 1401\xspace & 34\xspace & 41.2x\xspace \\ \hline
LightFTP\xspace & 2135\xspace & 34\xspace & 62.8x\xspace \\ \hline
LIVE555\xspace & 1547\xspace & 63\xspace & 24.6x\xspace \\
\end{tabular}
\end{table}
Table~\ref{tbl:summary} shows a summary of the results.
In particular, it compares the average time needed by \textit{AFLNet}\xspace and by \textit{SnapFuzz}\xspace to complete one million iterations.
As can be seen, \textit{AFLNet}\xspace takes between \dnsmasqAnT to \lightftpAnT to complete these iterations, with \textit{SnapFuzz}\xspace taking only a fraction of that time, between \tinydtlsSfT and \dcmqrscpSfT.
The speedups are impressive in each case, varying between \dcmqrscpSfAnSu for \dcmqrscp and \lightftpSfAnSu for \lightftp.
In all cases, we observed identical coverage statistics, bug counts, and stability numbers.
\subsection{LightFTP\xspace}
\label{sec:lightftp}
LightFTP\xspace~\cite{lightftp} is a small server for file transfers that implements the FTP protocol.
The fuzzing harness instructs LightFTP\xspace to log in a specific user, list the contents of the home directory on the FTP server, create directories, and execute various other commands for system information.
LightFTP\xspace exercises a large set of \textit{SnapFuzz}\xspace's subsystems.
First, it heavily utilises the filesystem, as the probability of creating directories on every iteration is quite high.
Second, it logs verbosely, writing to \code{stdout}.
Third, it has a long initialisation phase, because it parses a configuration file and then undergoes a heavyweight process of initialising x509 certificates.
And lastly, LightFTP\xspace is a multi-threaded application and has a hardcoded sleep to make sure that all of its threads have terminated gracefully.
\textit{SnapFuzz}\xspace optimises all the above functionalities.
All directory interactions are translated into in-memory operations, thus avoiding context switches and device (hard drive) overheads.
\textit{SnapFuzz}\xspace cancels \code{stdout} and \code{stderr} writes.
\textit{SnapFuzz}\xspace's smart deferred forkserver snapshots the LightFTP\xspace server after its initialisation phase and thus fuzzing under \textit{SnapFuzz}\xspace pays the initialisation overhead only once.
And lastly, \textit{SnapFuzz}\xspace cancels any calls to \code{sleep} and similar system calls.
Note that \textit{SnapFuzz}\xspace can place the forkserver later than it could be placed manually.
For the deferred forkserver to work properly, recall that no file descriptor may be open before the forkserver snapshots the target.
This is because the underlying resource of a file descriptor is retained after a fork happens.
This limits the area where the deferred forkserver can be placed manually.
\textit{SnapFuzz}\xspace overcomes this challenge with its in-memory file system as described in \S\ref{sec:in-mem-fs} and thus it is able to place the forkserver after the whole initialisation process has finished.
The one million iterations run for LightFTP\xspace take on average 35 hours 35 minutes\xspace under \textit{AFLNet}\xspace, while only 34 minutes\xspace under \textit{SnapFuzz}\xspace, providing a 62.8x\xspace speedup.
\subsection{Dcmqrscp\xspace}
\label{sec:dcmqrscp}
Dcmqrscp\xspace~\cite{dcmqrscp} is a DICOM image archive server that manages a number of storage areas and allows images to be stored and queried.
The fuzzing harness instructs the DICOM server to echo connection information back to the client, and to store, find and retrieve specific images into and from its database.
Dcmqrscp\xspace heavily exercises \textit{SnapFuzz}\xspace's in-memory filesystem, as the probability of reading or creating files on every iteration is high.
Dcmqrscp\xspace also benefits from the smart deferred forkserver, as it has a long initialisation phase in which the server dynamically loads the \textit{libnss} library and also parses multiple configuration files that dictate the syntax and capabilities of the DICOM language.
Our signal propagation subsystem (\S\ref{sec:signals}) was able to expose a bug in Dcmqrscp\xspace which was also triggered by \textit{AFLNet}\xspace but was missed because signals were not properly propagated.
The one million Dcmqrscp\xspace iterations take on average 17 hours 35 minutes\xspace to execute under \textit{AFLNet}\xspace, while only 2 hours 7 minutes\xspace under \textit{SnapFuzz}\xspace, providing a 8.4x\xspace speedup.
\subsection{Dnsmasq\xspace}
\label{sec:dnsmasq}
Dnsmasq\xspace~\cite{dnsmasq} is a single-threaded DNS proxy and DHCP server designed to have a small footprint and be suitable for resource-constrained routers and firewalls.
The fuzzing harness instructs Dnsmasq\xspace to query various bogus domain names from its configuration file and then report results back to its client.
Dnsmasq\xspace is an in-memory database with very little interaction with the filesystem.
Therefore, it mainly benefits from the \textit{SnapFuzz}\xspace protocol and its additional optimizations of \S\ref{sec:extras}.
Furthermore, it highly benefits from the smart deferred forkserver, as it has a long initialisation process which uses \textit{dlopen()} and performs various network-related configurations.
Dnsmasq\xspace requires approximately 1,200 system calls before the process is ready to accept input.
As for other benchmarks, a manually-placed forkserver under \textit{AFLNet}\xspace could not snapshot the application at the same depth as \textit{SnapFuzz}\xspace's smart deferred forkserver.
This is because Dnsmasq\xspace needs to execute a sequence of system calls to establish a network connection with \textit{AFLNet}\xspace.
This sequence includes creating a socket, binding its file descriptor, calling \code{listen}, executing a \code{select} to check for incoming connections, and finally accepting the connection.
Therefore, under \textit{AFLNet}\xspace, the latest possible placement of the forkserver would be just before this sequence.
Under \textit{SnapFuzz}\xspace, network communications are translated into UNIX domain socket communications that don't require any of the above, and thus the smart deferred forkserver can snapshot the target right before reading the input from the fuzzer, saving a lot of initialisation time.
The one million Dnsmasq\xspace iterations take on average 15 hours 17 minutes\xspace under \textit{AFLNet}\xspace, while only 30 minutes\xspace under \textit{SnapFuzz}\xspace, providing a 30.6x\xspace speedup.
\subsection{LIVE555\xspace}
\label{sec:live555}
LIVE555\xspace~\cite{live555} is a single-threaded multimedia streaming server that uses open standard protocols like RTP/RTCP, RTSP and SIP.
The fuzzing harness instructs the LIVE555\xspace server to accept requests to serve the content of a specific file in a streaming fashion, and the server replies to these requests with information and the actual streaming data.
LIVE555\xspace only reads files and thus no state reset script is required.
It has a relatively slim initialisation phase with the main overhead coming from the many writes to \code{stdout} with welcoming messages to users.
LIVE555\xspace mainly benefits from the \textit{SnapFuzz}\xspace protocol and the elimination of \code{stdout} writes.
LIVE555\xspace reads its files only after the forkserver performs its snapshot.
As a result, those files are not kept in the in-memory filesystem of \textit{SnapFuzz}\xspace, and are read from the actual filesystem in each iteration.
We leave as future work the optimisation of predefining a set of files to be loaded into the in-memory filesystem when the smart deferred forkserver kicks in, so the target could read these files from memory rather than from the actual filesystem.
The one million LIVE555\xspace iterations take on average 25 hours 47 minutes\xspace under \textit{AFLNet}\xspace, while only 63 minutes\xspace under \textit{SnapFuzz}\xspace, providing a 24.6x\xspace speedup.
\subsection{TinyDTLS\xspace}
\label{sec:tinydtls}
TinyDTLS\xspace~\cite{tinydtls} is a DTLS 1.2 single-threaded UDP server targeting IoT devices.
In the fuzzing harness, TinyDTLS\xspace accepts a new connection and then the DTLS handshake is initiated in order for communication to be established.
The protocol followed by \textit{AFLNet}\xspace has several steps, and progress to the next step is accomplished either by a successful network action or after a timeout has expired.
TinyDTLS\xspace supports two cipher suites, one Elliptic Curve (EC)-based, the other Pre-Shared Keys (PSK)-based.
EC-based encryption is slow, requiring the use of a large timeout between requests, which slows down fuzzing with \textit{AFLNet}\xspace considerably.
In addition, \textit{AFLNet}\xspace includes some hardcoded delays between network interactions so that it doesn't overwhelm the target---without these delays, network packets might be dropped and \textit{AFLNet}\xspace's state machine desynchronized.
Due to TinyDTLS\xspace's processing delays, network buffers might fill up if \textit{AFLNet}\xspace sends too much data in a short time period.
To deal with this, \textit{AFLNet}\xspace checks on every send and receive if all the bytes are sent, and retries if not.
\textit{SnapFuzz}\xspace handles all these issues through its network fuzzing protocol.
(We also note that TinyDTLS\xspace exercises \textit{SnapFuzz}\xspace's UDP translation capabilities, unlike the other servers which use TCP.)
The end result is that all these delays are eliminated:
\textit{AFLNet}\xspace doesn't need to guess the state of the target anymore, as \textit{SnapFuzz}\xspace explicitly informs \textit{AFLNet}\xspace about the next action of the target.
Similarly, the issue of dropped packets disappears, as \textit{AFLNet}\xspace is always informed when it is the right time to send more data.
Finally, \textit{SnapFuzz}\xspace's UNIX domain sockets eliminate the need for send and receive retries, as full buffer delivery from and to the target is guaranteed by the domain socket protocol.
TinyDTLS\xspace writes a lot of data to \code{stdout}, so it also benefits from \textit{SnapFuzz}\xspace's ability to skip these system calls.
The one million TinyDTLS\xspace iterations take on average 23 hours 21 minutes\xspace under \textit{AFLNet}\xspace, while only 34 minutes\xspace under \textit{SnapFuzz}\xspace, providing a 41.2x\xspace speedup.
We remind the reader that in TinyDTLS\xspace we decided to decrease the manually-added time delay between requests from 30ms to 2ms, as we noticed the performance of \textit{AFLNet}\xspace was seriously affected by it.
Without this change, \textit{AFLNet}\xspace would take significantly longer to complete one million iterations, and the speedup achieved by \textit{SnapFuzz}\xspace would be significantly higher.
\subsection{Performance Breakdown}
\begin{table*}[t]
\centering
\caption{Speedup achieved by \textit{SnapFuzz}\xspace compared to \textit{AFLNet}\xspace, when each \textit{SnapFuzz}\xspace component is added one by one.
Note that the ordering has an impact on the speedup achieved by each component (see text).}
\captionsetup{justification=centering}
\label{tbl:breakdown}
\begin{tabular}{l|r||r|r|r|r|r|}
& SF Protocol & + Affinity & + No Sleeps & + No STDIO & + Defer & + In-Mem FS \\ \hline
Dcmqrscp\xspace & 1.30x\xspace & 3.85x\xspace & 1.00x\xspace & 1.00x\xspace & 1.94x\xspace & 1.55x\xspace \\ \hline
Dnsmasq\xspace & 1.90x\xspace & 3.47x\xspace & 1.00x\xspace & 1.00x\xspace & 4.79x\xspace & 1.00x\xspace \\ \hline
TinyDTLS\xspace & 3.40x\xspace & 12.21x\xspace & 1.00x\xspace & 1.00x\xspace & 1.09x\xspace & 1.09x\xspace \\ \hline
LightFTP\xspace & 1.90x\xspace & 1.79x\xspace & 2.76x\xspace & 1.00x\xspace & 2.39x\xspace & 2.23x\xspace \\ \hline
LIVE555\xspace & 3.00x\xspace & 5.93x\xspace & 1.04x\xspace & 1.04x\xspace & 1.25x\xspace & 1.18x\xspace \\
\end{tabular}
\end{table*}
In \S\ref{sec:lightftp}--\S\ref{sec:tinydtls} we discuss which components of \textit{SnapFuzz}\xspace are likely to benefit each application the most.
Those conclusions were reached by investigating the system calls issued by the applications, using the estimates provided by strace of how much time each system call spends in the kernel.
To gain a better quantitative understanding of the contribution of each component, we performed an ablation study in which we ran different versions of \textit{SnapFuzz}\xspace for a shorter run of 10,000 iterations.
We chose a much smaller number of iterations because running so many experiments with 1M iterations was prohibitive on our computing infrastructure.
This means that our speedups sometimes differ significantly from those achieved with 1M iterations.
However, the main goal of these experiments is to gain additional insights into the impact of different components and their interaction.
Due to various dependencies among components, we start with a version of \textit{SnapFuzz}\xspace containing only the network fuzzing protocol, and keep adding components one by one.
However, it is essential to understand that the order in which we add components matters, as their effect is often multiplicative.
In particular, this means that the additional impact of components added earlier can be significantly diminished compared to the case where the same component is added later.
We give two examples:
\begin{enumerate}[leftmargin=*]
\item \textbf{\textit{SnapFuzz}\xspace protocol and smart affinity.}
The \textit{SnapFuzz}\xspace protocol is a performant non-blocking protocol that polls the fuzzer and the application for communication.
Under the default restricted CPU affinity of \textit{AFLNet}\xspace, the protocol underperforms, because the polling model requires independent CPU cores to get the expected performance benefit.
At the same time, the smart CPU affinity component depends on whether the \textit{SnapFuzz}\xspace protocol is enabled or not, as the protocol changes what is executed on the CPU.
\item \textbf{In-memory filesystem and smart deferred forkserver}.
The smart deferred forkserver performs better when the in-memory filesystem is enabled, because with an in-memory filesystem it can delay the forkserver past filesystem operations.
On the other hand, the in-memory filesystem also performs better when the smart deferred forkserver is enabled.
This is because the in-memory filesystem has a fixed overhead of loading and storing the files the target reads at the beginning of its execution.
This initial overhead might degrade performance, especially for short executions.
When the deferred forkserver is enabled, this overhead is bypassed, as these files are loaded into memory only once and subsequent operations are purely in-memory.
\end{enumerate}
One option would be to try all possible orderings.
However, the full number is large (6! = 720) and some orderings are difficult to run due to engineering limitations (e.g.\ the \textit{SnapFuzz}\xspace protocol is deeply embedded into \textit{SnapFuzz}\xspace and disabling it would require a major engineering overhaul).
Nevertheless, we believe the ordering we present here is still useful in providing insights into the impact of each \textit{SnapFuzz}\xspace component.
Table~\ref{tbl:breakdown} shows our results.
We observe that all components have a significant impact on at least one benchmark.
Furthermore, the \textit{SnapFuzz}\xspace protocol, the smart affinity, and the smart deferred forkserver always lead to gains, while eliminating developer-added delays (\textit{no sleeps}), avoiding stdout/stderr writes (\textit{no stdio}), and the in-memory filesystem make no difference in some benchmarks.
Removing writes to stdout/stderr is the least impactful component, benefiting only LIVE555\xspace.
The reported numbers are largely consistent with our qualitative observations of \S\ref{sec:lightftp}--\S\ref{sec:tinydtls}.
For instance, the main benefits of LightFTP\xspace come from the \textit{SnapFuzz}\xspace protocol (1.90x\xspace) which removes synchronization and server termination delays; from smart affinity (1.79x\xspace), as LightFTP\xspace is a multi-threaded application; from removing developer-added delays, which are present in LightFTP\xspace (2.76x\xspace); from the smart deferred forkserver (2.39x\xspace), as it has a long initialization phase; and from the in-memory filesystem (2.23x\xspace), as it makes heavy use of the filesystem.
While LightFTP\xspace has writes to stdout, removing them does not make a noticeable difference.
The performance numbers for other benchmarks also largely agree with our expectations.
For instance, the in-memory filesystem brings no benefits to Dnsmasq\xspace, which is an in-memory database with little filesystem interaction;
but it highly benefits from the smart deferred server (4.79x\xspace), given that it has a long initialization with over 1,200 system calls issued before it is able to accept input.
\subsection{Unique Crashes Found}
\textit{SnapFuzz}\xspace, as expected, was able to replicate all crashes discovered by \textit{AFLNet}\xspace.
Through its performance advantage, it also found additional crashes in 3 of the 5 benchmarks.
During 24h fuzzing campaigns, \textit{SnapFuzz}\xspace found 4 bugs in the Dcmqrscp\xspace benchmark while \textit{AFLNet}\xspace was not able to find any.
For Dnsmasq\xspace, \textit{SnapFuzz}\xspace was able to find 7 crashes while \textit{AFLNet}\xspace found only 1, and for the LIVE555\xspace benchmark, \textit{SnapFuzz}\xspace was able to find 4 crashes while \textit{AFLNet}\xspace found 2.
Both tools found 3 bugs in TinyDTLS\xspace.
Overall, \textit{SnapFuzz}\xspace found 18 unique crashes, 12 more than \textit{AFLNet}\xspace.
The bugs are a variety of heap overflows, stack overflows, use-after-free bugs, and other types of undefined behaviours.
Fortunately, they seem to have been fixed in the latest versions of these applications.
We plan to rerun \textit{SnapFuzz}\xspace on the latest versions.
\subsection{Binary Rewriting}
\label{sec:binary-rewriting-impl}
Binary rewriting in \textit{SnapFuzz}\xspace employs two major components:
1)~the rewriter module, which scans the code for specific functions, vDSO and system call assembly opcodes, and redirects them to the plugin module, and
2)~the plugin module where \textit{SnapFuzz}\xspace resides.
\medskip
\noindent\textbf{Rewriter.}
\label{subsec:rewriter}
\textit{SnapFuzz}\xspace is an ordinary dynamically linked executable that is provided with a path to a target application together with the arguments to invoke it with.
When \textit{SnapFuzz}\xspace is launched, the expected sequence of events of a standard Linux operating system takes place, with the first step being the dynamic loader loading \textit{SnapFuzz}\xspace and its dependencies into memory.
When \textit{SnapFuzz}\xspace starts executing, it inspects the target's ELF binary to obtain information about its interpreter, which in our implementation is always the standard Linux \textit{ld} loader.
\textit{SnapFuzz}\xspace then scans the loader code for system call assembly opcodes and some special functions in order to instruct the loader to load the \textit{SnapFuzz}\xspace plugin.
In particular, the rewriter:
(1)~intercepts the dynamic scanning of the loader in order to append the \textit{SnapFuzz}\xspace plugin shared object as a dependency, and
(2)~intercepts the initialisation order of the shared libraries in order to prepend the \textit{SnapFuzz}\xspace plugin initialisation code (in the \textit{.preinit\_array}).
After the \textit{SnapFuzz}\xspace rewriter finishes rewriting the loader, execution is passed to the rewritten loader in order to load the target application and its library dependencies.
As the normal execution of the loader progresses, \textit{SnapFuzz}\xspace intercepts its \code{mmap} system calls used to load libraries into memory, and scans these libraries in order to recursively rewrite their system calls and redirect them to the \textit{SnapFuzz}\xspace plugin.
The \textit{SnapFuzz}\xspace rewriter is based on the open-source load-time binary rewriter SaBRe~\cite{sabre}.
\medskip
\noindent\textbf{Plugin.}
After the loader completes, execution is passed to the target application, which will start by executing \textit{SnapFuzz}\xspace's initialisation function.
Per the ELF specification, execution starts from the function pointers of \textit{.preinit\_array}.
This is a common ELF feature used by LLVM sanitizers to initialise various internal data structures early, such as the shadow memory~\cite{ASan,MSan}.
\textit{SnapFuzz}\xspace uses the same mechanism to initialise its subsystems, such as its in-memory filesystem, before execution starts.
After the initialisation phase of the plugin, control is passed back to the target and normal execution resumes.
At this stage, the \textit{SnapFuzz}\xspace plugin is only executed when the target is about to issue a system call or a vDSO call.
When this happens, the plugin checks if the call should be intercepted, and if so, redirects it to the appropriate handler and then returns control to the target.
\subsection{In-memory Filesystem}
\label{sec:in-mem-fs}
As discussed in \S\ref{sec:state-reset}, \textit{SnapFuzz}\xspace redirects all file operations to use a custom in-memory filesystem.
This reduces the overhead of reading and writing from a storage medium, and eliminates the need for manually-written clean-up scripts.
\textit{SnapFuzz}\xspace implements a lightweight in-memory filesystem, which uses two distinct mechanisms, one for files and the other for directories.
For files, \textit{SnapFuzz}\xspace's in-memory filesystem uses the recent \break \code{memfd\_create()} system call, introduced in Linux in 2015~\cite{memfd_create}.
This system call creates an anonymous file and returns a file descriptor that refers to it.
The file behaves like a regular file, but lives in memory.
Under this scheme, \textit{SnapFuzz}\xspace only needs to specially handle system calls that initiate interactions with a file through a pathname (like the \code{open} and \code{mmap} system calls).
All other system calls that handle file descriptors are compatible by default with the file descriptors returned by \code{memfd\_create}.
When a target application opens a file, the default behavior of \textit{SnapFuzz}\xspace is to
check if this file is a regular file (e.g. device files are ignored), and if
so, create an in-memory file descriptor and copy the whole contents
of the file in the memory address space of the target.
\textit{SnapFuzz}\xspace keeps track of pathnames in order to avoid reloading the same file twice. This is not only a performance optimization but also a correctness requirement, as the application might have changed the contents of the file in memory.
For directories, \textit{SnapFuzz}\xspace employs the \textit{Libsqlfs} library~\cite{libsqlfs}, which implements a POSIX-style file system on top of the SQLite database and allows applications to have access to a full read/write filesystem with its own file and directory hierarchy.
\textit{Libsqlfs} simplifies the emulation of a real filesystem with directories and permissions.
\textit{SnapFuzz}\xspace uses \textit{Libsqlfs} for directories only, as we observed better performance for files via \textit{memfd\_create}.
\subsection{UNIX Domain Sockets}
\label{sec:domain-sockets}
\textit{AFLNet}\xspace uses standard Internet sockets (TCP/IP and UDP/IP) to communicate with the target and send it fuzzed inputs.
The Internet socket stack includes functionality---such as calculating checksums of packets, inserting headers, routing---which is unnecessary when fuzzing applications on a single machine.
To eliminate this overhead, similarly to prior work~\cite{multifuzz}, \textit{SnapFuzz}\xspace replaces Internet sockets with UNIX domain sockets.
More specifically, \textit{SnapFuzz}\xspace uses Sequenced Packets sockets (\code{SOCK\_SEQPACKET}).
This configuration offers performance benefits and also simplifies the implementation.
Sequenced Packets are quite similar to TCP, providing a sequenced, reliable, two-way connection-based data transmission path for datagrams.
The difference is that Sequenced Packets require the consumer (in our case the \textit{SnapFuzz}\xspace plugin running inside the target application) to read an entire packet with each input system call.
This atomicity of network communications simplifies corner cases where the target application might read only parts of the fuzzer's input due to scheduling or other delays.
By contrast, \textit{AFLNet}\xspace handles this issue by exposing manually defined knobs for introducing delays between network communications.
Our modified version of \textit{AFLNet}\xspace creates a socketpair of UNIX domain sockets with the Sequenced Packets type, and passes one end to the forkserver, which later passes it to the \textit{SnapFuzz}\xspace plugin.
The \textit{SnapFuzz}\xspace plugin initiates a handshake with the modified \textit{AFLNet}\xspace, after which \textit{AFLNet}\xspace is ready to submit generated inputs to the target or consume responses.
Translating network communication from Internet sockets to UNIX domain sockets is not trivial, as \textit{SnapFuzz}\xspace needs to support the two main IP families of TCP and UDP which have a slightly different approach to how network communication is established.
In addition, \textit{SnapFuzz}\xspace also needs to support different types of synchronous and asynchronous communication such as \code{(e)poll} and \code{select}.
For the TCP family, the \code{socket} system call creates a TCP/IP socket and returns a file descriptor which is then passed to \code{bind}, \code{listen} and finally to \code{accept}, before the system is ready to send or receive any data.
\textit{SnapFuzz}\xspace monitors this sequence of events on the target and when the \code{accept} system call is detected, it returns the UNIX domain socket file descriptor from the forkserver.
\textit{SnapFuzz}\xspace doesn't interfere with the \code{socket} system call and intentionally allows its normal execution in order to avoid complications with target applications that perform advanced configurations on the base socket.
This strategy is similar to the one used by the in-memory file system via the \code{memfd\_create} system call (\S\ref{sec:in-mem-fs}) in order to provide compatibility by default.
The UDP family is handled in a similar way, the only difference being that \textit{SnapFuzz}\xspace monitors for a \code{bind} system call instead of an \code{accept} system call before returning the UNIX domain socket of the forkserver.
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{From AFL to AFLNet to SnapFuzz}
\label{sec:afl-aflnet-snapfuzz}
\input{overview}
\section{Design}
\label{sec:design}
\input{design}
\section{Implementation}
\label{sec:impl}
\input{implementation}
\section{Evaluation}
\label{sec:eval}
\input{eval}
\section{Related Work}
\label{sec:related}
\input{related}
\section{Conclusion}
\label{sec:conclusion}
Fuzzing stateless applications has proven extremely successful, with hundreds of bugs and security vulnerabilities being discovered.
Recently, in-depth fuzzing of stateful applications such as network servers has become feasible, due to algorithmic advances that make it possible to generate inputs that follow the application's network protocol.
Unfortunately, fuzzing such applications requires clean-up scripts and manually-configured time delays that are error-prone, and suffers from low fuzzing throughput.
\textit{SnapFuzz}\xspace addresses both challenges through a robust architecture, which combines a synchronous communication protocol with an in-memory filesystem and the ability to delay the forkserver to the latest safe point, as well as other optimizations.
As a result, \textit{SnapFuzz}\xspace simplifies fuzzing harness construction and improves the fuzzing throughput significantly, between \dcmqrscpSfAnSu and \lightftpSfAnSu on a set of popular network applications, allowing it to find additional crashes.
\textit{SnapFuzz}\xspace will be submitted for artifact evaluation and made available to the community as open-source shortly after publication, with the hope that it will help improve the security and reliability of network applications and facilitate further research in this space.
\balance
\bibliographystyle{ACM-Reference-Format}
\subsection{American Fuzzy Lop (\textit{AFL}\xspace)}
\label{sec:afl}
\begin{figure}
\centering
\resizebox{0.8\columnwidth}{!}{
\includegraphics{imgs/afl.pdf}
}
\caption{Architecture of \textit{AFL}\xspace's forkserver mode.}
\captionsetup{justification=centering}
\label{fig:arch-afl}
\end{figure}
\textit{AFL}\xspace~\cite{afl-fuzz} is a greybox fuzzer that uses an effective coverage-guided genetic algorithm.
\textit{AFL}\xspace uses a modified form of edge coverage to efficiently identify inputs that change the target application's control flow.
In a nutshell, \textit{AFL}\xspace first loads user-provided initial seed inputs into a queue, picks an input, and mutates it using a variety of strategies.
If a mutated input covers a new state, it is added to the queue and the cycle is repeated.
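This genetic loop can be sketched in a few lines of Python; \code{mutate} and \code{run\_and\_get\_coverage} are illustrative placeholders for \textit{AFL}\xspace's mutation strategies and instrumented execution, not its actual API:

```python
import random

def fuzz_loop(seeds, mutate, run_and_get_coverage, iterations=1000):
    """Minimal sketch of a coverage-guided genetic fuzzing loop.

    `mutate` and `run_and_get_coverage` stand in for AFL's mutation
    strategies and instrumented execution; both are assumptions here.
    """
    queue = list(seeds)        # user-provided initial seed inputs
    seen_edges = set()         # global edge-coverage map
    for _ in range(iterations):
        parent = random.choice(queue)
        child = mutate(parent)
        edges = run_and_get_coverage(child)
        if not edges <= seen_edges:   # input reached new edges
            seen_edges |= edges
            queue.append(child)       # keep it for further mutation
    return queue
```

The key property mirrored here is that only inputs that extend the coverage map survive into the queue, so the corpus gradually accumulates inputs exercising distinct control-flow paths.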
At a systems level, \textit{AFL}\xspace's simplest mode (called \textit{dumb} mode) is to restart the target application from scratch by forking first and then creating a fresh process via \code{execve}.
When this happens, the standard sequence of events to start a process takes place, with the OS loader first loading the target application and its libraries into memory.
\textit{AFL}\xspace then sends to the new process the fuzzed input through a file descriptor that usually points to an actual file or \code{stdin}.
Lastly, \textit{AFL}\xspace waits for the target to terminate, but kills it if a predefined timeout is exceeded.
These steps are repeated for every input \textit{AFL}\xspace wants to provide to the target application.
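A stripped-down version of one dumb-mode iteration, with Python's \code{subprocess} standing in for \textit{AFL}\xspace's fork/execve plumbing (the target command line is a placeholder):

```python
import subprocess

def run_one_input(target_argv, data, timeout_s=1.0):
    """One dumb-mode iteration: spawn the target from scratch, feed the
    fuzzed input on stdin, and kill the target if the timeout is
    exceeded.  Returns the exit status (a negative value means the
    process was killed by a signal, i.e. a potential crash), or None on
    a hang."""
    try:
        proc = subprocess.run(target_argv, input=data,
                              capture_output=True, timeout=timeout_s)
        return proc.returncode
    except subprocess.TimeoutExpired:
        return None   # hang: reported separately from crashes
```

Every call pays the full process-creation and library-loading cost, which is exactly the overhead the forkserver mode described next removes.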
\textit{AFL}\xspace's dumb mode is rather slow as too much time is spent on loading and initialising the target and its libraries (such as \code{libc}) for every generated input.
Ideally, the application would be restarted after all these initialisation steps are done, as they are irrelevant to the input provided by \textit{AFL}\xspace.
This is exactly what \textit{AFL}\xspace's \textit{forkserver mode} offers, as shown in Figure~\ref{fig:arch-afl}.
In this mode, \textit{AFL}\xspace first creates a child server called the \textit{forkserver} (step 1 in Figure~\ref{fig:arch-afl}), which loads the target application via \code{execve} and freezes it just before the \code{main} function is about to start.
Then, in each fuzzing iteration, the following steps take place in a loop: \textit{AFL}\xspace requests a new target instance from the forkserver (step~2), the forkserver creates a new instance (step~3), \textit{AFL}\xspace sends fuzzed input to this new instance (step~4), and the forkserver checks the target instance for crashes (step~5).
With this forkserver snapshotting mechanism, \textit{AFL}\xspace replaces the loading overhead by a much less expensive \code{fork} call, while guaranteeing that the application will be at its initial state for every freshly generated input from \textit{AFL}\xspace.
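The core of this mechanism can be sketched with \code{os.fork}; the loop below is a simplification in which \textit{AFL}\xspace's pipe-based handshake is replaced by a plain function call:

```python
import os

def forkserver(run_target, requests):
    """Sketch of the forkserver idea: pay the expensive execve/loading/
    initialisation cost once, then serve every fuzzing request with a
    cheap fork().  `run_target` and `requests` stand in for the target's
    entry point and AFL's control pipe, respectively."""
    # ... imagine execve + library loading + initialisation here, once ...
    statuses = []
    for data in requests:          # step 2: fuzzer requests an instance
        pid = os.fork()            # step 3: cheap copy of frozen state
        if pid == 0:
            os._exit(run_target(data))        # child runs one input
        _, status = os.waitpid(pid, 0)        # step 5: check for crashes
        statuses.append(os.WEXITSTATUS(status))
    return statuses
```

Because \code{fork} gives each child a copy-on-write snapshot of the initialised process, every input starts from an identical, pristine state without re-running the loader.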
In the most recent versions of \textit{AFL}\xspace, this is implemented as an LLVM pass, but other methods that do not require access to the source code are also available.
One additional optimisation that \textit{AFL}\xspace offers is the \textit{deferred forkserver mode}.
In this mode, the user manually adds to the target's source code a special call to an internal \textit{AFL}\xspace function, instructing \textit{AFL}\xspace to create the forkserver at a later stage in the execution of the target application.
This can provide significant performance benefits in the common case where the target application needs to perform a long initialisation phase before it is able to consume \textit{AFL}\xspace's input.
Unfortunately though, this mode requires the user not only to have access to the source code of the target application, but also knowledge of the internals of the target application in order to place the deferred call at the correct stage of execution. As we will explain in \S\ref{sec:deferred-forkserver}, the forkserver placement has several restrictions (e.g.\ it cannot be placed after file descriptors are created) and if these restrictions are violated, the fuzzing campaign can waste a lot of time exploring invalid executions.
\subsection{\textit{AFLNet}\xspace}
\label{sec:aflnet}
\begin{figure}
\centering
\resizebox{0.8\columnwidth}{!}{
\includegraphics{imgs/aflnet.pdf}
}
\caption{Architecture of \textit{AFLNet}\xspace.}
\captionsetup{justification=centering}
\label{fig:arch-aflnet}
\end{figure}
\textit{AFL}\xspace essentially targets applications that receive inputs via files (with \code{stdin} a special file type).
This means that it is not directly applicable to network applications, as they expect inputs to arrive through network sockets and follow an underlying \textit{network protocol}.
\textit{AFLNet}\xspace~\cite{aflnet} extends \textit{AFL}\xspace to work with network applications.
Its most important contribution is that it proposes a new algorithm on how to generate inputs that follow the underlying network protocol (e.g.\ the FTP, DNS or SIP protocols).
More specifically, \textit{AFLNet}\xspace infers the underlying protocol via examples of recorded message exchanges between a client and the server.
\textit{AFLNet}\xspace also extends \textit{AFL}\xspace by building the required infrastructure to direct the generated inputs through a network socket to the target application, as shown in Figure~\ref{fig:arch-aflnet}.
More precisely, from a systems perspective, \textit{AFLNet}\xspace acts as the client application.
After a configurable delay waiting for the server under fuzzing to initialise, it sends inputs to the server through TCP/IP or UDP/IP sockets, with configurable delays between those deliveries (we describe the various time delays needed by \textit{AFLNet}\xspace in \S\ref{sec:snapfuzz-protocol}).
\textit{AFLNet}\xspace consumes the replies from the server (or else the server might block) and also sends the server a \code{SIGTERM} signal after each exchange is deemed complete, as network applications usually run in infinite loops.
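One fuzzing iteration of such a client-side exchange looks roughly like the sketch below; the message sequence, delay values and port handling are placeholders, not \textit{AFLNet}\xspace's actual implementation:

```python
import socket
import time

def send_message_sequence(host, port, messages, delay_s=0.05):
    """Sketch of an AFLNet-style exchange: connect to the server under
    fuzzing, send each (possibly mutated) protocol message with a fixed
    inter-message delay, and consume the reply after each send so that
    the server does not block.  Returns the raw replies."""
    replies = []
    with socket.create_connection((host, port)) as sock:
        sock.settimeout(1.0)
        for msg in messages:
            sock.sendall(msg)
            time.sleep(delay_s)      # manually tuned delay AFLNet relies on
            try:
                replies.append(sock.recv(4096))
            except socket.timeout:
                replies.append(b"")  # server sent nothing in time
    return replies
```

The fixed \code{sleep} is the kind of manually-configured delay that \textit{SnapFuzz}\xspace's synchronous protocol is designed to eliminate.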
As shown in Figure~\ref{fig:arch-aflnet}, the architecture of \textit{AFLNet}\xspace is similar to that of \textit{AFL}\xspace's deferred forkserver mode, except that communication takes place over the network instead of via files.
Network applications like databases or FTP servers are often stateful, keeping track of their state by storing information in various files.
This can create issues during a fuzzing campaign because when \textit{AFLNet}\xspace restarts the application, its state might be tainted by information from a previous execution.
To avoid this problem, \textit{AFLNet}\xspace requires the user to write custom \textit{clean-up scripts} that are invoked to reset any filesystem state.
We use the term \textit{fuzzing harness} to refer to all the code that users need to write in order to be able to fuzz an application.
In \textit{AFLNet}\xspace, this includes the client code, the various time delays that need to be manually added, and the clean-up scripts. One important goal of \textit{SnapFuzz}\xspace is to simplify the creation of fuzzing harnesses for network applications.
\subsection{\textit{SnapFuzz}\xspace}
\label{sec:snapfuzz}
\begin{figure}
\centering
\resizebox{0.9\columnwidth}{!}{
\includegraphics{imgs/snapfuzz.pdf}
}
\caption{Architecture of \textit{SnapFuzz}\xspace.\vspace{-0.5cm}}
\captionsetup{justification=centering}
\label{fig:arch-sf}
\end{figure}
\textit{SnapFuzz}\xspace is built on top of \textit{AFLNet}\xspace by revamping its networking communication architecture as shown in Figure~\ref{fig:arch-sf}, without any modifications to \textit{AFLNet}\xspace's fuzzing algorithm.
\textit{SnapFuzz}\xspace's main goals are (1)~to improve the performance (throughput) of fuzzing network applications, and
(2)~lower the barrier for testing network applications by simplifying the construction of fuzzing harnesses, in particular by eliminating the need to add manually-specified time delays and to write clean-up scripts.
At the same time, it is not a goal of \textit{SnapFuzz}\xspace to improve in any way \textit{AFL}\xspace's and \textit{AFLNet}\xspace's fuzzing algorithms or mutation strategies.
At a high level, \textit{SnapFuzz}\xspace achieves its significant performance gains by:
optimising all networking communications by eliminating synchronisation delays (\textit{the \textit{SnapFuzz}\xspace protocol});
automatically injecting \textit{AFL}\xspace's forkserver deeper into the application than otherwise possible and without the user's intervention (\textit{smart deferred forkserver});
performing binary rewriting-enabled optimisations which eliminate additional delays and inefficiencies;
automatically resetting any filesystem state;
and optimising filesystem writes by redirecting them into an in-memory filesystem.
\textit{SnapFuzz}\xspace also makes fuzzing harness development easier and in some cases trivial by completely removing the need for manual code modifications.
Such manual changes are often required to: reset the state of either the target or its environment after each fuzzing iteration; terminate the target, as servers usually run in infinite loops; pin the CPU for threads and processes; and add deferred forkserver support to the target.
Figure~\ref{fig:arch-sf} shows the architecture of \textit{SnapFuzz}\xspace.
While at a high-level it resembles that of \textit{AFLNet}\xspace, there are several important changes.
First, \textit{SnapFuzz}\xspace intercepts the external actions of the target application using \textit{binary rewriting} (\S\ref{sec:binary-rewriting}).
It then monitors the behaviour of both the target application and the \textit{AFLNet}\xspace client in order to eliminate synchronisation delays using its \textit{SnapFuzz}\xspace protocol (\S\ref{sec:snapfuzz-protocol}).
Second, a custom in-memory filesystem is added, to improve performance and facilitate resetting the state after each fuzzing iteration (\S\ref{sec:state-reset}).
Third, the forkserver is replaced by a smart deferred forkserver, which automates and optimises the forkserver placement (\S\ref{sec:deferred-forkserver}).
We describe the main components of \textit{SnapFuzz}\xspace in detail in the next section.
\section{Introduction}
\label{intro}
The measurement of the neutron emission from deuterium-deuterium (DD) and deuterium-tritium (DT) fusion reactions is one of the most important methods of assessing the performance of present and future fusion reactors. Since a neutron is released for each fusion reaction occurring in the plasma, the measurement of neutron flux emitted from the plasma is directly correlated to the fusion power. The emitted neutrons' energy spectrum is characterized by two main components at 2.45 and 14.1 MeV from DD and DT reactions respectively\footnote{In fusion devices operated with DT fuel, due to their different cross-sections, the DT neutron emission dominates over the DD one while in D-only fuel devices DT reactions can be observed at a very low level where T is generated via one branch of the DD reaction.}.
The neutron energy spectrum is affected, among other things,\footnote{Other parameters that affect the neutron production are the plasma density and the plasma effective charge, but these are not discussed here as they mainly affect the neutron yield and not the energy spectrum.} by the fusion plasma operating conditions. For example, the broadening of the Gaussian energy spectrum for 2.45 and 14.1 MeV neutrons is proportional to the square root of the plasma fuel ion temperature due to the relative velocity distribution of the reactants \cite{Jarvis}. In addition, the neutron energy spectrum is affected by the different additional heating schemes that are normally employed in fusion devices, such as neutral beam injection and radio-frequency heating.
For example, in the case of neutral beam heating the emitted neutron energy spectrum is the combination of a thermal component due to the plasma fuel ions, a beam-thermal component from the reactions between beam and fuel ions, and a beam-beam component. The relative intensity of these components is in turn affected by the plasma conditions. In present-day devices, the beam-thermal component dominates, while in future fusion devices the thermal component will be dominant.
In the case of radio-frequency heating, neutrons with energies above the DD and DT energies are observed due to fusion reaction from ions accelerated to energies of a few MeV.
The measurement of the spatial and temporal evolution of the neutron emission from fusion plasmas in terms of its flux and energy spectrum can therefore provide important information on the plasma itself which can be used to optimize fusion power production.
Different diagnostics are used to measure the neutron emission from fusion plasmas \cite{Jarvis, Wolle} but all rely on the conversion of the neutron into a charged particle that can then be detected.
This conversion takes place either through neutron-induced nuclear reactions in which heavy charged fission fragments are produced\footnote{Thermal neutron capture induced reaction in $^{235}$U is a typical example.} or via elastic scattering of neutrons with light atoms, typically hydrogen.
In their simplest form, neutron diagnostics can just be used as counters where the measured counts are proportional to the number of neutrons emitted by the plasma and therefore to the fusion power.
The proportionality constant is determined via the absolute calibration of such neutron counters and depends on several parameters such as the counter's efficiency and its position with respect to the neutron source (the plasma) and the fusion reactor \cite{Syme, Batistoni}.
Neutron spectrometers are more sophisticated diagnostics in which the measured neutron energy spectrum is linked in a non-trivial way to the neutron source. Since the energy of fission fragments does not reflect the energy spectrum of the incident neutron, fission chambers cannot be used as spectrometers, if one excludes the very crude spectroscopic capability offered by threshold reactions.
For this reason, neutron spectrometers are all based on the conversion of the neutron into a light recoil particle via elastic scattering. Neutron spectrometers can be distinguished by the way in which the scattered neutron and the recoil particle are processed.
In compact spectrometers, the recoil particle deposits its energy into the scattering medium which, depending on the material, can emit a pulse of scintillation light that is detected: in this case the scatterer itself acts as the detector. The light emission is then converted into a voltage signal and the voltage pulse height spectrum generated by the recoil particles is measured\footnote{Pulse height spectra can be based on the peak amplitude or on the time integrated voltage signal.}. Since the voltage signal is proportional to the energy deposited by the recoil particle and this depends on the incident neutron energy, a detector based on scintillation material can, in principle, be used as a neutron spectrometer.
Large spectrometers, instead, are based on the measurement of the scattered neutron or recoil particle in a detector other than the scatterer. For example, the recoil protons ejected by neutron scattering on a thin, hydrogen-rich foil can be detected in an array of detectors after they have been momentum and energy separated \cite{Eric}. Alternatively,
the time difference between the two scintillation events generated by the same incident neutron in two spatially separated scintillators provides the neutron time of flight and therefore its energy \cite{Maria}. Recently, time-of-flight spectrometers are also taking advantage of the information on the amount of energy deposited by the recoil particles in the scatterer and in the detector to suppress the contribution from unwanted random coincidences that, especially in fusion devices, typically affect such instruments \cite{Skiba}.
The often complex relation between the incident neutron energy spectrum and the output signal from the detector is referred to as the detector response function. Since the neutron energy spectrum at the detector's location is not mono-energetic and since the response function is usually dependent on the incident neutron energy, it is necessary to determine the response function for all the neutron energies of interest. The resulting set of response functions (one for each incident neutron energy) is referred to as the detector response function matrix.\footnote{The response function for large spectrometers, which takes into account for example the effect of the time of flight geometry between scatterer and detector, is referred to as the instrument response function to distinguish it from the individual detectors' response functions.}
It is the knowledge of this response function that allows one to infer the characteristics of the neutron energy spectrum and ultimately of the plasma itself. Two different approaches can be used to relate the measured neutron energy spectrum to the fusion plasma source: \emph{i}) forward modelling and \emph{ii}) inversion algorithms.
Forward modelling relies on the accurate modelling of all the processes that in the plasma affect the neutron emission, of the neutron transport from the source to the detector, of the conversion of the incident neutron field into recoil particles and eventually into the detector output signal \cite{Binda}. Inversion algorithms make no assumption or modelling on the neutron source which instead is obtained by different least-squares minimization methods based on the knowledge of the detector response function \cite{Zimbal}. Both methods have advantages and disadvantages but the one thing they have in common is the requirement for a very well characterized detector response function.
Response functions for scintillator detectors are usually measured experimentally with well characterized neutron sources \cite{Klein, Enqvist} and interpreted with the help of dedicated Monte Carlo codes such as NRESP \cite{NRESP} or more general radiation transport codes such as MCNP \cite{MCNP} and MCNP-PoliMi \cite{Pozzi} and GEANT4 \cite{Agostinelli} (which has the additional feature of simulating the scintillation light transport too). Very good agreement is found between measured and simulated neutron response functions for all the cited codes. For the purpose of this tutorial, NRESP will be used.
In NRESP, the relevant cross-sections (and differential cross-sections) for neutron interactions with hydrogen and carbon in the energy range 0.02 to 20 MeV are included. The detector material composition and geometry, including the liquid scintillator housing and the optical window connecting to the photo-cathode, are modelled. The deposited energy is then converted into a light pulse height distribution taking into account the finite detector energy resolution and experimentally measured light output functions for all the particles generated in the medium (recoils and $\alpha$-s from nuclear reactions on carbon).
The interpretation of the measured or simulated response functions for liquid scintillators can be quite difficult if only the light output pulse height distribution is given. For a detailed understanding of the origin of the different features present in the response function it is necessary to analyse the contributions from the individual processes occurring in the liquid scintillator, such as single and multiple elastic scattering, nuclear reactions and so on. This information can be obtained from the Monte Carlo codes discussed above, but this is usually not trivial. In addition, although these codes can provide the expected light output pulse height spectrum, for example, for neutrons that have collided three times with protons, they do not provide an explanation of why it has that particular shape. The aim of this tutorial is to provide such an explanation by means of an analytical derivation of the expected light output pulse height distribution combined with a very simple Monte Carlo code used for its verification. The analytical approach is limited to a few simple cases, but it nevertheless provides the basic understanding of how the real response function arises from a combination of multiple individual neutron interactions.
This tutorial focuses in particular on the detailed explanation of the different contributions to the response function of a liquid scintillator to incident mono-energetic neutrons. As it will be shown, the interpretation of the response function even in this simple situation is far from trivial. The results obtained in this particular case are easily generalized for a broad neutron energy spectrum.
The tutorial is therefore structured as follows. Section \ref{sec:EJ301} is dedicated to a brief overview of the scintillation process, of the resulting recoil particle light output response function and of a simplified Monte Carlo code used for the study of multiple elastic scattering. Section \ref{sec:singleNP} is devoted to the study of the light response function in the case of single elastic scattering processes. Section \ref{sec:doubleNP} is dedicated to the determination of the light response function in the case of multiple elastic scattering of a neutron with particles of the same species (for example, only with protons or only with carbon atoms). Section \ref{sec:mixedNP} discusses the mixed double scattering case in which the neutron scatters elastically with two different particles (for example, first with a carbon atom and then with a hydrogen atom). In section \ref{sec:Comparison}, the light response functions calculated with full Monte Carlo simulations are qualitatively interpreted with the help of the response functions derived in the previous two sections for each elastic scattering process. Section \ref{sec:complications} briefly addresses some aspects affecting the light response function that have been neglected in the previous sections and elucidates the use of the response function matrix for the interpretation of the liquid scintillator response function when non-mono-energetic neutron sources are present, such as in the case of fusion reactions. In addition, the forward modelling method is also discussed. Final comments and remarks are presented in section \ref{sec:summary}.
\section{Neutron response function of liquid scintillators}
\label{sec:EJ301}
Scintillators are among the most common types of detectors for both $\gamma$-ray and neutron radiation detection and operate on the principle of induced fluorescent light emission upon the interaction of the radiation within the material.
A detailed description of the physical principles, material composition and application of scintillators can be found in \cite{Knoll}. For some scintillation materials, light emission depends on the type of incident radiation and therefore it is possible to discriminate between $\gamma$-rays and neutrons, a feature that is essential in fusion application as neutron diagnostics often operates in mixed fields.
Liquid scintillators based on organic materials such as benzene (C$_6$H$_6$), toluene (C$_6$H$_5\cdot$CH$_3$) and xylene (C$_6$H$_4\cdot$(CH$_3$)$_2$), that is hydrogen-rich materials, belong to this category. The main interaction mechanism between $\gamma$-rays and liquid scintillators is Compton scattering with the electrons, while neutrons interact by elastic scattering with the hydrogen and carbon nuclei\footnote{Inelastic scattering and nuclear reactions with carbon atoms are also possible for neutron energies above 4 MeV and are discussed briefly later.}.
In both cases a recoil particle is generated (an electron for $\gamma$-rays and a proton or carbon nucleus for neutrons). Coulomb interactions between recoil particles and the organic scintillator molecules result in the conversion of the recoil kinetic energy into molecular excitation energy. Part of this excitation energy is then dissipated via thermal quenching and part via fluorescent light emission in the UV region of the visible spectrum. Typical light pulses have a very fast rise time (few nanoseconds) and decay constants between 20 and 200 ns.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure03.eps}
\end{center}
\caption{Liquid scintillator light output as a function of the recoil particle energy. Data adapted from \cite{Verbinski}.}
\label{fig:Verbinski}
\end{figure}
The fluorescent light is then transported (either directly or via reflections off the walls of the liquid scintillator housing) to a photocathode: depending on the volume of the scintillator, significant attenuation of the fluorescent light in the scintillator medium itself can occur.
The fluorescent light is then converted into electrons at the photocathode via the photoelectric effect, and the initial few ejected photo-electrons are subsequently accelerated, focussed and multiplied via secondary electron emission in a multi-stage dynode photomultiplier, resulting in large gains ($10^5$ - $10^9$). The conversion of photons into electrons and their multiplication is a process that, under the correct experimental conditions, is highly linear. Non-linearities occur, for example, if the photomultiplier is operated at high gains, at high counting rates or in the presence of even weak magnetic fields.
The electron current at the anode of the photomultiplier is converted into a voltage via a load resistor and the detector voltage output is fed into the acquisition system by co-axial cables (usually tens of meters in fusion experiments). Attenuation and distortion of the voltage signal in the cables will occur but these processes are linear and easily modelled.
In present-day fusion neutron diagnostics the detector voltage signal is digitized at very high sampling frequencies ranging from 250 MHz up to 4 GHz with high resolution from 10 to 14 bits. The ADC process can be considered linear if the integral and differential non-linearities are negligible, which is often the case.
Several conversion mechanisms therefore contribute to the final signal that is measured for a single neutron interacting with the detector. If the liquid scintillator is to be used as a spectrometer, it is then important that a good linearity exists between the recorded signal and the incident neutron energy.
Under the correct experimental conditions all the above processes are linear but one: the fluorescent light output for recoil particles from neutron elastic collision is inherently non-linear as shown in figure \ref{fig:Verbinski}. This non-linearity complicates significantly the neutron response function as discussed in detail in sections \ref{sec:singleNP}, \ref{sec:doubleNP} and \ref{sec:mixedNP}.
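To make this non-linearity concrete, proton light output functions for liquid scintillators of this type are often parametrized as $L(E) = a_1 E - a_2\left[1 - \exp(-a_3 E^{a_4})\right]$; the coefficients in the sketch below are illustrative values of this functional form, not the tabulated data of figure \ref{fig:Verbinski} or the light output function used by NRESP:

```python
import math

# Illustrative coefficients for a proton light-output function of the
# common form L(E) = a1*E - a2*(1 - exp(-a3 * E**a4)); placeholder
# values for demonstration only.
A1, A2, A3, A4 = 0.83, 2.82, 0.25, 0.93

def proton_light_output(E_MeV):
    """Light output (electron-equivalent units) for a recoil proton of
    energy E_MeV.  Strongly non-linear at low energies: a proton with
    twice the energy produces more than twice the light."""
    return A1 * E_MeV - A2 * (1.0 - math.exp(-A3 * E_MeV ** A4))
```

Electrons, by contrast, respond almost linearly above roughly 100 keV, which is why light output is conventionally quoted in electron-equivalent energy (MeVee).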
\begin{figure}
\begin{center}
\includegraphics[scale = 0.65]{Figure01.eps}
\end{center}
\caption{Example of the light response function of a liquid scintillator to a mono-energetic neutron with an energy of 2.45 MeV, for a liquid scintillator of 6 cm diameter and two different thicknesses, with and without the detector energy resolution effect included.}
\label{fig:RFMEN}
\end{figure}
Examples of the response function calculated with NRESP of two liquid scintillators with the same diameter (6 cm) and two different thickness (0.5 and 3.0 cm) for a beam of incident mono-energetic neutrons with an energy of 2.45 MeV are shown in figure \ref{fig:RFMEN}.
As can be seen, the response function exhibits a rich series of features some of which depends on the detector thickness. For the purpose of this tutorial a Simplified monTe cArlo neutron Response funcTion Simulator (STARTS) has been written to calculate the probability density function of the energy and light output distributions of multi-scattered neutrons and of all recoil protons and carbon nuclei in any possible combination. Contrary to the full Monte Carlo codes discussed above, STARTS is specifically aimed at the calculation of the pulse height spectra for specific types of elastic collisions.
The following simplifications have been made in STARTS: \emph{i}) no cross-section dependence is included as the particles involved in the elastic scattering are selected by the user, \emph{ii}) all neutrons undergo a fixed number of elastic scatterings defined by the user, \emph{iii}) the detector is considered infinite in size, \emph{iv}) recoil particles deposit all their energy in the detector, \emph{v}) the incident neutrons are mono-energetic, \emph{vi}) no energy resolution broadening is included in the calculation of the response function, and \emph{vii}) the light output function used is taken from \cite{NRESP}.
The first simplification implies that the relative intensity of the contribution to the response function from neutron-proton (np) and from neutron-carbon (nC) elastic collisions is neglected: this does not affect the shape of the response function. Simplifications \emph{iii}) and \emph{iv}) do not affect in any qualitative way the response function while simplification \emph{vi}) has an important effect which is however well known and easily understood. The impact of all these simplifications is briefly discussed in section \ref{sec:singleNP}.
\section{Response function for single neutron scattering}
\label{sec:singleNP}
Consider an incident neutron with initial energy $E_\mathrm{n,0}$ making an elastic collision with a proton, assumed to be at rest, as depicted schematically in panel (a) of figure \ref{fig:SingleScattering}. After the elastic collision the energies of the neutron and of the recoil proton will be $E_\mathrm{n,1}$ and $E_\mathrm{p,1}$, where the index ``1'' indicates that these quantities refer to the energy of the particles after the first collision. From classical mechanics, invoking the conservation of energy and linear momentum, it can be shown that in the case of a neutron colliding with a generic target particle these energies are given by:
\begin{align}
\label{eq:En1}
& \dfrac{E_\mathrm{n,1}(\theta)}{E_\mathrm{n,0}} = \dfrac{(1+\alpha) + (1-\alpha) \cos \theta}{2} \\
\label{eq:Et1}
& \dfrac{E_\mathrm{t,1}(\theta)}{E_\mathrm{n,0}} =\dfrac{(1 - \alpha)(1-\cos\theta)}{2}
\end{align}
where the index $t$ identifies the recoil target, $\theta$ is the scattering angle in the centre of mass reference system, $\alpha = (A-1)^2/(A+1)^2$ and $A = m_\mathrm{t}/m_\mathrm{n}$, that is, the ratio between the target and neutron masses. It is useful, especially for the discussion in the following sections, to observe that $E_\mathrm{n,1}/\alpha$ represents the maximum possible energy from which a neutron with energy $E_\mathrm{n,1}$ could have originated in a single elastic scattering.
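As a quick numerical check, equations \eqref{eq:En1} and \eqref{eq:Et1} can be coded directly (energies in MeV; the function and variable names are our own):

```python
import math

def scattered_energies(E_n0, A, theta):
    """Energies after one elastic collision of a neutron of energy E_n0
    with a target of mass ratio A = m_t/m_n, for a centre-of-mass
    scattering angle theta (radians).  Implements equations (En1) and
    (Et1) above; returns (E_n1, E_t1)."""
    alpha = ((A - 1.0) / (A + 1.0)) ** 2
    E_n1 = E_n0 * ((1 + alpha) + (1 - alpha) * math.cos(theta)) / 2
    E_t1 = E_n0 * (1 - alpha) * (1 - math.cos(theta)) / 2
    return E_n1, E_t1
```

For a head-on np collision ($A = 1$, $\theta = \pi$) the neutron transfers all of its energy to the proton, while for carbon ($A = 12$, $\alpha \approx 0.716$) at most a fraction $1 - \alpha \approx 0.28$ of the neutron energy can be transferred in a single collision.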
%
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{Figure02.eps}
\end{center}
\caption{Panel (a): depiction of an incoming neutron undergoing single elastic scattering resulting in a scattered neutron and a recoil proton; the shaded area indicates the liquid scintillator volume. Panel (b): probability density function for the energy of the scattered neutron.}
\label{fig:SingleScattering}
\end{figure*}
Under the assumption of an isotropic cross-section for the elastic scattering in the centre of mass, the scattering angle can assume any value between 0 and $\pi$ with equal probability\footnote{This assumption is a good approximation for np elastic collision for neutrons of few MeV and slightly less accurate for the nC elastic collision case in which forward scattering is more favourable. For the sake of simplicity, this effect is here neglected as it makes a small difference to the calculations and results here discussed.}.
It can then be shown that the probability of the scattered neutron having an energy in the interval $[E_\mathrm{n,1}, E_\mathrm{n,1} + dE_\mathrm{n,1}]$ is given by:
\begin{equation}
\label{eq:pdfSCn}
p(E_{\mathrm{n},1}) dE_{\mathrm{n},1} =
\begin{cases}
\dfrac{1}{1-\alpha} \dfrac{1}{E_\mathrm{n,0}} dE_{\mathrm{n},1} & \hspace{0.1cm} \mathrm{if~} E_{\mathrm{n},1} \in [\alpha E_\mathrm{n,0}, E_\mathrm{n,0}] \\[14pt]
0 & \hspace{0.1cm} \mathrm{otherwise}
\end{cases}
\end{equation}
where $p(E_{\mathrm{n},1})$ is the Probability Density Function (PDF) of the continuous random variable $E_{\mathrm{n},1}$. Note that $p(E_{\mathrm{n},1})$ is a properly normalized PDF\footnotemark. In fact, the total probability of the neutron having an energy in the interval $[\alpha E_\mathrm{n,0}, E_\mathrm{n,0}]$ after one collision is the sum (integral) of all the probabilities of the neutron having an energy in the interval $[E_\mathrm{n,1}, E_\mathrm{n,1} + dE_\mathrm{n,1}]$ over all possible energies, which is equal to:
\begin{equation}
\label{eq:TotPSCn}
\int_{\alpha E_\mathrm{n,0}}^{E_\mathrm{n,0}} p(E_{\mathrm{n},1}) dE_{\mathrm{n},1} = 1
\end{equation}
as can easily be verified by inserting equation \eqref{eq:pdfSCn} into the above expression. Note that equation \eqref{eq:TotPSCn} is the total probability law for mutually exclusive events for continuous random variables.
\footnotetext{\label{fn:probdef} This probability can also be thought of as the probability of the neutron having that energy given that an elastic collision has occurred. The latter probability is linked to the detection efficiency as discussed in section \ref{sec:efficiency}.}
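As an illustration, the uniform energy distribution of equation \eqref{eq:pdfSCn} can be checked with a short Monte Carlo sketch (independent of the tutorial's codes; it assumes the standard relation $\alpha = ((A-1)/(A+1))^2$ for a target of mass number $A$ and samples $\cos\theta$ uniformly, as appropriate for isotropic centre-of-mass scattering):

```python
import random

# Illustrative sketch (not part of the tutorial's codes): Monte Carlo check
# that, for isotropic centre-of-mass scattering, the scattered neutron energy
# is uniform on [alpha*E0, E0]. Assumes the standard alpha = ((A-1)/(A+1))**2.
random.seed(1)
E0 = 2.45                        # incident neutron energy in MeV
alpha = (11.0 / 13.0) ** 2       # carbon target (A = 12), alpha ~ 0.716

samples = []
for _ in range(200_000):
    mu = random.uniform(-1.0, 1.0)  # cos(theta_CM), uniform for isotropy
    samples.append(E0 * ((1 + alpha) + (1 - alpha) * mu) / 2.0)

# All energies fall in [alpha*E0, E0] ...
assert min(samples) >= alpha * E0 and max(samples) <= E0
# ... and the distribution is flat: two sub-intervals of equal width are
# equally populated within Monte Carlo noise.
width = (1 - alpha) * E0
lo = sum(alpha * E0 <= e < alpha * E0 + width / 4 for e in samples)
hi = sum(E0 - width / 4 < e <= E0 for e in samples)
assert abs(lo - hi) / 200_000 < 0.01
```

The same sketch with $\alpha = 0$ reproduces the proton case, in which the recoil energy is uniform on $[0, E_\mathrm{n,0}]$.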
Similarly, the probability of the recoil target of having an energy in the interval $[E_\mathrm{t,1}, E_\mathrm{t,1} + d E_\mathrm{t,1}]$ is given by:
\begin{equation}
\label{eq:pdfSCt}
p(E_{\mathrm{t},1}) dE_{\mathrm{t},1}=
\begin{cases}
\dfrac{1}{1-\alpha} \dfrac{1}{E_\mathrm{n,0}} dE_{\mathrm{t},1} & \hspace{0.1cm} \mathrm{if~} E_{\mathrm{t},1} \in [0, (1 - \alpha) E_\mathrm{n,0}] \\[14pt]
0 & \hspace{0.1cm} \mathrm{otherwise}
\end{cases}.
\end{equation}
According to equations \eqref{eq:pdfSCn} and \eqref{eq:pdfSCt}, the probability of observing a scattered neutron and recoil target with energies in the range $[E, E+dE]$ is non-zero and constant within the appropriate energy interval as shown in panel (b) of figure \ref{fig:SingleScattering}. In the case of np elastic scattering, $\alpha = 0$ which implies that the recoil proton energy $E_\mathrm{p,1}$ can assume any value between 0 MeV (``grazing'' collision) and $E_\mathrm{n,0}$ (``head-on'' collision) with equal probability. In the particular case depicted in panel (b) of figure \ref{fig:SingleScattering}, the energy of the scattered neutron is $E_\mathrm{n,1}$ and the energy of the recoil proton is $E_\mathrm{p,1} = E_\mathrm{n,0} - E_\mathrm{n,1}$.
Since for mono-energetic neutrons of energy $E_\mathrm{n,0}$ the recoil protons can assume any energy in $[0, E_\mathrm{n,0}]$ with equal probability, it follows that, in this simplified scenario, the response function of the detector is the ``box-like'' function depicted in panel (b) of figure \ref{fig:SingleScattering}.
As can be seen from figure \ref{fig:Verbinski}, a recoil proton of energy $E_\mathrm{p,1}= E_\mathrm{n,0} = 2.45$ MeV would generate a light output of $L \approx 0.8$, which is exactly the upper edge of the response functions shown in figure \ref{fig:RFMEN}.
This simple ``box-like'' response function is modified by the non-linear dependence of the light output function on the recoil particle's energy. Following \cite{Knoll}, if one assumes for the light output function the relation:
\begin{equation}
\label{eq:LO}
L(E_\mathrm{p}) = k E_\mathrm{p}^\beta
\end{equation}
then:
\begin{equation}
\label{eq:invLO}
E_\mathrm{p}(L) = \left( \dfrac{L}{k} \right)^{1/\beta}
\end{equation}
and therefore:
\begin{equation}
\label{eq:dEpdL}
\dfrac{\mathrm{d} E_\mathrm{p}(L)}{\mathrm{d} L} = \dfrac{1}{\beta L} \left( \dfrac{L}{k} \right)^{1/\beta}
\end{equation}
where $k \approx 0.21$ MeV$^{-\beta}$ and $\beta \approx 3/2$ give a good approximation to the light output function for recoil protons shown in figure \ref{fig:Verbinski} up to 3 MeV.
Equations \eqref{eq:pdfSCt} (with $\alpha = 0$) and \eqref{eq:dEpdL} can be combined to obtain the light response function for the recoil protons:
\begin{equation}
\label{eq:NLOSnp}
\dfrac{\mathrm{d} N}{\mathrm{d} L} = \dfrac{\mathrm{d} N}{\mathrm{d} E_\mathrm{p,1}} \dfrac{\mathrm{d} E_\mathrm{p,1}}{\mathrm{d} L} = \dfrac{1}{E_\mathrm{n,0}} \dfrac{1}{\beta L} \left( \dfrac{L}{k} \right)^{1/\beta}
\end{equation}
which implies that the light output response function increases for low light yields. Figure \ref{fig:LRFEp} shows the light response function for the 3 cm thick detector calculated by NRESP, by equation \eqref{eq:NLOSnp} and by STARTS. As can be seen, the overall features at low light output yields ($L \lesssim 0.2$) and for $L \approx 0.8$ are well described qualitatively\footnote{The term ``qualitative'' is here used to indicate that the absolute amplitude of the pulse height spectrum can not be calculated by STARTS and ``ad-hoc'' scaling factors are used instead.} but not in the range in between.
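Equation \eqref{eq:NLOSnp} can likewise be verified with a small Monte Carlo sketch (illustrative only, using the power-law light output of equation \eqref{eq:LO} with the $k$ and $\beta$ values given above rather than the tabulated function):

```python
import random

# Illustrative sketch: Monte Carlo check of the single-scattering light
# response dN/dL, using the power-law light output L = k*E**beta with the
# k and beta values quoted in the text.
random.seed(2)
k, beta = 0.21, 1.5
E0 = 2.45
N = 500_000

# Recoil proton energies are uniform on [0, E0] ("box-like" response):
L_samples = [k * random.uniform(0.0, E0) ** beta for _ in range(N)]
Lmax = k * E0 ** beta            # ~0.805 for 2.45 MeV protons

def dN_dL(L):
    """Analytic light response for single np scattering."""
    return (1.0 / E0) * (1.0 / (beta * L)) * (L / k) ** (1.0 / beta)

# Histogram fraction in one interior bin vs the analytic density:
a, b = 0.30, 0.35
frac = sum(a <= L < b for L in L_samples) / N
assert abs(frac - dN_dL(0.5 * (a + b)) * (b - a)) / frac < 0.05
```

The histogram of `L_samples` reproduces the $1/(\beta L) (L/k)^{1/\beta}$ rise at low light yields and the sharp edge at $L_\mathrm{max}$.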
\begin{figure}
\begin{center}
\includegraphics[scale = 0.55]{Figure01c.eps}
\end{center}
\caption{Example of the light response function to a mono-energetic neutron with energy of 2.45 MeV for a liquid scintillator of 6 cm diameter and two different thicknesses, with and without the detector energy resolution effect included.}
\label{fig:LRFEp}
\end{figure}
The ``bump'' seen in figures \ref{fig:RFMEN} and \ref{fig:LRFEp} at intermediate $L$ values can not be explained as the result of the detector finite resolution, of edge effects for detectors of finite size, or of the contribution from nC scattering. The effect of the detector finite energy resolution is mainly to ``smear out'' the sharp edge at the maximum light output, as shown by the red curves in figure \ref{fig:RFMEN}.
For a detector of finite size it is possible for the recoil protons generated near the outer surface of the scintillator to escape after having deposited in the medium only a fraction of their energy. The mean free path $\lambda$ of 2.45 MeV protons in liquid scintillators is of the order of $\lambda \lesssim 0.1$ mm.
Even assuming that all the recoil protons generated within a distance $\lambda$ from the outer surfaces escape, the overall light response function would be reduced in its amplitude by approximately 1 \% but its shape would not be modified. The effect would be even smaller for recoil protons of lower energies. According to equation \eqref{eq:Et1}, the maximum energy for a recoil carbon, for which $\alpha \approx 0.716$, is $E_\mathrm{c,1} = (1-\alpha) E_\mathrm{n,0} \approx 0.7$ MeV. The corresponding light output is $L \approx 0.01$ (see figure \ref{fig:Verbinski}) and therefore the contribution to the light response function from recoil carbon nuclei is confined to the very low end of the response function. On the scale used in figure \ref{fig:LRFEp} this is hardly visible.
It is possible to conclude therefore that the ``box-like'' response function for single np elastic scattering, combined with the light output non-linearity and the detector finite energy resolution, describes quite well the simulated response function, especially for the thin detector. However, for thick detectors this is not the case: in \cite{Knoll} the remaining ``bump'' at intermediate light outputs is described as the result of neutron double elastic scattering with protons in the scintillator. Section \ref{sec:doubleNP} is dedicated to the understanding of the origin and shape of this feature.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.5]{Figure04.eps}
\end{center}
\caption{Single and double elastic scattering events for a neutron incident on a liquid scintillator (gray shaded area). Tracks (a) and (b) are single scattering with a proton and a carbon atom respectively. Tracks (c) to (f) represent the four possible double collisions: np \& np (c); nC \& np (d); np \& nC (e) and nC \& nC (f).}
\label{fig:DoubleScatteringTracks}
\end{figure}
\section{Response function for multiple neutron scattering}
\label{sec:doubleNP}
In the case of an incident neutron making two elastic collisions, four different outcomes are possible, as shown in figure \ref{fig:DoubleScatteringTracks}: a double elastic collision on two protons (track ``c'') or on two carbon nuclei (track ``f''), or a mixed scattering first on a carbon nucleus and then on a proton (track ``d'') or the other way around (track ``e'').
As discussed in section \ref{sec:singleNP}, the contribution to the total light output from recoil carbon nuclei is negligible and therefore the double scattering on carbon nuclei can also be neglected and is not further discussed here. In a similar fashion, the total light output in the case in which the neutron first collides with a proton and then with a carbon nucleus is almost equivalent to that of a single elastic scattering with a proton. Tracks of type ``e'' therefore contribute to the response function as the single np events described in section \ref{sec:singleNP}.
The response function from double scattering on protons (track ``c'') is discussed in this section, while track ``d'' (collision on a carbon nucleus followed by a collision on a proton) is discussed in section \ref{sec:mixedNP}.
The discussion of the neutron double elastic scattering on protons is divided into four parts. The probability density function for the energy of the neutron after two elastic collisions is derived in section \ref{sec:doubleNP_projectile} while the probability density function for the energy of the second recoil target is derived in section \ref{sec:doubleNP_target}. The probability density function for the total energy deposited by the neutron in two elastic collisions is derived in section \ref{sec:doubleNP_DepEnergy} and finally the PDF for the corresponding total light output is obtained in section \ref{sec:doubleNP_LightOutput}.
\subsection{Doubly scattered neutron energy probability density function}
\label{sec:doubleNP_projectile}
The probability of a neutron with initial energy $E_\mathrm{n,0}$ having energy in the range $[E_\mathrm{n,2}, E_\mathrm{n,2} + dE_\mathrm{n,2}]$ after two elastic collisions given that after the first collision it had an energy in the range $[E_\mathrm{n,1}, E_\mathrm{n,1} + dE_\mathrm{n,1}]$ is given by the conditional probability:
\begin{equation}
\label{eq:CondProb}
p(E_{\mathrm{n},2} \mid E_{\mathrm{n},1}) dE_\mathrm{n,2} = \dfrac{p(E_\mathrm{n,2} \cap E_\mathrm{n,1})}{p(E_\mathrm{n,1})} dE_\mathrm{n,2}
\end{equation}
where $p(E_\mathrm{n,2} \cap E_\mathrm{n,1})$ is the PDF of the joint events resulting in the energies $E_\mathrm{n,1}$ and $E_\mathrm{n,2}$. Recalling equation \eqref{eq:pdfSCn}, the probability $p(E_{\mathrm{n},2} \mid E_{\mathrm{n},1}) dE_\mathrm{n,2}$ is given by:
\begin{equation}
\label{eq:pdfSC2nd}
p(E_\mathrm{n,2}|E_\mathrm{n,1}) dE_\mathrm{n,2} =
\begin{cases}
\dfrac{1}{1-\alpha} \dfrac{1}{E_{\mathrm{n},1}} dE_{\mathrm{n},2} & \hspace{0.1cm} \mathrm{if~} E_{\mathrm{n},2} \in [\alpha E_{\mathrm{n},1}, E_{\mathrm{n},1}] \\[14pt]
0 & \hspace{0.1cm} \mathrm{otherwise}
\end{cases}
\end{equation}
which is equal to equation \eqref{eq:pdfSCn} with $E_\mathrm{n,0}$ and $E_\mathrm{n,1}$ replaced by $E_\mathrm{n,1}$ and $E_\mathrm{n,2}$ respectively.
The probability $p(E_\mathrm{n,2})dE_\mathrm{n,2}$ of a neutron with initial energy $E_\mathrm{n,0}$ having energy in the range $[E_\mathrm{n,2}, E_\mathrm{n,2} + dE_\mathrm{n,2}]$ after two elastic collisions, regardless of the energy it had after the first collision, can be obtained, using the law of total probability, as the sum of the probabilities of all the possible combinations of two collisions that could have resulted in $E_\mathrm{n,2}$ being in that range. Each joint event has a probability given by $p(E_\mathrm{n,2} \cap E_\mathrm{n,1}) dE_\mathrm{n,2}$ which, using equation \eqref{eq:CondProb} and replacing $p(E_\mathrm{n,1})$ with equation \eqref{eq:pdfSCn} in combination with equation \eqref{eq:pdfSC2nd}, can be written as:
\begin{equation}
\label{eq:CondProExplicit}
p(E_\mathrm{n,2} \cap E_\mathrm{n,1}) dE_\mathrm{n,2} = \dfrac{1}{(1-\alpha)^2}\dfrac{1}{E_\mathrm{n,0}}\dfrac{1}{E_\mathrm{n,1}} dE_{\mathrm{n,2}}.
\end{equation}
The total probability $p(E_\mathrm{n,2})dE_\mathrm{n,2}$ is then obtained by integrating over all possible energies $E_\mathrm{n,1}$:
\begin{equation}
\label{eq:TotPbDS}
p(E_\mathrm{n,2}) dE_{\mathrm{n,2}} = dE_{\mathrm{n,2}} \int \dfrac{1}{(1-\alpha)^2}\dfrac{1}{E_\mathrm{n,0}}\dfrac{1}{E_\mathrm{n,1}} dE_\mathrm{n,1}.
\end{equation}
Note that $E_\mathrm{n,2}$ ranges between the neutron initial energy $E_\mathrm{n,0}$, corresponding to two subsequent ``grazing'' collisions, and the minimum energy $\alpha^2 E_\mathrm{n,0}$ corresponding to two subsequent ``head-on'' collisions ($\theta = \pi$). If $E_\mathrm{n,2} \in [\alpha E_\mathrm{n,0}, E_\mathrm{n,0}]$, then $E_\mathrm{n,1}$ can take any value between $E_\mathrm{n,2}$ and $E_\mathrm{n,0}$ (see panel (b) of figure \ref{fig:DSPossibleEnergies}) so that equation \eqref{eq:TotPbDS} results in:
\begin{align}
\label{eq:TotPbDS1}
p(E_\mathrm{n,2}) dE_\mathrm{n,2} & = \int_{E_\mathrm{n,2}}^{E_\mathrm{n,0}} \dfrac{1}{(1-\alpha)^2}\dfrac{1}{E_\mathrm{n,0}}\dfrac{1}{E_\mathrm{n,1}} dE_\mathrm{n,1} dE_\mathrm{n,2}\\
\label{eq:TotPbDS1integrated}
& = \dfrac{1}{(1-\alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln \left( \dfrac{E_\mathrm{n,0}}{E_\mathrm{n,2}} \right) dE_\mathrm{n,2}.
\end{align}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.8]{Figure05a.eps}
\end{center}
\caption{A depiction of a doubly scattered neutron with energy $E_\mathrm{n,0}$ from the same type of target nucleus (indicated by the magenta arrows) is shown in panel (a). Panels (b) and (c) show the range of possible energies that the neutron can have after one or two elastic scattering (short and long horizontal bars respectively). The shaded area indicates the range of possible energies for the neutron after one collision if after two collisions it has energy $E_\mathrm{n,2}$.}
\label{fig:DSPossibleEnergies}
\end{figure}
If $E_\mathrm{n,2} \in [\alpha^2 E_\mathrm{n,0}, \alpha E_\mathrm{n,0}]$, then $E_\mathrm{n,1}$ can range only between $\alpha E_\mathrm{n,0}$ and $E_\mathrm{n,2}/ \alpha$ since in a single collision it is not possible for a neutron with initial energy $E_\mathrm{n,1} \in [E_\mathrm{n,2}/\alpha, E_\mathrm{n,0}]$ to reach a final energy in the range $[\alpha^2 E_\mathrm{n,0}, E_\mathrm{n,2}]$ (see panel (c) of figure \ref{fig:DSPossibleEnergies}). In this case then, equation \eqref{eq:TotPbDS} gives:
\begin{align}
\label{eq:TotPbDS2}
p(E_\mathrm{n,2}) dE_\mathrm{n,2} & = \int_{\alpha E_\mathrm{n,0}}^{E_\mathrm{n,2}/\alpha} \dfrac{1}{(1-\alpha)^2}\dfrac{1}{E_\mathrm{n,0}}\dfrac{1}{E_\mathrm{n,1}} dE_\mathrm{n,1} dE_\mathrm{n,2}\\
\label{eq:TotPbDS2integrated}
& = \dfrac{1}{(1-\alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln \left( \dfrac{E_\mathrm{n,2}}{\alpha^2 E_\mathrm{n,0}} \right) dE_\mathrm{n,2}.
\end{align}
To summarize, the probability $p(E_\mathrm{n,2})dE_\mathrm{n,2}$ of a neutron with initial energy $E_\mathrm{n,0}$ having energy in the range $[E_\mathrm{n,2}, E_\mathrm{n,2} + dE_\mathrm{n,2}]$ after two elastic collisions is given by:
\begin{equation}
\label{eq:TotPDSsummary}
p(E_\mathrm{n,2}) dE_\mathrm{n,2} =
\begin{cases}
\dfrac{1}{(1-\alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln \left( \dfrac{E_\mathrm{n,0}}{E_\mathrm{n,2}} \right)dE_\mathrm{n,2} & \mathrm{if~} E_\mathrm{n,2} \in [\alpha E_\mathrm{n,0}, E_\mathrm{n,0}] \\[14pt]
\dfrac{1}{(1-\alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln \left( \dfrac{E_\mathrm{n,2}}{\alpha^2 E_\mathrm{n,0}} \right)dE_\mathrm{n,2} & \mathrm{if~} E_\mathrm{n,2} \in [\alpha^2 E_\mathrm{n,0}, \alpha E_\mathrm{n,0}] \\[14pt]
0 & \mathrm{otherwise}
\end{cases}
\end{equation}
and the corresponding PDF is shown in panel (a) of figure \ref{fig:DoubleScattering}. In the particular case of the two targets being protons, $\alpha = 0$ and therefore $E_\mathrm{n,1}$ and $E_\mathrm{n,2}$ can both take any value in the interval $[0, E_\mathrm{n,0}]$ which implies that equation \eqref{eq:TotPbDS} should be integrated in the interval:
\begin{equation}
\label{eq:TotPbDSProton}
p(E_\mathrm{n,2}) dE_{\mathrm{n,2}} = dE_{\mathrm{n,2}} \int_{E_\mathrm{n,2}}^{E_\mathrm{n,0}} \dfrac{1}{E_\mathrm{n,0}}\dfrac{1}{E_\mathrm{n,1}} dE_\mathrm{n,1}.
\end{equation}
resulting in:
\begin{equation}
\label{eq:TotPDSsummaryProton}
p(E_\mathrm{n,2}) dE_\mathrm{n,2} =
\begin{cases}
\dfrac{1}{E_\mathrm{n,0}} \ln \left( \dfrac{E_\mathrm{n,0}}{E_\mathrm{n,2}} \right)dE_\mathrm{n,2} & \hspace{0.2cm} \mathrm{if~} E_\mathrm{n,2} \in [0, E_\mathrm{n,0}] \\[14pt]
0 & \hspace{1cm} \mathrm{otherwise.}
\end{cases}
\end{equation}
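Equation \eqref{eq:TotPDSsummaryProton} lends itself to a quick Monte Carlo verification (an illustrative sketch, not one of the tutorial's codes): after each np collision the neutron energy is uniform between zero and its current energy, so two nested uniform draws reproduce the logarithmic PDF.

```python
import math
import random

# Illustrative sketch: Monte Carlo check of the doubly scattered neutron
# energy PDF for two np collisions, p(E_n2) = ln(E0/E_n2)/E0 (alpha = 0).
random.seed(3)
E0 = 2.45
N = 500_000

samples = []
for _ in range(N):
    E1 = random.uniform(0.0, E0)             # neutron energy after 1st collision
    samples.append(random.uniform(0.0, E1))  # ... and after the 2nd collision

def cdf(x):
    """Cumulative of ln(E0/E)/E0, i.e. (x/E0)*(1 + ln(E0/x))."""
    return (x / E0) * (1.0 + math.log(E0 / x))

# Histogram fraction in a bin vs the integral of the analytic PDF:
a, b = 0.5, 0.7
frac = sum(a <= E < b for E in samples) / N
assert abs(frac - (cdf(b) - cdf(a))) < 0.005
```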
\begin{figure}
\begin{center}
\includegraphics[scale = 0.95]{Figure05.eps}
\end{center}
\caption{Probability density functions for the twice scattered neutron (panel (a)) and for the second recoil target (panel (b)) for an incident neutron of energy $E_\mathrm{n,0}$.}
\label{fig:DoubleScattering}
\end{figure}
\subsection{Second recoil target energy probability density function}
\label{sec:doubleNP_target}
Consider now the two recoil particles generated in the first and second elastic scattering of a single neutron with initial energy $E_\mathrm{n,0}$. The probability for the first recoil to have an energy in the range $[E_\mathrm{t,1}, E_\mathrm{t,1} + dE_\mathrm{t,1}]$ is given by equation \eqref{eq:pdfSCt} while the probability of observing the second recoil particle with an energy in the range $[E_\mathrm{t,2}, E_\mathrm{t,2} + dE_\mathrm{t,2}]$, given that after the first collision the neutron has an energy $E_\mathrm{n,1}$, is given by the conditional probability:
\begin{equation}
\label{eq:CondProbTarget}
p(E_\mathrm{t,2} \mid E_\mathrm{n,1}) dE_\mathrm{t,2} = \dfrac{p(E_\mathrm{t,2} \cap E_\mathrm{n,1})}{p(E_\mathrm{n,1})} dE_\mathrm{t,2} = \dfrac{1}{1 - \alpha} \dfrac{1}{E_\mathrm{n,1}} dE_\mathrm{t,2}.
\end{equation}
The probability of observing the joint events in which the first scattered neutron has energy $E_\mathrm{n,1}$ and collides with a second target transferring to it the energy $E_\mathrm{t,2}$ is:
\begin{equation}
\label{eq:JointProbTarget}
p(E_\mathrm{t,2} \cap E_\mathrm{n,1}) dE_\mathrm{t,2} dE_\mathrm{n,1} = p(E_\mathrm{t,2} \mid E_\mathrm{n,1})p(E_\mathrm{n,1}) dE_\mathrm{t,2} dE_\mathrm{n,1}.
\end{equation}
Substitution of the corresponding expressions for the PDFs results in:
\begin{equation}
\label{eq:CondProbTargetExplicit}
p(E_\mathrm{t,2} \cap E_\mathrm{n,1}) dE_\mathrm{t,2} dE_\mathrm{n,1} = \dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_\mathrm{n,1}} \dfrac{1}{E_\mathrm{n,0}} dE_\mathrm{n,1} dE_\mathrm{t,2}.
\end{equation}
The probability $p(E_\mathrm{t,2}) dE_\mathrm{t,2}$ of observing a second recoil target with an energy in the range $[E_\mathrm{t,2}, E_\mathrm{t,2} + dE_\mathrm{t,2}]$ is then obtained using the law of total probability, that is, by integrating equation \eqref{eq:CondProbTargetExplicit} over all possible energies $E_\mathrm{n,1}$ that could result in the second recoil particle being in that energy range.
Note that the second recoil particle can have an energy in the interval $E_\mathrm{t,2} \in [0, (1 - \alpha) \alpha E_\mathrm{n,0}]$ regardless of $E_\mathrm{n,1}$, so the corresponding probability $p(E_\mathrm{t,2}) dE_\mathrm{t,2}$ is:
\begin{align}
\label{eq:TotPb2ndScatt1}
p(E_\mathrm{t,2}) dE_\mathrm{t,2} & = \int_{\alpha E_\mathrm{n,0}}^{E_\mathrm{n,0}} \dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_\mathrm{n,1}} \dfrac{1}{E_\mathrm{n,0}} dE_\mathrm{n,1} dE_\mathrm{t,2}\\
\label{eq:TotPb2ndScatt1Integrated}
& = \dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln\left( \dfrac{1}{\alpha} \right) dE_\mathrm{t,2}.
\end{align}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure06.eps}
\end{center}
\caption{Probability density functions for the energy of recoil particles for an incident neutron of initial energy of 2.45 MeV undergoing elastic collisions with two particles of the same species: two protons (panel (a)) and two carbon atoms (panel (b)).}
\label{fig:DoubleScatteringSimulation}
\end{figure}
If $E_\mathrm{t,2} \in [(1 - \alpha) \alpha E_\mathrm{n,0}, (1 - \alpha) E_\mathrm{n,0}]$ then the incident neutron can only have energies in the interval $[E_\mathrm{t,2}/(1-\alpha), E_\mathrm{n,0}]$ so the corresponding probability $p(E_\mathrm{t,2}) dE_\mathrm{t,2}$ is:
\begin{align}
\label{eq:TotPb2ndScatt2}
p(E_\mathrm{t,2}) dE_\mathrm{t,2} & = \int_{\tfrac{E_\mathrm{t,2}}{1-\alpha}}^{E_\mathrm{n,0}} \dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_{\mathrm{n},1}} \dfrac{1}{E_{\mathrm{n},0}} dE_{\mathrm{n},1} dE_\mathrm{t,2}\\
\label{eq:TotPb2ndScatt2Integrated}
& = \dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln\left[\dfrac{(1-\alpha) E_\mathrm{n,0}}{ E_\mathrm{t,2}} \right] dE_\mathrm{t,2}.
\end{align}
To summarize, the probability of the second recoil particle to have an energy in the interval $[E_\mathrm{t,2}, E_\mathrm{t,2} + dE_\mathrm{t,2}]$ is given by:
\begin{equation}
\label{eq:TotPDSummary2ndTarget}
p(E_\mathrm{t,2}) dE_\mathrm{t,2} =
\begin{cases}
\dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln\left( \dfrac{1}{\alpha} \right) dE_\mathrm{t,2}& \mathrm{if~} E_\mathrm{t,2} \in [0 , (1-\alpha) \alpha E_\mathrm{n,0}]\\[14pt]
\dfrac{1}{(1 - \alpha)^2} \dfrac{1}{E_\mathrm{n,0}} \ln\left[\dfrac{(1-\alpha) E_\mathrm{n,0} }{ E_\mathrm{t,2}} \right] dE_\mathrm{t,2} & \mathrm{if~} E_\mathrm{t,2} \in [(1-\alpha) \alpha E_\mathrm{n,0}, (1-\alpha) E_\mathrm{n,0}]\\[14pt]
0 & \mathrm{otherwise}.
\end{cases}
\end{equation}
The PDF $p(E_\mathrm{t,2})$ is shown in panel (b) of figure \ref{fig:DoubleScattering}. In the particular case of the two targets being protons, equation \eqref{eq:TotPDSummary2ndTarget} reduces to:
\begin{equation}
\label{eq:TotPDSummary2ndProton}
p(E_\mathrm{t,2}) dE_\mathrm{t,2} =
\begin{cases}
\dfrac{1}{E_\mathrm{n,0}} \ln\left(\dfrac{E_\mathrm{n,0} }{ E_\mathrm{t,2} } \right) dE_\mathrm{t,2} & \mathrm{if~} E_\mathrm{t,2} \in [0, E_\mathrm{n,0}]\\[14pt]
0 & \mathrm{otherwise}.
\end{cases}
\end{equation}
Figure \ref{fig:DoubleScatteringSimulation} shows the probability density functions for the energy of two recoil protons and two recoil carbon nuclei calculated according to equations \eqref{eq:pdfSCt} and \eqref{eq:TotPDSummary2ndTarget} with $\alpha = 0$ and $\alpha = 0.716$ (panels (a) and (b) respectively) for $E_\mathrm{n,0} = 2.45$ MeV and compared to STARTS calculations.
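The piecewise structure of equation \eqref{eq:TotPDSummary2ndTarget} can also be checked numerically; the following illustrative sketch does so for the carbon case, where the flat region should contain a fraction $\alpha \ln(1/\alpha)/(1-\alpha)$ of the events (a value that follows directly from the formula, not quoted in the text).

```python
import math
import random

# Illustrative sketch (not one of the tutorial's codes): Monte Carlo check of
# the second recoil target PDF for two successive nC collisions.
random.seed(4)
E0 = 2.45                      # incident neutron energy in MeV
alpha = (11.0 / 13.0) ** 2     # carbon target (A = 12), alpha ~ 0.716
N = 500_000

t2 = []
for _ in range(N):
    E1 = random.uniform(alpha * E0, E0)               # neutron after 1st collision
    t2.append(random.uniform(0.0, (1 - alpha) * E1))  # energy given to 2nd carbon

c = 1.0 / ((1 - alpha) ** 2 * E0)
edge = (1 - alpha) * alpha * E0        # boundary between flat and log regions

def p_Et2(E):
    """Analytic PDF: flat below `edge`, logarithmic fall-off above it."""
    if E < edge:
        return c * math.log(1.0 / alpha)
    return c * math.log((1 - alpha) * E0 / E)

# The flat region contains a fraction alpha*ln(1/alpha)/(1-alpha) of events ...
frac_flat = sum(E < edge for E in t2) / N
assert abs(frac_flat - alpha * math.log(1 / alpha) / (1 - alpha)) < 0.01
# ... and a bin inside the log region matches the analytic density.
a, b = 0.60, 0.65
frac_bin = sum(a <= E < b for E in t2) / N
assert abs(frac_bin - p_Et2(0.5 * (a + b)) * (b - a)) < 0.002
```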
\subsection{Total deposited energy for doubly scattered neutrons}
\label{sec:doubleNP_DepEnergy}
Most liquid scintillators are small in size compared to the distance travelled by neutrons with energies in the MeV range in the time required for the ADC to record a few samples (a few nanoseconds). If a neutron were to scatter twice within this time interval, the two recoil particles would be generated on a time scale so short that even present day fast data acquisition systems would see the two collisions as a single event with an energy equal to the sum of the energies deposited by the individual recoil particles. As discussed at the beginning of section \ref{sec:doubleNP}, carbon contributes to the light response function only for $L \ll 0.1$ and therefore, even if the energy deposited can be a substantial fraction of the initial neutron energy, it is not discussed further.
Consider instead an incident neutron scattering elastically with two protons: the total energy deposited, in this case, is $E_\mathrm{d} = E_\mathrm{p,1} + E_\mathrm{p,2}$. The probability density function $p(E_\mathrm{d})$ can be obtained by observing that if the first recoil particle deposits an energy $E_\mathrm{p,1}$, then the probability of observing a deposited energy $E_\mathrm{d}$ is equal to the probability of the second recoil proton to have energy $E_\mathrm{p,2}$, that is:
\begin{equation}
\label{eq:DepEnergyCondProb}
p(E_\mathrm{d} \mid E_\mathrm{p,1}) dE_\mathrm{d} = p(E_\mathrm{p,2} \mid E_\mathrm{p,1}) dE_\mathrm{p,2} = \dfrac{1}{E_\mathrm{n,1}} dE_\mathrm{p,2} = \dfrac{1}{E_\mathrm{n,0} - E_\mathrm{p,1}} dE_\mathrm{p,2}.
\end{equation}
The probability of observing $E_\mathrm{p,1}$ and $E_\mathrm{d}$ is given by:
\begin{equation}
\label{eq:DepEnergyProbGivenEp1}
p(E_\mathrm{p,1} \cap E_\mathrm{d}) dE_\mathrm{d} dE_\mathrm{p,1} = p(E_\mathrm{p,2} \mid E_\mathrm{p,1})p(E_\mathrm{p,1}) dE_\mathrm{p,2} dE_\mathrm{p,1}.
\end{equation}
Integration over all possible energies $E_\mathrm{p,1}$, recalling that $p(E_\mathrm{p,1}) = 1/E_\mathrm{n,0}$, then gives the probability of observing $E_\mathrm{d}$ \footnotemark:
\begin{align}
\label{eq:DepEnergyProbConv}
p(E_\mathrm{d}) dE_\mathrm{d} & = \int_0^{E_\mathrm{d}} \dfrac{1}{E_\mathrm{n,0}} \dfrac{1}{E_\mathrm{n,0} - E_\mathrm{p,1}} dE_\mathrm{p,1} dE_\mathrm{d} \\
\label{eq:DepEnergyProb}
& = \dfrac{1}{E_\mathrm{n,0}} \ln \left( \dfrac{E_\mathrm{n,0}}{E_\mathrm{n,0} - E_\mathrm{d}} \right) dE_\mathrm{d} .
\end{align}
\footnotetext{
\label{fn:PDFconv}
Equation \eqref{eq:DepEnergyProbConv} is a special case of the general expression for the PDF of the sum of two continuous random variables $x$ and $y$ with PDFs $p_x$ and $p_y$ which is given by the convolution:
\begin{equation}
p_{x+y}(z) = p_x \otimes p_y = \int_{-\infty}^{\infty} p_x(\zeta)p_y(z - \zeta) d\zeta. \nonumber
\end{equation}
}
Figure \ref{fig:DoubleScatteringDepEnergy} shows the PDF for the total energy deposited by the two recoil protons from a neutron with $E_\mathrm{n,0} = 2.45$ MeV calculated according to equation \eqref{eq:DepEnergyProb} and with the STARTS code.
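Equation \eqref{eq:DepEnergyProb} can be verified in the same spirit (an illustrative sketch, not one of the tutorial's codes): draw $E_\mathrm{p,1}$ uniformly in $[0, E_\mathrm{n,0}]$, draw $E_\mathrm{p,2}$ uniformly in $[0, E_\mathrm{n,0} - E_\mathrm{p,1}]$, and compare the cumulative distribution of $E_\mathrm{d}$ with the integral of the analytic PDF.

```python
import math
import random

# Illustrative sketch: Monte Carlo check of the total deposited energy PDF
# p(E_d) = ln(E0/(E0 - E_d))/E0 for two np collisions.
random.seed(5)
E0 = 2.45
N = 500_000

Ed = []
for _ in range(N):
    Ep1 = random.uniform(0.0, E0)          # first recoil proton energy
    Ep2 = random.uniform(0.0, E0 - Ep1)    # second recoil proton energy
    Ed.append(Ep1 + Ep2)

def cdf(x):
    """Cumulative of the analytic PDF: [x + (E0 - x)*ln((E0 - x)/E0)] / E0."""
    u = E0 - x
    return (x + u * math.log(u / E0)) / E0

a = 1.5
frac = sum(E <= a for E in Ed) / N
assert abs(frac - cdf(a)) < 0.005          # matches within Monte Carlo noise
```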
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure06a.eps}
\end{center}
\caption{Simulated (red line) and theoretical (dashed black line) probability density function of the total deposited energy by two recoil protons for a neutron with initial energy of 2.45 MeV.}
\label{fig:DoubleScatteringDepEnergy}
\end{figure}
\subsection{Total light output for doubly scattered neutrons}
\label{sec:doubleNP_LightOutput}
The non-linear relation between the recoil proton energy deposited in the scintillator and the emitted light output transforms the two continuous random variables $E_\mathrm{p,1}$ and $E_\mathrm{p,2}$ into the two continuous random variables $L_1$ and $L_2$, thereby transforming their outcome space non-linearly as well. In particular, the outcome space for the total energy deposited by the two recoil protons, given by:
\begin{equation}
\label{eq:TotEDep}
E_\mathrm{p,1} + E_\mathrm{p,2} = E_\mathrm{d}
\end{equation}
becomes
\begin{equation}
\label{eq:TotLight}
L_1 + L_2 = L.
\end{equation}
where:
\begin{align}
\label{eq:Domain_L1}
&E_\mathrm{p,1} \in [0, E_\mathrm{n,0}] & &\Rightarrow & L_1 & \in [0, L(E_\mathrm{n,0})] \\
\label{eq:Domain_L2}
&E_\mathrm{p,2} \in [0, E_\mathrm{n,0} - E_\mathrm{p,1}] & &\Rightarrow & L_2 & \in [0, L(E_\mathrm{n,0} - E_\mathrm{p,1})].
\end{align}
Figure \ref{fig:DoubleScatteringSimulation2D} shows the effect of this non-linear transformation of the probability density function $p(E_\mathrm{p,2} \mid E_\mathrm{p,1})$ into $p(L_2 \mid L_1)$ for $E_\mathrm{n,0} = 2.45$ MeV calculated by STARTS.
For ``head-on'' collisions, the neutron energy is transferred only to one proton and the light output is $L(E_\mathrm{p}) \approx 0.8$.
For $\theta < \pi$, resulting for example in $E_\mathrm{p,1} = 0.425$ MeV, $E_\mathrm{p,2}$ can take any value in $[0, 2.025]$ MeV (see left panel of figure \ref{fig:DoubleScatteringSimulation2D}).
The recoil proton with energy $E_\mathrm{p,1} = 0.425$ MeV will result in a light pulse $L_1 = 0.05$ and therefore $L_2 \in [0, 0.60]$ (see right panel of figure \ref{fig:DoubleScatteringSimulation2D}). The maximum light output in this case is $L = 0.65$ for a total deposited energy of 2.45 MeV.
The insert on the right panel of figure \ref{fig:DoubleScatteringSimulation2D} shows $p(L_2 \mid L_1 = 0.05)$ calculated by STARTS.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.5]{Figure07.eps}
\end{center}
\caption{Left panel: PDF $p(E_\mathrm{p,2} \mid E_\mathrm{p,1})$ for the two recoil protons scattered elastically by a neutron with initial energy of 2.45 MeV: the vertical dashed line represents $p(E_\mathrm{p,2} \mid E_\mathrm{p,1} = 0.425 \textrm{~MeV})$. Right panel: PDF $p(L_2 \mid L_1)$ corresponding to the one based on the energy outcome space shown on the left panel: the vertical dashed line represents the range of possible light outputs $L_2$ given that for $E_\mathrm{p,1} = 0.425$ MeV, $L_1 = 0.05$. The corresponding simulated (black line) and theoretical (red line) PDFs are explicitly shown in the insert.}
\label{fig:DoubleScatteringSimulation2D}
\end{figure}
Note however that the observable quantity is not the total deposited energy $E_\mathrm{d}$ but the light output $L = L_1 + L_2$ where $L_1$ and $L_2$ must satisfy the condition:
\begin{equation}
\label{eq:Econs}
E_\mathrm{p}(L_1) + E_\mathrm{p}(L_2) \leq E_\mathrm{n,0}.
\end{equation}
The probability density function for the total light output $p_L$ is then calculated as the convolution $p_{L_1} \otimes p_{L_2}$\footnote{See footnote \ref{fn:PDFconv}.}. The first term, $p_{L_1}$, can be written in terms of $p(E_\mathrm{p,1})$ by observing that the probability $p(E_\mathrm{p,1})dE_\mathrm{p,1}$ of a recoil proton having an energy in the range $[E_\mathrm{p,1}, E_\mathrm{p,1}+ dE_\mathrm{p,1}]$ is equal to the probability $p(L_1)dL_1$ that the corresponding light output is in the range $[L_1, L_1+dL_1]$, from which follows\footnotemark:
\begin{equation}
\label{eq:PEeqPL}
p(L_1) = p(E_\mathrm{p,1}) \dfrac{dE_\mathrm{p}(L_1)}{dL_1}.
\end{equation}
\footnotetext{
The quantity $(dE_\mathrm{p}/dL)$ can be calculated numerically from a tabulated data set of $\lbrace E_\mathrm{p}, L \rbrace$ values or analytically if a functional dependence is given as, for example, in equation \eqref{eq:LO}. In this tutorial, $(dE_\mathrm{p}/dL)$ is calculated numerically using the light output function shown in figure \ref{fig:Verbinski}.}
Replacing $p(E_\mathrm{p,1})$ with its corresponding expression (see equation \eqref{eq:pdfSCt}), equation \eqref{eq:PEeqPL} becomes:
\begin{equation}
\label{eq:pdfL1}
p(L_1) = \dfrac{1}{E_\mathrm{n,0}} \dfrac{dE_\mathrm{p}(L_1)}{dL_1}.
\end{equation}
In a similar fashion, the PDF for the conditional probability $p(L_2 \mid L_1)dL_2$ is then:
\begin{equation}
\label{eq:pdfL2}
p(L_2 \mid L_1) = \dfrac{1}{E_\mathrm{n,0} - E_\mathrm{p}(L_1)} \dfrac{dE_\mathrm{p}(L_2)}{dL_2}
\end{equation}
and therefore, the probability density function $p(L)$ of observing a total light output $L$ resulting from two recoil protons with light pulses $L_1$ and $L_2$ is:
\begin{equation}
\label{eq:pdfLconv}
p(L) = \int_0^{L} \dfrac{1}{E_\mathrm{n,0}} \dfrac{1}{E_\mathrm{n,0} - E_\mathrm{p}(L_1)} \dfrac{dE_\mathrm{p}(L_1)}{dL_1} \dfrac{dE_\mathrm{p}(L_2)}{dL_2} dL_1.
\end{equation}
The PDFs $p(L_1)$ and $p(L_2 \mid L_1)$ are non-zero only if $L_1$ and $L_2$ satisfy the condition given in equation \eqref{eq:Econs}. This implies that for a given $E_\mathrm{n,0}$ and $L_1$ there is a maximum value $L_{2,\mathrm{max}}$ for which this condition is satisfied and is given by:
\begin{equation}
\label{eq:L2max}
L_\mathrm{2,max}(L_1) = L[E_\mathrm{n,0} - E_\mathrm{p}(L_1)].
\end{equation}
The dependence of $L_{2,\mathrm{max}}$ on $L_1$ is shown by the dashed black line in the left panel of figure \ref{fig:EconsL}: the region of possible values of $L_1$ and $L_2$ is the one below the curve $L_{2,\mathrm{max}}(L_1)$. The curve $L_{2,\mathrm{max}}(L_1)$ can be interpreted as all the possible combinations of $L_1$ and $L_2$ values for which $E_\mathrm{d} = E_\mathrm{n,0}$.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.5]{Figure11.eps}
\end{center}
\caption{Left panel: maximum possible light output $L_2$ as a function of $L_1$ for $E_\mathrm{n,0} = 2.45$ MeV. The red and blue lines are examples of possible observable light outputs from two recoil protons. Right panel: total deposited energy as a function of $L_1$ corresponding to the two possible observable light outputs shown on the left panel.}
\label{fig:EconsL}
\end{figure}
As a result, the regions where $p(L_1)$ and $p(L_2 \mid L_1)$ are non-zero depend on both $L$ and $E_\mathrm{n,0}$. Consider first the case where the observed total light output is $L_\mathrm{obs} = 0.4$: such a light output could be obtained by any combination of $L_1$ and $L_2$ values related by equation \eqref{eq:TotLight} (see the blue line in the left panel of figure \ref{fig:EconsL}). The corresponding deposited energy is shown by the blue line in the right panel of figure \ref{fig:EconsL}. Since in this case $E_\mathrm{d} < E_\mathrm{n,0}$, $p(L_1)$ and $p(L_2)$ are non-zero for all $L_1 \in [0, L_\mathrm{obs}]$.
Conversely, consider now the case in which the observed total light output is $L_\mathrm{obs} = 0.65$ (see red line in the left panel of figure \ref{fig:EconsL}). In this case, $p(L_1)$ and $p(L_2)$ are non-zero only if $L_1 \in [0, L_{1,a}] \cup [L_{1,b}, L_\mathrm{obs}]$ where $L_{1,a}$ and $L_{1,b}$ are the light outputs for which $L_\mathrm{obs} = L_{2,\mathrm{max}}(L_1)$. For $L_1 \in [L_{1,a}, L_{1,b}]$ it turns out that $E_\mathrm{p}(L_1) + E_\mathrm{p}(L_2) > E_\mathrm{n,0}$ (as shown by the red line on the right panel of figure \ref{fig:EconsL}) which is of course not physically possible.
It is clear, then, that as $L$ increases so does the convolution integral $p(L)$, until $L = L_\mathrm{B,1}$ with:
\begin{equation}
\label{eq:LB1}
L_\mathrm{B,1} = L_1^* + L_\mathrm{2, max}(L_1^*)
\end{equation}
where $L_1^*$ is the point of tangency, i.e.:
\begin{equation}
\label{eq:tangency}
\dfrac{d L_\mathrm{2, max}(L_1)}{d L_1} \bigg\vert_{L_1 = L_1^*} = -1.
\end{equation}
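The tangency condition \eqref{eq:tangency} can be checked numerically. The sketch below uses an illustrative light output function $L(E) = 0.2E^2/(E+1)$ (an assumption for this example only, not the parametrization used in this tutorial); for any strictly convex, monotonic light output function the tangency point corresponds to the symmetric energy split $E_\mathrm{p,1} = E_\mathrm{p,2} = E_\mathrm{n,0}/2$, so that $L_\mathrm{B,1} = 2L(E_\mathrm{n,0}/2)$:

```python
import numpy as np

# Illustrative light output function for recoil protons (MeVee);
# NOT the parametrization used in this tutorial.
def light(E):
    return 0.2 * E**2 / (E + 1.0)

def proton_energy(L):
    # Analytic inverse of light(E): positive root of 0.2 E^2 - L E - L = 0
    return (L + np.sqrt(L**2 + 0.8 * L)) / 0.4

E0 = 2.45  # incident neutron energy (MeV)

# Boundary curve L_2,max(L_1): the second proton takes all remaining energy
L1 = np.linspace(1e-6, light(E0) - 1e-6, 200_001)
L2max = light(E0 - proton_energy(L1))

# The line L_1 + L_2 = const is tangent to the boundary where its slope is -1,
# i.e. at the symmetric split E_p1 = E_p2 = E0/2 for a convex light function.
LB1 = np.min(L1 + L2max)
L1_star = L1[np.argmin(L1 + L2max)]

print(f"L_1* = {L1_star:.4f}, L_B,1 = {LB1:.4f}, 2*L(E0/2) = {2 * light(E0 / 2):.4f}")
```

With the tutorial's own light output function the same construction is consistent with the values quoted in the text ($L_\mathrm{B,1} \approx 2 L_1^*$).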
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure08.eps}
\end{center}
\caption{Evaluation of the PDF $p(L)$ for the total light output $L$ for $L = 0.25$, $L = 0.50$ and $L = 0.65$.}
\label{fig:Convolution}
\end{figure}
The PDF $p(L)$ reaches its maximum for $L = L_\mathrm{B,1}$ and goes to zero as $L \rightarrow 0$, since the regions where the PDFs $p(L_1)$ and $p(L_2)$ are non-zero become vanishingly small\footnote{In the case of the light output function used in this tutorial and for $E_\mathrm{n,0} = 2.45$ MeV, $L_1^* \approx 0.2697$ and $L_\mathrm{B,1} \approx 0.5389$.}.
The way in which the convolution integral in equation \eqref{eq:pdfLconv} is calculated as a function of $L$ is elucidated in figure \ref{fig:Convolution} which shows how the integrand depends on $L_1$ for three different values of the total light output $L$.
Figure \ref{fig:nppLO} shows the PDF $p(L)$ given by equation \eqref{eq:pdfLconv} for all possible values of the total light output, that is for $L \in [0, L_\mathrm{M}]$, compared with the results from STARTS.
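The double-scattering distribution can also be reproduced with a short Monte Carlo in the spirit of the simplified STARTS simulation. The sketch below assumes isotropic scattering in the centre of mass (uniform recoil energy distributions) and an illustrative light output function $L(E) = 0.2E^2/(E+1)$, an assumption of this example rather than the parametrization used in the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative light output function (MeVee); NOT the tutorial's parametrization.
def light(E):
    return 0.2 * E**2 / (E + 1.0)

E0 = 2.45      # incident neutron energy (MeV)
N = 500_000    # number of simulated double np scattering histories

# First np scattering: scattered neutron energy uniform in [0, E0] (alpha = 0)
En1 = rng.uniform(0.0, E0, N)
Ep1 = E0 - En1                       # first recoil proton energy
# Second np scattering: recoil proton energy uniform in [0, En1]
Ep2 = rng.uniform(0.0, 1.0, N) * En1

L = light(Ep1) + light(Ep2)          # total light output per history

# For a convex light function with light(0) = 0 the summed light output can
# never exceed the single-scattering "head-on" value light(E0).
counts, edges = np.histogram(L, bins=80, range=(0.0, light(E0)))
print(f"max L = {L.max():.3f}, single-scattering maximum L_M = {light(E0):.3f}")
```

The resulting histogram rises towards $L_\mathrm{B,1}$ and then drops, qualitatively similar to figure \ref{fig:nppLO}.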
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure09.eps}
\end{center}
\caption{Probability density function for the light output corresponding to the total energy deposited in the scintillator when a neutron with an energy of 2.45 MeV scatters elastically with two protons: simplified simulation (black) and expected (red). The vertical dashed lines indicate the values of the total light output shown in figure \ref{fig:Convolution}.}
\label{fig:nppLO}
\end{figure}
The contribution to the total light output response function from a neutron of a given initial energy undergoing two elastic scatterings with protons, shown in figure \ref{fig:nppLO}, can be identified in the NRESP response function of figure \ref{fig:LRFEp}: the sharp knee at $L \approx 0.539$ can now be understood in terms of the maximum energy repartition between the two recoil protons.
A closer comparison between figures \ref{fig:LRFEp} and \ref{fig:nppLO} reveals, however, that for $L < L_\mathrm{B,1}$ the NRESP response function does not drop as predicted by equation \eqref{eq:pdfLconv}, which indicates that double scattering on protons is not sufficient to reproduce it. For this to happen, it is necessary to consider the contribution from triple np scattering.
In a fashion similar to what has been done for the double scattering case, it is possible to write the PDF for the light output for the third recoil proton as:
\begin{equation}
\label{eq:TriplePDF}
p(L_3) = \dfrac{1}{E_\mathrm{n,0} - E_\mathrm{p}(L_1) - E_\mathrm{p}(L_2)} \dfrac{dE_\mathrm{p}(L_3)}{dL_3}.
\end{equation}
The probability density function of the sum $L = L_1 + L_2 + L_3$ is then calculated as the convolution $p_L = p_{L_1} \otimes p_{L_2} \otimes p_{L_3}$:
\begin{equation}
\label{eq:TripleCNV}
p(L) = \int_0^L p(L_1) \int_0^{L-L_1} p(L_2)\, p(L - L_1 - L_2)\, dL_2\, dL_1
\end{equation}
and is shown in figure \ref{fig:npppLO} together with the one calculated by STARTS. As can be seen, $p(L)$ does not drop much for $L \in [L_\mathrm{B,2}, L_\mathrm{B,1}]$, where $L_\mathrm{B,2}$ corresponds to the situation in which $E_\mathrm{p}(L_1) + E_\mathrm{p}(L_2) + E_\mathrm{p}(L_3) = E_\mathrm{n,0}$.
It is clear from the results shown in figures \ref{fig:LRFEp}, \ref{fig:nppLO} and \ref{fig:npppLO} that a linear combination of the response functions from one, two and three recoil protons can reproduce NRESP output.
This is however postponed until section \ref{sec:Comparison} as the last important component to the total light response function, that is the one arising from recoil protons from a neutron that has undergone a prior elastic collision with a carbon atom (track ``d'' of figure \ref{fig:DoubleScattering}), is discussed in the next section.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure09b.eps}
\end{center}
\caption{Probability density function for the light output corresponding to the total energy deposited in the scintillator when a neutron with an energy of 2.45 MeV scatters elastically with three protons: simplified simulation (black) and expected (red).}
\label{fig:npppLO}
\end{figure}
\section{Response function for neutron scattering with different targets}
\label{sec:mixedNP}
From the discussion in sections \ref{sec:singleNP} and \ref{sec:doubleNP_target} it is clear that, even if the light output from the recoil carbon makes a negligible contribution to the response function, the energy repartition between the recoil and scattered particles will affect the light output of the scattered proton in the second elastic collision.
The corresponding energy and light output PDFs can be derived by generalizing to the case where an incident neutron with initial energy $E_\mathrm{n,0}$ makes two elastic collisions with atoms characterized by $A_1$ and $A_2$ such that $A_1 > A_2$ (and therefore $\alpha_1 > \alpha_2$).
The probability of observing a neutron with energy between $[E_\mathrm{n,1}, E_\mathrm{n,1} + dE_\mathrm{n,1}]$ after the scattering with $A_1$ is:
\begin{equation}
\label{eq:mix_EnA1}
p(E_{\mathrm{n},1}) dE_{\mathrm{n},1} = \dfrac{1}{1-\alpha_1}\dfrac{1}{E_{\mathrm{n},0}} dE_{\mathrm{n},1}.
\end{equation}
The probability that the twice-scattered neutron has an energy in the range $[E_\mathrm{n,2}, E_\mathrm{n,2} + dE_\mathrm{n,2}]$ after the scattering with $A_2$ is obtained by applying the law of total probability to the joint event $p(E_\mathrm{n,2} \cap E_\mathrm{n,1}) = p(E_\mathrm{n,2} \mid E_\mathrm{n,1})\, p(E_\mathrm{n,1})$, where the integral is carried out over all possible values of $E_\mathrm{n,1}$:
\begin{equation}
\label{eq:mix_EnA2}
p(E_{\mathrm{n},2}) dE_{\mathrm{n},2} = \dfrac{1}{1-\alpha_1} \dfrac{1}{1-\alpha_2} \dfrac{dE_{\mathrm{n},2}}{E_{\mathrm{n},0}} \int \dfrac{1}{E_{\mathrm{n},1}} dE_{\mathrm{n},1}
\end{equation}
The integration limits above depend on the possible ranges of $E_\mathrm{n,2}$. In particular:
\begin{align}
\label{eq:mix_En2GivenEn1}
\mathrm{if~}
\begin{dcases}
E_{\mathrm{n}, 2} \in [\alpha_1 E_{\mathrm{n}, 0}, E_{\mathrm{n}, 0}] & \Rightarrow E_{\mathrm{n}, 1} \in [E_{\mathrm{n}, 2}, E_{\mathrm{n}, 0}] \\[6pt]
E_{\mathrm{n}, 2} \in [\alpha_2 E_{\mathrm{n}, 0}, \alpha_1 E_{\mathrm{n}, 0}] & \Rightarrow E_{\mathrm{n}, 1} \in [\alpha_1 E_{\mathrm{n}, 0}, E_{\mathrm{n}, 0}] \\[6pt]
E_{\mathrm{n}, 2} \in [\alpha_1 \alpha_2 E_{\mathrm{n}, 0}, \alpha_2 E_{\mathrm{n}, 0}] & \Rightarrow E_{\mathrm{n}, 1} \in [\alpha_1 E_{\mathrm{n}, 0}, E_{\mathrm{n}, 2}/\alpha_2].
\end{dcases}
\end{align}
then:
\begin{align}
\label{eq:mix_Et2PDFE1}
p(E_{\mathrm{n},2}) dE_{\mathrm{n},2} & = \dfrac{1}{1-\alpha_1} \dfrac{1}{1-\alpha_2} \dfrac{dE_{\mathrm{n},2}}{E_{\mathrm{n},0}} \times\\
&
\begin{dcases}
\ln \left(\dfrac{E_{\mathrm{n},0}}{E_{\mathrm{n},2}} \right) &\mathrm{if~} E_{\mathrm{n}, 2} \in [\alpha_1 E_{\mathrm{n}, 0}, E_{\mathrm{n}, 0}] \\[6pt]
\ln \left(\dfrac{1}{\alpha_1} \right) &\mathrm{if~} E_{\mathrm{n}, 2} \in [\alpha_2 E_{\mathrm{n}, 0}, \alpha_1 E_{\mathrm{n}, 0}]\\[6pt]
\ln \left(\dfrac{E_{\mathrm{n},2}}{\alpha_1 \alpha_2 E_{\mathrm{n},0}} \right) &\mathrm{if~} E_{\mathrm{n}, 2} \in [\alpha_1 \alpha_2 E_{\mathrm{n}, 0}, \alpha_2 E_{\mathrm{n}, 0}]\\[6pt]
0 &\mathrm{otherwise}.
\end{dcases}\nonumber
\end{align}
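As a sanity check, the piecewise expression above integrates to unity for any $0 < \alpha_2 < \alpha_1 < 1$. A quick numerical verification, with illustrative values $\alpha_1 = 0.6$ and $\alpha_2 = 0.2$ chosen purely for this example:

```python
import numpy as np

E0 = 2.45          # incident neutron energy (MeV)
a1, a2 = 0.6, 0.2  # illustrative alpha values (neither hydrogen nor carbon)

def pdf(E):
    # Piecewise PDF for the twice-scattered neutron energy
    norm = 1.0 / ((1 - a1) * (1 - a2) * E0)
    out = np.zeros_like(E)
    m1 = (E >= a1 * E0) & (E <= E0)
    m2 = (E >= a2 * E0) & (E < a1 * E0)
    m3 = (E >= a1 * a2 * E0) & (E < a2 * E0)
    out[m1] = np.log(E0 / E[m1])
    out[m2] = np.log(1.0 / a1)
    out[m3] = np.log(E[m3] / (a1 * a2 * E0))
    return norm * out

# Trapezoidal integration over the full support [a1*a2*E0, E0]
E = np.linspace(a1 * a2 * E0, E0, 2_000_001)
y = pdf(E)
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))
print(f"normalization: {integral:.6f}")
```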
In the case $\alpha_1 > 0$ and $\alpha_2 = 0$, the PDF $p(E_\mathrm{n,2})$ then becomes:
\begin{align}
\label{eq:mix_Et2PDFE1H}
p(E_{\mathrm{n},2}) dE_{\mathrm{n},2} = \dfrac{1}{1-\alpha_1} \dfrac{dE_{\mathrm{n},2}}{E_{\mathrm{n},0}} \times
\begin{dcases}
\ln \left(\dfrac{E_{\mathrm{n},0}}{E_{\mathrm{n},2}} \right) &\mathrm{if~} E_{\mathrm{n}, 2} \in [\alpha_1 E_{\mathrm{n}, 0}, E_{\mathrm{n}, 0}] \\[6pt]
\ln \left(\dfrac{1}{\alpha_1} \right) &\mathrm{if~} E_{\mathrm{n}, 2} \in [0, \alpha_1 E_{\mathrm{n}, 0}] \\[6pt]
0 &\mathrm{otherwise}.
\end{dcases}
\end{align}
The probability density function for the energy of the recoil target $A_1$ is given by \eqref{eq:pdfSCt} with $\alpha$ replaced by $\alpha_1$ if $E_{A_1,1} \in [0, (1-\alpha_1) E_{\mathrm{n},0}]$ and zero everywhere else.
The PDF for the energy of recoil target $A_2$ is given by equation \eqref{eq:mix_EnA2} but the integration limits are now given by:
\begin{align}
\label{eq:mix_Et2PDFconditions}
\mathrm{if~}
\begin{dcases}
E_{{A_2} , 1} \in [0, (1-\alpha_2) \alpha_1 E_{\mathrm{n}, 0}] &\Rightarrow E_{\mathrm{n}, 1} \in [\alpha_1 E_{\mathrm{n}, 0}, E_{\mathrm{n}, 0}] \\[6pt]
E_{{A_2}, 1} \in [(1-\alpha_2)\alpha_1 E_{\mathrm{n}, 0}, (1-\alpha_2) E_{\mathrm{n}, 0}] &\Rightarrow E_{\mathrm{n}, 1} \in \left[ \dfrac{E_{A_2,1}}{1-\alpha_2}, E_{\mathrm{n}, 0} \right].
\end{dcases}
\end{align}
In this case, $(1-\alpha_2) E_{\mathrm{n}, 0}$ corresponds to the maximum possible energy of $A_2$ if the neutron has lost no energy in the collision with $A_1$ (grazing collision), while $(1-\alpha_2) \alpha_1 E_{\mathrm{n}, 0}$ is the maximum possible energy of $A_2$ if the neutron has lost the maximum energy possible in a ``head-on'' collision with $A_1$.
Integration of equation \eqref{eq:mix_EnA2} with $E_\mathrm{n,1}$ in the intervals specified in \eqref{eq:mix_Et2PDFconditions} gives the probability for the second recoil target to have an energy in $[E_{A_2,1}, E_{A_2,1} + dE_{A_2,1}]$:
\begin{align}
\label{eq:mix_Et2PDF}
p(E_{A_2,1}) dE_{A_2,1} &= \dfrac{1}{1-\alpha_1} \dfrac{1}{1-\alpha_2} \dfrac{dE_{A_2,1}}{E_{\mathrm{n},0}} \times\\
&
\begin{dcases}
\ln \left(\dfrac{1}{\alpha_1} \right) &\mathrm{if} \; E_{{A_2} , 1} \in [0, (1-\alpha_2) \alpha_1 E_{\mathrm{n}, 0}] \\[6pt]
\ln \left[\dfrac{E_{\mathrm{n},0}}{E_{A_2,1}} (1-\alpha_2) \right] &\mathrm{if} \; E_{{A_2}, 1} \in [(1-\alpha_2)\alpha_1 E_{\mathrm{n}, 0}, (1-\alpha_2) E_{\mathrm{n}, 0}]\\[6pt]
0 &\mathrm{otherwise}.
\end{dcases}\nonumber
\end{align}
In the case $\alpha_1 > 0$ and $\alpha_2 = 0$, the probability of observing the recoil proton with energy in the range $[E_\mathrm{p,1},E_\mathrm{p,1} + dE_\mathrm{p,1}]$ is therefore:
\begin{align}
\label{eq:mix_pPDF}
p(E_\mathrm{p,1}) dE_\mathrm{p,1} = \dfrac{1}{1-\alpha_1} \dfrac{dE_\mathrm{p,1}}{E_{\mathrm{n},0}} \times
\begin{dcases}
\ln \left(\dfrac{1}{\alpha_1} \right) &\mathrm{if} \; E_\mathrm{p,1} \in [0, \alpha_1 E_\mathrm{n,0}]\\[6pt]
\ln \left(\dfrac{E_{\mathrm{n},0}}{E_\mathrm{p,1}} \right) &\mathrm{if} \; E_\mathrm{p,1} \in [\alpha_1 E_\mathrm{n,0}, E_\mathrm{n,0}]\\[6pt]
0 &\mathrm{otherwise}.
\end{dcases}
\end{align}
Panel (a) of figure \ref{fig:nCp} shows the PDF for the energy of the recoil proton as a function of $E_\mathrm{p,1}$ calculated by equation \eqref{eq:mix_pPDF} together with STARTS results. The corresponding PDF for the light output is given by applying equation \eqref{eq:NLOSnp} to equation \eqref{eq:mix_pPDF} and is shown in panel (b).
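The distribution in equation \eqref{eq:mix_pPDF} is also easy to verify with a short Monte Carlo in the spirit of STARTS. In the sketch below $\alpha_1 = ((A-1)/(A+1))^2 \approx 0.716$ for carbon ($A = 12$), and both collisions are assumed isotropic in the centre of mass (uniform energy distributions):

```python
import numpy as np

rng = np.random.default_rng(1)

E0 = 2.45                    # incident neutron energy (MeV)
A = 12                       # carbon
a1 = ((A - 1) / (A + 1))**2  # alpha for carbon, ~0.716

N = 1_000_000
# Elastic nC scattering: scattered neutron energy uniform in [a1*E0, E0]
En1 = rng.uniform(a1 * E0, E0, N)
# Elastic np scattering (alpha_2 = 0): recoil proton energy uniform in [0, En1]
Ep = rng.uniform(0.0, 1.0, N) * En1

def pdf(E):
    # Analytic recoil proton PDF: flat below a1*E0, logarithmic above
    norm = 1.0 / ((1 - a1) * E0)
    return norm * np.where(E < a1 * E0, np.log(1.0 / a1),
                           np.log(E0 / np.maximum(E, 1e-12)))

# Compare the Monte Carlo histogram density with the analytic expression
counts, edges = np.histogram(Ep, bins=100, range=(0.0, E0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
max_dev = np.max(np.abs(counts - pdf(centres)))
print(f"max |MC - analytic| = {max_dev:.4f}")
```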
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure10.eps}
\end{center}
\caption{Simulated (black line) and theoretical (red line) probability density function for the energy (panel (a)) and the light output (panel (b)) for the recoil proton from an elastic scattering with a neutron that has previously undergone a collision with a carbon atom.}
\label{fig:nCp}
\end{figure}
\section{Comparison with full Monte Carlo simulations}
\label{sec:Comparison}
It is now possible to proceed to the qualitative comparison between a combination of the theoretical PDFs derived in the previous sections with the light output response function obtained by full Monte Carlo simulation such as NRESP.
In this context, a full Monte Carlo simulation refers to a simulation in which the proper cross-sections of the relevant processes are taken into account, the densities of hydrogen and carbon atoms in the scintillation material are specified and a realistic geometrical model of the detector is used.
Without this level of detail, neither STARTS nor the analytical approach can properly estimate the relative importance (amplitude) of the different components constituting the light output response function. One can nevertheless reasonably assume that the role of multiple np scattering becomes more important as the size of the detector increases, and that collisions with carbon nuclei followed by collisions on hydrogen are always present, given that the cross-sections for these processes are of the same order of magnitude.
The weights of the light output response functions corresponding to the different elastic scattering processes are obtained by a simple multiple linear regression where the dependent variable is the overall response function calculated by NRESP.
The relative amplitudes of the different components obtained in this way are only indicative (qualitative); for a proper estimate of these weights one should rely exclusively on full Monte Carlo codes.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.45]{Figure12.eps}
\end{center}
\caption{Comparison between the light output response function of a ``thin'' (left panel) and a ``thick'' (right panel) liquid scintillator to a mono-energetic neutron with energy of 2.45 MeV calculated by NRESP (black line) and the predicted one (red line) from a linear regression of the theoretical light output response function for single (blue line), double (orange line) and triple (dark magenta) np scattering events together with the nCp scattering contribution (green line).}
\label{fig:LORFmatch}
\end{figure}
With these caveats in mind, the comparison between the NRESP light output response function and the one from the linear regression based on STARTS calculations is shown in figure \ref{fig:LORFmatch} for both the ``thin'' and ``thick'' detectors.
As can be seen, the light output response function of the ``thin'' detector can be well matched neglecting the contribution from triple np scattering, while this component is essential to match the response function of the ``thick'' detector.
As an example of how the analytical approach can be used to make predictions regarding the expected response function of a liquid scintillator, consider the case of a scintillator with a very large active volume, for which multiple elastic scattering of the incident neutron on protons makes a large contribution to the total light output response function. The response function for multiple np scattering is calculated by multiple convolutions of the same probability density function, each weighted by a different factor which depends on the scattered neutron energy before each collision. Invoking the central limit theorem\footnotemark{}, it is then possible to predict that the expected response function should approximate a normal probability density function. This is indeed the case, as shown in figure \ref{fig:Enqvist}, where the experimentally measured response function of a 12.7-by-12.7 cm cylindrical EJ-309 liquid scintillator \cite{Enqvist} is compared with a normal distribution.
\footnotetext{The central limit theorem holds only if both the average value and variance of the random variable exist. The random variable in this case is the total light output $L$ for which the existence of its average value and variance is guaranteed.}
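The convergence towards a normal shape can be illustrated by repeatedly convolving a single np scattering light output PDF with itself, neglecting for simplicity the energy-dependent weighting of the successive collisions. The sketch below assumes an illustrative light output function $L(E) = 0.2E^2/(E+1)$ (not the tutorial's parametrization); the skewness of the $k$-fold sum decreases as $1/\sqrt{k}$, as expected from the central limit theorem:

```python
import numpy as np

# Illustrative light output function and its inverse (NOT the tutorial's).
def light(E):
    return 0.2 * E**2 / (E + 1.0)

def proton_energy(L):
    return (L + np.sqrt(L**2 + 0.8 * L)) / 0.4

E0 = 2.45
LM = light(E0)

# Discrete single-scattering light output distribution from its exact CDF
# P(L1 <= x) = E_p(x)/E0 (recoil proton energy uniform in [0, E0]).
edges = np.linspace(0.0, LM, 4001)
mass = np.diff(proton_energy(edges)) / E0
centres = 0.5 * (edges[:-1] + edges[1:])
h = edges[1] - edges[0]

def skewness(p, x0, dx):
    x = x0 + dx * np.arange(p.size)
    mu = np.sum(p * x)
    var = np.sum(p * (x - mu)**2)
    return np.sum(p * (x - mu)**3) / var**1.5

skews = []
p = mass.copy()
for k in range(1, 9):
    skews.append(abs(skewness(p, centres[0] * k, h)))
    p = np.convolve(p, mass)  # add one more recoil proton to the sum
print([round(s, 3) for s in skews])
```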
Having now described, albeit qualitatively, the building blocks of the light output response function of a liquid scintillator in the simplest case, it is possible to tackle its interpretation in the presence of the effects neglected so far and in more realistic scenarios, such as those encountered in fusion devices.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure13.eps}
\end{center}
\caption{An example of the central limit theorem: multiple np scattering giving rise to a normal-like distribution contribution (red line) to the response function for 2.45 MeV neutrons into a large EJ-309 liquid scintillator (black line): data from \cite{Enqvist}. The experimental data have been acquired with an acquisition threshold, which explains why the response function goes to zero at low light output.}
\label{fig:Enqvist}
\end{figure}
\section{Response functions in realistic scenarios}
\label{sec:complications}
In this section, additional aspects affecting the response function of liquid scintillators that have been neglected so far are discussed. These can be divided into two categories: aspects intrinsic to the scintillator properties, discussed in sections \ref{sec:DetectorOthers} and \ref{sec:efficiency}, and aspects that depend on the incident neutron energy spectrum, discussed in section \ref{sec:FusionSpectra}.
\subsection{Detector properties affecting the light output response function}
\label{sec:DetectorOthers}
The response function discussed so far has been limited to the case of 2.45 MeV mono-energetic incident neutrons for which the predominant interaction mechanism is elastic scattering on hydrogen and carbon nuclei.
For neutrons of higher energies the following competing processes occur: inelastic nC scattering (for $E_\mathrm{n} > 4.8$ MeV) and the nuclear reactions $^{12}\mathrm{C}(\mathrm{n}, \alpha){^9}\mathrm{Be}$ (for $E_\mathrm{n} > 7.4$ MeV) and $^{12}\mathrm{C}(\mathrm{n}, \mathrm{n}'){^9}\mathrm{Be^*}$ (for $E_\mathrm{n} > 10$ MeV).
At these high energies, nuclear reactions induced by the neutron interacting with the detector material surrounding the liquid scintillator cell are also possible.
Inelastic scattering results in the scattered neutron and recoil particles having a lower energy than in the case of elastic scattering. The light output from these recoil particles has a smaller amplitude and is distributed continuously from zero to the maximum possible light output.
The light output from $\alpha$ particles of a few MeV is much higher than that of recoil carbons of similar energy, but still lower by a factor of about 10 compared to the one for recoil protons (see figure \ref{fig:Verbinski}). The contribution from $\alpha$ particles appears as an additional ``edge'' at the low end of the light output response function, superimposed on the ``box-like'' response for protons. Indicating with $L_\mathrm{p}$ the light output resulting from a ``head-on'' collision of a high energy neutron with a proton and with $L_\alpha$ the light output produced by an $\alpha$ particle, then $L_\alpha/L_\mathrm{p} \approx 0.1$ for $E_\mathrm{n,0} = 15$ MeV \cite{Klein}. Due to the complexity of these competing processes, as well as the strong angular dependence of their cross-sections, full Monte Carlo simulations are indispensable for a correct modelling of experimental response functions at neutron energies above a few MeV.
The transport of the scintillation light from its point of generation inside the active detector volume to the PMT becomes important for large detectors or detectors with one dimension much larger than the other.
This is due to the fact that the scintillator itself is a self-absorbing medium and therefore the same deposited energy will result in two light pulses of different amplitude depending on the location of the recoil particle generation with respect to the PMT.
The effect of light transport and self-attenuation is usually negligible for small volume liquid scintillators (a few centimetres).
For larger scintillators, self-attenuation results in light pulses being distributed at lower light output than expected. For example, in the case of incident mono-energetic neutrons onto a long cylindrical scintillator with a PMT at one end of the cylinder, self-absorption will result in the ``smearing'' of the ``head-on'' collision edge towards lower amplitudes thus worsening the detector energy resolution. If liquid scintillators are to be used for neutron spectroscopy it is therefore essential that they are of limited dimensions.
Codes such as GEANT4 \cite{Agostinelli} and MCNP-PoliMI \cite{Pozzi} are capable of properly treating the transport of photons and should be used for scintillators with shapes more complex than the standard cylindrical one and for large volumes.
Two additional quantities that affect the light output response function are the light output as a function of the energy deposited by different recoil particles and the energy resolution function.
The light output function depends on the size and geometry of the scintillator and affects the relative position of the different features in the light output pulse height spectrum, in particular the position of the ``edge'' corresponding to the deposition of the full neutron energy in a single elastic scattering with a proton (``head-on'' collision).
The dependence of the energy resolution on the neutron energy results in a smearing of the response function which is most pronounced at the location of the ``head-on'' edge.
The energy resolution is affected in turn by the geometry and size of the scintillator and typically ranges between 5 and 30\,\% in the neutron energy range 0--3 MeV.
For a discussion of the different methods for measuring the light output and energy resolution functions the reader is referred to \cite{Enqvist, Klein} and the references therein. Since the ``head-on edge'' region is generated by neutrons that have not undergone any collision, the spectral information on the neutron source is mainly contained in this region; the determination of the light output and energy resolution functions is therefore very important, especially at the energies of interest (for example 2.45 and 14.1 MeV for fusion plasmas).
\subsection{Detector efficiency}
\label{sec:efficiency}
The efficiency of a liquid scintillator to a mono-energetic neutron with energy $E_\mathrm{n,0}$ for single elastic scattering on hydrogen is given by \cite{Knoll}:
\begin{equation}
\label{eq:eff_single}
\epsilon(E_\mathrm{n,0}) = \dfrac{n_\mathrm{H} \sigma_\mathrm{H}(E_\mathrm{n,0})}{n_\mathrm{H} \sigma_\mathrm{H}(E_\mathrm{n,0}) + n_\mathrm{C} \sigma_\mathrm{C}(E_\mathrm{n,0})} \left\lbrace 1- e^{ - \left[ n_\mathrm{H} \sigma_\mathrm{H}(E_\mathrm{n,0}) + n_\mathrm{C} \sigma_\mathrm{C}(E_\mathrm{n,0}) \right] d } \right\rbrace
\end{equation}
where $d$ is the detector thickness, $n_\mathrm{H}$ and $n_\mathrm{C}$ are the number densities of hydrogen and carbon atoms respectively, and $\sigma_\mathrm{H}$ and $\sigma_\mathrm{C}$ are the neutron scattering cross-sections on hydrogen and carbon. The exponential term represents the fraction of neutrons of energy $E_\mathrm{n,0}$ passing through a scintillator of thickness $d$ without interacting (i.e. the uncollided neutrons) and therefore the term within curly brackets represents the fraction of neutrons interacting in the scintillation material.
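Equation \eqref{eq:eff_single} is straightforward to evaluate. In the sketch below the number densities are representative of an NE-213/EJ-301-like scintillator and the 2.45 MeV cross-sections are round, illustrative numbers rather than evaluated nuclear data:

```python
import math

# Representative values for an NE-213/EJ-301-like scintillator; the
# cross-sections are round illustrative numbers, not evaluated nuclear data.
n_H = 4.8e22       # hydrogen number density (cm^-3)
n_C = 4.0e22       # carbon number density (cm^-3)
sigma_H = 2.5e-24  # np elastic cross-section at 2.45 MeV (cm^2), ~2.5 b
sigma_C = 1.6e-24  # nC cross-section at 2.45 MeV (cm^2), ~1.6 b

def efficiency(d):
    """Single np scattering detection efficiency for a thickness d (cm)."""
    s_h = n_H * sigma_H
    s_c = n_C * sigma_C
    interacting = 1.0 - math.exp(-(s_h + s_c) * d)  # fraction colliding at all
    return s_h / (s_h + s_c) * interacting          # ...of which on hydrogen

for d in (1.5, 5.0, 50.0):
    print(f"d = {d:5.1f} cm -> efficiency = {efficiency(d):.3f}")
```

In the limit of a very thick detector the efficiency saturates at the hydrogen fraction $n_\mathrm{H}\sigma_\mathrm{H}/(n_\mathrm{H}\sigma_\mathrm{H} + n_\mathrm{C}\sigma_\mathrm{C})$.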
The detector efficiency can be calculated using full Monte Carlo codes as the integral of the pulse height spectrum: the efficiency of a 1.5 cm thick scintillator as a function of the incident neutron energy is shown in figure \ref{fig:efficiency}. Since, in reality, pulse height spectra are measured with an acquisition threshold (as shown in figure \ref{fig:Enqvist}), the lower limit of the integration must be accurately determined.
This is typically done by calibrating the detector light output against the energy deposited by recoil electrons using standard $\gamma$-ray calibration sources (such as $^{22}$Na, $^{137}$Cs and $^{207}$Bi) and then converting the experimental light output threshold into a deposited energy threshold using the corresponding light output function \cite{Ecal, Fowler}. The light output so calibrated is given in units of MeV electron-equivalent (MeVee), as shown in figure \ref{fig:Enqvist}. In this case, the light output threshold is approximately 0.1 MeVee, resulting in a deposited energy threshold of approximately 0.8 MeV for recoil protons (see figure \ref{fig:Verbinski}). The efficiency and the scintillator cross-sectional area determine the number of counts per second that are measured for a given incident neutron flux.
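The threshold conversion amounts to inverting the proton light output function. With the illustrative light output function $L(E) = 0.2E^2/(E+1)$ MeVee used here for demonstration (the tutorial instead relies on the measured data of figure \ref{fig:Verbinski}, which give a different threshold), a 0.1 MeVee threshold maps to a 1 MeV proton energy threshold:

```python
import math

# Illustrative proton light output function (MeVee); the tutorial instead
# uses measured data, so the numbers below differ from the text.
def light(E):
    return 0.2 * E**2 / (E + 1.0)

def proton_energy(L):
    # Analytic inverse of light(E): positive root of 0.2 E^2 - L E - L = 0
    return (L + math.sqrt(L**2 + 0.8 * L)) / 0.4

L_thr = 0.1                   # acquisition threshold (MeVee)
E_thr = proton_energy(L_thr)  # corresponding proton energy threshold (MeV)
print(f"{L_thr} MeVee -> {E_thr:.2f} MeV deposited by a recoil proton")
```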
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure15.eps}
\end{center}
\caption{Liquid scintillator efficiency calculated by NRESP for a 1.5 cm thick scintillator with and without an acquisition threshold.}
\label{fig:efficiency}
\end{figure}
As shown in figure \ref{fig:efficiency}, the probability of a neutron interacting with the scintillator increases as its energy decreases. As a result, neutrons that have undergone (multiple) scattering before reaching the detector can make a significant contribution to the pulse height spectrum. Neutrons that have already collided within the scintillation material also have an increased probability of making further collisions, especially in thick detectors, for which the detection efficiency is higher.
It is clear then that the dependency of the efficiency on the neutron energy has a non-trivial effect on the light response function both in shape and amplitude. Full Monte Carlo codes ensure that all these effects are included in the response function.
\subsection{Neutron energy spectra in fusion devices and forward modelling}
\label{sec:FusionSpectra}
In fusion devices, liquid scintillators are typically located inside thick neutron shields and view the neutron source (the plasma) via long collimators; the neutrons reaching the detector are anything but mono-energetic.
A typical fusion neutron energy spectrum contains spectral components from \emph{i}) DD and DT thermal emission (approximately Gaussian, with a width which depends on the plasma ion temperature), \emph{ii}) fusion reactions involving ions with energies much larger than thermal, produced by additional heating, and \emph{iii}) scattered neutrons, that is, neutrons that before their first interaction with the scintillator have already suffered one or multiple collisions with the surrounding material (from the fusion device itself to the neutron diagnostic in which the detector is installed).
\begin{figure}
\begin{center}
\includegraphics[scale = 0.6]{Figure14.eps}
\end{center}
\caption{Example of forward modelling. Panel (a) shows the neutron energy spectrum at the detector for a neutral-beam-heated DD fusion plasma in its different components, THermal (TH), Beam-Beam (BB) and Beam-Thermal (BT), assuming a 5\,\% contribution from scattered neutrons. Panel (b) shows the light output response function matrix: the light output response functions for two specific energies (2.45 and 4.00 MeV), indicated by the vertical dashed black lines, are shown in panels (c) and (d) with and without the effect of the finite energy resolution. Panel (e) shows the folding of the energy spectrum in panel (a) with the response function matrix in panel (b). Finally, in panel (f) the linear combination of the folded response functions for all neutron energies shown in panel (e) is compared with the experimentally measured light output pulse height spectrum.}
\label{fig:FM}
\end{figure}
In such a situation, the interpretation of the experimental response function requires an accurate estimate of the neutron flux and energy spectrum at the detector location. In turn, this requires accurate plasma physics models for the calculation of the DD and DT neutron sources and of the neutron emission along the line of sight through which the neutron source is seen by the detector, so as to provide the direct (uncollided) neutron flux and energy spectrum at the detector.
In addition, Monte Carlo neutron transport simulations are required to estimate the scattered neutron flux and its energy spectrum given a specific neutron source and line of sight. The resulting neutron energy spectrum at the detector will contain all these spectral components at different levels.
An example of the different components of the neutron spectrum at a detector location in a DD plasma with neutral beam heating is shown in panel (a) of figure \ref{fig:FM} which can be thought of as the neutron emissivity multiplied by the detector area and the solid angle.
The response function for each incident neutron energy is then calculated, resulting in the response function matrix in which, as shown in panel (b), the columns contain the light output response function for a particular incident neutron energy.
The response functions for incident neutrons with energies of 2.45 and 4.00 MeV are shown in panels (c) and (d): each response function is then folded with the energy resolution function.
The neutron energy spectrum can then be thought of as a set of weights that multiply the response function matrix for each incident neutron energy, resulting in the weighted response function matrix shown in panel (e).
Adding together the contributions from all neutron energies present in the neutron energy spectrum is then equivalent to the column-wise sum of the elements of the weighted response function matrix: the result is shown in panel (f) together with the experimentally measured light output pulse height spectrum, integrated over a given time interval and multiplied by the detector efficiency.
The comparison shown in panel (f) highlights one final aspect of the forward modelling, that is, the conversion of the matrix of normalized response functions multiplied by the absolute neutron energy spectrum into absolute counts. The geometry of the line of sight is already included in the neutron energy spectrum shown in panel (a) so this is achieved by multiplying the neutron energy spectrum by the integration time used to calculate the measured pulse height spectrum in panel (f) and by the detector efficiency.
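Numerically, the folding chain of panels (a)--(f) reduces to a matrix product. The toy sketch below (all shapes and numbers are illustrative, not real data) builds a box-like single-scattering response per neutron energy, smears each column with a Gaussian resolution kernel, and sums the columns weighted by the neutron spectrum, efficiency and integration time:

```python
import numpy as np

# Toy forward model: every array below is illustrative, not real data.
En = np.linspace(1.0, 4.0, 61)   # incident neutron energies (MeV)
L = np.linspace(0.0, 1.0, 200)   # light output axis (arbitrary units)

# Toy neutron spectrum at the detector: thermal-like Gaussian at 2.45 MeV
spectrum = np.exp(-0.5 * ((En - 2.45) / 0.1)**2)

# Toy response matrix: column j = normalized "box" response up to L_max(E_j),
# mimicking single np scattering (light output taken simply as E/4 here).
R = np.zeros((L.size, En.size))
for j, E in enumerate(En):
    col = (L <= E / 4.0).astype(float)
    R[:, j] = col / col.sum()

# Fold each column with a Gaussian resolution kernel (toy 5% of full scale)
kern = np.exp(-0.5 * ((L - L.mean()) / 0.05)**2)
kern /= kern.sum()
R_sm = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, R)

# Weighted column-wise sum = predicted pulse height spectrum (counts per bin)
efficiency, dt = 0.15, 1.0
pulse_height = R_sm @ (spectrum * efficiency * dt)
print(f"total predicted counts: {pulse_height.sum():.4f}")
```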
In this case, the agreement is not perfect. Several reasons could be responsible for this. One possibility is that the original neutron spectrum has not been properly estimated because one or more plasma parameters used in the modelling codes are measured or estimated incorrectly. Among such quantities are the fuel ion and fast ion (from the additional heating) distribution functions in space and energy, and the plasma temperature and density. If one assumes that the detector response function is well known, then such a discrepancy can be used to improve the models used for the description of the plasma.
This method of proceeding in the modelling of the measured light pulse height spectra is therefore called ``forward modelling'' since, as the name implies, it starts from the source of the neutrons and then propagates the resulting neutron field taking into account all the aspects affecting the neutron transport from the source to the detector (including realistic geometries and material compositions of the fusion device and of the diagnostic) and then converts the neutron field at the detector into the light output pulse height spectrum via the response function matrix.
It is clear, therefore, how crucial it is that the liquid scintillator response function be determined as accurately as possible if information on the plasma itself is to be extracted from the neutron measurements.
\section{Summary}
\label{sec:summary}
The interpretation of the neutron response function of liquid scintillators can be rather subtle, as discussed in this tutorial, even in the simplest imaginable case of a mono-energetic neutron source. This case, which can be approximately realized in neutron calibration facilities, is nevertheless very useful to understand the different processes that contribute to the neutron light output response function, which can then be easily generalized to the case of fusion neutron sources.
Response functions can be easily obtained with Monte Carlo codes as discussed here, but their use and the interpretation of the calculated results can be quite complex and, as shown in this tutorial, not strictly necessary for a qualitative understanding of the response function. Monte Carlo codes are however indispensable whenever a quantitative comparison with experimental measurements is crucial.
This is particularly true when one wants to infer absolute quantities from measurements based on the response of liquid scintillators to fusion plasmas, such as, for example, the absolute neutron yield (which requires an accurate determination of the detector efficiency) or the underlying fuel ion distribution functions from the measured neutron energy spectrum.
In such cases, forward modelling is a robust although quite complex method that enables an understanding of the underlying plasma properties and can therefore provide useful feedback on the theory and tools used to model the physics of fusion plasmas. This, in turn, requires a very good overall characterization of the response function of liquid scintillators. It is hoped that this tutorial provides a useful introduction to the understanding of the response functions obtained by full Monte Carlo codes.
\section*{Acknowledgements}
The author wishes to thank S. Conroy and A. Hjalmarsson at the Department of Physics and Astronomy at Uppsala University for useful discussions.
\section{Supplementary Information}
\subsection{Derivation of $\mathrm{Prob}(\mathcal{P})$ for a bimodal $p(\mu)$}
Let us consider a bimodal distribution for $p(\mu)$, with values $\mu^{(1)}$ and $\mu^{(2)}$ and probability $p_{1}$ and $p_{2} = 1 - p_{1}$. The survival probability $\mathcal{P}(\{\mu_{j}\})$, thus, can be written as
\begin{equation*}
\mathcal{P} = q(\mu^{(1)})^{k(\mathcal{P})}q(\mu^{(2)})^{m - k(\mathcal{P})},
\end{equation*}
where $k(\mathcal{P})$ is the frequency of the event $\mu^{(1)}$. By taking the logarithm of $\mathcal{P}$, Eq.~(\ref{eq:prob-distribution2}) follows by solving for $k(\mathcal{P})$. Moreover, the frequency $k(\mathcal{P})$ follows a binomial probability distribution, namely
\begin{equation*}
\mathrm{Prob}(k(\mathcal{P})) = \frac{m!}{k(\mathcal{P})!(m - k(\mathcal{P}))!}p_{1}^{k(\mathcal{P})}p_{2}^{m - k(\mathcal{P})}.
\end{equation*}
Assuming that for each value of $k(\mathcal{P})$ there exists a single solution $\mathcal{P}$ of Eq.~(\ref{eq:prob-distribution2}), $\mathrm{Prob}(\mathcal{P})$ can be univocally determined from $\mathrm{Prob}(k(\mathcal{P}))$. Hence, by using the Stirling approximation, for $m$ sufficiently large the binomial distribution $\mathrm{Prob}(k(\mathcal{P}))$ can be reasonably approximated by a Gaussian one, thus obtaining Eq.~(\ref{eq:prob-distribution1}).
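The de Moivre-Laplace step can also be illustrated numerically: for large $m$ the binomial probabilities agree with the corresponding Gaussian density near the mean. The values of $m$ and $p_1$ below are arbitrary:

```python
import math

def binom_pmf(m, k, p1):
    # Exact binomial probability of k successes out of m trials.
    return math.comb(m, k) * p1**k * (1 - p1)**(m - k)

def gauss_pdf(m, k, p1):
    # Gaussian with the same mean m*p1 and variance m*p1*(1-p1).
    mu, var = m * p1, m * p1 * (1 - p1)
    return math.exp(-(k - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

m, p1 = 400, 0.3   # illustrative choices

# Near the mean the two distributions agree to a few percent already
# for m of a few hundred.
for k in range(int(m * p1) - 10, int(m * p1) + 11):
    b, g = binom_pmf(m, k, p1), gauss_pdf(m, k, p1)
    assert abs(b - g) / b < 0.03
```

The agreement improves as $m$ grows, consistently with the Stirling argument above.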
\subsection{Derivation of $\Delta q(\mu,m)$}
Let us consider the series expansion of $q^{m}$ and its logarithm up to the fourth order, namely
\begin{equation*}
q^{m}=1-m\Delta^{2}H\mu^{2}+\frac{m}{12}\left[\gamma_{H}+3(2m-1)(\Delta^{2}H)^{2}\right]\mu^{4}+\mathcal{O}(\mu^{6})
\end{equation*}
and
\begin{equation*}
\ln q^{m}=-m\Delta^{2}H\mu^{2}+\frac{m}{12}\left[\gamma_{H}-3(\Delta^{2}H)^{2}\right]\mu^{4}+\mathcal{O}(\mu^{6}),
\end{equation*}
where $\gamma_{H}\equiv\overline{H^{4}}-4\overline{H^{3}}\overline{H}+6\overline{H^{2}}\overline{H}^{2}-3\overline{H}^{4}$ is the kurtosis of the Hamiltonian. Hence, by neglecting from $\Delta q(\mu,m)$ the higher order terms, it holds that
\begin{eqnarray}
\Delta q &\propto& m\Delta^{2}H\nu_{2}-\frac{m}{12}\left[\gamma_{H}-3(\Delta^{2}H)^{2}\right]\nu_{4}\nonumber \\
&+&\ln\left\{1-m\Delta^{2}H\nu_{2}+\frac{m}{12}\left[\gamma_{H}+3(2m-1)(\Delta^{2}H)^{2}\right]\nu_{4}\right\}\nonumber \\
&=& \frac{m^{2}}{2}(\Delta^{2}H)^{2}\nu_{4}-\frac{m^{2}}{2}(\Delta^{2}H)^{2}\nu_{2}^{2}. \nonumber
\end{eqnarray}
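The cancellation of the $\mu^{2}$ and $\mu^{4}$ terms in $\Delta q$ can be checked numerically: the residual between $\ln q^{m}$ (computed from the fourth-order expansion of $q^{m}$) and the fourth-order expansion of $\ln q^{m}$ must scale as $\mu^{6}$, i.e. shrink by a factor of $2^{6}=64$ when $\mu$ is halved. The numerical values standing in for $\Delta^{2}H$, $\gamma_{H}$ and $m$ below are arbitrary:

```python
import math

A, gamma_H, m = 0.3, 0.5, 5   # A plays the role of Delta^2 H

def qm(mu):
    # Fourth-order expansion of q^m given above.
    return 1 - m*A*mu**2 + (m/12)*(gamma_H + 3*(2*m - 1)*A**2)*mu**4

def log_qm(mu):
    # Fourth-order expansion of ln q^m given above.
    return -m*A*mu**2 + (m/12)*(gamma_H - 3*A**2)*mu**4

# Residual ln(qm) - log_qm should be O(mu^6): halving mu reduces it
# by a factor of about 2^6 = 64.
r1 = math.log(qm(5e-2)) - log_qm(5e-2)
r2 = math.log(qm(2.5e-2)) - log_qm(2.5e-2)
assert abs(r1 / r2 - 64) < 1
```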
\subsection{Imaging and manipulation of the $^{87}$Rb Bose-Einstein condensate.}
We produce a Bose-Einstein condensate (BEC) of $^{87}$Rb atoms in a
magnetic micro-trap realized with an Atom chip. The trap has a
longitudinal frequency of $46~{\rm Hz}$, the radial trapping
frequency is $950~{\rm Hz}$. The atoms are evaporated to quantum
degeneracy by ramping down the frequency of a radio frequency (RF)
field. The BEC has typically $8\cdot10^4$ atoms, a critical
temperature of $0.5~\mu{\rm K}$ and is at $300~\mu{\rm m}$ from the
chip surface. The magnetic fields for the micro-trap are provided by
a Z-shaped wire on the Atom chip and an external pair of Helmholtz
coils. The RF fields for evaporation and manipulation of the Zeeman
states are produced by two further conductors also integrated on the
Atom chip.
\begin{figure}[h!]
\centering
\includegraphics[width=0.40
\textwidth,angle=0]{SI_Fig.pdf}
\caption{State preparation sequence for the SQZE experiment. After condensation in the pure state $|F=2, m_F=+2\rangle$, in the first step the atoms are transferred, with fidelity higher than $90\%$, to the state $|F=2, m_F=0\rangle$. In the second step the atoms in this sub-level are transferred by the Raman lasers to the lower state $|F=1, m_F=0\rangle$, which is the initial state $\rho_0$ for our experiment. In the third and last step, a fixed amount of population is transferred to the side sublevels $|F=1, m_F=\pm 1\rangle$. These atoms will be used as a benchmark to calculate the survival probability after the experiment.}\label{fig_5}
\end{figure}
To record the number of atoms in each of the 8 $m_F$ states of the
$F=2$ and $F=1$ hyperfine state we apply a Stern-Gerlach method.
After $2~{\rm ms}$ of free fall expansion, an inhomogeneous magnetic field is applied along the
quantization axis for $10~{\rm ms}$. This causes the different $m_F$
sub-levels to spatially separate. After a total time of $23~{\rm ms}$ of
expansion we shine $200 \rm\,\mu s$ of light resonant with the $|F=2\rangle\rightarrow|F'=3\rangle$ transition, pushing away all atoms in the $F=2$ sub-levels and recording the shadow cast by these atoms onto a CCD camera.
We let the remaining atoms expand for a further $1 \rm\,ms$ and then we apply a bi-chromatic pulse containing light resonant with the $|F=2\rangle\rightarrow|F'=3\rangle$ and $|F=1\rangle\rightarrow|F'=2\rangle$ transitions, effectively casting onto the CCD the shadow of the atoms in the $F=1$ sub-levels. Another two CCD images to acquire the background complete the imaging procedure.
To prepare the atoms for the experiment, the condensate is released from the magnetic trap and allowed to expand freely for $0.7 \rm\,ms$ while a constant magnetic field bias of $6.179 \rm\,G$ is applied in a fixed direction. This procedure ensures that the atoms remain in the $|F=2, m_F=+2\rangle$ state and strongly suppresses the effect of atom-atom interactions by reducing the atomic density. The first step of the preparation consists in applying a frequency modulated RF pulse designed with an optimal control strategy \cite{Lovecchio} in order to prepare, with high fidelity, all the atoms in the $|F=2, m_F=0\rangle$ state (see Fig.~\ref{fig_5}). This optimal RF pulse is $50\rm\, \mu s$ long. After the RF pulse we transfer the whole $|F=2, m_F=0\rangle$ population to the $|F=1, m_F=0\rangle$ sublevel by shining the atoms with bi-chromatic (Raman) laser light. This is the initial state $\rho_0$ for our experiment. The preparation is completed by applying another RF pulse to place some atomic population in the $|F=1, m_F=\pm 1\rangle$ states for normalization. These atoms are not altered during the actual experiment, when only the Raman lasers and the resonant light are on, so they can be used as a control sample population. Comparing the fraction of atoms in the $|F=1, m_F=0\rangle$ sub-level that survived after the SQZE experiment with the fraction in the same level without any further manipulation other than the preparation sequence, we are able to retrieve the survival probability $\mathcal{P}$.
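As a purely illustrative sketch of this normalization (the atom numbers and the exact form of the ratio below are hypothetical assumptions, not values from the experiment), the survival probability could be computed from the counted populations as:

```python
# Hypothetical atom numbers extracted from the Stern-Gerlach absorption
# images; all values below are made up for illustration.
reference = {"m0": 6.0e4, "m+1": 1.0e4, "m-1": 1.0e4}  # preparation only
measured  = {"m0": 4.5e4, "m+1": 1.0e4, "m-1": 1.0e4}  # after SQZE sequence

def m0_fraction(counts):
    # Fraction of atoms in |F=1, m_F=0> relative to the control population
    # in |F=1, m_F=+/-1>, which is untouched during the experiment.
    return counts["m0"] / (counts["m+1"] + counts["m-1"])

# Survival probability: the surviving m_F=0 fraction normalized to the
# reference run with no manipulation beyond the preparation sequence.
P = m0_fraction(measured) / m0_fraction(reference)
```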
\section{Introduction}
Suppose that $M$ is a compact smooth manifold with boundary $\partial M$.
There are two types of the Yamabe problem with boundary:
Given a smooth metric $g$ in $M$,
(i) find a metric conformal to $g$ such that its scalar curvature is constant in $M$ and its mean curvature is zero
on $\partial M$;
(ii) find a metric conformal to $g$ such that its scalar curvature is zero in $M$ and its mean curvature is constant
on $\partial M$.
The Yamabe problem with boundary has been studied by many authors.
See \cite{Brendle&Chen,Escobar1,Escobar2,Marques} and the references therein.
As a generalization of the Yamabe problem with boundary, one can consider the prescribing curvature problem on manifolds with boundary:
Given a smooth metric $g$ in a compact manifold $M$ with boundary $\partial M$,
(i) find a metric conformal to $g$ such that its scalar curvature is equal to a given smooth function in $M$
and its mean curvature is zero on $\partial M$;
(ii) find a metric conformal to $g$ such that its scalar curvature is zero in $M$ and its mean curvature is
equal to a given smooth function on $\partial M$.
In particular, when the manifold is the unit ball,
it is the corresponding Nirenberg's problem for manifolds with boundary.
These have been studied extensively by many authors.
We refer the readers to \cite{Chen&Ho,Escobar3,Ho,Xu&Zhang}
and the references therein for results in this direction.
More generally, one can consider the prescribing curvature problem on manifolds with boundary
without restricting to a fixed conformal class:
(i) given a smooth function $f$ in $M$, find a metric $g$ such that
its scalar curvature is equal to $f$ and its mean curvature is zero, i.e.
$R_g=f$ in $M$ and $H_g=0$ on $\partial M$;
(ii) given a smooth function $h$ on $\partial M$, find a metric $g$ such that
its scalar curvature is zero and its mean curvature is equal to $h$, i.e.
$R_g=0$ in $M$ and $H_g=h$ on $\partial M$.
This was recently studied by
Cruz-Vit{\'o}rio
in \cite{cruz2019prescribing}.
In this paper, we study the problem of prescribing the scalar curvature in $M$ and
the mean curvature on the boundary $\partial M$ \textit{simultaneously}.
More precisely, given a smooth function $f$ in $M$ and
a smooth function $h$ on $\partial M$,
we want to find a metric $g$ such that
its scalar curvature is equal to $f$ and its mean curvature is equal to $h$, i.e.
$R_g=f$ in $M$ and $H_g=h$ on $\partial M$.
We would like to point out that there are several results in
prescribing the scalar curvature in $M$ and
the mean curvature on the boundary $\partial M$ simultaneously \textit{in a fixed conformal class}.
See \cite{Chen&Ruan&Sun,Chen&Sun,Cruz,Escobar,Han&Li1,Han&Li2}.
The flow approach was introduced to study this problem
in \cite{Brendle,Zhang}
for dimension $2$ and in \cite{Chen&Ho&Sun} for higher dimensions.
However, without restricting to a fixed conformal class, there are not many results
in prescribing the scalar curvature and
the mean curvature on the boundary simultaneously.
So our paper can be viewed as the first step to understand this problem.
In order to study the problem of prescribing the scalar curvature and
the mean curvature simultaneously, we study the linearization of
the scalar curvature and the mean curvature.
We introduce the notion of singular space in section
\ref{section_2}.
This notion is inspired by the early work of Fischer-Marsden in \cite{Fischer&Marsden},
which studied the linearization of the scalar curvature in closed (i.e. compact without boundary) manifolds,
and the work of Lin-Yuan in \cite{lin2016deformations}
which studied the linearization of the $Q$-curvature in closed manifolds.
In section \ref{section_3}, we will show that some geometric properties of the manifold imply that it is singular (or not singular).
We then give some examples of singular and non-singular spaces in section \ref{section_example}.
In
section \ref{section_5}, we prove some
theorems related to prescribing the scalar curvature and
the mean curvature simultaneously.
Finally, in section \ref{last_section},
we prove some rigidity results for the flat manifolds with
totally geodesic boundary. See Theorem \ref{rigidity}.
\section{Characterization of singular spaces}\label{section_2}
For a compact $n$-dimensional manifold $M$ with boundary $\partial M$, let $\mathcal{M}$ be
the moduli space of all smooth metrics defined in $M$. Denote the map
\begin{equation}\label{defn_Psi}
\begin{matrix}
\Psi: &\mathcal{M}& \longrightarrow &C^\infty(M)\times C^\infty(\partial M)\\
& g& \longmapsto &(R_g,2 H_\gamma)
\end{matrix}
\end{equation}
where $\gamma=g|_{\partial M}$ is the metric $g$ induced on $\partial M$,
$R_g$ is the scalar curvature in $M$ and $H_\gamma$ the mean curvature on $\partial M$ with respect to $g$.
Let $\mathcal{S}_g: S_2(M) \longrightarrow C^\infty(M)\times C^\infty (\partial M)$ be the linearization of $\Psi$ at $g$,
and let $\mathcal{S}_g^*:C^\infty(M)\times C^\infty (\partial M)\longrightarrow S_2(M)$ be the $L^2$-formal adjoint of $\mathcal{S}_g$, where $S_2(M)$
is the space of symmetric $2$-tensors on $M$.
More precisely, for any $h\in S_2(M)$, we have
\begin{align*}
\left.\frac{d}{dt}\Psi(g+th)\right|_{t=0}=\mathcal{S}_g(h)=D\Psi_g\cdot h=(\delta R_gh, 2 \delta H_\gamma h).
\end{align*}
It was computed in \cite{Araujo} and \cite{cruz2019prescribing} that
\begin{equation}\label{0.1}
\begin{split}
\delta R_g h &= -\Delta_g (tr_g h) + div_g div_g h -\langle h, Ric_g\rangle,\\
2 \delta H_\gamma h&=[d(tr_g h) - div_g h](\nu) - div_\gamma X - \langle II_\gamma , h\rangle_\gamma,
\end{split}
\end{equation}
where $\nu$ is the outward unit normal to $\partial M$,
$II_\gamma$ is the second fundamental form of $\partial M$, $X$ is
the vector field dual to the one-form $\omega(\cdot) = h(\cdot, \nu)$,
$tr_g h= g^{ij} h_{ij}$ is the trace of $h$ and our
convention for the Laplacian is $\Delta_g f = tr_g(\mbox{Hess}_g f)$.
Now $\mathcal{S}_g^*$, the $L^2$-formal adjoint of $\mathcal{S}_g$, satisfies
\begin{equation*}
\begin{split}
&\left.\frac{d}{dt}\left(\int_M R_{g+th}f_1 dV_g +2\int_{\partial M} H_{\gamma+th}f_2 dA_\gamma \right)\right|_{t=0}\\
&=\langle \mathcal{S}_g(h), (f_1, f_2)\rangle =\langle h, S^*_g(f_1, f_2)\rangle
=\int_M(\delta R_g h)f_1 dV_g +2\int_{\partial M}(\delta H_\gamma h)f_2 dA_\gamma.
\end{split}
\end{equation*}
If we define
\begin{equation}\label{1.2}
\mathcal{S}^*_g(f):=\mathcal{S}^*_g(f,f)=(A^*_g f, B_\gamma^* f)~~\mbox{ where }f\in C^\infty(M),
\end{equation}
then it follows from (2.3) in \cite{cruz2019prescribing} that
\begin{equation}\label{1.1}
\begin{split}
A^*_g f &=-(\Delta_gf)g +\mbox{Hess}_g f-f Ric_g, \\
B_\gamma^* f&=\frac{\partial f}{\partial \nu}\gamma - f II_\gamma.
\end{split}
\end{equation}
Inspired by the notion of $Q$-singular space
defined in \cite{lin2016deformations} (see also \cite{Fischer&Marsden}),
we have the following:
\begin{definition}
The metric $g$ is called singular if $\mathcal{S}_g^*$ defined in \eqref{1.2} is not injective, namely, $\ker(\mathcal{S}^*_g)\neq \{ 0\}$. We also refer to $(M,\partial M, g, f)$
as a singular space if $0\not\equiv f\in \ker(\mathcal{S}^*_g)$.
\end{definition}
It follows from (\ref{1.1}) that $f\in \ker(\mathcal{S}_g^*)$ if and only if $f$ satisfies the equations
\begin{equation}\label{system1}
\left\{
\begin{split}
-(\Delta_g f)g+\mbox{Hess}_g f-f Ric_g &=0 \text{ in } M, \\
\displaystyle\frac{\partial f}{\partial \nu}\gamma -f II_\gamma &=0 \text{ on } \partial M.
\end{split}
\right.
\end{equation}
Taking the trace of \eqref{system1} with respect to $g$, we obtain
\begin{equation}\label{system2}
\left\{
\begin{split}
\displaystyle\Delta_g f+\frac{R_g}{n-1}f &=0 \text{ in } M, \\
\displaystyle\frac{\partial f}{\partial \nu}-\frac{H_\gamma}{n-1}f &=0 \text{ on }\partial M.
\end{split}
\right.
\end{equation}
That is to say, if $f\in \ker(\mathcal{S}_g^*)$, then $f$ must satisfy (\ref{system2}).
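For the reader's convenience, the trace computation is short. Using $tr_g\,g=n$, $tr_g(\mbox{Hess}_g f)=\Delta_g f$, $tr_g\,Ric_g=R_g$, $tr_\gamma\,\gamma=n-1$ and $tr_\gamma\,II_\gamma=H_\gamma$, one finds

```latex
0 = tr_g\big[-(\Delta_g f)g+\mbox{Hess}_g f-f\,Ric_g\big]
  = -n\Delta_g f+\Delta_g f-fR_g
  = -(n-1)\Big(\Delta_g f+\frac{R_g}{n-1}f\Big),
\qquad
0 = tr_\gamma\Big[\frac{\partial f}{\partial \nu}\gamma-f\,II_\gamma\Big]
  = (n-1)\frac{\partial f}{\partial \nu}-fH_\gamma
  = (n-1)\Big(\frac{\partial f}{\partial \nu}-\frac{H_\gamma}{n-1}f\Big),
```

from which \eqref{system2} follows after dividing by $-(n-1)$ and $(n-1)$ respectively.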
\section{Singular and nonsingular spaces}\label{section_3}
In this section, we show that some geometric properties of $(M,\partial M,g)$
will imply that it is singular (or not singular).
\begin{prop}\label{prop1.1}
If $R_g\leq 0$ and $H_\gamma\leq 0$ such that one of them is not identically equal to zero, then $g$ is not singular.
\end{prop}
\begin{proof}
If $f\in \ker(\mathcal{S}_g^*)$, then \eqref{system2} holds. Multiplying the first equation in \eqref{system2} by $f$,
integrating it over $M$ and using integration by parts, we obtain
\begin{equation}\label{1.3}
\begin{split}
0&=\int_M \Big( f\Delta_g f+\frac{R_g}{n-1}f^2 \Big)dV_g \\
&=\int_M \Big( -|\nabla_g f|^2+\frac{R_g}{n-1} f^2\Big)dV_g + \int_{\partial M} f\frac{\partial f}{\partial \nu}dA_\gamma\\
&=\int_M \Big( -|\nabla_g f|^2 +\frac{R_g}{n-1} f^2\Big)dV_g+\int_{\partial M}\frac{H_\gamma}{n-1}f^2dA_\gamma
\end{split}
\end{equation}
where we have used the second equation of \eqref{system2} in the last equality.
Since $R_g\leq 0$ and $H_\gamma\leq 0$,
it follows from (\ref{1.3}) that
$|\nabla_g f|^2\equiv 0$ in $M$, which implies that $f\equiv c$ for some constant $c$.
Hence, (\ref{system2}) reduces to
$$\frac{R_g}{n-1}c=0\mbox{ in }M~~\mbox{ and }~~-\frac{H_\gamma}{n-1}c=0\mbox{ on }\partial M.$$
Since $R_g$ or $H_\gamma$ is not identically equal to zero by assumption,
we can conclude that $c=0$, i.e.
$f\equiv 0$.
Therefore, we have shown that $\ker(\mathcal{S}_g^*)=\{0\}$, as required.
\end{proof}
\begin{prop}\label{supseteq}
If $(M,\partial M,g)$ is Ricci-flat with totally-geodesic boundary, then $g$ is singular.
\end{prop}
\begin{proof}
By assumption, we have $Ric_g\equiv 0$ in $M$ and $II_\gamma\equiv 0$ on $\partial M$. If we take $f$ to be any nonzero constant function defined in $M$, then it satisfies \eqref{system1}. Thus, $\ker(\mathcal{S}_g^*)\neq \{ 0\}$ and the result follows.
\end{proof}
In fact, we have the following:
\begin{prop}\label{Ricci-flat}
If $(M,\partial M,g)$ is Ricci-flat with totally-geodesic boundary, then
\begin{equation}\label{condition}
\ker(\mathcal{S}_g^*)=\{\mbox{constant}\}.
\end{equation}
\end{prop}
\begin{proof}
We have already shown that $\{\mbox{constant}\}\subseteq\ker(\mathcal{S}_g^*)$
in Proposition \ref{supseteq}. On the other hand, if $f\in \ker(\mathcal{S}_g^*)$, then $f$ satisfies \eqref{system2}.
This together with the assumption $Ric_g\equiv 0$ in $M$ and $II_\gamma\equiv 0$ on $\partial M$
implies that
\begin{align}\label{Neumann1}
\left\{
\begin{array}{rl}
\Delta_g f=0 &\text{ in }M, \\
\displaystyle\frac{\partial f}{\partial \nu}=0 &\text{ on }\partial M,
\end{array}
\right.
\end{align}
which shows that $f$ is constant.
\end{proof}
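The final step in this proof is the standard energy argument: multiplying the first equation of \eqref{Neumann1} by $f$ and integrating by parts gives

```latex
0=\int_M f\Delta_g f\,dV_g
 =-\int_M|\nabla_g f|^2\,dV_g+\int_{\partial M}f\frac{\partial f}{\partial\nu}\,dA_\gamma
 =-\int_M|\nabla_g f|^2\,dV_g,
```

so $\nabla_g f\equiv 0$ in $M$ and $f$ is constant.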
The condition (\ref{condition})
gives a characterization of
the Ricci-flat manifold with totally-geodesic boundary.
\begin{prop}\label{prop1.4}
If $f$ is a nonzero constant function lies in $\ker(\mathcal{S}_g^*)$, then $(M,\partial M,g)$ is Ricci-flat with totally-geodesic boundary.
\end{prop}
\begin{proof}
By assumption, the function $f\equiv c$ satisfies \eqref{system1}. Thus, we have
\begin{align*}
\left\{
\begin{array}{rl}
c Ric_g =0 &\text{ in }M, \\
-c II_\gamma =0 &\text{ on }\partial M.
\end{array}
\right.
\end{align*}
Since $c$ is nonzero, we have $Ric_g\equiv 0$ in $M$ and $II_\gamma\equiv 0$ on $\partial M$,
as required.
\end{proof}
\begin{prop}\label{existnonsing}
Suppose $R_g\equiv 0$ in $M$ and $H_\gamma\equiv 0$ on $\partial M$. If one of the following assumptions holds:
\begin{enumerate}
\item $Ric_g\not\equiv 0$ in $M$, i.e. $g$ is not Ricci-flat;
\item $II_\gamma\not\equiv 0$ on $\partial M$, i.e. $\partial M$ is not totally-geodesic,
\end{enumerate}
then $g$ is not singular.
\end{prop}
\begin{proof}
Let $f\in \ker(\mathcal{S}_g^*)$.
Since $R_g\equiv 0$ in $M$ and $H_\gamma\equiv 0$ on $\partial M$ by assumption, again by \eqref{system2} we have \eqref{Neumann1},
and thus $f\equiv c$ for some constant $c$.
If $c$ is nonzero, i.e. $f$ is a nonzero constant function lies in $\ker(\mathcal{S}_g^*)$,
it follows from Proposition \ref{prop1.4}
that $(M,\partial M, g)$ is Ricci-flat with totally-geodesic boundary,
which contradicts the assumption.
Therefore, we must have $c=0$, i.e. $f\equiv 0$.
\end{proof}
We remark that a result similar to Proposition \ref{existnonsing} has been obtained in \cite{cruz2019prescribing}. See Proposition 3.3 in \cite{cruz2019prescribing}.
\begin{prop}
Suppose that
\begin{equation}\label{2.6}
Ric_g=\frac{R_g}{n}g=(n-1)g\mbox{ in }M~~\mbox{ and }~~H_{\gamma}=n-1\mbox{ on }\partial M.
\end{equation}
If $g$ is singular, then
$(M,\partial M, g)$ is isometric to
either a spherical cap or the standard hemisphere.
\end{prop}
\begin{proof}
Since $g$ is singular by assumption, there exists $0\not\equiv f\in \ker(\mathcal{S}_g^*)$.
Note that $f$ cannot be a constant function; otherwise, since $f\not\equiv 0$,
it follows from Proposition \ref{prop1.4} that
$g$ is Ricci-flat, which contradicts the assumption that
the scalar curvature $R_g$ is nonzero.
Since \eqref{system1} and
\eqref{system2} hold, we can substitute \eqref{system2} into \eqref{system1} and apply (\ref{2.6}) to get
\begin{align*}
\left\{
\begin{array}{rl}
\displaystyle\mbox{Hess}_g f+f g =0& \text{ in }M, \\
\displaystyle\frac{\partial f}{\partial \nu}=f &\text{ on }\partial M.
\end{array}
\right.
\end{align*}
Now the result follows immediately from Theorem 3 in \cite{chen2019obata}.
\end{proof}
\begin{prop}\label{contains1}
Let $(M, \partial M, g)$ be an $n$-dimensional Einstein manifold
with minimal boundary, where $n\geq 3$. If $g$ is singular, then $\displaystyle\frac{R_g}{n-1}$ is an eigenvalue
of the Laplacian with Neumann boundary condition.
In this case, $\ker(\mathcal{S}_g^*)$ lies in the eigenspace of $\displaystyle\frac{R_g}{n-1}$.
In particular, $\ker(\mathcal{S}_g^*)$ is finite-dimensional.
\end{prop}
\begin{proof}
Let $0\not\equiv f \in \ker(\mathcal{S}_g^*)$.
Since $(M, \partial M, g)$ is an $n$-dimensional Einstein manifold with $n\geq 3$,
the scalar curvature $R_g$ is constant. Moreover, $H_\gamma=0$ on $\partial M$
by assumption. Hence, \eqref{system2} implies that $f$ satisfies
\begin{align}\label{nuemann3}
\left\{
\begin{array}{rl}
\displaystyle\Delta_g f+\frac{R_g}{n-1}f =0& \text{ in } M, \\
\displaystyle\frac{\partial f}{\partial \nu} =0& \text{ on }\partial M.
\end{array}
\right.
\end{align}
This implies that $\displaystyle\frac{R_g}{n-1}$ is an eigenvalue
of the Laplacian with Neumann boundary condition, and $f$ is a corresponding eigenfunction.
This shows that $\ker(\mathcal{S}_g^*)$ lies in the eigenspace of $\displaystyle\frac{R_g}{n-1}$, as claimed.
\end{proof}
\begin{prop}\label{Steklov}
Suppose that $(M,\partial M,g)$ is scalar-flat with umbilical boundary of constant mean curvature. If $g$ is singular, then $\displaystyle\frac{H_\gamma}{n-1}$ is a Steklov eigenvalue. In this case, $\ker(\mathcal{S}_g^*)$ lies
in the eigenspace corresponding to the Steklov eigenvalue $\displaystyle\frac{H_\gamma}{n-1}$.
\end{prop}
\begin{proof}
Let $0\not\equiv f\in\ker(\mathcal{S}_g^*)$.
By assumption, we have
\begin{equation}\label{2.3}
R_g=0\mbox{ in }M~~\mbox{ and
}~~II_\gamma=\frac{H_{\gamma}}{n-1}\gamma\mbox{ on }\partial M,
\end{equation}
where $H_\gamma$ is constant.
It follows from (\ref{2.3}) that \eqref{system2} reduces to
\begin{align*}
\left\{
\begin{array}{rl}
\Delta_g f= 0 &\text{ in } M, \\
\displaystyle\frac{\partial f}{\partial \nu}-\frac{H_\gamma}{n-1}f=0 &\text{ on }\partial M.
\end{array}
\right.
\end{align*}
Hence,
$\displaystyle\frac{H_\gamma}{n-1}$ is a Steklov
eigenvalue, and $f$ is a corresponding eigenfunction.
In particular,
this shows that $\ker(\mathcal{S}_g^*)$ lies in the eigenspace corresponding to the Steklov eigenvalue $\displaystyle\frac{H_\gamma}{n-1}$.
This proves the assertion.
\end{proof}
We remark that a result similar to Proposition \ref{Steklov} has been obtained in \cite{cruz2019prescribing}. See Proposition 3.1 in \cite{cruz2019prescribing}.
\section{Examples}\label{section_example}
In this section, we give some examples of singular and non-singular spaces.\\
\textbf{Manifolds with negative Yamabe constant.}
Suppose that $(M,\partial M,g)$ is an $n$-dimensional compact manifold with boundary,
where $n\geq 3$. The Yamabe constant of $(M,\partial M,g)$ is defined as (c.f. \cite{Escobar2})
\begin{equation*}
Y(M,\partial M,g)=\inf\{E_g(u)\,|\,0<u\in C^\infty(M)\},
\end{equation*}
where
$$E_g(u)=\frac{\int_M\Big(\frac{4(n-1)}{n-2}|\nabla_gu|^2+R_gu^2\Big)dV_g+2\int_{\partial M}H_gu^2dA_g
}{\left(\int_{M}u^{\frac{2n}{n-2}}dV_g\right)^{\frac{n-2}{n}}}.$$
If the Yamabe constant of $(M,\partial M,g)$ is negative,
then we can find a metric $\tilde{g}$ conformal to $g$ such that
$R_{\tilde{g}}<0$ in $M$ and $H_{\tilde{g}}=0$ on $\partial M$ (c.f. Lemma 1.1. in \cite{Escobar2}).
In particular, it follows from Proposition \ref{prop1.1} that $(M,\partial M,\tilde{g})$
is not singular.
Similarly, we can define (c.f. \cite{Escobar1})
\begin{equation*}
Q(M,\partial M,g)=\inf\{Q_g(u)\,|\,0<u\in C^\infty(M)\},
\end{equation*}
where
$$Q_g(u)=\frac{\int_M\Big(\frac{4(n-1)}{n-2}|\nabla_gu|^2+R_gu^2\Big)dV_g+2\int_{\partial M}H_gu^2dA_g
}{\left(\int_{\partial M}u^{\frac{2(n-1)}{n-2}}dA_g\right)^{\frac{n-2}{n-1}}}.$$
If $Q(M,\partial M,g)<0$,
then we can find a metric $\tilde{g}$ conformal to $g$ such that
$R_{\tilde{g}}=0$ in $M$ and $H_{\tilde{g}}<0$ on $\partial M$ (c.f. Proposition 1.4 in \cite{Escobar1}).
In particular, it follows from Proposition \ref{prop1.1} that $(M,\partial M,\tilde{g})$
is not singular.\\
\textbf{Ricci-flat manifolds with totally geodesic boundary.}
Suppose that $(M,g)$ is a closed (i.e. compact without boundary) manifold
which is Ricci-flat.
Consider the product manifold $\tilde{M}=[0,1]\times M$ equipped with the product metric
$\tilde{g}=dt^2+g$. Then $\tilde{g}$ is still Ricci-flat,
and its boundary $\partial\tilde{M}=(\{0\}\times M)\cup(\{1\}\times M)$
is totally geodesic.
Therefore, it follows from Proposition \ref{supseteq}
that $(\tilde{M},\partial\tilde{M},\tilde{g})$
is singular.
For example, we can take $(M,g)$ to be any compact Calabi-Yau manifold.
It is Ricci-flat. Then $\tilde{M}=[0,1]\times M$ equipped with the product metric
$\tilde{g}=dt^2+g$ is singular.
Now suppose that $(M_0,g_0)$ is a closed manifold such that $g_0$ is flat.
Then the product manifold $\tilde{M}=[0,1]\times M_0$ equipped with the product metric
$\tilde{g}=dt^2+g_0$ is still flat and has totally geodesic boundary.
Therefore, it follows from Proposition \ref{supseteq}
that $(\tilde{M},\partial\tilde{M},\tilde{g})$
is singular.
For example, if we take $(M_0,g_0)$ to be the $n$-dimensional torus $T^n$ equipped with
the flat metric $g_0$, then $[0,1]\times T^n$ equipped with the product metric
$dt^2+g_0$ is flat and has totally geodesic boundary, and hence is singular.\\
\textbf{Product manifolds.}
Suppose that $(M,g)$ is a closed Riemannian manifold
which is scalar-flat but not Ricci-flat. Consider the
product manifold $\tilde{M}=[0,1]\times M$ equipped with the product metric
$\tilde{g}=dt^2+g$. Then $(\tilde{M},\partial \tilde{M}, \tilde{g})$
is still scalar-flat but not Ricci-flat. Its boundary
$\partial\tilde{M}=(\{0\}\times M)\cup(\{1\}\times M)$
is totally geodesic, and thus, its mean curvature is zero.
It follows from Proposition \ref{existnonsing}
that $(\tilde{M},\partial \tilde{M}, \tilde{g})$ is not singular.
For example, let $S^2$ be
the $2$-dimensional unit sphere equipped with the standard metric $g_1$,
and $\Sigma$ be a $2$-dimensional compact manifold
with genus at least 2 equipped with the hyperbolic metric $g_{-1}$.
Then the product manifold $M=S^2\times \Sigma$ is a closed manifold,
and the product metric $g=g_1+g_{-1}$ has zero scalar curvature
and is not Ricci-flat.
From the above discussion, we can conclude that
$\tilde{M}=[0,1]\times S^2\times \Sigma$
equipped with the metric $dt^2+g_1+g_{-1}$ is not singular.\\
\textbf{The upper hemisphere.} Let
$$\mathbb{S}_+^n=\big\{(x_1, \cdots, x_{n+1})\in \mathbb{R}^{n+1}\big| x_1^2 +\cdots+ x_{n+1}^2=1, x_{n+1}\geq 0\big\}$$
be the $n$-dimensional upper hemisphere. We have the following:
\begin{prop}\label{hemisphere}
Let $(\mathbb{S}_+^n, \partial \mathbb{S}_+^n)$ be the $n$-dimensional upper hemisphere equipped with the standard metric $g_c$,
i.e. the sectional curvature of $g_c$ is $1$, where $n\geq 3$. Then $g_c$
is singular. Moreover,
$$\ker(\mathcal{S}_{g_c}^*)=\mbox{\emph{span}}\{x_1, \cdots, x_n \},$$
where $(x_1, \cdots, x_n, x_{n+1})$ are the coordinates of $\mathbb{S}_+^n\subset \mathbb{R}^{n+1}$.
\end{prop}
\begin{proof}
Note that $g_c$ is Einstein and the boundary $\partial \mathbb{S}_+^n$ is totally-geodesic, i.e.
\begin{equation}\label{2.1}
Ric_{g_c}=\displaystyle\frac{R_{g_c}}{n}g_c\mbox{ in }S^n_+~~\mbox{ and }~~
II_{\gamma_c}=0\mbox{ on }\partial \mathbb{S}_+^n,
\end{equation}
where
$R_{g_c}\equiv n(n-1)$. Note also that the coordinate functions $x_i$, $1\leq i\leq n$, satisfy
the following Obata-type equation: (see \cite{chen2019obata} for example)
\begin{equation}\label{2.2}
\mbox{Hess}_{g_c} x_i+x_i\,g_c=0\mbox{ in }S^n_+~~\mbox{ and }~~\frac{\partial x_i}{\partial\nu}=0\mbox{ on }\partial S^n_+.
\end{equation}
Combining (\ref{2.1}) and (\ref{2.2}), we can conclude that the coordinate functions
$x_i$, $1\leq i\leq n$, satisfy \eqref{system1}.
Thus, span$\{x_1, \cdots, x_n\}$ is contained in $\ker(\mathcal{S}_{g_c}^*)$. In particular, $g_c$ is singular.
On the other hand,
$x_i$, $1\leq i\leq n$, is an eigenfunction corresponding to
the eigenvalue $n$
of the Laplacian with Neumann boundary condition (this follows from taking the trace of (\ref{2.2})).
In fact, it is well-known that the eigenspace is spanned by $x_i$ where $1\leq i\leq n$.
Hence, it follows from Proposition \ref{contains1} that
$$
\ker(\mathcal{S}_{g_c}^*)\subseteq \text{the eigenspace of }n=\mbox{span}\{x_1, \cdots, x_n\}.
$$
This proves the assertion.
\end{proof}
\textbf{The unit ball.} Let $$
D^n=\big\{(x_1, \cdots, x_n)\in \mathbb{R}^{n}\big| x_1^2 +\cdots+ x_n^2\leq 1\big\}$$
be the $n$-dimensional unit ball equipped with flat metric $g_0$. We have the following:
\begin{prop}\label{prop1.5}
The $n$-dimensional unit ball $(D^n,\partial D^n, g_0)$ equipped with the flat metric is a singular space.
Moreover, we have
$$
\ker(\mathcal{S}_{g_0}^*)=\mbox{\emph{span}}\{ x_1, \cdots, x_n\},
$$
where $(x_1,...,x_n)$ are the coordinates of $D^n$.
\end{prop}
\begin{proof}
There holds
\begin{equation}\label{2.4}
Ric_{g_0}\equiv 0\mbox{ in }D^n~~\mbox{
and }II_{\gamma_0}=\frac{H_{\gamma_0}}{n-1}\gamma_0\mbox{ on }\partial D^n,
\end{equation}
where $H_{\gamma_0}=n-1$.
We can easily check that $x_i$, $1\leq i\leq n$, satisfies
\begin{equation}\label{2.5}
\mbox{Hess}_{g_0}x_i=0\mbox{ in }D^n~~\mbox{ and }~~\frac{\partial x_i}{\partial\nu}=x_i\mbox{ on }\partial D^n.
\end{equation}
Combining (\ref{2.4}) and (\ref{2.5}), we can see that $x_i$, $1\leq i\leq n$, satisfies \eqref{system1}.
Therefore, span$\{x_1, \cdots, x_n\}\subseteq \ker(\mathcal{S}_{g_0}^*)$, which implies that $g_0$ is singular.
On the other hand, the eigenspace of the Steklov eigenvalue $1$ is spanned by
$x_i$, where $1\leq i\leq n$ (see Example 1.3.2 in \cite{Girouard&Polterovich} for example).
This together with Proposition \ref{Steklov} implies that
$\ker(\mathcal{S}_{g_0}^*)\subseteq \mbox{span}\{x_1,\cdots, x_n\}$
and the proof is completed.
\end{proof}
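As an independent numerical sanity check of the two-dimensional case (the unit disk; this is illustrative and not part of the proof), one can verify via the Poisson integral that the harmonic extension of the boundary datum $g(t)=\cos t$, i.e.\ the coordinate function $x_1$, has normal derivative equal to its boundary value, so the corresponding Steklov eigenvalue is $1$. All numerical parameters below are arbitrary:

```python
import math

# Harmonic extension of g(t) = cos(t) to the unit disk via the Poisson
# integral, evaluated with the trapezoidal rule on N boundary points.
N = 2000
ts = [2 * math.pi * j / N for j in range(N)]

def harmonic_ext(r, theta):
    s = 0.0
    for t in ts:
        poisson = (1 - r * r) / (1 - 2 * r * math.cos(theta - t) + r * r)
        s += poisson * math.cos(t)
    return s / N  # (1/(2*pi)) * integral over the circle

theta = 0.7
u1 = harmonic_ext(0.5, theta)
u2 = harmonic_ext(0.6, theta)
# The exact extension is u = r*cos(theta), linear in r, so the radial
# (outward normal) derivative equals cos(theta) = u(1, theta): the
# normal derivative on the boundary equals the boundary value itself,
# i.e. the Steklov eigenvalue of this eigenfunction is 1.
dudr = (u2 - u1) / 0.1
err = abs(dudr - math.cos(theta))
assert err < 1e-9
```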
\section{Prescribing scalar curvature and mean curvature simultaneously}\label{section_5}
Given a Riemannian manifold with boundary $(M,\partial M,\bar{g})$, the following theorem was proved by Cruz and Vit{\'o}rio
in \cite{cruz2019prescribing}.
\begin{theorem}[Theorem 3.5 in \cite{cruz2019prescribing}]\label{cruz}
Let $f=(f_1, f_2)\in L^p(M)\oplus W^{\frac{1}{2}, p}(\partial M)$ where $p>n$.
Suppose that $\mathcal{S}_{\bar{g}}^*$ is injective. Then there exists $\eta>0$ such that if
$$\|f_1-R_{\bar{g}}\|_{L^p(M)}+\|f_2-H_{\bar{\gamma}}\|_{W^{\frac{1}{2},p}(\partial M)}< \eta,$$
then there is a metric $g\in \mathcal{M}^{2, p}$ such that $\Psi(g)=f$. Moreover, $g$ is smooth in any open set whenever $f$ is smooth.
\end{theorem}
More generally, we have the following:
\begin{theorem}\label{prescribing}
Let $f=(f_1, f_2)\in L^p(M)\oplus W^{\frac{1}{2}, p}(\partial M)$ where $p>n$.
Define
\begin{equation}\label{kernel}
\Phi:=\left\{(f_1,f_2)\,\left|\,\int_Mf_1fdV_{\bar{g}}+\int_{\partial M}f_2fdA_{\bar{\gamma}}=0\mbox{ for all }f\in\ker\mathcal{S}_{\bar{g}}^*\right.\right\}.
\end{equation}
There exists $\eta>0$ such that if $(f_1,f_2)\in \Phi$ and
$$\|f_1-R_{\bar{g}}\|_{L^p(M)}+\|f_2-H_{\bar{\gamma}}\|_{W^{\frac{1}{2},p}(\partial M)}< \eta,$$
then there is a metric $g\in \mathcal{M}^{2, p}$ such that $\Psi(g)=f$. Moreover, $g$ is smooth in any open set whenever $f$ is smooth.
\end{theorem}
\begin{proof}
It was proved on p.~5 of \cite{cruz2019prescribing} that
$A^*_{\bar{g}}$ is elliptic, indeed properly elliptic, in $M$, and that $B^*_\gamma$ satisfies the Shapiro-Lopatinskij condition at every point of the boundary.
Thus, $\mathcal{S}^*_{\bar{g}}$ defined in (\ref{1.2}) has injective symbol.
Hence, we have the following decomposition (see \cite{Berger&Ebin} and \cite{Fischer&Marsden}; see also Theorem 4.1 in \cite{lin2016deformations}):
\begin{equation}\label{decomposition}
C^\infty(M)\times C^\infty(\partial M)=\mbox{Im }\mathcal{S}_{\bar{g}}\oplus\ker\mathcal{S}^*_{\bar{g}}.
\end{equation}
Combining (\ref{kernel}) and (\ref{decomposition}), we have $\mbox{Im }\mathcal{S}_{\bar{g}}=\Phi$.
By identifying $\Phi$ with its tangent space, we can see that
the map $\Psi$ defined in (\ref{defn_Psi})
is a submersion at $\bar{g}$ with respect to $\Phi$.
We can now apply the Generalized Inverse Function Theorem (cf. Theorem 4.3 in \cite{lin2016deformations})
and conclude the local surjectivity of $\Psi$ at $\bar{g}$.
This proves the assertion.
This proves the assertion.
\end{proof}
The following theorem shows that we can prescribe the scalar curvature in $M$ and
the mean curvature on the boundary $\partial M$ simultaneously for
a generic scalar-flat manifold with minimal boundary.
\begin{theorem}\label{presc1}
Suppose that $(M, \partial M, \bar{g})$ is not singular and satisfies $R_{\bar{g}}=0$ in $M$
and $H_{\bar{\gamma}}=0$ on $\partial M$.
Then, for any given functions $f_1\in C^\infty(M)$ and $f_2\in C^\infty(\partial M)$, there exists a metric $g$ such that $R_g=f_1$
in $M$ and $H_\gamma=f_2$ on $\partial M$.
\end{theorem}
\begin{proof}
Let $f_1\in C^\infty(M)$ and $f_2\in C^\infty(\partial M)$.
Since $M$ is compact, we can choose $L>0$ large enough such that
\begin{align}\label{inequ1}
\frac{\|f_1\|_\infty}{L}+\frac{\|f_2\|_\infty}{L^{1/2}}<\eta,
\end{align}
where $\eta>0$ is given as in Theorem \ref{cruz}.
Since $R_{\bar{g}}=0$ in $M$ and $H_{\bar{\gamma}}=0$ on $\partial M$ by assumption, the inequality \eqref{inequ1} can be written as
\begin{align}\label{ineq2}
\left\| \frac{f_1}{L}-R_{\bar{g}}\right\|_\infty +\left\| \frac{f_2}{L^{1/2}}-H_{\bar{\gamma}}\right\|_\infty < \eta.
\end{align}
We can now apply Theorem \ref{cruz} to conclude that $R_g=\displaystyle\frac{f_1}{L}$ in $M$ and
$H_\gamma=\displaystyle\frac{f_2}{L^{1/2}}$ on $\partial M$
for some smooth metric $g$. Thus the metric $L^{-1}g$ satisfies
$$R_{L^{-1}g}=LR_g=L(\frac{f_1}{L})=f_1~~\mbox{ in }M,$$
and
$$H_{L^{-1}\gamma}=L^{1/2}H_\gamma=L^{1/2}(\frac{f_2}{L^{1/2}})=f_2~~\mbox{ on }\partial M,$$
as required.
\end{proof}
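The rescaling used in the last two displays is the standard behaviour of curvature under a constant conformal change; for completeness, here is a short check:

```latex
% For a constant c > 0 the Christoffel symbols of cg and g coincide, so the
% (1,3)-curvature tensor is unchanged; the single metric contraction in the
% scalar curvature contributes a factor c^{-1}:
\[
  R_{cg} = c^{-1}R_g .
\]
% The outward unit normal rescales as \nu_{cg} = c^{-1/2}\nu_g, so the second
% fundamental form scales by c^{1/2} and its trace with (cg)^{-1} gives
\[
  H_{c\gamma} = c^{-1/2}H_\gamma .
\]
% Taking c = L^{-1} yields R_{L^{-1}g} = L R_g and
% H_{L^{-1}\gamma} = L^{1/2} H_\gamma, which is what the proof uses.
```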
As we have seen in section \ref{section_example},
the product manifold
$M=[0,1]\times S^2\times \Sigma$
equipped with the metric $dt^2+g_1+g_{-1}$
is scalar-flat, has totally geodesic boundary, and is not singular,
where
$S^2$ is
the $2$-dimensional unit sphere equipped with the standard metric $g_1$,
and $\Sigma$ is a $2$-dimensional compact manifold
of genus at least $2$ equipped with the hyperbolic metric $g_{-1}$.
Combining this with Theorem \ref{presc1}, we have the following:
\begin{cor}
Let $M=[0,1]\times S^2\times \Sigma$.
For any $f_1\in C^\infty(M)$ and $f_2\in C^\infty(\partial M)$,
there exists a metric $g$ such that $R_g=f_1$
in $M$ and $H_\gamma=f_2$ on $\partial M$.
\end{cor}
We also have the following:
\begin{theorem}\label{thm2}
Suppose $(M,\partial M, \bar{g})$ is Ricci-flat with totally-geodesic boundary.
For any $(f_1, f_2)\in \Phi_0$ where
\begin{equation}\label{Phi_0}
\Phi_0:=\left\{ (f_1, f_2)\in C^\infty(M)\times C^\infty(\partial M)\left|\int_M f_1 dV_{\bar{g}}=\int_{\partial M} f_2 dA_{\bar{\gamma}}=0\right.\right\},
\end{equation}
there exists a metric $g$ such that $R_g=f_1$ in $M$ and $H_\gamma=f_2$ on $\partial M$.
\end{theorem}
\begin{proof}
If $(M,\partial M,\bar{g})$ is Ricci-flat with totally-geodesic boundary, it follows from Proposition \ref{Ricci-flat}
that
\begin{equation*}
\ker(\mathcal{S}_{\bar{g}}^*)=\{\mbox{constant}\}.
\end{equation*}
Hence, $\Phi_0$ defined in (\ref{Phi_0}) is contained in $\Phi$ defined in (\ref{kernel}).
Let $(f_1,f_2)\in \Phi_0$. We can choose $L>0$ sufficiently large such that
\begin{equation*}
\left\| \frac{f_1}{L}-R_{\bar{g}}\right\|_\infty +\left\| \frac{f_2}{L^{1/2}}-H_{\bar{\gamma}}\right\|_\infty < \eta
\end{equation*}
where $\eta$ is given as in Theorem \ref{prescribing}.
Since $(f_1/L,f_2/L^{1/2})\in \Phi_0\subset\Phi$, it follows from Theorem \ref{prescribing} that
$R_g=\displaystyle\frac{f_1}{L}$ in $M$ and
$H_\gamma=\displaystyle\frac{f_2}{L^{1/2}}$ on $\partial M$
for some smooth metric $g$ close to $\bar{g}$. Thus the metric $L^{-1}g$ satisfies
$$R_{L^{-1}g}=LR_g=L(\frac{f_1}{L})=f_1~~\mbox{ in }M,$$
and
$$H_{L^{-1}\gamma}=L^{1/2}H_\gamma=L^{1/2}(\frac{f_2}{L^{1/2}})=f_2~~\mbox{ on }\partial M,$$
as required.
\end{proof}
As we have seen in section \ref{section_example},
for any closed Ricci-flat $(M,g)$,
the product manifold
$\tilde{M}=[0,1]\times M$ equipped with the product metric
$\tilde{g}=dt^2+g$ is Ricci-flat with totally-geodesic boundary.
Therefore, from Theorem \ref{thm2}, we immediately have the following
\begin{cor}
Suppose $(M,g)$ is a closed Ricci-flat manifold.
Let
$\tilde{M}=[0,1]\times M$ be the product manifold
equipped with the product metric $\tilde{g}=dt^2+g$.
Then, for any $(f_1, f_2)\in C^\infty(\tilde{M})\times C^\infty(\partial\tilde{M})$ such that
$$
\int_{\tilde{M}} f_1 dV_{\tilde{g}}=0~~\mbox{ and }~~\int_{\partial \tilde{M}} f_2 dA_{\tilde{\gamma}}=0,
$$
there exists a metric $\hat{g}$ on $\tilde{M}$ such that $R_{\hat{g}}=f_1$ in $\tilde{M}$ and $H_{\hat{\gamma}}=f_2$ on $\partial \tilde{M}$.
\end{cor}
Next,
we have the following theorem on prescribing the scalar curvature
and the mean curvature simultaneously on the upper hemisphere.
\begin{theorem}
Let $f_1\in C^\infty(\mathbb{S}_+^n)$ and $f_2\in C^\infty(\partial \mathbb{S}_+^n)$ such that
$$\int_{\mathbb{S}_+^n} x_i f_1 dV_{g_c}=\int_{\partial \mathbb{S}_+^n} x_i f_2 dA_{\gamma_c}=0~~\mbox{ for }1\leq i\leq n.$$
Then there exists a metric $g$ such that $R_g=f_1$ in $\mathbb{S}^n_+$ and $H_\gamma=f_2$ on $\partial \mathbb{S}^n_+$.
\end{theorem}
\begin{proof}
It follows from
Proposition \ref{hemisphere} that
$\ker(\mathcal{S}_{g_c}^*)=\mbox{span}\{x_1, \cdots, x_n \}$. Hence,
the space
$$\left\{(f_1,f_2)\in C^\infty(\mathbb{S}_+^n)\times C^\infty(\partial \mathbb{S}_+^n)\left|\,\int_{\mathbb{S}_+^n} x_i f_1 dV_{g_c}=\int_{\partial \mathbb{S}_+^n} x_i f_2 dA_{\gamma_c}=0~~\mbox{ for }1\leq i\leq n\right.\right\}$$
lies in $\Phi$ defined in (\ref{kernel}).
Using this, we can follow the same argument as in the proof of Theorem \ref{thm2}
to finish the proof.
\end{proof}
Finally, we have the following theorem on prescribing the scalar curvature
and the mean curvature simultaneously on the unit ball.
\begin{theorem}
Let $f_1\in C^\infty(D^n)$ and $f_2\in C^\infty(\partial D^n)$ be such that
\begin{equation}\label{condition1}
\int_{D^n}f_1 x_i dV_{g_0}=\int_{\partial D^n} f_2x_i dA_{\gamma_0}=0~~\mbox{ for any }1\leq i\leq n.
\end{equation}
Then there exists a metric $g$ such that $R_g=f_1$ in $D^n$ and $H_\gamma=f_2$ on $\partial D^n$.
\end{theorem}
\begin{proof}
It follows from Proposition \ref{prop1.5} that
$\ker(\mathcal{S}_{g_0}^*)=\mbox{span}\{x_1, \cdots, x_n \}$. Hence,
the space of all $(f_1,f_2)$ satisfying (\ref{condition1})
lies in $\Phi$ defined in (\ref{kernel}).
Hence, we can follow the same argument as in the proof of Theorem \ref{thm2}
to finish the proof.
\end{proof}
\section{Rigidity results}\label{last_section}
Suppose that $(M,\partial M,\bar{g},f)$ is a singular space
such that
\begin{equation}\label{4.0}
R_{\bar{g}}=0\mbox{ in }M~~\mbox{ and }~~H_{\bar{\gamma}}=0\mbox{ on }\partial M.
\end{equation}
We define the following functional:
\begin{equation}\label{4.1}
\mathcal{F}(g)=\int_M R_g fdV_{g}+2\int_{\partial M}H_\gamma fdA_{g}
\end{equation}
for $g\in \mathcal{M}$.
We have the following:
\begin{lemma}\label{lem4.1}
The metric $\bar{g}$ is a critical point of $\mathcal{F}$ defined in \eqref{4.1}.
\end{lemma}
\begin{proof}
We compute
\begin{equation*}
\begin{split}
\left.\frac{d}{dt}\mathcal{F}(\bar{g}+th)\right|_{t=0}&=
\int_M(\delta R_{\bar{g}} h)f dV_{\bar{g}} +2\int_{\partial M}(\delta H_{\bar{\gamma}} h)f dA_{\bar{\gamma}}\\
&\hspace{4mm}+\int_M R_{\bar{g}} \left.\frac{\partial}{\partial t}dV_{\bar{g}+th}\right|_{t=0}+2\int_{\partial M}H_{\bar{\gamma}} \left.\frac{\partial}{\partial t}dA_{\bar{\gamma}+th}\right|_{t=0}\\
&=\langle\mathcal{S}_{\bar{g}}(h),(f,f)\rangle
=\langle h,\mathcal{S}_{\bar{g}}^*(f,f)\rangle=0,
\end{split}
\end{equation*}
where we have used
(\ref{4.0}) and the fact that $\mathcal{S}_{\bar{g}}^*(f,f)=0$.
This proves the assertion.
\end{proof}
From now on, we suppose that $(M,\partial M,\overline{g})$ is a compact $n$-dimensional manifold
which is flat (hence Ricci-flat) and has totally geodesic boundary.
It follows from Proposition \ref{supseteq} and Proposition
\ref{Ricci-flat} that $\overline{g}$ is singular and we can take $f\equiv 1$.
Then the functional $\mathcal{F}$ defined in (\ref{4.1}) becomes
\begin{equation}\label{4.2}
\mathcal{F}(g)=\int_M R_g dV_{g}+2\int_{\partial M}H_\gamma dA_{\gamma}.
\end{equation}
We will prove the following rigidity theorem.
\begin{theorem}\label{rigidity}
Let $(M,\partial M,\bar{g})$ be a compact $n$-dimensional manifold
which is flat and has totally geodesic boundary.
If $g$ is sufficiently close to $\bar{g}$ such that\\
(i) $R_g\geq 0$ in $M$ and $H_\gamma\geq 0$ on $\partial M$,\\
(ii) $g$ and $\bar{g}$ induce the same metric on $\partial M$,\\
then $(M,\partial M,g)$ is also flat and has totally geodesic boundary.
\end{theorem}
To prove Theorem \ref{rigidity}, we need the following proposition from \cite{Brendle&Marques}:
\begin{prop}[Proposition 11 in \cite{Brendle&Marques}]\label{prop11}
Let $M$ be a compact $n$-dimensional manifold with boundary $\partial M$. Fix a real number $p>n$.
If
$\|g-\bar{g}\|_{W^{2,p}(M,\bar{g})}$
is sufficiently small and $g$ and $\bar{g}$ induce the same metric on $\partial M$,
then we can find a diffeomorphism
$\varphi: M\to M$ such that $\varphi|_{\partial M}=id$ and
$h=\varphi^*(g)-\bar{g}$ is divergence-free with respect to $\bar{g}$. Moreover,
\begin{equation}\label{bound}
\|h\|_{W^{2,p}(M,\bar{g})}\leq N\|g-\bar{g}\|_{W^{2,p}(M,\bar{g})}
\end{equation}
where $N$ is a positive constant that depends only on $M$.
\end{prop}
We are now ready to prove Theorem \ref{rigidity}.
\begin{proof}[Proof of Theorem \ref{rigidity}]
Suppose that $g$ and $\bar{g}$ are given as in Theorem \ref{rigidity}.
We can apply Proposition \ref{prop11} to get a diffeomorphism
$\varphi: M\to M$ such that $\varphi|_{\partial M}=id$,
$h=\varphi^*(g)-\bar{g}$ is divergence-free with respect to $\bar{g}$
and satisfies (\ref{bound}).
Note that
\begin{equation}\label{simplify}
h=\varphi^*(g)-\bar{g}=0~~\mbox{ on }\partial M,
\end{equation}
since $g$ and $\bar{g}$ induce the same metric on $\partial M$
and $\varphi|_{\partial M}=id$.
We compute
\begin{equation}\label{4.3}
\mathcal{F}(\varphi^*g)=\mathcal{F}(\bar{g})
+D\mathcal{F}_{\bar{g}}(h)
+\frac{1}{2}D^2\mathcal{F}_{\bar{g}}(h,h)+E_3,
\end{equation}
where $E_3$ is bounded by (see (7.11) in \cite{Case})
\begin{equation}\label{4.4}
|E_3|\leq C\|h\|_{C^0(M,\bar{g})}\int_M |\nabla_{\bar{g}} h|^2dV_{\overline{g}}
\end{equation}
for some constant $C$ depending only on $(M,\partial M,\overline{g})$,
thanks to (\ref{simplify}).
It follows from the assumption and Lemma \ref{lem4.1} that
\begin{equation}\label{4.5}
\mathcal{F}({\bar{g}})=0~~\mbox{ and }~~D\mathcal{F}_{\bar{g}}(h)=0.
\end{equation}
We are going to compute $D^2\mathcal{F}_{\bar{g}}(h,h)$.
To this end, we have the following formula (see the last equation on p.~124
of \cite{cruz2019prescribing}):
\begin{equation*}
\begin{split}
&\int_M f(\delta R_{\hat{g}}h)dV_{\hat{g}} +2\int_{\partial M}f(\delta H_{\hat{\gamma}}h)dA_{\hat{\gamma}}\\
&=\int_M\Big(-\Delta_{\hat{g}}f(tr_{\hat{g}}h)+\langle\mbox{Hess}_{\hat{g}}f,h\rangle-f\langle h,Ric_{\hat{g}}\rangle\Big)dV_{\hat{g}}\\
&\hspace{4mm}
+\int_{\partial M}\Big(tr_{\hat{\gamma}}h\frac{\partial f}{\partial \nu}-f\langle II_{\hat{\gamma}},h\rangle_{\hat{\gamma}}\Big)dA_{\hat{\gamma}}
\end{split}
\end{equation*}
for any metric $\hat{g}$ and any smooth function $f$. In particular, if we take $f\equiv 1$ and $\hat{g}=\bar{g}+th$,
we have
\begin{equation*}
\begin{split}
&\int_M(\delta R_{\bar{g}+th} h)dV_{\bar{g}+th} +2\int_{\partial M}(\delta H_{\bar{\gamma}+th} h)dA_{\bar{\gamma}+th}\\
&=-\int_M\langle h,Ric_{\bar{g}+th}\rangle dV_{\bar{g}+th}
-\int_{\partial M}\langle II_{\bar{\gamma}+th},h\rangle_{\bar{\gamma}+th}dA_{\bar{\gamma}+th}.
\end{split}
\end{equation*}
Differentiating it with respect to $t$, evaluating it at $t=0$ and
using the fact that $\overline{g}$ is flat with totally geodesic boundary, we obtain
\begin{equation}\label{4.6}
\begin{split}
D^2\mathcal{F}_{\bar{g}}(h,h)&=\frac{d}{dt}\left.\left(\int_M(\delta R_{\bar{g}+th} h)dV_{\bar{g}+th} +2\int_{\partial M}(\delta H_{\bar{\gamma}+th} h)dA_{\bar{\gamma}+th}\right)\right|_{t=0}\\
&=-\int_M\langle h,\left.\frac{\partial}{\partial t}(Ric_{\bar{g}+th})\right|_{t=0}\rangle dV_{\bar{g}}
-\int_{\partial M}\langle \left.\frac{\partial}{\partial t}(II_{\bar{\gamma}+th})\right|_{t=0},h\rangle_{\bar{\gamma}}dA_{\bar{\gamma}}\\
&=-\int_M\langle h,\left.\frac{\partial}{\partial t}(Ric_{\bar{g}+th})\right|_{t=0}\rangle dV_{\bar{g}}
\end{split}
\end{equation}
where the last equality follows from (\ref{simplify}).
There holds (see (3.2) in \cite{lin2016deformations} for example)
\begin{equation}\label{4.7}
\left.\frac{\partial}{\partial t}(Ric_{\bar{g}+th})_{jk}\right|_{t=0}
=-\frac{1}{2}\big(\Delta_L h_{jk}+\nabla_j\nabla_k(tr_{\bar{g}}h)+\nabla_j(div_{\bar{g}} h)_k+\nabla_k(div_{\bar{g}} h)_j\big).
\end{equation}
Here the Lichnerowicz Laplacian acting on $h$ is defined as
\begin{equation}\label{4.8}
\Delta_Lh_{jk}=\Delta h_{jk}+2(\overset{\circ}{Rm}\cdot h)_{jk}-Ric_{ji}h^i_k-Ric_{ki}h^i_j,
\end{equation}
where the geometric quantities on the right-hand side are taken with respect to $\bar{g}$.
Since $\bar{g}$ is flat and $div_{\bar{g}}h=0$,
it follows from (\ref{4.6})-(\ref{4.8}) that
\begin{equation}\label{4.9.5}
D^2\mathcal{F}_{\bar{g}}(h,h)=\frac{1}{2}\int_M \Big(h_{jk}\Delta_{\bar{g}}h_{jk}+h_{jk}\nabla_j\nabla_k(tr_{\bar{g}}h)\Big) dV_{\bar{g}}.
\end{equation}
By integration by parts, (\ref{simplify})
and
the fact that $h$ is divergence-free with respect to $\bar{g}$,
we can rewrite (\ref{4.9.5}) as
\begin{equation}\label{4.9}
\begin{split}
D^2\mathcal{F}_{\bar{g}}(h,h)
&=\frac{1}{2}\int_M \Big(h_{jk}\Delta_{\bar{g}}h_{jk}-\nabla_jh_{jk}\nabla_k(tr_{\bar{g}}h)\Big) dV_{\bar{g}}=-\frac{1}{2}\int_M |\nabla_{\bar{g}}h|^2dV_{\bar{g}}.
\end{split}
\end{equation}
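The integration by parts behind (\ref{4.9}) deserves a word: both boundary integrals below vanish because $h=0$ on $\partial M$ by (\ref{simplify}), and the remaining bulk term with $\nabla_jh_{jk}$ vanishes because $h$ is divergence-free. Explicitly:

```latex
\begin{align*}
\int_M h_{jk}\nabla_j\nabla_k(tr_{\bar{g}}h)\, dV_{\bar{g}}
&=\int_{\partial M} h_{jk}\nu_j\nabla_k(tr_{\bar{g}}h)\, dA_{\bar{\gamma}}
 -\int_M \nabla_jh_{jk}\nabla_k(tr_{\bar{g}}h)\, dV_{\bar{g}}=0,\\
\int_M h_{jk}\Delta_{\bar{g}}h_{jk}\, dV_{\bar{g}}
&=\int_{\partial M} h_{jk}\nu_i\nabla_ih_{jk}\, dA_{\bar{\gamma}}
 -\int_M |\nabla_{\bar{g}}h|^2\, dV_{\bar{g}}
 =-\int_M |\nabla_{\bar{g}}h|^2\, dV_{\bar{g}}.
\end{align*}
```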
Now, we can combine (\ref{4.3}), (\ref{4.5}) and (\ref{4.9}) to obtain
\begin{equation}\label{4.10}
\mathcal{F}(\varphi^*g)=-\frac{1}{2}\int_M |\nabla_{\bar{g}} h|^2dV_{\bar{g}}+E_3
\end{equation}
where $E_3$ satisfies (\ref{4.4}).
By assumption (i) in Theorem \ref{rigidity} and the fact that $\varphi$
is a diffeomorphism, we have
\begin{equation}\label{4.11}
\mathcal{F}(\varphi^*g)=\mathcal{F}(g)\geq 0.
\end{equation}
Combining (\ref{4.10}) and (\ref{4.11}), we get
\begin{equation}\label{4.12}
0\leq -\frac{1}{2}\int_M |\nabla_{\bar{g}} h|^2dV_{\bar{g}}+E_3.
\end{equation}
In view of (\ref{bound}), (\ref{4.4}) and (\ref{4.12}), we can conclude
that $\nabla_{\bar{g}} h=0$ when $g$ is sufficiently close to $\bar{g}$.
In particular, $h_{jk}$ is constant for each pair of $j,k$.
Since $h=0$ on $\partial M$ by (\ref{simplify}),
we must have $h=0$ in $M$. That is to say, $\varphi^*(g)=\bar{g}$.
Hence, $(M,\partial M,g)$ is also flat and has totally geodesic boundary.
This finishes the proof of Theorem \ref{rigidity}.
\end{proof}
We remark that the second variation of
the functional defined in (\ref{4.2})
has been computed in \cite{Araujo} in general,
without assuming that $(M,\partial M,\bar{g})$ is Ricci-flat with totally geodesic boundary.
As we have seen in section \ref{section_example}, if
$T^n$ is the $n$-dimensional torus equipped with
the flat metric $g_0$, then $[0,1]\times T^n$ equipped with the product metric
$dt^2+g_0$ is flat and has totally geodesic boundary.
Combining this with Theorem \ref{rigidity}, we have the following
rigidity result:
\begin{theorem}
Consider $\tilde{M}=[0,1]\times T^n$ equipped with the product metric
$\tilde{g}=dt^2+g_0$, where $T^n$ is the $n$-dimensional torus equipped with
the flat metric $g_0$. If $g$ is sufficiently close to $\tilde{g}$ such that\\
(i) $R_g\geq 0$ in $\tilde{M}$ and $H_\gamma\geq 0$ on $\partial \tilde{M}$,\\
(ii) $g$ and $\tilde{g}$ induce the same metric on $\partial \tilde{M}$,\\
then $(\tilde{M},\partial\tilde{M},g)$ is also flat and has totally geodesic boundary.
\end{theorem}
\section*{Acknowledgement}
The authors would like to thank Prof. Yueh-Ju Lin for answering questions on her paper. Part of the work was done when the first author was visiting National Center for Theoretical Sciences in Taiwan, and he is grateful for the kind hospitality. The first author is supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2019041021), and by Korea Institute for Advanced Study (KIAS) grant funded by the Korea government (MSIP). The second author is supported by Ministry of Science and Technology, Taiwan, with the grant number: 108-2115-M-024-007-MY2.
\bibliographystyle{amsplain}
\section{Introduction}
Let $L$ be a Lie algebra with universal enveloping algebra $U(L)$ over a field ${\mathbb F}$.
Our aim in this paper is to survey the results concerning the
isomorphism problem for enveloping algebras: given another Lie algebra
$H$ for which $U(L)$ and $U(H)$ are isomorphic as associative algebras, can we deduce that
$L$ and $H$ are isomorphic Lie algebras? We can ask weaker questions in the sense that
given $U(L)\cong U(H)$, what invariants of $L$ and $H$ are the same?
We say that a particular invariant of $L$ is
{\em determined} (by $U(L)$), if
every Lie algebra $H$ also possesses this invariant whenever
$U(L)$ and $U(H)$ are isomorphic as associative algebras. For example, it is well-known that the dimension of a
finite-dimensional Lie algebra $L$ is determined by $U(L)$ since
it coincides with the Gelfand-Kirillov dimension of $U(L)$.
The closely related isomorphism problem for group rings asks: is every finite group $G$
determined by its integral
group ring ${\mathbb Z}G$? A positive solution for the class of all
nilpotent groups was given independently in \cite{RS} and
\cite{W}. There exist, however, a pair of non-isomorphic finite solvable groups of derived length 4 whose integral group rings are isomorphic (see \cite{H}).
In Section \ref{fox}, we discuss identifications of certain Lie subalgebras associated to the augmentation ideal ${\omega}(L)$ of $U(L)$. Let $S$ be a Lie subalgebra of $L$.
The identification of the subalgebras $L\cap {\omega}^n(L){\omega}^m(S)$ naturally
arises in the context of enveloping algebras.
In Section \ref{aug}, we show that certain invariants of $L$ are determined by $U(L)$, including the nilpotence class of a nilpotent Lie algebra $L$. The results of this section motivate one to investigate the isomorphism problem in detail for low-dimensional nilpotent Lie algebras. The isomorphism problem for nilpotent Lie algebras of dimension at most 6 is discussed in Section \ref{low-dim}. It turns out that there exist counterexamples to the isomorphism problem in dimension 5 over a field of characteristic 2 and in dimension 6 over fields of characteristic 2 and 3.
The conclusion is that as the dimension of $L$ increases we have to exclude more fields of positive characteristic to avoid counterexamples. Indeed, if there is a pair of Lie algebras $L$ and $H$ that provides a counterexample over a field ${\mathbb F}$ then a central extension of $L$ and $H$ would provide a counterexample in a higher dimension over ${\mathbb F}$. Furthermore, it is shown in Example \ref{ex1} that
over any field of positive characteristic $p$ there exist non-isomorphic Lie algebras of dimension $p+3$ whose enveloping algebras are isomorphic. So over any field of positive characteristic $p$ and any integer $n\geq p+3$ there exists a pair of non-isomorphic Lie algebras of dimension $n$ whose enveloping algebras are isomorphic.
These observations convince us to consider the isomorphism problem over a field of characteristic zero; however, there are other invariants that we expect to be determined over any field. Some of these questions are listed in Section \ref{quo}.
Nevertheless, it makes sense to consider the isomorphism problem for restricted Lie algebras over a field ${\mathbb F}$ of positive characteristic $p$. We denote the restricted enveloping algebra of a restricted Lie algebra $L$ by $u(L)$. Given another restricted Lie algebra $H$ for which $u(L)\cong u(H)$ as associative algebras, we ask what invariants of $L$ and $H$ are the same? For example, since ${\rm dim}_{{\mathbb F}} u(L)= p^{{\rm dim}_{{\mathbb F}} L}$, the dimension of $L$ is determined. Unlike abelian Lie algebras whose only invariant
is their dimension, abelian restricted Lie algebras have more structure. As a first step, abelian restricted Lie algebras are considered in Section \ref{rest}. Other known results for restricted Lie algebras are also discussed in Section \ref{rest}.
In Section \ref{hopf}, we have collected the known results about the Hopf algebra structure of $U(L)$ and $u(L)$ deducing that
the isomorphism problem is trivial if the Hopf algebra structures of $U(L)$ or $u(L)$ are considered. In this section an example is
given showing that the enveloping algebra of a Lie superalgebra $L$ may not necessary determine the dimension of $L$.
As we mentioned earlier some open problems are discussed in Section \ref{quo}.
\section{Preliminaries}
In this section we collect some basic definitions that can be found in \cite{B} or
\cite{SF}. Every associative algebra can be viewed as a Lie algebra under the natural Lie bracket
$[x,y]=xy-yx$. In fact, every Lie algebra can be embedded into an associative algebra
in a canonical way.
\begin{defi}
Let $L$ be a Lie algebra over ${\mathbb F}$ and $U(L)$ an associative algebra.
Let $\iota:L\to U(L)$ be a Lie homomorphism. The pair $(U(L),\iota)$ is called a (universal)
enveloping algebra of $L$ if for every associative algebra $A$ and every Lie homomorphism
$f:L\to A$ there is a unique algebra homomorphism ${\bar f}:U(L)\to A$ such that
${\bar f}\iota=f$.
\end{defi}
It is clear that if an enveloping algebra exists, then it is unique up to isomorphism. Its
existence can be shown as follows. Let $T(L)$ be the tensor algebra based
on the vector space $L$, that is
$$
T(L)={\mathbb F}\oplus L \oplus (L\otimes L)\oplus\cdots.
$$
The multiplication in $T(L)$ is induced by concatenation, which turns $T(L)$ into an
associative algebra. Now let $I$ be the ideal of $T(L)$ generated by all
elements of the form
$$
[x,y]-x\otimes y+y\otimes x, ~~x,y\in L,
$$
and let $U(L)=T(L)/I$. If we denote by $\iota$ the restriction to $L$ of the
natural homomorphism $T(L)\to U(L)$, then it can be verified that
$(U(L),\iota)$ is the enveloping algebra of $L$. Furthermore, since $\iota$
is injective, we can regard $L$ as a Lie subalgebra of $U(L)$. In fact we can say more:
\begin{thm}[Poincar\' e-Birkhoff-Witt]
Let $\{x_j\}_{j\in {\mathcal J}}$ be a totally-ordered basis for $L$ over ${\mathbb F}$.
Then $U(L)$ has a basis consisting of PBW monomials, that is, monomials of the form
$$
x_{j_1}^{a_1}\cdots x_{j_t}^{a_t},
$$
where $j_1<\cdots< j_t$ are in ${\mathcal J}$ and $t$ and each $a_i$ are non-negative
integers.
\end{thm}
This result is commonly referred to as the PBW Theorem.
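To illustrate how the PBW basis is used in practice, the following sketch straightens monomials in the enveloping algebra of the $3$-dimensional Heisenberg algebra (ordered basis $x<y<z$ with $[x,y]=z$ central), by repeatedly applying the single non-trivial rewriting rule $yx=xy-z$. The choice of algebra and the data representation are illustrative only.

```python
from collections import defaultdict

# Heisenberg Lie algebra with ordered basis x < y < z, [x,y] = z, z central.
# An element of U(L) is stored as {word: coefficient}, a word being a tuple
# of generators; PBW monomials correspond to non-decreasing words.

ORDER = {'x': 0, 'y': 1, 'z': 2}

def straighten(elem):
    """Rewrite an element of U(L) as a combination of PBW monomials."""
    out = defaultdict(int)
    stack = list(elem.items())
    while stack:
        word, c = stack.pop()
        for i in range(len(word) - 1):
            a, b = word[i], word[i + 1]
            if ORDER[a] > ORDER[b]:
                # swap the out-of-order pair; since z is central, the only
                # pair producing a correction term is (y, x): y*x = x*y - z
                stack.append((word[:i] + (b, a) + word[i + 2:], c))
                if (a, b) == ('y', 'x'):
                    stack.append((word[:i] + ('z',) + word[i + 2:], -c))
                break
        else:  # word is already a PBW monomial
            out[word] += c
    return {w: c for w, c in out.items() if c}

# y*x straightens to x*y - z, as dictated by the defining relation:
assert straighten({('y', 'x'): 1}) == {('x', 'y'): 1, ('z',): -1}
```

In particular the commutator $xy-yx$ straightens to the single PBW monomial $z$, as it must.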
Let $H$ be a subalgebra of $L$. It follows from the PBW Theorem that the extension of
the Lie homomorphism $H\hookrightarrow L\hookrightarrow U(L)$ to $U(H)$ is an injective
algebra homomorphism. So we can view $U(H)$ as a subalgebra of $U(L)$.
Next consider the \emph{augmentation map} $\varepsilon_L: U(L)\to {\mathbb F}$
which is the unique algebra homomorphism induced by $\varepsilon_L(x)=0$ for
every $x\in L$. The kernel of $\varepsilon _L$ is called the \emph{augmentation ideal} of $L$
and will be denoted by $\omega(L)$; thus, ${\omega}(L)=LU(L)=U(L)L$.
We denote by ${\omega}^n(L)$ the
$n$-th power of ${\omega}(L)$ and ${\omega}^0(L)$ is $U(L)$.
We consider left-normed commutators, that is
$$
[x_1, \ldots, x_n]=[[x_1, x_2], x_3, \ldots, x_n].
$$
The \emph{lower central series} of $L$ is defined inductively by
$\gamma_1(L)=L$ and $\gamma_{n}(L)=[\gamma_{n-1}(L),L]$. The second term
will be also denoted by $L'$; that is, $L'=\gamma_2(L)$. If $L'=0$ then $L$ is called
abelian. A Lie algebra $L$ is said to be \emph{nilpotent} if $\gamma_n(L)=0$ for
some $n$; the \emph{nilpotence class} of $L$ is the minimal integer $c$
such that $\gamma_{c+1}(L)=0$. Also, $L$ is called \emph{metabelian} if $L'$ is abelian.
Let $L$ be a Lie algebra over a field ${\mathbb F}$ of positive characteristic $p$ and denote by
$\ad : L \to L$ the adjoint representation of $L$ given by
$(\ad x)(y) = [y,x]$, where $x, y \in L$.
A mapping $^{[p]}:L\to L$ that satisfies the following
properties for every $x,y\in L$ and $\alpha\in {\mathbb F}$:
\begin{enumerate}
\item $(\ad x)^p=\ad(x^{[p]})$;
\item $(\alpha x)^{[p]}=\alpha^p x^{[p]}$; and,
\item $(x+y)^{[p]}=x^{[p]}+y^{[p]}+\sum_{i=1}^{p-1} s_i(x,y)$,
where $is_i(x,y)$ is the coefficient of $\lambda^{i-1}$ in $\ad(\lambda x+y)^{p-1}(x)$.
\end{enumerate}
is called a $[p]$-mapping. The pair $(L, [p])$ is called a \emph{restricted Lie algebra}.
\begin{rem}\label{si-gammap}
By expanding $\ad(\lambda x+y)^{p-1}(x)$ it can be seen that $s_i(x,y)\in \gamma_p(\la
x,y\ra)$, for every $i$.
\end{rem}
Every associative algebra can be regarded as a restricted Lie algebra with the natural Lie
bracket and exponentiation by $p$ as the $[p]$-mapping: $x^{[p]}=x^p$.
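The key point behind this claim is property (1): in any associative algebra over a field of characteristic $p$ one has $(\ad x)^p=\ad(x^p)$. A quick numerical sanity check over $2\times 2$ matrices modulo $p=3$ (the matrix size, modulus, and sample count are arbitrary illustrative choices):

```python
import random

p, n = 3, 2  # characteristic and matrix size, chosen for illustration

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def sub(A, B):
    return [[(A[i][j] - B[i][j]) % p for j in range(n)] for i in range(n)]

def ad(X):
    """The adjoint map in the paper's convention: (ad X)(Y) = [Y, X] = YX - XY."""
    return lambda Y: sub(mul(Y, X), mul(X, Y))

def mat_pow(X, m):
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(m):
        R = mul(R, X)
    return R

random.seed(0)
for _ in range(100):
    X = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    Y = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
    lhs = Y
    for _ in range(p):          # apply (ad X) p times to Y
        lhs = ad(X)(lhs)
    rhs = ad(mat_pow(X, p))(Y)  # apply ad(X^p) to Y
    assert lhs == rhs
print("(ad x)^p = ad(x^p) holds on all samples")
```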
A Lie subalgebra $H$ of $L$ is called a \emph{restricted subalgebra} of $L$ if $H$ is closed
under the $[p]$-map.
\begin{defi}
Let $L$ be a restricted Lie algebra over ${\mathbb F}$ and $u(L)$ an associative algebra.
Let $\iota:L\to u(L)$ be a restricted Lie homomorphism. The pair $(u(L),\iota)$ is called a
restricted (universal) enveloping algebra of $L$ if for every associative algebra $A$ and
every restricted Lie homomorphism
$f:L\to A$ there is a unique algebra homomorphism ${\bar f}:u(L)\to A$ such that
${\bar f}\iota=f$.
\end{defi}
It is clear that if restricted enveloping algebras exist then they are unique up to
an algebra isomorphism. Let $I$ be the ideal of $U(L)$ generated
by all elements $x^{[p]}-x^p$, $x\in L$. Put $u(L)=U(L)/I$. Then
$(u(L),\iota)$ has the desired property, where $\iota$ is the restriction to $L$
of the natural map $U(L)\to u(L)$.
The analogue of the PBW Theorem for restricted Lie algebras is due to Jacobson:
\begin{thm}[Jacobson]
Let $\{x_j\}_{j\in {\mathcal J}}$ be a totally-ordered basis for $L$ over a field ${\mathbb F}$ of positive characteristic $p$.
Then $u(L)$ has a basis consisting of restricted PBW monomials.
\end{thm}
An important consequence is that we may regard $L$ as a restricted subalgebra of $u(L)$.
Thus, the $p$-map in $L$ is usually denoted by $x^p$.
Let $X$ be a subset of $L$. The restricted subalgebra generated by $X$ in $L$, denoted by
$\langle X \rangle_p$, is the smallest restricted subalgebra containing $X$. Also, $X^{p^j}$
denotes the restricted subalgebra generated by all $x^{p^j}$ with $x\in X$.
Recall that $L$ is $p$-nilpotent if there exists a positive integer $k$ such that $x^{p^k}=0$, for all $x\in L$. We say $L\in {\mathcal F}_p$ if $L$ is finite dimensional and $p$-nilpotent. Note that if $L\in {\mathcal F}_p$ then $L$ is nilpotent by Engel's Theorem. We denote by $L'_p$ the restricted subalgebra of $L$ generated by $L'$.
\section{Fox-type problems}\label{fox}
Let $L$ be a Lie algebra and $S$ a subalgebra of $L$ over a field ${\mathbb F}$.
The identification of the subalgebras $L\cap {\omega}^n(L){\omega}^m(S)$ naturally
arises in the context of enveloping algebras. It is proved in \cite{Ri, RU} that $L\cap{\omega}^n(L)=\gamma_n(L)$, for every integer $n\geq 1$.
Furthermore, we have:
\begin{prop}[\cite{RU}]
Let $S$ be a subalgebra of a Lie algebra $L$. The following statements hold for every integer $n\geq 1$.
\begin{enumerate}
\item ${\omega}(S)\cap {\omega}^n(S){\omega}(L)={\omega}^{n+1}(S)$; hence, $L\cap {\omega}^n(S){\omega}(L)=\gamma_{n+1}(S)$ .
\item ${\omega}(S)\cap {\omega}^n(S)U(L)={\omega}^{n}(S)$.
\end{enumerate}
\end{prop}
Hurley and Sehgal in \cite{HSe} proved that if $F$ is a free group and $R$ a normal subgroup of $F$, then
$$
F\cap (1+{\omega}^2(F){\omega}^n(R))=\gamma_{n+2}(R)\gamma_{n+1}(R\cap \gamma_2(F)),
$$
for every positive integer $n$. The analogous result for Lie algebras is as follows:
\begin{thm}[\cite{U-JA08}]
Let $L$ be a Lie algebra and $S$ a Lie subalgebra $L$. For every positive integer $n$, the following subalgebras
of $L$ coincide.
\begin{enumerate}
\item $\gamma_{n+2}(S)+\gamma_{n+1}(S\cap \gamma_2(L))$,
\item $L\cap ({\omega}^{n+2}(S)+{\omega}(S\cap \gamma_2(L)){\omega}^n(S))$,
\item $L\cap {\omega}^2(L){\omega}^n(S)$.
\end{enumerate}
\end{thm}
The motivation for this sort of problems
also comes from its group ring counterpart.
Let $F$ be a free group, $R$ a normal subgroup of $F$, and
denote by $\mathfrak{r}$ the kernel of the natural homomorphism ${\mathbb Z} F\to {\mathbb Z}(F/R)$.
Recall that the augmentation ideal $\mathfrak{f}$ of
the integral group ring ${\mathbb Z} F$ is the kernel of the map ${\mathbb Z} F\to {\mathbb Z}$ induced by $g\mapsto 1$ for every $g\in
F$. Fox introduced in \cite{Fox} the problem of identifying the subgroup $F\cap (1+\mathfrak{f}^n\mathfrak{r})$ in terms of $R$. Following Gupta's initial work on Fox's problem (\cite{Gup}), Hurley (\cite{Hur2}) and Yunus
(\cite{Yun}) independently gave a complete solution to this problem.
At the same time Yunus considered the Fox problem for free Lie algebras. Let $\mathcal{L}$ be a free Lie
algebra and $\mathcal{R}$
an ideal of $\mathcal{L}$. Yunus in \cite{Yun2} identified the subalgebra $\mathcal{L} \cap {\omega}(\mathcal{R}){\omega}^n(\mathcal{L})$ in terms of $\mathcal{R}$. The solution to the Fox problem for free restricted Lie algebras is as follows.
\begin{thm} [\cite{U-JA08}]
Let $\mathcal{R}$ be a restricted ideal of a free restricted Lie algebra $\mathcal{L}$. Then
$$
\mathcal{L}\cap {\omega}^n(\mathcal{L}){\omega}(\mathcal{R})=\sum [\mathcal{R}\cap\gamma_{i_1}(\mathcal{L}),\ldots,\mathcal{R}\cap\gamma_{i_k}(\mathcal{L})]^{p^j}+\sum
(\mathcal{R}\cap\gamma_i(\mathcal{L}))^{p^{\ell}},
$$
where the first sum is over all tuples $(i_1,\ldots,i_k)$, $k\geq 2$, and non-negative integer $j$ such that
$p^j(i_1+\cdots+i_k)-i_t\geq n$, for every $t$ in the range $1\leq t\leq k$ and the second sum is over all
positive integers $i$ and $\ell$ such that $(p^{\ell}-1)i\geq n$.
\end{thm}
Let $L$ be a restricted Lie algebra.
The dimension subalgebras of $L$ are defined by
$$
D_n(L)=L\cap {\omega}^n(L).
$$
\begin{thm}[\cite{RSh}]\label{dimension}
Let $L$ be a restricted Lie algebra. Then, for every $m,n\geq 1$, we have
\begin{enumerate}
\item $D_n(L)=\sum_{ip^j\geq n} \gamma_i(L)^{p^j}$,
\item $[D_n(L), D_m(L)]\su \gamma_{m+n}(L)$,
\item $D_n(L)^{p}\subseteq D_{np}(L)$.
\end{enumerate}
\end{thm}
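To make part (1) concrete, the smallest cases can be written out explicitly; this is a direct specialization of the formula, recorded here for illustration (recall that $X^{p^j}$ denotes the restricted subalgebra generated by the indicated $p^j$-th powers):

```latex
% n = 2: the pairs (i, j) with ip^j >= 2 are i >= 2, j = 0 and i = 1, j >= 1,
% and L^{p^j} \subseteq L^{p} for every j >= 1, so
\[
  D_2(L)=\gamma_2(L)+L^{p}.
\]
% n = 3 and p >= 3: apart from i >= 3, j = 0, every admissible pair has
% j >= 1 and contributes a subalgebra contained in L^{p}, whence
\[
  D_3(L)=\gamma_3(L)+L^{p}, \qquad p\geq 3.
\]
% For p = 2 the pair (2, 1) also survives, and
% D_3(L) = \gamma_3(L) + \gamma_2(L)^{2} + L^{4}.
```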
\begin{prop}[\cite{U-IJAC}]\label{dimn-sub-ext}
Let $R$ be a restricted subalgebra of a restricted Lie algebra $L$ and $m$ a positive integer. Then ${\omega}(R)\cap
{\omega}(L){\omega}^m(R)={\omega}^{m+1}(R)$; hence, $L\cap {\omega}(L){\omega}^m(R)= D_{m+1}(R)$.
\end{prop}
\begin{thm}[\cite{U-JA08}]
Let $L$ be a restricted Lie algebra and $S$ a restricted Lie subalgebra of $L$. For every positive integer $n$,
the following subalgebras of $L$ coincide.
\begin{enumerate}
\item $D_{n+2}(S)+D_{n+1}(S\cap D_2(L))$,
\item $L\cap ({\omega}^{n+2}(S)+{\omega}(S\cap D_2(L)){\omega}^n(S))$,
\item $L\cap {\omega}^2(L){\omega}^n(S)$.
\end{enumerate}
\end{thm}
There is a close relationship between restricted Lie algebras and finite $p$-groups.
Indeed, a variant of the PBW Theorem was proved by Jennings in \cite{Je} and later
extended in \cite{U-JPAA}. This analogue of the PBW Theorem for group algebras proves to be a very
useful tool as, for example, one can prove the following Fox-type results.
Below, ${\omega}(G)$ denotes the augmentation ideal of the group algebra ${\mathbb F} G$ over a field ${\mathbb F}$ of positive characteristic $p$.
\begin{thm}[\cite{U-JPAA}] Let $G$ be a finite $p$-group. For every subgroup $S$
of $G$ and every positive integer $n$, we have
$$
G\cap (1+{\omega}(G){\omega}^n(S))=D_{n+1}(S).
$$
\end{thm}
\begin{thm}[\cite{U-JPAA}] Let $G$ be a finite $p$-group. For every subgroup $S$
of $G$ and every positive integer $n$, we have
$$
G\cap (1+{\omega}^2(G){\omega}^n(S))=D_{n+2}(S)D_{n+1}(S\cap D_2(G)).
$$
\end{thm}
\section{Powers of the augmentation ideal}\label{aug}
Let $L$ be a Lie algebra with universal enveloping algebra $U(L)$.
A first natural question is whether $U(L)$ determines ${\omega}(L)$. The following lemma answers this
question in the affirmative.
\begin{lem}[\cite{RU}]
Let $L$ and $H$ be Lie algebras and suppose that $\varphi: U(L)\to
U(H)$ is an algebra isomorphism. Then there exists an algebra
isomorphism $\psi:U(L)\to U(H)$ with the property that
$\psi(\omega(L))=\omega(H)$.
\end{lem}
Henceforth, $\varphi:U(L)\to U(H)$ denotes an algebra isomorphism
that preserves the corresponding augmentation ideals.
Since $\varphi$ preserves ${\omega}(L)$, it also preserves the
filtration of $U(L)$ given by the powers of ${\omega}(L)$:
$$
U(L)={\omega}^0(L) \supseteq{\omega}^1(L) \supseteq {\omega}^2(L)\supseteq \ldots.
$$
Corresponding to this filtration is the graded associative algebra
$${\rm gr}(U(L))=\oplus_{i\geq 0}\omega^i(L)/\omega^{i+1}(L),$$
where the multiplication in ${\rm gr}(U(L))$ is induced by
$$(y_i+{\omega}^{i+1}(L))(z_j+{\omega}^{j+1}(L))=y_iz_j+{\omega}^{i+j+1}(L),$$ for all
$y_i\in {\omega}^i(L)$ and $z_j\in {\omega}^j(L)$. Certainly ${\rm gr}(U(L))$ is
determined by $U(L)$.
There is an analogous construction for Lie algebras. That is, one
can consider the graded Lie algebra of $L$ corresponding to its
lower central series given by
${\rm gr}(L)=\oplus_{i\geq1}\gamma_i(L)/\gamma_{i+1}(L)$.
Note that each quotient $\gamma_i(L)/\gamma_{i+1}(L)$ embeds into
the corresponding quotient $\omega^i(L)/\omega^{i+1}(L)$. Indeed, this way we get a
Lie algebra homomorphism from ${\rm gr}(L)$ into ${\rm gr}(U(L))$ which induces an algebra map from
$U({\rm gr}(L))$ to ${\rm gr}(U(L))$. We have:
\begin{thm}[\cite{RU}]\label{graded-iso} For any Lie algebra $L$, the map $\phi:U({\rm gr}(L))\to {\rm gr}(U(L))$
is an isomorphism of graded associative algebras.
\end{thm}
Note that under the isomorphism $\phi$ given in Theorem \ref{graded-iso}, we have
$\phi( L/\gamma_2(L))={\omega}(L)/{\omega}^2(L)$. Since ${\rm gr}(L)$ as a Lie algebra is generated by $ L/\gamma_2(L)$, we deduce that $\phi({\rm gr}(L))$ is the Lie subalgebra of ${\rm gr}(U(L))$ generated by
${\omega}(L)/{\omega}^2(L)$. Hence:
\begin{cor}[\cite{RU}]
The graded Lie algebra ${\rm gr}(L)$ is determined by $U(L)$.
\end{cor}
\begin{cor}[\cite{RU}]\label{lcs} For each pair of integers $(m,n)$ such that $n\ge m\ge 1$,
the quotient $\gamma_n(L)/\gamma_{m+n}(L)$ is determined by
$U(L)$.
\end{cor}
A useful tool that is used to prove many of the results is as follows. Recall that
the height of an element $y\in L$, $\nu(y)$, is
the largest integer $n$ such that $y\in
\gamma_n(L)$ if $n$ exists and is infinite if it does not.
\begin{thm}[\cite{Ri, RU}] Let $L$ be an arbitrary Lie algebra and let
$X=\{\bar{x_i}\}_{i\in {\mathcal I}}$ be a homogeneous basis of
${\rm gr}(L)$. Take a coset representative $x_i$ for each $\bar{x_i}$. Then the
set of all PBW monomials $x_{i_1}^{a_1}x_{i_2}^{a_2}\cdots x_{i_s}^{a_s}$
with the property that $\sum_{k=1}^s a_k\nu(x_{i_k})=n$
forms an ${\mathbb F}$-basis for ${\omega}^n(L)$ modulo ${\omega}^{n+1}(L)$, for every $n\geq 1$.
\end{thm}
A Lie algebra $L$ is called residually nilpotent if $\cap_{n\geq
1}\gamma_n(L)=0$; analogously, an associative ideal $I$ of
$U(L)$ is residually nilpotent whenever $\cap_{n\geq 1}I^n=0$.
\begin{thm}[\cite{RU}]
Let $L$ be a Lie algebra. Then $L$ is residually nilpotent as a Lie algebra if and only if
${\omega}(L)$ is
residually nilpotent as an associative ideal.
\end{thm}
We can now summarize the invariants of $L$ that are determined by $U(L)$.
\begin{thm}[\cite{RU}]\label{major-invariants} The following statements hold for every Lie algebra $L$ over any field.
\begin{enumerate}
\item Whether or not $L$ is residually nilpotent is determined.
\item Whether or not $L$ is nilpotent is determined.
\item\label{nilpotence-class} If $L$ is nilpotent then the nilpotence class of $L$ is determined.
\item If $L$ is nilpotent then the minimal number
of generators of $L$ is determined.
\item If $L$ is a finitely generated free nilpotent Lie algebra then $L$ is determined.
\item The quotient $L'/L''$ is determined.
\item Whether or not $L'$ is residually nilpotent is determined.
\item\label{solvable} If $L$ is finite-dimensional, then whether or not $L$ is solvable is
determined.
\end{enumerate}
\end{thm}
Part \eqref{solvable} of Theorem \ref{major-invariants} was proved in \cite{RU} over a field of characteristic zero; however,
according to \cite{V},
the enveloping algebra of a finite-dimensional Lie algebra $L$ over any field can be embedded into a (Jacobson)
radical algebra if and only if $L$ is solvable, so the statement holds over any field.
\section{Low dimensional nilpotent Lie algebras}\label{low-dim}
Based on results for simple Lie algebras in \cite{M}, it was shown in \cite{CKL}
that $L$ is determined by $U(L)$ in the case when $L$ is any Lie
algebra of dimension at most three over a field of any
characteristic other than two.
In this section we focus on low dimensional nilpotent Lie algebras.
A classification of such Lie algebras is well known and
can be found, for instance, in \cite{degraaf}. Since there is a unique isomorphism class of nilpotent Lie algebras
with dimension 1, and there is a unique such class with dimension 2,
the isomorphism problem is trivial in these
cases.
Up to isomorphism, there are two nilpotent Lie algebras with dimension $3$ one of which is abelian and the other is non-abelian. By Part \eqref{nilpotence-class} of Theorem \ref{major-invariants}, their universal enveloping algebras
must be non-isomorphic. The number of 4-dimensional nilpotent Lie algebras
is 3. One of these algebras is abelian, the second has nilpotency class 2, and
the third has nilpotency class 3. Again, by Part \eqref{nilpotence-class} of
Theorem \ref{major-invariants},
their universal enveloping algebras are pairwise non-isomorphic.
The following strategy, which we have used for dimensions 5 and 6, applies to higher dimensions.
For an arbitrary nilpotent Lie algebra $L$, we know, by Corollary \ref{lcs}, that the \emph{nilpotency sequence} $({\rm dim}\gamma_1(L),{\rm dim} \gamma_2(L),\ldots)$, after omitting the trailing zeros, is determined.
So, nilpotent Lie algebras of the same finite dimension can be divided into smaller clusters where members of each cluster have the same nilpotency sequence. The investigation of the isomorphism problem then reduces to the Lie algebras in the same cluster.
For example, there are 9 isomorphism classes of nilpotent Lie algebras with dimension 5 which can be found in \cite{degraaf}.
The nilpotency sequence of a nilpotent Lie algebra of dimension 5 is then one of $(5)$, $(5,1)$, $(5,2)$, $(5,2,1)$, $(5,3,1)$,
$(5,3,2,1)$. We can now summarize the results for dimensions 5 and 6 as follows:
\begin{thm}[\cite{SU}]\label{5main}
Let $L$ and $H$ be 5-dimensional nilpotent Lie algebras
over a field ${\mathbb F}$. If $\uea{L}\cong\uea{H}$, then one of the following must hold:
\begin{itemize}
\item[(i)] $L\cong H$;
\item[(ii)] $\charac\,{\mathbb F}=2$ and $L$ and $H$ are isomorphic either to the Lie algebras $L_{5,3}$ and $L_{5,5}$ or to
$L_{5,6}$ and $L_{5,7}$ in \cite[Section 5]{degraaf}.
\end{itemize}
\end{thm}
\begin{thm}[\cite{SU}]\label{6main}
Let $L$ and $H$ be 6-dimensional nilpotent Lie algebras
over a field ${\mathbb F}$ of characteristic not 2. If $\uea{L}\cong\uea{H}$, then one of the following must hold:
\begin{itemize}
\item[(i)] $L\cong H$.
\item[(ii)] $\charac\,{\mathbb F}=3$ and $L$ and $H$ are isomorphic to one
of the following pairs of Lie algebras in \cite[Section 5]{degraaf}:
$L_{6,6}$ and $L_{6,11}$;
$L_{6,7}$ and $L_{6,12}$; $L_{6,17}$ and $L_{6,18}$; $L_{6,23}$ and $L_{6,25}$.
\end{itemize}
\end{thm}
At the time when Theorem \ref{6main} was proved a complete list of nilpotent Lie algebras of dimension 6 over a field of characteristic 2 was not available. Recently, this list was obtained in \cite{CdS}.
Since, by Theorem \ref{5main}, $U(L_{5,3})\cong U(L_{5,5})$ over a field of characteristic 2, it is evident that setting $L=L_{5,3}\oplus {\mathbb F}$ and $H=L_{5,5}\oplus {\mathbb F}$ provides a pair of non-isomorphic nilpotent Lie algebras of dimension 6 such that $U(L)\cong U(H)$ over a field of characteristic 2.
\section{Positive characteristic and restricted Lie algebras}\label{rest}
The results of Section \ref{low-dim} show in particular that over a field of characteristic
2 or 3 there exist non-isomorphic nilpotent Lie algebras $L$ and $H$ such that $U(L)\cong U(H)$, thereby providing counterexamples at least in low dimensions. However, the following example provides counterexamples
over any field of positive characteristic $p$ and dimension $p+3$.
\begin{eg}[\cite{SU}]\label{ex1}
\emph{Let $A={\mathbb F} x_0+\cdots +{\mathbb F} x_p$ be an abelian Lie algebra over a field ${\mathbb F}$ of characteristic
$p$. Consider the Lie algebras $L=A+{\mathbb F} \lambda+{\mathbb F} \pi$ and $H=A+ {\mathbb F} \lambda + {\mathbb F} z$ with
relations given by $[\lambda,x_i]=x_{i-1}$, $[\pi,x_i]=x_{i-p}$, $[\lambda,\pi]=[z,H]=0$,
and $x_i=0$ for every $i<0$. Then we have:
\begin{enumerate}
\item $L$ and $H$ are both metabelian and nilpotent of
class $p+1$.
\item The centre of $L$ is spanned by $x_{0}$ while the centre of
$H$ is spanned by $z$ and $x_{0}$; so,
$L$ and $H$ are not isomorphic.
\item The Lie homomorphism $\Phi: L\to U(H)$ defined by $\Phi_{|A+{\mathbb F} \lambda}=\mbox{id}$,
$\Phi(\pi)=z+\lambda^p$ can be extended to a Hopf algebra isomorphism $U(L)\to U(H)$.
\end{enumerate}
}
\end{eg}
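The verification that $\Phi$ respects the bracket rests on the identity $\mathrm{ad}(a^p)=(\mathrm{ad}\,a)^p$, valid in every associative algebra over a field of characteristic $p$. Applying it in $U(H)$ with $a=\lambda$, and using that $z$ is central, we get
$$
[\Phi(\pi),x_i]=[z+\lambda^p,x_i]=(\mathrm{ad}\,\lambda)^p(x_i)=x_{i-p}=\Phi([\pi,x_i]),
$$
and similarly $[\Phi(\pi),\Phi(\lambda)]=[z+\lambda^p,\lambda]=0=\Phi([\pi,\lambda])$.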
So, the isomorphism problem for enveloping algebras of nilpotent Lie algebras has a negative solution over any field of positive characteristic. Another counterexample can be given in the class of free Lie algebras based on \cite[Theorem 28.10]{MZ}.
Recall that the universal enveloping algebra of the free
Lie algebra $L(X)$ on a set $X$ is the free associative algebra
$A(X)$ on $X$.
\begin{eg}[\cite{RU}]\label{ex2}
\emph{Let ${\mathbb F}$ be a field of odd characteristic $p$ and let $L(X)$ be
the free Lie algebra on $X=\{x,y,z\}$ over ${\mathbb F}$. Set
$h=x+[y,z]+(\mbox{ad } x)^p(z)\in L(X)$ and put $L=L(X)/\langle
h\rangle$, where $\langle h\rangle$ denotes the ideal generated by
$h$ in $L(X)$. Then we have
\begin{enumerate}
\item $L$ is not a free Lie algebra.
\item There exists a Hopf algebra isomorphism between $U(L)$ and the
2-generator free associative algebra.
\item The minimal number of generators required to generate $L$ is 3.
\end{enumerate}}
\end{eg}
When the underlying field has positive characteristic, it seems natural to consider the isomorphism problem for restricted Lie algebras, instead.
\subsection{Restricted isomorphism problem}
Let $L$ be a restricted Lie algebra with the restricted enveloping algebra $u(L)$ over a field ${\mathbb F}$ of positive characteristic $p$. Let ${\omega}(L)$ denote the augmentation ideal of $u(L)$ which is the kernel of the augmentation map $\epsilon_{_L}: u(L)\to {\mathbb F}$ induced by $x\mapsto 0$, for every $x\in L$.
Let $H$ be another restricted Lie algebra such that
$\varphi: u(L)\to u(H)$ is an algebra isomorphism.
We observe that the map $\eta: L\to u(H)$ defined by
$\eta=\varphi-\varepsilon _{_H}\varphi$ is a restricted Lie algebra homomorphism.
Hence, $\eta$ extends to an algebra homomorphism
$\overline{\eta}: u(L)\to u(H)$. In fact,
$\overline{\eta}$ is an isomorphism that preserves the augmentation ideals, that is
$\overline{\eta}({\omega}(L))={\omega}(H)$. So, without loss of generality, we assume that $\varphi:u(L)\to u(H)$ is an algebra isomorphism that preserves the augmentation ideals.
Note that the role of lower central series in Lie algebras is played by the dimension subalgebras in restricted Lie algebras. Recall from Theorem \ref{dimension} that the $n$-th dimension subalgebra of $L$ is
$$
D_n(L)=L\cap {\omega}^n(L)=\sum_{ip^j\geq n} \gamma_i(L)^{p^j}.
$$
Now, consider the graded restricted Lie algebra:
$$
{\rm gr}(L):=\bigoplus_{i\ge 1} D_i(L)/D_{i+1}(L),
$$
where the Lie bracket and the $p$-map are defined over homogeneous elements and then extended linearly:
\begin{align*}
[x_i+D_{i+1}(L), x_j+D_{j+1}(L)]&=[x_i, x_j]+D_{i+j+1}(L),\\
(x_i+D_{i+1}(L))^{[p]}&=x_i^p+D_{ip+1}(L)
\end{align*}
for all
$x_i\in D_i(L)$ and $x_j\in D_j(L)$.
In close analogy with Theorem \ref{graded-iso}, one can see that $u({\rm gr}(L))\cong {\rm gr}(u(L))$ as algebras.
So we may identify ${\rm gr}(L)$ as the graded restricted Lie subalgebra of ${\rm gr}(u(L))$
generated by ${\omega}^1(L)/{\omega}^2(L)$. Thus, ${\rm gr}(L)$ is determined.
Recall that $L$ is said to be in the class ${\mathcal F}_p$ if $L$ is finite-dimensional and $p$-nilpotent.
Whether or not $L\in {\mathcal F}_p$ is determined by the following lemma, see \cite{RSh}.
\begin{lem}\label{w(L)-nilpotent}
Let $L$ be a restricted Lie algebra. Then $L\in {\mathcal F}_p$ if and only if ${\omega}(L)$ is nilpotent.
\end{lem}
\begin{lem}[\cite{U-PAMS}]\label{nilp-class}
If $u(L)\cong u(H)$ then the following statements hold.
\begin{enumerate}
\item If $L\in {\mathcal F}_p$ then $\mid cl(L)-cl(H)\mid \leq 1$.
\item $D_i(L)/D_{i+1}(L)\cong D_i(H)/D_{i+1}(H)$, for every $i\geq 1$.
\end{enumerate}
\end{lem}
We remark that
the methods of \cite{RSh} and \cite{RU} can be adapted to prove that
the quotients $D_n(L)/D_{2n+1}(L)$ and $D_{n}(L)/D_{n+2}(L)$ are also determined, for every $n\geq 1$.
In particular, $L/D_3(L)$ is determined.
Unlike the isomorphism problem in which abelian Lie algebras are determined by their enveloping algebras, the abelian case for the restricted isomorphism problem is not trivial.
Note that if $L$ is an abelian restricted Lie algebra then the $p$-map reduces to
$$
(x+y)^p=x^p+y^p, \quad (\alpha x)^p=\alpha^px^p,
$$
for every $x,y\in L$ and $\alpha\in {\mathbb F}$. Thus the $p$-map is a semi-linear transformation.
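Already in dimension one the $p$-map is visible in $u(L)$. For instance, over ${\mathbb F}={\mathbb F}_p$ let $L={\mathbb F} x$ with $x^{[p]}=x$ and $H={\mathbb F} x$ with $x^{[p]}=0$. Then
$$
u(L)\cong {\mathbb F}[x]/(x^p-x)\cong {\mathbb F}_p\times\cdots\times{\mathbb F}_p
$$
is semisimple, whereas $u(H)\cong {\mathbb F}[x]/(x^p)$ is local; hence $u(L)\not\cong u(H)$, and the restricted enveloping algebra distinguishes these two abelian restricted Lie algebras.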
\begin{thm}[\cite{U-PAMS}]\label{prop-perfect}
Let $L\in {\mathcal F}_p$ be an abelian restricted Lie algebra over a perfect field ${\mathbb F}$. If $H$ is a restricted Lie
algebra such
that $u(L)\cong u(H)$, then $L\cong H$.
\end{thm}
\begin{cor}\label{L/L'_p}
Let $L\in {\mathcal F}_p$ be a restricted Lie algebra over a perfect field. Then $L/L'_p$ is determined.
\end{cor}
It turns out that over an algebraically closed field stronger results hold.
\begin{thm}[\cite{U-PAMS}]\label{prop-alg-closed}
Let $L$ be a finite-dimensional abelian restricted Lie algebra over an algebraically closed field ${\mathbb F}$. Let $H$
be a restricted Lie algebra such that $u(L)\cong u(H)$. Then $L\cong H$.
\end{thm}
Using the identity $[ab,c]=a[b,c]+[a,c]b$ which holds in any associative algebra, we can see that
$L'_pu(L)=[{\omega}(L),{\omega}(L)]u(L)$. Thus the ideal $L'_pu(L)$ is preserved by $\varphi$.
Now write $J_L={\omega}(L)L'+L'{\omega}(L)={\omega}(L)L'_p+L'_p{\omega}(L)$. Since both ${\omega}(L)L'_p$ and $L'_p{\omega}(L)$ are determined, it
follows that $J_L$ is determined.
\begin{thm}[\cite{U-PJM}]\label{dim-L-mod}
If $L\in {\mathcal F}_p$ and ${\mathbb F}$ is perfect then $L/(L'^p{+}\gamma_3(L))$ is determined.
\end{thm}
\begin{thm}[\cite{U-PAMS}]\label{L'-quo}
Suppose that $L$ and $H$ are finite-dimensional restricted Lie algebras such that $u(L)\cong u(H)$. Then,
for every positive integer $n$, we have
$$
D_n(L'_p)/D_{n+1}(L'_p)\cong D_n(H'_p)/D_{n+1}(H'_p).
$$
\end{thm}
\begin{lem}[\cite{U-PAMS}]\label{exp-H'}
Let $L\in {\mathcal F}_p$ such that $cl(L)=2$. Then, ${\rm dim}_{{\mathbb F}} {L'_p}^{p^t}$ is determined, for every $t\geq 0$.
\end{lem}
\begin{lem}[\cite{U-PAMS}]\label{dim-H'}
Let $L\in {\mathcal F}_p$ such that $L'_p$ is cyclic. The following statements hold.
\begin{enumerate}
\item $cl(L)\leq 3$.
\item We have $L'^{p^t}u(L)=(L'_pu(L))^{p^t}$, for every $t\geq 1$.
\end{enumerate}
\end{lem}
A restricted Lie algebra $L$ is called \emph{metacyclic} if $L$ has a cyclic restricted ideal $I$ such that
$L/I$ is cyclic. Recall that a $p$-polynomial in $x$ has the form $c_0x+c_1x^p+\cdots+c_t x^{p^t}$, where each $c_i\in {\mathbb F}$. So, if $L$ is metacyclic then there exist generators $x,y\in L$ and some
$p$-polynomials $g$ and $h$ such that
$$
h(x)\in \la y\ra_p,\quad [y,x]=g(y).
$$
Now let $L$ be a non-abelian metacyclic restricted Lie algebra in the class ${\mathcal F}_p$.
It turns out that there exist another $p$-polynomial $f$ and positive integers $m,n$ such that the following
relations hold in $L$:
\begin{align*}
& x^{p^m}=f(y)=y^{p^r}+\cdots,\\
& y^{p^n}=0,\\
& [y,x]=g(y)=b_sy^{p^s}+\cdots, b_s\neq 0.
\end{align*}
Since $L$ is not abelian, we have $1\leq r\leq n$ and $1\leq s\leq n-1$.
\begin{thm}[\cite{U-PAMS}]
Let $L\in {\mathcal F}_p$ be a metacyclic restricted Lie algebra over a perfect field of positive characteristic. Then $L$
is determined by $u(L)$.
\end{thm}
\section{Other observations}\label{hopf}
Because enveloping algebras are Hopf algebras, it also makes sense
to consider an enriched form of the isomorphism problem that takes
this Hopf structure into account.
Recall that a bialgebra is a vector space ${\mathcal H}$ over a
field ${\mathbb F}$ endowed with an algebra structure $({\mathcal H},
M, u)$ and a coalgebra structure
$({\mathcal H}, \Delta, \epsilon)$ such
that $\Delta$ and $\epsilon$ are algebra
homomorphisms. A bialgebra ${\mathcal H}$ having an antipode
$S$ is called a Hopf algebra. It is well-known that
the enveloping algebra of a (restricted) Lie algebra is a Hopf
algebra, see for example \cite{BMPZ} or \cite{MZ}. Indeed, the
counit $\epsilon$ is the augmentation map and the coproduct $\Delta$ is induced by
$x\mapsto x\otimes 1+1\otimes x$, for every $x\in L$. An explicit
description of $\Delta$ can be given in terms of PBW
monomials (see, for example, Lemma 5.1 in Section 2 of
\cite{SF}). The antipode $S$ is induced by $x\mapsto -x$,
for every $x\in L$. The following proposition is well-known
(see Theorems 2.10 and 2.11 in Chapter 3 of \cite{BMPZ}, for example).
\begin{prop}
Let $L$ be a Lie algebra over a field ${\mathbb F}$ of characteristic $p\ge0$.
\begin{enumerate}
\item If $p=0$ then the set of primitive elements of $U(L)$ is $L$. Since an isomorphism of
Hopf algebras preserves the primitive elements, the Hopf algebra structure of $U(L)$ determines $L$.
\item If $p>0$ and $L$ is a restricted Lie algebra then the set of primitive elements of $u(L)$ is $L$. Hence,
the Hopf algebra structure of $u(L)$ determines $L$.
\end{enumerate}
\end{prop}
If $p>0$ then the set of primitive elements of $U(L)$ is $L_p$,
the restricted Lie subalgebra of $U(L)$ generated by $L$. Thus,
any Hopf algebra isomorphism from $U(L)\to U(H)$
restricts to a restricted Lie algebra isomorphism $L_p\to H_p$.
We now present an example illustrating that the
analogous isomorphism problem for enveloping algebras of Lie
superalgebras fails utterly in the sense that ${\rm dim}_{{\mathbb F}} L$ may not be determined.
Let ${\mathbb F}$ be a field of characteristic not 2. In the case of characteristic 3, we add the
axiom $[x,x,x]=0$ in order for the universal enveloping algebra,
$U(L)$, of a Lie superalgebra $L$ to be well-defined.
\begin{eg}[\cite{RU}]
\emph{Let $L={\mathbb F} x_0$ be the free Lie superalgebra on one generator $x_0$
of even degree, and let $H={\mathbb F} x_1+{\mathbb F} y_0$ be the free Lie
superalgebra on one generator $x_1$ of odd degree, where $y_0=[x_1,x_1]$.
Then $U(L)$ is
isomorphic to the polynomial algebra ${\mathbb F}[x_0]$ in the indeterminate
$x_0$. On the other hand, $U(H)\cong {\mathbb F}[x_1,y_0]/I$, where
$I$ is the ideal of the polynomial algebra ${\mathbb F}[x_1,y_0]$ generated by
$y_0-2x_1^2$. Hence, $U(H)\cong {\mathbb F}[x_1]\cong {\mathbb F}[x_0]\cong U(L)$.
However, $L$ and $H$ are not isomorphic since they do not have the same
dimension.}
\end{eg}
\section{Open Problems}\label{quo}
Below we list a set of problems that are interesting to investigate:
\begin{enumerate}
\item An interesting open problem asks whether or not examples similar to Example \ref{ex1} can
occur in characteristic zero; that is, does there exist a non-free
Lie algebra $L$ over a field of characteristic zero such that
$U(L)$ is a free associative algebra?
\item Is the derived length of a solvable Lie algebra determined?
\item Let $L$ be a finite-dimensional Lie algebra over a field of characteristic zero. Is $Z(L)$ determined?
\item Let $L$ be a finite-dimensional metabelian Lie algebra over a field of characteristic zero. Is $L$ determined?
\item \textbf{Conjecture:} Let $L$ be a finite-dimensional nilpotent Lie algebra over a field of characteristic zero. Then $L$ is determined by $U(L)$.
\item Provide a counterexample to the restricted isomorphism problem.
\item Let $L$ be a finite-dimensional restricted Lie algebra over a field of positive characteristic. Is $Z(L)$ determined?
\item Let $L$ be a finite-dimensional $p$-nilpotent restricted Lie algebra over a field of positive characteristic. Is the nilpotence class of $L$ determined?
\item Let $L$ be a finite-dimensional $p$-nilpotent restricted Lie algebra over a perfect field of positive characteristic $p$. Is $L$ determined by $u(L)$?
\end{enumerate}
We show canonicity and normalization for dependent type theory
with a cumulative sequence of universes $U_0:U_1\dots$
{\em with} $\eta$-conversion. We give the argument in a constructive set theory
CZFu$_{<\omega}$, designed by P. Aczel \cite{Aczel}.
We provide a purely algebraic presentation of a canonicity proof, as a way
to build new (algebraic) models of type theory.
We then present a normalization
proof, which is technically more involved, but is based on the same idea.
We believe our argument to be a simplification of existing proofs \cite{ML72,ML73,Abel,Coq}, in the sense
that we never need to introduce a reduction relation, and the proof-theoretic strength
of our meta theory is as close as possible to the one of the object theory \cite{Aczel,Rathjen}.
Let us expand these two points. If we are only interested in {\em canonicity}, i.e. to prove
that a closed Boolean is convertible to $0$ or $1$, one argument for simple type theory
(as presented e.g. in \cite{Shoenfield}) consists in defining a ``reducibility''\footnote{The terminology
for this notion seems to vary: in \cite{Godel}, where is was first introduced, it is called
``berechenbarkeit'', which can be translated by ``computable'',
in \cite{Tait} it is called ``convertibility'', and in \cite{Shoenfield} it is
called ``reducibility''.} predicate by induction
on the type. For the type of Boolean, it means exactly to be convertible to $0$ or $1$,
and for function types, it means that it sends a reducible argument to a reducible value. It is
then possible to show by induction on the typing relation that any closed term is reducible.
In particular, if this term is a Boolean, we obtain canonicity. The problem of extending this
argument to a dependent type system with universes is in the definition of what should be
the reducibility predicate for universes. It is natural to try an inductive-recursive definition;
this was essentially the way it was done in \cite{ML72}, which is an early instance of
an inductive-recursive definition. We define when an element of the universe
is reducible, and, by induction on this proof, what is the associated reducibility predicate
for the type represented by this element.
However, there is a difficulty in this approach: it might
well be {\em a priori} that an element is both convertible for instance to the type
of Boolean or of a product type, and if this is the case, the previous inductive-recursive definition is ambiguous.
In \cite{ML72}, this problem is solved by considering first a
reduction relation, and then showing this reduction relation to be confluent, and defining convertibility
as having a common reduct. This does {\em not} work however
when conversion is defined as a {\em judgement} (as in \cite{ML73,Abel}).
This is an essential difficulty, and a relatively subtle and complex argument is involved in
\cite{Abel,Coq}
to solve it: one defines first an {\em untyped} reduction relation and a reducibility {\em relation}, which is
used to establish a confluence property.
The main point of this paper is that this essential difficulty can be solved, in a seemingly magical way,
by considering {\em proof-relevant} reducibility, that is where reducibility is defined as a {\em structure} and not only
as a {\em property}. Such an approach is hinted at in \cite{ML73}, but \cite{ML73} still
introduces a reduction relation, and also presents a version of type theory with a restricted
form of conversion (no conversion under abstraction, and no $\eta$-conversion;
this restriction is motivated in \cite{ML74}).
Even for the base type, reducibility is a structure:
the reducibility structure of
an element $t$ of Boolean type contains either $0$ (if $t$ and $0$ are convertible)
or $1$ (if $t$ and $1$ are convertible) and this might a priori contain both $0$ and $1$.
Another advantage of our approach, when defining reducibility in a proof-relevant way, is that
the required meta-language is weaker than the one used for a reducibility relation (where one has to do proofs by induction on
this reducibility relation).
Yet another aspect that was not satisfactory in previous attempts \cite{Abel,Coq} is that it involved
essentially a {\em partial equivalence relation model}. One expects that this would be needed
for a type theory with an extensional equality, but not for the present version of type theory.
This issue disappears here: we only consider {\em predicates}
(that are proof-relevant).
A more minor contribution of this paper is its {\em algebraic} character. For both
canonicity and decidability of conversion, one considers first a general model construction
and one obtains then the desired result by instantiating this general construction to the
special instance of the initial (term) model, using in both cases only the abstract characteristic
property of the initial model.
\section{Informal presentation}
We first give an informal presentation of the canonicity proof, by making explicit
the rules of type theory and then explaining the reducibility argument.
\subsection{Type system}
We use conversion as judgements \cite{Abel}. Note that it is not clear
a priori that subject reduction holds.
$$
\frac{\Gamma\vdash A:U_n}{\Gamma,x:A\vdash}~~~~~~\frac{}{()\vdash}~~~~~~~
\frac{\Gamma\vdash}{\Gamma\vdash x:A}~(x\!:\! A~in~\Gamma)$$
$$
\frac{\Gamma\vdash A:U_n~~~~~~\Gamma,x:A\vdash B:U_n}{\Gamma\vdash \Pi (x:A) B:U_n}~~~~~~~~~
\frac{\Gamma,x:A\vdash t:B}{\Gamma\vdash \lambda (x:A) t:\Pi (x:A) B}~~~~~~~~
\frac{\Gamma\vdash t:\Pi (x:A) B~~~~~~\Gamma\vdash u:A}
{\Gamma\vdash t~u:B(u)}
$$
$$
\frac{\Gamma\vdash A:U_n}{\Gamma\vdash A:U_m}~(n\leqslant m)~~~~~~
\frac{}{\Gamma\vdash U_n:U_m}~(n<m)~~~~~
\frac{}{\Gamma\vdash N_2:U_n}
$$
The conversion rules are
$$
\frac{\Gamma\vdash t:A~~~~~~\Gamma\vdash A~ \mathsf{conv}~ B:U_n}{\Gamma\vdash t:B}~~~~~~~~~
\frac{\Gamma\vdash t ~\mathsf{conv}~u:A~~~~~~\Gamma\vdash A ~\mathsf{conv}~ B:U_n}{\Gamma\vdash t ~\mathsf{conv}~u:B}
$$
$$
\frac{\Gamma\vdash t:A}{\Gamma\vdash t ~\mathsf{conv}~t:A}~~~~~~~~~
\frac{\Gamma\vdash t ~\mathsf{conv}~v:A~~~~~~~~~\Gamma\vdash u ~\mathsf{conv}~v:A}{\Gamma\vdash t ~\mathsf{conv}~u:A}
$$
$$
\frac{\Gamma\vdash A ~\mathsf{conv}~B:U_n}{\Gamma\vdash A ~\mathsf{conv}~B:U_m}~(n\leqslant m)~~~~~~
\frac{\Gamma\vdash A_0 ~\mathsf{conv}~ A_1:U_n~~~~~~~~\Gamma,x:A_0\vdash B_0 ~\mathsf{conv}~ B_1:U_n}
{\Gamma\vdash \Pi (x:A_0) B_0 ~\mathsf{conv}~ \Pi (x:A_1) B_1:U_n}
$$
$$
\frac{\Gamma\vdash t ~\mathsf{conv}~t':\Pi (x:A) B~~~~~~\Gamma\vdash u:A}
{\Gamma\vdash t~u ~\mathsf{conv}~t'~u:B(u)}~~~~~~~~~~~
\frac{\Gamma\vdash t:\Pi (x:A) B~~~~~~\Gamma\vdash u ~\mathsf{conv}~ u':A}
{\Gamma\vdash t~u ~\mathsf{conv}~ t~u':B(u)}
$$
$$
\frac{\Gamma,x:A\vdash t:B~~~~~~~~\Gamma\vdash u:A}{\Gamma\vdash (\lambda (x:A) t)~u ~\mathsf{conv}~ t(u):B(u)}
$$
We consider type theory with $\eta$-rules
$$
\frac{\Gamma\vdash t:\Pi (x:A) B~~~~\Gamma\vdash u:\Pi (x:A) B~~~~\Gamma,x:A\vdash t~x ~\mathsf{conv}~ u~x:B}
{\Gamma\vdash t ~\mathsf{conv}~ u:\Pi (x:A) B}$$
Finally we add $N_2:U_1$ with the rules
$$
\frac{}{\Gamma\vdash 0:N_2}~~~~~~~~~~\frac{}{\Gamma\vdash 1:N_2}
~~~~~~~~~
\frac{\Gamma,x:N_2\vdash C:U_n~~~~~\Gamma\vdash a_0:C(0)~~~~~~~\Gamma\vdash a_1:C(1)}
{\Gamma\vdash \hbox{\sf{brec}}~(\lambda x.C)~a_0~a_1:\Pi(x:N_2)C}
$$
with computation rules
${\hbox{\sf{brec}}~(\lambda x.C)~a_0~a_1~0 ~\mathsf{conv}~ a_0:C(0)}$
and
${\hbox{\sf{brec}}~(\lambda x.C)~a_0~a_1~1 ~\mathsf{conv}~ a_1:C(1)}$.
\subsection{Reducibility proof}
The informal reducibility proof consists in associating to each closed expression
$a$ of type theory (treating equally types and terms) an abstract object $a'$
which represents a ``proof'' that $a$ is reducible. If $A$ is a (closed) type, then
$A'$ is a family of sets over the set $\mathsf{Term}(A)$ of closed expressions of type $A$
{\em modulo conversion}. If $a$ is
of type $A$ then $a'$ is an element of the set $A'(a)$.
The metatheory is a (constructive) set theory with a cumulative hierarchy of universes
${\cal U}_n$ \cite{Aczel}.
This is defined by structural induction on the expression as follows
\begin{itemize}
\item $(c~a)'$ is $c'~a~a'$
\item $(\lambda (x:A) b)'$ is the function which takes as arguments
a closed expression $a$ of type $A$ and an element $a'$ in $A'(a)$
and produces $b'(a,a')$
\item $(\Pi (x:A)B)'(w)$ for $w$ closed expression of type $\Pi (x:A)B$
is the set $\Pi (a:\mathsf{Term}(A))(a':A'(a))B'(a,a')(w~a)$
\item $N_2'(t)$ is the set $\{0~|~t~\mathsf{conv}~0\}\cup\{1~|~t~\mathsf{conv}~1\}$
\item $U_n'(A)$ is the set $\mathsf{Term}(A)\rightarrow {\cal U}_n$
\end{itemize}
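As a small worked example, consider the closed term $t=(\lambda (x:N_2)\, x)~0$. By the clause for abstractions, $(\lambda (x:N_2)\, x)'$ is the function sending a closed term $a$ of type $N_2$ and $a'\in N_2'(a)$ to $a'$, and by the clause for $N_2$ we have $0\in N_2'(0)$. Hence, by the clause for applications, $t'=0$; since $\mathsf{Term}(N_2)$ is taken modulo conversion and $t~\mathsf{conv}~0$, this is an element of $N_2'(t)=N_2'(0)$, witnessing canonicity for $t$.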
It can then be shown\footnote{We prove this statement by induction on the derivation and consider
a more general statement involving a context; we don't provide the details
in this informal part since this will be covered in the next section.}
that if $a:A$ then $a'$ is an element of $A'(a)$
and furthermore that if $a~\mathsf{conv}~b:A$ then $a' = b'$ in $A'(a) = A'(b)$.
In particular, if $a:N_2$ then $a'$ is $0$ or $1$ and we get that $a$
is convertible to $0$ or to $1$.
\medskip
One feature of this argument is that the required
meta theory, here constructive set theory, is known to be of similar strength to the corresponding
type theory; for a term involving $n$ universes, the meta theory will need $n+1$ universes \cite{Rathjen}.
This is to be contrasted with the arguments in \cite{ML72,Abel,Coq} involving induction recursion
which is a much stronger principle.
\medskip
We believe that the mathematically purest way to formulate this argument
is an algebraic one, giving a (generalized) algebraic presentation of type theory.
We then use only the fact that the {\em term} model is the {\em initial} model
of type theory. This is what is done in the next section.
\section{Model and syntax of dependent type theory with universes}
\subsection{Cumulative categories with families}
We present a slight variation (for universes) of the notion of ``category with families''
\cite{Dybjer}\footnote{As emphasized in this reference, these models should more exactly be
thought of as {\em generalized algebraic structures} rather than {\em categories}; e.g. the initial model
is defined up to isomorphism and not up to equivalence. This provides a generalized algebraic notion
of model of type theory.}.
A model is given first by a class of {\em contexts}. If $\Gamma,\Delta$ are two given contexts
we have a set $\Delta\rightarrow\Gamma$ of {\em substitutions} from $\Delta$ to $\Gamma$.
These collections of sets are equipped with operations that
satisfy the laws of composition in a category: we have a substitution ${\sf id}$
in $\Gamma\rightarrow\Gamma$ and
a composition operator $\sigma\delta$ in $\Theta\rightarrow\Gamma$ if
$\delta$ is in $\Theta\rightarrow\Delta$ and $\sigma$ in $\Delta\rightarrow\Gamma$. Furthermore
we should have $\sigma {\sf id} = {\sf id} \sigma = \sigma$ and
$(\sigma\delta)\theta = \sigma(\delta\theta)$ if $\theta:\Theta_1\rightarrow\Theta$.
We assume that we have a ``terminal'' context $()$: for any other context, there is a
unique substitution, also written $()$, in $\Gamma\rightarrow ()$. In particular
we have $()\sigma = ()$ in $\Delta\rightarrow ()$ if $\sigma$ is in
$\Delta\rightarrow \Gamma$.
We write $|\Gamma|$ for the set of substitutions $()\rightarrow\Gamma$.
\medskip
If $\Gamma$ is a context we have a cumulative sequence of sets $\hbox{\sf Type}_n(\Gamma)$
of {\em types over} $\Gamma$ at level $n$ (where $n$ is a natural number).
If $A$ in $\hbox{\sf Type}_n(\Gamma)$ and $\sigma$ in $\Delta\rightarrow\Gamma$
we should have $A\sigma$ in $\hbox{\sf Type}_n(\Delta)$.
Furthermore $A{\sf id} = A$ and $(A\sigma)\delta = A(\sigma\delta)$.
If $A$ in $\hbox{\sf Type}_n(\Gamma)$ we also have a collection $\mathsf{Elem}(\Gamma,A)$
of {\em elements of type} $A$.
If $a$ in $\mathsf{Elem}(\Gamma,A)$
and $\sigma$ in $\Delta\rightarrow\Gamma$
we have $a\sigma$ in $\mathsf{Elem}(\Delta,A\sigma)$. Furthermore
$a{\sf id} = a$ and $(a\sigma)\delta = a(\sigma\delta)$.
If $A$ is in $\hbox{\sf Type}_n()$ we write $|A|$ for the set $\mathsf{Elem}((),A)$.
We have a {\em context extension operation}: if $A$ is in $\hbox{\sf Type}_n(\Gamma)$ then we can
form a new context $\Gamma.A$. Furthermore there is a projection
$\mathsf{p}$ in $\Gamma.A\rightarrow \Gamma$ and a special element
$\mathsf{q}$ in $\mathsf{Elem}(\Gamma.A,A\mathsf{p})$. If $\sigma$ is in $\Delta\rightarrow \Gamma$ and
$A$ in $\hbox{\sf Type}_n(\Gamma)$ and $a$ in $\mathsf{Elem}(\Delta,A\sigma)$ we have
an extension operation $(\sigma,a)$ in $\Delta\rightarrow \Gamma.A$.
We should have $\mathsf{p} (\sigma,a) = \sigma$ and $\mathsf{q} (\sigma,a) = a$ and
$(\sigma,a)\delta = (\sigma\delta,a\delta)$ and $(\mathsf{p},\mathsf{q}) = {\sf id}$.
If $a$ is in $\mathsf{Elem}(\Gamma,A)$ we write ${\sf subst}{a}= ({\sf id},a)$ in $\Gamma\rightarrow \Gamma.A$.
Thus if $B$ is in $\hbox{\sf Type}_n(\Gamma.A)$ and $a$ in $\mathsf{Elem}(\Gamma,A)$
we have $B{\sf subst}{a}$ in $\hbox{\sf Type}_n(\Gamma)$.
If furthermore $b$ is in $\mathsf{Elem}(\Gamma.A,B)$ we have $b{\sf subst}{a}$ in $\mathsf{Elem}(\Gamma,B{\sf subst}{a})$.
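As a concrete illustration (ours, not part of the paper), the standard set-theoretic reading of these operations fits in a few lines of Python, and the equations $\mathsf{p} (\sigma,a) = \sigma$, $\mathsf{q} (\sigma,a) = a$ and $(\mathsf{p},\mathsf{q}) = {\sf id}$ can then be checked pointwise on a small example:

```python
# Set model sketch: a context is the list of its environments (tuples);
# a type over Gamma assigns to each environment a list of values;
# a substitution Delta -> Gamma is a function on environments.

def extend(Gamma, A):
    """Context extension Gamma.A"""
    return [rho + (u,) for rho in Gamma for u in A(rho)]

def p(rho):                 # first projection Gamma.A -> Gamma
    return rho[:-1]

def q(rho):                 # the special element q in Elem(Gamma.A, A p)
    return rho[-1]

def pair(sigma, a):
    """Extension operation (sigma, a) : Delta -> Gamma.A, for
    sigma : Delta -> Gamma and a an element of A sigma over Delta."""
    return lambda nu: sigma(nu) + (a(nu),)

# Example: Gamma the empty context, A the booleans over it.
Gamma = [()]
A = lambda rho: [0, 1]
GammaA = extend(Gamma, A)           # [(0,), (1,)]

# (p, q) = id on Gamma.A:
assert all(pair(p, q)(rho) == rho for rho in GammaA)

# p (sigma, a) = sigma and q (sigma, a) = a, pointwise, for a sample
# substitution sigma : Gamma.A -> Gamma and element a of A sigma:
sigma = p
a = lambda nu: 1 - q(nu)            # some element of A over Gamma.A
for nu in GammaA:
    assert p(pair(sigma, a)(nu)) == sigma(nu)
    assert q(pair(sigma, a)(nu)) == a(nu)
print("CwF equations hold pointwise on the sample")
```

This is of course just one instance of the generalized algebraic structure; the laws hold by construction in this model.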
\medskip
A {\em global} type of level $n$ is given by an element $C$ in $\hbox{\sf Type}_n()$.
We write simply $C$ instead of $C()$ in $\hbox{\sf Type}_n(\Gamma)$ for $()$ in $\Gamma\rightarrow ()$.
Given such a global element $C$, a global element of type $C$ is given by
an element $c$ in $\mathsf{Elem}((),C)$. We then write similarly
simply $c$ instead of $c()$ in $\mathsf{Elem}(\Gamma,C)$.
Models are sometimes presented by giving a class of special maps (fibrations), where a type
is modelled by a fibration and its elements by sections of this fibration. In our case, the fibrations
are the maps $\mathsf{p}$ in $\Gamma.A\rightarrow \Gamma$, and the sections of these fibrations
correspond exactly to elements in $\mathsf{Elem}(\Gamma,A)$.
Any element $a$ in $\mathsf{Elem}(\Gamma,A)$ defines a section ${\sf subst}{a} = ({\sf id},a):\Gamma\rightarrow\Gamma.A$
and any such section is of this form.
\subsection{Dependent product types}
A category with families has {\em product types}
if we furthermore have an operation $\Pi~A~B$ in
$\hbox{\sf Type}_n(\Gamma)$ for $A$ in $\hbox{\sf Type}_n(\Gamma)$ and $B$ in $\hbox{\sf Type}_n(\Gamma.A)$.
We should have $(\Pi~A~B)\sigma = \Pi~(A\sigma)~(B\sigma^+)$
where $\sigma^+ = (\sigma\mathsf{p},\mathsf{q})$.
We have an abstraction operation $\lambda b$ in $\mathsf{Elem}(\Gamma,\Pi~A~B)$ given
$b$ in $\mathsf{Elem}(\Gamma.A,B)$.
We have an application operation such that $\mathsf{app}(c,a)$ is in $\mathsf{Elem}(\Gamma,B{\sf subst}{a})$ if
$a$ is in $\mathsf{Elem}(\Gamma,A)$ and $c$ is in $\mathsf{Elem}(\Gamma,\Pi~A~B)$.
These operations should satisfy the equations
$$
\mathsf{app}{\lambda b}{a} = b{\sf subst}{a}~~~~~~c = \lambda (\mathsf{app}~(c\mathsf{p},\mathsf{q}))~~~~~
(\lambda b)\sigma = \lambda (b\sigma^+)~~~~
\mathsf{app}{c}{a}\sigma = \mathsf{app}{c\sigma}{a\sigma}
$$
where we write $\sigma^+ = (\sigma\mathsf{p},\mathsf{q})$.
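In the set-theoretic reading used above for contexts, the product type, abstraction and application can likewise be sketched in a few lines of Python (again our illustration only: environments are tuples, a type maps an environment to a list of values, a semantic dependent function is represented by its graph). The $\beta$-equation $\mathsf{app}(\lambda b, a) = b{\sf subst}{a}$ is then checked pointwise:

```python
# Set model sketch of dependent products. An environment is a tuple;
# a type over Gamma maps an environment rho to a list of values; an
# element maps rho to a value. A dependent function at rho is the dict
# {u: value of B at rho + (u,)}.

def lam(b, A):
    """Abstraction: b is an element of B over Gamma.A."""
    return lambda rho: {u: b(rho + (u,)) for u in A(rho)}

def app(c, a):
    """Application of c in Elem(Gamma, Pi A B) to a in Elem(Gamma, A)."""
    return lambda rho: c(rho)[a(rho)]

# Example over the empty context, with A = booleans:
A = lambda rho: [0, 1]
b = lambda rho: 1 - rho[-1]     # negation, an element of N2 over Gamma.A
a = lambda rho: 0               # the constant 0, an element of A

# beta: app (lam b) a = b <a>, checked at the empty environment:
lhs = app(lam(b, A), a)(())
rhs = b(() + (a(()),))          # substitute a for the last variable
assert lhs == rhs == 1
print("beta-equation holds:", lhs)
```

The $\eta$-equation $c = \lambda (\mathsf{app}~(c\mathsf{p},\mathsf{q}))$ holds in this model as well, since a graph determines its function extensionally.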
\subsection{Cumulative universes}
We assume that we have global elements $U_n$ in $\hbox{\sf Type}_{n+1}(\Gamma)$
such that $\hbox{\sf Type}_n(\Gamma) = \mathsf{Elem}(\Gamma,U_n)$.
\subsection{Booleans}
Finally we add the global constant $N_2$ in $\hbox{\sf Type}_0(\Gamma)$
and global elements
$0$ and $1$ in $\mathsf{Elem}(\Gamma,N_2)$.
Given $T$ in $\hbox{\sf Type}_n(\Gamma.N_2)$ and $a_0$ in $\mathsf{Elem}(\Gamma,T{\sf subst}{0})$
and $a_1$ in $\mathsf{Elem}(\Gamma,T{\sf subst}{1})$ we have an operation
$\hbox{\sf{brec}}(T,a_0,a_1)$ producing an element in $\mathsf{Elem}(\Gamma,\Pi~N_2~T)$
satisfying the equations
$\mathsf{app}{\hbox{\sf{brec}}(T,a_0,a_1)}{0} = a_0$ and $\mathsf{app}{\hbox{\sf{brec}}(T,a_0,a_1)}{1} = a_1$.
Furthermore, $\hbox{\sf{brec}}(T,a_0,a_1)\sigma = \hbox{\sf{brec}}(T\sigma^+,a_0\sigma,a_1\sigma)$.
\section{Reducibility model}
Given a model of type theory $\mathsf{M}$ as defined above, we describe how to build a new
associated ``reducibility'' model $\mathsf{M}^*$.
When applied to the initial/term model $\mathsf{M}_0$, this gives a proof of
canonicity which can be seen as a direct generalization of the argument presented
in \cite{Shoenfield} for G\"odel system T. As explained in the introduction, the main
novelty here is that we consider a proof-relevant notion of reducibility.
A context of $\mathsf{M}^*$ is given by a context $\Gamma$ of the model $\mathsf{M}$ together with
a family of sets $\Gamma'(\rho)$ for $\rho$ in $|\Gamma|$.
A substitution in $\Delta,\Delta'\rightarrow^* \Gamma,\Gamma'$ is given by
a pair $\sigma,\sigma'$ with $\sigma$ in $\Delta\rightarrow\Gamma$ and
$\sigma'$ in $\Pi (\nu\in |\Delta|)\Delta'(\nu)\rightarrow \Gamma'(\sigma\nu)$.
The identity substitution is the pair $1^* = 1,1'$ with $1'\rho\rho' = \rho'$.
Composition is defined by $(\sigma,\sigma')(\delta,\delta') = \sigma\delta,(\sigma\delta)'$
with
$$
(\sigma\delta)'\alpha\alpha' = \sigma'(\delta\alpha)(\delta'\alpha\alpha')
$$
\medskip
The set $\hbox{\sf Type}^*_n(\Gamma,\Gamma')$ is defined to be the set of pairs
$A,A'$ where $A$ is in $\hbox{\sf Type}_n(\Gamma)$ and
$A'\rho\rho'$ is in $|A\rho|\rightarrow{\cal U}_n$ for $\rho$ in $|\Gamma|$
and $\rho'$ in $\Gamma'(\rho)$. We define then
$A'(\sigma,\sigma')\nu\nu' = A'(\sigma\nu)(\sigma'\nu\nu')$.
We define $\mathsf{Elem}^*(\Gamma,\Gamma')(A,A')$ to be the set of pairs
$a,a'$ where $a$ is in $\mathsf{Elem}(\Gamma,A)$ and
$a'\rho\rho'$ is in $A'\rho\rho'(a\rho)$ for each $\rho$ in $|\Gamma|$
and $\rho'$ in $\Gamma'(\rho)$.
We define then $(a,a')(\sigma,\sigma') = a\sigma,a'(\sigma,\sigma')$
with $a'(\sigma,\sigma')\nu\nu' = a'(\sigma\nu)(\sigma'\nu\nu')$.
\medskip
The extension operation is defined by $(\Gamma,\Gamma').(A,A') = \Gamma.A,(\Gamma.A)'$
where $(\Gamma.A)'(\rho,u)$ is the set of pairs $\rho',u'$
with $\rho'\in \Gamma'(\rho)$ and $u'$ in $A' \rho \rho'(u)$.
We define an element $\mathsf{p}^* = \mathsf{p},\mathsf{p}'$ in $(\Gamma,\Gamma').(A,A')\rightarrow^* \Gamma,\Gamma'$
by taking $\mathsf{p}'(\rho,u)(\rho',u') = \rho'$.
We have then an element $\mathsf{q},\mathsf{q}'$ in $\mathsf{Elem}^*((\Gamma,\Gamma').(A,A'),(A,A')\mathsf{p}^*)$
defined by $\mathsf{q}'(\rho,u)(\rho',u') = u'$.
\subsection{Dependent product}
We define a new operation $\Pi^*~(A,A')~(B,B') = \Pi~A~B,(\Pi~A~B)'$ where
$(\Pi~A~B)'\rho\rho'(w)$ is the set
$$
\Pi (u\in |A\rho|)\Pi (u'\in A'\rho\rho'(u))B'(\rho,u)(\rho',u')(\mathsf{app}{w}{u})
$$
If $b,b'$ is in $\mathsf{Elem}^*((\Gamma,\Gamma').(A,A'),(B,B'))$ then
$\lambda^* (b,b') = \lambda b, (\lambda b)'$ where $(\lambda b)'$ is defined by the equation
$$
(\lambda b)' \rho\rho' u u' = b'(\rho,u)(\rho',u')
$$
which is in
$$
B'(\rho,u)(\rho',u')(\mathsf{app}{(\lambda b)\rho}{u}) = B'(\rho,u)(\rho',u')(b(\rho,u))
$$
We have an application operation $\mathsf{app}^*((c,c'),(a,a')) = (\mathsf{app}(c,a),\mathsf{app}(c,a)')$
where $\mathsf{app}(c,a)'\rho\rho' = c'\rho\rho'(a\rho)(a'\rho\rho').$
\subsection{Universes}
We define $U_n'(A)$ for $A$ in $|U_n|$ to be the set of functions
$|A|\rightarrow {\cal U}_n$. Thus an element $A'$ of $U_n'(A)$ is a family
of sets $A'(u)$ in ${\cal U}_n$ for $u$ in $|A|$. The universe $U_n^*$ of $\mathsf{M}^*$
is defined to be the pair $U_n,U_n'$ and we have
$\mathsf{Elem}^*((\Gamma,\Gamma'),U_n^*) = \hbox{\sf Type}_n^*(\Gamma,\Gamma')$.
\subsection{Booleans}
We define $N_2'(u)$ for $u$ in $|N_2|$ to be the set consisting of
$0$ if $u = 0$ and of $1$ if $u = 1$. We have $N_2'$ in $U_0'(N_2)$.
Note that $N_2'(u)$ may not be a subsingleton if we have $0 = 1$ in the model.
We define $\hbox{\sf{brec}}(T,a_0,a_1)'\rho\rho'u u'$ to be $a_0'\rho\rho'$ if $u' = 0$
and to be $a_1'\rho\rho'$ if $u' = 1$.
\subsection{Main result}
\begin{theorem}
The new collection of contexts, with the operations $\rightarrow^*,~\hbox{\sf Type}_n^*,\mathsf{Elem}^*$
and $U_n^*$ and $N_2^*$ define a new model of type theory.
\end{theorem}
The proof consists in checking that the required equalities hold for the operations
we have defined. For instance, we have
$$
\mathsf{app}^*(\lambda^*(b,b'),(a,a')) = (\mathsf{app}(\lambda b,a),\mathsf{app}(\lambda b,a)') = (b(1,a),\mathsf{app}(\lambda b,a)')
$$
and
$$
\mathsf{app}(\lambda b,a)'\rho\rho' = (\lambda b)'\rho\rho'(a\rho)(a'\rho\rho') = b'(\rho,a\rho)(\rho',a'\rho\rho')
$$
and
$$
(b(1,a))'\rho\rho' = b'(\rho,a\rho)(1'\rho\rho',a'\rho\rho') = b'(\rho,a\rho)(\rho',a'\rho\rho')
$$
When checking the equalities, we {\em only use $\beta,\eta$-conversions at the metalevel}.
\medskip
There are of course strong similarities with the parametricity model presented in \cite{BJ}.
This model can also be seen as a constructive version of the {\em glueing} technique \cite{LS,Shulman}.
Indeed, to give a family of sets over $|\Gamma|$ is essentially the same as to give a set $X$ and a map
$X\rightarrow |\Gamma|$, which is what happens in the glueing technique \cite{LS,Shulman}.
\section{The term model}
There is a canonical notion of morphism between two models.
For instance, the first projection $\mathsf{M}^*\rightarrow \mathsf{M}$ defines a map of models of type theory.
As for models of generalized algebraic theories
\cite{Dybjer}, there is an {\em initial} model unique up to isomorphism.
We define the {\em term} model $\mathsf{M}_0$ of type theory to be this initial model.
As for equational theories, this model can be presented by first-order terms
(corresponding to the operations) modulo the equations/conversions that have to
hold in any model.
\begin{theorem}
In the initial model given $u$ in $|N_2|$ we have $u = 0$ or $u = 1$. Furthermore
we don't have $0 = 1$ in the initial model.
\end{theorem}
\begin{proof}
We have a unique
map of models $\mathsf{M}_0\rightarrow \mathsf{M}_0^*$. The composition of the first projection
with this map has to be the identity function on $\mathsf{M}_0$.
If $u$ is in $|N_2|$ the image of $u$ under the initial map hence has to be a pair of the
form $u,u'$ with $u'$ in $N_2'(u)$. It follows that we have $u = 0$ if $u' = 0$
and $u = 1$ if $u' = 1$. Since $0' = 0$ and $1' = 1$ we cannot have $0 = 1$ in
the initial model $\mathsf{M}_0$.
\end{proof}
\section{Presheaf model}
We suppose given an arbitrary model $\mathsf{M}$. We define from this the following
category ${\cal C}$ of ``telescopes''. An object of ${\cal C}$ is a list
$A_1,\dots,A_n$ with $A_1$ in $\hbox{\sf Type}()$, $A_2$ in $\hbox{\sf Type}(A_1)$,
$A_3$ in $\hbox{\sf Type}(A_1.A_2)$ $\dots$ To any such object $X$ we can associate a context
$i(X) = A_1.\dots.A_n$ of the model $\mathsf{M}$. If $A$ is in $\hbox{\sf Type}(i(X))$, we define
the set $\mathsf{Var}(X,A)$ of numbers $v_k$ such that $\mathsf{q}\mathsf{p}^{n-k}$ is in $\mathsf{Elem}(i(X),A)$.
We may write simply $\mathsf{Elem}(X,A)$ instead of $\mathsf{Elem}(i(X),A)$. Similarly
we may write $\hbox{\sf Type}_n(X) = \mathsf{Elem}(X,U_n)$ for $\hbox{\sf Type}_n(i(X))$.
If $v_k$ is in
$\mathsf{Var}(X,A)$ we write $[v_k] = \mathsf{q}\mathsf{p}^{n-k}$. If $Y = B_1,\dots,B_m$ is an
object of ${\cal C}$, a map $\sigma:Y\rightarrow X$ is given by a list
$u_1,\dots,u_n$ such that $u_p$ is in $\mathsf{Var}(Y,A_p([u_1],\dots,[u_{p-1}]))$.
We then define $[\sigma] = ([u_1],\dots,[u_n]):i(Y)\rightarrow i(X)$.
It is direct to define a composition operation such that $[\sigma\delta] = [\sigma][\delta]$
which gives a category structure on these objects.
We use freely that we can interpret the language of dependent types (with universes)
in any presheaf category \cite{Hofmann1}.
A presheaf $F$ is given by a family of sets $F(X)$
indexed by contexts with restriction maps $F(X)\rightarrow F(Y),~u\mapsto u\sigma$
if $\sigma:Y\rightarrow X$, satisfying the equations
$u1 = u$ and $(u\sigma)\delta = u(\sigma\delta)$ if $\delta:Z\rightarrow Y$.
A dependent presheaf $G$ over $F$ is a presheaf over
the category of elements of $F$, so it is given by a family of sets $G(X,\rho)$
for $\rho$ in $F(X)$ with restriction maps.
We write ${\cal V}_0,{\cal V}_1,\dots$ for the cumulative sequence
of presheaf universes, so that ${\cal V}_n(X)$ is the set of ${\cal U}_n$-valued
dependent presheaves on the presheaf represented by $X$.
$\hbox{\sf Type}_n$ defines a presheaf over this category, with $\hbox{\sf Type}_n$ a subpresheaf
of $\hbox{\sf Type}_{n+1}$. We can see $\mathsf{Elem}$ as a dependent presheaf over $\hbox{\sf Type}_n$
since it determines a collection of sets $\mathsf{Elem}(X,A)$ for $A$ in $\hbox{\sf Type}_n(X)$
with restriction maps.
If $A$ is in $\hbox{\sf Type}_n(X)$ we let $\mathsf{Norm}(X,A)$ (resp. $\mathsf{Neut}(X,A)$)
be the set of all expressions of type $A$ that are in normal
form (resp. neutral). As for $\mathsf{Elem}$, we can see $\mathsf{Neut}$ and $\mathsf{Norm}$
as dependent types over $\hbox{\sf Type}_n$, and we have
$$\mathsf{Var}(A)\subseteq \mathsf{Neut}(A)\subseteq\mathsf{Norm}(A)$$
We have an evaluation function $[e]:\mathsf{Elem}(A)$ if $e:\mathsf{Norm}(A)$.
If $a$ is in $\mathsf{Elem}(A)$ then we let $\mathsf{Norm}(A)|a$ (resp. $\mathsf{Neut}(A)|a$) be
the subtypes of $\mathsf{Norm}(A)$ (resp. $\mathsf{Neut}(A)$) of elements $e$ such that $[e] = a$.
\medskip
Each context $\Gamma$ defines a presheaf $|\Gamma|$ by letting $|\Gamma|(X)$ be
the set of all substitutions $i(X)\rightarrow\Gamma$.
Any element $A$ of $\hbox{\sf Type}_n(\Gamma)$ defines internally a function
$|\Gamma|\rightarrow\hbox{\sf Type}_n,~\rho\mapsto A\rho$.
We have a canonical isomorphism between $\mathsf{Var}(A)\rightarrow\hbox{\sf Type}_n$ and $\mathsf{Elem}(A\rightarrow U_n)$.
We can then use this isomorphism to build an operation
$$
\pi : \Pi (A:\hbox{\sf Type}_n)(\mathsf{Var}(A)\rightarrow\hbox{\sf Type}_n)\rightarrow \hbox{\sf Type}_n
$$
such that $(\Pi~A~B)\rho = \pi (A\rho) ((\lambda x:\mathsf{Var}(A\rho))B(\rho,[x]))$.
We can also define, given $A:\hbox{\sf Type}_n$ and $F:\mathsf{Var}(A)\rightarrow\hbox{\sf Type}_n$
an operation $\Lambda A f:\mathsf{Elem}(\pi A F)$, for $f:\Pi (x:\mathsf{Var}(A))\mathsf{Elem}(F~x)$.
Similarly, we can define an operation
$$
\pi : \Pi (A:\mathsf{Norm}(U_n))(\mathsf{Var}([A])\rightarrow\mathsf{Norm}(U_n))\rightarrow \mathsf{Norm}(U_n)
$$
such that $[\pi A F] = \pi [A] (\lambda (x:\mathsf{Var}([A]))[F~x])$ and
given $A:\mathsf{Norm}(U_n)$ and $F:\mathsf{Var}([A])\rightarrow\hbox{\sf Type}_n$
and $f:\Pi (x:\mathsf{Var}([A]))\mathsf{Elem}(F~x)$
an operation $\Lambda A f:\mathsf{Norm}(\pi [A] F)$ such that
$[\Lambda A f] = \Lambda [A] (\lambda (x:\mathsf{Var}([A]))[f~x])$.
While equality might not be decidable in $\mathsf{Var}(A)$ (because we use arbitrary renaming
as maps in the base category), the product operation is injective: if
$\pi A F = \pi B G$ in $\mathsf{Norm}(U_n)$ then $A = B$ in $\mathsf{Norm}(U_n)$ and
$F = G$ in $\mathsf{Var}([A])\rightarrow\mathsf{Norm}(U_n)$.
\section{Normalization model}
The model is similar to the reducibility model and we only explain the main operations.
As before, a context is a pair $\Gamma,\Gamma'$ where $\Gamma$ is a context of $\mathsf{M}$
and $\Gamma'$ is a dependent family over $|\Gamma|$.
\medskip
A type at level $n$ over this context
consists now of a pair $A,\lift{A}$ where
$A$ is in $\hbox{\sf Type}_n(\Gamma)$ and
$\lift{A}\rho\rho'$ in $U_n'(A\rho)$
for $\rho$ in $|\Gamma|$ and $\rho'$ in $\Gamma'(\rho)$.
An element of $U_n'(T)$ for $T$ in $\hbox{\sf Type}_n$ consists of
a 4-tuple $T',T_0,\alpha,\beta$
where the element $T_0$ is in $\mathsf{Norm}(U_n)|T$,
the element $T'$ is in $\mathsf{Elem}(T)\rightarrow {\cal V}_n$,
the element
$\beta$ is in $\Pi (k : \mathsf{Neut}(T))T'(\Val{k})$
and $\alpha$ is in $\Pi (u : \mathsf{Elem}(T))~T'(u)\rightarrow \mathsf{Norm}(T)|u$.
\medskip
An element of such a type $A,\lift{A}$ is a pair $a,\lift{a}$ where $a$ is in $\mathsf{Elem}(\Gamma,A)$
and $\lift{a}\rho\rho'$ is an element of $T'(a\rho)$ where
$(T',T_0,\alpha,\beta) = \lift{A}\rho\rho'$.
\medskip
The intuition behind this definition is that it is a ``proof-relevant'' way
to express the method of reducibility used for proving normalization \cite{FLD}: a
reducibility predicate has to contain all neutral terms and only normalizable terms.
The function $\alpha$ (resp. $\beta$) is closely connected to the ``reify'' (resp. ``reflect'')
function used in normalization by evaluation \cite{BS}, but for a ``glued'' model.
\medskip
We redefine ${N_2}'(t)$ to be the set of elements
$u$ in $\mathsf{Norm}(N_2)|t$
such that $u$ is $0$ or $1$ or is neutral.
We define $\alpha_{N_2} t \nu = \nu$ and $\beta_{N_2}(k) = k$.
\medskip
We define $\alpha_{U_n}~T~(T',T_0,\alpha_T,\beta_T) = T_0$
and for $K$ neutral $\beta_{U_n}(K) = (K',K,\alpha,\beta)$ where $K'(t)$ is
$\mathsf{Neut}(\Val{K})|t$ and $\alpha t k = k$ and $\beta(k) = k$.
\medskip
The set $\hbox{\sf Type}^*_n(\Gamma,\Gamma')$ is defined to be the set of pairs
$A,\lift{A}$ where $A$ is in $\hbox{\sf Type}_n(\Gamma)$ and
$\lift{A}\rho\rho'$ is in $U'_n(A\rho)$.
\medskip
The extension operation is defined by $(\Gamma,\Gamma').(A,\lift{A}) = \Gamma.A,(\Gamma.A)'$
where $(\Gamma.A)'(\rho,u)$ is the set of pairs $\rho',\nu$
with $\rho'\in \Gamma'(\rho)$ and $\nu$ in $\lift{A}\rho \rho'.1(u)$.
\medskip
We define a new operation $\Pi^*~(A,\lift{A})~(B,\lift{B}) = C,\lift{C}$ where $C = \Pi~A~B$
and $\lift{C}\rho\rho'$ is the tuple
\begin{itemize}
\item $C'(w) = \Pi (u : \mathsf{Elem}(A\rho)) \Pi (\nu : T'(u))F' u \nu(\mathsf{app}{w}{u})$
\item $\beta(k) u \nu = \beta_F u \nu (\mathsf{app}{k}{\alpha_T u \nu})$
\item $\alpha~ w~\xi =
\Lambda T_0 g$ with $g(x) = \alpha_F \Val{x} \beta_T(x) (\mathsf{app}{w}{\Val{x}}) (\xi \Val{x} \beta_T(x))$
\item $C_0 = \pi T_0 G$ with $G(x) = F_0 \Val{x} \beta_T(x)$
\end{itemize}
where we write $(T',T_0,\alpha_T,\beta_T) = \lift{A}\rho\rho'$ in $U_n'(A\rho)$
and for each $u$ in $\mathsf{Elem}(A\rho)$ and $\nu$ in $T'(u)$ we write
$(F'u \nu,F_0 u \nu,\alpha_F u \nu,\beta_F u \nu) = \lift{B}(\rho,u)(\rho',\nu)$
in $U_n'(B(\rho,u))$. We can check $[C_0] = (\Pi~A~B)\rho$
and we have $C',C_0,\alpha,\beta$ is an element in $U_n'((\Pi~A~B)\rho).$
\medskip
We define $\lift{U_n} = U_n,{U_n}',\alpha_{U_n},\beta_{U_n}$ and
$\lift{N_2} = N_2,{N_2}',\alpha_{N_2},\beta_{N_2}$.
\medskip
If we have $T$ in $\hbox{\sf Type}_n(\Gamma.N_2)$ and $a_0$ in $\mathsf{Elem}(T{\sf subst}{0})$ and $a_1$ in $\mathsf{Elem}(T{\sf subst}{1})$
and for each $\rho:|\Gamma|$ and $\rho':\Gamma'(\rho)$ and
$u$ in $\mathsf{Elem}(N_2)$ and $\nu$ in $N_2'(u)$ an element
$(T'u \nu,T_0 u \nu,\alpha_T u \nu,\beta_T u \nu)$
in $U_n'(T(\rho,u))$ and $\lift{a_0}$ in $T'~0~0~(a_0)$ and
$\lift{a_1}$ in $T'~1~1~(a_1)$ we define $f = \lift{\hbox{\sf{brec}}(T,a_0,a_1)}\rho\rho'$ as follows.
We take $f~u~\nu = \lift{a_0}$ if $\nu = 0$ and
$f~u~\nu = \lift{a_1}$ if $\nu = 1$ and finally
$f~u~\nu =
\beta_T u \nu (\hbox{\sf{brec}}(\Lambda(N_2,g),\alpha_T 0 0 {a_0} \lift{a_0},
\alpha_T 1 1 {a_1} \lift{a_1})(\nu))
where $g(x) = T_0\Val{x}\beta_{N_2}(x)$ if $\nu$ is neutral.
\medskip
We thus get, starting from an arbitrary model $\mathsf{M}$, a new model $\mathsf{M}^*$ with a projection
map $\mathsf{M}^*\rightarrow \mathsf{M}$. As for the canonicity model, if we start from the initial model $\mathsf{M}_0$
we have an initial map $\mathsf{M}_0\rightarrow\mathsf{M}_0^*$ which is a section of the projection
map. Hence for any $a$ in $\mathsf{Elem}(A)$ we can compute $\lift{a}$ in $A'(a)$
where $(A',A_0,\alpha_A,\beta_A) = \lift{A}$ and we have $\alpha_A~a~\lift{a}$
in $\mathsf{Norm}(A)|a$.
\begin{theorem}
Equality in $\mathsf{M}_0$ is decidable.
\end{theorem}
\begin{proof}
If $a$ and $b$ are of type $A$
we can compute $\lift{A} = (A',A_0,\alpha,\beta)$. We then have $a = b$ in $\mathsf{Elem}(A)$
if, and only if, $\alpha a \lift{a} = \alpha b \lift{b}$ in $\mathsf{Norm}(A)$
since $u = [\alpha u \lift{u}]$ for any $u$ in $\mathsf{Elem}(A)$. The result then
follows from the fact that the equality in $\mathsf{Norm}((),A)$ is decidable.
\end{proof}
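The decision procedure behind this proof can be illustrated by a small normalization-by-evaluation sketch in Python for the simply-typed fragment with $N_2$ (our illustration; it elides universes and type dependency): evaluation sends terms to semantic values, `reflect` plays the role of $\beta$, `reify` the role of $\alpha$, and conversion is decided by comparing the resulting $\beta$-normal $\eta$-long forms.

```python
# Types: ('N2',) and ('arrow', A, B).
# Semantic values: at N2 an int 0/1 or a stuck spine
# ('ne', head_level, [(arg_type, arg_value), ...]); at arrow types a
# Python closure. Input terms use de Bruijn indices; output normal
# forms use de Bruijn levels, so syntactic comparison is sound.

N2 = ('N2',)
def arrow(a, b): return ('arrow', a, b)

def reflect(ty, lvl):
    """beta: embed the variable with de Bruijn level lvl at type ty."""
    def go(ty, spine):
        if ty[0] == 'arrow':
            _, a, b = ty
            return lambda v: go(b, spine + [(a, v)])
        return ('ne', lvl, spine)
    return go(ty, [])

def reify(ty, v, depth):
    """alpha: read a semantic value back as a normal eta-long term."""
    if ty[0] == 'arrow':
        _, a, b = ty
        x = reflect(a, depth)
        return ('lam', reify(b, v(x), depth + 1))
    if isinstance(v, int):               # a canonical boolean
        return v
    _, lvl, spine = v                    # rebuild the application spine
    tm = ('var', lvl)
    for aty, av in spine:
        tm = ('app', tm, reify(aty, av, depth))
    return tm

def evaluate(tm, env):
    tag = tm[0]
    if tag == 'var':  return env[tm[1]]
    if tag == 'lam':  return lambda v: evaluate(tm[1], [v] + env)
    if tag == 'app':  return evaluate(tm[1], env)(evaluate(tm[2], env))
    if tag == 'zero': return 0
    if tag == 'one':  return 1
    raise ValueError(tag)

def convertible(ty, t, u):
    """Decide conversion of closed terms t, u of type ty."""
    return reify(ty, evaluate(t, []), 0) == reify(ty, evaluate(u, []), 0)

ty = arrow(arrow(N2, N2), arrow(N2, N2))
t1 = ('lam', ('var', 0))                                  # \f. f
t2 = ('lam', ('lam', ('app', ('var', 1), ('var', 0))))    # \f.\x. f x
print("eta-expansion identified:", convertible(ty, t1, t2))
```

Note that the procedure identifies the $\eta$-expansion $\lambda f.\lambda x.f~x$ with $\lambda f.f$, in line with the fact that the argument covers $\eta$-conversion.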
We can also prove that $\Pi$ is one-to-one for conversion, following P. Hancock's argument
presented in \cite{ML73}.
\section{Conclusion}
Our argument extends directly to the addition of dependent sum types with surjective
pairing, or inductive types such as the type $\mathsf{W}~A~B$ \cite{ML79}.
The proof is very similar to the argument presented in \cite{ML73}, but it covers
conversion under abstraction and $\eta$-conversion. Instead of set theory, one could formalize
the argument in extensional type theory; presheaf models have already been represented elegantly
in NuPrl \cite{Bickford}.
As we noticed however, the meta theory only uses the form of extensionality ($\eta$-conversion)
also used in the object theory, and we should be able to express the normalization proof as
a program transformation from one type theory to another. The formulation of the presheaf
model as a(n extension of) type theory will be similar to the way cubical type theory \cite{CCHM} expresses syntactically
a presheaf model over a base category which is a Lawvere theory. This should amount essentially
to working in a type theory with a double context, where substitutions for the first context are
restricted to be renamings. We leave this as future work,
which, if successful, would refute some arguments in \cite{ML74} for not accepting $\eta$-conversion
as definitional equality.
\section*{Acknowledgement}
This work started as a reading group of the paper \cite{Shulman} together with Simon Huber and
Christian Sattler. The discussions we had were essential for this work; in particular Christian
Sattler pointed out to me the reference \cite{AHS}.
\section{Introduction}
In the factorization approach to non-leptonic meson decays
\cite{FEYNMAN,STECHF} one can
distinguish three classes of decays for which the amplitudes have the
following general structure \cite{BAUER,NEUBERT}:
\begin{equation}\label{1}
A_{\rm I}=\frac{G_F}{\sqrt{2}} V_{CKM}a_1(\mu)\langle O_1\rangle_F
\qquad {\rm (Class~I)}
\end{equation}
\begin{equation}\label{2}
A_{\rm II}=\frac{G_F}{\sqrt{2}} V_{CKM}a_2(\mu)\langle O_2\rangle_F
\qquad {\rm (Class~II)}
\end{equation}
\begin{equation}\label{3}
A_{\rm III}=
\frac{G_F}{\sqrt{2}} V_{CKM}[a_1(\mu)+x a_2(\mu)]\langle O_1\rangle_F
\qquad {\rm (Class~III)}
\end{equation}
Here $V_{CKM}$ denotes symbolically the CKM factor characteristic of a
given decay. $O_1$ and $O_2$ are local four quark operators present in
the relevant effective hamiltonian, $\langle O_i\rangle_F$ are
the hadronic matrix
elements of these operators given as products of matrix elements of
quark currents and $x$ is a non-perturbative factor equal to unity in
the flavour symmetry limit. Finally $a_i(\mu)$ are $QCD$ factors which
are the main subject of this paper.
As an example consider the decay $\bar B^0\to D^+\pi^-$. Then the
relevant effective hamiltonian is given by
\begin{equation}\label{4}
H_{eff}=\frac{G_F}{\sqrt{2}}V_{cb}V_{ud}^{*}
\lbrack C_1(\mu) O_1+C_2(\mu)O_2 \rbrack
\end{equation}
where
\begin{equation}\label{5}
O_1=(\bar d_i u_i)_{V-A} (\bar c_j b_j)_{V-A}
\qquad
O_2=(\bar d_i u_j)_{V-A} (\bar c_j b_i)_{V-A}
\end{equation}
with $(i,j=1,2,3)$ denoting colour indices and $V-A$ referring to
$\gamma_\mu (1-\gamma_5)$. $C_1(\mu)$ and $C_2(\mu)$ are short distance
Wilson coefficients computed at the renormalization scale $\mu=O(m_b)$.
We will neglect the contributions of penguin operators since their
Wilson coefficients are numerically very small as compared to $C_{1,2}$
\cite{BJL,ROME}. Exceptions are CP-violating decays and rare decays which
are beyond the scope of this paper.
Note that we use here the labeling of the operators as given in
\cite{BAUER,NEUBERT} which differs from \cite{BJL,ROME} by the interchange
$1\leftrightarrow 2$.
$C_i$ and $a_i$ are related as follows:
\begin{equation}\label{6}
a_1(\mu)=C_1(\mu)+\frac{1}{N} C_2(\mu) \qquad
a_2(\mu)=C_2(\mu)+\frac{1}{N} C_1(\mu)
\end{equation}
where $N$ is the number of colours. We will set $N=3$ in what follows.
Application of the factorization method gives
\begin{equation}\label{7}
A(\bar B^0\to D^+\pi^-)=\frac{G_F}{\sqrt{2}}V_{cb}V_{ud}^{*}
a_1(\mu)\langle\pi^-\mid(\bar d_i u_i)_{V-A}\mid 0\rangle
\langle D^+\mid (\bar c_j b_j)_{V-A}\mid \bar B^0\rangle
\end{equation}
where $\langle D^+\pi^-\mid O_1 \mid \bar B^0\rangle$ has been factored
into two
quark current matrix elements and the second term in $a_1(\mu)$ represents
the contribution of the operator $O_2$ in the factorization approach.
Other decays can be handled in a similar manner \cite{NEUBERT}.
Although the flavour
structure of the corresponding local operators changes,
the colour structure
and the coefficients $C_i(\mu)$ remain unchanged. For instance
$\bar B^0\to \bar K^0\psi$ and $B^-\to D^0 K^-$ belong to class II and
III respectively. Finally a similar procedure can be applied to
D-decays with the coefficients $C_i$ evaluated at $\mu=O(m_c)$.
Once the matrix elements have been expressed in terms of various meson
decay constants and generally model dependent formfactors, predictions
for non-leptonic heavy meson decays can be made.
Moreover relations between non-leptonic and semi-leptonic decays can
be found which allow one to test factorization in a model independent
manner.
Although the simplicity of this framework is rather appealing,
it is well known that
non-factorizable contributions must be present in the hadronic matrix
elements of the current--current operators $O_1$ and $O_2$ in order
to cancel the $\mu$ dependence of $C_i(\mu)$ or $a_i(\mu)$ so that
the physical amplitudes do not depend on the arbitrary renormalization
scale $\mu$.
$\langle O_i\rangle_F$ being products of matrix elements of
conserved currents
are $\mu$ independent and the cancellation of the $\mu$ dependence
in (\ref{1})-(\ref{3}) does not take place.
Consequently from the point of view of QCD
the factorization approach can be at best correct at a single value
of $\mu$, the so-called factorization scale $\mu_F$. Although the
approach itself does not provide the value of $\mu_F$, the proponents
of factorization expect $\mu_F=O(m_b)$ and $\mu_F=O(m_c)$ for
B-decays and D-decays respectively.
Here we would like to point out that beyond the leading logarithmic
approximation for $C_i(\mu)$ a new complication arises. As stressed
in \cite{BJLW}, at next to leading level in the renormalization
group improved perturbation theory the coefficients $C_i(\mu)$
depend on the renormalization scheme for operators. Again only
the presence of non-factorizable contributions
in $\langle O_i\rangle$ can
remove this scheme dependence in the physical amplitudes.
However $\langle O_i\rangle_F$ are renormalization scheme
independent and the factorization approach is of course unable
to tell us whether it works better with an anti-commuting $\gamma_5$
in $D\not=4$ dimensions (NDR scheme) or with another definition
of $\gamma_5$ such as used in HV (non-anticommuting $\gamma_5$ in
$D\not=4$) or DRED ($\gamma_5$ in $D=4$) schemes.
The renormalization scheme dependence of $a_i$ emphasized here
is rather
annoying from the factorization point of view as it precludes
a unique phenomenological determination of $\mu_F$ as we will
show explicitly below.
On the other hand, arguments have been given
\cite{BJORKEN,DUGAN,NEUBERT} that once $H_{eff}$ in (\ref{4})
has been constructed, factorization could be
approximately true in the case of two-body decays with high
energy release \cite{BJORKEN}, or in certain kinematic regions
\cite{DUGAN}. We will not repeat here these arguments, which
can be found in the original papers as well as in a critical
analysis of various aspects of factorization presented in
\cite{ISGUR}.
Needless to say the issue of factorization does not only
involve the short distance gluon corrections discussed here
but also final state interactions which are discussed in these
papers.
It is difficult to imagine that factorization
can hold even approximately in all circumstances.
In spite of this, it
has become fashionable these days to
test this idea,
to some extent, by using a certain set of formfactors to calculate
$ \langle O_i\rangle_F $ and by making global fits of
the formulae (\ref{1})-(\ref{3})
to the data treating
$ a_1 $ and $ a_2 $ as free independent parameters. The most recent
analyses of this type give for non-leptonic two-body B-decays
\cite{DEANDREA}-\cite{KAMAL}
\begin{equation}\label{8}
a_1\approx 1.05\pm0.10
\qquad
a_2\approx 0.25\pm0.05
\end{equation}
which is compatible with earlier analyses \cite{NEUBERT,STONE90}.
The new CLEO II data \cite{CLEO} favour
a {\it positive} value of $a_2$ in contrast to earlier expectations
\cite{BAUER,RUCKL} based on extrapolation from charm decays.
At the level of accuracy of the existing experimental data and
because of strong model
dependence in the relevant formfactors it is not yet possible
to conclude on the basis of these analyses whether the
factorization approach is a useful approximation in general or not.
It is
certainly conceivable that factorization may apply better to some
non-leptonic decays than to others
\cite{NEUBERT,BJORKEN,DUGAN,ISGUR,BIGI,RUCKL94}
and using all decays in a global fit may misrepresent the true
situation.
Irrespective of all these reservations let us ask
whether the numerical values in (\ref{8}) agree with
the QCD expectations for
$\mu=O(m_b)$?
A straightforward calculation of $a_i(\mu)$ with $C_i(\mu)$ in the
leading logarithmic approximation \cite{LEE} gives for $ \mu= 5.0~GeV $
and the QCD scale $\Lambda_{LO} = 225\pm85~MeV$
\begin{equation}\label{9}
a_1^{LO}=1.03\pm0.01\quad \qquad a_2^{LO}=0.10\pm0.02
\end{equation}
Whereas the result for $ a_1 $ is compatible with the experimental findings,
the theoretical value for $ a_2 $ disagrees roughly by a factor of two.
The solution to this problem by dropping the $1/N$ terms in (\ref{6})
suggested in \cite{BAUER} and argued for in \cite{RUCKL,BSQCD,BLOK} gives
$a_1^{LO}=1.12\pm 0.02$ and $a_2^{LO}=-0.27\pm0.03$. Whereas the absolute
magnitudes for $a_i$ are consistent with (\ref{8}), the sign of $a_2$ is
wrong. It
has been remarked in \cite{NEUBERT} that the value of $a_2$ could be
increased by
using (\ref{6}) with $ \mu \gg m_b $.
Indeed as shown in table 1 for $\mu=15-20~GeV$
the calculated values for $a_1$ and $a_2$ are compatible with (\ref{8}).
The large value of $\mu=(3-4)~m_b$ is, however, not really what
the proponents of factorization would expect.
\begin{table}[thb]
\caption{Leading order coefficients
$a_1^{LO}$ and $a_2^{LO}$ for B-decays.}
\begin{center}
\begin{tabular}{|c|c|c||c|c||c|c|}
\hline
& \multicolumn{2}{c||}{$\Lambda_{LO}^{(5)}=140~MeV$} &
\multicolumn{2}{c||}{$\Lambda_{LO}^{(5)}=225~MeV$} &
\multicolumn{2}{c| }{$\Lambda_{LO}^{(5)}=310~MeV$} \\
\hline
$\mu [GeV]$ & $a_1$ & $a_2$ & $a_1$ &
$a_2$ & $a_1$ & $a_2$ \\
\hline
\hline
5.0 & 1.024 & 0.124 & 1.030 & 0.099 & 1.035 & 0.078
\\
\hline
10.0 & 1.011 & 0.191 & 1.014 & 0.176 & 1.016 & 0.164
\\
\hline
15.0 & 1.007 & 0.224 & 1.008 & 0.214 & 1.009 & 0.205
\\
\hline
20.0 & 1.004 & 0.246 & 1.005 & 0.238 & 1.006 & 0.231
\\
\hline
\end{tabular}
\end{center}
\end{table}
Yet it should be recalled that in order to address the issue of the value
of $ \mu $
corresponding to the findings in (\ref{8}) it is mandatory to go beyond the
leading logarithmitic approximation and to include at least
the next-to-leading (NLO)
terms. In particular, only then is one able to make meaningful use of the value
for $\Lambda_{\overline{MS}}$ extracted from high energy processes. As an illustration we
have used in (\ref{9}) $\Lambda_{LO}=\Lambda_{\overline{MS}}$ which is of course rather
arbitrary.
To our surprise no NLO analysis of $ a_1 $ and $ a_2 $ has been
presented in the literature in spite of the fact that the NLO
corrections to $ C_1 $ and $ C_2 $ have been known for
many years \cite{ALT,WEISZ}.
At this point an important warning should be made. The coefficients
$ C_1 $ and $C_2 $ as given in \cite{ALT,WEISZ} and also in \cite{BJLW}
cannot simply be inserted
into (\ref{6}), as is often done in the literature.
As stressed in \cite{BJL} the coefficients
given in \cite{ALT,WEISZ,BJLW}
differ from the true coefficients of the operators $O_i$
by $ O(\alpha_s) $ corrections which
have been included in these papers in order to remove the renormalization
scheme dependence. The only paper which gives the true $ C_1 $ and
$ C_2 $ for B-decays is ref. \cite{BJL},
where these coefficients have been
given for
the NDR and HV renormalization
schemes.
Now the main topic of ref. \cite{BJL} was the ratio
$ \varepsilon'/\varepsilon $. Consequently
the full set of ten operators including QCD-penguin and
electroweak penguin
operators had to be considered, which made the whole analysis rather
technical. The penguin operators have, however, no impact on the
coefficients $ C_1 $ and $ C_2 $ and also $ O(\alpha_{QED}) $
renormalization considered in \cite{BJL} can be neglected here. On the other
hand we are interested in the $ \mu$ dependence of $ a_1 $ and $ a_2 $
around $ \mu = O(m_b) $ and consequently we have to generalize the
numerical analysis of \cite{BJL}.
At this point it should be remarked that, in the context of the
leading logarithmic approximation, the sensitivity of $a_2$ to
the precise values of $C_i$ was emphasised long ago in
ref. \cite{K84}. The expectation of K\"uhn and R\"uckl that higher
order QCD corrections should have an important impact on the
numerical values of $a_2$ turns out to be correct as we will
demonstrate explicitly below.
The main objectives of the present paper are:
\begin{itemize}
\item
The values of $ a_1(\mu) $ and $ a_2(\mu) $ beyond the leading
logarithmic approximation,
\item
The analysis of their $ \mu $ and $\Lambda_{\overline{MS}}$ dependences,
\item
The analysis of their renormalization scheme dependence in
general terms, which we will illustrate here
by calculating $ a_i(\mu) $ in three renormalization schemes:
NDR, HV and DRED.
\end{itemize}
Since the $\mu$, $\Lambda_{\overline{MS}}$ and the renormalization scheme dependences of
$a_i(\mu)$ are caused by the non-factorizable hard gluon contributions,
this analysis should give us some estimate of the expected departures
from factorization. It will also give us the answer whether, within the
theoretical uncertainties, the problem of the small value of $a_2$,
stressed by many authors in the past, can be avoided.
Our paper is organized as follows. In section 2 we give
a set of compact expressions
for $ C_1(\mu) $
and $ C_2(\mu) $ which clearly exhibit the $ \mu $ and
renormalization scheme dependences. Subsequently in sections 3 and 4
we will critically analyse $a_i$ for B-decays and D-decays respectively.
Our main findings and conclusions are given in section 5.
\section{Master Formulae}
The coefficients $C_i(\mu)$ can be written as follows:
\begin{equation}\label{10}
C_1(\mu)=\frac{z_+(\mu)+z_-(\mu)}{2}
\qquad\qquad
C_2(\mu)=\frac{z_+(\mu)-z_-(\mu)}{2}
\end{equation}
where
\begin{equation}\label{11}
z_\pm(\mu)=\left[1+\frac{\alpha_s(\mu)}{4\pi}J_\pm\right]
\left[\frac{\alpha_s(M_W)}{\alpha_s(\mu)}\right]^{d_\pm}
\left[1+\frac{\alpha_s(M_W)}{4\pi}(B_\pm-J_\pm)\right]
\end{equation}
with
\begin{equation}\label{12}
J_\pm=\frac{d_\pm}{\beta_0}\beta_1-\frac{\gamma^{(1)}_\pm}{2\beta_0}
\qquad\qquad
d_\pm=\frac{\gamma^{(0)}_\pm}{2\beta_0}
\end{equation}
\begin{equation}\label{13}
\gamma^{(0)}_\pm=\pm 2 (3\mp 1)
\qquad\quad
\beta_0=11-\frac{2}{3}f
\qquad\quad
\beta_1=102-\frac{38}{3}f
\end{equation}
\begin{equation}\label{14}
\gamma^{(1)}_{\pm}=\frac{3 \mp 1}{6}
\left[-21\pm\frac{4}{3}f-2\beta_0\kappa_\pm\right]
\end{equation}
\begin{equation}\label{15}
B_\pm=\frac{3 \mp 1}{6}\left[\pm 11+\kappa_\pm\right].
\end{equation}
Here we have introduced the parameter $\kappa_\pm$ which
distinguishes between various renormalization
schemes:
\begin{equation}\label{16}
\kappa_\pm = \left\{ \begin{array}{rc}
0 & (\rm{NDR}) \\
\mp 4 & (\rm{HV}) \\
\mp 6-3 & (\rm{DRED})
\end{array}\right.
\end{equation}
Thus $J_\pm$ in (\ref{12}) can also be written as
\begin{equation}\label{17}
J_\pm=(J_\pm)_{NDR}+\frac{3\mp 1}{6}\kappa_\pm
=(J_\pm)_{NDR}\pm\frac{\gamma^{(0)}_\pm}{12}\kappa_\pm
\end{equation}
Setting $\gamma_\pm^{(1)}$, $B_\pm$ and $\beta_1$ to zero
gives the leading logarithmic approximation \cite{LEE}. The
NLO corrections in the dimensional reduction scheme (DRED)
have been first considered in \cite{ALT}. The corresponding
calculations in the NDR scheme
and in the HV scheme have been presented in \cite{WEISZ},
where the DRED-results of \cite{ALT} have been confirmed.
In writing (\ref{14}) we have incorporated
the $-2 \gamma^{(1)}_J$ correction
in the HV scheme resulting from the non-vanishing two--loop anomalous
dimension of the weak current. Similarly we have incorporated in
$\gamma^{(1)}_\pm$ a finite renormalization of $\alpha_s$ in the
case of the DRED scheme in order to work in all schemes with the usual
$\overline{MS}$ coupling \cite{BBDM}. For the latter we take
\begin{equation}\label{18}
\alpha_s(\mu)=\frac{4\pi}{\beta_0 \ln(\mu^2/\Lambda^2_{\overline{MS}})}
\left[1-\frac{\beta_1}{\beta^2_0}
\frac{\ln\ln(\mu^2/\Lambda^2_{\overline{MS}})}
{\ln(\mu^2/\Lambda^2_{\overline{MS}})}\right].
\end{equation}
The formulae given above
depend on $f$, the number of active flavours. In the case of
B--decays $f=5$. According to the most recent world average
\cite{WEBER} we have:
\begin{equation}\label{19}
\alpha_s(M_Z)=0.117\pm0.007
\qquad\quad
\Lambda_{\overline{MS}}^{(5)}=(225\pm85)~MeV
\end{equation}
where the superscript stands for $f=5$.
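As a numerical cross-check of the master formulae (\ref{10})--(\ref{18}), the sketch below evaluates $C_{1,2}(\mu)$ and $a_{1,2}(\mu)$ for B-decays ($f=5$). Two inputs are our assumptions rather than quantities quoted in the text: the value $M_W=80.2$ GeV, and the $N=3$ combinations $a_1=C_1+C_2/3$, $a_2=C_2+C_1/3$ of (\ref{6}). With these inputs the $\mu=5$ GeV, $\Lambda_{\overline{MS}}^{(5)}=225$ MeV entries of tables 2--5 are reproduced to the quoted accuracy.

```python
from math import log, pi

# Sketch of the master formulae (10)-(18) for B-decays (f = 5).
# Assumptions not stated in the text: M_W = 80.2 GeV, and the
# N = 3 combinations a1 = C1 + C2/3, a2 = C2 + C1/3 of eq. (6).
F, MW, LAM = 5.0, 80.2, 0.225          # flavours, M_W [GeV], Lambda_MSbar^(5) [GeV]
beta0 = 11.0 - 2.0 * F / 3.0           # eq. (13)
beta1 = 102.0 - 38.0 * F / 3.0

def alpha_s(mu):
    """Two-loop MSbar coupling, eq. (18)."""
    L = log(mu**2 / LAM**2)
    return 4.0 * pi / (beta0 * L) * (1.0 - beta1 / beta0**2 * log(L) / L)

def z(mu, s, kappa=0.0):
    """z_pm(mu) of eq. (11); s = +1/-1, kappa selects the scheme, eq. (16)."""
    gamma0 = s * 2.0 * (3.0 - s)                                   # eq. (13)
    d = gamma0 / (2.0 * beta0)                                     # eq. (12)
    gamma1 = (3.0 - s) / 6.0 * (-21.0 + s * 4.0 * F / 3.0
                                - 2.0 * beta0 * kappa)             # eq. (14)
    B = (3.0 - s) / 6.0 * (s * 11.0 + kappa)                       # eq. (15)
    J = d / beta0 * beta1 - gamma1 / (2.0 * beta0)                 # eq. (12)
    return ((1.0 + alpha_s(mu) / (4.0 * pi) * J)
            * (alpha_s(MW) / alpha_s(mu)) ** d
            * (1.0 + alpha_s(MW) / (4.0 * pi) * (B - J)))

def C1C2(mu, kp=0.0, km=0.0):
    zp, zm = z(mu, +1.0, kp), z(mu, -1.0, km)
    return (zp + zm) / 2.0, (zp - zm) / 2.0                        # eq. (10)

C1, C2 = C1C2(5.0)                     # NDR: kappa_pm = 0
a1, a2 = C1 + C2 / 3.0, C2 + C1 / 3.0
C1h, C2h = C1C2(5.0, kp=-4.0, km=4.0)  # HV: kappa_pm = mp 4
```

With these inputs $(C_1, C_2)$ comes out close to $(1.072, -0.169)$ in NDR and $(1.090, -0.208)$ in HV, matching the $\mu=5$ GeV columns of tables 2 and 3, and the derived $(a_1, a_2)\simeq(1.015, 0.188)$ matches tables 4 and 5.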
In the case of D-decays the relevant scale is $\mu=O(m_c)$. In order
to calculate $C_i(\mu)$ for this case one has to evolve these
coefficients from $\mu=O(m_b)$ down to $\mu=O(m_c)$ in an effective
theory with $f=4$. Matching $\alpha_s^{(5)}(m_b)=\alpha_s^{(4)}(m_b)$
we find to a very good approximation $\Lambda_{\overline{MS}}^{(4)}=(325\pm110)~MeV$.
Unfortunately the necessity to evolve $C_i(\mu)$ from $\mu=M_W$
down to $\mu=m_c$ in two different theories ($f=5$ and $f=4$) and
eventually with $f=3$ for $\mu< m_c$ makes the formulae
for $C_i(\mu)$ in D--decays rather complicated.
They can be found in \cite{BJL}.
Fortunately all these complications can be avoided by a simple trick,
which reproduces the results of \cite{BJL} to better than $0.5\%$.
In order to find $C_i(\mu)$ for $1~GeV\leq\mu\leq 2~GeV$ one can
simply use the master formulae given above with $\Lambda_{\overline{MS}}^{(5)}$ replaced
by $\Lambda_{\overline{MS}}^{(4)}$ and $f=4.15$. The latter "effective" value for $f$
allows one to obtain very good agreement with \cite{BJL}. The nice
feature of this method is that the $\mu$ and renormalization scheme
dependences of $C_i(\mu)$ can be studied in simple terms.
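The effective-flavour trick can itself be checked numerically. The sketch below evaluates the master formulae with $\Lambda_{\overline{MS}}^{(4)}=325$ MeV and $f=4.15$ in the NDR scheme; the input $M_W=80.2$ GeV is our assumption (it is not quoted in the text). The result reproduces the NDR entries of tables 6 and 7 at $\mu=1.5$ GeV.

```python
from math import log, pi

# Check of the "effective flavour" trick for D-decays: the master
# formulae with Lambda_MSbar^(4) = 325 MeV and f = 4.15 (NDR scheme).
# M_W = 80.2 GeV is our assumption (not quoted in the text).
F, MW, LAM = 4.15, 80.2, 0.325
beta0 = 11.0 - 2.0 * F / 3.0           # eq. (13) with effective f
beta1 = 102.0 - 38.0 * F / 3.0

def alpha_s(mu):
    """Two-loop MSbar coupling, eq. (18)."""
    L = log(mu**2 / LAM**2)
    return 4.0 * pi / (beta0 * L) * (1.0 - beta1 / beta0**2 * log(L) / L)

def z(mu, s):
    """z_pm(mu) of eq. (11) in NDR (kappa_pm = 0); s = +1/-1."""
    gamma0 = s * 2.0 * (3.0 - s)                                   # eq. (13)
    d = gamma0 / (2.0 * beta0)                                     # eq. (12)
    gamma1 = (3.0 - s) / 6.0 * (-21.0 + s * 4.0 * F / 3.0)         # eq. (14)
    B = (3.0 - s) / 6.0 * s * 11.0                                 # eq. (15)
    J = d / beta0 * beta1 - gamma1 / (2.0 * beta0)                 # eq. (12)
    return ((1.0 + alpha_s(mu) / (4.0 * pi) * J)
            * (alpha_s(MW) / alpha_s(mu)) ** d
            * (1.0 + alpha_s(MW) / (4.0 * pi) * (B - J)))

zp, zm = z(1.5, +1.0), z(1.5, -1.0)
C1, C2 = (zp + zm) / 2.0, (zp - zm) / 2.0                          # eq. (10)
```

$(C_1, C_2)$ lands close to $(1.188, -0.378)$, the $\mu=1.5$ GeV NDR entries of tables 6 and 7 for $\Lambda_{\overline{MS}}^{(4)}=325$ MeV.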
Returning to (\ref{11}) we note that $(B_\pm-J_\pm)$ is scheme independent.
The scheme dependence of $z_\pm(\mu)$ originates then entirely from
the scheme dependence of $J_\pm$ which has been explicitly shown
in (\ref{17}). We should stress that by the scheme dependence we always mean
the one related to the operator renormalization. The scheme for $\alpha_s$
is always $\overline{MS}$.
The scheme dependence present in the first factor in (\ref{11}) has
been removed in \cite{WEISZ} by multiplying $z_\pm(\mu)$ by
$(1-B_\pm \alpha_s(\mu)/4\pi)$ and the corresponding hadronic
matrix elements by $(1+B_\pm \alpha_s(\mu)/4\pi)$. Although this
procedure is valid in general, it is not useful in the case of
the factorization approach, which precisely omits the non-factorizable,
scheme dependent corrections such as $B_\pm$ or $J_\pm$ in the
hadronic matrix elements. Consequently in what follows we will work
with the true coefficients $C_i(\mu)$ of the operators $O_i$ as given
in (\ref{10}) and (\ref{11}).
In order to exhibit the $\mu$ dependence on the same footing as the
scheme dependence, it is useful to rewrite (\ref{11}) as follows:
\begin{equation}\label{20}
z_\pm(\mu)=\left[1+\frac{\alpha_s(m_b)}{4\pi} \tilde J_\pm(\mu)\right]
\left[\frac{\alpha_s(M_W)}{\alpha_s(m_b)}\right]^{d_\pm}
\left[1+\frac{\alpha_s(M_W)}{4\pi}(B_\pm-J_\pm)\right]
\end{equation}
with
\begin{equation}\label{21}
\tilde J_\pm(\mu)=(J_\pm)_{NDR}\pm
\frac{\gamma^{(0)}_\pm}{12}\kappa_\pm
+\frac{\gamma^{(0)}_\pm}{2}\ln(\frac{\mu^2}{m^2_b})
\end{equation}
summarizing both the renormalization scheme dependence and the
$\mu$--dependence. Note that in the first parenthesis in (\ref{20})
we have
set $\alpha_s(\mu)=\alpha_s(m_b)$ as the difference in the
scales in this correction is still of a higher order.
We also note that the scheme and the $\mu$--dependent terms
are both proportional to $\gamma^{(0)}_\pm$. This implies that a
change of the renormalization scheme can be compensated by a change
in $\mu$. From (\ref{21}) we find generally
\begin{equation}\label{21a}
\mu_i^\pm=\mu_{NDR}\exp\left(\mp\frac{\kappa_\pm^{(i)}}{12}\right)
\end{equation}
where $i$ denotes a given scheme. From (\ref{16}) we have then
\begin{equation}\label{22}
\mu_{HV}=\mu_{NDR}\exp\left(\frac{1}{3}\right)
\qquad
\mu_{DRED}^{\pm}=
\mu_{NDR}\exp\left(\frac{2\pm 1}{4}\right)
\end{equation}
Evidently whereas the change in $\mu$ relating HV and NDR is the
same for $z_+$ and $z_-$ and consequently for $a_i(\mu)$ and
$C_i(\mu)$, the relation between NDR and DRED is more involved. In any
case $\mu_{HV}$ and $\mu_{DRED}^\pm$ are larger than $\mu_{NDR}$.
This discussion shows that a meaningful analysis of the $\mu$
dependence of $C_i(\mu)$ can only be made simultaneously with the
analysis of the scheme dependence.
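As a quick arithmetic illustration of (\ref{22}), since $5\,e^{1/3}\simeq7$ GeV the HV coefficients at $\mu=7$ GeV should come close to the NDR ones at $\mu=5$ GeV. The sketch below checks this with values transcribed from tables 2 and 3 at $\Lambda_{\overline{MS}}^{(5)}=225$ MeV.

```python
from math import exp

# Eq. (22): mu_HV = mu_NDR * exp(1/3) ~ 1.40 mu_NDR, i.e. the HV scheme
# at ~7 GeV should reproduce NDR at 5 GeV. Coefficients transcribed
# from tables 2 and 3 (Lambda_MSbar^(5) = 225 MeV).
mu_NDR = 5.0
mu_HV = mu_NDR * exp(1.0 / 3.0)        # ~ 6.98 GeV

C1_NDR, C2_NDR = 1.072, -0.169         # NDR at mu = 5 GeV
C1_HV, C2_HV = 1.069, -0.166           # HV  at mu = 7 GeV
```

The residual differences of $\simeq0.003$ are of higher order, consistent with the compensation being exact only up to $O(\alpha_s^2)$ terms.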
Using (\ref{20}) and (\ref{21}) we can find the explicit dependence
of $a_i$ on $\mu$ and the renormalization scheme:
\begin{equation}\label{21c}
\Delta a_{1,2}(\mu)
=\frac{\alpha_s(m_b)}{3\pi}\left[F_+\mp F_-\right]
\ln(\frac{\mu^2}{m_b^2})+
\frac{\alpha_s(m_b)}{18\pi}\left[F_+\kappa_+\pm F_-\kappa_-\right]
\end{equation}
where $F_{\pm}$ denotes the product of the last two factors in
(\ref{20}) which are scheme independent.
For $m_b=4.8~GeV$, $\Lambda_{\overline{MS}}^{(5)}=225\pm85~MeV$
we have $F_+=0.88\pm 0.01 $ and $F_-=1.28\pm 0.03$.
It is evident from (\ref{21c}) that
the $\mu$ and renormalization scheme dependences are much smaller
for $a_1$ than for $a_2$. We will verify this numerically below.
We have written all the formulae without invoking heavy quark
effective theory (HQET). It is sometimes stated in the literature
that for $\mu<m_b$ in the case of B-decays one {\it has to} switch
to HQET. In this case for $\mu<m_b$ the anomalous dimensions
$\gamma_\pm$ differ from those given above \cite{GKMW}. We
should however stress that switching to HQET can be done at
any $\mu<m_b$ provided the logarithms $\ln(m_b/\mu)$ in
$\langle O_i \rangle$ do not become too large. Similar comments
apply to D-decays with respect to $\mu=m_c$. Of course the
coefficients $C_i$ calculated in HQET for $\mu<m_b$ are
different from the coefficients presented here. However the
corresponding matrix elements $\langle O_i \rangle$ in HQET are
also different so that the physical amplitudes remain unchanged.
Again, if factorization for $\langle O_i \rangle$ is used, it
matters to some extent at which $\mu$ the HQET is invoked.
For the range of $\mu$ considered here this turns out to be
inessential.
\begin{table}[thb]
\caption{The coefficient $C_1(\mu)$ for B-decays.}
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=140~MeV$} &
\multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=225~MeV$} &
\multicolumn{3}{c| }{$\Lambda_{\overline{MS}}^{(5)}=310~MeV$} \\
\hline
$\mu [GeV]$ & NDR & HV & DRED & NDR &
HV & DRED & NDR & HV & DRED \\
\hline
\hline
4.0 & 1.074 & 1.092 & 1.073 & 1.086 & 1.107 & 1.086 &
1.096 & 1.120 & 1.097 \\
\hline
5.0 & 1.062 & 1.078 & 1.061 & 1.072 & 1.090 & 1.071 &
1.080 & 1.101 & 1.079 \\
\hline
6.0 & 1.054 & 1.069 & 1.052 & 1.062 & 1.079 & 1.060 &
1.068 & 1.087 & 1.067 \\
\hline
7.0 & 1.047 & 1.061 & 1.045 & 1.054 & 1.069 & 1.052 &
1.059 & 1.077 & 1.057 \\
\hline
8.0 & 1.042 & 1.055 & 1.039 & 1.047 & 1.062 & 1.045 &
1.052 & 1.069 & 1.050 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[thb]
\caption{The coefficient $C_2(\mu)$ for B-decays.}
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=140~MeV$} &
\multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=225~MeV$} &
\multicolumn{3}{c| }{$\Lambda_{\overline{MS}}^{(5)}=310~MeV$} \\
\hline
$\mu [GeV]$ & NDR & HV & DRED & NDR &
HV & DRED & NDR & HV & DRED \\
\hline
\hline
4.0 & --.175 & --.211 & --.216 & --.197 & --.239 & --.244 &
--.216 & --.264 & --.269 \\
\hline
5.0 & --.151 & --.184 & --.189 & --.169 & --.208 & --.213 &
--.185 & --.228 & --.233 \\
\hline
6.0 & --.133 & --.164 & --.169 & --.148 & --.184 & --.190 &
--.161 & --.201 & --.207 \\
\hline
7.0 & --.118 & --.148 & --.153 & --.132 & --.166 & --.171 &
--.143 & --.181 & --.186 \\
\hline
8.0 & --.106 & --.135 & --.140 & --.118 & --.151 & --.156 &
--.128 & --.164 & --.169 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{B-Decays}
The coefficients $C_i(\mu)$ are shown in tables
2 and 3 for different
$\mu$, $\Lambda_{\overline{MS}}^{(5)}$ and the three renormalization schemes in question.
We include these results because they should be useful independently
of the
factorization issue. The corresponding values for $a_i(\mu)$ are
given in tables 4 and 5. We observe:
\begin{itemize}
\item
the coefficient $a_1$ is very weakly dependent on $\mu$, $\Lambda_{\overline{MS}}^{(5)}$
and the choice of the renormalization scheme. In the full range of
parameters considered we find:
\begin{equation}\label{35}
a_1=1.01\pm0.02
\end{equation}
in excellent agreement with (\ref{8}). The weak dependence of
$a_1$ on the parameters considered can be
understood by inspecting (\ref{21c}).
\item
the coefficient $a_2$ depends much more strongly on $\mu$, $\Lambda_{\overline{MS}}^{(5)}$
and the choice of the renormalization scheme. Interestingly, for the
NDR scheme we find
\begin{equation}\label{36}
a_2^{NDR}=0.20\pm0.05
\end{equation}
which is in the ball park of the experimental findings in (\ref{8}).
Smaller
values are found for HV and DRED schemes:
\begin{equation}\label{37}
a_2^{HV}=0.16\pm0.05
\qquad
a_2^{DRED}=0.15\pm0.05
\end{equation}
\end{itemize}
\begin{table}[thb]
\caption{The coefficient $a_1(\mu)$ for B-decays.}
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=140~MeV$} &
\multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=225~MeV$} &
\multicolumn{3}{c| }{$\Lambda_{\overline{MS}}^{(5)}=310~MeV$} \\
\hline
$\mu [GeV]$ & NDR & HV & DRED & NDR &
HV & DRED & NDR & HV & DRED \\
\hline
\hline
4.0 & 1.016 & 1.021 & 1.002 & 1.020 & 1.027 & 1.004 &
1.024 & 1.033 & 1.007 \\
\hline
5.0 & 1.012 & 1.017 & 0.998 & 1.015 & 1.021 & 1.000 &
1.018 & 1.025 & 1.002 \\
\hline
6.0 & 1.010 & 1.014 & 0.996 & 1.012 & 1.017 & 0.997 &
1.014 & 1.020 & 0.998 \\
\hline
7.0 & 1.008 & 1.011 & 0.994 & 1.010 & 1.014 & 0.995 &
1.012 & 1.017 & 0.995 \\
\hline
8.0 & 1.007 & 1.010 & 0.993 & 1.008 & 1.012 & 0.993 &
1.010 & 1.014 & 0.993 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[thb]
\caption{The coefficient $a_2(\mu)$ for B-decays.}
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=140~MeV$} &
\multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(5)}=225~MeV$} &
\multicolumn{3}{c| }{$\Lambda_{\overline{MS}}^{(5)}=310~MeV$} \\
\hline
$\mu [GeV]$ & NDR & HV & DRED & NDR &
HV & DRED & NDR & HV & DRED \\
\hline
\hline
4.0 & 0.183 & 0.153 & 0.142 & 0.165 & 0.130 & 0.118 &
0.149 & 0.110 & 0.097 \\
\hline
5.0 & 0.203 & 0.175 & 0.164 & 0.188 & 0.156 & 0.144 &
0.175 & 0.139 & 0.127 \\
\hline
6.0 & 0.219 & 0.192 & 0.181 & 0.206 & 0.175 & 0.164 &
0.195 & 0.161 & 0.149 \\
\hline
7.0 & 0.231 & 0.205 & 0.195 & 0.220 & 0.191 & 0.179 &
0.210 & 0.178 & 0.166 \\
\hline
8.0 & 0.241 & 0.216 & 0.206 & 0.231 & 0.203 & 0.192 &
0.223 & 0.193 & 0.181 \\
\hline
\end{tabular}
\end{center}
\end{table}
This exercise shows that by including NLO QCD corrections and choosing
the renormalization scheme for the operators $O_i$ "appropriately",
one can achieve agreement of the QCD factor $a_2$ in (\ref{6})
evaluated at $\mu=O(m_b)$ with the phenomenological findings. No high
scales
as found in the leading logarithmic approximation are necessary.
Moreover, as it is clear from (\ref{21a}), by choosing a scheme with
positive $\kappa_+$ and negative $\kappa_-$ even higher values for
$a_2$ at $\mu=m_b$ can be obtained.
In spite of the possibility of "fitting" the phenomenological
values for $a_2$ by choosing appropriately the renormalization scheme,
the sizable dependence of $a_2$ on $\mu$ and the renormalization
scheme is rather disturbing from the point of view of the factorization
approach. On the other hand it is interesting that within $2-3\%$
we find $a_1=1$
in the full range of the parameters considered.
We will return to these issues in
the final section.
\section{Charm Decays}
The phenomenological analyses of (\ref{1})-(\ref{3}) give in the case
of two-body D meson decays \cite{NEUBERT}:
\begin{equation}\label{38}
a_1\approx 1.2\pm0.10
\qquad
a_2\approx -0.5\pm 0.10
\end{equation}
The different sign of $a_2$ compared with the case of B-decays shows
that the structure of non-leptonic D decays differs considerably from
the one in B decays. Calculating $a_i$ according to our master formulae
for scales $1.0~GeV\leq \mu\leq 2.0~GeV$ we find that $a_1$
roughly agrees with
(\ref{38}). On the other hand as already found in the leading order
\cite{BAUER,RUCKL,NEUBERT}, the coefficient $a_2$ is generally
substantially smaller than its phenomenological value (\ref{38})
due to strong cancellation between $C_2$ and $C_1/3$. Only for
$\mu=1.0~GeV$, the largest $\Lambda_{\overline{MS}}$ and HV and DRED schemes it is
possible to obtain $a_2$ within a factor of two from the value in
(\ref{38}). Otherwise one typically finds $a_2=O(0.1)$ and consequently
branching ratios for class II decays that are an order of magnitude smaller
than the experimental branching ratios.
Because of these findings, a "new factorization" \cite{BAUER} approach
has been proposed in which the "1/N" terms in (\ref{6}) are discarded.
Some arguments for this modified approach can be given in the frameworks
of $1/N$ expansion \cite{RUCKL} and QCD sum rules \cite{BSQCD}.
Yet "the rule of discarding $1/N$ terms" is certainly not established,
either theoretically \cite{BLOK}
or phenomenologically \cite{STONE}.
Moreover, as we already mentioned in the introduction,
it does not work for B decays, giving the wrong sign for $a_2$.
For completeness however we show in tables 6 and 7 the values of
$a_1=C_1$ and $a_2=C_2$ relevant for D-decays. We observe:
\begin{itemize}
\item
the coefficient $a_1$ is weakly dependent on the choice of
the renormalization scheme
for fixed $\mu$ and $\Lambda_{\overline{MS}}^{(4)}$. The dependence on $\mu$ and $\Lambda_{\overline{MS}}^{(4)}$
is sizable. In the full range of parameters we find
\begin{equation}\label{39}
a_1=1.31\pm 0.19
\end{equation}
which is compatible with phenomenology.
\item
the coefficient $a_2$ depends much more strongly on the renormalization
scheme than $a_1$, and the dependence on $\mu$ and $\Lambda_{\overline{MS}}^{(4)}$ is
really large.
Restricting the range of $\mu$ to $\mu=1.25\pm0.25~GeV$ we find
\begin{equation}\label{39a}
a_2^{NDR}=-0.47\pm 0.15
\qquad
a_2^{HV}\approx a_2^{DRED}\approx -0.60\pm0.22
\end{equation}
in the ball park of (\ref{38}).
\item
the dependences of $a_1$ and $a_2$ on the parameters considered are
stronger in the charm sector than in B decays because of the larger
QCD coupling involved.
\end{itemize}
\begin{table}[thb]
\caption{The coefficient $C_1(\mu)$ for D-decays.}
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(4)}=215~MeV$} &
\multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(4)}=325~MeV$} &
\multicolumn{3}{c| }{$\Lambda_{\overline{MS}}^{(4)}=435~MeV$} \\
\hline
$\mu [GeV]$ & NDR & HV & DRED & NDR &
HV & DRED & NDR & HV & DRED \\
\hline
\hline
1.00 & 1.208 & 1.259 & 1.224 & 1.275 & 1.358 & 1.309 &
1.363 & 1.506 & 1.432 \\
\hline
1.25 & 1.174 & 1.216 & 1.185 & 1.221 & 1.282 & 1.242 &
1.277 & 1.367 & 1.314 \\
\hline
1.50 & 1.152 & 1.187 & 1.160 & 1.188 & 1.237 & 1.203 &
1.228 & 1.296 & 1.252 \\
\hline
1.75 & 1.136 & 1.167 & 1.142 & 1.165 & 1.207 & 1.176 &
1.196 & 1.252 & 1.214 \\
\hline
2.00 & 1.123 & 1.152 & 1.128 & 1.148 & 1.185 & 1.156 &
1.174 & 1.221 & 1.187 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[thb]
\caption{The coefficient $C_2(\mu)$ for D-decays.}
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(4)}=215~MeV$} &
\multicolumn{3}{c||}{$\Lambda_{\overline{MS}}^{(4)}=325~MeV$} &
\multicolumn{3}{c| }{$\Lambda_{\overline{MS}}^{(4)}=435~MeV$} \\
\hline
$\mu [GeV]$ & NDR & HV & DRED & NDR &
HV & DRED & NDR & HV & DRED \\
\hline
\hline
1.00 & --.410 & --.491 & --.492 & --.510 & --.631 & --.630 &
--.632 & --.825 & --.815 \\
\hline
1.25 & --.356 & --.424 & --.427 & --.430 & --.523 & --.525 &
--.512 & --.642 & --.640 \\
\hline
1.50 & --.319 & --.379 & --.383 & --.378 & --.457 & --.459 &
--.439 & --.543 & --.543 \\
\hline
1.75 & --.291 & --.346 & --.350 & --.340 & --.410 & --.414 &
--.390 & --.478 & --.480 \\
\hline
2.00 & --.269 & --.320 & --.324 & --.311 & --.375 & --.379 &
--.353 & --.431 & --.435 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Final Remarks}
We have calculated the QCD factors $a_1$ and $a_2$,
entering the tests of factorization in non-leptonic heavy meson decays,
beyond the leading logarithmic approximation. In particular we have
pointed out that $a_i$ in QCD depend not only on $\mu$ and $\Lambda_{\overline{MS}}$,
but also on the renormalization scheme for the operators. The latter
dependence precludes a unique determination of the factorization
scale $\mu_F$, if such a scale exists at all, at which the factorization
approach would give results identical to QCD. For instance going from
the NDR scheme to the HV scheme is equivalent, in the case of
current-current
operators $O_i$, to a change of $\mu_F$ by $40\%$. Simultaneously
we would like to emphasize
the strong dependence of a possible $\mu_F$ on
$\Lambda_{\overline{MS}}$. The latter uncertainty can however be considerably reduced in
the future by reducing the uncertainty in $\Lambda_{\overline{MS}}$ extracted from high
energy processes. The NLO calculations of $a_i$ and $C_i$ presented
here, allow a meaningful use of $\Lambda_{\overline{MS}}$, extracted from high energy
processes, in the non-leptonic decays in question.
On the phenomenological side the following results are in our opinion
interesting. In the simplest renormalization scheme with anti-commuting
$\gamma_5$ (NDR), $\Lambda_{\overline{MS}}^{(5)}=(225\pm85)~MeV$ and $\mu=6\pm2~GeV$,
we find in the case of B-decays
\begin{equation}\label{40}
a_1^{NDR}=1.02\pm0.01
\qquad
a_2^{NDR}=0.20\pm0.05
\end{equation}
which are in the ball park of the results of phenomenological analyses.
In particular, the inclusion of NLO corrections in the NDR scheme
appears to "solve" the problem of the small value of $a_2$ obtained
in the leading order.
In the case of D-decays, $\Lambda_{\overline{MS}}^{(4)}=(325\pm110)~MeV$,
$\mu=1.25\pm0.25~GeV$ and using the "new factorization" approach we
find
\begin{equation}\label{41}
a_1^{NDR}=1.26\pm0.10
\qquad
a_2^{NDR}=-0.47\pm0.15
\end{equation}
again in the ball park of phenomenological analyses. The standard
factorization gives for D-decays $a_1^{NDR}\approx 1.10\pm0.05$
and $a_2^{NDR}\approx-0.06\pm0.12$ for the same range of parameters.
The result for $a_2$ is phenomenologically unacceptable.
We have also stressed that similar results for $a_1$ in B-decays
are obtained in HV and DRED schemes. Moreover the very weak dependence
of $a_1$ on $\mu$ and $\Lambda_{\overline{MS}}$ indicates that $a_1$ is predicted to be
close to unity in agreement with phenomenology of factorization.
However the $\mu$, $\Lambda_{\overline{MS}}$ and scheme dependences of $a_2$
for B decays and in particular for D decays
are rather sizable.
In our opinion the failure of the usual factorization approach in
D decays and the strong dependence of $a_2$ on $\mu$, $\Lambda_{\overline{MS}}$ and
the choice of the renormalization scheme indicate
that non-factorizable contributions must
play generally an important role in heavy meson non-leptonic decays
if QCD is
the correct description of these decays.
In K meson decays the non-factorizable contributions are known to
be very important anyway \cite{BLOK,NONFACT}.
Consequently we expect that,
as the experimental data improve, sizable departures from factorization
should become visible, in particular in decays belonging to class II.
An exception could be the class I in B decays, where an accidental
approximate cancellation of the $\mu$ and renormalization scheme dependences
takes place in $a_1$. It should however be stressed that the stability
of $a_1$ with respect to changes of $\mu$ and the renormalization
scheme is only a necessary condition for an "effective" validity of
factorization in class I decays. It certainly does not imply that
factorization of matrix elements indeed takes place.
In spite of these critical remarks the tests of factorization in
non-leptonic decays are important because the patterns of the expected
departures from factorization will teach us about the non-factorizable
contributions. Recent discussion of such contributions
can be found in \cite{RUCKL94}.
In this connection, once the data and the models for
formfactors improve, it would be useful to investigate in detail how the
phenomenologically extracted parameters $a_1$ and $a_2$ depend on the
decay channel considered.
I would like to thank Gerhard Buchalla and Robert Fleischer for
critical reading of the manuscript. I also thank Reinhold R\"uckl
for a discussion related to his work.
For much of the past decade, a gap of $\simeq500$ Myr has existed in our knowledge of cosmic history, between the cosmic microwave background at redshift $z\simeq1100$, and the earliest known galaxies at $z\simeq10$ (e.g., \citealt{Coe2013, McLure2013, McLeod2016, Oesch2016}; Donnan et al. in prep). This has been largely due to a lack of deep, high-resolution imaging and spectroscopic capability at $\lambda > 2\mu$m. These instrumental limitations have also significantly restricted our knowledge of galaxy evolution during the first two billion years prior to $z=3$, due to our inability to study the detailed rest-frame {\it optical} properties of galaxies at such redshifts.
To constrain the build-up of stellar mass in currently unseen galaxies at $z>10$, much attention has focused on attempting to measure the star-formation histories (SFHs) of $6 < z < 10$ galaxies. The most important spectral feature is the Balmer break at $\lambda_\mathrm{rest}\simeq4000$\AA, which becomes stronger as galaxy stellar populations age, placing a lower bound on the redshift at which significant star formation commenced. The only data available for this purpose have been relatively shallow, low spatial resolution, very broad-band data from the \textit{Spitzer} IRAC 3.6$\mu$m and 4.5$\mu$m channels. The IRAC signature of a Balmer break can however be degenerate with strong [O\,\textsc{iii}]+H$\beta$ emission, especially when relying on uncertain photometric redshifts (e.g., \citealt{Oesch2015, Roberts-Borsani2016, Roberts-Borsani2020}).
\begin{figure*}
\includegraphics[width=\textwidth]{figures/jwst_paper_fig1.1.pdf}
\includegraphics[width=\textwidth]{figures/jwst_paper_fig1.2.pdf}
\caption{The 10 JWST NIRSpec spectra from which we were able to measure secure spectroscopic redshifts. Wavelength ranges containing key spectral features are excerpted from the full dataset, which spans $\lambda_\mathrm{obs} \simeq1.8-5.2\mu$m with spectral resolution $R=1000$. The top panels show 5 objects at $1 < z < 3$, with redshifts determined primarily from Hydrogen Balmer, Paschen and Brackett lines. The lower panels show 5 objects at $5 < z < 9$, with redshifts measured primarily from Oxygen and Hydrogen Balmer lines. The spectra have been flux normalised and arbitrary vertical shifts applied for visualisation purposes.}
\label{fig:spectra}
\end{figure*}
Recently, several authors have reported evidence for significant Balmer breaks in the spectra of galaxies at $z\simeq8-9$ (e.g., \citealt{Strait2020, Strait2021, Laporte2021}). These results suggest stellar populations with ages of several hundred Myr already in place when the Universe was only $\simeq600$ Myr old, in some cases implying that significant star formation was underway as early as $\simeq100$ Myr after the Big Bang ($z\simeq30$). However, constraining galaxy SFHs from photometric data is challenging due to the age-metallicity-dust degeneracy in galaxy spectral shapes (e.g., \citealt{Conroy2013}), as well as the ill-conditioned nature of the galaxy spectral fitting problem, which results in strong prior-dependence (e.g., \citealt{Ocvirk2006, Carnall2019a, Leja2019}). The above considerations led \cite{Tacchella2022} to conclude that stellar ages for $z\simeq10$ galaxies derived from current data are still highly uncertain.
In \cite{Whitler2022}, the authors find a range of different SFHs for $z=6.6-6.9$ galaxies. They suggest that the most luminous objects at this epoch are a mixture of the most massive galaxies, with ages of up to a few hundred Myr, and galaxies that have undergone a very recent, rapid increase in star formation during the preceding $\lesssim10$\,Myr. However, at these redshifts the wavelength range of interest is only sampled by one, very broad \textit{Spitzer} IRAC band. There is also no coverage of the wavelength range $\lambda_\mathrm{obs} =2.3-3.1\mu$m. To make further progress, separating out the Balmer break from extreme-equivalent-width line emission in the rest-frame optical is critical.
By providing ultra-deep, high spatial and spectral resolution imaging and spectroscopy as far into the infrared as $\lambda_\mathrm{obs} = 28\mu$m, including particularly wide-ranging capabilities at $1-5\mu$m, the \textit{James Webb Space Telescope} (JWST) is set to revolutionise our understanding of galaxy formation during the first few billion years of cosmic history. This will allow us not only to reliably detect, and confirm redshifts for, large samples of $z>10$ galaxies, but also to gain a detailed physical understanding of galaxies at $3 < z < 10$ (e.g., \citealt{Chevallard2019, Kemp2019, Roberts-Borsani2021}).
In this paper, we focus on the first publicly released data from JWST, the Early Release Observations (ERO) covering the SMACS J0723.3–7327 galaxy cluster (hereafter SMACS0723). We aim to demonstrate the improvement in galaxy physical parameter constraints that can be achieved at $5 < z < 9$ by combining spectroscopic redshifts from NIRSpec with deeper, redder and narrower-band photometry from NIRCam.
We begin by reporting 10 spectroscopic redshifts from a total of 35 objects that were observed in this field with the NIRSpec microshutter array ($\lambda_\mathrm{obs}=1.8-5.2\mu$m, at spectral resolution $R=\lambda/\Delta\lambda=1000$). Five of these objects are at $1 < z < 3$, with redshifts measured principally via Hydrogen Paschen lines. The other 5 span $5 < z < 9$, with their redshifts determined from strong rest-frame optical Oxygen and Hydrogen Balmer lines.
For the 5 high-redshift galaxies, we measure fluxes in the 6 NIRCam bands included in the ERO, spanning $\lambda_\mathrm{obs}=0.8-5\mu$m. We perform spectral fitting with \textsc{Bagpipes} \citep{Carnall2018}, employing our new spectroscopic redshifts and JWST photometric data, in combination with existing \textit{Hubble Space Telescope} (HST) photometry. We measure stellar masses, with these objects being some of the first for which robust masses are available at such high redshifts. We also discuss the SFHs of these objects, with the aim of constraining the redshifts at which they began forming stars.
The structure of this paper is as follows. In Section \ref{data} we introduce the NIRCam and NIRSpec data for SMACS0723. In Section \ref{redshifting}, we describe the redshift measurements from the NIRSpec data. In Section \ref{sed_fitting} we present our spectral energy distribution (SED) fitting methodology and results. We present our conclusions in Section \ref{conclusion}. All magnitudes are quoted in the AB system. For cosmological calculations, we adopt $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$ and $H_0$ = 70 $\mathrm{km\ s^{-1}\ Mpc^{-1}}$. We assume a \cite{Kroupa2001} initial mass function, and assume the Solar abundances of \cite{Asplund2009}, such that $\mathrm{Z_\odot} = 0.0142$.
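The age of the Universe at a given redshift, which enters the SFH priors below, follows from direct numerical integration of the Friedmann equation for the adopted cosmology. The following is a minimal pure-Python sketch (the function name and step count are illustrative choices, not part of our analysis code):

```python
import math

# Adopted cosmology from the text: Omega_M = 0.3, Omega_Lambda = 0.7,
# H0 = 70 km/s/Mpc.  The factor 977.8 / H0 converts 1/H0 to Gyr.
OMEGA_M, OMEGA_L, H0 = 0.3, 0.7, 70.0
T_HUBBLE_GYR = 977.8 / H0

def age_at_z(z, steps=100000):
    """Age of the Universe at redshift z, in Gyr, from
    t(z) = t_H * Integral_0^a da' / (a' E(a')),  a = 1/(1+z),
    with E(a) = sqrt(Omega_M a^-3 + Omega_Lambda) for a flat cosmology."""
    a_max = 1.0 / (1.0 + z)
    da = a_max / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da  # midpoint rule
        total += da / (a * math.sqrt(OMEGA_M * a**-3 + OMEGA_L))
    return T_HUBBLE_GYR * total
```

For example, at $z \simeq 8.5$ this returns an age of roughly 0.6 Gyr, consistent with the $\simeq600$ Myr quoted above for $z\simeq8-9$.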
\section{Data}\label{data}
All JWST observations used in this work were taken as part of the SMACS0723 Early Release Observations (Programme ID 2736).
\subsection{NIRCam imaging}\label{data_nircam}
We utilise the deep NIRCam imaging of SMACS0723 in the F090W, F150W, F200W, F277W, F356W and F444W filters, which together provide coverage of the $\lambda_\mathrm{obs} = 0.8-5\mu$m wavelength range.
We reduce the raw level-1 data products using PENCIL (PRIMER Enhanced NIRCam Image Processing Library), a custom version of the JWST pipeline (version 1.6.0), with the latest available calibration files (CRDS\_CTX = jwst\_0916.pmap). We align and stack the reduced images using a combination of {\sc scamp} \citep{Bertin2006} and {\sc swarp} \citep{Bertin2010}, producing final deep images aligned to GAIA EDR3 with a pixel scale of 0.031$^{\prime\prime}$.
Due to the extended nature of the high-redshift galaxies, we perform variable-aperture photometry, using diameters in the range 0.5$^{\prime\prime}$ to 1.25$^{\prime\prime}$, depending on source angular diameter. To measure robust photometric uncertainties for each source, we measure the aperture-to-aperture rms of the nearest $\sim 100$ blank sky apertures, after masking out neighbouring sources \citep{McLeod2016}. We follow the same procedure to extract fluxes in the HST ACS F606W and F814W bands, starting with the mosaics made available by the Reionization Lensing Cluster Survey (RELICS; \citealt{Coe2019}) team.
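The local-depth approach described above can be sketched as follows; this is an illustrative reimplementation rather than our actual reduction code, and the names (\texttt{local\_depth\_error}, \texttt{blank\_apertures}) are hypothetical:

```python
import math

def local_depth_error(source_xy, blank_apertures, n_nearest=100):
    """Sketch of the local-depth method: the flux uncertainty for a source
    is the aperture-to-aperture rms of the ~100 blank-sky apertures nearest
    to it (apertures assumed already masked against neighbouring sources).

    source_xy       : (x, y) pixel position of the source
    blank_apertures : list of ((x, y), flux) for blank-sky apertures
    """
    sx, sy = source_xy
    # rank blank apertures by distance to the source
    ranked = sorted(blank_apertures,
                    key=lambda ap: (ap[0][0] - sx)**2 + (ap[0][1] - sy)**2)
    fluxes = [flux for _, flux in ranked[:n_nearest]]
    mean = sum(fluxes) / len(fluxes)
    # population rms of the nearest-N blank-aperture fluxes
    return math.sqrt(sum((f - mean)**2 for f in fluxes) / len(fluxes))
```

In practice the blank-aperture positions and fluxes come from the masked mosaic itself; the sketch only captures the nearest-$N$ rms step.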
\subsection{NIRSpec spectroscopy}\label{data_nirspec}
Two NIRSpec microshutter array pointings were conducted, s007 and s008, during which 35 objects were observed. For each pointing, two grism/filter combinations were used: G235M/F170LP and G395M/F290LP, providing coverage over the wavelength range $\lambda_\mathrm{obs} \simeq1.8-5.2\mu$m at spectral resolution $R=\lambda/\Delta\lambda\simeq1000$. The 10 objects discussed in this work all received the full integration time of 8754 seconds in both pointings and with both grism/filter combinations. We have used the original level-3 data products made available on 12/07/2022, which were processed with version 1.5.3 of the JWST Science Calibration Pipeline. The calibration reference data used was jwst\_0916.pmap. Coordinates for each object were obtained by cross-matching IDs with the Astronomer's Proposal Tool (APT) input catalogue and refined using the NIRCam imaging.
\begin{table}
\caption{Redshifts for the 5 galaxies shown in the top panels of Fig. \ref{fig:spectra}.}
\label{table:redshifts_loz}
\begingroup
\setlength{\tabcolsep}{12pt}
\renewcommand{\arraystretch}{1.1}
\begin{center}
\begin{tabular}{lccc}
\hline
ID & Redshift & RA & DEC \\
\hline
1917 & 1.244 & 110.87105 & $-$73.46559 \\
8506 & 2.213 & 110.91640 & $-$73.45864 \\
9239 & 2.463 & 110.76578 & $-$73.45161 \\
9483 & 1.163 & 110.79735 & $-$73.44899 \\
9922 & 2.743 & 110.85947 & $-$73.44409 \\
\hline
\end{tabular}
\end{center}
\endgroup
\end{table}
\begin{table*}
\caption{Redshifts, stellar masses and mean stellar ages for the 5 high-redshift galaxies for which spectroscopic redshifts could be measured. The lensing factors were taken from the maps published by the RELICS team using the \textsc{Glafic} tool \citep{Oguri2010}. The SEDs and SFHs for these galaxies are shown in Fig. \ref{fig:sfhs}.}
\label{table:redshifts_hiz}
\begingroup
\setlength{\tabcolsep}{12pt}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{lcccccc}
\hline
ID & Redshift & RA & DEC & log$_{10}(M_*/$M$_\odot)$ & Mean stellar age / Myr & Lensing factor\\
\hline
4590 & 8.498 & 110.85933 & $-$73.44916 & $7.42^{+0.40}_{-0.15}$ & $2^{+10}_{-1}$& 10.09 \\
5144 & 6.383 & 110.83972 & $-$73.44536 & $7.53^{+0.04}_{-0.04}$ & $1.2^{+0.3}_{-0.2}$ & 2.89 \\
6355 & 7.665 & 110.84452 & $-$73.43508 & $8.65^{+0.12}_{-0.11}$ & $1.2^{+0.3}_{-0.2}$ & 2.69 \\
8140 & 5.275 & 110.78804 & $-$73.46179 & $9.19^{+0.19}_{-0.16}$ & $19^{+21}_{-10}$ & 1.67 \\
10612 & 7.663 & 110.83395 & $-$73.43454 & $8.09^{+0.09}_{-0.07}$ & $1.2^{+0.3}_{-0.2}$ & 1.58 \\
\hline
\end{tabular}
\endgroup
\end{table*}
\section{Spectroscopic redshift determination}\label{redshifting}
Redshifts for the spectra described in Section \ref{data_nirspec} were measured by a combination of visual inspection and the Pandora.ez tool \citep{Garilli2010}. Of the 35 objects for which data are available, secure redshifts were obtained in 10 cases. In each case, a range of high-SNR emission line detections were observed, leading to precise and unambiguous spectroscopic redshifts. Key sections of the spectra are shown in Fig. \ref{fig:spectra}, with emission features labelled. Object IDs, coordinates and spectroscopic redshifts are presented in Tables \ref{table:redshifts_loz} and \ref{table:redshifts_hiz}.
The galaxies for which redshifts could be obtained fall into two categories. Firstly, the 5 objects shown in the top panel of Fig. \ref{fig:spectra} fall within the redshift range $1 < z < 3$, with their redshifts determined primarily from Hydrogen Balmer, Paschen and Brackett lines. These near-infrared Hydrogen lines hold much promise as star-formation-rate indicators, as they are far less affected by dust than H$\alpha$ (e.g., \citealt{Pasha2020}). The clear detection of these lines showcases the unique capabilities of JWST. Another highlight is the detection of He\,\textsc{i} 10830\AA\ (e.g., \citealt{Groh2007}) and [Fe\,\textsc{ii}] 12570\AA\ (e.g., \citealt{Izotov2009}), both of which are associated with massive stars.
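As a simple consistency check on these identifications, observed wavelengths follow from $\lambda_\mathrm{obs} = \lambda_\mathrm{rest}(1 + z)$. The sketch below uses standard rest-frame wavelengths and the redshifts from Tables \ref{table:redshifts_loz} and \ref{table:redshifts_hiz} (illustrative only):

```python
# Rest wavelengths in Angstroms (standard values) for features named in
# the text; lambda_obs = lambda_rest * (1 + z), reported in microns.
REST = {
    "Pa-alpha":        18756.0,
    "He I":            10830.0,
    "[Fe II]":         12570.0,
    "H-beta":           4861.0,
    "[O III] 5007":     5007.0,
    "[O III] auroral":  4363.0,
}

def lam_obs_um(lam_rest_angstrom, z):
    """Observed wavelength in microns."""
    return lam_rest_angstrom * (1.0 + z) / 1e4

# e.g. Pa-alpha for object 1917 (z = 1.244) lands at ~4.2 um, and
# [O III] 5007 for object 4590 (z = 8.498) at ~4.8 um, both inside
# the NIRSpec coverage of 1.8-5.2 um.
```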
The focus of this work however is on the second group, shown in the bottom panels of Fig. \ref{fig:spectra}, which comprises $5 < z < 9$ galaxies. For these objects, redshifts were measured primarily using a combination of rest-frame optical Hydrogen Balmer and Oxygen lines. Interestingly, two objects (6355 and 10612) display almost identical redshifts. These objects are however far from the centre of the cluster, and are not listed as multiple images in any currently available lensing analysis of this field (\citealt{Mahler2022, Pascale2022, Caminha2022}).
Several of these spectra show clear detections of the [O\,\textsc{iii}] 4363\AA\ auroral line, commonly used in ``direct" method metallicity measurements \citep{Kewley2019}. Future work by our group will exploit these data, along with other early JWST spectra, to measure gas-phase metallicities during the first billion years (Cullen et al. in prep).
\begin{figure*}
\includegraphics[width=\textwidth]{figures/jwst_paper_fig2.pdf}
\caption{Spectral energy distributions, SFHs and F150W cutout images for the five SMACS0723 galaxies at $5 < z < 9$ with spectroscopic redshifts. In the left panels, blue circles indicate HST photometry, whereas golden hexagons indicate JWST NIRCam photometry. The three highest-redshift objects exhibit a characteristic U-shaped pattern in the F277W, F356W and F444W bands, indicative of a Balmer break seen in emission and high-equivalent-width [O\,\textsc{iii}]+H$\beta$ emission. This indicates a large increase in SFR within the last 10 Myr. Only the lowest-redshift object in the top panel exhibits a traditional Balmer break.}
\label{fig:sfhs}
\end{figure*}
\section{Spectral Energy Distributions}\label{sed_fitting}
The HST+NIRCam SEDs for the five $5 < z < 9$ galaxies are shown in the left-hand panels of Fig. \ref{fig:sfhs}. At $z\simeq7-9$, the F277W and F356W filters ($4.4 < \log_{10}(\lambda/\mbox{\AA}) < 4.6$) bracket the Balmer break, whereas [O\,\textsc{iii}] 5007\AA\ falls in the F444W filter. For our three highest-redshift objects, a U-shaped pattern can be seen across these three filters. This indicates that the Balmer break is seen in emission, sometimes referred to as a Balmer jump. This is a signature of a galaxy dominated by a very young stellar population, with the additional flux below the break provided by nebular continuum emission and (potentially) by extremely massive stars \citep{Martins2020}.
Evidence for a traditional Balmer break is seen only in the spectrum of our lowest-redshift object, at $z=5.275$, with excess flux in the F277W filter. However, it should be noted that [O\,\textsc{iii}] 5007\AA\ falls into the edge of F277W at this redshift, in a region where the filter transmission is approximately a third of its maximum value.
To gain a better understanding of the SFHs of these galaxies, their SEDs were fitted using the \textsc{Bagpipes} spectral fitting code \citep{Carnall2018}. We use the 2016 updated version of the \cite{Bruzual2003} stellar population models with the MILES stellar spectral library. We allow the logarithm of the stellar metallicity, $Z_*$, to vary with a uniform prior over the range $-2 < \log_{10}(Z_*/$Z$_\odot) < -0.3$. Nebular emission is included in \textsc{Bagpipes} via the \textsc{Cloudy} code \citep{Ferland2017}, following the method laid out in \cite{Carnall2018}. We allow the logarithm of the ionization parameter, $U$, to vary over the range $-4 < \log_{10}(U) < -2$ with a uniform prior, and assume the nebular metallicity is the same as the stellar metallicity. We model dust attenuation with the \cite{Salim2018} model, using the same priors as in \cite{Carnall2020}. We vary the ratio of attenuation for stars within stellar birth clouds (assuming a lifetime of 10 Myr) and the broader interstellar medium with a uniform prior from 1 to 3.
We explored a variety of different SFH models to try to understand the constraining power of these new data. In all cases, the four highest-redshift galaxies could not be well fitted except by models in which the bulk of the current stellar population formed within the preceding 10\,Myr. In particular, this is necessary to reproduce the Balmer jump between the F277W and F356W bands seen in the three highest-redshift spectra. In the end, we use a simple constant SFH model, which is adequate to explain the data. We vary the age using a logarithmic prior from 1 Myr to the age of the Universe.
The results of our SED fitting analysis are also shown in Fig. \ref{fig:sfhs}, with key parameters listed in Table \ref{table:redshifts_hiz}. We correct our SFHs and stellar masses using the lensing model released by the RELICS team computed with \textsc{Glafic} \citep{Oguri2010}. We find very young ages for the four highest-redshift objects, significantly below 10 Myr. This is perhaps unsurprising however, given their relatively low delensed stellar masses. The highest mass, SFR and most extreme Balmer jump all belong to object 6355, which shows clear structure in the F150W imaging that could indicate an ongoing merger event.
\section{Conclusion}\label{conclusion}
In this work we present a first-look analysis of the SMACS0723 JWST ERO data, focusing on galaxies with new spectroscopic redshifts from NIRSpec, in particular those at $5 < z < 9$. We report 10 new redshifts from the ERO NIRSpec data, which are shown in Fig. \ref{fig:spectra}. Half of these spectra are for comparatively low-redshift ($1 < z < 3$) galaxies, for which NIRSpec detects a wealth of rest-frame near-infrared emission lines, primarily from the Hydrogen Paschen series. The other 5 spectra are for $5 < z < 9$ galaxies, which display rest-frame optical Hydrogen Balmer and Oxygen lines.
We then fit SEDs generated from HST+NIRCam imaging data for the 5 high-redshift galaxies, focusing on determining their stellar masses and SFHs. For the four $z>6$ objects we see evidence for a Balmer break in emission (Balmer jump), associated with a very young stellar population, the bulk of which must have formed within the past 10 Myr. The three highest-redshift galaxies in particular show a U-shaped pattern in the F277W, F356W and F444W bands, due to the presence of the Balmer jump and high-equivalent-width [O\,\textsc{iii}]+H$\beta$ emission. These extremely young ages are consistent with the relatively low stellar masses we find for these galaxies, with all except the lowest-redshift ($z=5.275$) being comfortably below log$_{10}(M_*/$M$_\odot) = 9$ when corrected for lensing effects.
Larger-area JWST surveys, such as Cosmic Evolution Early Release Science (CEERS\footnote{https://ceers.github.io/}) and Public Release IMaging for Extragalactic Research (PRIMER\footnote{https://primer-jwst.github.io/}), may well uncover more-mature and more-massive galaxies at $z>7$ that do contain stellar populations old enough to exhibit clear Balmer breaks in NIRCam imaging. However, this study highlights the key importance of coupling such imaging surveys with deep NIRSpec follow-up observations, in order to obtain the robust spectroscopic redshifts necessary to distinguish between Balmer breaks and high-equivalent-width [O\,\textsc{iii}]+H$\beta$ emission.
\section*{Acknowledgements}
A.\,C. Carnall thanks the Leverhulme Trust for their support via the Leverhulme Early Career Fellowship scheme. R. Begley, D.\,J. McLeod, M. Hamadouche, C. Donnan, R.\,J. McLure, J.\,S.~Dunlop and F. Cullen acknowledge the support of the Science and Technology Facilities Council. S. Jewell and C. Pollock acknowledge the support of the School of Physics \& Astronomy, University of Edinburgh via Summer Studentship bursaries.
\section*{Data Availability}
\bibliographystyle{mnras}
\section{Introduction}
Holographic studies have recently led to interesting new insights into quantum chaos and the behaviour of entanglement in near-thermal systems \cite{Shenker:2013pqa,Shenker:2013yza,Maldacena:2015waa}. These investigations consider the thermofield double state,
\begin{equation} \label{tfd}
|\psi \rangle = \frac{1}{\sqrt{Z}} \sum_i e^{-\beta E_i/2} |E_i \rangle_1 \otimes |E_i \rangle_2,
\end{equation}
which describes an entangled state between two quantum systems with isomorphic Hilbert spaces $\mathcal H_1$, $\mathcal H_2$, where $|E_i \rangle$ are the energy eigenstates, and $Z = \mbox{tr}\, e^{-\beta H}$ is the partition function, included for normalization. Tracing over one copy leads to a thermal density matrix in the other. When we consider this state in a conformal field theory with a holographic dual \cite{Maldacena:1997re}, it can be described holographically by an eternal black hole, with the two copies of the Hilbert space identified with the two asymptotic boundaries of the black hole \cite{Maldacena:2001kr}. This state purifies the thermal density matrix by a specific pattern of short-range entanglement between the two copies of the CFT; this creates non-zero correlations between operators in the two copies, $\langle O_1 O_2 \rangle \neq 0$. The thermal density matrix is time-independent, which is reflected by the invariance of \eqref{tfd} under time evolution with $H_1 - H_2$. However, if we apply evolution with $H_1 + H_2$, the state evolves in a non-trivial fashion, with the entanglement between the two copies becoming more non-local (as signalled by a decay of the two-sided correlators of local operators). This time-evolution of the entanglement was studied holographically in \cite{Hartman:2013qma}, finding that the entanglement spreads out to larger distance scales linearly in time.
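The statement that tracing over one copy of \eqref{tfd} yields a thermal density matrix is straightforward to verify numerically for a toy spectrum. A minimal sketch (the four-level spectrum and inverse temperature are arbitrary choices):

```python
import math

def thermofield_double(energies, beta):
    """State vector of |psi> = Z^{-1/2} sum_i e^{-beta E_i/2} |i>_1 |i>_2
    for a toy system with the given energy levels.  psi[i][j] = <i,j|psi>,
    and all amplitudes are real, so no conjugation is needed below."""
    n = len(energies)
    Z = sum(math.exp(-beta * E) for E in energies)
    psi = [[0.0] * n for _ in range(n)]
    for i, E in enumerate(energies):
        psi[i][i] = math.exp(-beta * E / 2) / math.sqrt(Z)
    return psi

def reduced_density_matrix(psi):
    """Partial trace over copy 2: rho_1[i][k] = sum_j psi[i][j] psi[k][j]."""
    n = len(psi)
    return [[sum(psi[i][j] * psi[k][j] for j in range(n)) for k in range(n)]
            for i in range(n)]

energies, beta = [0.0, 1.0, 2.5, 4.0], 0.7
rho = reduced_density_matrix(thermofield_double(energies, beta))
Z = sum(math.exp(-beta * E) for E in energies)
```

The reduced matrix is diagonal with Boltzmann weights $e^{-\beta E_i}/Z$, i.e.\ exactly the thermal density matrix.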
In \cite{Shenker:2013pqa}, Shenker and Stanford initiated a study of perturbations of the thermofield double state, to study the behaviour of near-thermal systems.\footnote{The investigation of perturbations of the thermofield double state is also motivated by the conjecture that generic entangled states are related to wormholes (ER=EPR) \cite{Maldacena:2013xja}; see also \cite{Marolf:2013dba,Balasubramanian:2014gla}.} They considered a small perturbation $W$ added to one of the two CFTs at some early time $-t_w$, and studied its effect on the structure of the state at $t=0$.\footnote{Initially the perturbation was taken to be spatially homogeneous, although localised perturbations were later considered in \cite{Roberts:2014isa}. We will restrict attention to homogeneous perturbations.} As we will review below, for large $t_w$, the perturbation can be modelled by a shock wave near the horizon of the black hole. They considered specifically two-dimensional conformal field theories, for which the dual black hole geometry in the thermofield double state is the non-rotating BTZ black hole. The perturbation deforms the geometry of the wormhole connecting the two asymptotic regions at $t=0$, with the length of a geodesic connecting the two boundaries in the perturbed geometry being given by
\begin{equation}
\frac{d}{l} = 2\log\frac{2r}{r_+} + 2\log\left(1 + \frac{\alpha}{2}\right),
\end{equation}
where $l$ is the AdS scale, $r$ is a large-distance cutoff, $r_+$ is the radius of the black hole horizon, and
\begin{equation}
\alpha \sim \frac{E}{M} e^{\frac{2\pi}{\beta} t_w}
\end{equation}
is a parameter controlling the strength of the shock.
This growth in the length of the geodesic is reflected in the correlation functions for generic local operators in the two CFTs; for a given operator $V$ of dimension $\Delta$, we can approximate
\begin{equation} \label{ggrowth}
\langle W | V_L V_R | W \rangle \sim e^{-\Delta d/l} \sim r^{-2\Delta} \left(1 + \frac{\alpha}{2}\right)^{-2\Delta}.
\end{equation}
We see that the effect of the early perturbation on the correlation function at $t=0$ grows exponentially in $t_w$; this is a sign of sensitive dependence on initial conditions. The exponential growth produces an appreciable effect on the correlator when $\alpha$ becomes of order one, at the scrambling time \cite{Sekino:2008he}. In \cite{Maldacena:2015waa}, the value of the commutator $C(t) = - \langle [W(t), V(0)]^2 \rangle$ was adopted as a signature of quantum chaos. The behaviour of the commutator is controlled by the out of time order (OTO) correlation function $\langle V(0) W(t) V(0) W(t) \rangle$, which can be related to \eqref{ggrowth} as will be reviewed below. The lengthening of the geodesic can also be related to changes in the entanglement structure of the dual state through the Ryu-Takayanagi proposal \cite{Ryu:2006bv}.
The behaviour found in \cite{Shenker:2013pqa} is understood to be robust and generic in the space of theories. The calculations were extended to multiple shocks in \cite{Shenker:2013yza}, to localised shocks in \cite{Roberts:2014isa}, and to include stringy corrections in \cite{Shenker:2014cwa}. Field theory arguments have shown that these results apply not just to CFTs with a holographic dual, but much more generally \cite{Roberts:2014ifa,Maldacena:2015waa,Berenstein:2015yxu,Gur-Ari:2015rcq,Stanford:2015owe}.
Another natural extension is to consider the behaviour in the presence of chemical potentials for charge or angular momentum, where the thermofield double state is generalised to
\begin{equation} \label{ctfd}
|\psi \rangle = \frac{1}{\sqrt{Z}} \sum_i e^{-\beta (E_i+ \mu Q_i)/2} |E_i, Q_i \rangle_1 \otimes |E_i, -Q_i \rangle_2.
\end{equation}
This is described holographically by a charged or rotating black hole. The holographic correspondence for eternal charged black holes was studied in \cite{Brecher:2004gn,Andrade:2013rra}. The entanglement structure of these states is similar to the thermofield double, but in the extremal limit $\beta \to \infty$, the entanglement becomes more non-local, and the wormhole in the holographic dual becomes infinitely long. The two copies of the CFT remain entangled, however, and there are classes of operators whose two-sided correlation functions remain finite.
It is interesting to ask how small perturbations of \eqref{ctfd} behave, and how the previous results on the sensitive dependence on initial conditions are modified by the additional scale introduced by the chemical potential. The purpose of this paper is to explore this question holographically, in the simple context of 2+1 gravity, dual to suitable two-dimensional conformal field theories. After completing our work, we realised that this extension of \cite{Shenker:2013pqa} was previously considered in \cite{Leichenauer:2014nxa}.\footnote{Related work on extending the complexity ideas of \cite{Susskind:2014rva} to charged black holes appeared in \cite{Halyo:2015fpa}, and the recent work on complexity and action covers both charged and uncharged examples \cite{Brown:2015bva,Brown:2015lvg}.} (Related recent work is \cite{Sircar:2016old,Roberts:2016wdl}.) There is some overlap with our work; differences are that that paper focuses on the mutual information, and higher-dimensional black holes, while we will focus on correlation functions in 2+1 dimensional black holes.
We find that the growth of the effect is still controlled by the temperature; for the case with rotation, the parameter controlling the strength of the shock is
\begin{equation}
\alpha = \frac{\Delta r_+}{2\kappa l^2}e^{\kappa t_w} = \frac{r_+^2}{(r_+^2 - r_-^2)^2}\left(\frac{E}{4M}(r_+^2 + r_-^2) - \frac{L}{2J}r_-^2\right)\exp\left(\frac{r_+^2 - r_-^2}{l^2r_+}t_w\right),
\end{equation}
where $\kappa$ is the surface gravity of the black hole and $E$, $L$ are the energy and angular momentum carried by the shock, which modifies the geometry by shifting the outer horizon radius by an amount $\Delta r_+$. This is consistent with the results of \cite{Leichenauer:2014nxa}. The Lyapunov exponent characteristic of quantum chaos is thus still $\lambda_L = \kappa$, as in the simple thermal systems. The prefactor is also controlled by the surface gravity, so the dynamics is not directly sensitive to the additional scale associated with the angular momentum. The same slowing down of time evolution controlled entirely by the temperature is seen in correlation functions on unperturbed charged black holes \cite{Brecher:2004gn,Andrade:2013rra}.
\section{Review of the uncharged, non-rotating case}
We first review the original work of \cite{Shenker:2013pqa} on the uncharged case. They considered a spherically symmetric perturbation of an uncharged, non-rotating black hole. For simplicity, they considered the non-rotating BTZ solution in 2+1 dimensions,
\begin{equation}
ds^2 = -f(r)dt^2 + \frac{dr^2}{f(r)} + r^2d\phi^2
\end{equation}
where
\begin{equation}
f(r) = \frac{r^2 - r_+^2}{l^2} = \frac{r^2}{l^2} - M.
\end{equation}
The horizon radius is $r_+$ (this was $R$ in \cite{Shenker:2013pqa}), $l$ is the AdS scale, and $M$ is the black hole mass. To understand the matching across the shell near the horizon, we also need to use Kruskal coordinates,
\begin{align}
U &= -e^{-\kappa u},\\
V &= e^{\kappa v}
\end{align}
where $\kappa = r_+/ l^2$ is the surface gravity, and $u, v = t \mp r_*$, with the tortoise coordinate
\begin{equation}
r_* = -\int_r^\infty\frac{dr'}{f(r')} = \frac{l^2}{2r_+}\log\left(\frac{r - r_+}{r + r_+}\right).
\end{equation}
This gives
\begin{equation}
UV = - \frac{r - r_+}{r + r_+},
\end{equation}
and the manifestly non-singular form of the metric
\begin{equation}
ds^2 = \frac{-4l^2dUdV + r_+^2(1 - UV)^2d\phi^2}{(1 + UV)^2}.
\label{Kruskal metric}
\end{equation}
This defines the relation of the ordinary BTZ coordinates to Kruskal coordinates in region I of figure \ref{regions}. There are similar relations in the other regions.
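These coordinate relations are easy to check numerically: the closed form of $r_*$ satisfies $\mathrm{d}r_*/\mathrm{d}r = 1/f$, and the product $UV$ is independent of $t$ and reproduces $-(r - r_+)/(r + r_+)$. A short sketch, with arbitrary sample values of $l$, $r_+$, $r$ and $t$:

```python
import math

l, r_plus = 1.0, 0.8           # AdS scale and horizon radius (sample values)
kappa = r_plus / l**2           # surface gravity

def f(r):
    return (r**2 - r_plus**2) / l**2

def r_star(r):
    """Closed form of the tortoise coordinate quoted in the text."""
    return (l**2 / (2 * r_plus)) * math.log((r - r_plus) / (r + r_plus))

# Check 1: dr_*/dr = 1/f(r), by central finite difference.
r, h = 1.7, 1e-6
deriv = (r_star(r + h) - r_star(r - h)) / (2 * h)

# Check 2: with U = -e^{-kappa u}, V = e^{kappa v} and u, v = t -/+ r_*,
# the product UV = -e^{2 kappa r_*} = -(r - r_+)/(r + r_+), for any t.
t = 0.37
U = -math.exp(-kappa * (t - r_star(r)))
V = math.exp(kappa * (t + r_star(r)))
```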
\begin{figure}[tb]
\centering
\includegraphics[width=0.4\textwidth]{BTZKruskal.eps}
\caption{Regions I to IV in Kruskal coordinates.}
\label{regions}
\end{figure}
We add energy to the system on the left boundary at some early time, i.e. at a large value, $t_w$, of the $t$ coordinate. For simplicity, it is assumed that the perturbation is spherically symmetric, while the asymptotic energy of the perturbation, $E$, is assumed to be small compared with $M$. Formally, we take a limit $E/ M \rightarrow 0$, $t_w \rightarrow \infty$ with $Ee^{\kappa t_w}/M$ fixed.
In this limit, the perturbation approximately follows null geodesics, so the perturbed geometry is obtained by gluing a BTZ solution with mass $M$ (to the past/right of the perturbation) to one with mass $M + E$ (to the future/left of the perturbation) across the null surface $U_w = e^{-\kappa t_w}$, which meets the left boundary at $t=t_w$. To the right of the shock we have coordinates $U$, $V$ and $\phi$, with parameter $M$ or $r_+$. To the left, we have coordinates $\tilde{U}$, $\tilde{V}$ and $\phi$ and parameter $\tilde{M} = M + E$ or $\tilde{r}_+ =\sqrt{\frac{M + E}{M}}r_+$. The relationship between the two coordinate systems on the shock is fixed by imposing two conditions:
\begin{enumerate}
\item The time coordinate $t$ is required to be continuous at the boundary, i.e. at $r = \infty$. This fixes a relative boost ambiguity.
\item The size of the $S^1$ must be continuous across the shock.
\end{enumerate}
The first of these conditions means that, to the left of the shock, $\tilde{U}_w = e^{-\tilde{\kappa} t_w }$, where $\tilde{\kappa} = \tilde{r}_+ / l^2$. In the limit we get $\tilde{U}_w = U_w(1 - t_w\Delta\kappa)$, where $t_w\Delta\kappa$ is small. The second condition then gives
\begin{equation}
\tilde{V} = V + \alpha,
\label{V has a step}
\end{equation}
where
\begin{equation}
\alpha = \frac{\Delta r_+}{2 \kappa l^2} e^{\kappa t_w} = \frac{E}{4M}e^{r_+t_w / l^2}.
\label{step size}
\end{equation}
This is illustrated in the diagram of figure \ref{perturbed}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{PerturbedBTZ.eps}
\caption{Kruskal coordinates and the perturbed BTZ solution.}
\label{perturbed}
\end{figure}
Note that the positivity of $\alpha$, the step change in the $V$ coordinate, is simply related to the second law of thermodynamics for the entropy of the black hole. We can make $\alpha$ as large as desired by pushing the perturbation further back in time, i.e. by increasing $t_w$.
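The two expressions in \eqref{step size} can be compared directly: the exact horizon shift $\Delta r_+ = \tilde{r}_+ - r_+$ reproduces the $E/4M$ prefactor in the small-$E$ limit. A numerical sketch in units where $l = M = 1$ (arbitrary choices):

```python
import math

l, M = 1.0, 1.0
r_plus = math.sqrt(M) * l       # r_+^2 = M l^2 for non-rotating BTZ
kappa = r_plus / l**2

def alpha_exact(E, t_w):
    """alpha from the horizon shift: Delta r_+ / (2 kappa l^2) * e^{kappa t_w},
    with r~_+ = sqrt((M + E)/M) r_+ as in the text."""
    r_tilde = math.sqrt((M + E) / M) * r_plus
    return (r_tilde - r_plus) / (2 * kappa * l**2) * math.exp(kappa * t_w)

def alpha_leading(E, t_w):
    """Leading small-E expression quoted in eq. (step size): (E/4M) e^{r_+ t_w / l^2}."""
    return E / (4 * M) * math.exp(r_plus * t_w / l**2)

# For E/M << 1 the two agree, and alpha grows exponentially with t_w:
# pushing the perturbation further into the past makes its effect on the
# t = 0 slice arbitrarily large.
```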
As we will see later, the general form of the perturbation $\alpha$ will be essentially the same in the other cases we consider; the essential ingredients are just the structure of the Kruskal coordinates in terms of the tortoise coordinate and the matching conditions.
As BTZ is locally AdS$_3$, the length of geodesics is conveniently calculated using embedding coordinates in a flat $2 + 2$ dimensional spacetime, in which the geodesic distance between points $p$ and $p'$ is given by
\begin{equation}
\cosh\frac{d}{l} = \frac{1}{l^2}\left(T_1T'_1 + T_2T'_2 - X_1X'_1 - X_2X'_2\right).
\label{distance formula}
\end{equation}
These coordinates are related to the Kruskal and BTZ coordinates by
\begin{align}
T_1 &= l\frac{V + U}{1 + UV} = \frac{l}{r_+}\sqrt{r^2 - r_+^2}\sinh\frac{r_+t}{l^2},\\
T_2 &= l\frac{1 - UV}{1 + UV}\cosh\frac{r_+\phi}{l} = \frac{lr}{r_+}\cosh\frac{r_+\phi}{l},\\
X_1 &= l\frac{V - U}{1 + UV} = \frac{l}{r_+}\sqrt{r^2 - r_+^2}\cosh\frac{r_+t}{l^2},\\
X_2 &= l\frac{1 - UV}{1 + UV}\sinh\frac{r_+\phi}{l} = \frac{lr}{r_+}\sinh\frac{r_+\phi}{l},
\end{align}
in region I. We will use a similar method later for rotating BTZ.
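As a check of \eqref{distance formula}, the two-sided geodesic at $t = 0$ in the unperturbed geometry can be evaluated directly from these embedding coordinates; for points on opposite boundaries at a large radial cutoff $r$, it reproduces $d/l \simeq 2\log(2r/r_+)$, the $\alpha = 0$ limit of the perturbed-length formula derived below. A sketch with sample parameter values (the sign flip of $T_1$ and $X_1$ for the left boundary is the standard region-III relation, which the text leaves implicit):

```python
import math

l, r_plus = 1.0, 0.8

def embed(t, r, phi, side=+1):
    """Embedding coordinates of a BTZ exterior point; side = +1 for the
    right boundary (region I), -1 for the left (T_1 and X_1 flip sign)."""
    rho = math.sqrt(r**2 - r_plus**2)
    T1 = side * (l / r_plus) * rho * math.sinh(r_plus * t / l**2)
    T2 = (l * r / r_plus) * math.cosh(r_plus * phi / l)
    X1 = side * (l / r_plus) * rho * math.cosh(r_plus * t / l**2)
    X2 = (l * r / r_plus) * math.sinh(r_plus * phi / l)
    return T1, T2, X1, X2

def geodesic_length(p, q):
    """Eq. (distance formula): cosh(d/l) = (T1 T1' + T2 T2' - X1 X1' - X2 X2')/l^2."""
    T1, T2, X1, X2 = p
    S1, S2, Y1, Y2 = q
    return l * math.acosh((T1 * S1 + T2 * S2 - X1 * Y1 - X2 * Y2) / l**2)

# Two-sided geodesic at t = 0, phi = 0, both endpoints at large radius r:
r = 1e4
d = geodesic_length(embed(0.0, r, 0.0, +1), embed(0.0, r, 0.0, -1))
# d/l -> 2 log(2r/r_+) as r -> infinity.
```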
Geodesics between two points on opposite boundaries must necessarily cross the shock. To calculate the geodesic distance between such points, we
\begin{enumerate}
\item Calculate the geodesic distances between a general location, $U = 0$, $V = V_{\text{shock}}$, on the shock and each of the two boundary points.
\item Extremize the sum over $V_{\text{shock}}$.
\end{enumerate}
This is illustrated in figure \ref{gluing geodesics}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{gluingGeodesics.eps}
\caption{Geodesics through the perturbed wormhole are found by gluing geodesics from each side at a general location on the shock and extremizing the total length over this location. The blue, dashed line shows two geodesics glued at an arbitrary location. The red, solid line, passing through the centre of the conformal diagram, extremizes the total length and is therefore the geodesic required.}
\label{gluing geodesics}
\end{figure}
We use the coordinates to the right of the shock to label the point on the shock. If the two boundary points are both at $t = 0$ and at equal angular coordinate $\phi$, we find that the geodesic crosses the shock at the centre of the conformal diagram at $V_{\text{shock}} = -\alpha / 2$, as one would expect from symmetry. Regulating the overall divergence in the length of the geodesic by taking the distance between points at some large fixed radius $r$, we obtain
\begin{equation}
\frac{d}{l} = 2\log\frac{2r}{r_+} + 2\log\left(1 + \frac{\alpha}{2}\right).
\end{equation}
The second term gives the increase in the length of the geodesic resulting from the addition of the perturbation. This increase may be made arbitrarily large by increasing $t_w$, i.e. by adding the perturbation further back in the past.
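The two-step procedure above can also be carried out numerically. From the embedding coordinates, the distance from a $t = 0$ boundary point to a point $U = 0$, $V = V_\text{shock}$ on the shock satisfies $\cosh(d/l) = (r \mp V\sqrt{r^2 - r_+^2})/r_+$ on the right/left, with $V \to V_\text{shock} + \alpha$ on the left; here we work in the weak-shock limit, where $\tilde{r}_+ \simeq r_+$. A simple scan over $V_\text{shock}$ (the extremum is a maximum in this parametrisation) recovers both the crossing point and the lengthened geodesic:

```python
import math

l, r_plus, alpha = 1.0, 0.8, 0.6   # sample shock strength
r = 1e4                             # large-radius cutoff on both boundaries
rho = math.sqrt(r**2 - r_plus**2)

def total_length(V_shock):
    """Sum of geodesic lengths from each t = 0 boundary point to the point
    U = 0, V = V_shock on the shock; the left segment sees V shifted by alpha."""
    d1 = math.acosh((r - V_shock * rho) / r_plus)             # right side
    d2 = math.acosh((r + (V_shock + alpha) * rho) / r_plus)   # left side
    return l * (d1 + d2)

# Extremize over the crossing point by a simple scan of [-alpha, 0]:
Vs = [-alpha + k * alpha / 10000 for k in range(10001)]
V_best = max(Vs, key=total_length)
d_best = total_length(V_best)
```

The scan returns $V_\text{shock} \simeq -\alpha/2$ and $d/l \simeq 2\log(2r/r_+) + 2\log(1 + \alpha/2)$, in agreement with the formulas above.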
As mentioned in the introduction, we can use the geodesic length to obtain an approximation to the two-point correlation function of operators inserted on the two boundaries of the black hole,
\begin{equation}
\langle W | V_L V_R | W \rangle \sim e^{-\Delta d/l} \sim r^{-2\Delta} \left(1 + \frac{\alpha}{2}\right)^{-2\Delta},
\end{equation}
where $|W \rangle$ is the state obtained by acting on the thermofield double state with the perturbation $W(t_w)$ on the Hilbert space $\mathcal H_1$. Thinking of the thermofield double state as prepared by a path integral on the Euclidean circle, this correlation function can be interpreted as an out of time order correlation function
\begin{equation}
Z^{-1} \mbox{tr}(e^{-\beta H/2} V W(t_w) e^{-\beta H/2} V W(t_w) ).
\end{equation}
The exponential growth of $\alpha$ with $t_w$ corresponds to an exponential decay of this OTO correlation function, which leads to a growth in the squared commutator \cite{Maldacena:2015waa}
\begin{equation}
- Z^{-1} \mbox{tr}(e^{-\beta H/2} [W(t_w),V] e^{-\beta H/2} [W(t_w),V] ).
\end{equation}
The time scale at which $\alpha$ becomes of order one, $t_s = \frac{\beta}{2\pi} \ln \frac{M}{E}$, is recognised as the scrambling time. On this same time scale, the entanglement between the two copies of the CFT in the thermofield double state, which was initially between approximately local degrees of freedom in the two copies, has become delocalised.
\section{Perturbing the rotating BTZ solution}
The simplest extension of this calculation to consider is the rotating BTZ solution, as the geometry is still locally AdS, so geodesic calculations will be simple, and a good deal of progress can be made analytically. This introduces an additional length scale associated with the rotation, and the interesting question is to what extent the physical effects depend on this scale.
The rotating BTZ metric is
\begin{equation}
ds^2 = -f^2(r)dt^2 + f^{-2}(r)dr^2 + r^2\left[N^\phi(r)dt + d\phi\right]^2
\end{equation}
where
\begin{equation}
f^2(r) = -M + \left(\frac{r}{l}\right)^2 + \frac{J^2}{4r^2},
\end{equation}
and we adopt co-rotating coordinates, since we are interested in the behaviour near the horizon, so
\begin{equation}
N^\phi(r) = \frac{J}{2}\frac{r^2 - r_+^2}{r^2r_+^2}.
\end{equation}
The horizon radii $r_+$ and $r_-$ are the solutions to $f^2(r) = 0$,
\begin{equation}
r_\pm^2 = \frac{1}{2}\left(Ml^2 \pm \sqrt{(Ml^2)^2 - J^2l^2}\right).
\end{equation}
We will find it useful to express the metric entirely in terms of $r_+$ and $r_-$, rather than $M$ and $J$, which are
\begin{equation}
\begin{split}
M &= \frac{r_+^2 + r_-^2}{l^2},\\
J &= \frac{2r_+r_-}{l}.
\end{split}
\label{M and J}
\end{equation}
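As a quick numerical sanity check (not part of the derivation), one can verify that (\ref{M and J}) inverts the quadratic formula for $r_\pm^2$. The short Python sketch below uses illustrative parameter values and assumes the non-extremal case $|J| < Ml$.

```python
import math

l = 1.0  # AdS length (illustrative)

def horizon_radii(M, J):
    """r_+ and r_- from M and J; assumes the non-extremal case |J| < M*l."""
    s = math.sqrt((M * l**2)**2 - J**2 * l**2)
    return math.sqrt(0.5 * (M * l**2 + s)), math.sqrt(0.5 * (M * l**2 - s))

def mass_and_spin(rp, rm):
    """The inverse map: M and J in terms of the horizon radii."""
    return (rp**2 + rm**2) / l**2, 2.0 * rp * rm / l

# Round trip: (M, J) -> (r_+, r_-) -> (M, J).
M, J = 2.0, 1.5
rp, rm = horizon_radii(M, J)
M2, J2 = mass_and_spin(rp, rm)
```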
We will assume without loss of generality that $J$ is positive. The metric functions in terms of $r_\pm$ are
\begin{equation}
N^\phi(r) = \frac{r_-}{r_+}\,\frac{r^2 - r_+^2}{lr^2}
\end{equation}
and
\begin{equation}
f^2(r) = \frac{(r^2 - r_+^2)(r^2 - r_-^2)}{l^2r^2}.
\end{equation}
We introduce Kruskal coordinates by writing as before
\begin{align}
U &= -e^{-\kappa u},\\
V &= e^{\kappa v},
\end{align}
where $u, v = t \mp r_*$ and the tortoise coordinate is
\begin{equation}
r_* = \frac{1}{2\kappa}\log\frac{\sqrt{r^2 - r_-^2} - \sqrt{r_+^2 - r_-^2}}{\sqrt{r^2 - r_-^2} + \sqrt{r_+^2 - r_-^2}}
\end{equation}
where $\kappa$ is given by
\begin{equation}
\kappa = \frac{r_+^2 - r_-^2}{l^2r_+}.
\end{equation}
This gives the metric
\begin{equation}
ds^2 = \frac{-4l^2dUdV - 4lr_-(UdV - VdU)d\phi + \left[(1 - UV)^2r_+^2 + 4UVr_-^2\right]d\phi^2}{(1 + UV)^2}.
\end{equation}
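As a consistency check of these coordinate definitions, one can numerically pull the Kruskal-form metric back to the $(t, r, \phi)$ coordinates and compare with the original line element. The Python sketch below does this with finite-difference Jacobians at a sample exterior point; the parameter values are illustrative.

```python
import math

# Illustrative parameters; any non-extremal choice with r_+ > r_- works.
l, rp, rm = 1.0, 1.0, 0.6
kappa = (rp**2 - rm**2) / (l**2 * rp)

def rstar(r):
    a, b = math.sqrt(r**2 - rm**2), math.sqrt(rp**2 - rm**2)
    return math.log((a - b) / (a + b)) / (2 * kappa)

def kruskal(t, r):
    """Kruskal coordinates (U, V) of an exterior (region I) point."""
    return -math.exp(-kappa * (t - rstar(r))), math.exp(kappa * (t + rstar(r)))

def g_kruskal(U, V):
    """Metric components of the Kruskal-form line element at (U, V)."""
    s = (1 + U * V)**2
    return {"UV": -2 * l**2 / s,
            "Up": 2 * l * rm * V / s,
            "Vp": -2 * l * rm * U / s,
            "pp": ((1 - U * V)**2 * rp**2 + 4 * U * V * rm**2) / s}

# Pull back to (t, r, phi) by finite differences at a sample point.
t, r, h = 0.3, 1.7, 1e-6
U, V = kruskal(t, r)
Ut = (kruskal(t + h, r)[0] - kruskal(t - h, r)[0]) / (2 * h)
Ur = (kruskal(t, r + h)[0] - kruskal(t, r - h)[0]) / (2 * h)
Vt = (kruskal(t + h, r)[1] - kruskal(t - h, r)[1]) / (2 * h)
Vr = (kruskal(t, r + h)[1] - kruskal(t, r - h)[1]) / (2 * h)
g = g_kruskal(U, V)
g_tt = 2 * g["UV"] * Ut * Vt
g_rr = 2 * g["UV"] * Ur * Vr
g_tp = g["Up"] * Ut + g["Vp"] * Vt

# Compare with -f^2 dt^2 + f^{-2} dr^2 + r^2 (N^phi dt + dphi)^2.
f2 = (r**2 - rp**2) * (r**2 - rm**2) / (l**2 * r**2)
N = (rm / rp) * (r**2 - rp**2) / (l * r**2)
```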
\subsection{Adding the perturbation}
We consider a spherically symmetric shell which meets the left boundary at some time $t_w$. For finite $t_w$, the trajectory of this shell in the $U,V$ plane will depend on the angular momentum it carries, but as we take the limit of large $t_w$, we apply a large boost in the $U,V$ plane, and the trajectory becomes approximately lightlike, along a line of constant $U$, as in the non-rotating case. The matching problem is then very similar to the one in the non-rotating case. We glue two copies of the rotating BTZ spacetime together along a shock at $U = e^{-\kappa t_w}$. To the right of the shock, the black hole has mass $M$, angular momentum $J$ and coordinates $U$, $V$ and $\phi$, while to the left of the shock, we have mass $\tilde{M} = M + E$, angular momentum $\tilde{J} = J + L$ and coordinates $\tilde{U}$, $\tilde{V}$ and $\phi$. We impose continuity of $t$ at the boundary and $r$ across the shock as for the non-rotating case. The result is a jump in $V$,
\begin{equation}
\tilde{V} = V + \alpha,
\end{equation}
where
\begin{equation}
\alpha = \frac{\Delta r_+}{2\kappa l^2}e^{\kappa t_w}
\end{equation}
exactly as in the non-rotating case. In terms of $M$, $J$, $E$ and $L$
\begin{equation}
\alpha = \frac{r_+^2}{(r_+^2 - r_-^2)^2}\left(\frac{E}{4M}(r_+^2 + r_-^2) - \frac{L}{2J}r_-^2\right)\exp\left(\frac{r_+^2 - r_-^2}{l^2r_+}t_w\right).
\end{equation}
Since the rotating black holes have a throat which grows infinitely long in the extremal limit, one might have thought that for near-extremal black holes it would be possible to add a shock that took one away from extremality, increasing the size of the black hole while lowering the length of the wormhole. However, we find that so long as the second law of thermodynamics is obeyed, so that $\Delta r_+ > 0$, the jump $\alpha > 0$. We will now see that this leads to a longer wormhole.
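One can also check numerically that the linearised closed-form expression for $\alpha$ agrees with the defining relation $\alpha = \Delta r_+ e^{\kappa t_w}/(2\kappa l^2)$ when $\Delta r_+$ is computed exactly for a small shell. A Python sketch, with illustrative values of $M$, $J$ and $t_w$:

```python
import math

# Illustrative parameters for the unperturbed hole and the infall time.
l, M, J, tw = 1.0, 2.0, 1.5, 10.0

def radii(M, J):
    """Outer and inner horizon radii; assumes |J| < M*l."""
    s = math.sqrt((M * l**2)**2 - J**2 * l**2)
    return math.sqrt(0.5 * (M * l**2 + s)), math.sqrt(0.5 * (M * l**2 - s))

rp, rm = radii(M, J)
kappa = (rp**2 - rm**2) / (l**2 * rp)

# Small shell: compute Delta r_+ exactly and use the defining relation.
E, L = 1e-6, 1e-6
drp = radii(M + E, J + L)[0] - rp
alpha_exact = drp * math.exp(kappa * tw) / (2 * kappa * l**2)

# Closed-form linearised expression from the text.
alpha_lin = (rp**2 / (rp**2 - rm**2)**2
             * (E / (4 * M) * (rp**2 + rm**2) - L / (2 * J) * rm**2)
             * math.exp((rp**2 - rm**2) / (l**2 * rp) * tw))
```

The two expressions agree to first order in the shell parameters, and both are positive for this choice, consistent with the second-law argument.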
\subsection{Geodesic lengths}
We will calculate the length of the geodesics in embedding coordinates, as in the non-rotating case. For our co-rotating coordinates, the relation to embedding coordinates is
\begin{align}
T_1 &= \pm\sqrt{\pm B(r)}\sinh\tilde{t}(t, \phi),\\
T_2 &= \sqrt{A(r)}\cosh\tilde{\phi}(t, \phi),\\
X_1 &= \pm\sqrt{\pm B(r)}\cosh\tilde{t}(t, \phi),\\
X_2 &= \sqrt{A(r)}\sinh\tilde{\phi}(t, \phi),
\end{align}
where
\begin{align}
A(r) &= l^2\frac{r^2 - r_-^2}{r_+^2 - r_-^2},\\
B(r) &= l^2\frac{r^2 - r_+^2}{r_+^2 - r_-^2}
\end{align}
and
\begin{align}
\tilde{\phi} &= \frac{r_+\phi}{l}\\
\tilde{t} & = \kappa t - \frac{r_-}{l}\phi.
\end{align}
Here the first $\pm$ in the formulae is positive for regions I and II and negative for regions III and IV, while the second is positive for regions I and IV and negative for regions II and III. The transformation from the Kruskal coordinates to the embedding coordinates is
\begin{equation}
\begin{split}
T_1 &= l\frac{V + U}{1 + UV}\cosh\frac{r_-\phi}{l} - l\frac{V - U}{1 + UV}\sinh\frac{r_-\phi}{l},\\
T_2 &= l\frac{1 - UV}{1 + UV}\cosh\frac{r_+\phi}{l},\\
X_1 &= l\frac{V - U}{1 + UV}\cosh\frac{r_-\phi}{l} - l\frac{V + U}{1 + UV}\sinh\frac{r_-\phi}{l},\\
X_2 &= l\frac{1 - UV}{1 + UV}\sinh\frac{r_+\phi}{l}.
\end{split}
\label{embedding}
\end{equation}
These hold in each of the four regions.
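Since these expressions are used repeatedly below, it is worth checking that they land on the AdS$_3$ hyperboloid $-T_1^2 - T_2^2 + X_1^2 + X_2^2 = -l^2$ for arbitrary $(U, V, \phi)$. A short numerical sketch in Python (parameter values illustrative):

```python
import math

l, rp, rm = 1.0, 1.0, 0.6

def embed(U, V, phi):
    """Embedding coordinates in terms of Kruskal coordinates."""
    s = 1 + U * V
    hp, sp = math.cosh(rp * phi / l), math.sinh(rp * phi / l)
    hm, sm = math.cosh(rm * phi / l), math.sinh(rm * phi / l)
    T1 = l * ((V + U) * hm - (V - U) * sm) / s
    T2 = l * (1 - U * V) * hp / s
    X1 = l * ((V - U) * hm - (V + U) * sm) / s
    X2 = l * (1 - U * V) * sp / s
    return T1, T2, X1, X2

# Sample points from all four regions of the Kruskal diagram: each should
# satisfy -T1^2 - T2^2 + X1^2 + X2^2 = -l^2.
points = [(-0.3, 0.5, 0.7), (0.2, 0.4, -1.1), (0.3, -0.5, 0.0), (-0.2, -0.6, 2.0)]
quads = [(-T1**2 - T2**2 + X1**2 + X2**2)
         for T1, T2, X1, X2 in (embed(*pt) for pt in points)]
```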
We consider first a geodesic from a point at $t=0, \phi=0$ on one boundary to a point at $t=0, \phi =0$ on the other boundary. The main complication relative to the discussion in \cite{Shenker:2013pqa} is that the geodesic may not meet the shock at $\phi =0$. We must join geodesics from the two boundary points at a general point on the shock and then extremize the geodesic length with respect to both the $V$ and $\phi$ coordinates of the meeting point.
To the left of the shock, we need the distance from $(t, r, \phi) = (0, r, 0)$ (in region IV) to $(U', V', \phi') = (0, V + \alpha, \phi)$. The embedding coordinates of the first point are
\begin{align}
T_1 &= 0,\\
T_2 &= l\sqrt{\frac{r^2 - r_-^2}{r_+^2 - r_-^2}},\\
X_1 &= -l\sqrt{\frac{r^2 - r_+^2}{r_+^2 - r_-^2}},\\
X_2 &= 0.
\end{align}
while for the second point we get
\begin{align}
T'_1 &= l(V + \alpha)\cosh\frac{r_-\phi}{l} - l(V + \alpha)\sinh\frac{r_-\phi}{l} = l(V + \alpha)e^{-r_-\phi/l},\\
T'_2 &= l\cosh\frac{r_+\phi}{l},\\
X'_1 &= l(V + \alpha)\cosh\frac{r_-\phi}{l} - l(V + \alpha)\sinh\frac{r_-\phi}{l} = l(V + \alpha)e^{-r_-\phi/l},\\
X'_2 &= l\sinh\frac{r_+\phi}{l}
\end{align}
If we let $d_1$ be the length of the geodesic to the left of the shock, then
\begin{align}
\cosh\frac{d_1}{l} &= \frac{1}{l^2}(T_2T'_2 - X_1X'_1)\\
&= \sqrt{\frac{r^2 - r_-^2}{r_+^2 - r_-^2}}\cosh\frac{r_+\phi}{l} + (V + \alpha)\sqrt{\frac{r^2 - r_+^2}{r_+^2 - r_-^2}}e^{-r_-\phi / l}\\
&\simeq \frac{r}{\sqrt{r_+^2 - r_-^2}}\left(\cosh\frac{r_+\phi}{l} + (V + \alpha)e^{-r_-\phi / l}\right).
\end{align}
For the geodesic to the right of the shock, the calculation proceeds as above, but with the sign of $X_1$ reversed for the boundary point and $V + \alpha$ replaced by $V$ at the shock. Hence
\begin{equation}
\cosh\frac{d_2}{l} \simeq \frac{r}{\sqrt{r_+^2 - r_-^2}}\left(\cosh\frac{r_+\phi}{l} - Ve^{-r_-\phi / l}\right).
\label{d2}
\end{equation}
To find the value of $V$ that extremizes $d = d_1 + d_2$, we differentiate to get
\begin{align}
\frac{1}{l}\sinh\left(\frac{d_1}{l}\right)\frac{\partial d_1}{\partial V} &= \frac{r}{\sqrt{r_+^2 - r_-^2}}e^{-r_-\phi / l},\\
\frac{1}{l}\sinh\left(\frac{d_2}{l}\right)\frac{\partial d_2}{\partial V} &= -\frac{r}{\sqrt{r_+^2 - r_-^2}}e^{-r_-\phi / l}
\end{align}
so that
\begin{equation}
\frac{\partial d}{\partial V} = \frac{lr}{\sqrt{r_+^2 - r_-^2}}e^{-r_-\phi / l}\left(\frac{1}{\sinh\frac{d_1}{l}} - \frac{1}{\sinh\frac{d_2}{l}}\right).
\end{equation}
This vanishes if $d_1 = d_2$, which gives $V = -\alpha / 2$, as we might again have expected from symmetry. Equation (\ref{d2}) now gives us
\begin{equation}
\frac{d}{2l} = \log\frac{2r}{\sqrt{r_+^2 - r_-^2}} + \log\left(\cosh\frac{r_+\phi}{l} + \frac{\alpha}{2}e^{-r_-\phi / l}\right),
\label{lengthening}
\end{equation}
where we have used $\cosh^{-1}x \simeq \log(2x)$ for large $x$. Note that since $\alpha > 0$, the perturbation must increase the length of the geodesic, as we said earlier.
Extremizing (\ref{lengthening}) with respect to $\phi$ gives us
\begin{equation}
r_+\sinh\frac{r_+\phi}{l} = \frac{\alpha r_-}{2}e^{-r_-\phi / l}.
\label{joining phi}
\end{equation}
We define $\phi^*$ to be the value of $\phi$ satisfying this equation and we let $p(\alpha)$ be the contribution of the perturbation to the geodesic length $d/l$, i.e.
\begin{equation}
p(\alpha) = 2\log\left(\cosh\frac{r_+\phi^*}{l} + \frac{\alpha}{2}e^{-r_-\phi^* / l}\right),
\end{equation}
so that
\begin{equation} \label{chgeod}
\frac{d}{l} = 2\log\frac{2r}{\sqrt{r_+^2 - r_-^2}} + p(\alpha).
\end{equation}
Unfortunately, it appears that we cannot solve (\ref{joining phi}) analytically, except in the special cases of non-rotating and extremal black holes. In the first case, we saw earlier that $\phi^* = 0$ and
\begin{equation}
p(\alpha) = 2\log\left(1 + \frac{\alpha}{2}\right),
\end{equation}
while for extremal black holes when $r_+ = r_-$ we get
\begin{equation}
\phi^* = \frac{l}{2r_+}\log\left(1 + \alpha\right)
\end{equation}
and
\begin{equation}
p(\alpha) = \log(1 + \alpha).
\end{equation}
In the general case, it is straightforward to show that both $\phi^*$ and $p(\alpha)$ (and hence the geodesic length) increase with $\alpha$. Given the expressions for $p(\alpha)$ for the two special cases, one would expect similar logarithmic increases in $p(\alpha)$ with respect to $\alpha$ in the general case. The results of numerical calculations, displayed in figure \ref{p},
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth]{p.pdf}
\caption{Increase in the length of the geodesic, as a function of the size of the jump in $V$ coordinate at the shock. Here $r_+ = 1$, so the plot shows graphs for the non-rotating and extremal black holes and four intermediate cases.}
\label{p}
\end{figure}%
would appear to confirm this.
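Although (\ref{joining phi}) cannot be solved in closed form in general, it is a monotone transcendental equation and is easy to solve numerically, as done for the figures. The Python sketch below (a simple bisection; parameter values illustrative) reproduces the two closed-form limits quoted above and the growth of $p(\alpha)$ with $\alpha$.

```python
import math

def phi_star(rp, rm, alpha, l=1.0):
    """Solve r_+ sinh(r_+ phi/l) = (alpha r_-/2) exp(-r_- phi/l) by bisection."""
    g = lambda p: rp * math.sinh(rp * p / l) - 0.5 * alpha * rm * math.exp(-rm * p / l)
    lo, hi = 0.0, 1.0
    while g(hi) < 0.0:   # expand the bracket until the root is enclosed
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_of_alpha(rp, rm, alpha, l=1.0):
    """Contribution of the perturbation to the geodesic length d/l."""
    ps = phi_star(rp, rm, alpha, l)
    return 2.0 * math.log(math.cosh(rp * ps / l) + 0.5 * alpha * math.exp(-rm * ps / l))
```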
Given the non-trivial behaviour of the angular coordinate for these geodesics, there is the concern that it might be possible to find a shorter geodesic between the boundary points, by allowing $\phi$ to go from zero on one boundary to $\phi = 2\pi$ on the other. Applying the numerical calculations to general values of $\phi$ on the boundaries is straightforward, resulting in figure \ref{dVsPhi}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth]{d.pdf}
\caption{Overall geodesic distance plotted against the value of $\phi$ on the right hand boundary, for a range of different $r_-$. Again, $r_+ = 1$, while $\phi$ is set to zero on the left hand boundary.}
\label{dVsPhi}
\end{figure}%
The monotonic increase in geodesic length with the difference in angular coordinate confirms that the shortest geodesic between matching boundary points is that calculated between matching values of $\phi$, not values differing by some multiple of $2\pi$.
As in the non-rotating case, this increase in the length of the geodesics can be interpreted as a decrease in the correlation functions of operators in the state $|W \rangle$ created by acting with the perturbation $W(t_w)$. In the rotating black hole, the initial value of the correlators before the perturbation is smaller, as the factor of $\sqrt{r_+^2 -r_-^2}$ in \eqref{chgeod} increases the length of the geodesics, but the dynamical evolution is as in the non-rotating case, and the change in the length of the geodesics becomes appreciable when $\alpha$ is of order one, at the scrambling time $t_s = \kappa^{-1} \ln \kappa/\Delta r_+$. As in the non-rotating case, this scales as the ratio of the energy of the black hole to the energy of the perturbation. If we take the extremal limit, $\Delta r_+$ could be small compared to $r_+$ but large compared to $\kappa$, but for this to change the scaling form of $t_s$ we would need to go to temperatures $T$ of order the energy of the perturbation.
We can also consider the implications of the geodesic calculation for the entanglement entropy (as in \cite{Leichenauer:2014nxa}), which is also similar to the non-rotating case. Consider the mutual information of two matching regions, one on each boundary, with arc length $\phi$ and centred on the same angular coordinate. Firstly, the entanglement entropy of one of the regions is, assuming $\phi < \pi$,
\[
S_A = \frac{l}{4G_N}\left(2\log\frac{2r}{\sqrt{r_+^2 - r_-^2}} + \log\sinh\frac{(r_+ + r_-)\phi}{2l} + \log\sinh\frac{(r_+ - r_-)\phi}{2l}\right).
\]
Meanwhile, the entanglement entropy of $A \cup B$ is the smallest of
\begin{align}
S_{A \cup B}^{(1)} &= S_A + S_B,\\
S_{A \cup B}^{(2)} &= \frac{l}{2G_N}\left(2\log\frac{2r}{\sqrt{r_+^2 - r_-^2}} + p(\alpha)\right).
\end{align}
Now
\begin{equation}
S_{A \cup B}^{(1)} - S_{A \cup B}^{(2)} = \frac{l}{2G_N}\left(\log\sinh\frac{(r_+ + r_-)\phi}{2l} + \log\sinh\frac{(r_+ - r_-)\phi}{2l} - p(\alpha)\right)
\label{mutual information}
\end{equation}
and if this is positive, then it gives the mutual information, $I(A; B)$. Otherwise, the mutual information is zero and there is no entanglement between the two regions. Near extremality, we need to have large regions to have non-zero mutual information. But our interest here is in the effect of the perturbation, and again the effect becomes significant, decreasing the local entanglement, just when $\alpha$ becomes of order one. Local entanglement is therefore reduced by the perturbation at a rate controlled by the scrambling time.
\section{Perturbing the charged BTZ solution}
The calculation for rotating BTZ is interesting, but as the solution is still locally AdS$_3$, this is a rather special case. We would like to extend the above calculation to further examples. As we will discuss in the next section, considering the correlators for black holes in higher dimensions (charged or uncharged) is challenging. Therefore, we consider here the calculation for a charged black hole in $2+1$ dimensions. We consider Einstein-Hilbert gravity coupled to an ordinary Maxwell field. (It is perhaps more common to consider a Chern-Simons gauge field in this context, but then the solution would remain locally AdS.)
The metric is
\begin{equation}
ds^2 = -f(r)dt^2 + \frac{dr^2}{f(r)} + r^2d\phi^2
\end{equation}
where
\begin{equation}
f(r) = \frac{r^2}{l^2} - M - \frac{Q^2}{2}\log\frac{r}{l}.
\end{equation}
This is supported by a gauge field
\begin{equation}
A = Q \log \frac{r}{r_+} dt.
\end{equation}
We can introduce Kruskal coordinates where
\begin{align}
U &= -e^{-\kappa u},\\
V &= e^{\kappa v},
\end{align}
where $u, v = t \mp r_*$ with a tortoise coordinate
\begin{equation} \label{tcharged}
r_* = - \int_r^\infty \frac{dr}{f(r)},
\end{equation}
and $\kappa = f'(r_+) / 2$ is the surface gravity. The metric in these coordinates is
\begin{equation} \label{kcharged}
ds^2 = \frac{f(r)}{\kappa^2UV}dUdV + r^2d\phi^2,
\end{equation}
where $r$ is a function of $U$ and $V$. In this case one cannot evaluate the integral in \eqref{tcharged} for the tortoise coordinate, so we cannot give a simple expression for $r$ in terms of $U$ and $V$. Near the horizon $r=r_+$,
\begin{equation}
\lim_{r \to r_+} r_*= \frac{1}{2\kappa} \ln \left( \frac{r-r_+}{r_+} \right) + \frac{1}{2\kappa} \ln C
\end{equation}
for some finite constant $C$. This gives $UV \approx -C\frac{(r-r_+)}{r_+}$, so the metric \eqref{kcharged} is regular there. The constant $C$ can be determined numerically for generic parameter values; in the extremal limit $r_+ \to r_-$, it diverges as $C \sim 1/(r_+ - r_-) \sim 1/\kappa$, as in the rotating case.
We consider perturbing this solution by throwing in a charged spherically symmetric shell from the left boundary at some early time $t_w$. The shell will then approximately follow the null trajectory $U = e^{-\kappa t_w}$. The step change in the $V$ coordinate in the shock is determined by the same matching conditions, which give, as before
\begin{equation}
\tilde{V} = V + \alpha,
\end{equation}
where
\begin{equation}
\alpha = C\frac{\Delta r_+}{r_+} e^{\kappa t_w}.
\end{equation}
Here the relation between $\Delta r_+$ and the parameters of the shell would need to be determined numerically for finite $\Delta r_+$ --- for small perturbations adding $m$ to the black hole mass $M$ and $q$ to the charge $Q$ we have
\begin{equation}
\Delta r_+ \approx \frac{1}{2\kappa}\left(m + Qq\log\frac{r_+}{l}\right).
\end{equation}
However, we can see that positivity of the shift $\alpha$ continues to be related to the second law.
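The first-order expression for $\Delta r_+$ can be checked against an exact numerical determination of the outer horizon. In the Python sketch below (illustrative parameter values, assuming $M > 1$ so that $f(l) < 0$ and the bisection bracket starting at $r = l$ is valid), the horizon is found by bisection for the original and perturbed parameters:

```python
import math

l = 1.0

def f(r, M, Q):
    return r**2 / l**2 - M - 0.5 * Q**2 * math.log(r / l)

def outer_horizon(M, Q):
    """Largest root of f(r) = 0 by bisection; assumes f(l) < 0, i.e. M > 1."""
    lo, hi = l, 10.0 * l
    while f(hi, M, Q) < 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid, M, Q) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M, Q, m, q = 2.0, 1.0, 1e-6, 1e-6
rp = outer_horizon(M, Q)
kappa = 0.5 * (2.0 * rp / l**2 - 0.5 * Q**2 / rp)   # kappa = f'(r_+)/2

drp_exact = outer_horizon(M + m, Q + q) - rp
drp_lin = (m + Q * q * math.log(rp / l)) / (2.0 * kappa)
```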
\subsection{Geodesic lengths}
\label{charged no perturbation}
For this case, we cannot find the lengths of geodesics by using the embedding coordinates, so we need to simply solve the geodesic equations numerically. Using the symmetry of the solution we can reduce the problem to an effective one-dimensional problem, for spacelike geodesics
\begin{equation}
\dot{r}^2 = f(r) \left(1 - \frac{L^2}{r^2} \right) + E^2,
\end{equation}
where $E = f(r) \dot t$ and $L = r^2 \dot \phi$ are the constants of motion.
In the unperturbed spacetime, we are interested in geodesics in a constant-time slice (at $t=0$), so we take $E=0$. These geodesics can have turning points at $r = r_-$, $r = r_+$ or $r = \left|L\right|$. For $\left|L\right| > r_+$ we obtain geodesics that return to the boundary from which they started. These will be used in calculations of mutual information. Smaller values of $\left|L\right|$ pass through the wormhole. In either case, half the geodesic length is given by
\begin{equation}
\frac{d}{2} = \lambda_{\text{turn}} = \int_\infty^{r_{\text{turn}}} \frac{dr}{\dot{r}} = \bigints_{r_{\text{turn}}}^\infty \frac{dr}{\sqrt{\left(1 - \frac{L^2}{r^2}\right)f(r) }},
\end{equation}
where we have assumed that the affine parameter $\lambda$ starts at zero on the boundary and that $\dot{r}$ is negative up to the half way point at $\lambda = \lambda_{\text{turn}}$. This is clearly divergent. To find the convergent part, we calculate the integral up to some large value $R$ and subtract the divergent part, given by $l\log R$. We also need to determine the change in the angular coordinate,
\begin{equation}
\frac{\Delta\phi}{2} = \int_0^{\lambda_{\text{turn}}} \frac{L}{r^2}\,d\lambda.
\end{equation}
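As an illustration of this regularization, the Python sketch below computes the convergent part of the half-length for an $E = L = 0$ geodesic, using the substitution $r = r_+ + u^2$ to remove the integrable square-root singularity at the turning point. For the non-rotating, uncharged case $f = r^2/l^2 - M$ the answer is known in closed form, $d/2 = l\log(2/r_+)$, which the numerics reproduce; the same routine applies to the charged $f(r)$ with a numerically determined $r_+$.

```python
import math

l = 1.0

def regularized_half_length(f, rp, R=1e4, n=100000):
    """Integrate dr / sqrt(f(r)) from r_+ to a large cutoff R and subtract the
    l*log(R) divergence.  The substitution r = r_+ + u^2 removes the
    square-root singularity at the turning point r = r_+ (midpoint rule)."""
    umax = math.sqrt(R - rp)
    h = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        r = rp + u * u
        total += 2.0 * u * h / math.sqrt(f(r))
    return total - l * math.log(R)

# Check against the uncharged case f = r^2/l^2 - M, where the regularized
# half-length is exactly l*log(2/r_+).
M = 2.0
rp = l * math.sqrt(M)
d_half = regularized_half_length(lambda r: r * r / l**2 - M, rp)
```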
For the perturbed spacetime, we consider the geodesics connecting two points at $t=0, \phi=0$ on the two boundaries. The symmetry implies the minimal geodesic connecting these points will have $L=0$. It will run from the first boundary to some point on the shock with arbitrary $V$ coordinate and then to the second boundary; we need to consider general points on the shock and extremize over the position. These geodesics will then have $E \neq 0$. The turning points are solutions to $f(r) + E^2 = 0$. If $E$ is large enough then there are no solutions, and the geodesic hits the singularity. Alternatively, there will be two (possibly coincident) solutions, with values of $r$ between $r_-$ and $r_+$.
The simplest case is when $E > 0$. Then $\dot{t} > 0$ and so the geodesic reaches the shock at $r = r_+$ before reaching a turning point. Using
\begin{equation}
\dot{v} = \frac{E - \sqrt{\left(1 - \frac{L^2}{r^2}\right)f(r) + E^2}}{f(r)},
\label{v differential equation}
\end{equation}
given a solution for $r$, we can integrate to obtain $v$ at the intersection with the shock. The geodesic for $E = 1$, up to the shock, is shown in figure \ref{positive H}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{positiveH.pdf}
\caption{Geodesic for $L = 0$, $E = 1$, up to the shock. Note that $r$ remains greater than $r_+$ until the geodesic reaches the shock.}
\label{positive H}
\end{figure}
The more important case will be when $E<0$, so that the geodesic passes through the past horizon at $r = r_+$, and reaches a turning point where $\dot{r}$ becomes positive before reaching the shock. These will give the minimum length geodesics. The geodesic must then be calculated in two halves, before and after the turn. Up to the turn we solve for $r$ and $u$ in terms of $\lambda$ as before, but using
\begin{equation}
\dot{u} = \frac{E + \sqrt{\left(1 - \frac{L^2}{r^2}\right)f(r) + E^2}}{f(r)}
\end{equation}
to calculate $u$ rather than $v$ since $v$ behaves poorly upon crossing the past horizon. We then convert $u$ to $v$ at the turn by adding $2r_*$ using the region III formula for $r_*$.
To handle the second half, we integrate $dr/\dot{r}$ from $r_{\text{turn}}$ to $r_+$ to get $\lambda$ at the shock, and hence the length of the geodesic up to this point. We numerically solve the differential equation for $r$ back from the shock to the turning point and use the result to solve for $v$ using (\ref{v differential equation}). Geodesics for a range of negative values of $E$ are shown in figure \ref{negative H}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{negativeH.pdf}
\caption{Geodesics for $L = 0$ and $E = -0.05, -0.1, \dots, -0.45$. The red, topmost geodesic is for $E = -0.05$, with the hue changing gradually as $-E$ increases. Note that the geodesics all pass through the past horizon and reach a turning point before hitting the shock.}
\label{negative H}
\end{figure}
If we now calculate geodesics for a sufficient number of values of $E$ then we can estimate the value of $E$ required to hit any particular point on the shock. This allows us to calculate the length of full geodesics across the shock. If the perturbation gives a step change of $\alpha$ in the $V$ coordinate upon crossing the shock, then for each value of $V_{\text{shock}}$ we sum the length of geodesics from the right boundary to $(U, V) = (0, V_{\text{shock}})$ and from the left boundary to $(U, V) = (0, V_{\text{shock}} + \alpha)$. If we do this, for example for $\alpha = 4$, then we obtain the results in figure \ref{alpha4}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{alpha4.pdf}
\caption{The sum of geodesic lengths from $(U, V, \phi) = (-1, 1, 0)$ to $(U, V, \phi) = (0, V_{\text{shock}}, 0)$ to the right of the shock and $(U, V, \phi) = (1, -1, 0)$ to $(U, V, \phi) = (0, V_{\text{shock}} + \alpha, 0)$ to the left, plotted against $V_{\text{shock}}$ for $\alpha = 4$. Geodesic length is extremal only at the centre of the perturbed conformal diagram at $V_{\text{shock}} = -\alpha / 2$.}
\label{alpha4}
\end{figure}
The absence of any extremum other than the one expected by symmetry, at $V_{\text{shock}} = -\alpha / 2$, which we find repeated for other values of $\alpha$, indicates that the geodesics joining matching points on the two boundaries cross the shock at the centre of the conformal diagram. This allows us to easily plot the geodesic length against $\alpha$, as in figure \ref{length vs alpha}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{lengthVsAlpha.pdf}
\caption{Geodesic length across the shock, plotted against the strength of the shock as given by $\alpha$.}
\label{length vs alpha}
\end{figure}
We see that the geodesic length increases monotonically with $\alpha$, becoming significant only for $\alpha$ of order one. Thus, as in the rotating case, the effect of the perturbation on correlation functions and mutual information at $t=0$ is determined by the scrambling time at which $\alpha$ becomes of order one, $t_s \sim \kappa^{-1} \ln r_+/\Delta r_+ \sim \kappa^{-1} \ln N^2$.
\section{Higher dimensions}
Our investigations, and the original work on the butterfly effect in \cite{Shenker:2013pqa}, have focused on black holes in three bulk dimensions, corresponding to two-dimensional field theories. It would seem useful to extend the discussion to higher dimensions, as in the study of mutual information in \cite{Leichenauer:2014nxa}. However, there is a significant obstacle to doing so for correlation function calculations. In more than three bulk dimensions, the correlation functions in the unperturbed thermofield double state for $t \neq 0$ are not correctly reproduced by considering the real geodesics in the real Lorentzian geometry; one needs to take complexified geodesics into account \cite{Fidkowski:2003nf}. The geodesics in the real Lorentzian geometry become null, corresponding to a singular correlation function, if we consider equal-time correlation functions at some boundary time $t = -t_*$.
The correlations we have been considering in the perturbed black hole are at $t=0$, but the calculation involves a geodesic on the right which goes from $t=0$ on the boundary to a point on the shock at $V = -\alpha/2$ (and on the left, from $t=0$ on the boundary to a point on the shock at $V = \alpha/2$). If we considered extending this geodesic to the other boundary in the unperturbed geometry, it would meet the other boundary at some $t= -t_0$. Thus, this is just a time-translated version of the geodesic that \cite{Fidkowski:2003nf} concluded was not relevant to the calculation of the correlator on the real sheet.
This is signalled by the fact that when we consider the geodesic from the boundary to the shock as a function of $\alpha$, there is a critical value of $\alpha$ beyond which there is no longer a spacelike geodesic which connects $t=0$ on the boundary to $V = -\alpha/2$ on the shock, as shown in figure \ref{4d negative H}. This critical value of $\alpha$ should correspond to the critical time $t_*$ in \cite{Fidkowski:2003nf}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{4dNegativeH.pdf}
\caption{Geodesics for $L = 0$ and $E = -0.5, -1, \dots, -4.5$ for the simple 3+1 dimensional black hole. The black hole mass, $M$, and the AdS length, $l$, are both set to 1. The red geodesic is for $E = -0.5$, with the hue changing gradually as $-E$ increases. As $-E$ increases, the intersection with the shock moves away from $V = 0$, reaches a critical point and then moves back towards, but never reaches, $V = 0$.}
\label{4d negative H}
\end{figure}
Thus, in higher dimensions, to calculate correlators in the perturbed geometry in the geodesic approximation, we would need to use complexified geodesics as in \cite{Fidkowski:2003nf}. However, the shock wave spacetime is not an analytic solution, so it does not have a unique complex extension allowing us to calculate the lengths of these complex geodesics. This problem could perhaps be addressed by moving away from the shock wave approximation and modelling the effects of the perturbation as some smooth deformation, but this will lead to considerable technical complication, so we leave this for future work.
\acknowledgments
We are grateful for discussions with Yang Lei. AR is supported by an STFC studentship. SFR is supported in part by STFC under consolidated grant ST/L000407/1.
\bibliographystyle{JHEP}
\section*{}
\label{intro}
The story of neutrino physics, which started in 1930 with Pauli's hypothesis of the ``neutron''~\cite{Pauli}, assumed to be a massless,
chargeless fermion of spin $\frac{1}{2}$, proposed in order to explain two outstanding problems of that time associated with the conservation laws, {\it viz.} the conservation of energy and the conservation of angular momentum, has been an amazing one. This hypothesis got a solid foundation in 1933 with the theory of beta decay~\cite{Fermi}
propounded by Fermi, who rechristened Pauli's ``neutron'' the ``neutrino'' and argued that the four-point interaction vertex in beta decay is vectorial in nature.
Later, with the observation of parity violation in beta decay~\cite{Wu} and the observation that neutrinos are left-handed particles~\cite{Goldhaber},
it was established that the weak interaction vertex has a V$-$A~(Vector--Axial Vector) nature, and a theory of neutrino interactions with matter was formulated using chiral~($\gamma_5$) invariance, assuming neutrinos to be massless~\cite{Feynman:1958ty}. To avoid an ultraviolet catastrophe in $\nu_e - e$ scattering, it was assumed that the interaction is mediated by a heavy boson. In 1956, the
electron type neutrino~(rather, an antineutrino $\bar\nu_e$) was detected at the Savannah River reactor~\cite{Reines}. Later it was observed that there are three different flavors of neutrinos, {\it viz.} the electron neutrino~($\nu_e$), muon neutrino~($\nu_\mu$) and tauon neutrino~($\nu_\tau$), which are characterized by their own lepton quantum numbers $L_e$, $L_\mu$ and $L_\tau$, and these are conserved
separately in the weak interaction. In the Standard Model~(SM) of particle physics~\cite{Glashow:1961tr,Weinberg:1967,Salam}, presently known to best describe the properties of the fundamental particles and their interactions, there are three generations of lepton flavors, each placed in a weak isospin doublet, where corresponding to each charged lepton, i.e. $e^-$, $\mu^-$ and
$\tau^-$, there is a massless neutrino with the same lepton number, i.e. $\nu_e$, $\nu_\mu$ and $\nu_\tau$.
The existence of three flavors of neutrinos was experimentally established in 1989, when the Large Electron--Positron Collider~(LEP) confirmed the presence of three active neutrinos~\cite{Mele:2015etc}. The interactions of these neutrinos with matter are mediated by the standard model gauge bosons, {\it viz.} $W^+$, $W^-$ and $Z^0$. The absolute masses of the neutrinos are not known; there are upper experimental limits on them, obtained from the end-point spectrum of beta decay for $\nu_e$, from pion decay for $\nu_\mu$ and from tauon decay for $\nu_\tau$, as well as from some other indirect methods.
The history of the progress of our understanding of the physics of neutrinos is full of surprises, as neutrinos continue to challenge our expectations regarding the validity of certain symmetry principles and conservation laws in particle physics. Today, we know that neutrinos are the most abundant particles in the Universe after photons, but they remain the least understood, owing to their weakly interacting nature, even though they play an important role not only in particle and nuclear physics but also in cosmology and astrophysics. There are natural sources of neutrinos, like those produced during nuclear fusion inside a star's core, in supernova bursts, in the decay of secondary cosmic ray particles in the earth's atmosphere, and the geoneutrinos produced in the earth's core, as well as artificial sources of neutrinos, like those produced by nuclear reactors and particle accelerators. Many of these neutrino sources are being used to learn the properties of neutrinos and their interactions with matter. These neutrinos also provide astrophysical information, about the sun's core, the composition of the earth's core, the time and place of a supernova explosion, etc.~\cite{S}.
The observations of the solar neutrino anomaly and the atmospheric neutrino puzzle are generally understood on the basis of neutrino flavor oscillations, a quantum mechanical effect which implies that at least two of these neutrinos have tiny masses. The observation of the phenomenon of neutrino oscillation therefore requires new physics Beyond the Standard Model~(BSM). Neutrino oscillations have also been observed in accelerator as well as reactor neutrino experiments. The three neutrino flavor states $\nu_e$, $\nu_\mu$ and $\nu_\tau$ of the standard model
are considered to be mixtures of the three mass eigenstates with masses $m_1$, $m_2$ and $m_3$. The mixing is described in terms of the
Pontecorvo, Maki, Nagakawa, Sakata~(PMNS) matrix~\cite{Maki}, which is most popularly parameterized in terms of the
three mixing angles $\theta_{12}$, $\theta_{13}$ and $\theta_{23}$, and a phase $\delta$, better known as $\delta_{CP}$ as it can be used to describe CP violation. Some of these oscillation parameters have been determined in the solar and reactor~($\theta_{12}$), accelerator~($\theta_{13}$) and atmospheric~($\theta_{23}$) neutrino experiments. One important determination which is yet to be made is whether the neutrinos follow the normal mass hierarchy~($m_1~<~m_2~<~m_3$) or the inverted mass hierarchy~($m_3~<~m_1~<~m_2$). This is because the neutrino oscillation experiments can determine only the squared mass difference ${\Delta m}^2_{21}$~(sensitive to solar and reactor sources) and the absolute value $|{\Delta m}^2_{31}|$~(sensitive to reactor, accelerator and atmospheric sources), while the sign of ${\Delta m}^2_{31}$ is required to settle the
mass hierarchy problem. Recently, some information on $\delta_{CP}$ has been obtained, but it is very limited. To understand the properties of neutrinos, to determine the various parameters of the PMNS matrix and the CP violating phase
$\delta_{CP}$, and to determine the mass hierarchy of the neutrino mass eigenstates, several experiments at low energies~(corresponding to reactor, solar, and supernova neutrinos) as well as at medium energies~(corresponding to accelerator and atmospheric neutrinos) are being performed.
In the very low energy region relevant for reactor and solar neutrinos, only exclusive transitions to the ground state or a few low-lying excited states of the final nucleus are accessible. In the medium and high energy region, accelerator experiments like MiniBooNE, T2K, SciBooNE, CNGS, OPERA, NOvA, etc., as well as atmospheric neutrino experiments like SuperKamiokande, IceCube, etc., have (anti)neutrinos with energies sufficient to excite many nuclear states and to create new particles, and can induce various inclusive processes: quasielastic~(QE), inelastic~(IE) (like 1$\pi$, 1$\eta$, 1$K$, $YK$($Y=\Lambda, \Sigma$) production, etc.) and deep inelastic~(DIS) scattering, given by~\cite{S}:
\begin{eqnarray}\label{Ch-4:process1_nu}
\nu_{l} (\bar{\nu}_{l}) + N &\longrightarrow & l^{-} (l^{+}) + N^{\prime}, \quad \quad \rm{(QE)}\nonumber\\
\bar{\nu}_{l} + N &\longrightarrow & l^{+} + Y,\;\; \rm{QE \;\;hyperon(Y) \;\;production}\nonumber\\
\nu_{l} (\bar{\nu}_{l}) + N &\longrightarrow & l^{-} (l^{+}) + N^{\prime} + X,\;\;(IE), \;\;X=\pi, K, \eta,\nonumber\\
\nu_{l} (\bar{\nu}_{l}) + N &\longrightarrow & l^{-} (l^{+}) + Y + K(\bar K),\quad \quad (IE \;\; \rm{Associated\;\; particle \;\;production})\nonumber\\
\nu_{l} (\bar{\nu}_{l}) + N&\longrightarrow & l^{-} (l^{+}) + \rm{jet\;\; of\;\; hadrons\;\; (DIS)}, \rm{where}\nonumber\\
&&N,N^\prime=n \;\;or\;\; p. \;\;\nonumber
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[height=5 cm, width=5.9cm]{3gev_color_presentation.eps}
\includegraphics[height=5 cm, width=5.9cm]{10gev_color_presentation.eps}
\end{center}
\caption{Allowed kinematical region for $\nu_l - N$ scattering in the ($Q^2, \nu$) plane for $E_\nu$ = 3 GeV~(left panel) and
$E_\nu$ = 10 GeV~(right panel) with $Q^2 \ge 0$ ($Q^2$ is the four momentum transfer squared). The invariant mass squared is defined as $W^2=M^2+2M\nu-Q^2$ and the elastic limit is
$x=\frac{Q^2}{2M_N\nu}=1$. The forbidden region in terms of $x$ and $y=\frac{\nu}{E_\nu}=\frac{(E_\nu - E_l)}{E_\nu}$ is defined
by $x,y~\notin~[0,1]$. Processes like photon emission are possible in the extreme left band~(the region between $W=M$ and $W < M+m_\pi$). The SIS region is defined as the region for which $M+m_\pi \le W \le 2~$GeV and $Q^2 \ge 0$, the DIS
region as the region for which $Q^2 \ge 1~$GeV$^2$ and $W \ge 2~$GeV, and the soft DIS region as $Q^2 < 1~$GeV$^2$
and $W \ge 2~$GeV; the soft DIS region essentially coincides with the SIS region. For bound nucleons in nuclear targets, the extreme left band also receives contributions from $np$-$nh$~(e.g. 2p-2h)
excitations. The boundaries between regions are not sharply established and are indicative only.} \label{fig3}
\end{figure}
This volume is focused on the interaction of the intermediate and high energy neutrinos in the region of a few GeV. The inclusive cross sections in the quasielastic region are analysed in terms of the weak form factors of the nucleon and the cross sections in the
inelastic scattering corresponding to the excitations of various
nucleon resonances, lying in the first or higher
resonance region are described in terms of the transition form factors corresponding to the nucleon to resonance transition. On the other hand, if the energy transfer($\nu$) and the four momentum transfer square($Q^2$) are large, the inclusive cross sections
are expressed in terms of the structure functions corresponding to the deep inelastic scattering
process from the quarks and gluons in the nucleon. In the intermediate energy region corresponding to the
transition between resonance excitations and DIS, we are yet to find a method best suited
to describe the inclusive charged lepton or (anti)neutrino scattering processes. Using the
kinematical cuts in the $Q^2 - \nu$ plane (Figure \ref{fig3}), one may identify the regions of elastic scattering
($W = M$), inelastic scattering~($M \le W \le 2$~GeV), deep inelastic scattering~($Q^2 > 1~$GeV$^2$, $W > 2$~GeV) and soft DIS~($Q^2 < 1~$GeV$^2$, $W > 2$~GeV).
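As a quick numerical illustration of these cuts, the short script below evaluates $W$, $x$ and $y$ for a sample kinematic point and classifies it according to the indicative boundaries quoted above. The nucleon and pion masses are rounded values assumed for the sketch; the thresholds follow Figure \ref{fig3}.

```python
import math

M_N = 0.938   # nucleon mass [GeV], rounded
M_PI = 0.140  # pion mass [GeV], rounded

def kinematics(E_nu, E_l, Q2):
    """Return (W, x, y) in GeV units.
    W^2 = M^2 + 2 M nu - Q^2,  x = Q^2 / (2 M nu),  y = nu / E_nu."""
    nu = E_nu - E_l                      # energy transfer
    W2 = M_N**2 + 2.0 * M_N * nu - Q2
    x = Q2 / (2.0 * M_N * nu)
    y = nu / E_nu
    return math.sqrt(max(W2, 0.0)), x, y

def region(W, Q2):
    """Classify a point in the (Q^2, nu) plane using the cuts quoted in the text."""
    if W < M_N + M_PI:
        return "below pion threshold (elastic/QE band)"
    if W <= 2.0:
        return "SIS"                     # M + m_pi <= W <= 2 GeV
    return "DIS" if Q2 >= 1.0 else "soft DIS"

W, x, y = kinematics(E_nu=3.0, E_l=1.5, Q2=1.2)
print(region(W, 1.2), round(W, 2), round(x, 2), round(y, 2))
```

For $E_\nu = 3$~GeV, $E_l = 1.5$~GeV and $Q^2 = 1.2$~GeV$^2$, the point falls in the SIS band, consistent with the left panel of Figure \ref{fig3}.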
In the quasielastic region, for the scattering of (anti)neutrinos from nucleons, the nucleons keep their identity intact, except in the $\Delta S=1$ reaction where a nucleon is converted into a hyperon~(in the case of $\bar\nu$ only). In the inelastic region the scattering leads to the excitation of various resonances.
The resonance excitation of the nucleon includes isospin
$I=\frac{1}{2}$~resonances like $N^*(1440)$, $N^*(1520)$, $N^*(1535)$, etc., and isospin $I=\frac{3}{2}$~resonances like $\Delta(1232)$, $\Delta^*(1600)$, $\Delta^*(1700)$, etc., together with a non-resonant continuum. The decays of these resonances lead predominantly to single pion i.e. $\pi N$ state, and also to other final states like $\gamma N$, $\eta N$, $K Y$, $\pi \pi N$, $\rho N$, etc.
The shallow inelastic scattering~(SIS) region covers resonance excitation of the nucleon which, together with the non-resonant continuum, leads predominantly to the final states mentioned above.
In Figure \ref{fig3}, we show the importance of the different kinematic regions relevant for the QE, IE and DIS scattering corresponding to the two neutrino energies viz. $E_\nu$ = 3 GeV (left panel) and E$_\nu$ = 10 GeV (right panel).
It can be observed from the figures that as one moves to the higher $\nu$ and $Q^2$ regions, the DIS becomes the most dominant
process in the
neutrino interactions where (anti)neutrino interacts with the
quarks and gluonic degrees of freedom in the nucleons~\cite{SajjadAthar:2020nvy}. The DIS process, in this kinematic region, is described using perturbative QCD. However, at present there is no sharp kinematic boundary on $\nu$ and $Q^2$ for the onset of deep inelastic scattering in the literature. Generally, $Q^2 > 1~$GeV$^2$ is chosen for the onset of DIS.
A kinematic
constraint of $W >$ 2 GeV is also applied to safely describe the DIS region. However, in the
kinematic region of $Q^2 < 1~$GeV$^2$, nonperturbative QCD effects must be taken into serious
consideration. In this region, which is also known as the transition region, it is expected that
the principle of
quark–hadron duality can be used to obtain the neutrino cross sections. However, not much work has been done, either theoretically or experimentally, to understand neutrino cross sections using quark–hadron duality. This issue has been raised recently in the Snowmass~\cite{snowmass} and NuSTEC~\cite{NuSTEC:2019lqd} meetings.
In Figure \ref{zeller}, we show the relative importance of the above processes(QE, IE and DIS) through the energy dependence
of their cross sections~\cite{Lipari:1994pz}.
This figure depicts the total scattering cross section per nucleon per unit energy of the incoming particles vs.
neutrino~(left panel) and antineutrino~(right panel) energy in the charged current induced process. The dashed line,
dashed-dotted and dotted lines represent the contributions from the quasielastic scattering, inelastic resonance~(RES) and deep
inelastic scattering~(DIS) processes, respectively. The sum of all the scattering cross sections~(TOTAL) is shown by the solid
line~\cite{Lipari:1994pz}. The experimental results include data from the older experiments~(ANL and BNL) as well as from experiments performed
recently using (anti)neutrino beams. It may be realized that the experimental error bars are large and precise measurements are needed. In all the present generation neutrino experiments, nuclear targets like $^{12}$C, $^{16}$O, $^{40}$Ar, $^{56}$Fe, $^{208}$Pb, etc. are being used, and the
interactions take place with the nucleons that are bound inside the nucleus, where nuclear medium effects become important.
These neutrino experiments are measuring (anti)neutrino events that are a convolution of
\begin{itemize}
\item [(i)] energy-dependent neutrino flux and
\item [(ii)] energy-dependent neutrino-nucleon cross section.
\end{itemize}
Therefore, it is highly desirable to
understand the energy dependence of the neutrino-nucleon cross sections. In the context of the present neutrino oscillation experiments using the nuclear targets, it is of great importance to understand the energy dependence of nuclear medium effects.
Especially in the precision era of neutrino oscillation experiments,
to achieve an accuracy of a few percent~(2-3\%) in the systematics, a good understanding of neutrino-nucleon and neutrino-nucleus cross sections is required. Presently, due to the lack of understanding of these cross sections, an uncertainty of 25--30\% in the systematics arises. The study of neutrino interactions with matter is important not only for the understanding of neutrino physics but also for gaining a better insight into hadronic interactions in the weak sector, where there is an additional contribution of the axial vector current besides the vector current.
In the case of the QE process, the general considerations of nuclear medium effects~(NME) include Fermi motion, Pauli blocking and multinucleon
correlation effects. In the case of single pion production, one considers Fermi motion and Pauli blocking of the nucleon as well as the modification of the properties of the various excited resonances, especially their masses and widths, in the medium. However, these modifications are well studied only in the case of the $\Delta$ resonance. In addition, the pion produced in the decay of these resonances
undergoes final state interactions with the residual nucleus, where charge
exchange~(like $\pi^- p \rightarrow \pi^0 n$), modulation of its energy and momentum, or pion absorption~($\pi NN \rightarrow NN$) may take place. If the produced pion is absorbed, the event mimics a quasielastic-like event. In the case of DIS, shadowing and antishadowing corrections become important in the region of low $x$, the Bjorken variable. In the intermediate region of $x$, the mesonic contributions become important, where the intermediate vector boson~($W, Z$) interacts with the virtual mesons in the nucleus, and in the region of high $x$, Fermi motion effects are important.
The consideration of different NME is model dependent and there is no consensus on any particular nuclear model~\cite{NuSTEC:2017hzk,Katori:2016yel}.
In order to understand nuclear medium effects in (anti)neutrino-nucleus scattering, more data with better precision are needed.
\begin{figure}[h]
\begin{center}
\includegraphics[width=5.4cm,height=8.5cm]{zeller.eps}
\includegraphics[width=5.4cm,height=8.5cm]{zeller_anti.eps}
\end{center}
\caption{Charged current induced total scattering cross section per nucleon per unit energy of the incoming particles vs.
neutrino~(left panel) and antineutrino~(right panel) energy for all the three processes labelled on the curve along with the total scattering cross sections. Dashed line shows the contribution from the quasielastic~(QE)
scattering while the dashed-dotted and dotted lines represent the contributions from the inelastic resonance~(RES) and deep
inelastic scattering~(DIS), respectively. The sum of all the scattering cross sections~(TOTAL) is shown by the solid
line~\cite{Lipari:1994pz}. We have also mentioned the energy region of various experiments.}
\label{zeller}
\end{figure}
This volume is devoted to the study of neutrino interactions with nucleons and nuclei in the region of intermediate and high energies, and comprises seventeen articles discussing quasielastic, one pion production, other inelastic processes, and the
deep inelastic scattering of (anti)neutrino from the nucleons and nuclei. All the contributing articles are arranged
according to the following aspects of their contents:
\begin{itemize}
\item [(i)] Experimental,
\item [(ii)] Theoretical, and
\item [(iii)] Phenomenological.
\end{itemize}
To give a general historical understanding of neutrino experiments starting from Gargamelle to MINERvA, a bird's eye view has been lucidly illustrated by Morfin~\cite{Morfin},
where he summarizes various attempts which have been made for exploring the structure of the nucleon with neutrinos. In the next five articles, the current status and results of some important experiments being performed in the few GeV energy region are discussed like the efforts of
MINERvA, NOvA, MicroBooNE, ArgoNeuT and
neutrino interaction physics in neutrino telescopes. MINERvA@Fermilab took data using the (Anti)Neutrinos at the Main Injector~(NuMI) beamline from 2009 to 2019 in the Low-Energy and Medium-Energy ranges, which peak at 3 GeV
and 6 GeV, respectively, using several nuclear targets~(carbon in scintillator, oxygen in water, iron, and lead), with the aim of understanding nuclear medium effects over a wide range of Bjorken $x$ and $Q^2$. Xianguo Lu et al.~\cite{MINERvA}, on behalf of the MINERvA collaboration, have presented the latest
results of the differential and total scattering cross sections for the inclusive, quasielastic, inelastic one pion, single kaon, etc. processes, and highlight their salient observations. NOvA@Fermilab has been collecting data in the NuMI
neutrino beam since 2014 and the expectation is that it will continue till 2026. Shanahan and Vahle~\cite{NOvA}, on behalf of the NOvA
collaboration have presented experimental results in 3 flavor neutrino oscillation scenario as well as the results of the differential scattering cross sections for the inclusive channel and for the coherent pion production processes. The MicroBooNE and ArgoNeuT Liquid Argon
Time Projection Chambers(LArTPC)@Fermilab have collected data in the NuMI and Booster Neutrino Beams, respectively.
The neutrino interaction measurements of these experiments are presented by Duffy et al.~\cite{MicroBooNE} for charged-current $\nu_\mu$ scattering in
the inclusive channel, the $0\pi$ channel (in which no pions but some number of protons
may be produced), and single pion production (including production of both charged
and neutral pions); measurements of inclusive scattering cross sections for $\nu_e + \bar\nu_e$ interactions are also presented.
Katori et al.~\cite{IceCube} have discussed neutrino interaction physics in neutrino telescopes, where interactions are detected via the Cherenkov radiation emitted by charged secondaries, and have discussed in detail the largest neutrino telescope in operation to date, the IceCube Neutrino Observatory.
These articles are followed by ten articles dealing with various aspects of theoretical developments in the elastic, quasielastic, inelastic and the deep inelastic scattering of (anti)neutrinos from the nucleons and nuclei. Benhar~\cite{Benhar}
has elucidated the problems and uncertainties involved in evaluating the cross sections with nuclear medium effects, and also deals with the theoretical understanding required to unravel the flux-averaged neutrino-nucleus cross section by discussing in detail quasielastic scattering and one pion production, and, briefly, the deep inelastic scattering process.
Nuclear model dependence in quasielastic scattering has been discussed by Amaro et al.~\cite{Amaro} where they explicitly describe the neutrino-nucleus scattering cross section using superscaling (SuSA) approach by considering one- and two- body currents and showing first that
the model explains the electron scattering data well, and then applying it to weak interaction induced processes. Jackowicz and Nikolakopoulos~\cite{Jackowicz} have studied nuclear medium effects in quasielastic neutrino-nucleus scattering using a nuclear mean field and the random phase approximation, and highlighted the differences between neutrino- and
antineutrino-induced reactions. Martini et al.~\cite{Martini} have described the neutrino-nucleus scattering cross section for CCQE process using response functions as well as spectral functions, and highlighted
the model dependence of multinucleon correlation effects in the different models. Alvarez-Ruso et al.~\cite{Alvarez-Ruso} have discussed neutrino interactions with matter with the particular emphasis on the
MiniBooNE anomaly. Fatima et al.~\cite{Fatima} have pointed out the importance of $\bar{\nu}_{\mu}$ induced quasielastic production of hyperons leading to pions(the reaction which is forbidden for the neutrino induced processes due to $\Delta S=\Delta Q$ rule). The effects of the second class currents in the axial vector sector with and without T-invariance as well as the effect of SU(3) symmetry breaking have also been discussed. Paschos~\cite{Paschos} has discussed a model for the flavor changing neutral current of leptons.
Neutrino-nucleon reactions in the resonance region have been studied by Sato~\cite{Sato} using the dynamical coupled channel~(DCC) model with full unitarity restored. The cross sections of the charged current neutrino reaction are examined to analyze the
mechanism of the neutrino induced meson production reaction and possible ways to
test the model of the axial vector current contribution. In the case of deep inelastic scattering, the evolution of the electroweak structure functions of nucleons has been studied by Reno~\cite{Reno}
in the context of muon and tau neutrino and antineutrino scattering. Ansari et al.~\cite{Ansari} in their review article have discussed the effect of the nonperturbative corrections such as the
target mass correction and higher twist effects, perturbative evolution of the parton densities, nuclear medium modifications of the nucleon structure functions, nuclear isoscalarity corrections on the weak nuclear structure functions in the (anti)neutrino-nucleus scattering in the DIS region. The numerical results for the structure functions and the cross sections are compared with some of the available experimental data including the recent results from MINERvA. The predictions are made in argon nuclear target which is planned to be used as a target material in DUNE at the Fermilab.
Neutrino event Monte Carlo(MC) generators play an important role in the design, optimization, and execution of neutrino oscillation
experiments and the two
most widely used neutrino event generators in the present experimental physics community are GENIE and NEUT which predict the neutrino event rates by using the various inputs including the (anti)neutrino-nucleus scattering cross sections and the final state interactions
of the produced hadrons in the nucleus. In this volume, the main features of these two generators are elaborated separately. GENIE, as explained by Alvarez-Ruso et al.~\cite{GENIE}, with its gradual evolution and adaptability, is expected to become a standard tool, forming an
indispensable part of many experiments, and has been widely tested against neutrino cross-section data. Important features of the NEUT MC generator have been illustrated by Hayato et al.~\cite{NEUT}. It can be used to
simulate interactions for neutrinos between 100 MeV and a few TeV of energy, and is also
capable of simulating hadron interactions within a nucleus.
We are thankful to all the authors who have contributed to this volume. Their efforts are truly commendable, and we hope that this volume will be helpful to students as well as junior and senior researchers in the field of neutrino physics, and will stimulate new ideas and investigations. Special thanks are due to B. Ananthanarayan, member of the editorial board of EPJ-ST, who invited us to bring out this topical volume. The help and cooperation of the editorial team of EPJ-ST is duly acknowledged.
\section{Analysis of Metabolomic Aging Clocks}
\label{sec:age-metabolite-analysis}
In this section, we demonstrate our method in an analysis of the effect of age on small molecules, called ``metabolites'', in humans. The metabolome consists of the structural and functional building blocks of an organism; it bridges genotype and phenotype and plays an important role in studies of aging and age-related traits \citep{hwangbo2021aging}. We utilize the targeted metabolomics dataset from \citet{hwangbo2021aging} to investigate how aging affects the concentration of metabolites in cerebrospinal fluid (CSF). We focus on the analysis of 39 targeted metabolites measured for 85 individuals, with ages ranging from 20 to 86 years at the time of healthy sample collection. We follow the data pre-processing used by \citet{hwangbo2021aging} and use the R package $\texttt{Amelia}$ to impute the missing values. We let $T$ denote age and $Y = (Y_1, \cdots, Y_{39})$ be the measured concentrations of the targeted metabolites in CSF. Our estimand of interest is the biological effect of a one year increase in age on each metabolite.
Inferring the biological effect of aging on the metabolome can be complicated by experimental confounders and other sources of unwanted biological variation \citep{livera2015statistical}. For example, in their original analysis \citet{hwangbo2021aging} did not take into account the time between sample collection and the mass spectrometry-based assay used to measure metabolite abundances. Some studies indicate that long-term storage may lead to significant decay among many metabolites \citep{haid2018long}. Unfortunately, samples from all subjects under 40 years of age were collected in the first 3 years of the study, whereas samples from older individuals were collected nearly uniformly over an 8 year period.
For simplicity, we model the outcome model with a linear regression, although other flexible outcome models like $\texttt{BART}$ are also applicable \citep{chipman2010bart}. With the linear assumption, our estimand reduces to the regression coefficient, $\tau_j$, which is invariant to the specific level of $t_1$ and $t_2$, i.e., our estimand is
\begin{equation}
\label{eqn:estimand-age-metabolite}
\text{PATE}_{e_j} = \tau_j
\end{equation}
for all $j = 1, \cdots, q$, and all $t_1, t_2$ such that $t_1 - t_2 = 1$. We rescale all outcomes to unit variance and regress the scaled outcomes on age to estimate $\tau^{\text{naive}}_j$ under the assumption of no unobserved confounding, and we apply factor analysis, using the R function $\texttt{factanal}$, to the residual outcomes. We use cross validation to select the latent confounder dimension $m = 3$. First, we compute the ignorance regions of the treatment effects for all metabolites assuming $R^2_{T \sim U} \leq 95\%$, without incorporating any null or non-null control assumptions. All of the resulting ignorance regions contain zero, which suggests that the treatment effects are sensitive to unobserved confounding (see Figure \ref{fig:meta_ig_all}).
In Table \ref{table:rv_meta} of the Appendix, we report robustness values for each metabolite, with a median of 22\% and a maximum of 87\%, attained by glycerol 3-phosphate.
In the following, we show that, by making use of the null or non-null control outcomes, we can further shrink the ignorance regions and make some of the metabolites with significant treatment effects distinguishable from the others.
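The pipeline just described can be sketched numerically. The block below uses synthetic stand-ins for the CSF data, replaces $\texttt{factanal}$ with a simple principal-factor approximation, and computes ignorance-region half-widths from the bias bound $\frac{1}{\sigma_t}\sqrt{R^2/(1-R^2)}\,\|\Gamma_j\|$ used later in the paper; all data-generating constants here are illustrative assumptions, not estimates from the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, m = 85, 39, 3                      # subjects, metabolites, latent dim (as in the text)

# Synthetic stand-in for the CSF data: age T is confounded with latent U.
U = rng.normal(size=(n, m))
T = 50 + 10 * U[:, 0] + rng.normal(scale=10, size=n)
Gamma_true = rng.normal(scale=0.5, size=(q, m))
Y = np.outer(T, rng.normal(scale=0.02, size=q)) + U @ Gamma_true.T + rng.normal(size=(n, q))
Y = (Y - Y.mean(0)) / Y.std(0)           # rescale all outcomes to unit variance

# NUC estimate: regress each scaled outcome on age.
Tc = T - T.mean()
tau_naive = (Tc @ Y) / (Tc @ Tc)
sigma_t = T.std()

# Factor-analyze the residual outcomes (principal-factor approximation).
resid = Y - np.outer(Tc, tau_naive)
S = np.cov(resid, rowvar=False)
vals, vecs = np.linalg.eigh(S)
top = np.argsort(vals)[::-1][:m]
Gamma = vecs[:, top] * np.sqrt(vals[top])  # (q, m) estimated loadings

# Ignorance-region half-widths at R^2_{T ~ U} <= 0.95.
R2 = 0.95
half_width = np.sqrt(R2 / (1 - R2)) * np.linalg.norm(Gamma, axis=1) / sigma_t
print(int(np.sum(np.abs(tau_naive) <= half_width)), "of", q, "regions contain zero")
```

At $R^2_{T\sim U} = 0.95$ the multiplier $\sqrt{R^2/(1-R^2)} \approx 4.4$ makes the half-widths large, which is why most regions cover zero in this regime.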
\vspace*{12pt}
\noindent \textbf{Calibration with null controls.} Following the discussion in Section \ref{sec:cali_wnco}, we demonstrate the validity of our calibration method with null control outcomes.
To demonstrate our approach, we use sorbitol, a sugar substitute, as a null control, as it has a reduced tendency to increase the sugar level in the blood and is used by diabetes patients and elderly individuals \citep{gabbay1973sorbitol} \footnote{We do not claim that sorbitol }.
We plot the ignorance regions before and after the calibration with the null control, ordered by the extent to which they are influenced by the calibration (Figure \ref{fig:cali_wnco}).
The ignorance region is constructed using 95\% confidence interval for $\tau^{\text{naive}}$
in order to account for the estimation uncertainty of the observed data distribution.
With the additional information about confounders provided by the null control, the ignorance region is largely reduced for treatment effects of the first few metabolites on the left in plot \ref{fig:cali_wnco}. Also, the estimate under the nonconfoundedness assumption (i.e. $R^2_{T \sim U} = 0$) for each metabolite changes after taking the null control outcome into account. Notably, we find that, after adjusting sorbitol to have a zero treatment effect, the effect of age on
alpha-ketoisovaleric acid
becomes robustly negative at level $R^2_{T \sim U} = 95\%$.
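The mechanics of this calibration can be illustrated with a toy computation. The sketch below is not the paper's estimator: it uses synthetic loadings and naive estimates, a hypothetical null-control index \texttt{j0}, and simple rejection sampling over confounding directions, keeping only those directions whose implied bias fully explains the null control's naive effect; the surviving directions then bound the bias for every other outcome.

```python
import numpy as np

rng = np.random.default_rng(3)
q, m = 6, 3
Gamma = rng.normal(size=(q, m))            # stand-in for estimated loadings
tau_naive = rng.normal(scale=0.3, size=q)  # stand-in for NUC estimates
sigma_t, R2 = 1.0, 0.95
j0 = 0                                     # index of the assumed null control (e.g. sorbitol)

scale = np.sqrt(R2 / (1 - R2)) / sigma_t   # bias per unit confounding direction

# Sample unit confounding directions; keep those whose implied bias matches
# the null control's naive effect (its true effect is assumed to be zero).
U = rng.normal(size=(100000, m))
U /= np.linalg.norm(U, axis=1, keepdims=True)
ok = np.abs(tau_naive[j0] - scale * U @ Gamma[j0]) < 0.05
B = scale * U[ok] @ Gamma.T                # biases consistent with the null control

lo = tau_naive - B.max(axis=0)             # calibrated ignorance region per outcome
hi = tau_naive - B.min(axis=0)
unconstrained = scale * np.linalg.norm(Gamma, axis=1)
print((hi - lo <= 2 * unconstrained + 1e-9).all())  # True: never wider than before
```

Constraining the direction set can only shrink the regions, mirroring the effect seen for the metabolites after the sorbitol adjustment.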
\begin{figure}
\centering
\includegraphics[width = \textwidth]{figs/metabolite_robustness.pdf}
\caption{Robustness values for metabolites, before and after considering sorbitol as a null control outcome. Reported values are the lower end point of the 95\% posterior credible region for the robustness value. Note that while sorbitol is significant under NUC, the lower end point of its robustness value is only $RV = 0.02$, which suggests that it is sensitive to confounding. Despite this small value, after enforcing the null control assumption, the robustness values for all other metabolites increase, often by much more than 0.02.}
\label{fig:cali_wnco}
\end{figure}
Latent mediators include factors related to diet, exercise and lifestyle. The metabolome is known to be particularly sensitive to diet
\section{Theory}
\subsection{Proof of Theorem \ref{thm:gamma-identifiability}}
\noindent \textbf{Theorem \ref{thm:gamma-identifiability}.}
\begin{itshape}
Assume model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y} where $\Gamma$ has some prespecified rank, $m$. If there remain two disjoint matrices of rank $m$ after deleting any row of $\Gamma$, then $\psi_Y$ is identifiable up to the causal equivalence class $[\psi_Y] = \{\Gamma \theta : \theta\theta'=I\}$ and we can assume $\Sigma_{u \mid x}=I_m$ without loss of generality. If the factor model for the residuals given the treatment and observed covariates is not identifiable up to rotations of $\Gamma$, then $\psi_Y$ is not identifiable up to a causal equivalence class.
\end{itshape}
\begin{proof}
For $\tilde u = h(u)$, $f_{\psi_T}(t\mid u, x)$ is equal to $f_{\tilde \psi_T}(t\mid \tilde u, x)$ when $f(\tilde u \mid t,x)$ is proportional to $f(u \mid t,x)$. This is automatically satisfied for invertible transformation $h$ since the Jacobian determinant is constant.
From \citet{anderson1956statistical}, if there remain two disjoint matrices of rank $m$ after deleting any row of $\Gamma$, then $\Gamma\Gamma'$ and $\Lambda_{y| t,u,x}$ are identified from $\Sigma_{y\mid t,x} =\Gamma\Gamma' + \Lambda_{y| t,u,x}$, and $\Gamma$ is identified up to rotations. Under model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y}, the X-specific average causal effect is
\begin{equation}
E_{\psi_T, \psi_Y}[Y\mid do(T=t), X=x] = g(t,x) + \Gamma\Sigma_{u\mid t, x}^{-1/2}\mu_{u\mid x},
\end{equation}
with sensitivity parameters $\psi_T=\{\mu_{u|t,x}, \Sigma_{u|t,x}: t \in \mathcal{T}, x \in \mathcal{X}\}$ and $\psi_Y=\{\Gamma\}$.
Consider $\tilde U$, which is characterized by $\tilde U\mid X = A_XU\mid X$; then we have $E[\tilde U\mid T=t,X=x] = A_x\mu_{u\mid t, x}$, $Cov(\tilde U\mid T=t,X=x) = A_x\Sigma_{u\mid t, x}A_x^T$. Denote $\Sigma_{u\mid x}^{1/2}=B_x$, $\Sigma_{u|t,x}^{1/2} = C_x$. For any $x$, the invertible linear transformation $A_x = \tilde{\theta}_x^T B_x^{-1}$ with arbitrary orthogonal matrix $\tilde{\theta}_x$, makes $\tilde{\Sigma}_{u \mid x} := \text{Cov}(\tilde U\mid x) =A_x \Sigma_{u\mid x} A_x^{T} = I_m$. For any $\tilde{\psi}_Y=\{\tilde{\Gamma}\}=\{\Gamma\theta\} \in [\psi_Y]$, let $\tilde{\psi}_T = \{\tilde{\mu}_{u|t,x}, \tilde{\Sigma}_{u|t,x}: t \in \mathcal{T}, x \in \mathcal{X}\} = \{A_x\mu_{u|t,x}, A_x\Sigma_{u|t,x}A_x^{T}: t \in \mathcal{T}, x \in \mathcal{X}\}$. Then the X-specific average causal effect can be alternatively expressed as
\begin{equation}
E_{\tilde \psi_{T}, \tilde \psi_{Y}}[Y\mid do(T=t), X=x] = g(t,x) + \tilde{\Gamma}\tilde{\Sigma}_{u\mid t, x}^{-1/2}\tilde{\mu}_{u\mid x}
\end{equation}
For each $x$, we need to find $\tilde{\theta}_x$ such that $\tilde{\Gamma}\tilde{\Sigma}_{u\mid t,x}^{-1/2}\tilde{\mu}_{u\mid x} = \Gamma\Sigma_{u\mid t,x}^{-1/2}\mu_{u\mid x}$. This equation is equivalent to the polar decomposition of $\tilde{\theta}_x^T B_x^{-1} C_x$:
\begin{equation}
\tilde{\theta}_x^T B_x^{-1} C_x = (\tilde{\theta}_x^T B_x^{-1} C_xC_x B_x^{-1} \tilde{\theta}_x)^{1/2} \theta^T.
\end{equation}
Suppose the singular value decomposition of $B_x^{-1} C_x$ is $B_x^{-1} C_x =WDV^T$; then $\tilde{\theta}_x^T B_x^{-1} C_x = \tilde{\theta}_x^T WDV^T = W_1D V^T$, with $W_1 = \tilde{\theta}_x^T W$, is the singular value decomposition of $\tilde{\theta}_x^T B_x^{-1} C_x$. Since $\tilde{\theta}_x^T B_x^{-1} C_x$ is invertible, the polar decomposition can be uniquely written as $\tilde{\theta}_x^T B_x^{-1} C_x = W_1D W_1^T W_1V^T$, where $W_1D W_1^T =(\tilde{\theta}_x^T B_x^{-1} C_xC_x B_x^{-1} \tilde{\theta}_x)^{1/2}$ and $W_1V^T$ is the rotation matrix. We just need to make $W_1V^T = \theta^T$, which implies $\tilde{\theta}_x = WV^T\theta$. This $\tilde{\theta}_x$ makes $E_{\tilde \psi_{T}, \tilde \psi_{Y}}[Y\mid do(T=t), X=x] = E_{\psi_{T}, \psi_{Y}}[Y\mid do(T=t), X=x]$. The observed data distributions $f(t,x)$ and $f(y\mid t,x)$ are invariant under these sensitivity parameters. Taking the expectation with respect to $X$, we obtain $E_{\tilde \psi_{T}, \tilde \psi_{Y}}[Y\mid do(T=t)] = E_{\psi_{T}, \psi_{Y}}[Y\mid do(T=t)]$. Therefore, $[\psi_Y]$ is a causal equivalence class and we can assume $\Sigma_{u \mid x}=I_m$ without loss of generality.
\end{proof}
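As a numerical sanity check on this construction, the following sketch draws random well-conditioned covariances and a random rotation $\theta$, builds $\tilde\theta_x = W V^T \theta$ from the SVD $B_x^{-1}C_x = WDV^T$, and verifies both that the X-specific causal quantity is unchanged and that $\text{Cov}(\tilde U \mid x) = I_m$. The matrix sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, q = 3, 6

def spd_sqrt(S):
    """Symmetric positive-definite square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

def rand_spd(m):
    A = rng.normal(size=(m, m))
    return A @ A.T + m * np.eye(m)       # well-conditioned SPD matrix

def rand_orth(m):
    Q, _ = np.linalg.qr(rng.normal(size=(m, m)))
    return Q

Gamma = rng.normal(size=(q, m))
mu = rng.normal(size=m)
Sigma_ux = rand_spd(m)                   # Cov(U | x)
Sigma_utx = rand_spd(m)                  # Cov(U | t, x)
theta = rand_orth(m)                     # rotation defining Gamma_tilde = Gamma @ theta

B = spd_sqrt(Sigma_ux)
C = spd_sqrt(Sigma_utx)
W, D, Vt = np.linalg.svd(np.linalg.inv(B) @ C)   # B^{-1} C = W diag(D) V^T
theta_tilde = W @ Vt @ theta                      # the rotation built in the proof
A = theta_tilde.T @ np.linalg.inv(B)              # transformation U_tilde = A U

lhs = (Gamma @ theta) @ np.linalg.inv(spd_sqrt(A @ Sigma_utx @ A.T)) @ (A @ mu)
rhs = Gamma @ np.linalg.inv(C) @ mu
print(np.allclose(lhs, rhs))                      # True: causal quantity unchanged
print(np.allclose(A @ Sigma_ux @ A.T, np.eye(m))) # True: Cov(U_tilde | x) = I
```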
\subsection{Proof of Theorem \ref{thm:ignorance_region_woNC,multi-y} and Corollary \ref{cor:ignorance_region_global_woNC,multi-y}}
\subsubsection*{Proof of Theorem \ref{thm:ignorance_region_woNC,multi-y}}
\noindent \textbf{Theorem \ref{thm:ignorance_region_woNC,multi-y}.}
\begin{itshape}
Suppose that the observed data is generated by model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y}. Then, for all $\beta$ of length $\|\beta\|_2 = \sigma_t\sqrt{R_{T \sim U}^2}$ with $0 \leq R_{T \sim U}^2 < 1$, the confounding bias of outcome $a'Y$ is bounded by
\begin{equation}
\text{Bias}_{a}^2 \leq \frac{1}{\sigma_t^2} \frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2} \parallel a' \Gamma \parallel_2^2,
\end{equation}
where the bound is achieved when $\beta$ is collinear with $\Gamma' a$.
\end{itshape}
\begin{proof}
The sensitivity parameter $\beta$ can be reparameterized in terms of length $d^\beta$ and direction $u^\beta$:
\begin{equation}
\beta = d^\beta u^\beta,
\end{equation}
where $d^\beta = \sigma_t \sqrt{R_{T \sim U}^2}$ and $u^\beta \in \mathcal{C}^{m-1}$ is an $m$-dimensional unit vector.
Then we can write the eigendecomposition of matrix $I_m - \frac{\beta \beta' }{\sigma_t^2}$ as
\begin{align}
I_m - \frac{\beta \beta' }{\sigma_t^2} &= U
\begin{bmatrix}
1-(\frac{d^\beta}{\sigma_t})^2 & & &\\
& 1 & &\\
& & \ddots & \\
& & & 1 \\
\end{bmatrix}
U^T,
\end{align}
where $U$ is an orthogonal matrix with the first column as $u^\beta$.
Thus, $\text{Bias}_{a}$ can be simplified as
\begin{align}
|\text{Bias}_{a}| &= |\frac{1}{\sigma_t^2} a' \Gamma (I_m - \frac{\beta \beta' }{\sigma_t^2})^{-1/2} \beta| \\
&= |\frac{d^\beta}{\sigma_t \sqrt{\sigma_t^2 - (d^\beta)^2}} a' \Gamma u^\beta|\\
&= \frac{1}{\sigma_t} \sqrt{\frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2}} |a' \Gamma u^\beta| \label{eqn:bias_deltat} \\
&\leq \frac{1}{\sigma_t} \sqrt{\frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2}} \parallel a' \Gamma \parallel_2 ,
\end{align}
where the bound is reached when $u^\beta$ is collinear with $ \Gamma'a$, i.e., $\beta$ is collinear with $\Gamma'a$.
\end{proof}
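The bound above is straightforward to check numerically. The following sketch (not part of the paper's code; all numerical values are stand-ins) draws a random loading matrix $\Gamma$ and verifies that, over unit directions $u^\beta$, the bias is never larger than the closed-form bound and is maximized when $u^\beta$ is collinear with $\Gamma'a$:

```python
import numpy as np

# Numerical check: |Bias_a| = (1/sigma_t) sqrt(R2/(1-R2)) |a' Gamma u|,
# maximized over unit directions u when u is collinear with Gamma' a.
rng = np.random.default_rng(0)
m, q = 3, 5
Gamma = rng.normal(size=(q, m))        # stand-in loading matrix
a = rng.normal(size=q)                 # stand-in contrast vector
sigma_t, R2 = 1.5, 0.4
d = sigma_t * np.sqrt(R2)              # ||beta||_2

def bias(u):
    beta = d * u
    Sigma_u = np.eye(m) - np.outer(beta, beta) / sigma_t**2
    w, V = np.linalg.eigh(Sigma_u)
    inv_sqrt = (V * w**-0.5) @ V.T     # Sigma_u^{-1/2}
    return (a @ Gamma @ inv_sqrt @ beta) / sigma_t**2

bound = np.sqrt(R2 / (1 - R2)) / sigma_t * np.linalg.norm(a @ Gamma)
u_star = Gamma.T @ a / np.linalg.norm(Gamma.T @ a)
assert np.isclose(abs(bias(u_star)), bound)      # collinear beta attains it
for _ in range(300):
    u = rng.normal(size=m); u /= np.linalg.norm(u)
    assert abs(bias(u)) <= bound + 1e-10         # never exceeded
```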
\subsubsection*{Proof of Corollary \ref{cor:ignorance_region_global_woNC,multi-y}}
\noindent \textbf{Corollary \ref{cor:ignorance_region_global_woNC,multi-y}.}
\begin{itshape}
Let $d_1$ be the largest singular value of $\Gamma$. For all unit vectors $a$, the confounding bias is bounded by
\begin{equation}
\text{Bias}_{a}^2 \leq \frac{d_1^2}{\sigma_t^2} \frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2},
\end{equation}
with equality when $a = u_1^{\Gamma}$, the first left singular vector of $\Gamma$, and when $\beta$ is collinear with $v_1^{\Gamma}$, the first right singular vector of $\Gamma$. When $a \in \text{Null}(\Gamma')$, the NUC estimate is unbiased, that is, $a'\check{\tau} = a'\tau$.
\end{itshape}
\begin{proof}
From Equation \eqref{eqn:bias_deltat}, we have
\begin{align}
\text{Bias}_{a} &= \frac{1}{\sigma_t} \sqrt{\frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2}} a' \Gamma u^\beta.
\end{align}
By the variational characterization of singular values (a Rayleigh quotient argument), $a' \Gamma u^\beta$ attains its maximum, $d_1$, the largest singular value of $\Gamma$, when $a = u_1^{\Gamma}$, the first left singular vector of $\Gamma$, and $u^\beta = v_1^{\Gamma}$, the first right singular vector of $\Gamma$.
Thus,
\begin{equation}
\text{Bias}_{a}^2 \leq \frac{d_1^2}{\sigma_t^2} \frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2}.
\end{equation}
\end{proof}
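A quick numerical illustration of the corollary, with a stand-in $\Gamma$ (not from the paper): the supremum of $a'\Gamma u$ over unit vectors is the leading singular value, attained at the leading singular vectors, while any $a$ in the null space of $\Gamma'$ yields zero bias:

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma = rng.normal(size=(6, 3))                  # stand-in loading matrix
U, s, Vt = np.linalg.svd(Gamma, full_matrices=False)
d1, u1, v1 = s[0], U[:, 0], Vt[0]
assert np.isclose(u1 @ Gamma @ v1, d1)           # attained at singular vectors
for _ in range(300):
    a = rng.normal(size=6); a /= np.linalg.norm(a)
    u = rng.normal(size=3); u /= np.linalg.norm(u)
    assert abs(a @ Gamma @ u) <= d1 + 1e-10      # d1 is never exceeded
a_null = np.linalg.svd(Gamma)[0][:, -1]          # a in Null(Gamma')
assert np.allclose(a_null @ Gamma, 0)            # unbiased direction
```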
\subsection{Proof of Theorem \ref{thm:ignorance_region_general,multi-y} and Proposition \ref{cor:conservative}}
\subsubsection*{Proof of Theorem \ref{thm:ignorance_region_general,multi-y}}
\noindent \textbf{Theorem \ref{thm:ignorance_region_general,multi-y}.}
\begin{itshape}
Assume model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y} with $\psi_T$ defined by \eqref{eqn:conditional_u_mean}-\eqref{eqn:conditional_u_cov}. The confounding bias of $\text{PATE}_{a,t_1,t_2}$, $\text{Bias}_{a,t_1,t_2}$ is equal to $\frac{a' \Gamma \Sigma_{u\mid t,x}^{-1/2}\beta}{\sigma_{t\mid x}^2}(t_1 - t_2)$ and it is bounded by
\begin{equation}
\text{Bias}_{a,t_1,t_2}^2
\, \leq \,
\frac{1}{\sigma_{t\mid x}^2} \left(\frac{R_{T \sim U \mid X}^2}{1 - R_{T \sim U \mid X}^2}
\right)\parallel a' \Gamma(t_1-t_2)\parallel_2^2
\, \leq \,
\frac{1}{\sigma_{t\mid x}^2} \left(\frac{R_{T \sim U \mid X}^2}{1 - R_{T \sim U \mid X}^2}
\right)\parallel a' \Sigma_{y\mid t, x}^{1/2}(t_1-t_2)\parallel_2^2
\end{equation}
where $\Sigma_{y\mid t,x} = \text{Cov}(Y\mid T=t, X=x)$ is the identifiable residual covariance of $Y$. The first bound is achieved when $\beta$ is collinear with $a'\Gamma$.
\end{itshape}
\begin{proof}
Under model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y}, we have the following equations
\begin{align}
E[Y\mid T=t,X=x] &= g(t,x) + \Gamma\Sigma_{u\mid t, x}^{-1/2}\mu_{u\mid t,x}, \\
E[Y\mid do(T=t), X=x] &= g(t,x) + \Gamma\Sigma_{u\mid t, x}^{-1/2}\mu_{u\mid x}.
\end{align}
\noindent Note that with \eqref{eqn:conditional_u_mean}-\eqref{eqn:conditional_u_cov}, $\sigma^2_{t\mid x}$ and $\Sigma_{u\mid t, x}$ do not depend on $t$ or $x$. The confounding bias is therefore equal to
\begin{align}
\text{Bias}_{a,t_1,t_2} &= (E[a'Y \mid t_1] - E[a'Y \mid t_2]) - (E[a'Y \mid do(t_1)] - E[a'Y \mid do(t_2)]) \\
&= E_X[a'(\Gamma\Sigma_{u\mid t_1, x}^{-1/2}\mu_{u\mid t_1,x} - \Gamma\Sigma_{u\mid t_2, x}^{-1/2}\mu_{u\mid t_2,x}) - a'(\Gamma\Sigma_{u\mid t_1, x}^{-1/2}\mu_{u\mid x} - \Gamma\Sigma_{u\mid t_2, x}^{-1/2}\mu_{u\mid x})] \\
&= E_X\left[\frac{a'\Gamma \Sigma_{u\mid t, x}^{-1/2} \beta}{\sigma^2_{t\mid x}}(t_1 - t_2)\right] \\
&= \frac{a'\Gamma \Sigma_{u\mid t, x}^{-1/2} \beta}{\sigma^2_{t\mid x}}(t_1 - t_2),
\end{align}
where $\Sigma_{u \mid t,x} = I_m - \frac{\beta\beta^{\prime}}{\sigma_{t\mid x}^{2}}$.
Then, following the proof of Theorem \ref{thm:ignorance_region_woNC,multi-y} with $\sigma_t^2$ replaced by $\sigma_{t\mid x}^{2}$, we obtain
\begin{equation}
\text{Bias}_{a,t_1,t_2}^2
\leq \frac{1}{\sigma_{t\mid x}^2} \left(\frac{R_{T \sim U \mid X}^2}{1 - R_{T \sim U \mid X}^2}
\right)\parallel a' \Gamma(t_1-t_2)\parallel_2^2,
\end{equation}
where the bound is achieved when $\beta$ is collinear with $\Gamma'a$. Noting that $a'\Sigma_{y\mid t, x}a - a'\Gamma\Gamma'a = a'\Lambda_{y| t,u,x}a \geq 0$, the second inequality follows.
\end{proof}
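The final inequality uses only that the residual covariance is positive semi-definite; a minimal numerical check with stand-in matrices (not the paper's values):

```python
import numpy as np

# With Sigma_y = Gamma Gamma' + Lambda and Lambda PSD,
# ||a' Gamma||_2^2 <= a' Sigma_y a = ||a' Sigma_y^{1/2}||_2^2.
rng = np.random.default_rng(2)
q, m = 5, 2
Gamma = rng.normal(size=(q, m))
L = rng.normal(size=(q, q))
Lam = L @ L.T                        # an arbitrary PSD residual covariance
Sigma_y = Gamma @ Gamma.T + Lam
for _ in range(100):
    a = rng.normal(size=q)
    assert np.linalg.norm(a @ Gamma) ** 2 <= a @ Sigma_y @ a + 1e-9
```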
\subsubsection*{Proof of Proposition \ref{cor:conservative}}
\noindent \textbf{Proposition \ref{cor:conservative}.}
\begin{itshape}
Let $U_1 = AU$, where $A$ is an $r \times m$ semi-orthogonal matrix with $0 \le r \le m$. Assume latent ignorability holds given just $U_1$ and $U_1$ satisfies Assumption \ref{asm:potential_conf}. Then, we can rewrite Equation \eqref{eqn:epsilon_y} as $Y = g(T, X) + \Gamma_1 \Sigma_{u_1 \mid t,x}^{-1/2} U_1 + \epsilon_1$, with $Cov(\epsilon_1 \mid t) = M + \Lambda_{y| t,u}$ for a positive semi-definite matrix $M$, and $\epsilon_1$ independent of $U_1$ conditional on $T$ and $X$. Then, the confounding bias of contrast $a'Y$ is still in the intervals defined in Theorem \ref{thm:ignorance_region_general,multi-y}.
\end{itshape}
\begin{proof}
First, we have $\mu_{u_1\mid t,x} = \frac{\beta_1}{\sigma_{t \mid x}^{2}}\left(t-\mu_{t\mid x}\right)$ and $\Sigma_{u_1 \mid t,x} = I_r-\frac{\beta_1 \beta_1^{\prime}}{\sigma_{t\mid x}^{2}}$,
where $\beta_1= A\beta$. Since latent ignorability holds given $U_1 = AU$ alone, $\beta$ must lie in the row space of $A$, so that $A'\beta_1 = A'A\beta = \beta$. Because $A$ is $r \times m$ semi-orthogonal, we then have $\parallel \beta \parallel_2 = \parallel A'\beta_1 \parallel_2 =\parallel \beta_1 \parallel_2$.
Under the assumptions about $U_1$, the conditional expectation of $Y$ can be written as
\begin{equation}
E[Y\mid T=t,X=x] = g(t,x) + \Gamma_1\Sigma_{u_1\mid t, x}^{-1/2}\mu_{u_1\mid t,x} + E[\epsilon_1 \mid t,x].
\end{equation}
And the X-specific average causal effect is equal to
\begin{equation}
E[Y\mid do(T=t), X=x] = g(t,x) + \Gamma_1\Sigma_{u_1\mid t, x}^{-1/2}\mu_{u_1\mid x} + E[\epsilon_1 \mid t,x].
\end{equation}
Then, following the proof of Theorem \ref{thm:ignorance_region_general,multi-y} and writing $\beta_1 = d^{\beta_1} u^{\beta_1}$, the confounding bias of contrast $a'Y$ is equal to
\begin{align}
|\text{Bias}_{a}| &= |\frac{a' \Gamma_1}{\sigma^2_{t\mid x}} (I_r - \frac{\beta_1 \beta_1' }{\sigma^2_{t\mid x}})^{-1/2} \beta_1| \\
&= |\frac{d^{\beta_1}}{\sigma_{t\mid x} \sqrt{\sigma^2_{t\mid x} - (d^{\beta_1})^2}} a' \Gamma_1 u^{\beta_1}|\\
&\leq \frac{1}{\sigma_{t\mid x}} \sqrt{\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}}} \parallel a' \Gamma_1 \parallel_2 \\
&\leq \frac{1}{\sigma_{t\mid x}} \sqrt{\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}}} \parallel a' \Gamma \parallel_2.
\end{align}
The first inequality follows from $d^{\beta_1} = \parallel \beta_1 \parallel_2 = \parallel \beta \parallel_2 =d^{\beta}$ and the Cauchy–Schwarz inequality. The second follows from $\parallel a' \Gamma_1 \parallel_2^2 = a'\Gamma_1\Gamma_1'a \leq a'\Gamma\Gamma'a = \parallel a' \Gamma \parallel_2^2$, since $\Gamma\Gamma' - \Gamma_1\Gamma_1' = M$ is positive semi-definite. Therefore, the confounding bias of contrast $a'Y$ is still in the intervals defined in Theorem \ref{thm:ignorance_region_general,multi-y}.
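The norm identities used above depend only on $A$ being semi-orthogonal; a small check with a stand-in $A$ (our construction, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
r, m = 2, 4
A = np.linalg.qr(rng.normal(size=(m, r)))[0].T   # r x m with A A' = I_r
assert np.allclose(A @ A.T, np.eye(r))
beta_in = A.T @ rng.normal(size=r)               # beta in rowsp(A)
assert np.isclose(np.linalg.norm(A @ beta_in), np.linalg.norm(beta_in))
beta_any = rng.normal(size=m)                    # general beta: only <=
assert np.linalg.norm(A @ beta_any) <= np.linalg.norm(beta_any) + 1e-12
```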
\end{proof}
\subsection{Proof of Proposition \ref{prop:cali_wnco}, Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y} and Corollary \ref{corollary:rv_increase}}
\subsubsection*{Proof of Proposition \ref{prop:cali_wnco}}
\textbf{Proposition \ref{prop:cali_wnco}.}
\begin{itshape}
Suppose there are $c$ null control outcomes, $Y_j$, such that $\tau_j = 0$ for $j \in \mathcal{C}$. Then, $\check{\tau}_{\mathcal{C}}$ must be in the column space of $\Gamma_{\mathcal{C}}$. In addition, the fraction of variation in the treatment due to the confounding is lower bounded by
\begin{equation}
R^2_{T \sim U \mid X} \geq R_{\text{min}}^2 := \frac{\sigma_{t\mid x}^2 \parallel
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2}{1+\sigma_{t \mid x}^2 \parallel
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2},
\end{equation}
where $\Gamma_{\mathcal{C}}^{\dagger}$ denotes the pseudoinverse of $\Gamma_{\mathcal{C}}$.
\end{itshape}
\begin{proof}
Assume there are $c$ null control outcomes, satisfying
\begin{equation}
\label{eqn:nc_constraint,multi-y,proof}
\check{\tau}_{\mathcal{C}} = \frac{1}{\sigma_{t\mid x}^2\sqrt{1 - R^2_{T \sim U \mid X}}} \Gamma_{\mathcal{C}} \beta.
\end{equation}
A solution to the above equation exists if and only if $ Q_{\Gamma_{\mathcal{C}}} \check{\tau}_{\mathcal{C}} = \check{\tau}_{\mathcal{C}}$ holds, where $Q_{\Gamma_{\mathcal{C}}} =\Gamma_{\mathcal{C}}\Gamma_{\mathcal{C}}^{\dagger}$ is the projection matrix onto the column space of $\Gamma_{\mathcal{C}}$. Under this condition, all solutions to Equation \eqref{eqn:nc_constraint,multi-y,proof} can be written as
\begin{equation}
\beta = \sigma_{t\mid x}^2 \sqrt{1 - R^2_{T \sim U \mid X}} \
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}
+ (I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}}) w.
\end{equation}
Since $\beta'\beta = \sigma_{t\mid x}^2 R^2_{T \sim U \mid X}$, $w$ can be any $m \times 1$ vector satisfying
\begin{equation}
\parallel (I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}}) w \parallel_2^2 = \sigma_{t\mid x}^2 R^2_{T \sim U \mid X} - \sigma_{t\mid x}^4(1 - R^2_{T \sim U \mid X})\parallel \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2.
\end{equation}
In addition, since $\parallel (I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}}) w \parallel_2^2 \geq 0$, we obtain
\begin{equation}
R^2_{T \sim U \mid X} \geq R_{\text{min}}^2 := \frac{\sigma_{t\mid x}^2 \parallel
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2}{1+\sigma_{t \mid x}^2 \parallel
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2}.
\end{equation}
\end{proof}
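The lower bound $R^2_{\min}$ is a simple function of $\Gamma_{\mathcal{C}}$ and $\check{\tau}_{\mathcal{C}}$. The following sketch (stand-in values, not the paper's) computes it via the pseudoinverse and confirms that it is attained exactly when $\beta$ lies in the row space of $\Gamma_{\mathcal{C}}$:

```python
import numpy as np

def r2_min(Gamma_C, tau_C, sigma_tx):
    # R^2_min = s^2 ||Gamma_C^+ tau_C||^2 / (1 + s^2 ||Gamma_C^+ tau_C||^2)
    v = np.linalg.pinv(Gamma_C) @ tau_C
    k = sigma_tx**2 * (v @ v)
    return k / (1 + k)

rng = np.random.default_rng(3)
Gamma_C = rng.normal(size=(2, 4))                # two null-control rows
sigma_tx, R2 = 1.0, 0.3
beta = Gamma_C.T @ rng.normal(size=2)            # beta in rowsp(Gamma_C)
beta *= sigma_tx * np.sqrt(R2) / np.linalg.norm(beta)
tau_C = Gamma_C @ beta / (sigma_tx**2 * np.sqrt(1 - R2))
# when beta lies in the row space of Gamma_C, the bound is attained exactly
assert np.isclose(r2_min(Gamma_C, tau_C, sigma_tx), R2)
```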
\subsubsection*{Proof of Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y}}
\textbf{Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y}.}
\begin{itshape}
Suppose there are $c$ known null control outcomes, $Y_j$, such that $\tau_j = 0$ for $j \in \mathcal{C}$. For any value of $R^2_{T \sim U \mid X} \geq R^2_{\min}$, the confounding bias for the treatment effect of outcome $a'Y$ is in the interval
\begin{equation}
\label{eqn:ignorance-region-gaussian-wnc,multi-y}
\text{Bias}_a \in
\left[a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}
\; \pm \;
\parallel
a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp}
\parallel_2
\sqrt{
\frac{1}{\sigma_{t\mid x}^2}\left(
\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}} -
\frac{R^2_{min}}{1 - R^2_{min}}
\right)}
\right],
\end{equation}
where $P_{\Gamma_{\mathcal{C}}}^{\perp} = I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}}$ is the $m \times m$ projection matrix onto the space orthogonal to the row space of $\Gamma_{\mathcal{C}}$.
\end{itshape}
\begin{proof}
According to Proposition \ref{prop:cali_wnco}, the confounding bias of outcome $a'Y$ is
\begin{align}
\text{Bias}_{a} &= \frac{1}{\sigma_{t\mid x}^2\sqrt{1 - R^2_{T \sim U \mid X}}} a'\Gamma\beta \\
&= a'\Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} + \frac{1}{\sigma_{t\mid x}^2\sqrt{1 - R^2_{T \sim U \mid X}}} a'\Gamma (I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}})^2w
\end{align}
Note that we have
\begin{equation}
|a'\Gamma (I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}})^2w| \leq \parallel a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp} \parallel_2 \sqrt{\sigma_{t\mid x}^2 R^2_{T \sim U \mid X} - \sigma_{t\mid x}^4(1 - R^2_{T \sim U \mid X})\parallel \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2},
\end{equation}
with $P_{\Gamma_{\mathcal{C}}}^{\perp}:= I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}}$, and the bound is achieved when $P_{\Gamma_{\mathcal{C}}}^{\perp} w$ is collinear with $P_{\Gamma_{\mathcal{C}}}^{\perp}\Gamma'a$. Combining this with the definition of $R^2_{min}$, we conclude that the confounding bias of $a'Y$ is in the interval
\begin{equation}
a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}
\; \pm \;
\parallel
a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp}
\parallel_2
\sqrt{
\frac{1}{\sigma_{t\mid x}^2}\left(
\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}} -
\frac{R^2_{min}}{1 - R^2_{min}}
\right)}.
\end{equation}
\end{proof}
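The interval endpoints can be computed directly from the quantities appearing in the theorem. A sketch with hypothetical inputs follows; the function name and the stand-in $\Gamma$ and $\check{\tau}_{\mathcal{C}}$ are ours for illustration, not the paper's:

```python
import numpy as np

def nc_interval(a, Gamma, C, tau_C, sigma_tx, R2):
    """Endpoints of the bias interval, given null-control rows C of Gamma,
    their NUC estimates tau_C, and a confounding budget R2 >= R2_min."""
    Gamma_C = Gamma[C]
    pinv = np.linalg.pinv(Gamma_C)
    center = a @ Gamma @ (pinv @ tau_C)          # a' Gamma Gamma_C^+ tau_C
    P_perp = np.eye(Gamma.shape[1]) - pinv @ Gamma_C
    k = sigma_tx**2 * np.sum((pinv @ tau_C) ** 2)
    r2_min = k / (1 + k)
    half = np.linalg.norm(a @ Gamma @ P_perp) * np.sqrt(
        (R2 / (1 - R2) - r2_min / (1 - r2_min)) / sigma_tx**2
    )
    return center - half, center + half

Gamma = np.arange(15.0).reshape(5, 3)            # stand-in loading matrix
a = np.eye(5)[4]                                 # contrast: the last outcome
lo, hi = nc_interval(a, Gamma, C=[0], tau_C=np.array([0.3]),
                     sigma_tx=1.0, R2=0.5)
assert lo < hi
```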
\subsubsection*{Proof of Corollary \ref{corollary:rv_increase}}
\textbf{Corollary \ref{corollary:rv_increase}.}
\begin{itshape}
Under assumptions established in Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y}, null control outcomes increase the robustness value of outcome $a'Y$ to
\begin{equation}
RV_{a,\mathcal{C}}^\Gamma = \frac{\sigma_{t\mid x}^2 w_{\mathcal{C}}}{1 + \sigma_{t\mid x}^2 w_\mathcal{C}}\geq \max(R^2_{min}, RV^{\Gamma}_{a}),
\end{equation}
where $w_{\mathcal{C}} = \frac{(a'(\check{\tau} - \Gamma\Gamma^\dagger_{\mathcal{C}}\check{\tau}_{\mathcal{C}}))^2}{\parallel a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp} \parallel^2_2} + \parallel \Gamma_{\mathcal{C}}^{\dagger}\check{\tau}_{\mathcal{C}}\parallel^2_2$.
\end{itshape}
\begin{proof}
It is reasonable to consider the robustness value only when $\text{nullity}(\Gamma_{\mathcal{C}}) > 0$ and $a'\Gamma \notin \text{rowsp}(\Gamma_{\mathcal{C}})$. Based on Equation \eqref{eqn:ignorance-region-gaussian-wnc,multi-y}, the bounds of the ignorance region of $\text{PATE}_a$ can be written as
\begin{equation}
\label{eqn:ig-bound-nc-gaussian}
a'\check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}
\; \pm \;
\parallel
a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp}
\parallel_2
\sqrt{
\frac{1}{\sigma_{t\mid x}^2}\left(
\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}} -
\frac{R^2_{min}}{1 - R^2_{min}}
\right)}.
\end{equation}
To calculate the minimum amount of confounding needed to change the sign of the treatment effect, we can set Equation \eqref{eqn:ig-bound-nc-gaussian} to zero and solve for $R^2_{T \sim U \mid X}$. Therefore, we have
\begin{align}
\frac{(a'\check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}})^2}{\parallel
a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp}
\parallel_2^2}
&=
\frac{RV_{a,\mathcal{C}}^\Gamma}{\sigma_{t\mid x}^2(1 - RV_{a,\mathcal{C}}^\Gamma)} - \parallel \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2 \\
\Rightarrow \frac{RV_{a,\mathcal{C}}^\Gamma}{\sigma_{t\mid x}^2(1 - RV_{a,\mathcal{C}}^\Gamma)}
&= \frac{(a'(\check{\tau} - \Gamma\Gamma^{\dagger}_{\mathcal{C}}\check{\tau}_{\mathcal{C}}))^2}{\parallel a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp} \parallel^2_2} + \parallel \Gamma_{\mathcal{C}}^{\dagger}\check{\tau}_{\mathcal{C}}\parallel^2_2
\label{eqn:rv-nc-gaussian-mediate} \\
\Rightarrow RV_{a,\mathcal{C}}^\Gamma &= \frac{\sigma_{t\mid x}^2 w_{\mathcal{C}}}{1 + \sigma_{t\mid x}^2 w_\mathcal{C}},
\end{align}
where $w_{\mathcal{C}}$ denotes the right-hand side of Equation \eqref{eqn:rv-nc-gaussian-mediate}, i.e., $w_{\mathcal{C}} := \frac{(a'(\check{\tau} - \Gamma\Gamma^{\dagger}_{\mathcal{C}}\check{\tau}_{\mathcal{C}}))^2}{\parallel a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp} \parallel^2_2} + \parallel \Gamma_{\mathcal{C}}^{\dagger}\check{\tau}_{\mathcal{C}}\parallel^2_2$. By the definition in \eqref{def:combined-rv}, $RV_{a,\mathcal{C}}^\Gamma$ is no smaller than $R^2_{min}$ or $RV^{\Gamma}_{a}$.
\end{proof}
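The combined robustness value is a direct computation from $\Gamma$, the NUC estimates, and the null-control set; an illustrative transcription with hypothetical stand-in inputs:

```python
import numpy as np

def combined_rv(a, Gamma, C, tau_hat, sigma_tx):
    """RV_{a,C}: confounding strength needed to both explain away a'tau
    and satisfy the null control assumption (illustrative sketch)."""
    Gamma_C = Gamma[C]
    pinv = np.linalg.pinv(Gamma_C)
    P_perp = np.eye(Gamma.shape[1]) - pinv @ Gamma_C
    g = pinv @ tau_hat[C]                        # Gamma_C^dagger tau_C
    w_C = (a @ tau_hat - a @ Gamma @ g) ** 2 \
        / np.linalg.norm(a @ Gamma @ P_perp) ** 2 + g @ g
    return sigma_tx**2 * w_C / (1 + sigma_tx**2 * w_C)

Gamma = np.arange(15.0).reshape(5, 3)            # stand-in loadings
tau_hat = np.array([0.3, 1.0, 1.0, 1.0, 1.0])    # stand-in NUC estimates
rv = combined_rv(np.eye(5)[4], Gamma, C=[0], tau_hat=tau_hat, sigma_tx=1.0)
assert 0.0 < rv < 1.0
```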
\subsection{Proof of Proposition \ref{prop:lambda}}
\textbf{Proposition \ref{prop:lambda}.}
\begin{itshape}
Assume $U$ is conditionally Gaussian with mean $\mu_{u\mid t,x}$ and covariance $\Sigma_{u\mid t,x}$ as given in Equations \ref{eqn:conditional_u_mean} and \ref{eqn:conditional_u_cov}. Further, denote $e(x) = P(T=1\mid X=x)$ and $e_0(x, U) = P(T=1 \mid X=x, U)$. Then, for any $\beta$ we have $\lambda(X=x, U) = \frac{e_0(x, U)/(1-e_0(x, U))}{e(x)/(1-e(x))} = \text{exp}(V_x)$, where $V_x = (2I_x-1)Z$, $I_x\sim \text{Ber}(e(x)), Z \sim N(\mu_\lambda, \sigma^2_\lambda )$ with $\mu_\lambda = \frac{1}{2\sigma^2_{t\mid x}}\frac{R^2_{T \sim U \mid X}}{1-R^2_{T \sim U \mid X}}$ and $\sigma^2_\lambda = \frac{1}{\sigma^2_{t\mid x}}\frac{R^2_{T \sim U \mid X}}{1-R^2_{T \sim U \mid X}}$.
\end{itshape}
\begin{proof}
According to Bayes' rule and the conditional distribution of $U$ given $T$ and $X$, we can write the odds of treatment after accounting for unmeasured confounders as
\begin{align}
\frac{e_0(x, u)}{1-e_0(x, u)} &= \frac{f(u\mid T=1, x)e(x)}{f(u\mid T=0, x)(1 - e(x))} \\
\begin{split}
&= \frac{e(x)}{1-e(x)}\text{exp}\Big[\frac{1}{2}(\mu_{u\mid T=0,x}'\Sigma_{u\mid t,x}^{-1}\mu_{u\mid T=0,x} - \mu_{u\mid T=1,x}'\Sigma_{u\mid t,x}^{-1}\mu_{u\mid T=1,x}) \\
&\quad +(\mu_{u\mid T=1,x}' - \mu_{u\mid T=0,x}')\Sigma_{u\mid t,x}^{-1}u \Big].
\end{split}
\label{eqn:true_odds}
\end{align}
Then from Equations \ref{eqn:conditional_u_mean} and \ref{eqn:conditional_u_cov}, we have
\begin{align}
(\mu_{u\mid T=1,x}' - \mu_{u\mid T=0,x}')\Sigma_{u\mid t,x}^{-1} &= \frac{\beta'}{\sigma^2_{t\mid x}}\left(I_m - \frac{\beta\beta'}{\sigma^2_{t\mid x}}\right)^{-1} \\
&= \frac{\beta'}{\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2}.
\end{align}
Note that $\mu_{t\mid x} = e(x)$; then, from Equation \ref{eqn:true_odds}, the odds ratio given $x$ can be written as
\begin{align}
\lambda(x, U) &\overset{d}{=} \text{exp}\left[\frac{2e(x) - 1}{2\sigma^4_{t\mid x}}\beta'\Sigma_{u\mid t,x}^{-1}\beta + \frac{\beta'U_x}{\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2}\right] \\
&= \text{exp}\left[\frac{(2e(x) - 1)\parallel \beta \parallel_2^2}{2\sigma^2_{t\mid x}(\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2)} + \frac{\beta'U_x}{\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2}\right],
\end{align}
where $U_x \sim f(u\mid x)$. Denote the indicator function $\mathbbm{1}\{T=1\mid x\}$ as $I_x \sim \text{Ber}(e(x))$. Then $\beta'U_x$ can be written as the two-component mixture $(1 - I_x)V_0 + I_x V_1$, where $V_0 = \beta'U\mid T=0,x$ and $V_1 = \beta'U\mid T=1,x$. From the distribution of $U$ given $T$ and $X$, we have $V_0 \sim N(\frac{- \parallel \beta \parallel_2^2 e(x)}{\sigma^2_{t\mid x}}, \frac{ \parallel \beta \parallel_2^2(\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2)}{\sigma^2_{t\mid x}})$ and $V_1 \sim N(\frac{ \parallel \beta \parallel_2^2 (1-e(x))}{\sigma^2_{t\mid x}}, \frac{ \parallel \beta \parallel_2^2(\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2)}{\sigma^2_{t\mid x}})$. Therefore, the odds ratio given $x$ has the following distribution
\begin{align}
\lambda(x, U) &\overset{d}{=} \text{exp}\left[\frac{(2e(x) - 1)\parallel \beta \parallel_2^2}{2\sigma^2_{t\mid x}(\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2)} + \frac{(1 - I_x)V_0 + I_x V_1}{\sigma^2_{t\mid x} - \parallel \beta \parallel_2^2}\right] \\
&\overset{d}{=} \text{exp}[(2I_x-1)Z],
\end{align}
where $Z \sim N(\mu_\lambda, \sigma^2_\lambda )$ with $\mu_\lambda = \frac{1}{2\sigma^2_{t\mid x}}\frac{R^2_{T \sim U \mid X}}{1-R^2_{T \sim U \mid X}}$ and $\sigma^2_\lambda = \frac{1}{\sigma^2_{t\mid x}}\frac{R^2_{T \sim U \mid X}}{1-R^2_{T \sim U \mid X}}$.
\end{proof}
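The distributional claim can be verified by Monte Carlo: simulating $U$ given $T=1$ from the assumed Gaussian model, $\log \lambda(x, U)$ should have mean $\mu_\lambda$ and variance $\sigma^2_\lambda$. The parameter values below are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 200_000
sigma2, R2, e_x = 1.0, 0.2, 0.3       # sigma^2_{t|x}, R^2_{T~U|X}, e(x)
beta = rng.normal(size=m)
beta *= np.sqrt(sigma2 * R2) / np.linalg.norm(beta)
b2 = beta @ beta                      # ||beta||_2^2 = sigma2 * R2
Sigma_u = np.eye(m) - np.outer(beta, beta) / sigma2

# log lambda(x, U) = c + beta'U / (sigma2 - ||beta||^2), per the derivation
c = (2 * e_x - 1) * b2 / (2 * sigma2 * (sigma2 - b2))
U = rng.multivariate_normal(beta * (1 - e_x) / sigma2, Sigma_u, size=n)  # U | T=1, x
log_lam = c + U @ beta / (sigma2 - b2)

mu_lam = R2 / (1 - R2) / (2 * sigma2)
s2_lam = R2 / (1 - R2) / sigma2
assert abs(log_lam.mean() - mu_lam) < 0.01   # matches mu_lambda
assert abs(log_lam.var() - s2_lam) < 0.01    # matches sigma^2_lambda
```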
\pagebreak
\section{Simulation Study}
\label{sec:simulation}
In this section, we demonstrate in simulation how we can leverage assumptions about shared confounding across multiple outcomes to help provide more informative bounds on the causal effects. We focus on building intuition about the theoretical results in a simple Gaussian setting without covariates. We generate $n=1000$ observations from model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y} with $m=2$ latent variables, $q=10$ outcomes and a Gaussian treatment, $T$. We choose $\tau$ so that there is no causal effect on the first outcome, and a causal effect of one unit for a one standard deviation change in $t$ for the subsequent nine outcomes. To illustrate our theoretical results clearly, we choose a relatively large sample size so that estimation uncertainty is small relative to the potential confounding bias. We also choose $\Gamma$ with a particularly simple structure, shown in Figure \ref{fig:gamma_heat}. We then fit a Bayesian multivariate linear regression model with a factor model structure on the residual covariance using the probabilistic programming language Stan \citep{stan}. The Bayesian model accounts for uncertainty in $\check{\tau}$ as well as uncertainty in $\Gamma$.
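For concreteness, one simple data-generating process of this form can be sketched as follows. The $\tau$, $\Gamma$, and noise scales here are illustrative stand-ins, not the values used in the paper or its Stan model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 1000, 2, 10
sigma_t, R2 = 1.0, 0.25
beta = np.array([1.0, 0.0]) * sigma_t * np.sqrt(R2)  # confounder-treatment loadings
tau = np.r_[0.0, np.ones(q - 1)]            # no effect on the first outcome
Gamma = rng.normal(scale=0.5, size=(q, m))  # stand-in outcome loading matrix

U = rng.normal(size=(n, m))                 # shared latent confounders
eps_t = rng.normal(scale=np.sqrt(sigma_t**2 - beta @ beta), size=n)
T = U @ beta + eps_t                        # Var(T) = sigma_t^2
Y = np.outer(T, tau) + U @ Gamma.T + rng.normal(size=(n, q))
```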
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/gamma_plot.pdf}
\vspace*{0pt}
\caption{\label{fig:gamma_heat}}
\end{subfigure}
\hspace{.25in}
\begin{subfigure}[t]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/nc_sim_plot_w_uncertainty.pdf}
\caption{\label{fig:sim_intervals}}
\end{subfigure}
\caption{a) Heatmap of the true $\Gamma$. b) 95\% posterior credible regions for the causal effects on each of the ten outcomes for $R^2_{T\sim U} \leq 0.5$ based on $n=1000$ simulated observations without any null controls (blue) and with $Y_1$ as a null control (red). Robustness values are reported above the intervals in black. Combined robustness values under the null control assumption are given in red. Outcomes 2 and 3 are identified because the corresponding rows of $\Gamma$ are collinear with row 1. The midpoints of the ignorance regions for outcomes 7-10 shift upward, in the same direction as the null control. Rows 4-6 of $\Gamma$ are orthogonal to the first row of $\Gamma$, so there is only a small reduction in the size of the identification region and no change in the midpoint of this region. \label{fig:sim_plots}}
\end{figure}
In Figure \ref{fig:sim_intervals}, we plot the 95\% posterior credible interval for each treatment effect assuming $R^2_{T\sim U} \leq 0.5$ (blue) by computing the relevant posterior quantiles of the endpoints of the partial identification region determined by Equation \ref{eqn:worst-case-bias-gaussian,multi-y}. First, note that the outcomes with the largest intervals are those for which the corresponding rows of $\Gamma$ have larger magnitudes (i.e. darker colors in Figure \ref{fig:gamma_heat}). As a conservative measure of robustness to confounding, we report the lower 5\% quantile
of the posterior distribution of the robustness value for each outcome and print the values in black above the corresponding intervals. Only outcomes 4, 7 and 8 are robustly different from zero at $R^2_{T \sim U} \leq 0.5$.
We also trace the implications of a null control assumption through the sensitivity model to illustrate the results of Section \ref{sec:nco}. In Figure \ref{fig:sim_intervals} we plot the posterior 95\% credible regions under the additional assumption that the first outcome is a null control outcome (red). After incorporating the null control assumption, the posterior credible regions for outcomes 2-10 include the true causal effect, $a'\tau=1$, assuming $R^2_{T \sim U} \leq 0.5$. Only the posterior distribution for the effect on outcome $6$ includes $0$ under the null control assumption. First, note that $R^2_{T \sim U} \geq R^2_{min} = RV^{\Gamma}_\mathcal{C} = 0.34$, since an $R^2_{T\sim U}$ value of this magnitude would be necessary to induce the bias observed for the null control (Proposition \ref{prop:cali_wnco}). Since the first three rows of $\Gamma$ are mutually collinear, fixing the bias of $Y_1$ implies that, with infinite data, the effects for outcomes $2$ and $3$ are also identified, since $\Gamma_{i}P_{\Gamma_\mathcal{C}}^\perp = 0$ for rows $i \in \{2, 3\}$ (Equation \eqref{eqn:ignorance_width}). Rows 4-10 are not collinear with the first row of $\Gamma$, and thus, even with infinite data, the effects remain unidentified. However, the dot products of the first row of $\Gamma$ with rows 7-10 are all positive, which means that the midpoints of the new ignorance regions shift upward (in the same direction as the change for outcome 1). In contrast, rows 4-6 of $\Gamma$ are orthogonal to row 1, which implies that $\Gamma_{i}P_{\Gamma_\mathcal{C}}^\perp = \Gamma_{i}$ for $i \in \{4, 5, 6\}$ and thus the midpoints of the intervals remain unchanged for outcomes 4-6. The interval widths still shrink slightly, since only $(R^2_{T \sim U}-R^2_{min}) = 0.5 - 0.34 = 0.16$ of the treatment variance can be explained by the confounders of $Y_4$ through $Y_6$ after accounting for the null control outcomes.
Finally, we include combined robustness values in red corresponding to the total treatment variance explained by confounders needed to explain away the causal effect and satisfy the null control assumption. If the posterior interval for an effect includes 0 when $R^2_{T\sim U} = R^2_{min}$ then we simply report `X' in lieu of the robustness value, which means that no additional confounding is needed to explain away the significance of the effect.
\pagebreak
\section{Additional Figures and Tables}
\begin{table}[ht]
\centering
\begin{tabular}{l|c|c|c|c|}
Outcome & $R^2_{Y \sim Age | -Age}$ & $R^2_{Y \sim Gender | -Gender}$ & $R^2_{Y \sim Education | -Education}$ & $R^2_{Y\sim U | T, X}$ ($\Gamma$-confounding) \\
\hline
HDL & 0.013 & 0.12 & 0.01 & 0.483 \\
Methylmercury & 0.019 & NS & 0.02 & 0.067 \\
Glucose & 0.092 & 0.014 & 0.005 & 0.181 \\
Potassium & 0.005 & 0.045 & NS & 0.049 \\
Sodium & 0.008 & 0.003 & NS & 0.635 \\
Iron & 0.007 & 0.071 & 0.006 & 0.34 \\
Lead & 0.215 & 0.047 & 0.014 & 0.376 \\
LDL & NS & NS & NS & 0.511 \\
Triglycerides & 0.034 & 0.009 & 0.003 & 0.862 \\
Cadmium & 0.041 & 0.028 & 0.013 & 0.345 \\
\hline
\end{tabular}
\caption{Partial coefficient of determination, $R^2_{Y \sim X_i | T, X_{-i}}$, for each outcome for age, gender and education (first 3 columns) and the partial coefficient of determination for unobserved confounders given observed confounders under the $\Gamma$-confounding assumption. ``NS'' indicates that the covariate was not a significant predictor for the outcome. $R^2_{Y \sim U \mid T, X}$ under $\Gamma$-confounding is significantly larger than its observable counterparts, although it is by far the smallest for methylmercury and potassium. It may be important to consider the possibility of single-outcome confounding for these outcomes. \label{tab:obs_partial_rsq}}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{l|r|r|}
& ELPD Difference & Standard Error of Difference \\
\hline
Full rank & 0.00 & 0.00 \\
\hline
Rank 6 & -5.13 & 16.61 \\
\hline
Rank 5 & -7.00 & 15.53 \\
\hline
Rank 4 & -21.70 & 17.91 \\
\hline
Rank 3 & -48.17 & 19.70 \\
\hline
Rank 2 & -102.69 & 17.01 \\
\hline
Rank 1 & -223.20 & 24.33 \\
\hline
\end{tabular}
\caption{Difference in expected log posterior density (ELPD) for ranks 1 through 6 and for full rank, with associated standard errors. Results were computed using the \texttt{loo} package \citep{loo}. Ranks 4-6 all have an ELPD within two standard errors of the full rank model, which suggests that these are all reasonable models for the observed data. Models with ranks 1-3 are insufficient to capture correlations between the outcomes. \label{tab:elpd}}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width = 0.7\textwidth]{figs/nhanes_heat_rank_5.pdf}
\caption{Heatmap of the $\hat \Gamma$ from the NHANES example under $\Gamma$-confounding. $\Gamma$ is only identifiable up to rotations from the right, so for visualization purposes we choose a rotation which makes the upper triangle of the first four rows zero. The first column thus indicates the direction and magnitude of the change in the partial identification regions for each outcome when methylmercury is an assumed null control. If we were to fix $R^2_{a'Y\sim U | T,X} = 1$ for methylmercury then we would simply change the first cell of the first row to $3.24$.}
\label{fig:nhanes_heat}
\end{figure}
\section{Introduction}
Large observational datasets often include measurements of multiple outcomes of interest. For example, in high-throughput biology, the goal might be to understand the effect of a treatment on multiple biomarkers \citep{leek2007capturing, zhao2018cross} or in patient-centered epidemiologic studies, researchers might study the potential impact of health recommendations on multiple disease-related outcomes \citep{sanchez2005structural, kennedy2019estimating}. While it is always possible to analyze each outcome separately, there has been a recent emphasis on the importance of techniques for simultaneously inferring the effect of an intervention on multiple outcomes \citep[][]{vanderweele2017outcome}.
As in any observational study, the validity of multi-outcome causal inference rests on untestable and often implausible assumptions about unconfoundedness. As such, methods which explore the sensitivity and robustness of effects to assumptions about unconfoundedness are increasingly recognized as a crucial part of any rigorous analysis. In this paper, we explore a number of different sensitivity assumptions which are unique to causal inference with multiple outcomes and which can be leveraged to more precisely characterize the robustness of causal conclusions. We focus on linear factor models of the outcomes where we 1) establish bounds on the magnitude of unobserved confounder bias for all outcomes as a function of a single interpretable sensitivity parameter under an assumption about shared confounding, 2) provide novel theoretical results that demonstrate how assumptions about null control outcomes inform the sensitivity analysis and 3) offer practical guidance and a Bayesian workflow for sensitivity analysis. We demonstrate this workflow in simulation and in an analysis of the effect of light drinking on multiple biomarkers for health.
After reviewing related literature in Section \ref{sec:related_lit}, we establish the basic framework for our proposed sensitivity analysis with multivariate outcomes under a shared confounding assumption (Section \ref{sec:copula-based_sens_analy_framework,multi-y}). In Section \ref{sec:copula-based_sens_analy_gaussian_copula,multi-y}, we introduce theoretical results about confounding bias for a model in which the expected outcomes are linear in the unobserved confounders. We also provide theoretical insights into how null control outcomes can be leveraged in conjunction with our sensitivity analysis. In Section \ref{sec:calibration,multi-y}, we discuss the interpretation and calibration of sensitivity parameters. We illustrate the theoretical insights with a Bayesian implementation of our sensitivity analysis on simulated data in Appendix \ref{sec:simulation}. In Section \ref{sec:nhanes}, we illustrate our approach on a real-world example of the effects of light drinking on health measures using data from the National Health and Nutrition Examination Study (NHANES).
\subsection{Related Literature}
\label{sec:related_lit}
There are a number of methods for causal inference with multiple outcomes, although the majority of these appear in the context of randomized controlled trials \citep[][]{freemantle2003composite, mattei2013exploiting}. In the context of observational studies, \citet{sammel1999multivariate} propose a multivariate linear mixed effects model for multi-outcome inference, \citet{thurston2009bayesian} consider a Bayesian generalization of multiple outcomes which are nested across different domains and \citet{sanchez2005structural} review a variety of structural equation models for multiple outcomes for application to epidemiological problems. These works have the important caveat that they typically assume a version of the ``no unobserved confounding'' (NUC) assumption. More recently, \citet{kennedy2019estimating} develop a nonparametric doubly robust method for estimation and hypothesis testing of scaled treatment effects on multiple outcomes, but again assume NUC.
When unobserved confounding is expected to be an issue, certain assumptions about additional outcomes can sometimes be used to identify the effects of alternative primary outcomes. As one important example, additional outcomes called ``proxy confounders'' or ``null control outcomes'' can sometimes be used to identify causal effects for a set of primary outcomes \citep{shi2020selective, tchetgen2020introduction}. Relatedly, \citet{wang2017confounder} establish identifying assumptions in a linear Gaussian model with multiple outcomes when the non-null effects are assumed to be sparse. Although not explicitly framed in causal language, a closely related line of work explores approaches for de-biasing estimates in the presence of confounders, for example when there are batch effects in high-throughput biological datasets \citep[e.g.][]{gagnon2012using, gagnon2013removing}. These works all focus on various assumptions that can be made to point identify causal effects for each outcome.
In contrast, sensitivity analyses are useful for explicitly relaxing such identifying assumptions. There are a variety of different approaches for assessing sensitivity to potential unmeasured confounders in single treatment, single outcome settings \citep{rosenbaum1983assessing, tan2006distributional, franks2019flexible, cinelli2020making, veitch2020sense}. In the multi-outcome setting, \citet{fogarty2016sensitivity}
consider sensitivity analysis for multiple comparisons in matched observational studies using weighting estimators. They focus on the implications of the key fact that omitted variable biases across multiple outcomes are connected through the shared effect of the unmeasured confounders on the treatment. Under a similar matched pairs framework, \citet{rosenbaum2021sensitivity} considers a sensitivity analysis for a single primary outcome and shows how a null control outcome can sometimes increase the evidence for the robustness of the primary outcome. One potential concern with their sensitivity analyses for weighting estimators is that the sensitivity analyses are implicitly based on the overly conservative assumption that unmeasured confounders explain nearly all the variation in the outcomes. We address this concern by developing a sensitivity analysis in which we explicitly account for the effect of unmeasured confounders on the outcomes.
Building on existing parametric models for multiple outcomes \citep[e.g.][]{sammel1999multivariate, sanchez2005structural, thurston2009bayesian}, we propose an outcome model with latent variables which account for the residual correlations between outcomes. Unlike these previous works, we expressly consider the possibility that these latent variables might correspond to potential confounders, in that they may also correlate with the treatment. To account for the potential dependence between latent variables and treatment, we use a so-called latent variable sensitivity analysis \citep{rosenbaum1983assessing}. Typically, in a latent variable sensitivity analysis, sensitivity parameters govern the functional relationship between a latent variable and both the outcome and the treatment. For instance, \citet{cinelli2020making} propose a particularly intuitive latent variable sensitivity analysis for single treatment, single outcome problems, with two sensitivity parameters corresponding to the fraction of outcome residual variance explained by latent confounders and the fraction of treatment residual variance explained by confounders. We generalize their approach to multi-outcome settings, paralleling a sensitivity analysis strategy developed in the context of causal inference with multiple treatments and a scalar outcome \citep{zheng2021copula}. We also implement a Bayesian approach for inference and partial identification, similar to \citet{zheng2021bayesian}, who emphasize the importance of prior specifications for multiple partially identified treatment effects.
\section{Setup}
\label{sec:copula-based_sens_analy_framework, multi-y}
We let $Y$ denote a $q$-vector of outcomes, $T$ a scalar treatment variable, $U$ an $m$-vector of potential unobserved confounders, and $X$ any observed pre-treatment covariates. The goal of multi-outcome causal inference is to infer the effect of a scalar treatment on the $q$-dimensional outcome. In this setting, we define a class of causal estimands as the population average treatment effect (PATE) for any linear combination of the outcomes, $a'Y$, as
\begin{equation}
\text{PATE}_{a,t_1, t_2} := E[a'Y \mid do(t_1)] - E[a'Y \mid do(t_2)],
\end{equation}
\noindent where the \emph{do}-operator indicates that we are intervening to set the treatment level to $t$ rather than merely conditioning on $t$ \citep{pearl2009causality}. Most commonly, we take $a = e_j$ for some $j$, so that $\text{PATE}_{e_j, t_1, t_2}$ is simply the causal effect on measured outcome $j$ in the original coordinates. In some cases, other linear combinations may be of interest, for example when there is not enough power to detect differences in individual outcomes, but there are detectable and interesting differences for linear combinations of outcomes \citep[e.g. see][]{cook2010envelope}. Relatedly, it is often desirable to define the PATE on standardized outcomes, so that each dimension of $Y$ has unit variance, e.g. $a = e_j/\text{sd}(Y_j)$ \citep{kennedy2019estimating}.
In general, the PATEs cannot be identified from observational data without assumptions about the absence of unobserved confounders. If, in addition to $X$, the unmeasured confounders $U$ were observed, then the following three assumptions would be sufficient to identify the causal effects:
\begin{assumption}[Latent ignorability]
\label{asm:latent-unconfoundedness-scm}
$U$ and $X$ block all backdoor paths between $T$ and $Y$ \citep{pearl2009causality}.
\end{assumption}
\begin{assumption}[Latent positivity]
\label{asm:latent-positivity-scm}
$f(T = t \mid U = u, X=x) > 0$ for all $u$ and $x$.
\end{assumption}
\begin{assumption}[SUTVA]\label{asm:sutva} There are no hidden versions of the treatments and there is no interference between units \citep[see][]{rubin1980comment}.
\end{assumption}
\noindent Since $U$ is not observed, we cannot identify causal effects without additional assumptions. Instead of making potentially implausible assumptions about NUC, we advocate for reasoning about the strength of potential unobserved confounding. Here, we argue that sensitivity analysis can be a useful tool for characterizing how robust our multiple causal conclusions are to such confounding.
\section{Sensitivity Analysis with Multiple Outcomes}
\label{sec:copula-based_sens_analy_gaussian_copula,multi-y}
Let $f(y \mid do(t))$ denote the distribution of $y$ if we were to intervene to set the level of treatment to $t$. As shown in \citet{damour2019aistats} and \citet{zheng2021copula}, $f(y \mid do(t))$ can be written as $f_{\psi}(y \mid do(t)) = \int_{\mathcal{U}} f_{\psi_{Y}}(y \mid t, u)\left[\int f_{\psi_{T}}(u \mid \tilde{t}) f(\tilde{t}) d \tilde{t}\right] d u$ where $f_{\psi_{Y}}(y \mid t, u)$ is the full conditional outcome density, $f_{\psi_{T}}(u \mid \tilde{t})$ is a proposed distribution for unobserved confounders given the treatment and $f(\tilde{t})$ is the marginal density of the treatment. In the multivariate outcome setting, $\psi_Y$ are sensitivity parameters which govern the relationship between the $q$-dimensional outcomes and the $m$-dimensional unobserved confounders, whereas $\psi_T$ are parameters which govern the relationship between the scalar treatment and unobserved confounders.
A potential difficulty with multi-outcome inference is that the dimension of $\psi_Y$ scales with the number of outcomes. In this work, we emphasize a variety of assumptions that can be used to mitigate the challenges associated with having to reason about high-dimensional sensitivity parameters. We start by focusing explicitly on settings in which $\psi_Y$ is identifiable up to an equivalence class under an appropriate assumption about shared confounding (Section \ref{sec:shared_confounding}) and deriving worst-case bounds on the causal effects for all outcomes under this assumption (Sections \ref{sec:prototype} and \ref{sec:bias_nonGaussian}). We then explore how assumptions about null control outcomes can be incorporated into our framework to further constrain the set of plausible causal conclusions (Section \ref{sec:nco}). In Section \ref{sec:relaxing}, we discuss relaxations of our proposed shared confounding assumption. We demonstrate the behavior of the sensitivity analysis and highlight the theoretical implications from this section on simulated data in Appendix \ref{sec:simulation}.
\subsection{A Multi-outcome Model with Factor Confounding}
\label{sec:shared_confounding}
In this paper we focus on models where the outcomes follow a linear factor model. We explicitly parameterize the model in terms of the latent factors, $U$, motivated by the idea that unmeasured confounders can induce residual correlation between outcomes. Throughout, we assume
\begin{align}
&T|X \sim F_{T|X} \label{eqn:treatment_general,multi-y}\\
&E[U\mid T=t, X=x] =\mu_{u\mid t,x},\quad Cov(U\mid T=t, X=x) = \Sigma_{u \mid t,x} \label{eqn:conditional-confounder,multi-y} \\
&Y = g(T,X) + \Gamma\Sigma_{u|t,x}^{-1/2}U + \epsilon_{y|t,u,x}, \label{eqn:epsilon_y}
\end{align}
where $F_{T|X}$ is the CDF of the treatment given covariates, and $\epsilon_{y| t,u,x}$ has mean zero and diagonal covariance $\Lambda_{y| t,u,x}$ independent of $T, X$ and $U$. We include $\Sigma_{u\mid t,x}^{-1/2}$ in (\ref{eqn:epsilon_y}) without loss of generality, because, as we will show, under certain assumptions $\Gamma$ is identifiable (up to rotation), but $\Gamma\Sigma_{u\mid t,x}^{-1/2}$ is not.
Model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y} alone does not imply that the factors $U$ are necessarily confounders, as this model is equally compatible with a number of causal models, including those in which $U$ is a latent mediator\footnote{See \citet{zhang2022interpretable}, who derive omitted variable bounds for the direct and indirect effects in a mediation analysis under a similar framework to the one used in this paper.}. To fix a particular sensitivity model, we make the following additional causal assumption:
\begin{assumption}[Potential confounding]
\label{asm:potential_conf}
$U$ are \emph{potential confounders} in that they are possible causes of $T$ and $Y$. Further, $T$ is not a cause of any function of $U$.
\end{assumption}
\noindent While we make this assumption throughout, even if $U$ are not potential confounders, our sensitivity analysis will yield conservative bounds on the treatment effects. We will formalize this fact later in Section \ref{sec:relaxing} after characterizing bounds on the omitted confounder bias under the potential confounding assumption. When Assumption~\ref{asm:potential_conf} holds, the sensitivity analysis is well-defined in that the PATEs are identified given the parameters $\psi_T=\{\mu_{u|t,x}, \Sigma_{u|t,x}: t \in \mathcal{T}, x \in \mathcal{X}\}$, which govern the relationship between confounders and the treatment, and $\psi_Y = \{\Gamma \}$, which is the $q \times m$ factor loading matrix which governs the influence of the unmeasured confounders on each outcome.
Importantly, under the proposed factor model, there are conditions under which we only need to calibrate $\psi_T$, thus avoiding the need to specify and calibrate the $q \times m$ values of $\psi_Y$.
This occurs when $\psi_Y$ can be identified up to a causal equivalence class.
In general, we say that $\psi_Y$ is identifiable \emph{up to a causal equivalence class} when the causal conclusions about the PATE do not change with the value of $\psi_Y$ within the class, as long as we hold the implied assumptions about the dependence between treatment and confounders fixed.
\begin{definition}[Causal equivalence class for multi-outcome inference] Let $\tilde U = h(U)$ for some invertible function $h$. $[\psi_Y]$ is a causal equivalence class of $\psi_Y$ if and only if for any pair $\psi_Y, \tilde \psi_Y \in [\psi_Y]$, there exist corresponding $\psi_T, \tilde \psi_T$
such that $f_{\psi_T}(T=t|U=u, X=x) = f_{\tilde \psi_T}(T=t|\tilde U= h(u), X=x)$ \emph{and} $E_{\psi_T, \psi_Y}[Y \mid do(T=t), X=x] = E_{\tilde \psi_{T}, \tilde \psi_{Y}}[Y \mid do(T = t), X=x]$ for all $t, u$ and $x$.
\end{definition}
\noindent In general, $\Gamma$ is not point identifiable, but the following theorem establishes conditions under which $\Gamma$ is identifiable up to a causal equivalence class defined by right multiplication with an orthogonal matrix.
\begin{theorem}
\label{thm:gamma-identifiability}
Assume model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y} where $\Gamma$ has some prespecified rank, $m$. If there remain two disjoint submatrices of rank $m$ after deleting any row of $\Gamma$, then $\psi_Y$ is identifiable up to the causal equivalence class $[\psi_Y] = \{\Gamma \theta : \theta\theta'=I\}$ and we can assume $Cov(U \mid X=x)=I_m$ without loss of generality. If the factor model for the residuals given the treatment and observed covariates is not identifiable up to rotations of $\Gamma$, then $\psi_Y$ is not identifiable up to a causal equivalence class.
\end{theorem}
When $\Gamma$ is identified up to rotations, it is identified up to a causal equivalence class, and thus we can explore the range of causal effects for all outcomes under different choices of sensitivity parameters $\psi_T$ only.\footnote{Under different parametric assumptions, e.g. when $U$ is conditionally heavy-tailed given treatments, $\Gamma$ may be identifiable up to an even smaller equivalence class, e.g. up to sign change on the columns \citep{rohe2022vintage}.}
Theorem \ref{thm:gamma-identifiability} implies that in order for $\Gamma$ to be identified up to a causal equivalence class we must have $(q-m)^2-q-m\geq0$ and each confounder must influence at least three outcomes \citep{anderson1956statistical}. We summarize the conditions under which the factor model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y} yields a sensitivity analysis entirely parameterized by $\psi_T$ in the following assumption.
\begin{assumption}[Factor confounding]
\label{def:gamma_conf}
A causal model satisfies factor confounding if the outcomes follow the model proposed in equation \eqref{eqn:epsilon_y}, $U$ are potential confounders (Assumption \ref{asm:potential_conf}), and $\Gamma$ is identifiable up to a causal equivalence class. We say that a model satisfies factor confounding \emph{for outcome $a'Y$} if $a'\Gamma$ is identifiable up to causal equivalence.
\end{assumption}
There are some useful ways that practitioners can reason about the plausibility of factor confounding.
In particular, by Theorem \ref{thm:gamma-identifiability}, factor confounding is violated if there are confounders which influence fewer than three outcomes. This fact can help practitioners reason about the set of outcomes that should be included in an analysis to make factor confounding more plausible. While the bulk of our analysis is done under the factor confounding assumption, even when factor confounding is violated, we can still apply our sensitivity analysis, albeit with more conservative bounds on the causal effects. As such, we view factor confounding as a useful ``reference assumption'' that can help establish informative bounds on the causal effects. For now, we assume factor confounding, and later explore relaxations of Assumption \ref{def:gamma_conf} in Section \ref{sec:relaxing} and the application in Section \ref{sec:nhanes}.
\subsection{Prototype: Linear-Gaussian Model}
\label{sec:prototype}
We begin by illustrating our sensitivity analysis approach in a simple multivariate regression model where $(Y, T, U)$ are jointly Gaussian. In this model, we derive explicit expressions for the omitted confounder bias involving an unidentified sensitivity vector. We then establish bounds on this bias as a function of more interpretable R-squared style summaries for these sensitivity vectors. Without loss of generality, we exclude conditioning on observed covariates, and return to models with covariates in the next section. Here, we posit the following structural equation model:
\begin{align}
U & = \epsilon_u, \quad \epsilon_u \sim N_m(0, \Sigma_u), \label{eqn:confounder,multi-y}\\
T &= \beta'U + \epsilon_{t|u}, \quad \epsilon_{t|u} \sim N(0, \sigma_{t|u}^2), \label{eqn:treatment,multi-y}\\
Y &= \tau T + \Gamma \Sigma_{u \mid t}^{-1/2}U + \epsilon_{y|t,u}, \quad \epsilon_{y|t,u} \sim N_q(0, \Lambda_{y| t,u}), \label{eqn:outcome,multi-y}
\end{align}
\noindent with $\beta \in \mathbb{R}^m$, $\tau \in \mathbb{R}^q$ and $\Gamma \in \mathbb{R}^{q \times m}$ and $\Lambda_{y|t, u}$ is diagonal.
Model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y} is a special case of model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y}, where the association between outcomes and confounders conditional on the treatment, parameterized by $\Gamma$, can be characterized by a Gaussian copula.
Theorem \ref{thm:gamma-identifiability} establishes the conditions for equivalence class identification of $\Gamma$ and also shows that we can assume without loss of generality that $\Sigma_u = I_m$, which we now do throughout the remainder of the paper. Under model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y}, the interventional outcome distribution is
\begin{equation}
Y \mid do(T = t) \sim N_q(\tau t, \ \Lambda_{ y| t,u} + \Gamma\Sigma_{u\mid t}^{-1}\Gamma'),
\end{equation}
which is in contrast to the observed outcome distribution,
\begin{equation}
Y \mid T = t \sim N_q(\check{\tau} t, \ \Sigma_{y|t}), \label{eqn:sur}
\end{equation}
where $\sigma_t^2 = ||\beta||_2^2 + \sigma_{t|u}^2$ is the marginal variance of the treatment, $\check{\tau} = \tau + \frac{\Gamma \Sigma_{u\mid t}^{-1/2} \beta}{\sigma_t^2}$ is the causal effect that would be identified under NUC and $\Sigma_{y|t} = \Lambda_{ y| t,u} + \Gamma \Gamma'$ is the residual covariance between outcomes.
Equation \eqref{eqn:sur} is the formulation for the classic seemingly unrelated regressions (SUR) model, where here the treatment $T$ is a shared predictor for multiple distinct outcomes with non-independent error structure \citep{zellner1962efficient}. The marginal treatment variance, $\sigma^2_{t}$, can be decomposed into non-confounding variation, $\sigma_{t|u}^2$, and confounding variation, $||\beta||^2_2$, so that the variance of the confounding component is constrained by the overall magnitude of the identifiable marginal variance of the treatment.
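As a numerical sanity check of this decomposition, the sketch below simulates from model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y} with illustrative parameter values of our choosing and verifies that the per-outcome least squares slopes of $Y$ on $T$ recover the confounded coefficient $\check{\tau} = \tau + \Gamma\Sigma_{u\mid t}^{-1/2}\beta/\sigma_t^2$ rather than the causal effect $\tau$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, m = 200_000, 4, 2
tau = np.array([1.0, 0.5, 0.0, -0.5])    # true effects (illustrative)
Gamma = rng.normal(size=(q, m))          # illustrative loading matrix
beta = np.array([0.8, -0.4])             # confounder-treatment coefficients
sigma_tu = 1.0                           # sd of eps_{t|u}
sigma_t2 = beta @ beta + sigma_tu**2     # marginal treatment variance

U = rng.normal(size=(n, m))              # confounders, Cov(U) = I_m
T = U @ beta + sigma_tu * rng.normal(size=n)

# Sigma_{u|t}^{-1/2} via eigendecomposition of I - beta beta' / sigma_t^2
Sigma_u_t = np.eye(m) - np.outer(beta, beta) / sigma_t2
w, V = np.linalg.eigh(Sigma_u_t)
inv_half = V @ np.diag(w ** -0.5) @ V.T

Y = np.outer(T, tau) + U @ inv_half @ Gamma.T + rng.normal(size=(n, q))

tau_check_hat = (T @ Y) / (T @ T)                        # OLS slope per outcome
tau_check = tau + Gamma @ inv_half @ beta / sigma_t2     # confounded coefficient
print(np.round(tau_check_hat - tau_check, 3))            # approximately zero
```

The Monte Carlo slopes match $\check{\tau}$, not $\tau$: the gap is exactly the omitted variable bias studied next.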
In the linear Gaussian model, $\text{PATE}_{a, t_1, t_2}$ is linear in the difference between two treatments, $t_1 - t_2$. We assume that $t_1 - t_2 = 1$ without loss of generality and thus, $\text{PATE}_{a, t_1, t_2}$ equals $\text{PATE}_{a} := a'\tau(t_1 - t_2) = a'\tau$ which is invariant to the exact levels of $t_1$ and $t_2$. The confounding bias of $\check{\tau}$, $\text{Bias}_{a} = a'\check{\tau} - \text{PATE}_{a}$, can then be expressed as
\begin{align}
\text{Bias}_{a} &= \frac{a' \Gamma \Sigma_{u\mid t}^{-1/2}\beta}{\sigma_t^2} = \frac{1}{\sigma_t^2} a' \Gamma \left(I_m - \frac{\beta \beta' }{\sigma_t^2}\right)^{-1/2} \beta.
\end{align}
\noindent Here, $\sigma_t^2$ is identifiable, so that under factor confounding, $\beta$ is the only unidentified parameter determining the omitted variable bias. Although $\beta$ is not identified, its magnitude is constrained by the identifiable treatment variance. Specifically, Equation \eqref{eqn:treatment,multi-y} implies
\begin{equation}
\label{eqn:r2_treatment}
0 \leq R_{T \sim U}^2 := \frac{||\beta||_2^2}{\sigma_t^2} < 1,
\end{equation}
which corresponds to the fraction of variation in the treatment due to confounding, $(Var(T)-Var(T^{\perp U}))/Var(T)$. $R^2_{T \sim U}$ will be the primary sensitivity parameter in our analysis. $R^2_{T \sim U}$ must be \emph{strictly} less than $1$, i.e. $Var(T^{\perp U}) > 0$, in order for positivity to hold (Assumption \ref{asm:latent-positivity-scm}). The above model implies the following confounding bias of the NUC estimate, $\check{\tau}$.
\begin{theorem}
\label{thm:ignorance_region_woNC,multi-y}
Suppose that the observed data is generated by model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y}. Then, for all $\beta$ with $||\beta||_2 = \sigma_t\sqrt{R_{T \sim U}^2}$ and $0 \leq R_{T \sim U}^2 < 1$, the confounding bias of outcome $a'Y$ is bounded by
\begin{equation}
\label{eqn:worst-case-bias-gaussian,multi-y}
\text{Bias}_{a}^2 \leq \frac{1}{\sigma_t^2} \frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2} \parallel a' \Gamma \parallel_2^2,
\end{equation}
where the bound is achieved when $\beta$ is collinear with $a' \Gamma$.
\end{theorem}
\noindent This theorem implies that the true treatment effect for contrast $a'Y$ lies in the interval $a'\check{\tau} \pm \sqrt{\frac{1}{\sigma_t^2} \frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2}} || a' \Gamma ||_2$. For any fixed $R^2_{T \sim U}$, the right-hand side of Equation \eqref{eqn:worst-case-bias-gaussian,multi-y} is the worst-case bias of the NUC estimator. In contrast to the analogous results for multi-treatment inference \citep{zheng2021copula}, the bias for outcome $a'Y$ is unbounded, since $\frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2}$ can be arbitrarily large. The bias for outcome $j$ is proportional to the norm of the $j$th row of $\Gamma$. In the following corollary, we establish a global bound on the bias over all outcome contrasts $a'Y$ with $||a||_2=1$.
\begin{corollary}
\label{cor:ignorance_region_global_woNC,multi-y}
Let $d_1$ be the largest singular value of $\Gamma$. For all unit vectors $a$, the confounding bias is bounded by
\begin{equation}
\label{eqn:worst-case-bias-gaussian-global,multi-y}
\text{Bias}_{a}^2 \leq \frac{d_1^2}{\sigma_t^2} \frac{R_{T \sim U}^2}{1 - R_{T \sim U}^2},
\end{equation}
with equality when $a = u_1^{\Gamma}$, the first left singular vector of $\Gamma$, and when $\beta$ is collinear with $v_1^{\Gamma}$, the first right singular vector of $\Gamma$. When $a \in Null(\Gamma')$, the NUC estimate is unbiased, that is, $a'\check{\tau} = a'\tau$.
\end{corollary}
\noindent For $a = u_1^{\Gamma}$, $a'Y$ corresponds to the linear combination of outcomes that is most correlated with confounders. In contrast, when $a$ is in the null space of $\Gamma'$, $\text{PATE}_a$ is identified because $a'Y$ is uncorrelated with $U$ under factor confounding.
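The attainment conditions in Theorem \ref{thm:ignorance_region_woNC,multi-y} and Corollary \ref{cor:ignorance_region_global_woNC,multi-y} can be verified directly. In the sketch below (all parameter values illustrative), the bias achieves the contrast-specific bound when $\beta$ is collinear with $a'\Gamma$, and achieves the global bound at the top singular vectors of $\Gamma$:

```python
import numpy as np

rng = np.random.default_rng(2)
q, m = 5, 2
Gamma = rng.normal(size=(q, m))                  # illustrative loadings
sigma_t2, R2 = 2.0, 0.4                          # illustrative values

def bias(a, beta):
    """Confounding bias a' Gamma Sigma_{u|t}^{-1/2} beta / sigma_t^2."""
    Sig = np.eye(m) - np.outer(beta, beta) / sigma_t2
    w, V = np.linalg.eigh(Sig)
    return a @ Gamma @ (V @ np.diag(w ** -0.5) @ V.T) @ beta / sigma_t2

scale = np.sqrt(R2 * sigma_t2)                   # required ||beta||_2

# contrast-specific bound, attained when beta is collinear with a'Gamma
a = np.eye(q)[0]
aG = a @ Gamma
b_star = scale * aG / np.linalg.norm(aG)
bound_a = np.sqrt(R2 / (1 - R2) / sigma_t2) * np.linalg.norm(aG)

# global bound, attained at the top singular vectors of Gamma
Ug, d, Vtg = np.linalg.svd(Gamma, full_matrices=False)
a_star, b_glob = Ug[:, 0], scale * Vtg[0]
bound_global = d[0] * np.sqrt(R2 / (1 - R2) / sigma_t2)

print(abs(bias(a, b_star)), bound_a)             # matches the bound
print(abs(bias(a_star, b_glob)), bound_global)   # matches the global bound
```

Note that $\Sigma_{u\mid t}^{-1/2}\beta = \beta/\sqrt{1-R^2_{T\sim U}}$ because $\beta$ is always an eigenvector of $\beta\beta'$, which is what makes both bounds attainable.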
\subsection{Generalizing the Linear-Gaussian Model}
\label{sec:bias_nonGaussian}
Next, we return to the more general model proposed in equations \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y}. This model generalizes model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y} in a few key ways: we allow the outcomes to be a non-linear function of the treatment and we introduce a generalized sensitivity parameterization which accommodates both observed covariates, $x$, and an arbitrary distribution for the treatment. Most importantly, since neither of the conditional moments of $U$ in equation \eqref{eqn:conditional-confounder,multi-y} is identifiable, we must choose a sensible and interpretable parameterization for them. Here, by analogy to the model for a Gaussian treatment, we propose a parameterization that only depends on the $m$-dimensional sensitivity vector, $\beta$:
\begin{align}
\mu_{u\mid t,x} &= \frac{\beta}{\sigma_{t \mid x}^{2}}\left(t-\mu_{t\mid x}\right) \label{eqn:conditional_u_mean}, \\
\Sigma_{u \mid t,x} &= I_m-\frac{\beta \beta^{\prime}}{\sigma_{t\mid x}^{2}} \label{eqn:conditional_u_cov},
\end{align}
for all $t \in \mathcal{T}, x \in \mathcal{X}$, where we define $\mu_{t\mid x} := E[T\mid X=x]$ and $\sigma^2_{t\mid x} := E[Var(T\mid X)]$. These moments are consistent with the conditional mean and covariance of $U$ implied by model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y}. We further have that $E[U\mid X=x] = 0$, $Cov(U)=I_m$, and $Cor(X, U)=0$. Note that $I_m-\frac{\beta \beta^{\prime}}{\sigma_{t\mid x}^{2}}$ is positive definite only if $||\beta||_2^2 < \sigma^2_{t\mid x}$, and with slight abuse of notation, we still use the notation $0 \leq R^2_{T \sim U \mid X} := \frac{||\beta||^2_2}{\sigma^2_{t\mid x}} < 1$ to reflect a measure of the strength of dependence between confounders and the treatment. In general, $\frac{||\beta||^2_2}{\sigma^2_{t\mid x}} = ||\text{Cor}(T^{\perp X}, U^{\perp X})||_2^2$ is the squared norm of the partial correlation between $T$ and $U$ given $x$, which is identically the partial R-squared based on a linear fit to $T$. This leads to the following more general version of Theorem \ref{thm:ignorance_region_woNC,multi-y}.
\begin{theorem}
\label{thm:ignorance_region_general,multi-y}
Assume model \eqref{eqn:treatment_general,multi-y}-\eqref{eqn:epsilon_y} with $\psi_T$ defined by \eqref{eqn:conditional_u_mean}-\eqref{eqn:conditional_u_cov}. The confounding bias of $\text{PATE}_{a,t_1,t_2}$, denoted $\text{Bias}_{a,t_1,t_2}$, is equal to $\frac{a' \Gamma \Sigma_{u\mid t,x}^{-1/2}\beta}{\sigma_{t\mid x}^2}(t_1 - t_2)$ and is bounded by
\begin{equation}
\label{eqn:worst-case-bias-general,multi-y}
\text{Bias}_{a,t_1,t_2}^2
\, \leq \,
\frac{1}{\sigma_{t\mid x}^2} \left(\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}}
\right)\parallel a' \Gamma(t_1-t_2)\parallel_2^2
\, \leq \,
\frac{1}{\sigma_{t\mid x}^2} \left(\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}}
\right)\parallel a' \Sigma_{y\mid t, x}^{1/2}(t_1-t_2)\parallel_2^2,
\end{equation}
where $\Sigma_{y\mid t,x} = \text{Cov}(Y\mid T=t, X=x)$ is the identifiable residual covariance of $Y$. The first bound is achieved when $\beta$ is collinear with $a'\Gamma$.
\end{theorem}
\noindent The right-hand bound on the worst-case bias for outcome $a'Y$ can be achieved when all the residual outcome variance is driven by confounders, e.g. $Var(a'Y \mid U, T, X) = 0.$
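The looseness of the second bound is easy to quantify: since $\Sigma_{y\mid t,x} = \Lambda_{y\mid t,u,x} + \Gamma\Gamma'$, we have $||a'\Sigma_{y\mid t,x}^{1/2}||_2^2 = a'\Sigma_{y\mid t,x}a = ||a'\Gamma||_2^2 + a'\Lambda_{y\mid t,u,x}a$, so the two bounds differ exactly by the non-confounding residual variance of $a'Y$. A minimal check with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(4)
q, m = 5, 2
Gamma = rng.normal(size=(q, m))                 # illustrative loadings
Lam = np.diag(rng.uniform(0.1, 1.0, size=q))    # diagonal non-confounding noise
Sigma_y = Lam + Gamma @ Gamma.T                 # identifiable residual covariance

a = rng.normal(size=q)
a /= np.linalg.norm(a)

tight = np.linalg.norm(a @ Gamma) ** 2          # requires knowing Gamma
loose = a @ Sigma_y @ a                         # = ||a' Sigma_y^{1/2}||^2, identifiable
gap = a @ Lam @ a                               # non-confounding variance of a'Y
print(tight, loose, loose - tight - gap)        # last term is ~0
```

The conservative bound is therefore most useful when the residual noise $\Lambda_{y\mid t,u,x}$ is small relative to the confounder-driven variation.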
\subsection{Null Control Outcomes}
\label{sec:nco}
Theorem \ref{thm:ignorance_region_general,multi-y} establishes marginal bounds for any treatment contrast, but does not tell us anything about the dependence in the omitted variable bias across outcomes. In this section, we explore the joint structure of the bias across multiple outcomes by considering how assumptions about null control outcomes influence the bounds on causal effects for non-null outcomes. This relationship has been explored from a different perspective by \citet{rosenbaum2021sensitivity}, who demonstrates that null control outcomes can make tests of significance on the primary outcome more robust to confounding. In this work, we focus on how assumptions about null control outcomes constrain the sensitivity vector, $\beta$, and thus reduce the size of the partial identification regions for the causal effects of the other outcomes.
For a fixed treatment contrast $t_1$ versus $t_2$, and with slight abuse of notation, we let $\tau$ correspond to the $q$-vector of PATEs on each of the measured outcomes and let $\check{\tau}$ denote the $q$-vector of PATEs under NUC. Assume that $\mathcal{C} \subset \{1, \dots, q\}$ is a set of indices for $c < q$ null control outcomes such that there is no difference in the effect of the treatments on these $c$ measured outcomes, that is, $\tau_j = 0$ for any $j \in \mathcal{C}$. For these null control outcomes, the corresponding $c$-vector of treatment effects under NUC, $\check{\tau}_{\mathcal{C}}$, must equal the corresponding confounding bias. Since the bias is a function of the sensitivity vector $\beta$, we have that $\check{\tau}_{\mathcal{C}} = \frac{1}{\sigma_{t\mid x}^2\sqrt{1 - R^2_{T \sim U \mid X}}} \Gamma_{\mathcal{C}} \beta$,
where $\Gamma_{\mathcal{C}}$ is a $c \times m$ matrix equal to the $c$ rows of $\Gamma$ corresponding to null control outcomes. This equation implies that $\check{\tau}_{\mathcal{C}}$ must be in the column space of $\Gamma_{\mathcal{C}}$ and also implies a lower bound on the fraction of confounding variation in the treatment, $R^2_{T \sim U \mid X}$.
\begin{proposition}
\label{prop:cali_wnco}
Suppose there are $c$ null control outcomes, $Y_j$, such that $\tau_j = 0$ for $j \in \mathcal{C}$. Then, $\check{\tau}_{\mathcal{C}}$ must be in the column space of $\Gamma_{\mathcal{C}}$. In addition, the fraction of variation in the treatment due to the confounding is lower bounded by
\begin{equation}
R^2_{T \sim U \mid X} \geq R_{\text{min}}^2 := \frac{\sigma_{t\mid x}^2 \parallel
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2}{1+\sigma_{t \mid x}^2 \parallel
\Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}} \parallel_2^2},
\end{equation}
where $\Gamma_{\mathcal{C}}^{\dagger}$ denotes the pseudoinverse of $\Gamma_{\mathcal{C}}$.
\end{proposition}
\noindent When the number of null control outcomes is smaller than the rank of $\Gamma$, then $\check{\tau}_{\mathcal{C}}$ is automatically in the column space of $\Gamma_{\mathcal{C}}$. In order to correct for the bias of the null control outcomes, confounding must explain at least $R^2_{min}$ of the treatment variance. Moreover, for any assumed $R^2_{T \sim U \mid X} \geq R^2_{min}$, the null controls assumption constrains the space of possible effects for the non-null outcomes. We formalize this below.
\begin{theorem}
\label{thm:ignorance-region-gaussian-wnc,multi-y}
Suppose there are $c$ known null control outcomes, $Y_j$, such that $\tau_j = 0$ for $j \in \mathcal{C}$. For any value of $R^2_{T \sim U \mid X} \geq R^2_{\min}$, the confounding bias for the treatment effect of outcome $a'Y$ is in the interval
\begin{equation}
\label{eqn:ignorance-region-gaussian-wnc,multi-y}
\text{Bias}_a \in
\left[a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}
\; \pm \;
\parallel
a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp}
\parallel_2
\sqrt{
\frac{1}{\sigma_{t\mid x}^2}\left(
\frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}} -
\frac{R^2_{min}}{1 - R^2_{min}}
\right)}
\right],
\end{equation}
where $P_{\Gamma_{\mathcal{C}}}^{\perp} = I_m - \Gamma_{\mathcal{C}}^{\dagger} \Gamma_{\mathcal{C}}$ is the $m \times m$ projection matrix onto the space orthogonal to the row space of $\Gamma_{\mathcal{C}}$.
\end{theorem}
\noindent Note that the ignorance region is no longer centered at $a'\check{\tau}$ but instead at $a'\check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}$, where $a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}$ is the bias correction under the null controls assumption. Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y} indicates that whenever $\Gamma_{\mathcal{C}}$ is of rank $m$ or whenever we assume $R^2_{T \sim U \mid X} = R^2_{min}$, treatment effects for all outcomes are identifiable under factor confounding. A direct comparison of the ignorance regions from Theorem \ref{thm:ignorance_region_general,multi-y} and Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y} indicates that after incorporating null control outcomes, for any fixed $R^2_{T \sim U \mid X}$ the width of the ignorance region is reduced by a multiplicative factor of
\begin{equation}
\sqrt{
1 - \left(\frac{R_{\text{min}}^2}{1 - R_{\text{min}}^2} \bigg/ \frac{R^2_{T \sim U \mid X}}{1 - R^2_{T \sim U \mid X}}
\right)}
\frac{\parallel
a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp}
\parallel_2}{
\parallel a'\Gamma \parallel_2}
\leq 1.
\label{eqn:ignorance_width}
\end{equation}
From Equation \eqref{eqn:ignorance_width}, it is evident that null controls reduce the width of the worst-case ignorance region in two ways. The first factor under the radical is due to the fact that only $R^2_{T \sim U \mid X} - R^2_{min}$ of the treatment variance can be due to confounders which are uncorrelated with the null control outcomes. This factor reduces the width of the ignorance regions for all non-null outcomes by an equal proportion. As a special case, if we think it is unlikely that confounding explains more than $R^2_{min}$ of the treatment variance, i.e. $R^2_{T \sim U \mid X} \approx R^2_{min}$, then all treatment effects are (nearly) identified. In contrast, the second factor depends on the specific outcome of interest, $a'Y$. The ignorance region shrinks the most for outcomes which are correlated with approximately the same set of confounders as the null control outcomes are. Mathematically, when $a'\Gamma$ is in the row space of $\Gamma_{\mathcal{C}}$, the treatment effect of $a'Y$ is identified under factor confounding. When $a'\Gamma$ is orthogonal to the row space of $\Gamma_{\mathcal{C}}$, $\frac{\parallel a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp} \parallel_2}{\parallel a'\Gamma \parallel_2} = 1$, so that the confounders affecting the null control outcomes are independent of the confounders affecting $a'Y$, and thus there is no further reduction of the ignorance region. In summary, the best null control outcomes are those which have large confounding biases (and hence large values of $R^2_{min}$) and also have similar outcome-confounder associations with the outcomes of interest. We illustrate these results in a simulation study in Appendix \ref{sec:simulation}.
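The interplay between Proposition \ref{prop:cali_wnco} and Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y} can also be checked numerically. The sketch below (linear-Gaussian model without covariates, all parameter values illustrative) computes $R^2_{min}$ from a single null control, confirms it lower-bounds the true $R^2_{T \sim U}$, verifies that the true bias of a non-null outcome lies in the ignorance region, and confirms that rank-$m$ null controls make the lower bound sharp:

```python
import numpy as np

rng = np.random.default_rng(3)
q, m = 6, 3
Gamma = rng.normal(size=(q, m))                  # illustrative loadings
sigma_t2, R2 = 2.0, 0.35                         # illustrative "true" values
beta = rng.normal(size=m)
beta *= np.sqrt(R2 * sigma_t2) / np.linalg.norm(beta)

# NUC bias of every outcome; here Sigma_{u|t}^{-1/2} beta = beta / sqrt(1 - R2)
bias_all = Gamma @ beta / (sigma_t2 * np.sqrt(1 - R2))

C = [0]                                          # a single null control outcome
Gc, tc = Gamma[C], bias_all[C]                   # tau-check_C equals its bias
x = np.linalg.norm(np.linalg.pinv(Gc) @ tc) ** 2
R2_min = sigma_t2 * x / (1 + sigma_t2 * x)       # lower bound: R2_min <= R2

# ignorance region for a non-null outcome, evaluated at the true R2
a = np.eye(q)[3]
P_perp = np.eye(m) - np.linalg.pinv(Gc) @ Gc     # projects off rowspace(Gamma_C)
center = a @ Gamma @ np.linalg.pinv(Gc) @ tc     # bias correction term
half = np.linalg.norm(a @ Gamma @ P_perp) * np.sqrt(
    (R2 / (1 - R2) - R2_min / (1 - R2_min)) / sigma_t2)

# with rank-m null controls the bound is sharp: R2_min recovers the true R2
C_full = [0, 1, 2]
xf = np.linalg.norm(np.linalg.pinv(Gamma[C_full]) @ bias_all[C_full]) ** 2
R2_min_full = sigma_t2 * xf / (1 + sigma_t2 * xf)
print(R2_min, (center - half, center + half), R2_min_full)
```

In this construction the true bias of outcome $4$ always falls inside $[\text{center} - \text{half}, \text{center} + \text{half}]$, and $R^2_{min}$ computed from the full-rank set of null controls equals the true $R^2_{T \sim U}$, so all effects are point identified in that case.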
\subsection{Relaxing Assumptions about Factor Confounding}
\label{sec:relaxing}
We now return to the factor confounding assumption (Assumption \ref{def:gamma_conf}). Here, we explore the ways in which this assumption might plausibly be violated and characterize the effects that such violations might have on the causal effect bounds. First, we consider the consequences of violating the assumption that the residual confounding is induced by unmeasured pre-treatment variables (Assumption \ref{asm:potential_conf}). Some function of the latent factors, $U$, may reflect unmeasured post-treatment mediators or even direct dependence amongst the outcomes. In this case, conditional on all unmeasured pre-treatment variables, there would still be residual correlations in the outcomes from these unmeasured post-treatment sources. Here, we show that regardless of the source of the residual correlations in the outcomes, the bias of the causal effect estimate will never exceed the bounds established in Theorem \ref{thm:ignorance_region_general,multi-y}.
\begin{proposition}
Let $U_1 = AU$, where $A$ is an $r \times m$ semi-orthogonal matrix with $0 \le r \le m$. Assume latent ignorability holds given just $U_1$, and that $U_1$ satisfies Assumption \ref{asm:potential_conf}. Then, we can rewrite Equation \eqref{eqn:epsilon_y} as $Y = g(T, X) + \Gamma_1 \Sigma_{u_1 \mid t,x}^{-1/2} U_1 + \epsilon_1$, with $\text{Cov}(\epsilon_1 \mid t) = M + \Lambda_{y| t,u}$ for a positive semi-definite matrix $M$, and $\epsilon_1$ independent of $U_1$ conditional on $T$ and $X$.
Then, the confounding bias of contrast $a'Y$ is still in the intervals defined in Theorem \ref{thm:ignorance_region_general,multi-y}.
\label{cor:conservative}
\end{proposition}
\noindent This proposition indicates that violations to Assumption \ref{asm:potential_conf} are relatively innocuous, in that the omitted variable bias for all causal effects still cannot exceed the established bounds. For methods that address sensitivity to unobserved confounding in mediation analyses, see a closely related approach by \citet{zhang2022interpretable}.
Next, we consider violations of factor confounding that arise because some confounders influence fewer than three outcomes, thus rendering $\Gamma$ non-identifiable, at least for some rows (Theorem \ref{thm:gamma-identifiability}). Such a violation is of potential concern because it may lead to underestimates of $||a'\Gamma||_2$ and hence underestimates of the potential omitted variable bias. Fortunately, Theorem \ref{thm:ignorance_region_general,multi-y} highlights that even when $\Gamma$ is not identifiable, we can still identify a conservative bound for the omitted variable bias for any outcome by replacing $\Gamma$ with a square root of the inferred residual covariance, $\Sigma_{y\mid t, x}^{1/2}$. More generally, we might be able to establish less conservative bounds, by calibrating the fraction of each outcome's residual variance explained by potential confounders. Given $\Gamma$, the fraction of residual variance in outcome $a'Y$ explained by confounders is $R^2_{a'Y \sim U \mid T,X} = \frac{a'\Gamma\Gamma'a}{a'(\Gamma\Gamma' +\Lambda_{y\mid t, u ,x})a}$. By making the factor confounding assumption, we avoid having to directly specify $R^2_{a'Y \sim U \mid T,X}$ for each of $q$ outcomes, but when we suspect factor confounding is violated for outcome $a'Y$, we can make adjustments to $\Gamma$ by reasoning about $R^2_{a'Y \sim U \mid T,X}$. For these reasons, we view factor confounding as a useful reference assumption, but one that must be rigorously justified.
In Section \ref{sec:cal_r2y}, we discuss how to calibrate $R^2_{a'Y\sim U \mid T, X}$ when factor confounding does not hold. In the analysis of Section \ref{sec:nhanes} we demonstrate how to reason about the plausibility of factor confounding in a real data example.
\section{Calibration and Robustness}
\label{sec:calibration,multi-y}
In this section, we start by describing strategies for calibrating the unidentified $m$-dimensional sensitivity vector via $R^2_{T \sim U \mid X}$, the sole sensitivity parameter determining the worst-case bias under factor confounding. We then propose a robustness value for factor confounding, which corresponds to the smallest value of $R^2_{T \sim U \mid X}$ needed to change the sign of the causal effect, and demonstrate how robustness is influenced by assumptions about null control outcomes. Lastly, we propose an alternative sensitivity parameterization tailored to binary treatments in Section \ref{sec:cali_binary_t} and explore approaches for relaxing the factor confounding assumption in Section \ref{sec:cal_r2y}.
\subsection{Calibrating $R^2_{T \sim U \mid X}$ and Characterizing Robustness}
\label{sec:cal_rsqtx}
Recall that the magnitude of $\beta$ is bounded and can be characterized by $R^2_{T \sim U \mid X}$. For model \eqref{eqn:treatment,multi-y}, $R^2_{T \sim U \mid X}$ can directly be interpreted as the partial fraction of treatment variance explained by confounders given $X$,
$R^2_{T \sim U \mid X} = \frac{\text{Var}(T | X) - \text{Var}(T | U,X)}{\text{Var}(T| X)} = \frac{\| \beta \|^2_2}{\sigma_{t|x}^2}$,
and more generally, as the norm of the partial correlations between the treatment and confounders (see Section \ref{sec:bias_nonGaussian}). Following prior work, we can calibrate $R^2_{T \sim U \mid X}$ by comparing it to an estimable partial fraction of variance explained. For a reference covariate (or set of covariates), $X_j$, and given the remaining baseline covariates $X_{-j}$, we compute the partial R-squared $R_{T \sim X_j \mid X_{-j}}^2 := \frac{R_{T \sim X}^2 - R_{T \sim X_{-j}}^2}{1 - R_{T \sim X_{-j}}^2}$, which serves as a reference for the magnitude of $R^2_{T \sim U \mid X}$. See \citet{cinelli2020making} and \citet{zheng2021copula} for additional details on calibration.
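As an illustrative sketch of this benchmarking step (our own code, not the paper's software), the partial R-squared of a reference covariate can be computed from two ordinary least squares fits:

```python
import numpy as np

def r2(y, X):
    """Plain in-sample R-squared from an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def partial_r2(t, X, j):
    """Partial R-squared of covariate j for the treatment, given the
    remaining covariates: (R2_full - R2_without_j) / (1 - R2_without_j)."""
    X_minus = np.delete(X, j, axis=1)
    r2_full, r2_base = r2(t, X), r2(t, X_minus)
    return (r2_full - r2_base) / (1 - r2_base)
```

The resulting value serves as a plausible reference magnitude for $R^2_{T \sim U \mid X}$ when the analyst believes unmeasured confounders are comparable in strength to the reference covariate.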
\label{sec:cal_rv}
A common strategy for characterizing the robustness to confounding is to identify the ``smallest'' sensitivity parameter(s) which makes the causal effect change sign. For example, for single outcome inference \citet{cinelli2020making} define the robustness value as the smallest value of $R^2_{T \sim U \mid X} = R^2_{Y\sim U \mid T, X}$ needed to change the sign of the effect, and define an ``extreme robustness value'' (XRV) as the smallest value of $R^2_{T \sim U \mid X}$ needed to change the sign when $R^2_{Y\sim U \mid T, X}=1$ \citep{cinelli2020omitted}. Here, we define the factor confounding robustness value, $RV^\Gamma_a$, of outcome $a'Y$ as the smallest value of $R^2_{T \sim U \mid X}$ needed to make the causal effect zero under factor confounding. $RV^\Gamma_a = XRV_a$ when $a'\Gamma\Gamma'a = a'\Sigma_{y\mid t, x}a$. For model \eqref{eqn:confounder,multi-y}-\eqref{eqn:outcome,multi-y}, $RV^{\Gamma}_a$ is available analytically as $RV^{\Gamma}_{a} = \frac{\omega}{1+\omega}$ with
$\omega = \left(\frac{a'\check{\tau} \sigma_t}{\parallel a'\Gamma \parallel_2}\right)^2$.
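Once $\check{\tau}$, $\Gamma$, and $\sigma_t$ are estimated, the robustness value is a one-line computation. A minimal sketch (ours; the function name is hypothetical):

```python
import numpy as np

def robustness_value(a, tau_check, Gamma, sigma_t):
    """Factor-confounding robustness value RV^Gamma_a = omega/(1+omega),
    with omega = (a' tau_check * sigma_t / ||a' Gamma||_2)^2."""
    omega = (a @ tau_check * sigma_t / np.linalg.norm(a @ Gamma)) ** 2
    return omega / (1 + omega)
```

By construction the value always lies in $[0, 1)$, and it shrinks as the treatment residual standard deviation $\sigma_t$ shrinks or as the outcome's confounder loadings $\|a'\Gamma\|_2$ grow.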
In addition, we can characterize how assumptions about null control outcomes influence this robustness value. First, note that when there is only a single null control, indexed by $c$, $R^2_{min}= RV^{\Gamma}_{e_c}$, since $RV^{\Gamma}_{e_c}$ is the smallest fraction of treatment variance needed to nullify outcome $c$. In other words, the total fraction of treatment variance explained by confounders is lower bounded by the robustness value for the null control. We define the ``combined robustness value'' with null controls, $\mathcal{C}$, as
\begin{equation}
\label{def:combined-rv}
RV_{a,\mathcal{C}}^\Gamma := \min_{\beta}\,\, \frac{||\beta||^2_2}{\sigma^2_{t \mid x}} \text{ s.t. } a'\tau = 0 \text{ and } e_c'\tau = 0, \,\forall\, c \in \mathcal{C},
\end{equation}
\noindent where $e_c$ is the $c$th canonical basis vector. $RV_{a,\mathcal{C}}^\Gamma$ corresponds to the minimum fraction of treatment variance explained by unobserved confounders required to both make the causal effect on $a'Y$ equal to zero \emph{and} satisfy all null control assumptions. Naturally, the minimum fraction of treatment variance explained by confounders needed to satisfy the null controls assumptions and nullify $a'Y$ must be larger than the minimum fraction needed to nullify just $a'Y$.
\begin{corollary}
\label{corollary:rv_increase}
Under assumptions established in Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y}, the combined robustness value for outcome $a'Y$ given null controls $\mathcal{C}$ is
\begin{equation}
RV_{a,\mathcal{C}}^\Gamma = \frac{\sigma_{t\mid x}^2 w_{\mathcal{C}}}{1 + \sigma_{t\mid x}^2 w_\mathcal{C}}\geq \max(R^2_{min}, RV^{\Gamma}_{a})
\end{equation}
where $w_{\mathcal{C}} = \frac{(a'(\check{\tau} - \Gamma\Gamma^\dagger_{\mathcal{C}}\check{\tau}_{\mathcal{C}}))^2}{\parallel a'\Gamma P_{\Gamma_{\mathcal{C}}}^{\perp} \parallel^2_2} + \parallel \Gamma_{\mathcal{C}}^{\dagger}\check{\tau}_{\mathcal{C}}\parallel^2_2$.
\end{corollary}
\noindent There is some subtlety in the definition of $RV_{a,\mathcal{C}}^{\Gamma}$, since $a' \check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}$ can have the opposite sign of $a'\check{\tau}$, in which case $RV_{a,{\mathcal{C}}}^\Gamma$ refers to the robustness of the sign of $a' \check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}$, which is the effect when all the confounding bias is captured by the null controls.
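The combined robustness value can be sketched directly from the corollary, assuming estimates of $\check{\tau}$ and $\Gamma$ are available (illustrative code, not the paper's implementation):

```python
import numpy as np

def combined_rv(a, tau_check, Gamma, null_idx, sigma2_t):
    """Combined robustness value RV^Gamma_{a,C}: the minimum fraction of
    treatment variance explained by confounders needed to nullify a'Y
    *and* satisfy all null control assumptions.
    Computes w_C = (a'(tau - Gamma Gc^+ tau_C))^2 / ||a'Gamma P_perp||^2
                   + ||Gc^+ tau_C||^2
    and returns sigma2_t * w_C / (1 + sigma2_t * w_C)."""
    Gc = Gamma[null_idx, :]
    Gc_pinv = np.linalg.pinv(Gc)
    P_perp = np.eye(Gamma.shape[1]) - Gc_pinv @ Gc
    correction = a @ Gamma @ Gc_pinv @ tau_check[null_idx]
    num = (a @ tau_check - correction) ** 2
    denom = np.linalg.norm(a @ Gamma @ P_perp) ** 2
    w = num / denom + np.linalg.norm(Gc_pinv @ tau_check[null_idx]) ** 2
    return sigma2_t * w / (1 + sigma2_t * w)
```

With an empty null control set the formula reduces to the single-outcome robustness value $RV^{\Gamma}_a$, since the bias correction and the second term both vanish.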
\subsection{Calibration for Binary Treatments}
\label{sec:cali_binary_t}
For binary-valued treatments, it is somewhat unnatural to use the R-squared parameterization. As an alternative, we suggest a parameterization which is closely related to the parameterization in the marginal sensitivity model for inference with binary-valued treatments in single outcome problems \citep{tan2006distributional}. Let $e(X) = P(T=1 \mid X)$ be the ``nominal'' propensity score, $e_0(X, U) = P(T=1 \mid X, U)$ be the ``true'' propensity score and $\lambda(X, U) = \frac{e_0(X, U)/(1-e_0(X, U))}{e(X)/(1-e(X))}$ be the change in the odds of treatment after accounting for unmeasured confounders. The core assumption of the marginal sensitivity model is that this change in the odds of treatment is bounded by some constant, that is, $\Lambda^{-1} \leq \frac{e_0(X, U)/(1-e_0(X, U))}{e(X)/(1-e(X))}\leq \Lambda$ holds with probability one. Under parametric assumptions about the conditional distribution of $U$, we can characterize the full distribution of $\lambda(X, U)$. Although not strictly necessary, in this work we largely focus on models in which $U$ is assumed to be Gaussian given $T$ and $X$.
\begin{proposition}
Assume $U$ is conditionally Gaussian with mean $\mu_{u\mid t,x}$ and covariance $\Sigma_{u\mid t,x}$ as given in Equations \ref{eqn:conditional_u_mean} and \ref{eqn:conditional_u_cov}. Further, denote $e(x) = P(T=1\mid X=x)$ and $e_0(x, U) = P(T=1 \mid X=x, U)$. Then, for any $\beta$ we have $\lambda(X=x, U) = \frac{e_0(x, U)/(1-e_0(x, U))}{e(x)/(1-e(x))} = \text{exp}(V_x)$, where $V_x = (2I_x-1)Z$, $I_x\sim \text{Ber}(e(x)), Z \sim N(\mu_\lambda, \sigma^2_\lambda )$ with $\mu_\lambda = \frac{1}{2\sigma^2_{t\mid x}}\frac{R^2_{T \sim U \mid X}}{1-R^2_{T \sim U \mid X}}$ and $\sigma^2_\lambda = \frac{1}{\sigma^2_{t\mid x}}\frac{R^2_{T \sim U \mid X}}{1-R^2_{T \sim U \mid X}}$.
\label{prop:lambda}
\end{proposition}
\noindent Proposition \ref{prop:lambda} states that $\text{log}(\lambda(X=x, U))$ is a two-component mixture of normal distributions. Using this proposition, we can find $\Lambda_\alpha$ such that
\begin{equation}
P(\Lambda^{-1}_\alpha \leq \lambda(X, U) \leq \Lambda_\alpha) \geq 1-\alpha.
\label{eqn:lambda_bound}
\end{equation}
Marginally over $x$, $\lambda(X,U)$ is a two-component mixture with means $\pm \mu_\lambda$, variance $\sigma^2_{\lambda}$ and mixture weights $E[e(X)]$ and $1-E[e(X)]$, which we use to compute $\Lambda_\alpha$. Since $\mu_\lambda$ and $\sigma^2_{\lambda}$ only depend on $R^2_{T\sim U|X}$ and $\sigma^2_{t\mid x}$, we can also derive the robustness of outcome $a'Y$ in the $\Lambda$-parameterization by replacing $R^2_{T\sim U|X}$ with $RV^{\Gamma}_a$ in the formulas for $\mu_\lambda$ and $\sigma^2_{\lambda}$. Similarly, we can benchmark $\Lambda_\alpha$ by computing how much the odds of treatment change when adding a reference covariate into a propensity model which already includes some baseline covariates \citep[see e.g.][]{kallus2021minimax, dorn2022sharp}. Here, we can compute the $1-\alpha$th quantile of $\frac{e(X_{\text{base}}, X_{\text{ref}})/(1-e(X_{\text{base}}, X_{\text{ref}}))}{e(X_{\text{base}})/(1-e(X_{\text{base}}))}$ where $X_{\text{base}}$ is a set of observed baseline covariates and $X_{\text{ref}}$ is a set of reference covariates. We demonstrate an analysis using this benchmarking strategy in Section \ref{sec:nhanes}.
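Computing $\Lambda_\alpha$ only requires the CDF of the two-component normal mixture from Proposition \ref{prop:lambda}. A small sketch using bisection (ours; it assumes the Gaussian model for $U$ described above):

```python
import math

def lambda_alpha(r2_tu, sigma2_t, p_treat, alpha=0.05):
    """Smallest L such that P(1/L <= lambda(X,U) <= L) >= 1 - alpha,
    where log lambda(X,U) is a two-component normal mixture with means
    +/- mu_lambda, variance sigma2_lambda, and weights p_treat, 1-p_treat:
      mu_lambda     = (1 / (2 sigma2_t)) * r2 / (1 - r2)
      sigma2_lambda = (1 / sigma2_t)     * r2 / (1 - r2)."""
    ratio = r2_tu / (1 - r2_tu)
    mu = ratio / (2 * sigma2_t)
    s = math.sqrt(ratio / sigma2_t)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
    F = lambda v: p_treat * Phi((v - mu) / s) + (1 - p_treat) * Phi((v + mu) / s)
    cover = lambda v: F(v) - F(-v)            # P(|log lambda| <= v)
    lo, hi = 0.0, mu + 10 * s                 # bracket for bisection
    for _ in range(200):
        mid = (lo + hi) / 2
        if cover(mid) < 1 - alpha:
            lo = mid
        else:
            hi = mid
    return math.exp(hi)
```

As expected, $\Lambda_\alpha \to 1$ as $R^2_{T \sim U \mid X} \to 0$ and grows monotonically with the treatment variance attributed to confounding.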
\subsection{Calibrating outcome variance explained by confounding}
\label{sec:cal_r2y}
So far, we have focused on settings in which $\Gamma$ is identifiable up to a causal equivalence class (factor confounding), or fixed by assumption (e.g. to $\Sigma^{1/2}_{y\mid t, x}$ which leads to the largest ignorance region for all causal effects given any choice of $R^2_{T \sim U \mid X}$). As shown in Theorem \ref{thm:gamma-identifiability}, factor confounding requires that each confounder influences multiple outcomes and that the number of confounders, $m$, is not too large relative to the number of outcomes, $q$. Factor confounding makes particular sense in some settings, for instance, when confounding is caused by batch effects \citep{gagnon2013removing} or when we have domain knowledge that specific unmeasured confounders are likely to influence several outcomes. Given that factor confounding rests on these untestable assumptions, we also recommend benchmarking the outcome variance explained by factor confounding, $R^2_{a'Y\sim U \mid T, X} = \frac{\text{Var}(a'Y|T, X) - \text{Var}(a'Y \mid U, T, X)}{\text{Var}(a'Y| T, X)} = \frac{a'\Gamma\Gamma'a}{a'(\Gamma\Gamma' +\Lambda_{y\mid t, u ,x})a}$.
In general, our strategy is to lower bound $R^2_{a'Y \sim U \mid T,X}$ with values based on the factor confounding assumption and then increase $R^2_{a'Y \sim U \mid T,X}$ for specific outcomes as needed based on benchmarked values. Assume that we obtain $\hat \Gamma$ and $\hat \Lambda_{y\mid t,u,x}$ by fitting a factor model, and use these to compute
$\frac{a'\hat \Gamma\hat \Gamma'a}{a'(\hat \Gamma\hat \Gamma' +\hat\Lambda_{y\mid t, u, x}) a}$. We can compare this quantity to partial R-squared measures computed from reference covariates. For a reference covariate $X_j$ and all baseline covariates $X_{-j}$, we can compute the partial R-squared for outcome $a'Y$, $R_{a'Y \sim X_j \mid X_{-j}, T}^2 := \frac{R_{a'Y \sim X, T}^2 - R_{a'Y\sim X_{-j}, T}^2}{1 - R_{a'Y \sim X_{-j}, T}^2}$. When $\frac{a'\hat \Gamma\hat \Gamma'a}{a'(\hat \Gamma\hat \Gamma' +\hat\Lambda_{y\mid t, u, x}) a}$ is large relative to expectations based on these benchmark values, then we might expect bounds for the causal effect on outcome $a'Y$ to be conservative under factor confounding. In cases where $\frac{a'\hat \Gamma\hat \Gamma'a}{a'(\hat \Gamma\hat \Gamma' +\hat\Lambda_{y\mid t, u, x}) a}$ is small relative to expectations from benchmark values, we should consider the possibility that factor confounding is violated for outcome $a'Y$ and make appropriate adjustments. For example, there may be confounders of outcome $a'Y$ which are not reflected in the inferred loading matrix $\hat \Gamma$ because they are uncorrelated with other outcomes (``single outcome confounders'').
In this case, we can manually attribute additional residual outcome variance on a per outcome basis by computing bounds on the causal effects using $\Gamma = \text{chol}(\hat \Gamma \hat \Gamma' + D)$ where $D$ is a diagonal matrix such that $0 \preceq D \preceq \hat \Lambda_{y\mid t, u, x}$ and $\text{chol}(A)$ denotes the Cholesky factor of matrix $A$. By choosing $D$ appropriately, we can fix $R^2_{a'Y \sim U \mid T, X}$ to any value between $\frac{a'\hat \Gamma\hat \Gamma'a}{a'(\hat \Gamma\hat \Gamma'+\hat \Lambda_{y\mid t,u,x}) a}$ and $1$. The most conservative sensitivity bounds are achieved by using $D = \hat \Lambda_{y \mid t,u,x}$ so that $\Gamma\Gamma' = \Sigma_{y\mid t,x}$ and $R^2_{a'Y \sim U \mid T,X} = 1$ for any $a$ (Theorem \ref{thm:ignorance_region_general,multi-y}). As we will illustrate empirically in Section \ref{sec:nhanes}, changing $R^2_{a'Y \sim U \mid T,X}$ for the null control outcomes also impacts the sensitivity regions for the non-null controls.
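The per-outcome adjustment can be sketched as follows (illustrative code of ours; when $\hat\Gamma\hat\Gamma' + D$ is singular, a symmetric square root via eigendecomposition can replace the Cholesky factor):

```python
import numpy as np

def adjust_loadings(Gamma_hat, Lambda_hat, extra_r2):
    """Replace Gamma with chol(Gamma Gamma' + D) to attribute extra
    residual variance to confounding on a per-outcome basis.
    extra_r2[j] in [0, 1] is the fraction of the idiosyncratic variance
    Lambda_hat[j] reattributed to confounders (0 recovers factor
    confounding for outcome j; 1 gives the fully confounded case,
    R^2_{Y_j ~ U | T,X} = 1)."""
    D = np.diag(np.asarray(extra_r2) * np.asarray(Lambda_hat))  # 0 <= D <= Lambda_hat
    return np.linalg.cholesky(Gamma_hat @ Gamma_hat.T + D)
```

The returned matrix plays the role of $\Gamma$ in all of the bias bounds above, and by construction its Gram matrix equals $\hat\Gamma\hat\Gamma' + D$.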
\begin{comment}
Briefly, we comment on how accounting for additional confounding changes the bias correction induced by null control outcomes. Whenever we increase the fraction of null control outcome variance assumed due to confounding, we decrease the magnitude of the bias adjustment, $a'\Gamma\Gamma^{\dagger}_\mathcal{C}\check{\tau}_{\mathcal{C}}$, for all other outcomes, $a'Y$. We formalize this in the proposition below.
\begin{proposition}
Let $\tilde \Gamma = \mathrm{chol}(\hat \Gamma \hat \Gamma' + D_\mathcal{C})$, where $D_\mathcal{C}$ is zero everywhere except for nonnegative entries along the diagonal elements corresponding to the null control outcomes. Then,
\begin{equation}
|a'\tilde \Gamma \tilde \Gamma^{\dagger}_{\mathcal{C}} \check{\tau}_{\mathcal{C}}| \leq |a'\hat \Gamma\hat \Gamma^{\dagger}_{\mathcal{C}} \check{\tau}_{\mathcal{C}}|
\end{equation}
with equality if and only if $D_\mathcal{C}=0$.
\label{prop:nc_bias_correction}
\end{proposition}
\noindent The left-hand side of the equation corresponds to the magnitude of the bias correction for outcome $a'Y$ when we allow for additional confounding in the null control outcomes and the right-hand side is the original bias correction assuming factor confounding. Intuitively, assuming that $\hat \Gamma$ accounts for all residual covariation between the outcomes, incorporating additional ``single outcome confounding'' can only reduce the correlation between the omitted variable bias of the null controls and all other outcomes. Thus, the bias correction is largest when factor confounding is satisfied for the null controls. We illustrate this phenomenon in the analysis of NHANES data in Section \ref{sec:nhanes}.
\end{comment}
\section{Analyzing The Effects of Light Alcohol Consumption}
\label{sec:nhanes}
We demonstrate our proposed multi-outcome sensitivity analysis by investigating a long-standing question about the potential health benefits of light to moderate alcohol consumption on health outcomes. In particular, observational data indicates that light alcohol consumption is positively correlated with blood levels of HDL (``good cholesterol'') and negatively correlated with LDL (``bad cholesterol'') \citep[e.g.][]{choudhury1994alcohol, meister2000health, o2007alcohol}. However, there are known to be many potential confounders related to diet and lifestyle which could explain these associations. We consider treated individuals ($n_T=114$) as those who self-reported drinking between one and three alcoholic beverages per day, and untreated individuals ($n_C=1439$) as those that averaged one drink per week or less. We make use of laboratory outcomes, $Y = (Y_1, \cdots, Y_{10})$, which consist of three measures of cholesterol (HDL, LDL, and triglycerides), blood levels of potassium, iron, sodium, and glucose, and levels of three environmental toxicants (methylmercury, cadmium, and lead), all collected from 2017-2020 (pre-pandemic) as part of the National Health and Nutrition Examination Survey (NHANES). We control for observed confounders which include age, gender, and an indicator for education beyond a high school degree.
In our analysis, we consider the layers of assumptions that one might make to reason about the set of causal effects which are consistent with the observed data. First, we report posterior intervals for causal effects under the NUC assumption, and then consider bounds on the causal effects under factor confounding (Definition \ref{def:gamma_conf}). We calibrate the bounds by benchmarking values of $\Lambda_{0.95}$ using metrics computed from the observed covariates (Section \ref{sec:cali_binary_t}). Alternatively, in lieu of specifying $\Lambda_{0.95}$ directly, we also explore the implications of a carefully chosen null control outcome on non-null outcomes. We report robustness values for all outcomes both with and without the null control assumption. Finally, following the suggestions in Section \ref{sec:cal_r2y}, we use the inferred factor loading matrix to reason about the plausibility of factor confounding for each outcome. Specifically, we use inferred values of $R^2_{a'Y \sim U \mid T,X}$ to identify outcomes for which there may be additional confounding that is not reflected in the inferred factor loading matrix. We focus primarily on potential violations of factor confounding for our chosen null control outcome, for which the inferred value of $R^2_{a'Y \sim U \mid T,X}$ under factor confounding is relatively small. We briefly compare our qualitative conclusions to those from a related analysis by \citet{rosenbaum2021sensitivity}.
We start by estimating associations under NUC by fitting a linear regression model, where the estimands reduce to the regression coefficients, $\tau_j$, for $j = 1, \cdots, 10$. The logarithms of all outcomes are approximately unimodal, symmetric and not heavy tailed, and thus we regress the log outcomes on age, gender, education and the alcohol consumption indicator to estimate $\check{\tau}_j$ under an assumption of no unobserved confounding.
Here, we fit model \ref{eqn:outcome,multi-y}, a Bayesian multivariate linear regression with a rank-$m$ factor model structure on the Gaussian residuals using Stan \citep{stan}. Note that factor confounding can only be satisfied if $m \le 6$; otherwise $(q - m)^2 - q - m < 0$. As such, we fit models of rank $m \le 6$ and use Pareto-Smoothed Importance Sampling estimates of the leave-one-out cross-validation loss to evaluate relative model fit \citep{vehtari2017practical}. We compared differences in expected log predictive density (ELPD) for different ranks and found that models with ranks $m=5$ and $m=6$ have an ELPD within one standard deviation of the full rank model, and thus can be viewed as statistically indistinguishable from the model in which there are no constraints on the residual covariance matrix, $\Sigma_{y\mid t, x}$ (See Appendix Table \ref{tab:elpd}). For the remainder of the analysis, we proceed with the rank-5 model as the smallest model which can explain the correlations in the data (See Appendix Figure \ref{fig:nhanes_heat} for a heatmap of the inferred $\Gamma$).
We find that age, gender, and education are significant predictors for almost every outcome (Appendix Table \ref{tab:obs_partial_rsq}) and are also all significantly correlated with the propensity for light drinking. We use these measured covariates to compute benchmark values for the sensitivity parameters in the $\Lambda$-parameterization for binary treatments (Section \ref{sec:cali_binary_t}). First, we compute how much the predicted odds of light drinking would change if any of the observed covariates---age, gender, or education---were omitted from the analysis. The empirical 95\% quantile of $\text{Odds}(X)/\text{Odds}(X_{-\text{age}})$ is 3.5, which means that for 95\% of the observed units, the predicted odds of light drinking change by a multiplicative factor between $1/3.5$ and $3.5$ when adding age into a treatment model that already included gender and education. Similarly, we find that the 95\% empirical quantiles of $\text{Odds}(X)/\text{Odds}(X_{-\text{gender}})$ and $\text{Odds}(X)/\text{Odds}(X_{-\text{education}})$ are both $1.5$.
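This benchmarking computation can be sketched with two propensity fits. The following is a self-contained illustration of ours (with a hand-rolled Newton-Raphson logistic fit, not the paper's code), computing the empirical 95\% quantile of the odds change when a covariate is added to the propensity model:

```python
import numpy as np

def fit_logit(X, t, iters=50):
    """Simple Newton-Raphson logistic regression (intercept included);
    returns a function mapping covariates to fitted propensities."""
    X1 = np.column_stack([np.ones(len(t)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ b))
        W = p * (1 - p)
        H = X1.T @ (W[:, None] * X1) + 1e-8 * np.eye(X1.shape[1])
        b += np.linalg.solve(H, X1.T @ (t - p))
    return lambda Xnew: 1 / (1 + np.exp(
        -np.column_stack([np.ones(len(Xnew)), Xnew]) @ b))

def odds_benchmark(X, t, j, q=0.95):
    """Empirical q-quantile of max(r, 1/r) for r = Odds(X)/Odds(X_{-j}):
    how much the predicted odds of treatment change when covariate j is
    added to a propensity model containing the other covariates."""
    e_full = fit_logit(X, t)(X)
    X_minus = np.delete(X, j, axis=1)
    e_base = fit_logit(X_minus, t)(X_minus)
    ratio = (e_full / (1 - e_full)) / (e_base / (1 - e_base))
    return np.quantile(np.maximum(ratio, 1 / ratio), q)
```

A strongly predictive covariate yields a benchmark well above one, while an irrelevant covariate yields a benchmark near one, mirroring the contrast between age (3.5) and gender or education (1.5) reported above.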
In Figure \ref{fig:intervals_no_single_outcome_confounders}, we plot the 95\% posterior intervals for the causal effects on each of the ten outcomes under the no unobserved confounding assumption as black lines. We find that HDL cholesterol, lead, methylmercury, and potassium are positively associated with light drinking and glucose is negatively associated with light drinking. The 95\% credible intervals for all other outcomes include zero. We also include the worst case bounds for each outcome assuming $\Lambda_{0.95} = 3.5$ (black rectangles), which matches the benchmark value computed using age, the observed confounder with the strongest relationship to the treatment. While each marginal interval except methylmercury includes zero under this assumption, it is \emph{not} true that all of those outcomes could simultaneously be zero at $\Lambda_{0.95}=3.5$, since the worst-case bounds for each outcome are achieved with a different sensitivity vector. The marginal robustness values for each of these estimates under factor confounding are reported in the $\Lambda$-parameterization above each interval in black (Proposition \ref{prop:lambda}). Since there is estimation uncertainty for this quantity, we conservatively report the lower endpoint of the 95\% posterior interval for the robustness values. We find that under the factor confounding assumption, methylmercury levels are the most robust to confounding ($\Lambda_{0.95}=4.2$) followed by HDL ($\Lambda_{0.95}=3.5$). These robustness values correspond to multiplicative changes in the odds of light drinking which match or exceed the change observed when adding age into a model which already includes gender and education.
\begin{figure}
\centering
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/effect_intervals_weduc_2_odds_m5.png}
\caption{\label{fig:intervals_no_single_outcome_confounders}}
\end{subfigure}
\begin{subfigure}[t]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/effect_intervals_2_odds_m5_null_r2y1.png}
\caption{\label{fig:intervals_r2y1}}
\end{subfigure}
\caption{a) 95\% posterior credible intervals for the causal effects of light drinking on each outcome under NUC (black) and under factor confounding with $\Lambda_{0.95} \leq 3.5$ (black box). $\Lambda_{0.95}=3.5$ matches the magnitude of the multiplicative change in odds of light drinking when adding age into a propensity model which includes gender and education. In red, we plot the 95\% credible intervals when methylmercury is assumed to be a null control outcome and $\Lambda_{0.95} = \Lambda_{min}$, the value needed to nullify methylmercury under factor confounding (red intervals). Numbers in black indicate $\Lambda^{RV}_{0.95}$ for the outcomes whose credible intervals did not overlap zero under NUC and numbers in red indicate the combined robustness, $\Lambda^{RV}_{0.95, \mathcal{C}} \geq \Lambda_{min}$ after incorporating the null controls assumption. Listed values are conservatively reported as the lower endpoint of the 95\% posterior credible intervals of $\Lambda^{RV}_{0.95}$. b) Posterior intervals when assuming $R^2_{a'Y \sim U | T, X} = 1$ for methylmercury, so that all residual outcome variance in the null control is due to confounders (predominantly single outcome confounding). In this setting, the value needed to nullify mercury drops to $\Lambda_{min}=1.4$. The null control assumption is considerably weaker in this case, and thus has less influence on non-null outcomes.
\label{fig:interval_plots}}
\end{figure}
Despite the apparent robustness of these effects, we suspect that methylmercury levels are primarily tied to the consumption of fish and note that mercury is not found in alcoholic beverages. Since there is no other known credible mechanism for light drinking directly influencing mercury levels, methylmercury makes an ideal null control outcome. We then evaluate whether correcting the inferred bias in effect estimates for mercury also explains away the apparent effects for other outcomes. We plot 95\% posterior credible intervals for all effects after incorporating methylmercury as a null control, assuming no additional confounding beyond the smallest amount required to explain away methylmercury's association. Stated differently, these plots show the posterior distribution of $a'\check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}$ which corresponds to the midpoint of the causal effect ignorance regions for outcome $a'Y$ given null controls $\mathcal{C}$, or equivalently, the identifiable effect estimate under the assumption that $\Lambda_{0.95} =\Lambda_{min}$ (Theorem \ref{thm:ignorance-region-gaussian-wnc,multi-y}). If the posterior interval for an outcome includes 0, then apparently the null control assumption is enough to explain away the significance of the effect for that outcome. If it does not include zero, then in red we report the combined robustness, $\Lambda^{RV}_{0.95, \mathcal{C}}$, corresponding to the magnitude of confounding needed to nullify the effect of light drinking on mercury \emph{and} nullify the effect on the corresponding outcome (Corollary \ref{corollary:rv_increase}).
When methylmercury is a null control, $\Lambda_{min} = 4.2$, which is mercury's robustness value. The 95\% posterior credible intervals of $a'\check{\tau} - a' \Gamma \Gamma_{\mathcal{C}}^{\dagger} \check{\tau}_{\mathcal{C}}$ for HDL, lead, and glucose all include zero after incorporating the null control constraint, meaning that factor confounding and the null control assumption are together enough to explain away the causal effects for these outcomes, assuming no additional confounding beyond the minimum needed to explain away the effect on mercury. In contrast, the 95\% credible interval for the causal effect on potassium, which included zero under NUC, moves away from zero since potassium levels are negatively correlated with mercury levels, and becomes highly robust ($\Lambda^{RV}_{0.95, \mathcal{C}}=11.5$).
Under factor confounding, we might conclude that there is evidence that light alcohol consumption has a positive causal effect on potassium levels, but no apparent effect on HDL, lead, or glucose levels. To further understand the plausibility of the factor confounding assumption, we examine the implied values of $R^2_{a'Y\sim U | T, X}$, the partial variance in each outcome that is explained by the latent factors under factor confounding (see Section \ref{sec:cal_r2y}). In Appendix Table \ref{tab:obs_partial_rsq} we report $R^2_{a'Y\sim U | T, X}$ for each outcome under factor confounding and compare these values to the partial fraction of variance explained by each of gender, age and education. We find that while the observed covariates are significantly associated with the outcomes, they explain a small fraction of the total variation in them (mostly less than $0.1$). In contrast, we find that for the vast majority of outcomes, latent factors explain a relatively large fraction of the outcome variance, with $R^2_{a'Y \sim U \mid T,X}$ greater than $0.3$ for most outcomes. For these outcomes, we might expect that the bounds computed under factor confounding are conservative. Two important exceptions are methylmercury and potassium for which the values for $R^2_{a'Y\sim U \mid T, X}$ are $0.07$ and $0.05$ respectively, the smallest values across all outcomes. The inferred latent factors seem to explain far less residual variation in methylmercury and potassium than they do for most other measured outcomes.
The small value for mercury might indicate that there are environmental confounders that influence both mercury levels and the propensity for light drinking, but do not influence the other outcomes, thus violating factor confounding for methylmercury. To account for this possibility, we first establish robustness under the most conservative assumption about the outcome-confounder dependence for mercury, by fixing $a'\Gamma=a'\Sigma_{y\mid t, x}^{1/2}$ so that $R^2_{a'Y\sim U|T, X}=1$. In Figure \ref{fig:intervals_r2y1} we plot the posterior intervals and report $\Lambda_{0.95}$ when all the variance in this null control outcome is due to confounders. When $R^2_{a'Y\sim U|T, X}=1$ for mercury, the strength of treatment-confounder association needed to satisfy the null control assumption drops from $\Lambda^{RV}_{0.95} = 4.2$ to $\Lambda^{RV}_{0.95} = 1.4$. Additionally, the bias correction for non-null outcomes is very small when fixing $R^2_{a'Y\sim U|T, X}=1$ for methylmercury, since $1-0.07 = 0.93$ of the residual variance in methylmercury is driven by confounders which are uncorrelated with all other outcomes. In this case, the posterior interval for potassium changes very little and still overlaps zero. Relative to NUC, the qualitative conclusions for all outcomes remain essentially unchanged, although the robustness for each significant outcome increases slightly.
These results are consistent with the qualitative conclusions from \citet{rosenbaum2021sensitivity}, who argue that the apparent presence of confounding bias for methylmercury strengthens the robustness of the finding that light drinking increases HDL cholesterol. Likewise, we show that when $R^2_{a'Y \sim U \mid T,X}=1$ for methylmercury, the robustness of HDL increases from $\Lambda^{RV}_{0.95} = 3.5$ to $\Lambda^{RV}_{0.95, \mathcal{C}} = 3.9$ under the null control assumption. That said, our analysis under factor confounding suggests a more nuanced interpretation is in order: if $R^2_{a'Y \sim U|T,X} \ll 1$ for methylmercury (e.g. closer to benchmark values based on observed covariates), then a larger value of $\Lambda_{0.95}$ (i.e. closer to $4.2$ as in the results from Figure \ref{fig:intervals_no_single_outcome_confounders}) is needed to explain away the positive association of drinking with methylmercury. This value of $\Lambda_{0.95}$ is strong enough to explain away the apparent effect of light drinking on HDL, lead, and glucose levels. Stated differently, under factor confounding, a strong association between confounders and treatment is needed to explain the bias in mercury, which further implies a large bias adjustment for non-null outcomes. When $R^2_{a'Y \sim U \mid T,X}=1$, we assume the strongest possible association between confounders and mercury levels, which means that only a weak association between confounders and treatment is needed to account for the omitted confounder bias in the PATE for mercury. This leads to a small value of $\Lambda_{min}$ which is commensurate with the magnitude of the bias adjustment for non-null outcomes.
In summary, this analysis shows the many ways a practitioner could reason about the effects of light alcohol consumption on multiple outcomes.
Practitioners with sufficient domain knowledge could reason more rigorously about plausible values of $\Lambda_{0.95}$ and $R^2_{a'Y \sim U \mid T,X}$ for other outcomes which might violate factor confounding (e.g. potassium), and thus pin down a smaller set of plausible conclusions. In full generality, we note that each of the $q \times m$ entries of $\Gamma$ can always be specified manually by a practitioner, although in practice it is likely to be very difficult to rigorously justify any such choice without starting from higher level assumptions like those proposed in this work.
\section{Discussion}
In this paper, we propose a sensitivity analysis for characterizing the range of potential biases that can arise in observational analyses with multivariate outcomes. Unlike previous work on observational causal inference with multivariate outcomes, which typically require stronger assumptions for causal identification, we take a Bayesian approach to exploring the range of causal effects that are compatible with the observed data under different untestable assumptions about the strength of unobserved confounding. We show precisely how the bias varies by outcome and depends on the inferred residual covariance in the outcome model. When appropriate, we show how assumptions about factor model identifiability can be used to provide stronger results about the robustness of effects. We then characterize how null control outcomes influence both the partial identification region and robustness of effects.
There are several extensions and generalizations that are worth considering. Importantly, in this work, we focus primarily on modeling under a factor model structure, although extensions for non-continuous outcomes and more complex structures could be developed, perhaps by leveraging the copula decomposition proposed by \citet{zheng2021copula} or by making use of generalized latent variable models \citep{skrondal2004generalized}. We also focus on inference and sensitivity analysis in the Bayesian paradigm, and there is room for a more in-depth exploration of the effects of different prior distributions for partially identified parameters \citep[]{gustafson2015bayesian, zheng2021bayesian}.
Finally, this work builds on closely related work on sensitivity analyses for multi-treatment causal analyses \citep{zheng2021copula}. A natural follow-up would be to consider the identification implications for data that involve both multiple simultaneously applied treatments and multiple outcomes. This would further bridge our work and recent works in proximal causal inference \citep{tchetgen2020introduction}. There may be particular connections to the work of \citet{miao2018identifying}, who discuss conditions under which the average treatment effects can be nonparametrically identified with a single null control treatment and a null control outcome, via a double null controls design \citep[see also][]{miao2018confounding, shi2020multiply}.
\section{Introduction}
The 13th Hilbert problem asks whether all functions can be
represented as compositions of binary functions. This question
can be understood in different ways. Initially Hilbert was interested in
a specific function (roots of a polynomial as function of its coefficients).
Kolmogorov and Arnold (see~\cite{hilbert-positive}) gave a kind
of positive answer for
continuous functions, proving that any continuous function of several
real arguments can be represented as a composition of continuous
unary functions and addition (a binary function). On the other
hand, for differentiable functions a negative answer was obtained
by Vituschkin. Later Kolmogorov interpreted this result in terms
of information theory (see~\cite{hilbert-negative}): the
decomposition is impossible since there are ``many more'' ternary
functions than compositions of binary ones.
In a discrete setting this information-theoretic argument
was used by Hansen, Lachish and
Miltersen~\cite{hansen-lachish-miltersen}. We consider
similar questions in a (slightly) different
setting.
Let us start with a simple decomposition problem. An input (say,
a binary string) is divided into three parts $x$, $y$ and $z$.
We want to represent $T(x,y,z)$ (for some function $T$)
as a composition of three binary
functions:
$$
T(x,y,z)=t(a(x,y),b(y,z)).
$$
In other words, we want to compute $T(x,y,z)$ under the
following restrictions:
\begin{figure}
\begin{center}
\includegraphics[scale=0.85]{decomp-1.mps}
\end{center}
\caption{Information transmission for the decomposition.}
\label{abc}
\end{figure}
\noindent
node $A$ gets $x$ and $y$ and computes some function $a(x,y)$;
node $B$ gets $y$ and $z$ and computes some function $b(y,z)$;
finally, the output node $T$ gets $a(x,y)$ and $b(y,z)$ and
should compute $T(x,y,z)$.
The two upper channels have limited capacity; the question is
how much capacity is needed to make such a decomposition
possible. If $a$- and $b$-channels are wide enough, we may
transmit all the available information, i.e., let
$a(x,y)=\langle x,y\rangle$ and $b(y,z)=\langle y,z\rangle$.
Even better, we can split $y$ in an arbitrary proportion and
send one part with $x$ and the other one with $z$.
Is it possible to use less capacity? The answer evidently
depends on the function $T$. If, say, $T(x,y,z)$ is \texttt{xor}
of all bits in $x$, $y$ and $z$, one bit for $a$- and $b$-values
is enough. However, for other functions $T$ it is not the case,
as we see below.
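For the \texttt{xor} example, the one-bit decomposition can be written out explicitly. The following sketch (function names are illustrative, not from the text) checks it exhaustively for short inputs:

```python
from itertools import product

def parity(bits):
    return sum(bits) % 2

# a sees only (x, y); it forwards the combined parity of both parts
def a(x, y):
    return parity(x) ^ parity(y)

# b sees only (y, z); y's parity is already accounted for in a
def b(y, z):
    return parity(z)

# the output node combines the two one-bit messages
def t(av, bv):
    return av ^ bv

# exhaustive check for 2-bit parts x, y, z
for x in product((0, 1), repeat=2):
    for y in product((0, 1), repeat=2):
        for z in product((0, 1), repeat=2):
            assert t(a(x, y), b(y, z)) == parity(x + y + z)
```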
In the sequel we prove different lower bounds for the necessary
capacity of two upper channels in different settings; then we
consider related questions in the framework of multi-source
algorithmic information theory~\cite{multisource}.
Before going into details, let us note that the definition of
communication complexity can be reformulated in similar terms:
one-round communication
complexity corresponds to the network
\begin{center}
\includegraphics[scale=0.85]{decomp-2.mps}
\end{center}
\noindent
(dotted line indicates channel of limited capacity) while
two-round communication complexity corresponds to the network
\begin{center}
\includegraphics[scale=0.85]{decomp-3.mps}
\end{center}
etc. Another related setting that appears in communication
complexity theory: three inputs $x,y,z$ are distributed between
three participants; one knows $x$ and $y$, the other knows $y$
and $z$, the third one knows $x$ and $z$; all three participants
send their messages to the fourth one who should compute
$T(x,y,z)$ based on their messages (see~\cite{nisan}).
One can naturally define communication complexity for other
networks (we select some channels and count the bits that go
through these channels).
\section{Decomposition complexity}\label{sec:communication}
Now let us give formal definitions.
Let $T=T(x,y,z)$ be a function defined on $\mathbb{B}^p\times
\mathbb{B}^q\times \mathbb{B}^r$ (here $\mathbb{B}^k$ is the set
of $k$-bit binary strings) whose values belong to some set $M$. We say
that \emph{decomposition complexity} of $T$ does not exceed $n$
if there exist $u+v\le n$ and functions $a\colon
\mathbb{B}^p\times\mathbb{B}^q\to\mathbb{B}^u$, $b\colon
\mathbb{B}^q\times\mathbb{B}^r\to\mathbb{B}^v$ and $t\colon
\mathbb{B}^u\times\mathbb{B}^v\to M$ such that
$$
T(x,y,z)=t(a(x,y),b(y,z))
$$
for all $x\in \mathbb{B}^p$, $y\in \mathbb{B}^q$, $z\in
\mathbb{B}^r$. (As in communication complexity, we take into
account the total number of bits transmitted via both restricted
links. More detailed analysis could consider $u$ and $v$
separately.)
\subsection{General upper and lower bounds}
Since the logarithm of the image cardinality is an evident lower
bound for decomposition complexity, it is natural to consider
\emph{predicates} $T$ (so this lower bound is trivial). This makes
our setting different from \cite{hansen-lachish-miltersen} where
all the arguments and values have the same size. However,
the same simple counting argument can be used to provide
worst-case lower bounds for arbitrary functions.
\begin{theorem}
\label{th:general}
\textup{\textbf{(Upper bounds)}}~Complexity of any function does not exceed
$n=p+q+r$; complexity of any predicate does not exceed $2^r+r$,
and also $2^p+p$.
\textup{\textbf{(Lower bound)}}~If $p$ and $r$ are not too small
\textup(at least $\log n+O(1)$\textup), then there exists a
predicate with decomposition complexity $n-O(1)$.
\end{theorem}
The second statement shows that the upper bounds provided by the
first one are rather tight.
\proof (Upper bounds)~For the first bound one can let, say,
$a(x,y)=\langle x,y\rangle$ and $b(y,z)=z$. (One can also
split $y$ between $a$ and $b$ in an arbitrary proportion.)
For the second bound: for each $x,y$ the predicate $T_{x,y}$
$$
z\mapsto T_{x,y}(z)=T(x,y,z)
$$
can be encoded by $2^r$ bits, so we let $a(x,y)=T_{x,y}$ and
$b(y,z)=z$ and get decomposition complexity at most $2^r+r$. The
bound $2^p+p$ is obtained in a symmetric way.
(Lower bound)~We can use a standard counting argument (in the same way
as in \cite{hansen-lachish-miltersen}; they consider functions,
not predicates, but this does not matter much).
Let us count how many possibilities we have for a predicate with
decomposition complexity $m$ or less. Choosing such a predicate,
we first have to choose numbers $u$ and $v$ such that $u+v\le m$.
Without loss of generality we may assume that $u+v=m$ (adding
dummy bits). First, let us count (for fixed $u$ and $v$)
all the decompositions where
$a$ has $u$-bit values and $b$ has $v$-bit values.
We have $(2^u)^{2^{p+q}}$ possible $a$'s, $(2^v)^{2^{q+r}}$
possible $b$'s and $2^{2^{u+v}}$ possible $t$'s, i.e.,
$$
2^{u2^{p+q}}\cdot
2^{v2^{q+r}}\cdot
2^{2^{u+v}} = 2^{u2^{p+q}+v2^{q+r}+2^{u+v}}\le
2^{(u+v)2^{p+q}+(u+v)2^{q+r}+2^{u+v}}
$$
possibilities (for fixed $u,v$). In total we get at most
$$
m 2^{m2^{p+q}+
m2^{q+r}+
2^m}
$$
predicates of decomposition complexity $m$ or less (the factor
$m$ appears since there are at most $m$ decompositions of $m$
into a sum of positive integers $u$ and $v$). Therefore, if all $2^{2^n}$ predicates
$\mathbb{B}^p\times\mathbb{B}^q\times\mathbb{B}^r\to\mathbb{B}$
have decomposition complexity at most $m$, then
$$
m 2^{m2^{p+q}+m2^{q+r}+2^m} \ge 2^{2^n}
$$
or
$$
\log m + m2^{p+q}+m2^{q+r}+2^m \ge 2^n
$$
At least one of the terms in the left-hand side should be
$\Omega(2^n)$, therefore either $m\ge n-O(1)$ [if $2^m=\Omega(2^n)$],
or $\log m \ge r-O(1)$ [if $m2^{p+q}\ge\Omega(2^n)=\Omega(2^{p+q+r})$],
or $\log m \ge p-O(1)$
[if $m2^{q+r}\ge\Omega(2^n)=\Omega(2^{p+q+r})$].\qed
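The $2^r+r$ construction from the upper-bound proof is mechanical enough to check by brute force. A minimal sketch, with an arbitrary random predicate and illustrative parameter choices:

```python
import itertools
import random

p, q, r = 3, 3, 2
random.seed(0)

# an arbitrary predicate T : B^p x B^q x B^r -> {0, 1}
table = {w: random.randint(0, 1)
         for w in itertools.product((0, 1), repeat=p + q + r)}

def T(x, y, z):
    return table[x + y + z]

# a(x, y) is the truth table of z -> T(x, y, z): 2^r bits
def a(x, y):
    return tuple(T(x, y, z) for z in itertools.product((0, 1), repeat=r))

# b(y, z) just forwards z: r bits, so the total is 2^r + r
def b(y, z):
    return z

def t(av, bv):
    index = int(''.join(map(str, bv)), 2)  # z as an index into the table
    return av[index]

for x in itertools.product((0, 1), repeat=p):
    for y in itertools.product((0, 1), repeat=q):
        for z in itertools.product((0, 1), repeat=r):
            assert t(a(x, y), b(y, z)) == T(x, y, z)
```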
\subsection{Bounds for explicit predicates}
As with circuit complexity, an interesting question is to
provide a lower bound for an explicit function; it is usually much
harder than proving the existence results. The following
statement provides a lower bound for a simple function.
Consider the predicate $T\colon
\mathbb{B}^k\times\mathbb{B}^{2^{2k}}\times\mathbb{B}^k\to
\mathbb{B}$ defined
as follows:
$$
T(x,y,z)=y(x,z)
$$
where $y\in \mathbb{B}^{2^{2k}}$ is treated as a function
$\mathbb{B}^k\times\mathbb{B}^k\to \mathbb{B}$.
\begin{theorem}
\label{indexing}
The decomposition complexity of $T$
is at least $2^k$.
\end{theorem}
(Note that this lower bound almost matches the second upper bound
of Theorem~\ref{th:general}, which is $k+2^k$.)
\proof Assume that some decomposition of $T$ is given:
$$
T(x,y,z)=t(a(x,y),b(y,z)),
$$
where $a(x,y)$ and $b(y,z)$ consist of $u$ and $v$ bits respectively.
Then every $y:\mathbb{B}^k\times\mathbb{B}^k\to\mathbb{B}$
determines two functions $a_y\colon\mathbb{B}^k\to\mathbb{B}^u$
and $b_y\colon\mathbb{B}^k\to\mathbb{B}^v$ obtained from $a$ and
$b$ by fixing $y$. Knowing these two functions (and $t$) one should be
able to reconstruct $T(x,y,z)$ for all $x$ and $z$, since
$$
T(x,y,z)= t(a_y(x),b_y(z)),
$$
i.e., to reconstruct $y$. Therefore, the number of possible
pairs $\langle a_y,b_y\rangle$, which is at most
$$
2^{u2^k} \cdot 2^{v2^k},
$$
is at least the number of all $y$'s, i.e. $2^{2^{2k}}$. So we get
$$
(u+v)2^k \ge 2^{2k},
$$
or $u+v\ge 2^k$, therefore the decomposition complexity of $T$
is at least $2^k$.\qed
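The pigeonhole step of this proof (the map $y\mapsto(a_y,b_y)$ must be injective, so $2^{(u+v)2^k}\ge 2^{2^{2k}}$) can be checked numerically; the choice $k=3$ below is arbitrary:

```python
k = 3
num_y = 2 ** (2 ** (2 * k))       # number of functions y : B^k x B^k -> B

def num_pairs(u, v):
    # number of pairs (a_y, b_y) with u-bit and v-bit values on B^k
    return 2 ** (u * 2 ** k) * 2 ** (v * 2 ** k)

# u + v = 2^k makes the count just large enough (matching the bound)
assert num_pairs(2 ** k, 0) >= num_y
# any split with u + v < 2^k gives too few pairs to distinguish all y
assert all(num_pairs(u, 2 ** k - 1 - u) < num_y for u in range(2 ** k))
```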
\textbf{Remarks}.
\textbf{1}.
In this way we get a lower bound
$\Omega(\sqrt{n})$ (where $n$ is the total input size) for the case
when $x$ and $z$ are of size about $\frac{1}{2}\log n$. In this
case this lower bound matches the upper bound of Theorem~\ref{th:general},
as we have noted.
\textbf{2}.
Here is another example where upper and lower bounds match.
If the predicate $t(x,y,z)$ is defined as $x=z$, we
need to transmit $x$ and $z$ completely (see \cite{nisan} or use the
pigeon-hole principle).
So there is a trivial (and tight) linear lower bound if we let $x$
and $z$ be long (of size $\Theta(n)$).
\textbf{3}. It would be interesting
to get a linear bound for an explicit function in an
intermediate case when $x$ and $z$ are short compared to $y$
(preferable even of logarithmic size) but not as short as in
Theorem~\ref{indexing} (so a non-constructive
lower bound applies).
Such a lower bound would mean that $a(x,y)$ or $b(y, z)$ has to
retain a significant part of information in $y$. Intuitive
explanation for this necessity could be: ``since we do not know $z$
when computing $a(x,y)$, we do not know which part of $y$-information
is relevant and need to retain a significant fraction of $y$''.
Note that for the function $T$ defined above this is not the case:
not knowing $z$, we still know $x$ so only one row ($x$th row)
in the matrix
$y$ is relevant.
The natural candidate is the function
$
T'\colon \mathbb{B}^k\times \mathbb{B}^{2^k}\times\mathbb{B}^k\to\mathbb{B}
$
defined by $T'(x,y,z)=y(x\oplus z)$. Here $y$ is considered as a vector
$\mathbb{B}^k\to\mathbb{B}$,
not matrix, and $x\oplus z$ denotes bitwise XOR of two $k$-bit
strings $x$ and $z$. The size of $x$ and $z$ is about $\log n$ (where $n$ is the
total input size), and for these input sizes the worst-case lower bound is
indeed linear. One could think that this lower bound could be obtained for
$T'$: ``when computing $a(x,y)$ we do not know $z$, and $x\oplus z$
could be any bit string of length $k$, so all the information in $y$ is
relevant''. However, this intuition is false, and there exists a sublinear
upper bound $O(n^{0.92})$, see~\cite{babai} or~\cite{nisan}, p.~95.\footnote{This upper
bound is obtained as follows. Let us consider $y$ as a Boolean function of
$k$ Boolean variables; $y\colon (u_1,\ldots,u_k)\mapsto y(u_1,\ldots,u_k)$.
Such a Boolean function can be represented as a multi-linear
polynomial of degree $k$ over the $2$-element field $\mathbb{F}_2$. This polynomial
$y(u_1,\ldots,u_k)$
has $2^k$ bit coefficients
and is known when $a(x,y)$ or $b(y,z)$ are computed. Let us separate terms
of ``high'' and ``low'' degree in this polynomial:
$$
y(u_1,\ldots)=y_\textrm{low}(u_1,\ldots)+y_\textrm{high}(u_1,\ldots),
$$
taking $\frac{2}{3}k$ as the threshold between ``low'' and ``high''.
The polynomial $y_\textrm{high}$ is
included in $a$ (or $b$) as is, just by listing all its coefficients. (We have about
$2^{H(\frac{2}{3})k}\approx n^{0.92}$ of them, where $H$ is the Shannon entropy
function.) For $y_\textrm{low}$ we use the following trick. Consider
$y(X_1\oplus Z_1,\ldots,X_k\oplus Z_k)$ as a polynomial $\tilde y$ of $2k$ variables $X_1,\ldots,X_k,Z_1,\ldots,Z_k\in\mathbb{F}_2$. Its degree is at most $\frac{2}{3}k$, and each
monomial includes at most $\frac{2}{3}k$ variables. So we can split $\tilde y$ again:
$$
\tilde y(X_1,\ldots,Z_1,\ldots)=\tilde y_{x\textrm{-low}}(X_1,\ldots,Z_1,\ldots) +\tilde y_{z\textrm{-low}}(X_1,\ldots,Z_1,\ldots);
$$
here the first term has small $X$-degree ($Z$-variables are treated as constants),
and the second term has small $Z$-degree. Here ``small'' means ``at most $\frac{1}{3}k$''.
All this could be done
in both nodes (while computing $a$ and $b$), since $y$ is known there;
$X_i$ and $Z_i$ are just variables. Now we include in $a(x,y)$ the
coefficients of the polynomial $(Z_1,\ldots,Z_k)\mapsto\tilde y_{z\textrm{-low}}
(x_1,\ldots,x_k,Z_1,\ldots,Z_k)$, and do the symmetric thing for $b(y,z)$.
Both polynomials have degree at most $\frac{1}{3}k$, so we again
need only $O(n^{0.92})$ bits to specify them.}
(This upper bound should be compared to
the $\Omega(\sqrt{n})$ lower bound obtained
by reduction to $T$: in the special case when the left half of $x$ and the
right half of $z$ contain only zeros, we get $T$ out of $T'$.)
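The footnote's construction starts from the fact that every Boolean function of $k$ variables is a multilinear polynomial over $\mathbb{F}_2$, whose coefficients are given by the M\"obius transform (XOR of $y$ over sub-points); the degree split then operates on these coefficients. A sketch of this first step (all names are illustrative):

```python
import itertools
import random

k = 4
points = list(itertools.product((0, 1), repeat=k))

def subset(s, t):
    # is the support of s contained in the support of t?
    return all(a <= b for a, b in zip(s, t))

def multilinear_coeffs(y):
    # coefficient of the monomial prod_{i in S} u_i is the XOR of y over
    # all points with support inside S (Moebius transform over F_2)
    coeffs = {}
    for s in points:
        acc = 0
        for u in points:
            if subset(u, s):
                acc ^= y[u]
        coeffs[s] = acc
    return coeffs

def evaluate(coeffs, u):
    # a monomial prod_{i in S} u_i equals 1 iff support(S) <= support(u)
    acc = 0
    for s, c in coeffs.items():
        if c and subset(s, u):
            acc ^= 1
    return acc

random.seed(1)
y = {u: random.randint(0, 1) for u in points}
coeffs = multilinear_coeffs(y)
assert all(evaluate(coeffs, u) == y[u] for u in points)
```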
\textbf{Question}: what happens if we replace $x\oplus z$ by $x+z\bmod 2^k$
in the definition of $T'$? It seems that the upper bound argument does not
work any more.
\section{Probabilistic decomposition}
As in communication complexity theory, we may also consider
probabilistic and distributional versions of decomposition
complexity. In the probabilistic version we consider random variables
instead of binary functions $a, b, t$ (with shared random bits or
independent random bits). In the distributional version we
look for a decomposition that is
Hamming-close to a given function.
It turns out that the lower bounds mentioned above are robust in that
sense and remain valid for distributional (and therefore probabilistic)
decomposition complexity almost unchanged.
Let $\varepsilon$ be a positive number less than $1/2$. We are
interested in a minimum decomposition complexity of a function
that $\varepsilon$-approximates a given one (coincides with it
with probability at least $1-\varepsilon$ with respect to uniform distribution
on inputs). For $\varepsilon\ge \frac{1}{2}$ this question is trivial
(either $0$ or $1$ constant provide the required approximation).
So we assume that some $\varepsilon<\frac{1}{2}$ is fixed (the $O()$-constants
in the statements will depend on it).
A standard argument shows that lower bounds established for
distributional decomposition complexity remain
true for probabilistic complexity (where $a,b,t$ use random bits and
for every input $x,y,z$ the random variable $t(a(x,y),b(y,z))$ should
coincide with a given function with probability at least $1-\varepsilon$).
So we may consider only the distributional complexity.
\begin{theorem}\label{th:prob}
%
\textbf{\textup{(1)}}~Let $n=p+q+r$ and $p,r\ge \log n+O(1)$. Then there
exists a predicate
$T\colon \mathbb{B}^p\times\mathbb{B}^q\times\mathbb{B}^r\to \mathbb{B}$
such that decomposition complexity of any its $\varepsilon$-approximation
is at least $n-O(1)$.
\textbf{\textup{(2)}}~For the predicate $T$ used in Theorem~\ref{indexing}
we get the lower bound $\Omega(2^k)$ \textup(in the same setting\textup).
%
\end{theorem}
\proof \textbf{1}. Assume this is not the case. We repeat the same
counting argument as in Theorem~\ref{th:general}. Now we have
to count not only the predicates that have decomposition
complexity at most $m$, but also their $\varepsilon$-approximations.
The volume of an $\varepsilon$-ball in $\mathbb{B}^{2^{n}}$ is
about $2^{H(\varepsilon)2^{n}}$, so the number of the centers
of the balls that cover the entire space is at least
$2^{(1-H(\varepsilon))2^{n}}$. So after taking the logarithms
we get a constant factor $(1-H(\varepsilon))$, and the lower
bound for $m$ remains $n-O(1)$.
\textbf{2}. If the computation is correct for $1-\varepsilon$ fraction
of all triples $(x,y,z)$, then there exist $\varepsilon'<\frac{1}{2}$ and
$\varepsilon''>0$ such that for at least $\varepsilon''$-fraction of
all $y$ the computation is correct with probability at least $1-\varepsilon'$
(with respect to uniform distribution on $x$ and $z$).
This means that $\varepsilon'$-balls around functions
$(x,z)\mapsto t(a_y(x),b_y(z))$ cover at least $\varepsilon''$-fraction
of all functions $y$. (See the proof of Theorem~\ref{indexing}.)
Again this gives us a constant factor before $2^{2k}$, but here we
do not take the logarithm second time, so we get $u+v\ge\Omega(2^k)$,
not $2^k-O(1)$.
\qed
\section{Applications to cellular automata}
A (one-dimensional) cellular automaton is a linear array of
cells. Each of the cells can be in some state from a finite set
$S$ of states (the same for all cells). At each step all the
cells update their state; the new state of a cell is some fixed function of
its old state and the states of its two neighbors. All the updates
are made synchronously.
Using a cellular automaton to compute a predicate, we assume
that there are two special states $0$ and $1$ and a neutral
state that is stable (if a cell and both its neighbors
are in the neutral state, then the cell remains neutral). To compute $P(x)$ for an
$n$-bit string $x$, we assemble $n$ cells and put them into
states that correspond to $x$; the rest of the (biinfinite) cell
array is in a neutral state.
Then we start the computation; the answer should appear in some
predefined cell (see below about the choice of this cell).
There is a natural non-uniform version of cellular automata: we
assume that in each vertex of the time-space diagram an
arbitrary ternary transition function (different for different
vertices) is used. Then the only restriction is caused
by the limited capacity of links: we require that
inputs/outputs of all functions (in all vertices) belong to some
fixed set $S$.
In this non-uniform setting a predicate $P$ on binary strings is
considered as a family of Boolean functions $P_n$ (where
$P_n$ is a restriction of $P$ onto $n$-bit strings) and for each
$P_n$ we measure the minimal size of a set $S$ needed to compute
$P_n$ in a non-uniform way described above. If this size is an
unbounded function of $n$, we conclude that predicate $P$ is not
computable by a cellular automaton. (In classical complexity theory we use
the same approach when we try to prove that some predicate is
not in P since it needs superpolynomial circuits in a non-uniform
setting.)
As usual, getting lower bounds for nonuniform models is
difficult, but it turns out that decomposition complexity
can be used if the cellular automaton is required to produce the
answer as soon as possible.
Since each cell gets information only from
itself and its two neighbors, the first occasion to use all
$n$ input bits happens around time $n/2$ in the middle of the
string:
%
\begin{center}
\includegraphics[scale=0.85]{decomp-7.mps}
\end{center}
Now we assume that the output of a cellular automaton is
produced at this place (both in uniform and non-uniform model).
(This is a very strong version of real-time computation by
cellular automata; we could call it ``as soon as
possible''-computation.)
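As a concrete instance, the parity predicate from the introduction can be computed ``as soon as possible'' with $8$ states. The sketch below (state encoding and names are illustrative, not from the text) keeps in each cell its own input bit together with the running \texttt{xor} of its left and right cones:

```python
import itertools

def asap_parity(x):
    # state[i] = (orig, L, R): the input bit, the xor of the inputs in the
    # cell's left cone, and the xor of the inputs in its right cone;
    # neutral cells outside the input contribute 0 (2^3 = 8 states total)
    n = len(x)
    state = [(b, b, b) for b in x]
    neutral = (0, 0, 0)
    get = lambda s, i: s[i] if 0 <= i < n else neutral
    for _ in range(n // 2):
        state = [
            (state[i][0],
             get(state, i - 1)[1] ^ state[i][0],   # widen left cone by one
             get(state, i + 1)[2] ^ state[i][0])   # widen right cone by one
            for i in range(n)
        ]
    # after ~n/2 steps the middle cell has seen every input bit
    o, left, right = state[n // 2]
    return left ^ right ^ o                        # o lies in both cones

for n in (4, 5, 6, 7):
    for x in itertools.product((0, 1), repeat=n):
        assert asap_parity(list(x)) == sum(x) % 2
```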
The next theorem observes that a non-uniformly computable family
of predicates is transformed into a function with small decomposition
complexity if we split the input string in three parts.
\begin{theorem}
%
Let $T_k \colon \mathbb{B}^{k+f(k)+k}=
\mathbb{B}^k\times \mathbb{B}^{f(k)} \times
\mathbb{B}^k\to \mathbb{B}$ be a family of predicates that is
non-uniformly computable in this sense. Then the decomposition
complexity of $T_k$ is $O(k)$, and the constant in $O$-notation
is the logarithm of the number of states.
%
\end{theorem}
\proof Consider Figure~\ref{decomp-automaton} where
the (nonuniform) computation is presented
%
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.85]{decomp-6.mps}
\end{center}
\caption{Automaton run and its decomposition.}\label{decomp-automaton}
\end{figure}
%
(we use bigger units for time direction to make
the picture more clear).
Let us look at the contents of the line of length $2k$ located $k$
steps before the end of the computation. The left half is
$a(x,y)$, the right half is $b(y,z)$ and the function $t$ is
computed by the upper part of the circuit. It is easy to see
that $a(x,y)$ indeed depends only on $x$ and $y$ since
information about $z$ has not arrived yet; for the same reason
$b(y,z)$ depends only on $y$ and $z$. The bit size of $a(x,y)$
and $b(y,z)$ is $k\log\#S$ each, so the total is $O(k)$.\qed
\begin{corollary}The predicate $T$ from
Theorem~\ref{indexing} cannot be computed in this model.
\end{corollary}
This predicate splits a string of length $k+2^{2k}+k$
into three pieces $x,y,z$ of length $k$, $2^{2k}$ and $k$ respectively,
and then computes $y(x,z)$. Note that this can be done
by a cellular automaton
in linear time. Indeed, we combine the strings $x$ and $z$ into a
$2k$-bit string; then we move this string across the middle
part of input subtracting one at each step and waiting until our
counter decreases to zero; then we know where the output bit
should be read. So we get the following result:
\begin{theorem}
\label{separation}
There exists a linear-time computable predicate that is not
computable ``as soon as possible'' even in a non-uniform model.
%
\end{theorem}
\textbf{Remark}. This result and the intuition behind the proof
are not new (see the paper of V.~Terrier~\cite{terrier}; see
also~\cite{culik}). However, the explicit use of decomposition
complexity helps to formalize the intuition behind the proof. It
also allows us to show (in a similar way) that this predicate
cannot be computed not only ``as soon as possible'', but even
after $o(\sqrt{n})$ steps after this moment (which seems to be
an improvement).
Another improvement that we get for free is that we cannot even
$\varepsilon$-approximate this predicate in the ``as soon as
possible'' model.
\textbf{Question}: There could be other ways to get lower
bounds for non-uniform automata (=triangle circuits). Of course,
there is a counting lower bound, but this does not give any
explicit function. Are there some other tools?
\section{Algorithmic Information Theory}\label{sec:multisource}
Now we can consider the Kolmogorov complexity version of the
same decomposition problem. Let us start
with some informal comments. Assume that we have four binary
strings $x,y,z,t$ such that $\KS(t|x,y,z)$ is small
(we write $\KS(t|x,y,z)\approx 0$, not specifying exactly how
small it should be). Here
$\KS(\alpha|\beta)$ stands for conditional complexity of
$\alpha$ when $\beta$ is known, i.e., for the minimal length of
a program that transforms $\beta$ to $\alpha$. (Hence our
requirement says that there is a short program that produces $t$
given $x,y,z$.)
We are looking for strings $a$ and $b$ such that
$\KS(a|x,y)\approx 0$, $\KS(b|y,z)\approx 0$, and
$\KS(t|a,b)\approx 0$. Such $a$ and $b$ always exist, since we
may let $a=\langle x,y\rangle$ and $b=\langle y,z\rangle$
(again, $y$ can also be split between $a$ and $b$). However, the
situation changes if we restrict the complexities of $a$ and $b$ (or their
lengths, this does not matter, since each string can be replaced
by its shortest description).
As we shall see, sometimes we need $a$ and $b$ of total
complexity close to $\KS(x)+\KS(y)+\KS(z)$ even if $t$ has much
smaller complexity. (Note that now we cannot restrict ourselves to one-bit
strings $t$ for evident reasons.)
To be specific, let us agree that all the strings $x,y,z,t$ have the
same length $n$; we look for strings $a$ and $b$ of length
$m$, and ``small'' conditional complexity means that complexity
is less than some $c$.
\begin{theorem}
\label{th:complexity-1}
If $3c<n-O(1)$ and $2m+c<3n-O(1)$, there exist
strings $x,y,z,t$ of length $n$ such that $\KS(t|x,y,z)=O(\log n)$,
but there are no strings $a,b$ of length $m$ such that
$$
\KS(a|x,y)<c,\qquad \KS(b|y,z)<c, \qquad \KS(t|a,b)<c.
$$
\end{theorem}
For example, this is true if $c=O(\log n)$ and $m$ is $1.5n-O(\log n)$
(note that for $m=1.5n$ we can split $y$ into two halves and combine
the first half with $x$, and the second half with $z$).
\proof Consider the following algorithm. Given $n$, we generate
(in parallel for all $x,y\in\mathbb{B}^n$) the lists of those $m$-bit strings who
have conditional complexity (with respect to $x$ and $y$) less than $c$ (one list
for each pair $x,y$).
Also we generate (in parallel for all strings $a$ and $b$ of length $m$)
the lists of those strings $t$ who have complexity less than $c$ given $a$ and $b$
(one list for each pair $a,b$).
At every step of enumeration we imagine that these lists are final and construct
a quadruple
$x,y,z,t$ that satisfies the statement of the theorem. It is done as
follows: we take a ``fresh'' triple $x,y,z$ (that was not used on the
previous steps of the construction), take all strings $a$ that are in
the list for $x,y$, take all strings $b$ that are in the list for $y,z$, and take
all strings $t$ that are in the lists for those $a$s and $b$s. Then we choose
some $t$ that does not appear in any of these lists.
Such a $t$ exists since we have at most $2^c$ strings $a$ (for given $x$ and $y$),
and at most $2^c$ strings $b$ (for given $y$ and $z$). For every of $2^{2c}$ pairs
$(a,b)$ there are at most $2^c$ strings $t$, so in total at most $2^{3c}$ values
of $t$ are unsuitable, and we can choose a suitable one.
We also need to ensure that there are enough ``fresh'' pairs for
all the steps of the construction. The new elements
in the first series of lists may appear at
most $2^{n}\times 2^{n}\times 2^c$ times (we have
at most $2^n\times 2^n$ pairs $(x,y)$ and at most $2^c$ values of $a$ for
each pair). Then we have $2^{m}\times 2^{m}\times 2^{c}$ events for the
second series of lists. On the other hand, we have $2^{3n}$ triples $(x,y,z)$,
so we need the inequality
$$
2^{2n+c} + 2^{2m+c}< 2^{3n},
$$
which is guaranteed by our assumptions.
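A quick sanity check of this inequality with illustrative numbers (not from the text) satisfying $3c<n$ and $2m+c<3n$:

```python
# With 3c < n and 2m + c < 3n, the number of triples consumed by the
# construction, 2^(2n+c) + 2^(2m+c), stays below the 2^(3n) fresh triples.
n, c, m = 30, 9, 40          # 3*9 = 27 < 30 and 2*40 + 9 = 89 < 90 = 3*30
consumed = 2 ** (2 * n + c) + 2 ** (2 * m + c)
assert consumed < 2 ** (3 * n)
```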
To run this process, it is enough to know $n$, so for every $x,y,z,t$ generated
by this algorithm we have $\KS(t|x,y,z)=O(\log n)$. (For given $x,y,z$ only one $t$
may appear since we take a fresh triple each time.)
%
\qed
This result can be improved:
\begin{theorem}
Assume that $3c<n-O(1)$ and $m\le 1.5n-O(\log n)$.
We can effectively construct for every $n$ a
total function $T:\mathbb{B}^n\times \mathbb{B}^n\times\mathbb{B}^n\to
\mathbb{B}^n$ such that for random
\textup(=\,incompressible\textup) triple $x,y,z$ and
$t=T(x,y,z)$ the strings $a$ and $b$ of length $m$
that provide a decomposition
\textup(as defined above\textup) do
not exist.
\end{theorem}
The improvement is two-fold: first, we have a total function $T$ (instead of
a partial one provided by the previous construction); second, we claim that
all random triples have the required property (instead of mere existence of
such a triple).
\proof Let us first deal with the first improvement.
Consider multi-valued functions $A,B\colon \mathbb{B}^n\times
\mathbb{B}^n\to\mathcal{P}(\mathbb{B}^m)$ that map every pair of $n$-bit
strings into a $2^c$-element set of $m$-bit strings. Consider also a
multi-valued function $F\colon \mathbb{B}^m\times\mathbb{B}^m\to
\mathcal{P}(\mathbb{B}^n)$ whose values are $2^c$-element sets of $n$-bit
strings. We say that $A,B,F$ \emph{cover} a total function
$T:\mathbb{B}^n\times \mathbb{B}^n\times\mathbb{B}^n\to
\mathbb{B}^n$ if for every $x,y,z\in\mathbb{B}^n$ there exist strings
$a,b\in\mathbb{B}^m$ such that $a\in A(x,y)$, $b\in B(y,z)$, and
$T(x,y,z)\in F(a,b)$.
Let us prove first the following combinatorial statement: \emph{there exists
a function $T$ that is not covered by any triple
of functions $A,B,F$}. This can be shown
by a counting argument similar to the proof of Theorem~\ref{th:general}.
Indeed, let us compute the probability of the event ``random function $T$
is covered by some fixed $A,B,F$''. This event is the intersection of
independent events (for each triple $x,y,z$). For given $x,y,z$ there are
$2^c$ possible $a$s, $2^c$ possible $b$s, and $2^c$ possible elements
in $F(a,b)$ for each $a$ and $b$, i.e., $2^{3c}$ possibilities altogether.
Since $3c<n-O(1)$, each of the independent events has probability less
than $\frac{1}{2}$, and their intersection has probability less than
$2^{-2^{3n}}$.
This probability then should be multiplied by the number of triples $A,B,F$.
For $A$ and $B$ we have at most $(2^m)^{2^n\times 2^n\times 2^c}$ possibilities,
for $F$ we have at most $(2^n)^{2^m\times 2^m\times 2^c}$ possibilities.
So the existence of a function $T$ not covered by any triple is guaranteed if
$$
2^{m2^{2n+c}}\times 2^{m2^{2n+c}}\times 2^{n2^{2m+c}}\times 2^{-2^{3n}}<1,
$$
i.e.,
$$
m2^{2n+c}+m2^{2n+c}+n2^{2m+c} < 2^{3n},
$$
and this inequality follows from the assumptions.
The property ``$T$ can be covered by some triple $A,B,F$'' can be computably
tested by an exhaustive search over all triples $A,B,F$. So we can
(for every $n$) computably find the first (in some order) function $T$ that
does not have this property. For these $T$ there are some $x,y,z$ that do not
allow decomposition. Indeed, we can choose $A$ so that
$A(x,y)$ contains all strings $a$ of length $m$ such that $K(a|x,y)<c$, etc.
However, we promised more: we need to show not only the existence of $x,y,z$
but that all incompressible triples (this means that $K(x,y,z)\ge 3n-O(1)$) have
the required property. This is done in two steps. First, we show that (for
some $F$ that computably depends on $n$)
most triples do not allow decomposition. Then we note that one can enumerate
triples that allow decomposition, so they can be encoded by their
ordinal number in the enumeration and therefore are compressible.
To make this plan work, we need to consider another property of the function $T$.
Now we say that $T$ is covered by $A,B,F$ if at least a $2^{-O(1)}$-fraction of
all triples $(x,y,z)$ admit $a$ and $b$. The probability of this event should
now be estimated by the Chernoff inequality (we first guarantee that the probability of
each individual event is, say, half the threshold), and we get
a bound of the same type, with $\Omega(2^{3n})$ instead of $2^{3n}$, which
is enough.\qed
In fact, this argument provides a decomposition complexity bound similar
to Theorem~\ref{th:general}, but now the functions $a$, $b$ and $t$ are
multi-valued and we can choose any of their values to obtain $t(x,y,z)$.
\subsection*{Remarks and questions}
\textbf{1}. Similar results can be obtained for decompositions that use more binary
operations. Imagine that we have some strings $x,y,z,t$ of length $n$ such
that $K(t|x,y,z)$ is small and want to construct some ``intermediate''
strings $u_1,\ldots,u_s$ such that in the sequence
$$
x,y,z,u_1,u_2,\ldots,u_s,t
$$
every string, starting from $u_1$, is conditionally simple with
respect to some \emph{pair} of its
predecessors. We can use our technique to show that this is not possible
if all $u_i$ have length close to $n$ and the number $s$ is not large.
\textbf{2}. As before, it would be nice to get lower bounds for some explicit
function $T(x,y,z)$ (even a non-optimal lower bound,
like in Theorem~\ref{indexing}) for
the algorithmic information theory version of decomposition problem.
\textbf{3}. Many results of multi-source algorithmic information theory have some
counterparts in classical information theory. Can we find some statement
that corresponds to the lower bound for decomposition complexity?
\textbf{4}. Is it possible to use the techniques of \cite{hansen-lachish-miltersen}
to get some bounds for explicit functions in algorithmic information
theory setting?
\section{Introduction}
In 1995, P. Lindqvist \cite{l} studied the generalized trigonometric
and hyperbolic functions with parameter $p>1$. Thereafter
several authors became interested in equalities and inequalities involving
these generalized functions; see, e.g.,
\cite{bv1,bv2,bv3,bbv,be,egl,jq,take} and the references therein.
Recently, motivated by the many results on these generalized
trigonometric and hyperbolic functions, Kl\'en et al. \cite{kvz}
extended some classical inequalities to the generalized
trigonometric and hyperbolic setting, such as the
Mitrinovi\'c-Adamovi\'c inequality, Huygens' inequality, and
Wilker's inequality.
In this paper we prove the conjecture posed by Kl\'en et al. in \cite{kvz}, and in Theorem \ref{thm3.2} we generalize the
inequality
$$\frac{1}{\cosh(x)^a}<\frac{\sin(x)}{x}<\frac{1}{\cosh(x)^b},$$
where $a=\log(\pi/2)/\log(\cosh(\pi/2)) \approx 0.4909$ and $b=1/3$, due to Neuman and S\'andor
\cite[Theorem 2.1]{neusan}.
For the formulation of our main results we give the definitions of the
generalized trigonometric and hyperbolic functions as below.
The increasing homeomorphism $F_{p}:[0,1]\to [0,\pi_{p}/2]$ is defined by
$$F_{p}(x)={\rm arcsin}_{p}(x)=\int_0^x{(1-t^p)}^{-1/p}\,dt,$$
and its inverse $\sin_{p}$ is
called the generalized sine function, which is defined
on the interval $[0,\pi_{p}/2]$,
where
$${\rm arcsin}_{p}(1)=\pi_{p}/2.$$
The function $\sin_p$ is strictly increasing and concave on $[0,\pi_p/2]$, and it is also called the eigenfunction of the Dirichlet eigenvalue problem for the one-dimensional $p$-Laplacian \cite{dm}.
In the same way, we can define the
generalized cosine function, the generalized tangent, and
also the corresponding hyperbolic functions.
The generalized cosine function is defined by
$$\frac{d}{dx}\sin_{p}(x)=\cos_{p}(x),\quad x\in[0,\pi_{p}/2]\,.$$
It follows from the definition that
$$\cos_{p}(x)=(1-(\sin_{p}(x))^p)^{1/p}\,,$$
and
\begin{equation}\label{equ2}
|\cos_{p}(x)|^p+|\sin_{p}(x)|^p=1,\quad x\in\mathbb{R}.
\end{equation}
Clearly we get
$$\frac{d}{dx}\cos_p(x)=-\cos_p(x)^{2-p}\sin_p(x)^{p-1}.$$
The generalized tangent function $\tan_{p}$ is defined by
$$\tan_{p}(x)=\frac{\sin_{p}(x)}{\cos_{p}(x)}.$$
For $x\in(0,\infty)$, the inverse of the generalized hyperbolic sine function $\sinh_{p}(x)$ is defined by
$${\rm arsinh}_{p}(x)=\int^x_0(1+t^p)^{-1/p}\,dt,$$
and the generalized hyperbolic cosine and tangent functions are defined by
$$\cosh_{p}(x)=\frac{d}{dx}\sinh_{p}(x),\quad \tanh_{p}(x)=\frac{\sinh_{p}(x)}{\cosh_{p}(x)}\,,$$
respectively. It follows from the definitions that
\begin{equation}\label{equ3}
|\cosh_{p}(x)|^p-|\sinh_{p}(x)|^p=1.
\end{equation}
From the above definitions and (\ref{equ3})
we get the following derivative formulas,
$$\frac{d}{dx}\cosh_p(x)=\cosh_p(x)^{2-p}\sinh_p(x)^{p-1},\quad \frac{d}{dx}\tanh_p(x)=1-|\tanh_p(x)|^p.$$
Note that these generalized trigonometric and hyperbolic functions coincide with usual functions for $p=2$.
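The definitions above are easy to check numerically. The following sketch (the Simpson quadrature size and bisection depth are ad hoc choices, not taken from the paper) computes $\sin_p$ by inverting the integral that defines ${\rm arcsin}_p$, and verifies both the reduction to the classical sine at $p=2$ and the derivative formula $\frac{d}{dx}\sin_p(x)=\cos_p(x)=(1-\sin_p(x)^p)^{1/p}$ for $p=3$:

```python
import math

def arcsin_p(x, p, n=2000):
    """Composite Simpson's rule for the integral of (1 - t^p)^(-1/p) on [0, x], x < 1."""
    if x == 0.0:
        return 0.0
    h = x / n  # n is even
    f = lambda t: (1.0 - t ** p) ** (-1.0 / p)
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def sin_p(y, p):
    """Invert arcsin_p by bisection on [0, 1); valid away from the endpoint pi_p/2."""
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if arcsin_p(mid, p) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# p = 2 reduces to the classical sine
assert abs(sin_p(1.0, 2) - math.sin(1.0)) < 1e-6

# derivative formula d/dx sin_p = (1 - sin_p^p)^(1/p), checked for p = 3
p, x, h = 3, 0.6, 1e-4
numeric = (sin_p(x + h, p) - sin_p(x - h, p)) / (2 * h)
cos_p = (1.0 - sin_p(x, p) ** p) ** (1.0 / p)
assert abs(numeric - cos_p) < 1e-4
```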
Our main result reads as follows:
\begin{theorem}\cite[Conjecture 3.12]{kvz}\label{thm3.1}
For $p\in[2,\infty)$, the function
$$f(x)=\frac{\log(x/\sin_p(x))}{\log(\sinh_p(x)/x)}$$
is strictly increasing from $(0,\pi _p/2)$ onto $(1,p)$. In particular,
$$\left(\frac{x}{\sinh_p(x)}\right)^p<\frac{\sin_p(x)}{x}<\frac{x}{\sinh_p(x)}.$$
\end{theorem}
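For the classical case $p=2$, where $\sin_2=\sin$, $\sinh_2=\sinh$ and $\pi_2=\pi$, the monotonicity claim and the two-sided bound of Theorem \ref{thm3.1} can be spot-checked numerically on a grid (an illustrative sketch with an arbitrary grid, not a proof):

```python
import math

p = 2  # classical case: sin_2 = sin, sinh_2 = sinh, pi_2 = pi
xs = [k * (math.pi / 2) / 200 for k in range(1, 200)]

# two-sided bound: (x/sinh x)^p < sin(x)/x < x/sinh(x)
for x in xs:
    assert (x / math.sinh(x)) ** p < math.sin(x) / x < x / math.sinh(x)

# the ratio f(x) = log(x/sin x)/log(sinh x/x) is strictly increasing and > 1
f = lambda x: math.log(x / math.sin(x)) / math.log(math.sinh(x) / x)
vals = [f(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:])) and vals[0] > 1
```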
\begin{theorem}\label{thm3.2}
For $p\in[2,\infty)$, the function
$$g(x)=\frac{\log(x/\sin_p(x))}{\log(\cosh_p(x))}$$
is strictly increasing in $x\in(0,\pi_p/2)$. In particular, we have
$$\frac{1}{\cosh_p(x)^\beta}<\frac{\sin_p(x)}{x}< \frac{1}{\cosh_p(x)^\alpha},$$
where
$\alpha = 1/(1+p)$
and
$\beta=\log(\pi_p/2)/\log(\cosh_p(\pi_p/2))$
are the best possible constants.
\end{theorem}
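Again for the classical case $p=2$, where $\alpha=1/3$ and $\beta=\log(\pi/2)/\log(\cosh(\pi/2))\approx 0.4909$ recover the Neuman-S\'andor constants, the bound of Theorem \ref{thm3.2} can be checked numerically on a grid (illustration only, not a proof):

```python
import math

p = 2  # classical case; pi_2 = pi, cosh_2 = cosh, sin_2 = sin
alpha = 1.0 / (1 + p)
beta = math.log(math.pi / 2) / math.log(math.cosh(math.pi / 2))

# 1/cosh(x)^beta < sin(x)/x < 1/cosh(x)^alpha on (0, pi/2)
for k in range(1, 200):
    x = k * (math.pi / 2) / 200
    assert math.cosh(x) ** (-beta) < math.sin(x) / x < math.cosh(x) ** (-alpha)
```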
\section{Preliminaries and proofs}
The following lemmas will be used in the proofs of the main results.
\begin{lemma}\cite[Theorem 2]{avv1}\label{lem1}
For $-\infty<a<b<\infty$,
let $f,g:[a,b]\to \mathbb{R}$
be continuous on $[a,b]$, and be differentiable on
$(a,b)$. Let $g^{'}(x)\neq 0$
on $(a,b)$. If $f^{'}(x)/g^{'}(x)$ is increasing
(decreasing) on $(a,b)$, then so are
$$\frac{f(x)-f(a)}{g(x)-g(a)}\quad and \quad \frac{f(x)-f(b)}{g(x)-g(b)}.$$
If $f^{'}(x)/g^{'}(x)$ is strictly monotone,
then the monotonicity in the conclusion
is also strict.
\end{lemma}
\begin{lemma}\label{(2.1-eq)}
For $p\in[2,\infty)$, the function
$$f(x)=\frac{p\sin_p(x)\log\left(x/\sin_p(x)\right)}{\sin_p(x)-x\cos_p(x)}$$
is strictly decreasing from $(0,\pi_p/2)$ onto $(p\log(\pi_p/2),1)$. In particular,
$$\exp\left(\frac{1}{p}\left(\frac{x}{\tan_p(x)}-1\right)\right)
< \frac{\sin_p(x)}{x}<
\exp\left(\left(\log\frac{\pi_p}{2}\right)\left(\frac{x}{\tan_p(x)}-1\right)\right).$$
\end{lemma}
\begin{proof} Write
$$f_1(x)=p\sin_p(x)\log\left(x/\sin_p(x)\right), \quad f_2(x)=\sin_p(x)-x\cos_p(x),$$
and clearly $f_1(0)=f_2(0)=0$.
Differentiation with respect to $x$ gives
\begin{eqnarray*}
\frac{f_1'(x)}{f_2'(x)}&=&\frac{p\left(\sin_p(x)/x+\cos_p(x)(\log(x/\sin_p(x))-1)\right)}{x\cos_p(x)^{2-p}\sin_p(x)^{p-1}}\\
&=&\frac{p}{x\tan_p(x)^{p-1}}\left(\frac{\tan_p(x)}{x}+\log\left(\frac{x}{\sin_p(x)}\right)-1\right),
\end{eqnarray*}
which is strictly decreasing on $(0,\pi_p/2)$.
Hence the function $f$ is decreasing by Lemma \ref{lem1}. The limiting values follow from l'H\^{o}pital's rule.
\end{proof}
\begin{lemma}\label{(2.2-eq)}
For $p\in[2,\infty)$ the function
$$g(x)=\frac{p\sinh_p(x)\log\left(\sinh_p(x)/x\right)}{x\cosh_p(x)-\sinh_p(x)}$$
is strictly increasing from $(0,\infty)$ onto $(1,p)$. In particular, we have
$$\exp\left(\frac{1}{p}\left(\frac{x}{\tanh_p(x)}-1\right)\right)
< \frac{\sinh_p(x)}{x}<
\exp\left(\left(\frac{x}{\tanh_p(x)}-1\right)\right).$$
\end{lemma}
\begin{proof}
Write
$$g_1(x)=\sinh_p(x)\log\left(\frac{\sinh_p(x)}{x}\right),\quad g_2(x)=x\cosh_p(x)-\sinh_p(x), $$
clearly $g_1(0)=g_2(0)=0$.
Differentiation with respect to $x$ gives
\begin{eqnarray*}
\frac{g_1'(x)}{g_2'(x)}&=&\frac{\cosh_p(x)\left(1+\log(\sinh_p(x)/x)\right)-\sinh_p(x)/x}{x\cosh_p(x)^{2-p}\sinh_p(x)^{p-1}}\\
&=& \frac{\sinh_p(x)}{x}\,\frac{\cosh_p(x)\left(1+\log(\sinh_p(x)/x)\right)-\sinh_p(x)/x}{\cosh_p(x)^{2}\tanh_p(x)^p},
\end{eqnarray*}
which is increasing; hence $g$ is increasing by Lemma \ref{lem1}. The limiting values follow from l'H\^{o}pital's rule.
\end{proof}
\begin{lemma}\label{2.3-lem}
For all $x>0$ and $p>1$, we have
$$
\log(\cosh _p (x)) > \frac{x} {p}\tanh _p(x)^{p - 1}.
$$
\end{lemma}
\begin{proof}
Let \begin{equation*}f(x) = \log (\cosh _p (x)) - \frac{x} {p}\tanh _p(x)^{p
- 1}.
\end{equation*}
A simple computation yields
\begin{eqnarray*}
f'(x) &=& \tanh _p(x)^{p - 1} - \left( \frac{\tanh _p(x)^{p - 1} }
{p} + \frac{(p - 1)}{p}\frac{x\tanh _p(x)^{p - 2}}{\cosh _p(x)^p } \right) \\
&=& \frac{p - 1}{p}\tanh _p(x)^{p - 2} \left(\tanh _p(x) - \frac{x}{\cosh _p(x)^p } \right),
\end{eqnarray*}
which is positive because $ \sinh _p (x) > x $ and $ \cosh _p (x) > 1 $
for all $x>0$. Thus $f(x)$ is strictly increasing and
$f(x)>f(0)=0$, which completes the proof.
\end{proof}
\noindent{\bf Proof of Theorem \ref{thm3.1}.} \rm Write $f(x)=f_1(x)/f_2(x)$ for $x\in(0,\pi_p/2)$, where
$$f_1 (x) = \log \left( {\frac{x}{{\sin _p (x)}}} \right),\quad f_2 (x) = \log \left( {\frac{{\sinh _p (x)}}{x}} \right).$$
For the proof of the monotonicity of the function $f$, it is enough to prove that
$$f'(x)=\frac{f_1'(x)f_2(x)-f_1(x)f_2'(x)}{f_2(x)^2}$$
is positive. After a simple computation, this amounts to showing that
$$
x\left( {f_2 (x)} \right)^2 f'(x) = \displaystyle\frac{{\sin _p (x) - x\cos _p (x)}}{{\sin _p (x)}}f_2(x)
- \displaystyle\frac{{x\cosh _p (x) - \sinh _p (x)}}{{\sinh _p (x)}}f_1(x),
$$
which is positive by Lemmas \ref{(2.1-eq)} and \ref{(2.2-eq)}. Hence, $f$ is strictly increasing, and the limiting values
follow by applying l'H\^{o}pital's rule.
This completes the proof.$\hfill\square$
\bigskip
\noindent{\bf Proof of Theorem \ref{thm3.2}.} \rm Write $g(x)=g_1(x)/g_2(x)$ for $x\in(0,\pi_p/2)$, where
$g_1 (x) = \log(x/\sin_p(x)),\, g_2 (x) = \log(\cosh_p(x))$.
We argue as in the proof of Theorem \ref{thm3.1} and similarly compute
\begin{eqnarray*}
\left( \log \left( \cosh _p (x) \right) \right)^2 g'(x)& =& \frac{\sin _p (x) - x\cos _p (x)}{x\sin _p (x)}
\log \cosh _p (x) - \tanh _p(x)^{p - 1} \log \left( \frac{x}{\sin _p (x)} \right) \\
&>& \frac{\sin _p (x) - x\cos _p (x)}{x\sin _p (x)\tanh _p(x)^{1-p} }\frac{x}{p}-\frac{\sin _p (x) - x\cos _p (x)}
{p\sin _p (x)\tanh _p(x)^{1-p} }\\
&=&0,
\end{eqnarray*}
by Lemmas \ref{(2.1-eq)} and \ref{2.3-lem}.
The limiting values follow easily from l'H\^{o}pital's rule, which completes the proof.
$\hfill\square$\\
The following corollary follows from \cite[Lemma 3.3]{kvz} and Theorem \ref{thm3.2}.
\begin{corollary} For $p\in[2,\infty)$ and $x\in(0,\pi_p/2)$, we have
$$\cos_p(x)^\beta<\frac{1}{\cosh_p(x)^\beta}<\frac{\sin_p(x)}{x}< \frac{1}{\cosh_p(x)^\alpha}<1,$$
where $\alpha$ and $\beta$ are as in Theorem \ref{thm3.2}.
\end{corollary}
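The first link in this chain, $\cos_p(x)^\beta<1/\cosh_p(x)^\beta$, is equivalent to $\cos_p(x)\cosh_p(x)<1$; for the classical case $p=2$ this can be spot-checked numerically (illustration only):

```python
import math

beta = math.log(math.pi / 2) / math.log(math.cosh(math.pi / 2))
for k in range(1, 200):
    x = k * (math.pi / 2) / 200
    # cos(x)^beta < cosh(x)^(-beta) is equivalent to cos(x)*cosh(x) < 1
    assert math.cos(x) * math.cosh(x) < 1.0
    assert math.cos(x) ** beta < math.cosh(x) ** (-beta)
```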
\vspace{.5cm}
{\sc Acknowledgments.} The authors are indebted to the
anonymous referee for his/her valuable comments.
\vspace{.5cm}
\section{Introduction}
The study of plasmas is, in general, limited to the domain of classical
physics, where temperature is high and particle density is low. In recent years, the study of plasmas such as dense
astrophysical plasmas {\cite{Astro}} and laser plasmas {\cite{Laser}}, as well as of miniature electronic devices under extreme physical
conditions {\cite{Electronics1},\cite{Electronics2}}, requires quantum mechanical effects to be taken into account. In such systems, the scale length becomes comparable to the particle de Broglie wavelength, rendering classical transport models unsuitable and quantum
mechanical effects relevant. Broadly, there are two main approaches
to modeling quantum plasmas: the quantum hydrodynamic approach{\cite{Haas1},\cite{Manfredi}} and the quantum
kinetic approach{\cite{Bonitz}}, i.e., the Wigner equation approach.
The plasma fluid equations with the inclusion of quantum diffraction and statistical pressure effects give rise to new physical
phenomena in the context of linear and nonlinear waves and instabilities. Haas et al. \cite{Haas2} have examined quantum quasilinear plasma turbulence using the quasilinear equation derived from the Wigner-Poisson system.
The quantum fluid equations, being macroscopic in nature, are relatively simple and easily accessible for nonlinear calculations. However, such macroscopic models lose information in situations where single particle effects like Landau damping are important,
which can be explored by moving to a kinetic picture.
The kinetic description of plasma possessing quantum mechanical features is provided
by the Wigner equation that can be considered as the quantum analogue of
the Vlasov equation. It describes the evolution of the
quantum mechanical phase space distribution function given by the
Wigner-Moyal distribution and can be a useful tool to look into
the microscopic nature of the system. The Wigner function is called a quasi-distribution since it can take negative values, although
its velocity moments give rise to physical variables such
as the density and current.
Gardner\cite{Gardner} derived
the full three-dimensional quantum hydrodynamic (QHD) model for the first time by a
moment expansion of the Wigner-Boltzmann equation.
So far, nonlinear problems like the KdV equation
and BGK modes have been tackled successfully
in classical plasmas. Recently, Lange et al.\cite{Lange} have provided a generalization of the classical BGK modes by obtaining a solution of the stationary Wigner-Poisson equation.
In this work we have attempted to look into the quantum
KdV problem in the semi-classical limit.
For a classical plasma, Ott and Sudan {\cite{OttSudan1}} modeled the nonlinear ion acoustic wave in a kinetic picture,
taking the electron mass into account. They obtained a KdV equation with a Landau damping term as the evolution
equation for the ion acoustic wave. In order to explore the quantum corrections to the nonlinear evolution of an ion acoustic wave
in the presence of Landau damping, we have to replace the Vlasov equation by the Wigner equation.
In this article we investigate, in the semiclassical limit, the quantum corrections
to the nonlinear ion acoustic wave with Landau damping.
We have derived, as the dynamical evolution equation, a higher order KdV equation which has
higher order nonlinear quantum corrections together with the usual classical Landau damping term and a term containing the quantum
correction to the Landau damping. The equation converges to the equation derived by Ott and Sudan in the classical limit, i.e., as $\hbar$ tends to zero. The equation shows features such as conservation of the total ion number and decay of the initial waveform due to
Landau damping. In the next stage we carry out the
perturbative approach of Bogoliubov and Mitropolsky to obtain the decay of the KdV solitary wave amplitude.
For this purpose we assume the Landau damping parameter $\alpha_1$ to be of the order of the quantum factor $Q$.
The procedure reveals that the amplitude decays inversely
with the square of time, depending on the factor $Q$.
The paper is organized as follows. In Section II the derivation
of the evolution equation of the ion acoustic wave with the Landau damping term and the quantum corrections
is given. Some relevant properties of this higher order KdV equation are discussed in Section III. Subsection III-A discusses the conservation of the total number of ions. The Bogoliubov-Mitropolsky perturbation approach with the condition $\alpha_1 \approx Q$
and the decay of the KdV solitary wave are given in Subsection III-B. Concluding remarks are given in
Section IV.
\section{Derivation of the dynamical equation}
The Wigner distribution function is a function of the phase-space variables $(x, v)$ and time, and is built from
$N$ single particle wave functions $\psi_{\alpha}(x,t)$, each characterized by
a probability $P_{\alpha}$ satisfying $\sum_{\alpha=1}^N P_{\alpha}=1$.
It is given by
\begin{equation}
f(x, v, t) = \sum_{\alpha=1}^N\frac{m}{2 \pi \hbar}P_{\alpha}\int_{-\infty}^{\infty}\psi_{\alpha}^* (x+\lambda/2,t)
\psi_{\alpha} (x-\lambda/2,t)e^\frac{ i m v \lambda}{\hbar} d\lambda,
\label{WignerDef}
\end{equation}
where $m$ is the mass of the particle.
The Wigner function
obeys the following evolution equation, called the Wigner equation:
\begin{eqnarray}
\frac{\partial f}{\partial t} + v \frac{\partial f}{\partial x} + \frac{ e m }{2 i \pi \hbar^2}
\iint[\phi(x+ \lambda/2)
- \phi(x- \lambda/2)] f(x, v', t) e^\frac{ i m \lambda (v - v')}{\hbar} d\lambda dv'=0,\label{WignerE}
\end{eqnarray}
where $\hbar$ and $\phi$ are the reduced Planck constant and the self-consistent
electrostatic potential, respectively.
In the semiclassical limit,
we expand the integral up to $O(\hbar ^2)$ and neglect all higher order terms in
$\hbar$ to obtain
\begin{equation}
\frac{\partial f}{\partial t} + v \frac{\partial f}{\partial x} + \frac{e}{m}
\frac{\partial \phi}{\partial x}\frac{\partial f}{\partial v}-
(\frac{e \hbar^2}{24 m^3}) \frac{\partial^3 \phi}{\partial x^3}\frac{\partial^3 f}{\partial v^3}=0
\label{TWignerE}\end{equation}
We can see from (\ref{TWignerE}) that the Vlasov equation is recovered in the limit $\hbar \rightarrow 0$.
In our work, we consider a situation where the ions are cold ($T_{i}=0$), the
electrons have finite temperature, and the quantum effects are relevant for the electrons only. Therefore, we use the usual
fluid equations to describe the dynamics of the ions and the
Wigner equation to describe the electrons.
Hence the relevant normalized system of one-dimensional equations is:
\begin{equation}
\frac{\partial n}{\partial t} + \frac{\partial (nu)}{\partial x} = 0,
\label{continuity}
\end{equation}
which is the continuity equation for ions. The momentum conservation equation for the ions is given by
\begin{equation}
\frac{\partial u}{\partial t} + u \frac{ \partial u}{\partial x} =
- \frac{\partial \phi}{\partial x},
\label{momentum}
\end{equation}
\begin{equation}
(\frac{\lambda_{D}}{L})^2\frac{\partial ^2 \phi}{\partial x^2} = n_{e} - n ,
\label{Poisson}
\end{equation}
which is Poisson's equation appropriate for the description of
dispersive ion acoustic waves. The electron number density
is obtained as the velocity space integral of the single particle
distribution function $f$
\begin{equation}
n_{e} = \int_{-\infty}^{\infty} f dv,
\label{numberdensity}
\end{equation}
that is described by the Wigner equation in the semiclassical limit
\begin{equation}
(\frac{m_{e}}{m_{i}})^\frac{1}{2}\frac{\partial f}{\partial t} + v \frac{\partial f}{\partial x} +
\frac{\partial \phi}{\partial x}\frac{\partial f}{\partial v}-
Q \frac{\partial^3 \phi}{\partial x^3}\frac{\partial^3 f}{\partial v^3}=0,
\label{Wigner}
\end{equation}
where $n_e, n, u$ are the
electron number density, ion number density and
ion velocity respectively, $\lambda_{D}=\sqrt{{KT_{e}}/{4\pi n_0 e^2}}$
is the Debye length, $L$ is the characteristic length for variations of $n,u,\phi,n_e,f$, and
$Q = {\hbar^2}/({24\, m_e^2 c^2_{s} L^2})$ is the quantum parameter.
Here the following normalization scheme has been used :
\begin{eqnarray}
x = \frac{\tilde{x}}{L}, t = \frac{c_{0}\tilde{t}}{L}, v = \frac{\tilde{v}}{c_s},
\phi = \frac{e \tilde{\phi}}{K_{B}T_{e}},
n = \frac{\tilde{n}}{n_{0}}, f = \frac{\tilde{f}}{n_{0}}, u = \frac{\tilde{u}}{c_{0}},
\end{eqnarray}
where $c_{0}$ is the ion acoustic sound speed $= \sqrt{{K T_e}/{m_i}}$, $c_s$ is the electron thermal
velocity $= \sqrt{{K T_e}/{m_e}}$ , $n_0$ is the ambient number density of electrons (ions)
and $T_e$ is the electron temperature.
As in {\cite{OttSudan1}}, three basic parameters enter the problem:
the strength of Landau damping by electrons, the measure of nonlinearity, and the
measure of dispersive effects. In this calculation we do not neglect the electron to ion mass ratio, and
since $T_i = 0$, the Landau damping is provided solely by the electrons. We consider all three effects, i.e.,
Landau damping, nonlinearity and dispersion, to be small but of the same order of magnitude:
1) $\sqrt{{m_{e}}/{m_{i}}} = \alpha_1 \epsilon$, the effect of
Landau damping by electrons;
2) ${\Delta n}/{n_{0}} = \alpha_2 \epsilon$, a measure of the strength of the nonlinearity;
3) $({\lambda_{D}}/{L})^2= 2 \alpha_3 \epsilon$, a measure of the strength of the dispersive effects.
Here $\epsilon$ is a smallness parameter. Following the usual mathematical procedure, we transform to a moving frame with a stretched time as
\begin{eqnarray}
\xi = x - t,\ \tau = \epsilon t,
\label{Movframe}
\end{eqnarray}
and expand the dependent variables for small nonlinearity as
\begin{eqnarray}
n = 1 + \alpha_{2}\epsilon n^{(1)} + \alpha_{2}^2 \epsilon^2 n^{(2)} + ...,\nonumber\\
u = \alpha_{2}\epsilon u^{(1)} + \alpha_{2}^2 \epsilon^2 u^{(2)} + ...,\nonumber\\
\phi = \alpha_{2}\epsilon \phi^{(1)} + \alpha_{2}^2 \epsilon^2 \phi^{(2)} + ...,\nonumber\\
n_{e} = 1 + \alpha_{2}\epsilon n_{e}^{(1)} + \alpha_{2}^2 \epsilon^2 n_{e}^{(2)} + ...,\nonumber\\
f = f^{(0)} + \alpha_{2}\epsilon f^{(1)} + \alpha_{2}^2 \epsilon^2 f^{(2)} + ...
\label{VarExpand}
\end{eqnarray}
In the semiclassical limit, the equilibrium distribution $f^{(0)}$ is chosen as
\begin{equation}
f^{(0)}(v) = \frac{1}{\sqrt{2 \pi}}\exp{(\frac{-v^2}{2})}
\label{Maxwellian}
\end{equation}
Substituting Eqns. (\ref{Movframe}), (\ref{VarExpand}), (\ref{Maxwellian}) in (\ref{continuity})-(\ref{Wigner})
and equating coefficients
of $\epsilon$, $\epsilon^2$ to zero we get first and second order equations which
need to be solved.
\subsection {$\epsilon$ order calculation:}
From Eqns (\ref{continuity})-(\ref{Poisson}) we get
\begin{eqnarray}
\frac{\partial n^{(1)}}{\partial \xi}=\frac{\partial u^{(1)}}{\partial \xi}=\frac{\partial \phi^{(1)}}{\partial \xi},
n^{(1)} = n_{e}^{(1)}
\label{1unphi}
\end{eqnarray}
From equation (\ref{Wigner}) we get
\begin{equation}
v \frac{\partial f^{(1)}}{\partial \xi} = v \frac{\partial \phi^{(1)}}{\partial \xi} f^{(0)}+
Q \frac{\partial^3 \phi^{(1)}}{\partial \xi^3}(3v - v^3) f^{(0)},
\label{nonuneque1}
\end{equation}
which yields
\begin{equation}
\frac{\partial f^{(1)}}{\partial \xi} = \frac{\partial \phi^{(1)}}{\partial \xi} f^{(0)}+
Q \frac{\partial^3 \phi^{(1)}}{\partial \xi^3}(3 - v^2) f^{(0)} + \lambda(\xi,\tau)\delta(v),
\label{nonuneque}
\end{equation}
where $\delta(v)$ is the Dirac delta function and $\lambda (\xi,\tau)$ is an arbitrary function of $\xi,\tau$.
Here the problem of non-uniqueness arises, as in {\cite{OttSudan1},\cite{AnupB}}, which can be removed by
retaining a $\tau$ derivative term from a higher $\epsilon$ order. Thus, we write
\begin{equation}
(\alpha_1 \epsilon^2)\frac{\partial f_{\epsilon}^{(1)}}{\partial \tau}+
v \frac{\partial f_{\epsilon}^{(1)}}{\partial \xi} = v \frac{\partial \phi^{(1)}}{\partial \xi} f^{(0)}+
Q \frac{\partial^3 \phi^{(1)}}{\partial \xi^3}(3v - v^3) f^{(0)},
\label{uneque1}
\end{equation}
where the first term of (\ref{uneque1}) has been taken from order $\epsilon^3$ equation.
Once $f_{\epsilon}^{(1)}$ is known,
$f^{(1)}$ can be determined uniquely by
\begin{equation}
f^{(1)} = \lim_{\epsilon \to 0} f_{\epsilon}^{(1)}
\end{equation}
We introduce the Fourier transform in $\xi$ and $\tau$ as
\begin{equation}
\widehat{ f_{\epsilon}^{(1)}}(\omega, k) = \frac{1}{2 \pi}\int_{\xi =-\infty}^{\infty}\int_{\tau=0}^{\infty}
f_{\epsilon}^{(1)}(\xi,\tau) \exp[{i(\omega \tau - k \xi)}] d\xi d\tau
\end{equation}
Now,
\begin{equation}
\widehat{(\frac{\partial f_{\epsilon}^{(1)}}{\partial \xi})}(\omega, k)=
(i k )\widehat{ f_{\epsilon}^{(1)}}(\omega, k),
\end{equation}
and
\begin{equation}
\widehat{(\frac{\partial f_{\epsilon}^{(1)}}{\partial \tau})}(\omega, k)=-(i \omega )\widehat{ f_{\epsilon}^{(1)}}
(\omega, k) -\frac{1}{2 \pi}\int_{\xi =-\infty}^{\infty} \exp[ -i k \xi]
f_{\epsilon}^{(1)}|_{\tau = 0} d \xi ,
\end{equation}
and
\begin{equation}
\widehat{\frac{\partial^3 \phi^{(1)}}{\partial \xi^3}}(\omega, k)=
(-i k^3 )\widehat{{ \phi^{(1)}}}(\omega, k)
\end{equation}
Now applying these Fourier transforms on (\ref{uneque1}), letting $\epsilon \rightarrow 0$
and using
\begin{equation}
\lim_{\epsilon \to 0} \frac{1}{(k v - \omega \alpha_1 \epsilon^2)} = P(\frac{1}{k v}) + i \pi \delta(k v)
\end{equation}
we get,
\begin{equation}
\widehat{ f^{(1)}}(\omega, k) = \widehat{ \phi^{(1)}} (\omega, k)f^{(0)}
- Q k^2(3 - v^2)\widehat{ \phi^{(1)}}(\omega, k) f^{(0)} ,
\end{equation}
where $P$ denotes the Cauchy principal value of the integral.
Taking inverse Fourier transform we get the form of $f^{(1)}$ as,
\begin{equation}
f^{(1)} = \phi^{(1)} f^{(0)} +Q (3 - v^2) \frac{\partial^2 \phi^{(1)}}{\partial \xi^2} f^{(0)}
\label{f1}
\end{equation}
The first term of (\ref{f1}) is the same as in the classical case, whereas the second term is the quantum correction
term.
The procedure also shows that $\lambda (\xi,\tau)$ appearing in (\ref{nonuneque}) is zero.
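As a quick consistency check of (\ref{f1}) against (\ref{nonuneque1}), the following numerical sketch uses a hypothetical test potential $\phi^{(1)}(\xi)=\sin\xi$ and an arbitrary value of $Q$ (neither comes from the paper), for which all derivatives are known in closed form:

```python
import math

Q = 0.05  # arbitrary value of the quantum parameter for this check

f0 = lambda v: math.exp(-v * v / 2) / math.sqrt(2 * math.pi)

# hypothetical test potential phi1(xi) = sin(xi) and its derivatives
phi1   = math.sin
dphi1  = math.cos
d2phi1 = lambda s: -math.sin(s)
d3phi1 = lambda s: -math.cos(s)

def f1(xi, v):
    # the first-order solution: f1 = phi1*f0 + Q*(3 - v^2)*phi1''*f0
    return phi1(xi) * f0(v) + Q * (3.0 - v * v) * d2phi1(xi) * f0(v)

# the first-order equation: v*df1/dxi = v*phi1'*f0 + Q*phi1'''*(3v - v^3)*f0
h = 1e-5
for xi in (0.3, 1.1, 2.4):
    for v in (-1.5, 0.7, 2.0):
        lhs = v * (f1(xi + h, v) - f1(xi - h, v)) / (2 * h)
        rhs = v * dphi1(xi) * f0(v) + Q * d3phi1(xi) * (3 * v - v ** 3) * f0(v)
        assert abs(lhs - rhs) < 1e-8
```

The central difference of the solution reproduces the right-hand side of the first-order equation at every sampled $(\xi, v)$.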
\subsection{$\epsilon^2$ order calculation:}
From equations (\ref{continuity})- (\ref{Poisson}), we can obtain in a straightforward way,
\begin{equation}
2 \frac{\partial n^{(1)}}{\partial {\tau}} + 3 \alpha_2 n^{(1)}\frac{\partial n^{(1)}}{\partial {\xi}}+2 \alpha_3
\frac{\partial^3 n^{(1)}}{\partial \xi^3} = \alpha_2 \frac{\partial}{\partial \xi}(n_e^{(2)}-
\phi^{(2)})
\label{AnotherE}
\end{equation}
From equation (\ref{Wigner}) we get,
\begin{equation}
(\alpha_1 \epsilon^2)\frac{\partial f_{\epsilon}^{(2)}}{\partial {\tau}}
+ v \frac{\partial f_{\epsilon}^{(2)}}{\partial {\xi}} - v f^{(0)} \frac{\partial \phi^{(2)}}{\partial \xi}
- Q \frac{\partial^3 \phi^{(2)}}{\partial \xi ^3}\frac{\partial^3 f^{(0)}}{\partial v^3}
= C(\xi, \tau, v) ,\label{mainE1}
\end{equation}
where the $\tau$ derivative term is taken from the $\epsilon^4$ order, and terms that are products of
the quantum term and second order perturbation terms are neglected as small compared to the other terms.
Here $C(\xi, \tau, v)$ is defined as
\begin{equation}
C(\xi,\tau,v) = [C_a (\xi,\tau) + C_b (\xi,\tau) v + C_c(\xi, \tau)v^2 + C_d(\xi,\tau)v^3] f^{(0)},
\end{equation}
where
\begin{equation}
C_a(\xi,\tau) = (\frac{\alpha_1}{\alpha_2})[\frac{\partial \phi^{(1)}}{\partial \xi} +
3 Q \frac{\partial^3 \phi^{(1)}}{\partial \xi^3}]
\end{equation}
\begin{equation}
C_b(\xi,\tau) = [\phi^{(1)} \frac{\partial \phi^{(1)}}{\partial \xi} +
5 Q \frac{\partial^2 \phi^{(1)}}{\partial \xi^2}\frac{\partial \phi^{(1)}}{\partial \xi}
+ 3Q \phi^{(1)} \frac{\partial^3 \phi^{(1)}}{\partial \xi^3}]
\end{equation}
\begin{equation}
C_c(\xi,\tau) = -Q (\frac{\alpha_1}{\alpha_2})[\frac{\partial^3 \phi^{(1)}}{\partial \xi^3} ]
\end{equation}
\begin{equation}
C_d(\xi,\tau) = [
- Q \frac{\partial^2 \phi^{(1)}}{\partial \xi^2}\frac{\partial \phi^{(1)}}{\partial \xi}
-Q \phi^{(1)} \frac{\partial^3 \phi^{(1)}}{\partial \xi^3}]
\end{equation}
Applying the Fourier transform to (\ref{mainE1}) and letting $\epsilon$ tend to zero, we get
\begin{eqnarray}
\widehat{f^{(2)}}(\omega,k) - f^{(0)} \widehat{\phi^{(2)}}(\omega,k)=
- i \widehat{C_a}[P(\frac{1}{k v})+ i \pi \delta(k v)] f^{(0)} - i v \widehat{C_b} P(\frac{1}{k v}) f^{(0)}\nonumber\\-
i v^2 \widehat{C_c} P(\frac{1}{k v}) f^{(0)}-i v^3 \widehat{C_d} P(\frac{1}{k v}) f^{(0)}
\end{eqnarray}
Multiplying by $(i k)$ and integrating over $v$ yields
\begin{equation}
i k \widehat{n^{(2)}} - i k \widehat{\phi^{(2)}} = i \sqrt{\frac{\pi}{2}}\widehat{C_a} sgn(k)
+ \widehat{C_b} + \widehat{C_d},
\label{n2phi2f}
\end{equation}
where we have used $k \delta(k v) = sgn(k) \delta(v)$.
Now taking the inverse Fourier transform of equation (\ref{n2phi2f}) we obtain
\begin{eqnarray}
\frac{\partial}{\partial \xi}(n^{(2)} - \phi^{(2)}) = C_b + C_d -
\frac{1}{\sqrt{2 \pi}}[P \int_{-\infty}^{\infty}(\frac{\alpha_1}{\alpha_2})\frac{\partial n^{(1)}}{\partial \xi'}
\frac{d \xi'}{\xi - \xi'}+P \int_{-\infty}^{\infty}(\frac{ 3 Q \alpha_1}{\alpha_2})\frac{\partial^3 n^{(1)}}
{\partial \xi'^3}
\frac{d \xi'}{\xi - \xi'}],
\label{n2phi2}
\end{eqnarray}
Now using (\ref{AnotherE}) and (\ref{n2phi2}) we finally get
\begin{eqnarray}
\frac{\partial n^{(1)}}{\partial \tau} +
\alpha_{2} n^{(1)}\frac{\partial n^{(1)}}{\partial \xi}
+\alpha_{3}\frac{\partial^3 n^{(1)}}{\partial \xi^3} -
Q \alpha_{2}\frac{\partial}{\partial \xi}
\left[\frac{\partial n^{(1)}}{\partial \xi}\right]^2 -
Q \alpha_{2} n^{(1)}
\frac{\partial^3 n^{(1)}}{\partial \xi^3}
\nonumber\\
+\frac{\alpha_{1}}{\sqrt{8\pi}}\left[P \int_{-\infty}^{\infty}\frac{\partial n^{(1)}}{\partial \xi'}\frac{d \xi'}{(\xi - \xi')}\right]
+\frac{3\alpha_{1} Q}{\sqrt{8\pi}}\left[P \int_{-\infty}^{\infty}\frac{\partial^3 n^{(1)}}{\partial \xi'^3}
\frac{d \xi'}{(\xi - \xi')}\right]
= 0,
\label{FinalE}
\end{eqnarray}
which is the main equation of interest
in this work. It is the evolution equation of the nonlinear ion acoustic wave, taking into account
the Landau damping effect with quantum corrections arising from the semiclassical kinetic approach, i.e., the
Wigner equation approach.
The fourth and fifth terms of (\ref{FinalE})
are nonlinear quantum corrections,
and the last term on the LHS is the quantum correction to the Landau damping. We can see
that the equation converges exactly to the equation derived by
Ott and Sudan {\cite{OttSudan1}} in the limit $\hbar$ $\rightarrow$ 0. The equation is a higher order
KdV equation which has higher order nonlinear quantum correction terms and a Landau damping term
with its quantum correction. Due to
the nature of the equation we can show that it conserves the total number of particles. The presence of the
Landau damping terms also ensures that the amplitude of a soliton must decay with time.
These facts are derived in the next section.
\section{Some relevant properties}
\subsection{Conservation of ion number }
The equation (\ref{FinalE}) is a higher order KdV equation with Landau damping terms.
Integrating (\ref{FinalE}) with respect to $\xi$, assuming $n^{(1)}, {\partial n^{(1)}}/{\partial \xi},
{\partial^2 n^{(1)}}/{\partial \xi^2} \rightarrow 0$ as $\xi \rightarrow \pm \infty$, and renaming
$n^{(1)} = U$, we can show that
\begin{equation}
\frac{\partial}{\partial \tau} \int_{-\infty}^{\infty} U d\xi = 0
\end{equation}
Here we have used the fact that
\begin{equation}
P \int_{-\infty}^{\infty} \frac{d \xi}{\xi-\xi'} = 0
\end{equation}
Hence ion number is conserved.
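The vanishing of this principal-value integral is simply the oddness of the kernel $1/(\xi-\xi')$ about $\xi = \xi'$. A minimal numerical illustration (a sketch, not part of the paper; the grid spacing and cutoff are arbitrary choices) samples the kernel symmetrically about $\xi'$, excluding the singular point:

```python
import numpy as np

# P \int d(xi)/(xi - xi') vanishes: the integrand is odd about xi = xi'.
# Sample xi - xi' symmetrically on both sides of the singularity.
h = 1e-3
offsets = np.arange(1, 200001) * h            # xi - xi' = h, 2h, ..., 200
pv = h * (np.sum(1.0 / offsets) + np.sum(1.0 / (-offsets)))
# The left- and right-hand contributions cancel pairwise.
```

With a symmetric truncation the cancellation is exact; in the paper the same statement holds in the principal-value sense on the full line.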
\subsection{Decay of solitary wave}
Ott and Sudan {\cite{OttSudan1,OttSudan2}} considered $\alpha_1$ to be a small
perturbation parameter and used the fact that, due to Landau
damping, the amplitude of the KdV solitary wave decreases with time. Then,
using the Bogoliubov-Mitropolsky {\cite{Bogoliubov}} approximation method, they
found the decay rate of the amplitude, which depends on the small parameter $\alpha_1$.
In (\ref{FinalE}) we see that there are higher order KdV terms together with the Landau damping term and its quantum correction.
However, an exact sech-type solitary wave solution of a general higher order KdV equation of the above form exists
only when the coefficient of the term $\frac{\partial}{\partial \xi}
[\frac{\partial U}{\partial \xi}]^2$ equals $-2$ times
the coefficient of the term $ U
\frac{\partial^3 U}{\partial \xi^3}$, a condition that is not satisfied in (\ref{FinalE}); hence the exact solitary wave
solution of the higher order KdV equation, and its decay due to the Landau damping terms, cannot be worked out here.
Note also that (\ref{FinalE}) contains two small parameters, $\alpha_1$ and $Q$,
while $\alpha_2, \alpha_3$ are assumed to be $\approx 1$. Hence, in the subsequent part of the work, the quantum correction
terms and the Landau damping term are treated as perturbations to the KdV equation.
Since a perturbation analysis with multiple independent small parameters would introduce multiple time scales,
it would be too complicated to carry out analytically.
In order to simplify the problem and find the nature of the
decay of the KdV solitary wave amplitude, we will assume that $\alpha_1 \approx \alpha_2 Q$.
For example, in the case of hydrogen plasma $\alpha_1$ is approximately 0.025, and in {\cite{QElecHoles1,
QElecHoles2}}
the factor $Q$ is taken to be of order
$0.01$. Assuming this relation between the small parameters,
the quantum correction to the Landau damping term,
which appears as the last term of (\ref{FinalE}), is of order $\alpha_1 Q \approx Q^2$ and
hence can be neglected compared to the other terms.
We now apply the
well-known method of Bogoliubov and Mitropolsky \cite{Bogoliubov,OttSudan1,OttSudan2} with $\alpha_2 Q
= C$ as the small perturbation parameter. Hence $\alpha_1$ can be taken as
$\alpha_1 = \beta C$, where $\beta$ is a number of order unity. For the perturbation
analysis to be consistent with the condition of validity of (\ref{FinalE}), it is also required that
$1 \gg C \gg \epsilon$.
We assume a new phase coordinate of the form
\begin{equation}
\phi(\xi, \tau) =\sqrt{\frac{N(\tau)\alpha_2}{12 \alpha_3}}\left(\xi - \frac{\alpha_2}{3}\int_{0}^{\tau}N(\tau') d\tau'\right),
\label{phase}
\end{equation}
where $N(\tau)$ is assumed to vary slowly with time.
We introduce two time scales
following \cite{Bogoliubov} as
\begin{equation}
t_0 = \tau, \quad t_1 = C \tau,
\label{timescale}
\end{equation}
with $N = N(C, \tau)$,
and we seek a solution
of the form
\begin{equation}
U (\phi, \tau, C) = U_0(\phi, t_0, t_1) + O(C),
\label{amplexp}
\end{equation}
where (\ref{amplexp}) is to be valid for long times, i.e., times as large as $\tau \sim O(1/C)$.
In order to find such a solution, valid for long times, we first expand $U(\phi, \tau, C)$ to $O(C)$:
\begin{equation}
U(\phi, \tau, C) = U_0(\phi, t_0, t_1) + C U_1(\phi, t_0) + O(C^2)
\label{amplexp2}
\end{equation}
Using (\ref{phase}), (\ref{timescale}), (\ref{amplexp2}) in (\ref{FinalE}) we get an equation containing
different powers of $C$ and equating coefficients of each power of $C$ we get different order equations
which need to be solved.
Since we are interested in the damping of solitary waves, we have the following initial and boundary conditions:
\begin{eqnarray}
U(\phi, 0, C) = N_{0} sech^2(\phi), \nonumber\\
U (\pm{\infty}, \tau, C) = 0
\label{IBc1}
\end{eqnarray}
Solving the order unity equation which is
\begin{equation}
\rho \frac{\partial U_0}{\partial t_0} + \frac{\partial^3 U_0}{\partial \phi^3}
-4 \frac{\partial U_0}{\partial \phi} + \frac{12}{N} U_0 \frac{\partial U_0}{\partial \phi} = 0,
\label{unity}
\end{equation}
we get
\begin{equation}
U_{0} (\phi, t_0, t_1) = N(t_1)sech^2(\phi),
\end{equation}
where $\rho= {24 \sqrt{3\alpha_3}}/{(N \alpha_2)^{3/2}}$ and $N(t_1)$ is an arbitrary function of $t_1$
except for the initial condition $N(0) = N_0$. Hence $U_0$ does not depend on $t_0$.
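As a consistency check (an illustrative sketch using sympy, not part of the paper's derivation), one can verify symbolically that $U_0 = N\,sech^2(\phi)$, being independent of $t_0$, satisfies Eq. (\ref{unity}) with the time-derivative term dropped:

```python
import sympy as sp

phi, N = sp.symbols('phi N', positive=True)
U0 = N * sp.sech(phi)**2

# Eq. (unity) with dU0/dt0 = 0:
#   d^3 U0/dphi^3 - 4 dU0/dphi + (12/N) U0 dU0/dphi = 0
residual = (sp.diff(U0, phi, 3) - 4*sp.diff(U0, phi)
            + (12/N)*U0*sp.diff(U0, phi))
residual = sp.simplify(residual.rewrite(sp.exp))  # should reduce to 0
```

The cancellation uses the identity $sech^2\phi = 1 - \tanh^2\phi$, which sympy resolves after rewriting in exponentials.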
The order $C$ equation is
\begin{equation}
\frac{\partial U_1}{\partial t_0} + L[U_1] = M[U_0],
\end{equation}
where
\begin{eqnarray}
M[U_0] = -\frac{\partial U_0}{\partial t_1} - \frac{\phi}{2N}\frac{\partial U_0}{\partial \phi} \frac{dN}{dt_1}
+ \frac{1}{(\rho \alpha_3)}[ \frac{\partial^3 U_0}{\partial \phi^3} U_0
+2 \frac{\partial U_0}{\partial \phi} \frac{\partial^2 U_0}{\partial \phi^2}]
-\frac{\beta}{\sqrt{8 \pi}}[P \int_{-\infty}^{\infty}\sqrt{\frac{N(\tau)\alpha_2}{12 \alpha_3}}
\frac{\partial U_0}{\partial \phi'}
\frac{d \phi'}{\phi - \phi'}],
\label{Mu0}
\end{eqnarray}
\begin{eqnarray}
L[U_1] = \frac{1}{\rho}\frac{\partial^3 U_1}{\partial \phi^3} - \frac{4}{\rho}\frac{\partial U_1}{\partial \phi}
+ \frac{12}{(N \rho)}\frac{\partial (U_0 U_1)}{\partial \phi}
\end{eqnarray}
Again the boundary and initial conditions are
\begin{eqnarray}
U_1 (\pm \infty, t_0) = 0,
U_1(\phi, 0) = 0
\label{IBc2}
\end{eqnarray}
In order for (\ref{amplexp2}) to be valid for times as large as $\tau \sim O(1/C)$,
it is required that $U_1(\phi, t_0)$ does not behave secularly with $t_0$. To eliminate secular behavior
of $U_1$ it is necessary that $M[U_0]$ be orthogonal to all solutions, $g(\phi)$,
of $L^+[g] = 0$
which satisfy (\ref{IBc2}) [i.e., $g(\pm \infty) = 0$], where $L^+$ is the operator adjoint to $L$, given by
\begin{equation}
L^+ = -\frac{1}{\rho}\frac{\partial^3}{\partial \phi^3} + \frac{4}{\rho}\frac{\partial}{\partial \phi}
-\frac{12}{\rho} sech^2(\phi)\frac{\partial}{\partial \phi}.
\end{equation}
The only solution of $L^+[g] = 0$, $g(\pm \infty) = 0$, is $g(\phi) = sech^2(\phi)$.
Thus,
\begin{equation}
\int_{-\infty}^{\infty} sech^2(\phi) M[U_0] d\phi = 0
\label{OrthogC}
\end{equation}
In order to evaluate this integral we consider (\ref{Mu0}) term by term. The first two terms of
$M[U_0]$ together give $-{d N}/{d t_1}$ after integration.
The third and fourth terms, which come from the nonlinear quantum correction terms, give zero after
integration due to the odd nature of the integrand. Finally the last term, i.e., the Landau damping term, gives
$-(2.92)\, \beta \sqrt{{\alpha_2}/({96 \pi \alpha_3})}\, N^{3/2}$,
where we have used that
\begin{equation}
P \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} sech^2(\phi) \frac{\partial(sech^2(\phi'))}{\partial \phi'}
\frac{d \phi'}{(\phi - \phi')}\, d\phi = (24/\pi^2)\, \zeta(3) \approx 2.92
\end{equation}
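The constant $24\zeta(3)/\pi^2$ can be checked numerically (a sketch, not from the paper; the half-step offset between the two grids is an illustrative way to implement the principal value, since the $1/(\phi-\phi')$ pole is then sampled symmetrically and never hit):

```python
import numpy as np
from scipy.special import zeta

# Closed-form value quoted in the text: (24/pi^2) * zeta(3).
analytic = 24.0 * zeta(3) / np.pi**2

# Numerical principal value on offset grids: phi' is shifted by h/2
# so phi != phi' everywhere and the pole contributions cancel in pairs.
h = 0.02
phi = np.arange(-12.0, 12.0, h)
phip = phi + h / 2.0
sech2 = lambda x: 1.0 / np.cosh(x) ** 2
dsech2 = -2.0 * sech2(phip) * np.tanh(phip)      # d/dphi' sech^2(phi')
kernel = 1.0 / (phi[:, None] - phip[None, :])
numeric = (sech2(phi)[:, None] * dsech2[None, :] * kernel).sum() * h**2
```

Both the analytic value and the quadrature land near 2.92, the number used in the decay-rate coefficient $\beta_1$.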
Thus we get a first-order differential equation for $N$; solving it, we obtain
\begin{equation}
N = \frac{N(0)}{[1 + (\frac{1}{2} \beta_1 \alpha_2 Q N(0)^{1/2}) \tau]^2},
\label{decayrate}
\end{equation}
where $\beta_1 = (2.92) \beta \sqrt{{\alpha_2}/{96 \pi \alpha_3}}$.
From Eq.~(\ref{decayrate})
we see that the decay law of the amplitude depends on the quantum factor $Q$. A full numerical computation of
(\ref{FinalE}) could reveal the complete dynamical nature of the solution.
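Eq.~(\ref{decayrate}) is straightforward to evaluate; the following sketch (an illustration, not from the paper) reproduces the parameter choice used in Fig.~1, $Q=0.01$, $N(0)=1$, $\alpha_2=6$, $\alpha_3=1$, $\beta=1$:

```python
import numpy as np

def soliton_amplitude(tau, Q=0.01, N0=1.0, alpha2=6.0, alpha3=1.0, beta=1.0):
    """Soliton amplitude N(tau) from the decay law, Eq. (decayrate)."""
    # beta_1 = 2.92 * beta * sqrt(alpha_2 / (96 pi alpha_3))
    beta1 = 2.92 * beta * np.sqrt(alpha2 / (96.0 * np.pi * alpha3))
    return N0 / (1.0 + 0.5 * beta1 * alpha2 * Q * np.sqrt(N0) * tau) ** 2

tau = np.linspace(0.0, 200.0, 201)
N = soliton_amplitude(tau)   # monotonically decaying, N(0) = N0
```

The amplitude falls off as $\tau^{-2}$ at late times, with the rate controlled by the quantum factor $Q$ through the combination $\beta_1 \alpha_2 Q$.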
\begin{figure}[!h]
\centering
{
\includegraphics[width=7 cm, angle=0]{decayrate.eps}
}
\caption{Decay of soliton amplitude with time when $Q =0.01 , N(0) = 1, \alpha_3=1,
\alpha_2=6, $
and $\beta=1$
}
\label{fig:1}
\end{figure}
\section{Concluding remarks}
In this work we have extended the methodology of
Ott and Sudan to include semiclassical quantum effects, obtaining a new evolution equation for a nonlinear
ion acoustic wave. This equation has the form of a higher order KdV equation with higher order nonlinear terms as quantum corrections, together with a classical Landau damping term as well
as a quantum contribution coming from resonant particle effects.
Using the fluid equations for ions and the classical kinetic Vlasov equation for electrons, Ott and Sudan obtained a KdV equation with a Landau damping term as
the evolution equation for the nonlinear ion acoustic wave.
In order to introduce the quantum corrections, the classical Vlasov equation is replaced
by an appropriate quantum analog i.e, the Wigner equation.
In a similar approach, using the Wigner equation in place of the Vlasov equation gives rise to our higher order KdV equation with Landau damping terms.
The equation converges exactly to the equation derived in
{\cite{OttSudan1}} when $\hbar$ tends to zero, i.e., in the classical limit. The mathematical nature of the equation shows that it conserves the total number of ions. The importance of the higher order KdV equation derived here lies in the fact that its solution would give the quantum modification of the KdV solitary wave. Unfortunately, exact solitary wave solutions of this equation cannot be obtained. Since there are two small parameters in the equation,
$\alpha_1$ and $Q$, we treat the quantum corrections as well as the Landau damping terms as perturbations to the KdV equation.
In order to carry out the Bogoliubov and Mitropolsky approximation technique, multiple time scales stretched by these small parameters would have to be introduced; such a calculation is too complicated to carry out analytically. Hence, in order to obtain a
useful analytical result, we have assumed $\alpha_1 \approx \alpha_2 Q$.
Hence, the quantum correction to the Landau damping term turns out to be of the order of $Q^2$ and is therefore neglected.
In the perturbative approach, the contribution to the decay rate coming from the nonlinear quantum correction terms turns out to be zero because of the odd nature of the integrand.
The final contribution to the decay of solitary wave amplitude comes from the classical Landau terms, whose coefficient, due to the perturbation scheme, turns out to be of the order of $Q$.
The amplitude is shown to decay inversely
with the square of time,
at a rate depending on the quantum factor $Q$. In our final decay-rate equation
no terms come directly from the quantum corrections: the
quantum nonlinear part vanishes when the integration over $\phi$ is performed, and
the quantum Landau damping term, being of order $Q^2$,
is neglected. This is a consequence of our chosen scheme;
a perturbation analysis with multiple time scales
could give rise to solutions with a more complete dependence
on the quantum effects. Nevertheless, the
importance of the equation stands, and it
could be the starting point of a numerical computation that would reveal the entire dynamical nature of the solution with the inclusion of quantum mechanical effects.
\newpage
\section{Introduction}
\label{sec:intro}
General Relativity (GR) has successfully passed all the tests performed so far in various regimes of gravity. Solar System experiments have placed very tight constraints on weak-field modifications to GR~\cite{TEGP,Will:LRR}. Various cosmological observations have placed bounds on large scale modifications~\cite{Jain:2010ka,Clifton:2011jh,Joyce:2014kja,Koyama:2015vza,Salvatelli:2016mgy}. The recent discovery of gravitational waves (GWs) from binary black hole (BH) coalescences by the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration~\cite{Abbott:2016blz,Abbott:2016nmj,TheLIGOScientific:2016pea,Abbott:2017vtc,Abbott:2017oio} allowed one to probe gravity in both the strong and dynamical field regimes for the first time~\cite{TheLIGOScientific:2016src,Yunes:2016jcc}. Future GW observations of binary BH coalescences will significantly improve the ability of probing gravity in such a regime~\cite{Gair:2012nm,Yunes:2013dva,Berti:2015itd,Yagi:2016jml,Barausse:2016eii,Chamberlain:2017fjl}.
Neutron star (NS) observations are also excellent tools to perform strong-field tests of gravity~\cite{psaltis-review,Stairs:2003yja,Wex:2014nva}. One very timely example is the detection of GWs and electromagnetic waves from a binary neutron star merger GW170817~\cite{TheLIGOScientific:2017qsa}. This observation allows for bounds on the propagation speed of GWs, on gravitational Lorentz violation and on the equivalence principle~\cite{Monitor:2017mdv,Hansen:2014ewa}. Unlike BH observations, one can use NS observations to probe how non-GR effects may arise in spacetimes with matter. For example, binary pulsar observations~\cite{Damour:1996ke,Damour:1998jk,freire,Shao:2017gwu} have been used to probe spontaneous scalarization in scalar-tensor theories~\cite{Damour:1992we,Damour:1993hw}, which arises due to the nonlinear coupling between matter and a scalar field, and is absent in BH spacetimes. Since the density inside a NS exceeds the nuclear saturation density, one can use independent measurements of NS quantities, such as the mass and radius, to probe the equation of state (i.e.~the relation between pressure and density) of supranuclear matter, which is currently unknown~\cite{lattimer_prakash2001,lattimer-prakash-review,Lattimer:2012nd,steiner-lattimer-brown,Ozel:2012wu,Miller:2013tca,Ozel:2015fia,Ozel:2016oaf}. Unfortunately, however, this means that there typically exists a degeneracy between equation-of-state effects and strong-field modifications in NS observations, rendering the latter unfeasible without knowledge of the former.
One way to overcome such a problem is to use approximate universal relations among NS observables, i.e.~relations that do not depend strongly on the underlying equation of state (EoS), to project out uncertainties in nuclear physics and allow for tests of GR. Universal relations are known to exist for example between the oscillation frequencies of NSs and the stellar average density~\cite{andersson-kokkotas-1996,andersson-kokkotas-1998}, or between the binding energy of a star and its compactness~\cite{lattimer_prakash2001}. Recently, two of us found another example of universal relations, one that holds among dimensionless versions of the moment of inertia ($I$), the tidal Love number (Love) and the quadrupole moment ($Q$)~\cite{I-Love-Q-Science,I-Love-Q-PRD}. The universality, in fact, depends sensitively on how one adimensionalizes these NS observables~\cite{Majumder:2015kfa}. For example, the I-Q universality is lost for rapidly-rotating NSs if one fixes the angular velocity~\cite{doneva-rapid}, but it is restored if one fixes the dimensionless spin parameter instead~\cite{pappas-apostolatos,Chakrabarti:2013tca}.
What is the origin of the I-Love-Q universality? Reference~\cite{Stein:2013ofa} argued that the fact that isodensity contours inside a star are approximately elliptically self similar is the reason. In other words, the fact that the eccentricity of such isodensity contours is approximately constant throughout the star is responsible for the approximate universality. Indeed, Ref.~\cite{Stein:2013ofa} showed that the NS eccentricity varies by only $\sim 20\%$ throughout its interior for all EoSs they considered, providing evidence for this explanation. Moreover, Ref.~\cite{Yagi:2014qua} showed that the relations become significantly less universal if one artificially forces the variation of the eccentricity of isodensity contours to increase. This explanation is also consistent with the results of~\cite{Sham:2014kea,Chan:2015iou}, who suggested the origin of the universality is associated with all realistic EoSs being ``similar'' to an incompressible (constant density star) one, in which case the elliptical isodensity approximation becomes exact.
The I-Love-Q relations have applications to strong-field tests of gravity, astrophysics~\cite{Newton:2016weo}, gravitational-wave physics~\cite{I-Love-Q-Science,I-Love-Q-PRD} and nuclear physics~\cite{Baubock:2013gna,Psaltis:2013fha} (see~\cite{Yagi:2016bkt,Doneva:2017jop} and references therein for detailed and recent reviews), but let us concentrate on the former. Future radio observations may provide the first measurements of the moment of inertia of the primary pulsar in J0737-3039~\cite{lattimer-schutz,kramer-wex}, while future gravitational-wave observations may provide the first measurements of the NS tidal Love number~\cite{read-markakis-shibata,flanagan-hinderer-love,hinderer-lackey-lang-read,damour-nagar-villain,lackey,lackey-kyutoku-spin-BHNS,Read:2013zra,Hotokezaka:2016bzh}\footnote{The recent GW170817 detection already placed upper bounds on the tidal Love number with the NS mass of 1.4$M_\odot$~\cite{TheLIGOScientific:2017qsa}, although these are quite weak.}. These two independent observations define a region in the I-Love plane, whose size is determined by the uncertainty in the observations. If GR holds, then the GR I-Love relation will pass through this region, thus placing constraints on any deviation. Modified theories that predict parametric deviations in the GR I-Love curve can then be constrained, as is the case in scalar tensor theories~\cite{Doneva:2014faa,Pani:2014jra,Doneva:2016xmf}, $f(R)$ gravity~\cite{Doneva:2015hsa}, Einstein-dilaton Gauss-Bonnet gravity~\cite{Kleihaus:2014lba,Kleihaus:2016dui}, dynamical Chern-Simons (dCS) gravity~\cite{I-Love-Q-Science,I-Love-Q-PRD} and Eddington-inspired Born-Infeld gravity~\cite{Sham:2013cya}. Whether such I-Love tests are more constraining than other existing tests, using for example Solar System observations, depends of course on how stringent the other constraints are.
In this paper, we study the I-Love-Q universal relations in dCS gravity~\cite{jackiw:2003:cmo,Smith:2007jm,CSreview}, a parity violating effective field theory of gravity. The effective action in this theory consists of the Einstein-Hilbert term and a Pontryagin density (i.e.~the contraction of the Riemann tensor and its dual) coupled to a dynamical scalar field with a standard kinetic term. Such a modification to the action arises naturally in string theory~\cite{polchinskiVol1,polchinskiVol2}, in loop quantum gravity upon the promotion of the Barbero-Immirzi parameter to a field~\cite{taveras,calcagni}, and in effective field theories of inflation~\cite{weinberg-CS}. Stellar solutions in dCS gravity have of course been studied before: non-rotating solutions do not acquire any modifications, while slowly-rotating solutions to first order in rotation were found in~\cite{yunes-CSNS} and to second order in rotation in~\cite{Yagi:2013mbt}, following the Hartle-Thorne formalism~\cite{hartle1967,hartlethorne}.
We here extend previous results in several ways. First, we evaluate the quadrupole moment in dCS gravity following~\cite{Yagi:2013mbt}, and study the universality in the I-Q and Q-Love relations. Second, we investigate how the universality depends on the normalization of the coupling constant in dCS gravity. Third, we look at the eccentricity profile of isodensity contours inside a NS in dCS gravity to determine whether the elliptical isodensity explanation continues to hold in this theory. Lastly, we investigate the dependence of the projected bounds on dCS gravity using the I-Love relation on the measurement accuracy of the moment of inertia and the tidal Love number.
\begin{figure}[htb]
\centering
\includegraphics[width=9.5cm,clip=true]{IQ.pdf}
\caption{\label{fig:I-Q} (Color Online) (Top) Universal relation between the dimensionless moment of inertia $\bar I$ and quadrupole moment $\bar Q$ for various realistic EoSs in dCS gravity, using a dimensionless coupling constant normalized by the NS mass and set to $\bar \xi = 10^3$. We also show a global fit to all dCS I-Q curves (black solid), together with a global fit to all I-Q curves in GR~\cite{I-Love-Q-Science,I-Love-Q-PRD} (black dashed). Observe how the relation deviates from the GR one as one decreases $\bar Q$ (as one considers more massive and compact NSs). (Bottom) Fractional difference between any one I-Q curve in dCS gravity and the global fit, together with the fractional difference between two I-Q relations in GR and a global fit in GR using the Shen and ALF2 EoSs. Observe that the amount of universality in dCS gravity is similar to that in GR for large or small $\bar Q$, while the dCS universality is stronger when $\bar Q \sim 5$. }
\end{figure}
\subsection*{Executive Summary}
The first topic we investigate is the universality of the I-Love-Q relations. The top panel of Fig.~\ref{fig:I-Q} presents the I-Q relation in dCS gravity for various EoSs, where we adimensionalize the dCS coupling constant by the NS mass. Observe that the relation is still approximately EoS universal, with dCS deviations from the GR I-Q relation becoming more prominent for more compact (smaller $\bar Q$) stars. The bottom panel shows the fractional difference between any one I-Q curve in dCS and a global fit constructed from all dCS data sets, together with the EoS variation found in GR for two EoSs for comparison. Observe that the EoS variation in dCS gravity is similar to that in the GR case, for large and small $\bar Q$. Observe also that when $\bar Q \sim 5$, the EoS variation in dCS gravity is roughly one order of magnitude smaller than that in GR; this is because when $\bar Q$ is large (small), the EoS variation is dominated by the GR (dCS) contribution, while the dCS and GR contributions partially cancel each other when $\bar Q \sim 5$ for the specific value of the dimensionless coupling constant we chose in this example.
The second topic we investigate is how the degree of universality changes when one chooses different normalizations for the dCS coupling constant. As we showed in Fig.~\ref{fig:I-Q}, the degree of universality in dCS gravity is similar to (and in some cases even stronger than) the degree of universality in the GR case, when one adimensionalizes the coupling constant by the NS mass. We find, however, that if one adimensionalizes the coupling constant with either the curvature length of the system, or simply with a solar mass, the universality is lost. These results demonstrate that, even in dCS gravity, the existence of universality in the I-Love-Q relations depends sensitively on how one adimensionalizes the dimensional quantities of the problem. We expect this to remain true in other theories of gravity that depend on dimensional coupling constants, such as in Einstein-dilaton Gauss-Bonnet gravity.
The third topic we investigate is whether the self-similar elliptical isodensity explanation of the I-Love-Q universality persists in dCS gravity. We find that as one increases the dCS coupling, the eccentricity variation throughout the star becomes smaller \emph{and} the universality becomes stronger, thus supporting the self-similar elliptical isodensity explanation~\cite{Stein:2013ofa,Yagi:2014qua}. However, if one increases the dCS coupling too much, the EoS variation is dominated by the dCS contribution, and then, although the eccentricity variation continues to decrease, the universality does not necessarily improve. This suggests that the origin of the universality in modified theories of gravity may be more complicated than in GR, as additional non-GR contributions come into play.
With the universality of the I-Love-Q relations established and explored, we then investigate the dependence of I-Love constraints on dCS on the accuracy to which the moment of inertia and the Love number could be measured by binary pulsar and gravitational wave observations. Figure~\ref{fig:alpha_constraint} shows how the bounds on the dCS coupling constant varies with the measurement accuracy. Observe that the bounds are $\mathcal{O}(10^2)$km, namely six orders of magnitude stronger than current Solar System~\cite{alihaimoud-chen} and table top~\cite{kent-CSBH} experiments. This is because NS observations allow one to probe stronger gravitational field regimes, where the dCS corrections would be naturally larger than those present in the weak field regime. Observe also that these constraints only depend on these measurements being possible, with the strength of the constraint not strongly dependent on the accuracy to which these quantities are measured.
\begin{figure}[hbtp]
\begin{center}
\includegraphics[width=9.5cm,clip=true]{Contour.pdf}
\caption{\label{fig:alpha_constraint}~(Color Online) Contour plot of projected upper bounds on the dCS coupling constant $\sqrt{\alpha}$ in km as a function of the fractional measurement accuracy of the NS moment of inertia (horizontal axis) and tidal Love number (vertical axis). We assume that the former is measured from future radio observations of binary pulsars, while the latter is measured from future GW observations of NS binaries. Observe that the constraints are always six orders of magnitude stronger than current bounds ($\sqrt{\alpha} < \mathcal{O}(10^8)$km) from Solar System~\cite{alihaimoud-chen} and table-top~\cite{kent-CSBH} experiments, approximately independently of the accuracy to which these quantities can be measured.}
\end{center}
\end{figure}
The remainder of this paper presents the details of the results summarized above and it is organized as follows. Section~\ref{sec:review} briefly reviews dCS gravity and explains a few approximations that we adopt throughout the paper. Section~\ref{sec:matt-space-decomp} describes the perturbation scheme that we use to construct slowly-rotating NS solutions in the theory by extending the Hartle-Thorne formalism. Section~\ref{sec:I-L-Q} reviews the structure of the field equations and how we numerically obtain dCS corrections to physical observables, such as the moment of inertia, the quadrupole moment, the tidal Love number and the stellar eccentricity. We also explain how we adimensionalize the moment of inertia, tidal Love number and quadrupole moment and derive dCS corrections for such dimensionless quantities. We further comment on how to evaluate the stellar eccentricity. Section~\ref{sec:num-res} presents all of our numerical results. Section~\ref{sec:disc} provides a final summary of our work and describes possible avenues for future work.
\section{Review of Dynamical Chern-Simons gravity}
\label{sec:review}
In this section, we review dCS gravity and explain the approximations that we adopt throughout the paper. We mostly follow~\cite{Yagi:2013awa,Yagi:2013mbt}.
\subsection{Action and Field Equations}
\label{sec:action}
We begin by presenting the action in dCS gravity.
The modification to the Einstein Hilbert action is introduced by adding the Pontryagin density coupled to a dynamical scalar field $\vartheta$ to the Lagrangian, namely~\cite{CSreview}
\ba
S &\equiv & \int d^4x \sqrt{-g} \Big[ \kappa_g \mathcal{R} + \frac{\alpha}{4} \vartheta \, \mathcal{R}_{\nu\mu \rho \sigma} {}^* \mathcal{R}^{\mu\nu\rho\sigma} - \frac{\beta}{2} \nabla_\mu \vartheta \nabla^{\mu} \vartheta + \mathcal{L}_{{\mbox{\tiny mat}}} \Big]\,.
\label{action}
\ea
Here, $\mathcal{R}$ is the Ricci scalar, $\mathcal{L}_{{\mbox{\tiny mat}}}$ is the matter Lagrangian density, $g$ is the determinant of the metric, $\kappa_g \equiv 1/(16\pi)$, and $\alpha$ and $\beta$ are coupling constants of the theory. The dual of the Riemann tensor $^*\mathcal{R}^{\mu\nu\rho\sigma}$ is here defined by
\be
{}^* \mathcal{R}^{\mu\nu\rho\sigma} \equiv \frac{1}{2} \varepsilon^{\rho \sigma \alpha \beta} \mathcal{R}^{\mu\nu}{}_{\alpha \beta}\,,
\ee
where $\varepsilon^{\rho \sigma \alpha \beta}$ is the Levi-Civita tensor. For simplicity, we have neglected any potential for the scalar field. We take $\vartheta$ and $\beta$ to be dimensionless, which forces $\alpha$ to have dimensions of (length)$^{2}$~\cite{yunespretorius,kent-CSBH}. Solar System~\cite{alihaimoud-chen} and table-top~\cite{kent-CSBH} experiments place bounds on the characteristic length scale of the theory, $\xi_{\mbox{\tiny CS}}^{1/4} < \mathcal{O}(10^8)$ km~\cite{alihaimoud-chen,kent-CSBH} where
\be
\label{xi-def}
\xi_{\mbox{\tiny CS}} \equiv \frac{\alpha^2}{\beta \kappa_g}\,.
\ee
We next look at the field equations derived from the action in Eq.~\eqref{action}. The modified Einstein equations are given by
\be
G_{\mu\nu} + \frac{\alpha}{\kappa_g} C_{\mu\nu} =\frac{1}{2\kappa_g} (T_{\mu\nu}^\mathrm{mat} + T_{\mu\nu}^\vartheta)\,.
\label{field-eq}
\ee
Here, $G_{\mu\nu}$ is the Einstein tensor and $T_{\mu\nu}^\mathrm{mat}$ is the matter stress-energy tensor. The C-tensor and the scalar field stress-energy tensor in Eq.~\eqref{field-eq} are given by
\begin{align}
C^{\mu\nu} & \equiv (\nabla_\sigma \vartheta) \epsilon^{\sigma\delta\alpha(\mu} \nabla_\alpha \mathcal{R}^{\nu)}{}_\delta + (\nabla_\sigma \nabla_\delta \vartheta) {}^* \mathcal{R}^{\delta (\mu\nu) \sigma}\,, \\
\label{eq:Tab-theta}
T_{\mu\nu}^\vartheta & \equiv \beta (\nabla_\mu \vartheta) (\nabla_\nu \vartheta) -\frac{\beta}{2} g_{\mu\nu} \nabla_\delta \vartheta \nabla^\delta \vartheta\,.
\end{align}
The dynamical scalar field $\vartheta$ satisfies the evolution equation
\be
\square \vartheta = -\frac{\alpha}{4 \beta} \mathcal{R}_{\nu\mu \rho \sigma} {}^*\mathcal{R}^{\mu\nu\rho\sigma}\,.
\label{scalar-wave-eq}
\ee
Taking the divergence of the field equation~\eqref{field-eq}, and using the Bianchi identity and Eq.~\eqref{scalar-wave-eq}, we find that the matter stress-energy tensor obeys the same conservation law as in GR:
\be
\nabla_{\nu} T_{\mbox{\tiny mat}}^{\mu\nu}=0\,.
\label{mat-cons}
\ee
\subsection{Slow-rotation and Small-coupling Approximation}
Throughout this paper, we work within the slow-rotation approximation. We assume that the dimensionless spin parameter of a NS satisfies
\be
\chi \equiv \frac{J}{M_*^{2}} \ll 1\,,
\ee
where $M_*$ is the NS mass and $J \equiv |\vec{J}|$ is the magnitude of the spin angular momentum.
This is an excellent approximation for old NSs, such as binary pulsars and NSs in a binary that are about to coalesce, one of the main sources of GWs for ground-based detectors. We estimate NS quantities in dCS gravity within this slow-rotation approximation, keeping terms up to ${\cal{O}}(\chi^{2})$.
Throughout this paper we also work in the small-coupling approximation. For isolated NSs, this approximation requires that the dimensionless coupling
\be
\label{zeta-def}
\zeta \equiv \frac{\xi_{\mbox{\tiny CS}} M_*^2}{R_*^6} \ll 1\,,
\ee
where $R_*$ and $M_*$ are the NS radius and mass, $\sqrt{R_*^3/M_*}$ is the curvature length scale of the star, and $\xi_{\mbox{\tiny CS}}$ is defined in Eq.~\eqref{xi-def}. This requirement ensures that the dCS modification term (second term) in the action in Eq.~\eqref{action} is always much smaller than the Einstein-Hilbert term. This, in turn, allows us to treat dCS gravity as an effective theory, so that we can safely neglect possible higher-order terms in the action, while allowing the theory to easily reduce to GR in the low energy limit. When calculating NS quantities in dCS gravity we will work in the small-coupling approximation to ${\cal{O}}(\zeta)$.
One may wonder whether the small-coupling approximation is consistent with the current constraint on dCS gravity. Normalizing the small-coupling condition to neutron stars, one can rewrite Eq.~\eqref{zeta-def} as
\begin{align}
\xi_{\mbox{\tiny CS}}^{1/4} &\ll 25 \; {\textrm{km}} \; \left(\frac{M_*}{1.4 M_\odot}\right) \left(\frac{0.18}{C}\right)^{3/2}\,,
\end{align}
where $C \equiv M_*/R_*$ is the stellar compactness. Observe that this requirement is much more stringent than the current bounds from Solar System~\cite{alihaimoud-chen} and table-top~\cite{kent-CSBH} experiments mentioned earlier, $\xi_{\mbox{\tiny CS}}^{1/4} \leq \mathcal{O}(10^8)$ km. Therefore, imposing the small-coupling approximation is clearly not in conflict with current bounds on dCS gravity.
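As a quick numerical check of this scaling, note that $\zeta \ll 1$ is equivalent to $\xi_{\mbox{\tiny CS}}^{1/4} \ll M_*/C^{3/2}$ in geometric units. The sketch below assumes $GM_\odot/c^2 \simeq 1.4766$ km and gives $\approx 27$ km for the fiducial values, consistent with the quoted $\sim 25$ km up to the precise constants used.

```python
M_SUN_KM = 1.4766  # GM_sun/c^2 in km (assumed constant)

def small_coupling_bound_km(mass_msun, compactness):
    """Right-hand side of the small-coupling condition
    xi_CS^{1/4} << M_* / C^{3/2}, expressed in km."""
    return mass_msun * M_SUN_KM / compactness**1.5

bound = small_coupling_bound_km(1.4, 0.18)  # ~27 km with these constants
```

The bound scales linearly with the stellar mass and as $C^{-3/2}$ with compactness, as in the displayed equation.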
Given these approximations, all solutions we obtain will be \emph{bivariate expansions}, i.e.~expansions in two independent small parameters, $\chi$ and $\zeta$. Therefore, it will be natural to first decompose any quantity $A$ as follows
\be
A = A^{\mbox{\tiny GR}} + \alpha'{}^2\;A^{\mbox{\tiny CS}}\,,
\ee
and then to further decompose the GR and dCS pieces as a sum over $\chi$ to ${\cal{O}}(\chi^{2})$, where $\alpha'$ is a book-keeping parameter that labels the half-order in the small-coupling approximation, i.e.~${\cal{O}}(\zeta) = {\cal{O}}(\alpha'^{2})$. Composing both expansions, we can typically write any quantity $A$ as
\begin{align}
\label{eq:decomp}
A = \sum_{k=0}^{k=2} \sum_{\ell=0}^{\ell=2} \chi'^{k} \alpha'^{\ell} A_{(k,\ell)}\,,
\end{align}
where $\chi'$ is another book-keeping parameter that labels the order in the slow-rotation approximation.
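As a bookkeeping illustration of Eq.~\eqref{eq:decomp}, one can store the coefficients $A_{(k,\ell)}$ by their $(k,\ell)$ labels and compose the two expansions. The coefficients below are toy numbers, not from any stellar solution, and only a few orders are populated.

```python
# Compose the bivariate expansion A = sum_{k,l} chi'^k alpha'^l A_{(k,l)}.
def evaluate_expansion(coeffs, chi, alpha):
    """coeffs maps (k, l) -> A_{(k,l)}; pure bookkeeping with toy numbers."""
    return sum(c * chi**k * alpha**l for (k, l), c in coeffs.items())

# Toy coefficients at the orders kept in the text: (0,0), (1,0), (1,2), (2,0), (2,2).
coeffs = {(0, 0): 1.0, (1, 0): 0.3, (1, 2): 0.05, (2, 0): 0.1, (2, 2): 0.02}
A = evaluate_expansion(coeffs, chi=0.1, alpha=0.2)
```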
\section{Ansatz for Slowly-Rotating Neutron Stars}
\label{sec:matt-space-decomp}
In this section we first describe the metric ansatz and the stress-energy tensor for matter that we use to construct slowly-rotating NS solutions in dCS gravity.
\subsection{Metric Ansatz and Bivariate Expansion}
We begin by considering the following ansatz for the metric, which corresponds to a generic stationary and axisymmetric metric in the Hartle-Thorne coordinates~\cite{hartle1967}:
\begin{align}
ds^2 &= -e^{\bar{\nu}(r)} \left[1+2 \bar{h}(r,\theta) \right] dt^2 + e^{\bar{\lambda}(r)} \left[ 1+\frac{2\bar{m}(r,\theta)}{r-2\bar{M}(r)} \right] dr^2 \nonumber \\
& + r^2 \left[ 1+2\bar{k}(r,\theta) \right] \left\{ d\theta^2 + \sin^2 \theta \left[ d\phi - \bar{\omega}(r,\theta) dt \right]^2 \right\}\,.
\label{metric-ansatz-rth}
\end{align}
Here $\bar{\nu}$ and $\bar{\lambda}$ are functions of $r$ only that represent spin-independent contributions, while $\bar{h}$, $\bar{k}$, $\bar{m}$ and $\bar{\omega}$ are functions of $(r,\theta)$ that correspond to spin corrections. The quantity
\be
\bar{M}(r) \equiv \frac{\left( 1-e^{-\bar{\lambda}(r)} \right)r}{2}
\ee
measures the enclosed mass within the radius $r$.
Since we are treating rotation perturbatively, we need to be careful with the coordinate system we work in~\cite{hartle1967}. When considering rotating stars in polar coordinates ($r$,$\theta$), the fractional change in quantities like the energy density $\bar \rho$ and pressure $\bar p$ does not remain small near the surface. Thus, we adopt new coordinates ($R,\Theta$), first proposed by Hartle~\cite{hartle1967}, such that the energy density in the new radial coordinate $\rho(R)$ in the rotating configuration is the same as that in the non-rotating configuration:
\be
\bar \rho \left[ r(R,\Theta ), \Theta \right] = \rho (R) = \rho^\mathrm{(non-rot)}(R), \quad \Theta = \theta\,.
\ee
Since the energy density and pressure are connected via an EoS, it follows that the pressure also does not acquire any spin corrections in the new coordinate system:
\be
\rho(R) = \rho^\mathrm{(non-rot)}(R) \; \rightarrow \; p(R)=p^\mathrm{(non-rot)}(R)\,.
\ee
Moreover, since we treat rotation perturbatively and the changes in the density and pressure enter at ${\cal{O}}(\chi^{2})$, we can introduce the new coordinates perturbatively via
\be
r(R,\Theta) = R + \xi(R,\Theta)\,,
\ee
where $\xi = {\cal{O}}(\chi^{2})$.
In the new coordinates, the line element becomes
\ba
ds^2 &=& - \left[\left( 1+2h+\xi \frac{d \nu}{dR} \right)e^\nu - R^2 \omega^2 \sin^2 \Theta \right]dt^2 \nonumber \\
& & -2 R^2 \omega \sin^2 \Theta dt d\phi + \left[ R^2 (1+2k) + 2 R \xi \right] \sin^2\Theta d\phi^2 \nonumber \\
& & + e^\lambda \left(1 +\frac{2m}{R-2M} + \xi \frac{d\lambda}{dR} + 2 \frac{\partial \xi}{\partial R} \right) dR^2 \nonumber \\
& & + 2e^\lambda \frac{\partial \xi}{\partial \Theta} dRd\Theta + \left[ R^2 (1+2k) + 2R\xi \right]d\Theta^2\,,
\label{metric-ansatz-RTh}
\ea
where we have retained terms only up to ${\cal{O}}(\chi^{2})$ and we have defined
\be
M(R) \equiv \frac{\left( 1-e^{-\lambda (R)} \right)R}{2}\,,
\label{M-lambda}
\ee
with the stellar mass $M_* = M(R_*)$. Bivariately expanding the above line element, we then find that all metric functions and the coordinate function $\xi$ can be expanded as in Eq.~\eqref{eq:decomp}:
\begin{align}
\nu (R) &= \nu_{(0,0)} (R)\,, \\
\lambda (R) &= \lambda_{(0,0)} (R)\,, \\
\omega (R,\Theta) &= \chi' \omega_{(1,0)} (R,\Theta)+\alpha'^2 \chi' \omega_{(1,2)} (R,\Theta) + \mathcal{O}(\chi'^3) \,, \\
A (R,\Theta) &= \chi'^2 A_{(2,0)}(R,\Theta)+ \alpha'^2 \chi'^2 A_{(2,2)} (R,\Theta)+ \mathcal{O}(\chi'^4) \,,
\end{align}
with $A = (h,k,m,\xi)$.
Let us make several observations about the above decomposition. First, the metric functions $\nu$ and $\lambda$ do not contain dCS corrections because non-rotating NS solutions in dCS gravity are the same as those in GR due to parity protection~\cite{jackiw:2003:cmo,Yunes:2007ss,CSreview}. Second, also because of parity considerations, only the $(t,\phi)$ component of the metric contains terms odd in $\chi'$. Third, terms of $\mathcal{O}(\alpha')$ do not appear in the metric because of the structure of the field equations, which force the scalar field $\vartheta$ to be proportional to $\alpha$ in the small-coupling approximation~\cite{Yagi:2013mbt}. The above decomposition is therefore the minimal line element for a slowly-rotating star in dCS gravity within the small-coupling approximation.
\subsection{Matter Stress-energy Tensor and EoS}
We assume we can describe the matter inside the NS as a perfect fluid, so that its stress-energy tensor is given by
\be
T_{\mu\nu}^{\mbox{\tiny mat}} = (\rho + p ) u_\mu u_\nu + p \; g_{\mu\nu}\,,
\ee
where $u^\mu$ is the fluid's four-velocity given by
\be
u^\mu = (u^0, 0,0,\Omega_* u^0)\,.
\ee
The quantity $\Omega_*$ is the \emph{constant} angular velocity of the fluid, and thus of the entire NS, since we model stars in uniform rotation. The normalization condition of the four-velocity $u_\mu u^\mu = -1$ determines $u^0$ in terms of the metric functions~\cite{Yagi:2013mbt}.
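The algebra behind that last statement is simple enough to sketch: with $u^\mu = u^0(1,0,0,\Omega_*)$, the normalization gives $u^0 = [-(g_{tt} + 2\Omega_* g_{t\phi} + \Omega_*^2 g_{\phi\phi})]^{-1/2}$. The metric values below are illustrative numbers, not from an actual stellar solution.

```python
import math

def u0_from_normalization(g_tt, g_tphi, g_phiphi, omega):
    """u^0 for u^mu = u^0 (1, 0, 0, Omega_*) subject to u_mu u^mu = -1."""
    return 1.0 / math.sqrt(-(g_tt + 2.0 * omega * g_tphi + omega**2 * g_phiphi))

# Illustrative slowly-rotating exterior values (toy numbers):
g_tt, g_tphi, g_phiphi, omega = -0.7, -0.01, 100.0, 0.02
u0 = u0_from_normalization(g_tt, g_tphi, g_phiphi, omega)

# Check: the norm u_mu u^mu reassembles to -1 by construction.
norm = g_tt * u0**2 + 2.0 * g_tphi * u0 * (omega * u0) + g_phiphi * (omega * u0)**2
```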
To close the system of equations, one needs to specify the EoS that relates $p$ and $\rho$. In this paper, we consider six different EoSs: APR~\cite{Akmal:1998cf}, SLy~\cite{Douchin:2001sv}, Lattimer-Swesty with nuclear incompressibility of 220 MeV (LS220)~\cite{LATTIMER1991331,OConnor:2009iuz}, Shen~\cite{Shen:1998gq,Shen:1998by,OConnor:2009iuz}, WFF1~\cite{Wiringa:1988tp} and ALF2~\cite{Alford:2004pf}. For the LS220 and Shen EoSs, we impose neutrino-less, beta-equilibrium conditions. The APR, SLy and WFF1 EoSs are relatively soft, while LS220 is intermediate and Shen is a stiff EoS. The ALF2 EoS is a hybrid between APR nuclear matter and color-flavor-locked quark matter.
\section{Constructing Neutron Star Solutions}
\label{sec:I-L-Q}
In this section, we first review the structure of the field equations at each order in the dimensionless spin parameter. We then explain how we make each of the I-Love-Q quantities dimensionless following~\cite{I-Love-Q-Science,I-Love-Q-PRD} and derive dCS corrections to these quantities. We also derive the dCS correction to the stellar eccentricity in terms of the metric perturbation.
\subsection{Structure of the Field Equations}
\label{sec:structure}
The field equations are first expanded order by order in both $\chi'$ and $\alpha'$ and then solved numerically also order by order. At any given order, the equations are first solved numerically in the stellar interior, imposing regularity at the center, and then analytically in the exterior region, imposing asymptotic flatness at spatial infinity. One then matches the two solutions at the stellar surface, the radial location where the pressure vanishes, by requiring continuity and smoothness of all metric functions. From these metric functions, we then construct NS observables, such as the mass, moment of inertia and quadrupole moment. Below, we present a brief summary of the structure of the field equations at each order in spin; the full expressions can be found in~\cite{Yagi:2013mbt}.
At zeroth order in spin, there is no dCS correction. The field equations reduce to the Einstein equations for a non-rotating configuration. Using the (t,t) and (R,R) components of the equations, together with the conservation of matter stress-energy tensor from Eq.~\eqref{mat-cons}, one finds the Tolman-Oppenheimer-Volkoff (TOV) equation. The stellar mass $M_*$ appears as an integration constant in the exterior solution.
At first order in spin, the only non-vanishing part of the modified Einstein equations is the $(t,\phi)$ component, which can be decomposed into a GR and dCS part of $\mathcal{O}(\alpha'{^2})$. The dCS part depends on the scalar field, whose evolution equation forces it to be of $\mathcal{O}(\alpha')$. One can Legendre decompose both of these equations such that the set of partial differential equations becomes a set of ordinary differential equations. Boundary conditions at the stellar center and at spatial infinity make the $\ell=1$ mode the only non-vanishing piece of the equations. The magnitude of the angular momentum $J$ can be read off from the asymptotic behavior of the $g_{t\phi}$ metric component, and the moment of inertia is then found by
\be
I \equiv \frac{J}{\Omega_*}\,.
\ee
At second order in spin, one can again apply a Legendre decomposition to find that both the $\ell = 0$ and $\ell = 2$ modes contribute in the modified Einstein equations. Both modes have a GR and dCS contribution, with the latter again entering at $\mathcal{O}(\alpha'{^2})$. The $\ell=0$ mode gives a spin correction to the mass, which is irrelevant in this paper. We thus focus on the $\ell =2$ mode, which allows us to calculate the quadrupole moment $Q$ from the integration constants in the exterior solution. In particular, the gravitational potential $U$, which can be read off from the $g_{tt}$ component, has the asymptotic form\footnote{This definition of $Q$ is different from the one used in~\cite{Yagi:2013mbt} by a factor of 2. We choose to use this definition here so that the GR contribution matches the Geroch-Hansen multipole moments~\cite{geroch,hansen} and the moments used by Hartle and Thorne~\cite{hartlethorne}.}
\be
\label{eq:UA_CS}
U^{{\mbox{\tiny CS}}} = -\frac{3}{2} \frac{Q}{R^{3}} \hat{S}^{i}\hat{S}^{j} n^{\langle ij\rangle} + \mathcal{O}\left( \frac{M_*^4}{R^4} \right)\,,
\ee
where $\hat{S}^{i}$ is the unit spin angular momentum vector of a NS and $n^i$ is the unit vector that points to a field point.
The above solutions automatically lead to the mass and radius of the star $M_{*}$ and $R_{*}$, the moment of inertia $I$ and the quadrupole moment $Q$, but other important quantities can also be computed, such as the electric-type tidal Love number $\lambda^\mathrm{(tid)}$. This quantity characterizes the quadrupolar tidal deformability of a star and is defined as~\cite{hinderer-love}
\be
Q^\mathrm{(tid)} = - \lambda^\mathrm{(tid)} \, \mathcal{E}^\mathrm{(tid)}\,,
\ee
where $Q^\mathrm{(tid)}$ is the \emph{tidally-induced} quadrupole moment (not to be confused with the spin-induced quadrupole moment $Q$), while $\mathcal{E}^\mathrm{(tid)}$ is the external tidal field strength. One can treat tidal perturbations in a similar manner to perturbations due to spin. The main difference is that one must set $\omega=0$ (i.e. no parity-odd perturbation) in Eq.~\eqref{metric-ansatz-RTh} and not impose asymptotic flatness when solving the equations. Then, $Q^\mathrm{(tid)}$ and $\mathcal{E}^\mathrm{(tid)}$ can be read off from the coefficients of the $P_2(\Theta)/R^3$ and $P_2(\Theta) R^2$ parts of the metric perturbation $h$ when asymptotically expanded away from the NS surface, with $P_2$ the $\ell=2$ Legendre polynomial. Since dCS gravity modifies the odd-parity sector, $\lambda^\mathrm{(tid)}$ does not acquire any dCS correction~\cite{Yagi:2011xp}.
\subsection{Dimensionless I-Love-Q and Eccentricity Profiles}
We next explain how we adimensionalize physical quantities.
Let us first focus on the moment of inertia. We make this quantity dimensionless via
\begin{align}
\label{eq:Ibar}
\bar{I} \equiv \frac{I}{M_*^3} &= \frac{I_{\mbox{\tiny GR}}+ \alpha'^2 I_{\mbox{\tiny CS}}}{M_{\mbox{\tiny GR}}^3}\,,
\end{align}
which we expand as
\begin{align}
\bar{I} = \bar{I}_{\mbox{\tiny GR}} + \alpha'^2 \bar{I}_{\mbox{\tiny CS}}\,,
\end{align}
where clearly
\begin{equation}
\bar{I}_{\mbox{\tiny GR}} \equiv \frac{I_{\mbox{\tiny GR}}}{M_{\mbox{\tiny GR}}^3}\,, \quad
\label{eq:IGR}
\bar{I}_{\mbox{\tiny CS}} \equiv \frac{I_{\mbox{\tiny CS}}}{M_{\mbox{\tiny GR}}^3}\,.
\end{equation}
We recall that the NS mass does not acquire dCS corrections to leading order in spin.
Next, we look at the dCS correction to the dimensionless quadrupole moment. The latter is defined by
\begin{align}
\label{eq:Qbar}
\bar{Q} \equiv - \frac{Q}{M_*^3 \chi^2} &= -\frac{Q_{\mbox{\tiny GR}}+\alpha'^2 Q_{\mbox{\tiny CS}}}{M_{\mbox{\tiny GR}}^3(\chi_{\mbox{\tiny GR}}+\alpha'^2 \chi_{\mbox{\tiny CS}})^2}\,,
\end{align}
which we expand as
\begin{align}
\bar{Q} = \bar{Q}_{\mbox{\tiny GR}} + \alpha'^2 \bar{Q}_{\mbox{\tiny CS}} + {\cal O}(\alpha'^4),
\end{align}
where we have defined
\begin{equation}
\bar{Q}_{\mbox{\tiny GR}} \equiv -\frac{Q_{\mbox{\tiny GR}}}{M_{\mbox{\tiny GR}}^3 \chi_{\mbox{\tiny GR}}^2}\,, \quad
\label{eq:QbarCS}
\bar{Q}_{{\mbox{\tiny CS}}} \equiv -\frac{ Q_{{\mbox{\tiny CS}}}}{M_{{\mbox{\tiny GR}}}^3 \chi_{{\mbox{\tiny GR}}}^2} - 2 \bar{Q}_{{\mbox{\tiny GR}}}\frac{I_{{\mbox{\tiny CS}}}}{I_{{\mbox{\tiny GR}}}}.
\end{equation}
Here we used the relation $\chi_{\mbox{\tiny CS}}/\chi_{\mbox{\tiny GR}} = I_{\mbox{\tiny CS}}/I_{\mbox{\tiny GR}}$. The second term in $\bar{Q}_{{\mbox{\tiny CS}}}$ above typically dominates the first term in the stellar solutions we consider in this paper.
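One can sanity-check this bivariate bookkeeping numerically: Taylor-expanding the exact ratio $-(Q_{\mbox{\tiny GR}}+\alpha'^2 Q_{\mbox{\tiny CS}})/[M^3(\chi_{\mbox{\tiny GR}}+\alpha'^2\chi_{\mbox{\tiny CS}})^2]$ to first order in $\alpha'^2$, the remainder must scale as $\mathcal{O}(\alpha'^4)$. The sketch below uses illustrative numbers, not values from an actual stellar solution; note that the second piece of the linear coefficient picks up a relative minus sign from expanding the inverse square of $\chi$.

```python
# Illustrative check of the O(alpha'^2) expansion of Qbar (toy numbers).
Q_GR, Q_CS, M, chi_GR = -2.0, 0.5, 1.0, 0.1
I_GR, I_CS = 10.0, 1.0
chi_CS = chi_GR * I_CS / I_GR          # uses chi_CS/chi_GR = I_CS/I_GR

def qbar_exact(a2):                    # a2 plays the role of alpha'^2
    return -(Q_GR + a2 * Q_CS) / (M**3 * (chi_GR + a2 * chi_CS)**2)

qbar_gr = -Q_GR / (M**3 * chi_GR**2)
# Linear coefficient from Taylor-expanding qbar_exact about a2 = 0:
qbar_cs = -Q_CS / (M**3 * chi_GR**2) - 2.0 * qbar_gr * (I_CS / I_GR)

def remainder(a2):
    return qbar_exact(a2) - (qbar_gr + a2 * qbar_cs)

# Halving a2 should shrink the remainder by ~4, i.e. remainder = O(a2^2).
ratio = remainder(1e-2) / remainder(5e-3)
```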
Let us finally focus on the Love number. We make this quantity dimensionless via
\begin{align}
\label{eq:lovebar}
\bar{\lambda}^\mathrm{(tid)} \equiv \frac{\lambda^\mathrm{(tid)}}{M_*^{5}} = \frac{\lambda^\mathrm{(tid)}_{{\mbox{\tiny GR}}}}{M_{{\mbox{\tiny GR}}}^{5}}\,,
\end{align}
where the last equality follows from the fact that the electric-type tidal deformability is not modified in dCS gravity as already explained.
Before presenting numerical results, let us also discuss the eccentricity contours inside the star at a constant radius. Following~\cite{hartlethorne}, the eccentricity can be defined via
\begin{align}
e(R)=\sqrt{-3\left(k_2(R) + \frac{\xi_2(R)}{R}\right)}\,,
\end{align}
which we expand as
\be
e = e_{\mbox{\tiny GR}} + \alpha'{}^2 e_{\mbox{\tiny CS}} + {\cal O}(\alpha'^4)\,,
\ee
where we have defined
\begin{align}
e_{{\mbox{\tiny GR}}}(R)&=\sqrt{-3\left(k_2^{{\mbox{\tiny GR}}}(R) + \frac{\xi_2^{{\mbox{\tiny GR}}}(R)}{R}\right)}\,,
\\
e_{{\mbox{\tiny CS}}}(R) &=- \frac{1}{2}\sqrt{\frac{-3}{k_2^{{\mbox{\tiny GR}}}(R) + \frac{\xi_2^{{\mbox{\tiny GR}}}(R)}{R}}}\left(k_2^{{\mbox{\tiny CS}}}(R)+ \frac{\xi_2^{{\mbox{\tiny CS}}}(R)}{R}\right)\,.
\end{align}
Here $k_2^{{\mbox{\tiny GR}}}$ and $\xi_2^{{\mbox{\tiny GR}}}$ ($k_2^{{\mbox{\tiny CS}}}$ and $\xi_2^{{\mbox{\tiny CS}}}$) are the coefficients of the $\ell = 2$ modes of $k_{(2,0)}$ and $\xi_{(2,0)}$ ($k_{(2,2)}$ and $\xi_{(2,2)}$) in a Legendre decomposition. Adimensionalizing this quantity, one further finds
\begin{align}
\frac{e(R)}{\chi} &= \frac{e_{{\mbox{\tiny GR}}}+\alpha'^2 e_{{\mbox{\tiny CS}}}}{\chi_{{\mbox{\tiny GR}}}+\alpha'^2 \chi_{{\mbox{\tiny CS}}}} \nonumber \\
&= \frac{e_{{\mbox{\tiny GR}}}}{\chi_{{\mbox{\tiny GR}}}} +\alpha'^2 \left( \frac{ e_{{\mbox{\tiny CS}}}}{\chi_{{\mbox{\tiny GR}}}} - \frac{e_{{\mbox{\tiny GR}}}}{\chi_{{\mbox{\tiny GR}}}} \frac{I_{{\mbox{\tiny CS}}}}{I_{{\mbox{\tiny GR}}}} \right) + {\cal O}(\alpha'^4)\,.
\end{align}
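The same kind of numerical check applies here: since $e/\chi$ involves only the first power of $\chi$ in the denominator, the remainder after the linear term should again be $\mathcal{O}(\alpha'^4)$. The numbers below are illustrative only.

```python
# Toy check of the expansion of e/chi to O(alpha'^2).
e_GR, e_CS, chi_GR, I_ratio = 0.3, -0.02, 0.1, 0.1  # I_ratio = I_CS/I_GR
chi_CS = chi_GR * I_ratio

def exact(a2):                         # a2 plays the role of alpha'^2
    return (e_GR + a2 * e_CS) / (chi_GR + a2 * chi_CS)

def expanded(a2):
    return e_GR / chi_GR + a2 * (e_CS / chi_GR - (e_GR / chi_GR) * I_ratio)

# Halving a2 shrinks the remainder by ~4: the truncation error is O(a2^2).
ratio = (exact(1e-2) - expanded(1e-2)) / (exact(5e-3) - expanded(5e-3))
```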
\section{Numerical Results}
\label{sec:num-res}
In this section we present numerical results obtained by solving the field equations order by order, and calculating the dimensionless quantities defined in the previous section. We conclude this section with a discussion of projected bounds we will be able to place on the theory given future observations of the moment of inertia and the tidal deformability.
\subsection{I-Love-Q relations}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=7.5cm,clip=true]{ILoveCS.pdf} \quad
\includegraphics[width=7.5cm,clip=true]{QLoveCS.pdf}
\caption{\label{fig:IQ-Love-CS-correction-only} (Color Online) (Top) dCS corrections to the I-Love and Q-Love relations, normalized to $\bar \xi$, for various EoSs and using the global fit of Eq.~\eqref{fitting-func}. The top axis shows the corresponding NS mass for a given $\bar \lambda^\mathrm{(tid)}$ assuming the APR EoS in GR. (Bottom) Relative fractional difference between the data and the global fit, which represents the EoS variation in the relations. Observe that the universality holds to better than $5\%$.
}
\end{center}
\end{figure*}
We begin by studying dCS corrections to the I-Love and Q-Love relations, which we present in the top panels of Fig.~\ref{fig:IQ-Love-CS-correction-only}. We recall that $\bar \lambda^\mathrm{(tid)}$ does not acquire any dCS corrections, and hence corrections to the I-Love-Q relations originate from changes to $\bar I$ and $\bar Q$. In Fig.~\ref{fig:IQ-Love-CS-correction-only}, we normalize $\bar I_{\mbox{\tiny CS}}$ and $\bar Q_{\mbox{\tiny CS}}$ with
\be
\bar \xi \equiv \frac{\xi_{\mbox{\tiny CS}}}{M_*^4}\,,
\ee
so that the dependence on the dCS coupling constant is factored out from corrections to the I-Love-Q relations in this figure. Observe that the dCS correction to the relations remains universal with respect to various EoSs. To estimate the degree to which these relations remain universal, we construct a global fit (shown by the black solid curves in the figure) for all the numerical data points in the form
\be
\label{fitting-func}
y_i = \exp \left[a_i + b_i(\log x_i) +c_i(\log x_i)^{2} \right]\,,
\ee
where the coefficients are summarized in Table~\ref{Fitting-params}. The bottom panel of Fig.~\ref{fig:IQ-Love-CS-correction-only} shows the fractional difference between each numerical data point and the fit, which corresponds to the EoS variation in these relations. Observe that the relations are universal to within $5\%$.
\begin{table}
\begin{center}
\begin{tabular}{ m{1.5cm} m{1.5cm} m{1.5cm} m{1.5cm} m{1.5cm}}
\hline \hline
$y_i$ & $x_i$ & $a_i$ & $b_i$ & $c_i$ \\
\hline
$\bar{I}_{\mbox{\tiny CS}}/\bar \xi$ & $\bar{\lambda}^\mathrm{(tid)}$ & -2.943 & -0.718 & -0.011 \\
$\bar{Q}_{\mbox{\tiny CS}}/\bar \xi$ & $\bar{\lambda}^\mathrm{(tid)}$ & -3.443 & -0.550 & -0.023\\
\hline \hline
\end{tabular}
\caption{\label{Fitting-params} Estimated numerical coefficients for the fitting formula in Eq.~\eqref{fitting-func} for dCS corrections to the I-Love and Q-Love relations.}
\end{center}
\end{table}
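The fit is straightforward to evaluate. The sketch below assumes a natural logarithm in Eq.~\eqref{fitting-func} (the base is not stated explicitly, so the absolute numbers should be treated with care); the qualitative behavior — positive, monotonically decreasing corrections for $\bar{\lambda}^\mathrm{(tid)} > 1$ — is base-independent since $b_i, c_i < 0$.

```python
import math

def fit(x, a, b, c):
    """y = exp[a + b log x + c (log x)^2]; natural log assumed here."""
    lx = math.log(x)
    return math.exp(a + b * lx + c * lx**2)

def ibar_cs_over_xibar(lam):   # I-Love row of the coefficient table
    return fit(lam, -2.943, -0.718, -0.011)

def qbar_cs_over_xibar(lam):   # Q-Love row of the coefficient table
    return fit(lam, -3.443, -0.550, -0.023)
```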
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=7.5cm,clip=true]{ILove_zeta_xiCS_xi.pdf} \quad
\includegraphics[width=7.5cm,clip=true]{QLove_zeta_xiCS_xi.pdf}
\caption{\label{fig:fixed_zeta_xiCS_xi} (Color Online) dCS corrections to the I-Love (left) and Q-Love (right) relations with $\zeta = 0.1$ (top), $\xi_{{\mbox{\tiny CS}}} = 11.57 \times 10^{3}$ km$^4$ (middle) and $\bar{\xi} = 0.16 \times 10^{3}$ (bottom). Observe that the relations remain universal only if one fixes $\bar \xi$.
}
\end{center}
\end{figure*}
Figure~\ref{fig:IQ-Love-CS-correction-only} corresponds to fixing the value of $\bar \xi$ (to unity), but how does the universality change if one normalizes the I-Love-Q relations differently? To address this question, Fig.~\ref{fig:fixed_zeta_xiCS_xi} presents the dCS corrections to the I-Love and Q-Love relations fixing $\zeta$ (top), $\xi_{\mbox{\tiny CS}}$ (middle) and $\bar \xi$ (bottom), where the first two quantities are given in Eqs.~\eqref{zeta-def} and~\eqref{xi-def} respectively. We choose $\zeta=0.1$ to ensure the validity of the small-coupling approximation, and thus we choose values for $\xi_{\mbox{\tiny CS}}$ and $\bar \xi$ that correspond to $\zeta=0.1$ for a NS with mass $2M_\odot$ and radius $10$ km. Observe that the universality is lost when the dCS correction is normalized by $\zeta$ or $\xi_{\mbox{\tiny CS}}$, while it is recovered if one fixes $\bar \xi$. This clearly shows that whether the relations remain universal in modified theories of gravity depends crucially on how one normalizes observables with respect to the coupling constants of the theory.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=7.5cm,clip=true]{ILove.pdf} \quad
\includegraphics[width=7.5cm,clip=true]{QLove.pdf}
\caption{\label{fig:IL_QL} (Color Online) (Top) I-Love (left) and Q-Love (right) relations for various EoSs in dCS gravity with $\bar{\xi}=10^3$. The black dashed curve in each panel represents the best-fit relation in GR. (Bottom) Fractional difference in each relation with respect to a global fit. For reference, we also present the fractional difference for the Shen and ALF2 EoSs in GR. Observe that the EoS variation in dCS gravity is comparable to that in GR for the I-Love relation, while the former is larger than the latter in the Q-Love relation for relativistic NSs with relatively small $\bar \lambda^\mathrm{(tid)}$.}
\end{center}
\end{figure*}
Having studied the dCS corrections to the I-Love-Q relations and which parameter one should fix to retain universality, we now study the full I-Love-Q relations in dCS gravity, including the GR contribution. Figures~\ref{fig:I-Q} and~\ref{fig:IL_QL} present such relations with $\bar \xi = 10^3$, together with the fractional difference from the global fit. Observe that the EoS variation is of $\mathcal{O}(0.1\%)$ for the I-Love and I-Q relations, and of $\mathcal{O}(1\%)$ for the Q-Love relation. We can compare this EoS variability with that present in GR, represented here through the fractional difference between the Shen and ALF2 EoSs relations in GR. Observe that in general, the EoS variation in dCS gravity is comparable to that in GR for the I-Love and I-Q relations, while the former is larger than the latter for the Q-Love relation, in particular for relativistic stars with smaller $\bar \lambda^\mathrm{(tid)}$. Observe also that there is a certain parameter range for each relation in dCS gravity where the EoS variation is highly suppressed (e.g. $\bar \lambda^\mathrm{(tid)} \sim 300$ for the I-Love relation). This is because the GR (dCS) contribution in the EoS variation dominates for large (small) $\lambda^\mathrm{(tid)}$, while these two contributions partially cancel in the intermediate region.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=7.5cm,clip=true]{Eccentricity.pdf} \quad
\includegraphics[width=7.5cm,clip=true]{IQEccen_diff_xi_Shen.pdf}
\caption{\label{fig:Eccentricity} (Color Online) (Left) Stellar eccentricity profile with $\bar{Q}=3.5$ (top) and $\bar{Q}=8.5$ (bottom) for various EoSs in GR and dCS gravity with $\bar \xi = 10^3$. (Top Right) Stellar eccentricity profile for the Shen EoS with $\bar Q = 3.5$ and various $\bar \xi$. Observe that the eccentricity variation throughout the star becomes smaller as one increases $\bar \xi$. (Bottom Right) Fractional difference from the fit in the I-Q relation for the Shen EoS and various $\bar \xi$. The vertical dashed line corresponds to $\bar Q = 3.5$. Observe that the EoS variation tends to become smaller for larger $\bar \xi$, where the eccentricity variation also becomes smaller. However, the EoS variation increases again if one increases $\bar \xi$ too much.
}
\end{center}
\end{figure*}
\subsection{Eccentricity Profile in dCS gravity}
Let us now study whether the elliptical isodensity explanation for the origin of the universality in GR~\cite{Yagi:2014qua}, mentioned in Sec.~\ref{sec:intro}, is still applicable in modified theories of gravity. First, we look at how the stellar eccentricity variation inside a star in dCS gravity changes from that in GR. The left panels of Fig.~\ref{fig:Eccentricity} present the eccentricity profile for various EoSs with $\bar Q = 3.5$ (top) and $\bar Q = 8.5$ (bottom) in GR and in dCS with $\bar \xi = 10^3$. Observe that the eccentricity variation is smaller in dCS than in GR when $\bar Q$ is relatively small. On the other hand, such a variation in dCS is almost indistinguishable from that in GR for relatively large $\bar Q$. This is because one approaches the Newtonian regime for larger $\bar Q$ values, in which dCS corrections are suppressed. The top right panel of Fig.~\ref{fig:Eccentricity} shows the eccentricity profile in dCS gravity in more detail. Here we choose the Shen EoS and $\bar Q = 3.5$, and study the dependence of the eccentricity profile on $\bar \xi$. One clearly sees that the variation becomes smaller for larger $\bar \xi$.
We now investigate how such an eccentricity variation is related to the amount of EoS variation in the I-Love-Q relations. The bottom right panel of Fig.~\ref{fig:Eccentricity} presents the fractional difference of the I-Q relation for the Shen EoS from the fit for various values of $\bar \xi$. The vertical dashed line corresponds to $\bar Q = 3.5$. For this value of $\bar Q$, observe that the fractional difference becomes smaller as one increases $\bar \xi$ from 0 to 500, for which the eccentricity variation also becomes smaller, as explained in the previous paragraph. This finding supports the conclusion in~\cite{Yagi:2014qua} that the I-Q relation (or relations among multipole moments) becomes stronger as the eccentricity variation becomes smaller. However, if one goes beyond $\bar \xi = 500$ (as done also e.g.~in Fig.~\ref{fig:IL_QL}), the EoS variation in the I-Q relation starts to increase, although the eccentricity variation keeps decreasing. This feature is the opposite of what happens in GR, and of what one would expect if the self-similarity of isodensity contours were solely responsible for the universality. This suggests that the origin of the universality in modified theories of gravity may be more complicated than that in GR due to the additional degrees of freedom present.
\subsection{Future Observational Bounds}
\begin{figure}[hbtp]
\begin{center}
\includegraphics[width=9.5cm,clip=true]{ILove_allfits.pdf}
\caption{\label{fig:GR_test} (Color Online) The best-fit I-Love relation in dCS gravity with various $\bar \xi$. We also show a 10\% measurement accuracy of the NS moment of inertia with future double binary pulsar observations, and a 40\% measurement accuracy of the NS tidal Love number with future gravitational wave observations, with a fiducial mass of $1.338M_\odot$ assuming the Shen EoS, which corresponds to the observed mass of the primary pulsar in J0737-3039. Observe that such observations, if realized, allow us to place the bound $\bar \xi < 1.85 \times 10^4$, which is six orders of magnitude more stringent than those from Solar System~\cite{alihaimoud-chen} and table-top~\cite{kent-CSBH} experiments in terms of the characteristic length scale $\xi_{\mbox{\tiny CS}}^{1/4}$.
}
\end{center}
\end{figure}
Universal relations project out uncertainties in nuclear physics, and thus they are very useful in probing strong-field gravity with NS observations. Let us then begin by reviewing how one can use these relations to probe dCS gravity following~\cite{I-Love-Q-Science,I-Love-Q-PRD}. Let us imagine that we measure the moment of inertia of the primary pulsar (whose mass is estimated as $1.338 M_\odot$~\cite{burgay,lyne,kramer-double-pulsar}) in the double binary pulsar J0737-3039 to 10\% accuracy~\cite{lattimer-schutz,kramer-wex}. Let us also assume that we measure the tidal Love number of a $\sim 1.338 M_\odot$ NS\footnote{Even if we detect GWs from a NS binary with component masses different than the mass of the primary pulsar in J0737-3039, one can still measure the tidal Love number of a $1.338 M_\odot$ NS by Taylor expanding the tidal Love number as a function of the NS mass about $M_*=1.338 M_\odot$ and measuring the leading-order coefficient, as done in~\cite{messenger-read,delpozzo,Agathos:2015uaa,Yagi:2015pkc,Yagi:2016qmr}.} with an accuracy of 40\%~\cite{I-Love-Q-Science,I-Love-Q-PRD}. Figure~\ref{fig:GR_test} presents these errors in the I-Love plane as dashed lines with the fiducial values taken to be a $1.338 M_\odot$ NS (black star) assuming the Shen EoS (which gives the most conservative bound on dCS gravity among all the EoSs considered in this paper). The black solid curve corresponds to the I-Love relation in GR.
The relation in modified theories of gravity has to pass through the error box in order to be consistent with the above hypothetical measurements. In Fig.~\ref{fig:GR_test}, we show the I-Love relation in dCS gravity with three different choices of $\bar \xi$. Observe that the relation with $\bar \xi < 1.85 \times 10^4$ is consistent with the measurement errors, thus ruling out $\bar \xi > 1.85 \times 10^4$. Using $M_*=1.338 M_\odot$, one can map such a bound to a constraint on the characteristic length scale $\sqrt{\alpha} \lesssim 86$ km, which is six orders of magnitude stronger~\cite{I-Love-Q-Science,I-Love-Q-PRD} than the bounds from Solar System~\cite{alihaimoud-chen} and table-top~\cite{kent-CSBH} experiments. Such a large improvement on the bound is realized because NS observations allow us to probe the strong-field regime, where dCS corrections become naturally large. One can also check that the above bound corresponds to $\zeta < 0.1$ in terms of the dimensionless coupling constant, and thus satisfies the small-coupling approximation. This finding shows the impact of using universal relations on probes of strong-field gravity.
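To make the unit conversion explicit: $\bar\xi = \xi_{\mbox{\tiny CS}}/M_*^4$ implies $\xi_{\mbox{\tiny CS}}^{1/4} = \bar\xi^{1/4}\,M_*$. The sketch below (assuming $GM_\odot/c^2 \simeq 1.4766$ km) gives $\xi_{\mbox{\tiny CS}}^{1/4} \sim 23$ km for $M_* = 1.338\,M_\odot$; the further conversion to $\sqrt{\alpha}$ quoted in the text requires the dimensionful prefactor in the definition of $\xi_{\mbox{\tiny CS}}$ in Eq.~\eqref{xi-def}, which we do not reproduce here.

```python
M_SUN_KM = 1.4766  # GM_sun/c^2 in km (assumed constant)

def xi_quarter_km(xibar, mass_msun):
    """Characteristic length xi_CS^{1/4} in km from a bound on
    xibar = xi_CS / M_*^4 for a star of the given mass."""
    return xibar**0.25 * mass_msun * M_SUN_KM

length_km = xi_quarter_km(1.85e4, 1.338)  # ~23 km
gain = 1e8 / length_km                    # vs the ~10^8 km current bound
```

The improvement over the current $\mathcal{O}(10^8)$ km bound exceeds six orders of magnitude in this length scale, consistent with the claim in the text.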
We now go one step further and study how this putative bound on dCS changes with different measurement accuracies of the moment of inertia $\delta \bar I$ and tidal Love number $\delta \bar \lambda^\mathrm{(tid)}$. The contours in Fig.~\ref{fig:alpha_constraint} present the bounds on $\sqrt{\alpha}$ as a function of the measurement accuracies $\delta \bar I / \bar I$ and $\delta \bar \lambda^\mathrm{(tid)} / \bar \lambda^\mathrm{(tid)}$. Observe that the bounds are all of $\mathcal{O}(10^2)$ km and are not very sensitive to $\delta \bar I$ or $\delta \bar \lambda^\mathrm{(tid)}$. This is because mapping bounds on $\bar{\xi}$ to bounds on $\sqrt{\alpha}$ involves a fourth root, $\sqrt{\alpha} \sim \bar{\xi}^{1/4}$, which softens the dependence of the $\sqrt{\alpha}$ constraint on $\delta \bar I$ and $\delta \bar \lambda^\mathrm{(tid)}$. One should therefore be able to place bounds on dCS gravity that are roughly six orders of magnitude stronger than current ones if the moment of inertia and tidal Love number are measured with future NS observations, irrespective of the precise details of the measurement accuracy.
\section{Discussion}
\label{sec:disc}
In this paper we studied the I-Love-Q relations in dCS gravity. We found that whether such relations remain universal depends on which coupling parameter one fixes. If one fixes $\bar \xi$, the relations are universal, while the universality is lost for fixed $\zeta$ or $\xi_{\mbox{\tiny CS}}$. By fixing $\bar \xi$, we found that the I-Love and I-Q relations are universal to $\mathcal{O}(0.1\%)$, which is comparable to the EoS variation in GR. On the other hand, the Q-Love relation is universal to $\mathcal{O}(1\%)$, which is larger than in the GR case. We next studied whether the elliptical isodensity explanation for the origin of the universality still holds in dCS gravity. We found that the eccentricity variation inside a star in dCS is smaller than that in GR, and the universality in the I-Q relation becomes stronger as one increases $\bar \xi$. However, if one increases $\bar \xi$ too much, the universality becomes weaker while the eccentricity variation keeps decreasing. This suggests that the origin of the universality in non-GR theories may be more complicated than in GR. Finally, we studied how one can use the I-Love relation to probe strong-field gravity. We found that future radio and gravitational wave observations should be able to place bounds on dCS gravity that are roughly six orders of magnitude stronger than current bounds, irrespective of the details of the measurement accuracy of the moment of inertia and tidal Love number.
One obvious direction for future work is to study the I-Love-Q relations for rapidly-rotating NSs. This could be done by modifying the publicly-available code RNS~\cite{stergioulas_friedman1995}. Such an extension is important to e.g. apply the I-Q relations to rapidly-spinning NS observations using NICER. Another avenue for future work includes studying other universal relations, such as those among higher-order multipole moments~\cite{Yagi:2014bxa} and various tidal Love numbers~\cite{Yagi:2013sva}. In particular, we studied parity-even tidal Love numbers in this paper but it would be interesting to consider parity-odd ones~\cite{damour-nagar,binnington-poisson,Pani:2015nua,Landry:2015zfa,Landry:2015cva,Delsate:2015wia,Landry:2015snx}, as these quantities would acquire non-vanishing dCS corrections. One can also study the universal relations and the origin of the universality in theories other than dCS gravity. For example, universal relations have not been studied within Lorentz-violating theories of gravity, such as Einstein-\AE ther~\cite{Jacobson:2000xp,Eling:2004dk,Jacobson:2008aj} and khronometric~\cite{Blas:2010hb} gravity.
In these theories, non-rotating~\cite{eling-AE-NS} and slowly-moving~\cite{Yagi:2013qpa,Yagi:2013ava} NS solutions have been constructed but slowly-rotating NS solutions have not been studied in the literature yet. Work along this direction is currently in progress.
\section{Acknowledgements}
K.Y. acknowledges support from the Simons Foundation and NSF grant PHY-1305682. N.Y. acknowledges support from NSF CAREER grant PHY-1250636 and NASA grants NNX16AB98G and 80NSSC17M0041. Some calculations used the computer algebra system \textsc{MAPLE}, in combination with the \textsc{GRTENSORII} package~\cite{grtensor}.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}
There have been numerous studies of ``slow light'' and ``fast light'' where
resonant structures~\cite{Smith08} have been used to enhance or
reduce the slope of the dispersion $dk/d\Omega$, where $k(\Omega)$ is the wave vector as a function of light angular
frequency $\Omega$. One application that has been
proposed is to increase the Sagnac response, hence
create passive~\cite{Leonardt00} or active~\cite{Shahriar07} optical gyroscopes of enhanced sensitivity.
These studies associate the enhancement or decrease of the gyro response with
a change in pulse velocity. Although these proposals deal with group velocities, they were aimed exclusively at
cw lasers, in which there is no propagating wave packet, making it difficult to associate the claimed enhancement or reduction of a
gyroscopic (Sagnac) response with a change in group velocity.
Furthermore, as Arnold Sommerfeld pointed out as early as 1907~\cite{Sommerfeld1907},
the mathematical quantity group velocity does not represent the velocity of an
electromagnetic signal in frequency regions with large dispersion~\cite{BrillouinBook}.
Measurements presented here of the gyro response in a mode-locked
ring laser as a function of average pulse velocity in the cavity
demonstrate that the
concept of ``fast light'' or ``slow light'' applied to this situation is a misnomer since:
\begin{itemize}
\item Modifications of the
average pulse velocity of the laser leave the gyro response unaffected.
\item The gyro response can indeed be altered by the dispersion of an intracavity element.
However, the repetition rate of the laser is not solely determined by this dispersion.
\end{itemize}
The expression ``gyro response'' is usually understood as the beat note frequency measured
between the two outputs corresponding to oppositely circulating beams in a ring laser, as a
function of the rotation of the laser about an axis orthogonal to its plane. This is only a particular case
of ``Intracavity Phase Interferometry'' (IPI) where a mode-locked laser is used as an
interferometer in which two pulses circulate independently~\cite{Arissian14b}. The frequency of the beat note
obtained by interfering
the two output frequency combs (corresponding to each of these intracavity pulses)
is a measure of a differential phase shift at each round trip between the circulating pulses.
IPI in this paper refers to the phase interferometry inside a {\em laser cavity} as opposed to the
enhancement of the phase response associated with a {\em passive Fabry-Perot}~\cite{Ma99}.
While the distinction may appear merely quantitative because the laser can be seen as a
Fabry-Perot of extreme finesse, the difference is qualitative because the response is
an optical frequency shift rather than an amplitude modulation.
Another distinction has to be made with the label
``active-cavity interferometer'' as used by Abramovici and Vager~\cite{Abramovici85}, where
two gain media are inserted in 2 branches of a Michelson,
which result in uncorrelated spontaneous emission noise. In IPI, there is only one gain medium
acting on both pulses.
The average velocity of a pulse circulating inside
a mode-locked laser is
continuously tuned by adjusting the angle of incidence of a Fabry-Perot etalon inserted
in the laser cavity~\cite{Masuda16}. By applying this technique to a ring laser,
it is demonstrated here that
the gyro response is not correlated to the
pulse envelope velocity. However, because the Fabry-Perot is coupled to the mode-locked laser cavity,
the resonant {\em dispersion} of the etalon creates a significant change in
the phase (or gyro) response of the laser.
\section{Phase response in a mode-locked laser}
\label{phase_response}
In the general case of ``Intracavity Phase Interferometry'' (IPI), which involves linear as well
as ring mode-locked lasers, a physical quantity to be measured
(nonlinear index, magnetic field, rotation, acceleration, electro-optic coefficient,
fluid velocity, linear index) creates a differential phase shift $\Delta \phi$ between the two pulses, which,
because of the resonance condition of the laser, is translated into a difference in optical
frequency~\cite{Arissian14b}. This difference is measured as a beat note
produced when interfering the two frequency combs generated by the laser. The measured beat note
$\Delta \omega$ can be expressed as:
\begin{equation}
\Delta \omega = \frac{\Delta \phi}{\tau_\phi} = \omega \frac{\Delta P}{P},
\label{basic_phase_response}
\end{equation}
where $\tau_\phi$ is the round-trip time of the pulse circulating in a laser cavity of perimeter $P$ (in the
case of a linear cavity of length $L$, $P = 2L$ and $\Delta P = 2 \Delta L$), and $\omega$ is the average
optical pulse frequency. The technique of IPI has been shown to have extreme sensitivity, with the
ability to resolve phase shift differences as small as $\Delta \phi \approx 10^{-8}$
(corresponding to a beat note bandwidth of 0.16 Hz for a cavity round-trip time of $\tau_\phi$ =
10 ns~\cite{Arissian14b,Velten10}).
This corresponds to an optical path difference of only 0.4 fm. If applied to a square ring laser
of 4 m$^2$, the beat note bandwidth of 0.16 Hz corresponds to a sensitivity in
rotation rate change of $\approx 0.2$ revolution/year.
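As a numerical sanity check of Eq.~(\ref{basic_phase_response}) (an illustrative sketch, not part of the original text), the quoted 0.16 Hz beat note bandwidth and 10 ns round-trip time indeed correspond to a round-trip phase resolution of about $10^{-8}$:

```python
import math

tau_phi = 10e-9   # cavity round-trip time (s), as quoted in the text
beat_bw = 0.16    # beat note bandwidth (Hz)

# Eq. (1): delta_omega = delta_phi / tau_phi, with delta_omega = 2*pi*beat_bw
delta_phi = 2 * math.pi * beat_bw * tau_phi
print(delta_phi)  # ~1.0e-8 rad, the quoted phase resolution
```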
The principle of the ``fast light enhancement'' of the response of intracavity phase interferometry
(and in particular gyro response) is to make $\tau_\phi$ frequency dependent through
an element having a transfer function
$\tilde{\cal T}(\Omega) = \left | \tilde{\cal T}\right | \exp[-i \psi(\Omega)]$ with giant dispersion:
\begin{equation}
\tau_\phi = \tau_{\phi 0} + \left . \frac{d\psi}{d\Omega} \right |_{\omega_0}
\label{dispersion}
\end{equation}
where $\tau_{\phi 0} = (P n_p)/c$ is the round-trip time without the dispersive element, $n_p$ is the {\em phase}
index of refraction at the central carrier frequency $\omega_0$, averaged over the elements of the cavity, and
$-\psi(\Omega)$ is the phase of the transfer function of the dispersive optical element inserted in the cavity, with $\psi(\omega_0) = 0$.
By substituting Eq.~(\ref{dispersion}) in Eq.~(\ref{basic_phase_response}), the beat note is thus:
\begin{equation}
\Delta \omega = \frac{\Delta \phi/\tau_{\phi 0}}{1 +
\frac{1}{\tau_{\phi 0}}\left .\frac{d \psi}{d\Omega} \right |_{\omega_0}} =
\frac{\Delta \omega_0}{1 +
\frac{1}{\tau_{\phi 0}}\left .\frac{d \psi}{d\Omega} \right |_{\omega_0}}
\label{basic_equation_disp}
\end{equation}
It should be noted that all the above considerations pertain to {\em phase} resonances and
velocities. In the case of normal dispersion, $d\psi/d\Omega|_{\omega_0}$ is positive,
resulting in a decrease of $\Delta \omega$. There is amplification of the phase response
if $d\psi/d\Omega|_{\omega_0}$ is negative, a case that
is most often quoted as a ``fast light'' response.
If we consider simply propagation through a transparent medium, $\psi = [k(\Omega)-k_0] d$,
where $k(\Omega) = \Omega n(\Omega)/c$ is the wavevector in a medium of
thickness $d$ and index $n(\Omega)$, and $k_0 = k(\omega_0)$, then
the second term in the denominator of Eq.~(\ref{basic_equation_disp}) is:
\begin{equation}
\frac{1}{\tau_{\phi 0}}\left .\frac{d \psi}{d\Omega} \right |_{\omega_0} = \frac{1}{\tau_{\phi 0}}\frac{d}{v_g},
\label{group-phase}
\end{equation}
where $v_g$ is the group velocity {\em in a dielectric}. Equation~(\ref{group-phase})
deals fundamentally with the {\em phase} of the light in a laser cavity, and not the envelope velocity of a
circulating pulse. As demonstrated in reference~\cite{Masuda16}, the envelope velocity
of a pulse circulating in
a mode-locked laser is not related to $\left .dk/d\Omega \right |_{\omega_0}$ for
a $k$ vector averaged in the cavity, but to the gain and loss dynamics inside the
laser. This point will be further emphasized in the present paper,
where it is shown that the envelope velocity of circulating
pulses or bunches of pulses can be varied, while the gyro response remains unchanged.
However, it will be shown that the teeth of the frequency comb of a mode-locked laser
can be coupled to the modes of an intracavity etalon.
It is further demonstrated that a large dispersion results from
this coupling, with a magnitude such that $\frac{1}{\tau_{\phi 0}}\left .\frac{d \psi}{d\Omega} \right |_{\omega_0} $
is of the order of unity.
\section{Challenge in achieving laser dispersion}
\label{challenge}
In order to achieve the very large dispersion required to modify the phase response through
Eq.~(\ref{basic_equation_disp}), a very narrow-band resonant structure is needed. A narrow bandwidth
implies long pulses or cw radiation, which is where most of the research in this field has focused.
For instance, theoretical estimates have found that large $d\psi/d\Omega$ can be produced
by two-peak gain and coupled
resonators~\cite{Yum10,Smith14}, or by an atomic medium~\cite{Smith09}. The latter
property has been verified experimentally. These regions of large dispersion have a small bandwidth,
which needs only to exceed the
largest beat note to be measured. For the mode-locked laser, however,
the giant slope of the resonant phase $\psi(\Omega)$ versus frequency has to be seen
by every tooth of the comb, as illustrated in Fig.~\ref{teeth-modes}.
In a mode-locked laser gyro, as with any implementation of intracavity phase interferometry, the
two circulating pulses have to meet at the same point at every round-trip~\cite{Arissian14b}. As the pulses circulating
in opposite directions see an optical length differential, decreased or augmented by the giant dispersion, one would
expect that the crossing point could not be maintained if the pulse velocity were simply equal to
$1/(dk/d\Omega)$. However, it has been established that the average
envelope velocity in a mode-locked laser
is dominated by gain and loss dynamics of the entire cavity,
and that the crossing point of the two pulses can be maintained~\cite{Arissian14b,Masuda16}.
\begin{figure} [h!]
\centering
\includegraphics*[width=\linewidth]
{teethmodesnew.eps} \caption[]{\small (a) Frequency comb(s) out of a ring mode-locked laser.
At rest, the teeth of the two countercirculating combs coincide (vertical red lines).
In the presence of a relative phase shift per round-trip (Sagnac effect in the case of a laser gyro),
the teeth split (dashed lines). According to Eq.~(\ref{basic_equation_disp}), this frequency splitting can be
modified by the dispersion of a sharp resonance, which should be present at each tooth of the comb at rest.
(b) If a Fabry-Perot etalon (i.e. a 15 mm thick piece of fused silica) is inserted in the laser cavity,
it has been shown~\cite{Masuda16} that each mode of the Fabry-Perot couples to a mode of the cavity.
The laser comb being locked to the Fabry-Perot modes, each tooth (or pair of teeth) experiences
the dispersion of the etalon having acquired the finesse of the laser.}
\label{teeth-modes}
\end{figure}
\subsection{Giant dispersion of an intracavity etalon}
In order for the ``fast light enhancement'' or ``slow light reduction'' of Section~\ref{challenge} to apply to a mode-locked laser, there
should be a resonant structure for each mode of the laser, as sketched in Fig.~\ref{teeth-modes}~(a). The dashed lines
indicate the position of the counter-circulating modes after rotation. The
alternative to having a resonant structure for each mode is to have a resonant structure with much larger mode spacing
(for instance 100 $\times$ larger) locked to the modes of the laser [Fig.~\ref{teeth-modes} (b)]. The mode-locked laser is known to
create a frequency comb with equally spaced modes~\cite{Udem99a, Udem99b,Jones00}. By the same mechanism explained in
reference~\cite{Arissian09}, the teeth of the frequency comb will be locked to the modes of
the resonant structure in Fig.~\ref{teeth-modes}~(b). Such a resonant structure could be a Fabry-Perot etalon
inserted in the cavity. At first sight this seems to be an impossibility, because:
\begin{itemize}
\item If the Fabry-Perot has a high finesse, it will filter the frequency spectrum of the laser, resulting
in long-pulse or cw operation.
\item If the etalon has a low finesse, there is no resonant enhancement.
\item A complicated electronic feedback system would be required to keep the modes of the
laser cavity and those of the etalon locked, such as has been realized in reference~\cite{Delfyett06}.
\end{itemize}
A study of the characteristics of a mode-locked laser with an intracavity etalon~\cite{Masuda16} contradicts all
these points. A low finesse Fabry-Perot inserted in the mode-locked laser creates a high frequency
pulse train passively locked to the comb (modes) of the laser. The low finesse uncoated etalon,
when inserted in the mode-locked cavity, acquires a high finesse determined by the laser cavity.
All these points studied with a linear laser are confirmed in the ring configuration presented in the
next section.
\section{Experiments with a mode-locked ring Ti:sapphire laser}
\begin{figure} [h!]
\centering
\includegraphics*[width=\linewidth]
{ring_laser.eps} \caption[]{\small Ti:sapphire ring laser, mode-locked by the saturable absorber
Hexaindotricarbocyanine iodide (HITCI) dissolved in a jet of ethylene glycol ($S$). The gain medium ($G$)
and the phase modulator ($M$) are located at approximately 1/4 cavity perimeter from the
saturable absorber $S$. An output coupling is made near the other pulse crossing point, and the two
output pulse trains are made to interfere on a detector $D_b$ to monitor
the beat note between the two frequency combs.
Instead of rotating the laser, a phase shift/round-trip is provided by a phase modulator $M$ driven by
the detector $D_s$ at the cavity repetition rate (details in reference~\cite{Arissian14b}).}
\label{ring_laser}
\end{figure}
A Ti:sapphire ring laser mode-locked by a saturable absorber was constructed for a demonstration experiment.
As sketched in Fig.~\ref{ring_laser}, the main components of the laser are a Ti:sapphire gain crystal pumped by
a frequency doubled vanadate laser, prisms for dispersion compensation, and a saturable absorber dye jet for mode-locking
and for defining the crossing point of two pulses circulating in the cavity. Such a ring laser is one of the
numerous mode-locked laser gyro and IPI configurations that have been demonstrated in past
research~\cite{Arissian14b,Lai92c}.
It has also been established that, when a saturable absorber is used, it should be
in a flowing configuration (dye jet)
to prevent phase coupling between the two counter-circulating pulses, by randomizing the phase of the backscattering of one pulse
into the other~\cite{Arissian14b}.
The response of intracavity phase interferometry is investigated by applying a differential phase shift per round trip
with a phase modulator inserted in the cavity, located preferably at 1/4 perimeter away from the pulse crossing point.
The phase modulator is a 100 $\mu$m thick plate of lithium niobate, oriented at Brewster angle,
with electrodes on one face to apply an electric field along the $z$ crystallographic axis.
The applied field is generated by narrow-band amplification of the signal of a detector monitoring
one of the output pulse trains (detector $D_s$ monitoring the pulse train from the clockwise circulating pulse
in Fig.~\ref{ring_laser}).
\section{Modifications of the phase response and envelope velocities with an intracavity Fabry-Perot etalon}
\subsection{Envelope velocities}
\begin{figure} [h!]
\centering
\includegraphics*[width=\linewidth]
{time-frequency.eps} \caption[]{\small (a) Oscilloscope trace of a high repetition rate pulse train
created by the intracavity etalon. This picture is recorded with a fast
photodiode and an 8 GHz oscilloscope. (b) Spectrum of the nested frequency comb,
recorded with the same photodiode
and a spectrum analyzer.
The center frequency is 6.8 GHz, and the span 3 GHz. The marker is at 6.835 GHz.
The baseline step is an artifact of the spectrum analyzer.}
\label{pulsetrain}
\end{figure}
A 15.119 mm thick fused silica etalon, uncoated, is inserted in the ring cavity.
This material has a phase index of $n$ = 1.4534 at the laser wavelength of 800 nm,
and a group index $n_g$ = 1.46. Similar to the situation in a linear
mode-locked laser~\cite{Liu05,Masuda16}, despite the very low finesse of this Fabry-Perot, it influences the mode-locking
by creating a high frequency (close to the etalon round-trip frequency) pulse train
which repeats itself at a lower frequency (close to the original laser repetition rate).
Figure~\ref{pulsetrain} (a) shows the high repetition rate pulse train nested inside
the laser pulse train, which can be explained as follows.
At every round-trip, each pulse of the high frequency train adds coherently to the next one.
This coherence is established
through the resonance condition of the pulses within the laser cavity. Figure~\ref{pulsetrain} (b) shows the
spectrum of the nested frequency comb
associated with the double pulse train.
The changes in repetition rate of the laser cavity, after introduction of the Fabry-Perot, cannot be explained by
the traditional group delay introduced by the etalon. The transmission function of a Fabry-Perot of thickness $d$ and intensity reflectivity $R= |r|^2$
(where $r$ is the field reflectivity), at an internal angle
$\theta$ with the normal, is:
\begin{equation}
{\cal T}(\Omega) = \frac{(1 - R)e^{-ikd\cos \theta}}{1 - R e^{-2ikd\cos \theta}}.
\label{FPT}
\end{equation}
The group delay is the first derivative of the phase $\psi$ of this expression with respect to
frequency:
\begin{equation}
\left .\frac{d \psi}{d \Omega} \right |_{\omega_0} = \left (\frac{1 + R}{1 - R}\right )
\frac{1 + \tan^2 \delta}{\left [1 + \left( \frac{1 + R}{1 - R} \right)^2 \tan^2 \delta\right ] }\frac{nd}{c}
\label{FPgroup_delay}
\end{equation}
where $\delta = kd = \omega n d/c$.
This expression is valid near a resonance, where we may make the approximation $\tan \delta \approx \delta$. To remain within the bandwidth of the
Fabry-Perot transmission,
$[(1+R)/(1-R)]\,\delta \ll 1$, and:
\begin{equation}
\left .\frac{d \psi}{d \Omega} \right |_{\omega_0} \approx \frac{1+R}{1-R} \frac{nd}{c}
\label{approx1}
\end{equation}
It has been demonstrated that the average velocity of the pulse circulating in the mode-locked cavity
differs considerably from the group delay of Eqs.~(\ref{FPgroup_delay}) and~(\ref{approx1}); the pulse delays are determined instead by
dynamic gain and loss considerations. For instance, the continuous transfer of energy from each pulse of the
high frequency train into the next one results in a delay of the center of gravity of that train.
Saturable gain has the opposite effect of accelerating the pulse trains.
The average velocities, as modified by the Fabry-Perot etalon, as function of the tilt of the etalon,
have been measured and matched with theoretical simulations in reference~\cite{Masuda16}.
\begin{figure} [h!]
\centering
\includegraphics*[width=\linewidth]
{4.eps} \caption[]{\small (a) Plot of the tooth spacing
of the frequency comb corresponding to the clockwise circulating
group of pulses, as a function of the tilt angle of the etalon.
(b) Tooth spacing of the high frequency comb (solid red line)
and the low frequency comb (ring cavity repetition rate - blue dashed line)
versus cavity perimeter.}
\label{group_velocities}
\end{figure}
Figure~\ref{group_velocities} shows the repetition rate of the ring laser as a function of the
angular tilt of the etalon with respect to the normal (a), and as a function of the cavity perimeter (b).
Figure~\ref{group_velocities}~(a) cannot be explained by
the angular dependence of the ``group delay'' in Eq.~(\ref{FPgroup_delay}). Furthermore, there is a change
in repetition rate from 99.3072 MHz (10.06976 ns round-trip time) to 99.01 MHz (10.098847 ns round-trip time)
upon insertion of the Fabry-Perot, which is a change of round-trip time of 29.087 ps.
This difference does not correspond to the insertion of the etalon, which should add $(n_g-1) d/c
= (1.462 -1)\times 15.119/c = 13.5667$ mm. The measured 29.087 ps corresponds instead to the
insertion of an etalon of thickness 18.660 mm, or 3.54 mm more than the inserted glass!
Further evidence of the coupling between the modes of the laser and those of the etalon is the
plot of Fig.~\ref{group_velocities}~(b), where the cavity length dependence of the
low frequency and high frequency combs is compared. The pulse round-trip periods in the etalon and in the big ring are both linked to the
perimeter of the large ring cavity. It should be noted that all these properties, which have been
analyzed in reference~\cite{Masuda16}, are only observed when the pulse duration
is much shorter than the Fabry-Perot etalon
round-trip time.
\subsection{Phase response}
\begin{figure} [h!]
\centering
\includegraphics*[width=\linewidth]
{5.eps} \caption[]{\small (a) Slope of the beat note response as a function of Fabry-Perot angle.
The slope remains at 0.8 kHz/V independently of the tilt of the Fabry-Perot. (b) Comparison of the beat note response
before (red curve) and after (blue curve) insertion of the Fabry-Perot. }
\label{group_vs_beat}
\end{figure}
It is clear from the previous section that the modes of the laser comb and Fabry-Perot are coupled,
and therefore it can be expected that the laser comb will be influenced by the dispersion of
the Fabry-Perot. The previous measurements have also established that the
average pulse envelope velocity in the laser is {\em not related to $d\psi/d\Omega$} as is generally taken for granted.
Indeed, Fig.~\ref{group_velocities}~(a) has shown that the pulse envelope velocity varies significantly with
the angle $\theta$. Over that range, $d\psi/d\Omega$ remains constant, and so does the phase response plotted
as a function of tilt angle in Fig.~\ref{group_vs_beat}~(a). In Fig.~\ref{group_vs_beat}~(b), the
beat note is plotted as a function of applied voltage on the lithium niobate modulator, before (red solid curve) and
after insertion of the Fabry-Perot. The gyro response slope switches from 2.3 kHz/V without Fabry-Perot, to 0.8 kHz/V, which implies, from Eq.~(\ref{basic_equation_disp}),
that:
\begin{equation}
1 +
\frac{1}{\tau_{\phi 0}}\left .\frac{d \psi}{d\Omega} \right |_{\omega_0} = 2.3/0.8 \approx 2.9
\label{FP_resonant_disp}
\end{equation}
Therefore, from Eq.~(\ref{approx1}) we note that the ratio of the slopes minus one is
the product of the ratio of the etalon to laser optical lengths, times $(1+R)/(1-R)$:
\begin{equation}
\frac{nd}{c\tau_{\phi 0}}\frac{1 + R}{1 - R} = 0.01456 \times \frac{1 + R}{1 - R} = 1.9.
\end{equation}
The ratio $(1+R)/(1-R)$ should thus be $1.9 \times 68.68 = 130$, which corresponds to a
value of effective reflectivity $R = 98$\%. We note that this corresponds to the
reflectivity needed to create a bunch of 20 pulses.
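The effective reflectivity can be backed out directly from the measured slopes (an illustrative sketch, not part of the original text, using only values quoted above):

```python
# Back out the effective etalon reflectivity R from the measured gyro-response
# slopes, using only numbers quoted in the text.
slope_without_fp = 2.3   # kHz/V, without the Fabry-Perot
slope_with_fp = 0.8      # kHz/V, with the Fabry-Perot inserted
length_ratio = 0.01456   # (n d)/(c tau_phi0), etalon-to-cavity optical lengths

# The ratio of slopes equals 1 + (1/tau_phi0) dpsi/dOmega.
dispersion_term = slope_without_fp / slope_with_fp - 1.0   # ~1.9

# dispersion_term = length_ratio * (1 + R)/(1 - R)  =>  solve for R.
x = dispersion_term / length_ratio                         # (1+R)/(1-R) ~ 130
R = (x - 1.0) / (x + 1.0)
print(R)   # ~0.985, consistent with the ~98% effective reflectivity quoted
```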
\subsection{Negative versus positive dispersion}
In the case of an intracavity etalon, the modes of the laser couple to those of the etalon
because this is the configuration of minimum losses.
This is also the reason that
normal dispersion ($d \psi/d\Omega > 0$) is observed, as the Kramers-Kronig correspondent of a
negative loss line. Using the etalon in reflection would provide the negative dispersion
needed for amplification of the phase response, according to Eq.~(\ref{basic_equation_disp}).
However, the reflection characteristic would favor operation of the laser with its modes
{\em between} the etalon resonances. The matching of the resonances could be forced by
active stabilization of the laser modes, or by synchronous pumping (OPO configuration),
which may also require active stabilization. One solution that addresses the phase problem without
introducing
periodic losses is to substitute a cavity mirror by a Gires-Tournois interferometer.
The latter is essentially an etalon of which one face has 100\% reflectivity, and the other
face a field reflectivity $r$.
Its transfer function is given by~\cite{Diels06}:
\begin{equation}
{\cal R} =
\frac{-r+e^{-i\delta}}{1-re^{-i\delta}} = e^{-i \psi} \label{IF-2}
\end{equation}
where $\delta = 2 k d \cos \theta$ is the phase delay,
$\theta$ the {\em internal} angle. Near a resonance $\delta = 2 N \pi$,
the phase shift of the device can be approximated by:
\begin{equation}
\psi(\Omega) \approx - \left ( \frac{1+r}{1-r} \right ) \left( \delta - 2N\pi \right)
\end{equation}
Near the resonance, the group delay is approximately:
\begin{equation}
\frac{d \psi}{d\Omega} \approx - \left ( \frac{1+r}{1-r} \right ) \frac{d \delta}{d\Omega}.
\end{equation}
which has indeed the correct sign for enhancement of the gyro response.
Adding a Gires-Tournois interferometer of exactly the same thickness to the
present cavity will add a negative component to the denominator of Eq.~(\ref{basic_equation_disp}).
The Fabry-Perot with its resonances will lock the modes of the laser, which are then
also locked to those of the Gires-Tournois of the same thickness. The magic
value of the reflectivity that will make the denominator of Eq.~(\ref{basic_equation_disp})
equal to zero is $r$ = 99\% (intensity reflectivity of $r^2 = 0.98$).
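This ``magic'' value can be checked against the numbers quoted above (an illustrative sketch, not part of the original text):

```python
# Check that a Gires-Tournois face reflectivity r ~ 0.99 produces a negative
# dispersion term large enough to cancel the measured denominator 1 + 1.9 = 2.9.
length_ratio = 0.01456    # (n d)/(c tau_phi0) for the same-thickness etalon

r = 0.99                  # field reflectivity of the partially reflecting face
gt_term = (1.0 + r) / (1.0 - r) * length_ratio
print(gt_term)            # ~2.9: cancels the denominator of the beat-note response
print(r ** 2)             # ~0.98: the quoted intensity reflectivity
```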
\section{Conclusion}
This work addresses the phase response of a sensor based on Intracavity Phase Interferometry.
The device is a mode-locked laser in which two pulses circulate in the cavity, and are given a
phase shift relative to each other by the physical quantity to be measured.
The response of the device is a
beat frequency between the two frequency combs issued from the laser. The beat frequency
is proportional to the phase shift (or physical parameter to be measured).
It is shown that the proportionality constant (between beat frequency and phase)
can be modified by introducing a giant dispersion {\em for each tooth} of the frequency comb.
It is demonstrated experimentally with a mode-locked ring laser that the desired
coupling of a dispersion to all modes is obtained by inserting a low finesse etalon in
the laser cavity. The beat note (or gyroscopic) response modification is
a large reduction (by a factor of 2.9), because the Fabry-Perot etalon introduces a
positive (normal) resonant dispersion. It is pointed out that a resonant negative dispersion,
which would enhance the beat note response, could be achieved with a
Gires-Tournois interferometer. There are numerous other possibilities, involving for instance
two photon absorption, that can be exploited to achieve a resonant dispersion affecting
all modes of the frequency comb, to enhance the sensitivity of this class of devices.
\section{Introduction}
In this paper we continue the investigation of the algebraic convolution
operation, which was initiated in \cite{GH}. Namely, we study the
following operation:
\begin{defn}
\label{def:convolution} Let $X_1$ and $X_2$ be algebraic varieties,
$G$ an algebraic group and let $\varphi_{1}:X_{1}\to G$ and $\varphi_{2}:X_{2}\to G$
be algebraic morphisms. We define their convolution by
\begin{gather*}
\varphi_{1}*\varphi_{2}:X_{1}\times X_{2}\to G\\
\varphi_{1}*\varphi_{2}(x_{1},x_{2})=\varphi_{1}(x_{1})\cdot\varphi_{2}(x_{2}).
\end{gather*}
In particular, the $n$-th convolution power of a morphism $\varphi:X\rightarrow G$
is
\[
\varphi^{*n}(x_{1},\ldots,x_{n}):=\varphi(x_{1})\cdot\ldots\cdot\varphi(x_{n}).
\]
\end{defn}
The convolution operation as above can be viewed as a geometric version
of the classical convolution as follows.
Firstly, recall that given $f_{1},f_{2}\in L^{1}(\reals^{n})$, their convolution is defined by
\[
(f_{1}\ast f_{2})(x)=\int_{\reals^{n}}f_{1}(t)f_{2}(x-t)dt,
\]
and it has improved smoothness properties, e.g.
\begin{itemize}
\item if $f_{1}\in C^{k}(\reals^{n})$ and $f_{2}\in C^{l}(\reals^{n})$, then $(f_{1}*f_{2})'=f_{1}'*f_{2}=f_{1}*f_{2}'$ and therefore $f_{1}*f_{2}\in C^{k+l}(\reals^{n})$.
\item In particular, if $f_1$ is smooth, then $f_1*f_2$ is smooth for every $f_2 \in L^1(\reals^n)$.
\end{itemize}
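This smoothing effect is easy to see numerically (an illustrative sketch, not part of the original text): the indicator function of $[0,1]$ is discontinuous, but its self-convolution is the continuous triangle function.

```python
import numpy as np

# Indicator function of [0,1] sampled on a uniform grid.
dx = 1e-3
x = np.arange(-1.0, 3.0, dx)
f = ((x >= 0) & (x <= 1)).astype(float)

# Riemann-sum approximation of the convolution integral (f*f)(x).
conv = np.convolve(f, f) * dx

# f jumps by 1 between adjacent samples; f*f changes only by O(dx).
print(np.abs(np.diff(f)).max())     # 1.0  (discontinuous)
print(np.abs(np.diff(conv)).max())  # ~1e-3 (continuous triangle function)
```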
Now, for morphisms $\varphi_i:X_i\to G$ for $i=1,2$ as before,
consider the functions
\[
F_{\varphi_i}: G \to \mathrm{Schemes} \text{ by }F_{\varphi_i}(g)=\varphi_i^{-1}(g).
\]
Given a finite ring $A$, we naturally get maps $(\varphi_i)_A: X_i(A) \to G(A)$ and $(\varphi_1*\varphi_2)_A : X_1(A) \times X_2(A) \to G(A)$ from finite sets to the finite group $G(A)$, and furthermore,
\[
(\varphi_1 * \varphi_2)^{-1}_A (s)
= \biguplus\limits_{g\in G(A)} (\varphi_1)_A^{-1}(g) \times (\varphi_2)_A^{-1}(g^{-1}s).
\]
In particular, if we set $|F_{(\varphi_i)_A}|: G(A) \to \ints_{\geq 0}$ to be the function counting the sizes of the fibers, i.e.
$|F_{(\varphi_i)_A}|(g)=|(\varphi_i)_A^{-1}(g)|$, we see that the algebraic convolution operation commutes with counting points over finite rings:
\[
|F_{(\varphi_1)_A}|*|F_{(\varphi_2)_A}|(s)
=\sum_{g\in G(A)}|F_{(\varphi_1)_A}|(g)\cdot|F_{(\varphi_2)_A}|(g^{-1}s)
=|F_{(\varphi_1 * \varphi_2)_A}|(s).
\]
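This identity can be verified on a toy example (an illustrative sketch, not part of the original text): take $A = \ints/5\ints$, the morphism $\varphi(x)=x^{2}$ into the additive group, and compare the fiber counts of $\varphi * \varphi$ with the classical convolution of the fiber-count functions.

```python
# Toy verification of |F_{(phi*phi)_A}|(s) = (|F_phi| * |F_phi|)(s)
# for phi(x) = x^2 from A = Z/5Z into the additive group (A, +).
p = 5
A = range(p)

# Fiber counts |F_phi|(g) = #{x in A : x^2 = g (mod p)}
fiber = [sum(1 for x in A if x * x % p == g) for g in A]   # [1, 2, 0, 0, 2]

for s in A:
    # Classical convolution of the counting functions over the group Z/5Z.
    conv = sum(fiber[g] * fiber[(s - g) % p] for g in A)
    # Direct fiber count of phi * phi: #{(x, y) : x^2 + y^2 = s (mod p)}.
    direct = sum(1 for x in A for y in A if (x * x + y * y) % p == s)
    assert conv == direct
print("identity verified over Z/5Z")
```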
It is thus natural to ask whether analogously to the analytic convolution operation, the algebraic convolution operation
improves smoothness properties of morphisms:
\begin{question}
\label{que:(convolution} Let $\varphi_{i}:X_{i}\rightarrow G$ for $i=1,2$ be two morphisms from varieties $X_{1}$ and $X_{2}$
to an algebraic group $G$, and assume that $\varphi_{1}$ satisfies a
singularity property $S$.
\begin{enumerate}
\item
When does $\varphi_{1}*\varphi_{2}$ have property
$S$ as well?
\item Which singularity properties can one obtain after finitely many
self-convolutions of $\varphi_{1}$?
\end{enumerate}
\end{question}
Concerning (1), the following proposition shows
that the convolution operation preserves singularity properties of
morphisms in the following sense:
\begin{prop}[{{\cite[Proposition 3.1]{GH}}}]
\label{prop:convpreservesgoodproperties} Let $X$ and $Y$ be varieties
over a field $K$, let $G$ be an algebraic group over $K$ and let
$S$ be a property of morphisms that is preserved under base change
and compositions. If $\varphi:X\to G$ is a morphism that satisfies
the property $S$, the natural map $i_{K}:Y\to\mathrm{Spec}(K)$ has
property $S$ and $\psi:Y\to G$ is arbitrary, then $\varphi*\psi$
and $\psi*\varphi$ have property $S$.
\end{prop}
The rest of this paper is devoted to the study of the second part of Question \ref{que:(convolution},
and generalizes the results of \cite{GH}, which dealt with
the case where $G=V$ is a vector space.
If $\varphi$ is not smooth, then one cannot in general guarantee
that some convolution power of $\varphi$ will be smooth (e.g. $\varphi:\mathbb{A}^{1}\to(\mathbb{A}^{1},+)$
via $x\mapsto x^{2}$, see Proposition \ref{prop:not smooth after convolutions}
for a more general statement). However, it is possible to achieve
other singularity properties as in Theorems \ref{Main result} and
\ref{thm: singularity properties obtained after convolution} in the
next section.
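For the quadratic example this can be seen directly (a sketch of this special case only; the general statement is the cited proposition): the $n$-th convolution power of $x\mapsto x^{2}$ is

```latex
\varphi^{*n}(x_{1},\ldots,x_{n})=x_{1}^{2}+\ldots+x_{n}^{2},
\qquad
d\varphi^{*n}\big|_{(x_{1},\ldots,x_{n})}=2\,(x_{1},\ldots,x_{n}),
```

so the origin is a critical point of every convolution power, and no $\varphi^{*n}$ is a smooth morphism.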
Henceforth, let $K$ denote a field of characteristic $0$.
The following property
plays a key role in this paper:
\begin{defn}[{The (FRS) property}]
\label{def:(FRS)} Let $X$ and $Y$ be smooth $K$-varieties. We
say that a morphism $\varphi:X\rightarrow Y$ is (FRS) if
it is flat and if every fiber of $\varphi$ is reduced and has rational
singularities (for rational singularities see Definition \ref{defn: rational sings}).
\end{defn}
The (FRS) property was first introduced in \cite{AA16}, where it was proved that for any semi-simple algebraic group $G$ the commutator map $[\cdot,\cdot]: G\times G \to G$ is (FRS) after $21$ self-convolutions.
This was then used to show in \cite{AA16} and \cite{AA18}
that
if $\Gamma$ is a compact $p$-adic group or an arithmetic group of higher rank
then its representation growth is polynomial and does not depend on $\Gamma$.
Explicitly, for $\Gamma$ as above and every
$c>40$ it holds that
\[
r_n(\Gamma):=\#\{ \text{irreducible }n\text{-dimensional }\complex\text{-representations of }\Gamma\text{ up to equivalence}\} = o(n^c).
\]
It was furthermore proved in \cite{AA18}, based on works of Denef \cite{Den87} and Musta\c{t}\u{a} \cite{Mus01}, that fibers of (FRS) morphisms have good asymptotic point count over finite rings of the form $\ints/p^k \ints$ (either in $p$ or in $k$, see \cite[Theorem A]{AA18} and \cite[Theorem 1.4]{Gla}).
This allows one to interpret Question \ref{que:(convolution}(2) with respect to the (FRS) property in a probabilistic way:
given $\varphi: X \to G$, then the (FRS) property of $\varphi^{*n}$
can be reformulated in terms of uniform $L^\infty$-boundedness
after $n$ steps, of a family of random walks on $\{ G(\ints/p^k\ints)\}_{p,k}$, which is obtained by pushing forward the family of uniform probability measures on $\{ X(\ints/p^k\ints)\}_{p,k}$ under $\varphi$.
For further discussion of the (FRS) property and its implications see \cite[Section 1.3]{GH}
or one of \cite{AA18,AA16}.
\subsection{{Main results}}
\subsubsection{Algebro-geometric results}
\begin{defn}
We say that a $K$-morphism $\varphi:X\to Y$ is \textit{strongly
dominant} if it is dominant when restricted to each absolutely irreducible
component of $X$.
\end{defn}
In this paper we verify a conjecture of Aizenbud and Avni (see \cite[Conjecture 1.6]{GH}),
showing that every strongly dominant morphism into an algebraic group
becomes (FRS) after finitely many self-convolutions:
\begin{thmx}[Theorem \ref{Main result v2}]
\label{Main result} Let $X$ be a smooth $K$-variety, $G$ be a
connected algebraic $K$-group and let $\varphi:X\to G$ be a strongly
dominant morphism. Then there exists $N\in\mathbb{N}$ such that for
any $n>N$, the $n$-th convolution power $\varphi^{*n}$ is (FRS).
\end{thmx}
It is easy to see that one cannot give a universal bound (i.e. independent
of the map) on the number of convolutions needed in order to obtain
an (FRS) morphism. For example, the morphism $\varphi(x)=x^{n}$ requires
$n+1$ self-convolutions in order to become an (FRS) morphism. For
other properties as below, an upper bound depending only on $\mathrm{dim}G$ can be given:
\begin{thmx}[see Propositions \ref{prop: upper bounds for properties} and \ref{prop:The-bounds-are tight}]
\label{thm: singularity properties obtained after convolution}
Let $m\in\nats$, let $X_{1},{\ldots},X_{m}$ be smooth $K$-varieties,
let $G$ be a connected algebraic $K$-group, and let $\{\varphi_{i}:X_{i}\rightarrow G\}_{i=1}^{m}$
be a collection of strongly dominant morphisms.
\begin{enumerate}
\item For any $1\leq i,j\leq m$ the morphism $\varphi_{i}*\varphi_{j}$
is surjective.
\item If $m\geq\mathrm{dim}G$ then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat.
\item If $m\geq\mathrm{dim}G+1$ then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat with reduced fibers.
\item If $m\geq\mathrm{dim}G+2$ then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat with normal fibers.
\item If $m\geq\mathrm{dim}G+k$, with $k>2$, then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat with normal fibers which are regular in codimension $k-1$.
\end{enumerate}
Furthermore, these bounds are tight.
\end{thmx}
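Part (1) has a concrete finite-field shadow, which can be checked numerically (this is only an illustration over $\mathbb{F}_p$-points, with the illustrative choice $\varphi(x)=x^{2}$, not the scheme-theoretic statement):

```python
# Finite-field illustration of part (1): for phi(x) = x^2 into (F_p, +),
# the convolution phi * phi is (x, y) -> x^2 + y^2. Already this single
# convolution hits every element of F_p, while phi alone misses the
# (p - 1)/2 non-squares.
for p in [3, 5, 7, 11, 13]:
    squares = {x * x % p for x in range(p)}
    sums = {(a + b) % p for a in squares for b in squares}
    assert sums == set(range(p))          # phi * phi covers all F_p-points
    assert len(squares) == (p + 1) // 2   # phi alone is not surjective
print("x^2 + y^2 covers F_p for all tested odd primes p")
```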
It is a consequence of \cite{Elk78} and \cite[Corollary 2.2]{AA16}
that the (FRS) property is preserved under small deformations. This
allows us to extend our main result, Theorem \ref{Main result}, to
families of morphisms:
\begin{thmx}[{cf.~\cite[Theorem 7.1]{GH}}]
\label{main result for families} Let $K$ and $G$ be as in Theorem
\ref{Main result}, let $Y$ be a $K$-variety, let $\widetilde{X}$
be a family of varieties over $Y$, and let $\widetilde{\varphi}:\widetilde{X}\rightarrow G\times Y$
be a $Y$-morphism. Denote by $\widetilde{\varphi}_{y}:\widetilde{X}_{y}\rightarrow G$
the fiber of $\widetilde{\varphi}$ at $y\in Y$. Then,
\begin{enumerate}
\item The set $Y':=\{y\in Y:\widetilde{X}_{y}\text{ is smooth and }\widetilde{\varphi}_{y}:\widetilde{X}_{y}\rightarrow G\text{ is strongly dominant}\}$
is constructible.
\item There exists $N\in\nats$ such that for any $n>N$, and any $n$ points
$y_{1},\ldots,y_{n}\in Y'$, the morphism $\widetilde{\varphi}_{y_{1}}*\dots*\widetilde{\varphi}_{y_{n}}:\widetilde{X}_{y_{1}}\times\dots\times\widetilde{X}_{y_{n}}\rightarrow G$
is (FRS).
\end{enumerate}
\end{thmx}
As a consequence, we deduce the following theorem:
\begin{thmx}[{cf.~\cite[Corollary 7.8]{GH}, see \cite[Definition 7.7]{GH} for
the definition of complexity}]
\label{FRS on complexity} Let $G$ be an algebraic $K$-group. For
any $D\in\nats$ with $D>\mathrm{dim}G$, there exists $N(D)\in\nats$ such
that for any $n>N(D)$ and $n$ strongly dominant morphisms $\{\varphi_{i}:X_{i}\rightarrow G\}_{i=1}^{n}$
of complexity at most $D$ where $\{X_{i}\}_{i=1}^{n}$ are smooth
$K$-varieties, the morphism $\varphi_{1}*\dots*\varphi_{n}$ is (FRS).
\end{thmx}
\subsubsection{Model-theoretic/analytic results}
The heart of the proof of Theorems \ref{Main result} and \ref{main result for families}
lies in proving the model theoretic statements Theorems
\ref{Main model theoretic result for vector spaces}, \ref{Main model theoretic result-Varieties},
\ref{Main model theoretic result for families of vector spaces} and
\ref{Model theoretic result- families of varieties}, which are of interest
in their own right. We briefly explain this connection.
Let $\Ldp$ denote the first order Denef-Pas language (see Section
\ref{subsec:The-Denef-Pas-language,}) and let $\mathrm{Loc}$ denote
the collection of all non-Archimedean local fields. We also use the
notation $F\in\mathrm{Loc}_{>}$ to denote ``$F\in\mathrm{Loc}$
with large enough residual characteristic''. Given an algebraic $\rats$-variety
$X$, its ring of $\Ldp$-motivic functions $\mathcal{C}(X)$ and
the notion of an $\Ldp$-motivic measure were defined (\cite[Definitions 3.7 and 3.12]{GH}),
building on the usual definition of the ring of motivic functions
attached to an $\Ldp$-definable set (see Definition \ref{def:motivic function}).
Any affine algebraic $\rats$-variety $X$ can be identified with
an $\Ldp$-definable set by choosing some $\ints$-model $\widetilde{X}$
of $X$. Roughly speaking, a motivic function $f$ on a $\rats$-variety
$X$ is a collection $\{f_{F}\}_{F\in\mathrm{Loc}_{>}}$ of functions
$f_{F}:X(F)\rightarrow\complex$, which is locally (on open affine
subsets) determined by a collection of $\Ldp$-formulas as in Definition
\ref{def:motivic function}. For a smooth $\rats$-variety $X$, a
collection of measures $\mu=\{\mu_{F}\}_{F\in\mathrm{Loc}_{>}}$ on
$\{X(F)\}_{F\in\mathrm{Loc}_{>}}$ is a \textit{motivic} measure on
$X$ if there exists an open affine cover $X=\bigcup\limits _{j=1}^{l}U_{j}$,
such that $\mu_{F}|_{U_{j}(F)}=(f_{j})_{F}\left|\omega_{j}\right|_{F}$,
where $f_{j}\in\mathcal{C}(U_{j})$ and $\omega_{j}$ is a non-vanishing
top differential form on $U_{j}$.
The (FRS) property of a morphism $\varphi:X\to G$ has an equivalent
analytic characterization in terms of continuity of the pushforward
measures $\varphi_{*}(\mu_{F})$, where $\{\mu_{F}\}_{F\in\mathrm{Loc}_{>}}$
is a certain collection of smooth, compactly supported measures on
$\{X(F)\}_{F\in\mathrm{Loc}_{>}}$ (see Theorem \ref{Analytic condition for (FRS)}
and Proposition \ref{prop:reduction to an analytic}). Since the collection
$\{\mu_{F}\}$ can be chosen to be motivic, and since the pushforward
of a motivic measure is motivic, Theorem \ref{Main result} can be
reduced to statements about motivic functions. These statements are
Theorems \ref{Main model theoretic result for vector spaces} and
\ref{Main model theoretic result-Varieties}.
\begin{thmx}
\label{Main model theoretic result for vector spaces}
Let $h\in\mathcal{C}(\mathbb{A}_{\rats}^{n})$ be a motivic function
and assume that $h_{F}\in L^{1}(F^{n})$ for any $F\in\mathrm{Loc}_{>}$.
Then there exists $\epsilon>0$, such that $h_{F}\in L^{1+\epsilon}(F^{n})$
for any $F\in\mathrm{Loc}_{>}$.
\end{thmx}
\begin{defn}
Let $X$ be an analytic variety over a non-Archimedean local field
$F$, with ring of integers $\mathcal{O}_{F}$.
\begin{enumerate}
\item A measure $\mu$ on $X$ is called \textsl{smooth} if every $x\in X$
has an open neighborhood $U\subseteq X$ admitting an analytic diffeomorphism $\psi:U\rightarrow\mathcal{O}_{F}^{\mathrm{dim}X}$,
such that $\psi_{*}(\mu)$ is a Haar measure on $\mathcal{O}_{F}^{\mathrm{dim}X}$.
\item Let $f:X\rightarrow\mathbb{C}$ be a function on $X$ and $s\in\reals_{>0}$.
We say that $f$ is \textit{locally-$L^{s}$,} and write $f\in L_{\mathrm{Loc}}^{s}(X)$,
if for any open compact $U\subseteq X$, and for any (or equivalently
for some positive) smooth measure $\mu$ on $U$, we have $f\in L^{s}(U,\mu)$.
\end{enumerate}
\end{defn}
\begin{thmx}
\label{Main model theoretic result-Varieties}
Let $X$
be a smooth $\rats$-variety and let $h\in\mathcal{C}(X)$. Assume that $h_{F}\in L_{\mathrm{Loc}}^{1}(X(F))$
for any $F\in\mathrm{Loc}_{>}$. Then there exists $\epsilon>0$ such
that $h_{F}\in L_{\mathrm{Loc}}^{1+\epsilon}(X(F))$ for any $F\in\mathrm{Loc}_{>}$.
\end{thmx}
Theorems \ref{Main model theoretic result for vector spaces} and
\ref{Main model theoretic result-Varieties} can be generalized to
statements about families of functions, which can be used to deduce
Theorem \ref{main result for families}. These are Theorems \ref{Main model theoretic result for families of vector spaces}
and \ref{Model theoretic result- families of varieties}:
\begin{thmx}
\label{Main model theoretic result for families of vector spaces}
Let $h\in\mathcal{C}(\mathbb{A}_{\rats}^{n}\times Y)$ be a family
of motivic functions parameterized by a $\rats$-variety $Y$, and
assume that for any $F\in\mathrm{Loc}_{>}$ we have $h_{F}|_{F^{n}\times\{y\}}\in L^{1}(F^{n})$
for any $y\in Y(F)$. Then there exists $\epsilon>0$, such that for
any $F\in\mathrm{Loc}_{>}$ we have $h_{F}|_{F^{n}\times\{y\}}\in L^{1+\epsilon}(F^{n})$,
for any $y\in Y(F)$.
\end{thmx}
\begin{thmx}
\label{Model theoretic result- families of varieties}
Let $\varphi:X\rightarrow Y$ be a smooth morphism of smooth algebraic
$\rats$-varieties. Let $h\in\mathcal{C}(X)$ be a motivic function,
and assume that for any $F\in\mathrm{Loc}_{>}$ we have $h_{F}|_{X_{y}(F)}\in L_{\mathrm{Loc}}^{1}(X_{y}(F))$
for any $y\in Y(F)$. Then there exists $\epsilon>0$, such that for
any $F\in\mathrm{Loc}_{>}$ we have $h_{F}|_{X_{y}(F)}\in L_{\mathrm{Loc}}^{1+\epsilon}(X_{y}(F))$
for any $y\in Y(F)$.
\end{thmx}
\subsection{Further discussion of the main results}
In \cite{GH} we proved Theorems \ref{Main result}, \ref{main result for families}
and \ref{FRS on complexity} in the case where $G$ is a vector space.
The proof of Theorem \ref{Main result} can be divided into four parts:
\begin{enumerate}
\item Reduction to the case when $K=\rats$ (Proposition \ref{prop:reduction to Q},
cf. \cite[Section 6]{GH}).
\item Reduction to an analytic statement (Proposition \ref{prop:reduction to an analytic},
cf. \cite[Proposition 3.16]{GH}).
\item Reduction to a model theoretic statement:
\begin{enumerate}
\item Reduction of (2) to Theorem \ref{Main model theoretic result-Varieties}.
\item Further reduction to Theorem \ref{Main model theoretic result for vector spaces}.
\end{enumerate}
\item Proof of Theorem \ref{Main model theoretic result for vector spaces}
(the stronger Theorem \ref{Main model theoretic result for families of vector spaces}
is proved in Section \ref{sec:Main-analytic-result}).
\end{enumerate}
The proof of the first two parts is essentially the same as in \cite{GH}.
Let $\mu=\{\mu_{F}\}_{F\in\mathrm{Loc}}$ be a motivic measure on
$X$ such that $\mu_{F}$ is smooth, non-negative and supported on
$X(\mathcal{O}_{F})$ for every $F\in\mathrm{Loc}$. Such a measure
exists by \cite[Proposition 3.14]{GH}. The reduction to Proposition
\ref{prop:reduction to an analytic} implies that in order to deduce
Theorem \ref{Main result}, we need to find $N\in\mathbb{N}$, such
that for $F\in\mathrm{Loc}_{>}$ the measure $\varphi_{*}^{*N}(\mu_{F}\times\ldots\times\mu_{F})$
has continuous density with respect to the normalized Haar measure
on $G(\mathcal{O}_{F})$.
The difference between this paper and \cite{GH} lies in (3) and (4).
In \cite{GH}, the decay properties of the Fourier transform of $\varphi_{*}(\mu_{F})$
were studied (\cite[Theorem 5.2]{GH}), and were used to deduce that
after sufficiently many self-convolutions we obtain a measure with
continuous density (\cite[Corollary 5.3]{GH}). A key ingredient in
the proof was the fact that the Fourier transform is well behaved
with respect to motivic functions\footnote{Furthermore, Cluckers and Loeser formulated the commutative Fourier
transform in a motivic language, and introduced a class of motivic
exponential functions, which is preserved under Fourier transform,
see \cite[Section 7]{CL10} and \cite[Section 3.4]{CH18}.}.
If one wishes to use the line of proof of \cite{GH} in the general
case, then a non-commutative Fourier transform must be used and this
adds a serious complication. Due to this issue, we take a different
approach, showing that given a motivic measure $\sigma=\{\sigma_{F}\}_{F\in\mathrm{Loc}}$
on $G$, such that $\sigma_{F}$ is supported on $G(\mathcal{O}_{F})$
and has an $L^{1}$-density with respect to the normalized Haar measure
on $G(\mathcal{O}_{F})$, then there exists $\epsilon>0$ such that
$\sigma_{F}$ has $L^{1+\epsilon}$-density for any $F\in\mathrm{Loc}_{>}$.
This is Corollary \ref{Cor:Main model theoretic result for algebraic groups}
and it immediately follows from Theorem \ref{Main model theoretic result-Varieties}.
Taking $\sigma:=\varphi_{*}(\mu)$ for $\mu$ as above, and applying
Young's convolution inequality yields the existence of an $N(\epsilon)\in\nats$
such that $\varphi_{*}^{*N(\epsilon)}(\mu_{F}\times\ldots\times\mu_{F})$
has continuous density as required.
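One admissible choice of $N(\epsilon)$ can be extracted as follows (a routine exponent bookkeeping, with respect to the normalized Haar measure on the compact group $G(\mathcal{O}_{F})$): Young's convolution inequality gives $\|f*g\|_{r}\leq\|f\|_{p}\|g\|_{q}$ whenever $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}-1$, so for a density $f\in L^{1+\epsilon}$,

```latex
f^{*k}\in L^{r_{k}},
\qquad
\frac{1}{r_{k}}=\frac{k}{1+\epsilon}-(k-1)=1-\frac{k\epsilon}{1+\epsilon},
```

hence $f^{*k}\in L^{2}$ as soon as $k\geq\frac{1+\epsilon}{2\epsilon}$, and by Lemma \ref{lem: continuous function after enough convolutions} one may then take $N(\epsilon)=2\left\lceil \frac{1+\epsilon}{2\epsilon}\right\rceil$.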
Since any smooth variety locally admits an \'etale morphism to
an affine space, and since \'etale morphisms preserve the
$L_{\mathrm{Loc}}^{1+\epsilon}$ property of functions on $F$-analytic
manifolds (see Lemma \ref{lem:etale preserves Lp}), Theorem
\ref{Main model theoretic result-Varieties} can be reduced
to an analogous claim about vector spaces, i.e. Theorem \ref{Main model theoretic result for vector spaces}.
\begin{rem}
Note that after the reduction to Theorem \ref{Main model theoretic result for vector spaces},
we are again in the realm of vector spaces, for which it is tempting
to use the results from \cite[Theorem 5.2]{GH}. In other words, a
possible naive approach for proving Theorem \ref{Main model theoretic result for vector spaces}
is to use \cite[Theorem 5.2]{GH} to show that for functions $h$ as in
Theorem \ref{Main model theoretic result for vector spaces} which
are also compactly supported, the Fourier transform
$\left|\mathcal{F}(h_{F})(y)\right|$ decays faster than $\left|y\right|{}^{\alpha}$
for some $\alpha<0$, and then to reduce to the following question:
\begin{question}
\label{que:L1+epsilon}Let $F$ be a local field, $h$ be a compactly
supported, $L^{1}$ function on $F^{n}$, whose Fourier transform
$\left|\mathcal{F}(h)(y)\right|$ decays faster than $\left|y\right|{}^{\alpha}$
for some $\alpha<0$. Is there an $\epsilon>0$ such that $h$ is
$L^{1+\epsilon}$?
\end{question}
A counterexample to this question is given in Appendix \ref{Appendix: L1+epsilon};
in particular, the above naive approach fails.
\end{rem}
\subsubsection{Discussion of the model-theoretic results}
In \cite{Igu74,Igu75}, it was shown that for any polynomial $h\in\ints_{p}[x_{1},...,x_{n}]$
and $0<s\in\reals$, the Igusa zeta function
\begin{equation}
Z_{h}(s,p):=\int_{\Zp^{n}}\left|h(x)\right|_{p}^{s}dx\label{eq:(1.1)}
\end{equation}
is a rational function in $p^{-s}$ for any $p$. In \cite{Den84,Pas89,Mac90,DL01},
variations of the above integral were studied, and the theorem on
the rationality of (\ref{eq:(1.1)}) was generalized for an integral
of the form
\[
Z_{h,\psi}(s,p)=\int_{W_{p}(\psi)}\left|h(x)\right|_{p}^{s}dx,
\]
where $\psi$ is an $\mathcal{L}_{\mathrm{DP}}$-formula and $W_{p}(\psi)=\{x\in\Qp^{n}:\psi(x)\text{ holds}\}$.
In \cite{BDOP13}, integrals of the form
\[
Z_{f,\psi}(s,F):=\int_{W_{F}(\psi)}\left|f_F(x)\right|_{F}^{s}dx,
\]
were investigated, where $f=\{f_{F}:F^{n}\rightarrow F\}_{F\in\mathrm{Loc}_{>}}$
is now an $\mathcal{L}_{\mathrm{DP}}$-definable function, $\psi$
is an $\mathcal{L}_{\mathrm{DP}}$-formula and $W_{F}(\psi)=\{x\in F^{n}:\psi(x)\text{ holds}\}$.
In Theorems \ref{Main model theoretic result for vector spaces} and
\ref{Main model theoretic result for families of vector spaces} we
investigate integrals of the form $I_{h}(s,F):=\int_{F^{n}}\left|h_{F}(x)\right|^{s}dx$,
where $h=\{h_{F}:F^{n}\rightarrow\reals\}$ is an $\Ldp$-motivic function
(note that we take the usual absolute value $| \cdot|$ on
$\reals$). This is a generalization of the last case, as $Z_{f,\psi}(s,F)=I_{h}(s,F)$,
with $h_{F}=\left|f_F(x)\right|_{F}\cdot1_{W_{F}(\psi)}$ and $f$ an $\Ldp$-definable function as before.
We want to find $\epsilon>0$ such that if $I_{h}(1,F)<\infty$
(i.e. $h_{F}$ is absolutely integrable) for $F\in\mathrm{Loc}_{>}$,
then $I_{h}(1+\epsilon,F)<\infty$ for $F\in\mathrm{Loc}_{>}$.
For $h_{F}=\left|f_F(x)\right|_{F}\cdot1_{W_{F}(\psi)}$ this can be deduced
from \cite[Theorem B]{BDOP13}. For $h$ motivic (Definition \ref{def:motivic function}),
some complications arise; $h$ is now a finite sum of terms $h=\sum\limits _{i=1}^{N}h_{i}$,
where each $h_{i}$ need not be absolutely integrable (at
least globally). In addition, each $h_{i}$ has a more complicated
description than a definable function. These complications are dealt
with in Section \ref{sec:Main-analytic-result}. The main idea in
both the definable and the motivic case is to reduce $I_{h}(s,F)$
to certain exponential sums (the sums in the motivic case involve
polynomials as well), whose convergence is easier to analyze.
Let us explain the method for $h=\{\left|f_F(x)\right|_{F}\}_{F \in \Loc_{>}}$, with $f$ definable. Let $q_{F}$
be the size of the residue field $k_{F}$ of $F$. We can write $I_{h}(s,F)$
as a sum over the level sets of $h_{F}$, that is $I_{h}(s,F)=\underset{k\in\ints}{\sum}\mu_{k,F}\cdot q_{F}^{-ks}$,
where $\mu_{k,F}$ is the measure of the level set $\{x \in F^n : \val(f_F(x))=k\}$.
The convergence of $I_{h}(s,F)$ then depends on the asymptotic
behavior of $\mu_{k,F}$ with respect to $k$. It can be shown (e.g.
\cite[proof of Theorem B]{BDOP13} or \cite[Theorem 5.1]{Pas89})
that each $\mu_{k,F}$ is a certain exponential sum, so that $I_{h}(s,F)$
can be written as a finite sum of expressions of the form
\begin{equation}
q_{F}^{-n}\sum_{\eta\in k_{F}^{r}}\sum_{\begin{array}{c}
l_{1},{\ldots},l_{n},k\in\mathbb{Z}\\
\sigma(\eta,l_{1},{\ldots},l_{n},k)
\end{array}}q_{F}^{-ks-l_{1}-{\ldots}-l_{n}},\label{eq:1.2}
\end{equation}
where $\sigma$ is an $\mathcal{L}_{\mathrm{DP}}$-formula. Using
elimination of quantifiers, and the rectilinearization Theorem (\cite[Theorem 2.1.9]{CGH14}),
we can write the expression appearing in (\ref{eq:1.2}) as a sum
of finitely many terms of the form
\begin{equation}
\sum_{(e_{1},{\ldots},e_{l})\in\mathbb{N}^{l}}q_{F}^{b_{1}(s)e_{1}+\ldots+b_{l}(s)e_{l}},\label{eq:1.3}
\end{equation}
where $b_{t}(s)$ are numbers depending on $s$. It can then be verified
that the set of $s\in\reals$ such that (\ref{eq:1.3}) is summable
is open, and does not depend on $F$, as required. In Section \ref{sec:Main-analytic-result}
we extend this result to the class of $\Ldp$-motivic functions, by
proving the more general Theorem \ref{Main model theoretic result for families of vector spaces}.
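The level-set decomposition can be watched numerically in the simplest case (the choices $f(x)=x^{2}$, $p=5$ below are purely illustrative): the level set $\{\val(x^{2})=2j\}$ has measure $p^{-j}(1-p^{-1})$, so $Z_{x^{2}}(s,p)$ is the geometric series $(1-p^{-1})/(1-p^{-2s-1})$, which a truncation of $\Zp$ to $\ints/p^{K}\ints$ reproduces:

```python
# Level-set computation of Z_{x^2}(s, p) = \int_{Z_p} |x^2|_p^s dx,
# approximated over Z/p^K (illustrative choices f(x) = x^2, p = 5, s = 1).
p, K, s = 5, 6, 1.0

def val(x):
    # truncated p-adic valuation on Z/p^K, with val(0) capped at K
    if x == 0:
        return K
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

# I(s) = sum over level sets mu_k * p^{-k s}, here with k = val(x^2) = 2 val(x)
approx = sum(p ** (-K) * p ** (-2 * val(x) * s) for x in range(p ** K))
exact = (1 - 1 / p) / (1 - p ** (-2 * s - 1))  # closed form: a geometric series
assert abs(approx - exact) < 1e-6
print(approx, exact)
```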
\subsection{Structure of the paper}
In Section \ref{sec:preliminaries} we recall relevant preliminary
material. In Section \ref{sec:Properties of convolutions of morphisms}
we prove Theorem \ref{thm: singularity properties obtained after convolution}.
In Section \ref{sec:Main-analytic-result} we prove Theorems \ref{Main model theoretic result for vector spaces},
\ref{Main model theoretic result-Varieties}, \ref{Main model theoretic result for families of vector spaces}
and \ref{Model theoretic result- families of varieties}. In Section
\ref{sec:Proof-of-the main result} we prove Theorems \ref{Main result},
\ref{main result for families} and \ref{FRS on complexity}.
In Appendix \ref{Appendix: L1+epsilon} we answer Question \ref{que:L1+epsilon}.
\subsection{Conventions}
Throughout the paper we use the following conventions:
\begin{itemize}
\item Unless explicitly stated otherwise, $K$ is a field of characteristic
$0$ and $F$ is a non-Archimedean local field
whose ring of integers is $\mathcal{O}_{F}$.
\item For any $K$-scheme $X$ and $x\in X$ we denote by $k(\{x\})$
its function field.
\item For a morphism $\varphi:X\rightarrow Y$ of $K$-schemes, the scheme
theoretic fiber at $y\in Y$ is denoted by either $X_{y,\varphi}$
or $\spec(k(\{y\}))\times_{Y}X$.
\item For a field extension $K'/K$ and a $K$-variety $X$ (resp $K$-morphism
$\varphi:X\rightarrow Y$), we denote the base change of $X$ (resp.
$\varphi$) by $X_{K'}:=X\times_{\spec(K)}\spec(K')$ (resp. $\varphi_{K'}:X_{K'}\rightarrow Y_{K'}$).
\item For a $K$-morphism $\varphi:X\rightarrow Y$ between $K$-varieties
$X$ and $Y$, we denote by $X^{\mathrm{sm}}$ (resp. $X^{\mathrm{ns}}$)
the smooth (resp. non-smooth) locus of $X$, and by $X^{\mathrm{sm},\varphi}$ (resp. $X^{\mathrm{ns},\varphi}$)
the smooth (resp. non-smooth) locus of $\varphi$ in $X$.
\item We use $F\in\mathrm{Loc}_{>}$ to denote ``$F\in\mathrm{Loc}$
with large enough residual characteristic''.
\end{itemize}
\subsection{Acknowledgements}
We thank Raf Cluckers, Ehud Hrushovski, Moshe Kamenski, Gady Kozma and Dan Mikulincer for useful conversations.
We thank Shai Shechter for both useful conversations and for reading a preliminary version of this paper.
A large part of this work was carried out while visiting Nir Avni at the mathematics department at Northwestern
University; we thank the department and Nir for their hospitality.
We also wish to thank Nir for many helpful discussions, and for raising the question this paper
answers together with Rami Aizenbud.
Finally, it is a pleasure to thank our advisor
Rami Aizenbud for numerous useful conversations, for his guidance,
and for suggesting this question together with Nir.
Both authors were partially supported by ISF grant 687/13, BSF grant 2012247 and a Minerva
foundation grant.
\section{Preliminaries \label{sec:preliminaries}}
\subsection{Non-commutative Fourier transform}
In this subsection we follow \cite[Sections 2.3, 4.1, 4.2]{App14}.
Let $G$ be a compact Hausdorff second countable group and let $\hat{G}$
be the set of equivalence classes of irreducible representations of
$G$. Define the set $\mathcal{M}(\hat{G}):=\bigcup_{\pi\in\hat{G}}\mathrm{End}_{\complex}(\pi)$.
We say that a map $T:\hat{G}\rightarrow\mathcal{M}(\hat{G})$ is \textit{compatible}
if $T(\pi)\in\mathrm{End}_{\complex}(\pi)$ for any $\pi\in\hat{G}$.
We denote the space of compatible mappings by $\mathcal{L}(\hat{G})$.
The \textit{non-commutative Fourier transform} is the map $\mathcal{F}:L^{1}(G)\rightarrow\mathcal{L}(\hat{G})$,
defined by
\[
\mathcal{F}(f)(\pi)=\int_{G}f(g)\cdot\pi(g^{-1})dg,
\]
for each $\pi\in\hat{G}$, where $dg$ is the normalized Haar measure.
For $1\leq p<\infty$, we set $\mathcal{H}_{p}(\hat{G})$ to be the
linear space of all $T\in\mathcal{L}(\hat{G})$ for which
\[
\left\Vert T\right\Vert _{p}
:=\left(\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\cdot\left\Vert T(\pi)\right\Vert _{\mathrm{Sch},p}^{p}\right)^{\frac{1}{p}}<\infty,
\]
where $\left\Vert T(\pi)\right\Vert _{\mathrm{Sch},p}:=\left(\mathrm{trace}(\left(T(\pi)T(\pi)^{*}\right)^{p/2})\right)^{\frac{1}{p}}$
is the Schatten $p$-norm. This gives $\mathcal{H}_{p}(\hat{G})$
a structure of a Banach space. In particular, $\left\Vert T\right\Vert _{2}^{2}=\underset{\pi\in\hat{G}}{\sum}\mathrm{dim}(\pi)\cdot\left\Vert T(\pi)\right\Vert _{\mathrm{HS}}^{2}<\infty$,
where $\| \cdot \|_{\mathrm{HS}}$ is the Hilbert-Schmidt
norm. This gives $\mathcal{H}_{2}(\hat{G})$ a structure of a complex
Hilbert space with an inner product
\[
\langle T_{1},T_{2}\rangle:=\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\cdot\langle T_{1}(\pi),T_{2}(\pi)\rangle_{\mathrm{HS}}.
\]
The restriction of $\mathcal{F}$ to $L^{2}(G)$ has the following
properties:
\begin{thm}[{See e.g. \cite[Theorem 2.3.1]{App14}}]
~\label{Proposition Fourier L2}
\begin{enumerate}
\item (Fourier expansion) For all $f\in L^{2}(G)$, we have
\[
f(g)=\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\cdot\mathrm{trace}(\mathcal{F}(f)(\pi)\pi(g))
\]
\item (Parseval-Plancherel identity) The operator $\mathcal{F}$ is an isometry
from $L^{2}(G)$ into $\mathcal{H}_{2}(\hat{G})$ so that for all
$f,f_{1},f_{2}\in L^{2}(G)$,
\[
\int_{G}\left|f(g)\right|^{2}dg=\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\left\Vert \mathcal{F}(f)(\pi)\right\Vert _{\mathrm{HS}}^{2},
\]
and
\[
\int_{G}f_{1}(g)\cdot\overline{f_{2}(g)}dg=\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\langle\mathcal{F}(f_{1})(\pi),\mathcal{F}(f_{2})(\pi)\rangle_{\mathrm{HS}}.
\]
\end{enumerate}
\end{thm}
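The Parseval-Plancherel identity can be checked numerically on the smallest non-abelian compact group (the realization of $S_{3}$ below by rotation and reflection matrices, and the random test function, are illustrative choices):

```python
import numpy as np

# Plancherel on the finite group S_3, realized as the dihedral group of
# order 6, with normalized Haar (= counting/6) measure. Irreducible
# representations: trivial, sign, and the standard 2-dimensional one.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
S = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection
G = [np.eye(2), R, R @ R, S, S @ R, S @ R @ R]   # the 2-dim rep of all 6 elements

rng = np.random.default_rng(0)
f = rng.standard_normal(6)               # a random real function on the group

# Fourier coefficients F(f)(pi) = (1/6) sum f(g) pi(g^{-1}); for the orthogonal
# 2-dim rep, pi(g^{-1}) = pi(g)^T, and the sign character is det(pi_2(g)).
hat_triv = f.mean()
hat_sign = np.mean([f[i] * np.linalg.det(G[i]) for i in range(6)])
hat_std = sum(f[i] * G[i].T for i in range(6)) / 6

lhs = np.mean(f ** 2)                    # \int_G |f|^2 dg
rhs = hat_triv ** 2 + hat_sign ** 2 + 2 * np.sum(hat_std ** 2)  # sum dim(pi) ||.||_HS^2
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```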
Here are some additional properties of the Fourier transform:
\begin{thm}[{\cite[Theorem 2.3.2]{App14} and \cite[Section 2.14]{Edw72}}]
~\label{Fourier transform of functions}
\begin{enumerate}
\item If $1\leq p\leq2$ and $f\in L^{p}(G)$, then $\mathcal{F}(f)\in\mathcal{H}_{q}(\hat{G})$,
where $\frac{1}{p}+\frac{1}{q}=1$ and $\left\Vert \mathcal{F}(f)\right\Vert _{q}\leq\left\Vert f\right\Vert _{p}$.
\item Let $\mathcal{A}(G):=\{f\in L^{1}(G):\left\Vert \mathcal{F}(f)\right\Vert _{1}<\infty\}$.
Then $\mathcal{A}(G)$ consists of continuous functions, and
is a commutative Banach algebra with respect to convolution
$f_{1}*f_{2}(x)=\int_{G}f_{1}(g)f_{2}(g^{-1}x)dg$.
\end{enumerate}
\end{thm}
The Fourier transform can be defined for probability measures as well.
For a probability measure $\mu$ and any $\pi\in\hat{G}$ we define
\[
\mathcal{F}(\mu)(\pi)(v):=\int_{G}\pi(g^{-1})vd\mu.
\]
Notice that if $\mu$ is absolutely continuous with respect to $dg$
with density $f_{\mu}$ then $\mathcal{F}(\mu)(\pi)=\mathcal{F}(f_{\mu})(\pi)$.
\begin{prop}
\label{prop convolution+Fourier} Let $\mu_{1}$ and $\mu_{2}$ be
probability measures on $G$, and let $\pi\in\hat{G}$. Then
\[
\mathcal{F}(\mu_{1}*\mu_{2})(\pi)=\mathcal{F}(\mu_{1})(\pi)\cdot\mathcal{F}(\mu_{2})(\pi).
\]
\end{prop}
Finally, the spaces $\mathcal{H}_{p}(\hat{G})$ satisfy the classical
H\"older's inequality, as well as its generalization:
\begin{prop}[{Generalization of H\"older's inequality}]
\label{prop:(Generalization-of-Holder} Let $r\in(0,\infty]$ and
let $p_{1},{\ldots},p_{n}\in(0,\infty]$ such that $\sum_{k=1}^{n}\frac{1}{p_{k}}=\frac{1}{r}$.
Then for any collection $\{T_{k}\}_{k=1}^{n}$, with $T_{k}\in\mathcal{H}_{p_{k}}(\hat{G})$,
we have $\prod_{k=1}^{n}T_{k}\in\mathcal{H}_{r}(\hat{G})$ and $\left\Vert \prod_{k=1}^{n}T_{k}\right\Vert _{r}\leq\prod_{k=1}^{n}\left\Vert T_{k}\right\Vert _{p_{k}}$.
\end{prop}
\begin{proof}
The Schatten norms satisfy (a generalized version of) H\"older's inequality,
that is, for any $A_{1},A_{2},...,A_{n}\in\mathrm{End}_{\complex}(\pi)$
we have $\left\Vert \prod_{k=1}^{n}A_{k}\right\Vert _{\mathrm{Sch},r}\leq\prod_{k=1}^{n}\left\Vert A_{k}\right\Vert _{\mathrm{Sch},p_{k}}$.
Hence,
\begin{align*}
\left\Vert \prod_{k=1}^{n}T_{k}\right\Vert _{r} & =\left(\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\cdot\left\Vert \prod_{k=1}^{n}T_{k}(\pi)\right\Vert _{\mathrm{Sch},r}^{r}\right)^{\frac{1}{r}}\\
& \leq\left(\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\cdot\left(\prod_{k=1}^{n}\left\Vert T_{k}(\pi)\right\Vert _{\mathrm{Sch},p_{k}}\right)^{r}\right)^{\frac{1}{r}}\\
& \leq\prod_{k=1}^{n}\left(\sum_{\pi\in\hat{G}}\mathrm{dim}(\pi)\cdot\left\Vert T_{k}(\pi)\right\Vert _{\mathrm{Sch},p_{k}}^{p_{k}}\right)^{1/p_{k}}=\prod_{k=1}^{n}\left\Vert T_{k}\right\Vert _{p_{k}}<\infty,
\end{align*}
where the second inequality follows from the generalized H\"older
inequality for $L^{p}(\hat{G},v)$, with respect to the measure $v(A)=\underset{\pi\in A}{\sum}\mathrm{dim}(\pi)$
for $A\subseteq\hat{G}$ (instead of the usual counting measure).
\end{proof}
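The Schatten-norm H\"older inequality invoked at the start of this proof can be sanity-checked numerically (the matrix size and the exponents $p=3$, $q=6$, $r=2$ are arbitrary illustrative choices):

```python
import numpy as np

# Numerical check of ||AB||_{Sch,r} <= ||A||_{Sch,p} ||B||_{Sch,q}
# with 1/r = 1/p + 1/q, on random 4x4 complex matrices.
def schatten(A, p):
    sv = np.linalg.svd(A, compute_uv=False)   # singular values of A
    return (sv ** p).sum() ** (1 / p)         # (trace (A A*)^{p/2})^{1/p}

rng = np.random.default_rng(1)
p, q = 3.0, 6.0
r = 1 / (1 / p + 1 / q)                       # here r = 2
for _ in range(100):
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    assert schatten(A @ B, r) <= schatten(A, p) * schatten(B, q) + 1e-9
print("Schatten Hoelder inequality holds on all samples")
```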
\begin{lem}
\label{lem: continuous function after enough convolutions} Let $G$
be a compact group and let $f\in L^{s}(G)$ for $1<s<\infty$. Then
there exists $N(s)\in\nats$ such that the $N(s)$-th convolution
power $f^{*N(s)}$ of $f$ is continuous.
\end{lem}
\begin{proof}
By Young's convolution inequality, there exists $M(s)$ such that
$f^{*M(s)}\in L^{2}(G)$. Thus $\mathcal{F}(f^{*M(s)})\in\mathcal{H}_{2}(\hat{G})$.
Now, for $N(s)=2M(s)$ we have by Proposition \ref{prop convolution+Fourier}
\[
\mathcal{F}(f^{*N(s)})=\mathcal{F}(f^{*M(s)})\cdot\mathcal{F}(f^{*M(s)})\in\mathcal{H}_{1}(\hat{G}),
\]
and hence by Theorem \ref{Fourier transform of functions} the function $f^{*N(s)}$
is continuous.
\end{proof}
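The smoothing mechanism behind this lemma can be watched numerically on the simplest compact group (the choices $G=\ints/16\ints$ with normalized counting measure, and the cosine density, are illustrative):

```python
import numpy as np

# Convolution powers of a density on Z/N (normalized counting measure)
# flatten towards the constant density 1, i.e. towards a continuous
# (here: nearly uniform) function.
N = 16
x = np.arange(N)
f = 1 + np.cos(2 * np.pi * x / N)   # a probability density w.r.t. counting/N

def conv(a, b):
    # normalized convolution: (a*b)(k) = (1/N) sum_y a(y) b(k - y)
    return np.array([sum(a[y] * b[(k - y) % N] for y in range(N)) / N
                     for k in range(N)])

g = f.copy()
for _ in range(4):
    g = conv(g, g)                  # four squarings: g = f^{*16}
# the nonconstant Fourier mode of f has coefficient 1/2, so f^{*16}
# deviates from the constant 1 by 2^{-15} in sup norm
assert np.max(np.abs(g - 1)) < 1e-3
assert abs(g.mean() - 1) < 1e-9     # total mass is preserved
print(np.max(np.abs(g - 1)))
```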
\subsection{\label{subsec:The-Denef-Pas-language,}The Presburger language, the
Denef-Pas language, and motivic functions }
The Presburger language, denoted
\[
\mathcal{L}_{\mathrm{Pres}}=(+,-,\leq,\{\equiv_{\mathrm{mod}~n}\}_{n>0},0,1)
\]
consists of the language of ordered abelian groups together with constants
$0,1$ and a family of binary relations $\{\equiv_{\mathrm{mod}~n}\}_{n>0}$
of congruence modulo $n$. In this paper we consider only structures
isomorphic to $\ints$.
\begin{defn}[{See \cite[Definition 1]{Clu03} and \cite[Section 4.1]{CL08}}]
\label{def:-Linear function} Let $S\subseteq\ints^{n}$ and $X\subset S\times\ints^{m}$
be $\mathcal{L}_{\mathrm{Pres}}$-definable sets. We call a definable
function $f:X\to\ints$ \textit{$S$-linear} if there is an $\mathcal{L}_{\mathrm{Pres}}$-definable
function $\gamma:S\rightarrow\ints$ and integers $a_{i}$ and $0\leq c_{i}<n_{i}$
for $i=1,...,m$ such that $x_{i}-c_{i}\equiv0\text{ }\mathrm{mod}\text{ }n_{i}$
and $f(s,x_{1},...,x_{m})=\sum\limits _{i=1}^{m}a_{i}(\frac{x_{i}-c_{i}}{n_{i}})+\gamma(s)$.
If $S$ is a point (and hence $\gamma$ is a constant), we
say that $f$ is \textit{linear}.
\end{defn}
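A toy instance of this definition can be checked directly (the function and the constants below are our own illustrative choices, with $S$ a point):

```python
# The Presburger-definable function f(x) = floor(x/3) on Z is not linear,
# but is linear on each congruence class x = c (mod 3), where it equals
# (x - c)/3, i.e. a_1 = 1, n_1 = 3, c_1 = c, gamma = 0 in the definition.
def f(x):
    return x // 3          # Python's // is floor division, also for negatives

for x in range(-50, 50):
    c = x % 3              # the constant 0 <= c_1 < n_1 on this class
    assert f(x) == (x - c) // 3
    assert (x - c) % 3 == 0            # (x - c)/3 is indeed an integer
print("f is linear on each congruence class mod 3")
```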
\begin{thm}[{Presburger cell decomposition \cite[Theorem 1]{Clu03}}]
\label{Presburger Cell decomposition} Let $S\subseteq\ints^{n}$,
$X\subset S\times\ints^{m}$ and $f:X\to\ints$ be $\mathcal{L}_{\mathrm{Pres}}$-definable.
Then there exists a finite partition $\mathcal{P}$ of $X$ into $S$-cells
(see \cite[Definition 4.3.1]{CL08}), such that the restriction $f|_{A}:A\to\ints$
is $S$-linear for each cell $A\in\mathcal{P}$.
\end{thm}
The Denef-Pas language $\Ldp=(\mathcal{L}_{\mathrm{Val}}, \mathcal{L}_{\mathrm{Res}}, \mathcal{L}_{\mathrm{Pres}},\text{\ensuremath{\val}, \ensuremath{\ac}})$
is a first order language with three sorts of variables:
\begin{itemize}
\item The valued field sort $\VF$ endowed with the language of rings $\mathcal{L}_{\mathrm{Val}}=(+,-,\cdot,0,1)$.
\item The residue field sort $\RF$ endowed with the language of rings $\mathcal{L}_{\mathrm{Res}}=(+,-,\cdot,0,1)$.
\item The value group sort $\VG$ (which we just call $\ints$), endowed
with the Presburger language $\mathcal{L}_{\mathrm{Pres}}=(+,-,\leq,\{\equiv_{\mathrm{mod}~n}\}_{n>0},0,1)$.
\item A function $\val:\VF\backslash\{0\}\rightarrow\VG$ for a valuation
map.
\item A function $\ac:\VF\rightarrow\RF$ for an angular component map.
\end{itemize}
Let $\mathrm{Loc}$ be the collection of all non-Archimedean local
fields and $\mathrm{Loc}_{M}$ be the set of $F\in\mathrm{Loc}$ such
that $F$ has residue field $k_{F}$ of characteristic larger than
$M$. We will use the notation $F\in\mathrm{Loc}_{>}$
to denote ``$F\in\mathrm{Loc}$ with large enough residual characteristic''.
For any $F\in\mathrm{Loc}$ and choice of a uniformizer $\pi$ of
$\mathcal{O}_{F}$,
the pair $(F,\pi)$ is naturally a structure of $\Ldp$.
Since our results are independent of the choice of a uniformizer, we omit it from our notation.
Therefore, given a formula $\phi$ in ${\Ldp}$, with $n_{1}$ free
valued field variables, $n_{2}$ free residue field variables and
$n_{3}$ free value group variables, we can naturally interpret it
in $F\in\mathrm{Loc}$, yielding a subset $\phi(F)\subseteq F^{n_{1}}\times k_{F}^{n_{2}}\times\mathbb{Z}^{n_{3}}$.
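For example, the standard interpretation in the $p$-adic numbers is as follows:
\begin{example}
Let $F=\rats_{p}$ with uniformizer $\pi=p$. Then $\val$ is the $p$-adic
valuation, and one may take $\ac(x)=xp^{-\val(x)}\text{ }\mathrm{mod}\text{ }p$
for $x\neq0$ (and $\ac(0)=0$). In particular, $\val(p^{3}u)=3$ and
$\ac(p^{3}u)=\bar{u}\in\mathbb{F}_{p}^{\times}$ for any unit $u\in\ints_{p}^{\times}$.
\end{example}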
\begin{defn}[{{See \cite[Definitions 2.3-2.6]{CGH16}}}]
\label{def:motivic function} Let $n_{1},n_{2},n_{3}$ and $M$ be
natural numbers.
\begin{enumerate}
\item A collection $X=(X_{F})_{F\in\mathrm{Loc}_{M}}$ of subsets $X_{F}\subseteq F^{n_{1}}\times k_{F}^{n_{2}}\times\mathbb{Z}^{n_{3}}$
is called a \textit{definable set} if there is an ${\Ldp}$-formula
$\phi$ and $M'\in\nats$ such that $X_{F}=\phi(F)$ for every $F\in\mathrm{Loc}_{M'}$.
\item Let $X$ and $Y$ be definable sets. A \textit{definable function}
is a collection $f=(f_{F})_{F\in\mathrm{Loc}_{M}}$ of functions $f_{F}:X_{F}\rightarrow Y_{F}$,
such that the collection of their graphs $\{\Gamma_{f_{F}}\}_{F\in\mathrm{Loc}_{M}}$
is a definable set.
\item Let $X$ be a definable set. A collection $h=(h_{F})_{F\in\mathrm{Loc}_{M}}$
of functions $h_{F}:X_{F}\rightarrow\mathbb{R}$ is called a \textit{motivic}
(or \textit{constructible}) function on $X$, if for $F\in\mathrm{Loc}_{M}$
it can be written in the following way (for every $x\in X_{F}$):
\[
h_{F}(x)=\sum_{i=1}^{N}|Y_{i,F,x}|q_{F}^{\alpha_{i,F}(x)}\left(\prod_{j=1}^{N'}\beta_{ij,F}(x)\right)\left(\prod_{j=1}^{N''}\frac{1}{1-q_{F}^{a_{ij}}}\right),
\]
where,
\begin{itemize}
\item $N,N'$ and $N''$ are integers and $a_{ij}$ are non-zero integers.
\item $\alpha_{i}:X\rightarrow\mathbb{Z}$ and $\beta_{ij}:X\rightarrow\mathbb{Z}$
are definable functions.
\item $Y_{i,F,x}=\{\xi\in k_{F}^{r_{i}}:(x,\xi)\in{Y}_{i,F}\}$ is the fiber
over $x$ where $Y_{i}\subseteq X\times\mathrm{RF}^{r_{i}}$ are definable
sets and $r_{i}\in\nats$.
\item The integer $q_{F}$ is the size of the residue field $k_{F}$.
\end{itemize}
\end{enumerate}
The set of motivic functions on a definable set $X$ forms a ring,
which we denote by $\mathcal{C}(X)$.
\end{defn}
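A basic example of a motivic function is the $p$-adic absolute value:
\begin{example}
The collection $h=(h_{F})_{F\in\mathrm{Loc}}$ defined by $h_{F}(x)=q_{F}^{-\val(x)}=\left|x\right|_{F}$
on the definable set $X_{F}=F\backslash\{0\}$ is a motivic function;
in the notation of Definition \ref{def:motivic function} one takes
$N=1$, $\alpha_{1}=-\val$, $r_{1}=0$ and $N'=N''=0$.
\end{example}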
The following lemma, which is an easy
variant of \cite[Lemma 2.1.8]{CGH14}, and Theorem \ref{thm:-(Uniform Rectilinearization)}
are used in the proof of the main analytic result, Theorem \ref{Main model theoretic result for families of vector spaces}.
\begin{lem}[{{cf. \cite[Lemma 2.1.8]{CGH14}}}]
\label{lem: convergence of sums} Let $h:\ints_{\geq0}^{m}\to\reals$
be a non-zero function of the form
\[
h(x_{1},...,x_{m}):=\sum_{i=1}^{N}q^{d_{i1}x_{1}+\ldots+d_{im}x_{m}}P_{i}(x_{1},\ldots,x_{m})
\]
where $q\in\reals_{>1}$, $d_{it}\in\reals$ and $P_{i}\in\reals[x_{1},\ldots,x_{m}]$.
Furthermore assume $N\in\nats$ is minimal.
Then
\[
\sum\limits _{(e_{1},\ldots,e_{m})\in\ints_{\geq0}^{m}}|h(e_{1},\ldots,e_{m})|<\infty
\]
if and only if $d_{it}<0$ for every $i$ and $t$.
\end{lem}
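The content of Lemma \ref{lem: convergence of sums} is already visible in the one-variable case:
\begin{example}
For $m=1$ and $h(x)=q^{dx}$, the sum $\sum_{e\geq0}q^{de}$ is a geometric
series, converging to $\frac{1}{1-q^{d}}$ precisely when $d<0$. Polynomial
factors do not affect convergence; for instance $\sum_{e\geq0}e\cdot q^{de}=\frac{q^{d}}{(1-q^{d})^{2}}<\infty$
for any $d<0$.
\end{example}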
\begin{thm}[{{Uniform rectilinearization, see \cite[Theorem 4.5.4]{CGH14}}}]
\label{thm:-(Uniform Rectilinearization)} Let $Y$ and $X\subseteq Y\times\ints^{m}$
be $\Ldp$-definable sets. Then there exist finitely many $\Ldp$-definable
sets $A_{i}\subset Y\times\ints^{m}$ and $B_{i}\subset Y\times\ints^{m}$
and $\Ldp$-definable isomorphisms $\rho_{i}:A_{i}\to B_{i}$ over
$Y$ such that for $F\in\mathrm{Loc}_{>}$ the following hold:
\begin{enumerate}
\item The sets $A_{i,F}$ are disjoint and their union equals $X_{F}$,
\item For every $i$, the function $\rho_{i,F}$ can be written as
\[
\rho_{i,F}(y,x_{1},...,x_{m})=(y,\alpha_{i,F}(x_{1},...,x_{m})+\beta_{i,F}(y)),
\]
with $\alpha_{i}$ $\mathcal{L}_{\mathrm{Pres}}$-linear, and $\beta_{i}$
an $\Ldp$-definable function from $Y$ to $\ints$.
\item For each $y\in Y_{F}$, the set $B_{i,F,y}$ is a set of the form
$\Lambda_{y}\times\ints_{\geq0}^{l_{i}}$ for a finite set $\Lambda_{y}\subset\ints_{\geq0}^{m-l_{i}}$
depending on $y$, with an integer $l_{i}\geq0$ depending only on
$i$.
\end{enumerate}
\end{thm}
\subsection{The (FRS) property}
Recall that given a variety $X$, a resolution of singularities is
a proper birational map $p:\widetilde{X}\to X$ from a smooth variety
$\widetilde{X}$ to $X$.
\begin{defn}
\label{defn: rational sings} We say that $X$ has \textit{rational
singularities} if for any resolution of singularities $p:\widetilde{X}\to X$
the natural morphism $\mathcal{O}_{X}\to Rp_{*}(\mathcal{O}_{\widetilde{X}})$
is a quasi-isomorphism where $Rp_{*}(-)$ is the higher direct image
functor.
\end{defn}
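Every smooth variety has rational singularities; two classical singular examples are the following:
\begin{example}
The quadric cone $\{xy=z^{2}\}\subset\mathbb{A}^{3}$ has rational
singularities. In contrast, the cone $\{x^{3}+y^{3}+z^{3}=0\}\subset\mathbb{A}^{3}$
over a smooth plane cubic does not have rational singularities.
\end{example}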
\begin{defn}[{{\cite[Section 1.2.1, Definition II]{AA16}}}]
\label{def:FRS} Let $\varphi:X\to Y$ be a morphism between smooth
$K$-varieties $X$ and $Y$.
\begin{enumerate}
\item We say that $\varphi:X\to Y$ is (FRS) at $x\in X(K)$ if it is flat
at $x$, and there exists an open $x\in U\subseteq X$ such that $U\times_{Y}\{\varphi(x)\}$
is reduced and has rational singularities.
\item We say that $\varphi:X\to Y$ is (FRS) if it is flat and it is (FRS)
at $x$ for all $x\in X(\overline{K})$.
\end{enumerate}
\end{defn}
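Smooth morphisms are (FRS), while flatness alone is not enough:
\begin{example}
The map $\varphi:\mathbb{A}^{1}\to\mathbb{A}^{1}$ given by $\varphi(x)=x^{2}$
is flat, but it is not (FRS) at $x=0$, since the fiber $\{x^{2}=0\}$
over $0$ is non-reduced.
\end{example}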
The (FRS) property has the following analytic characterization:
\begin{thm}[{{\cite[Theorem 3.4]{AA16}}}]
\label{Analytic condition for (FRS)} Let $\varphi:X\rightarrow Y$
be a map between smooth algebraic varieties defined over a finitely
generated field $K$ of characteristic $0$, and let $x\in X(K)$.
Then the following conditions are equivalent:
\begin{enumerate}
\item $\varphi$ is (FRS) at $x$.
\item There exists a Zariski open neighborhood $x\in U\subseteq X$ such
that for any $K\subseteq F\in\mathrm{Loc}$ and any smooth, compactly
supported measure $\mu$ on $U(F)$, the measure $(\varphi|_{U(F)})_{*}(\mu)$
has continuous density.
\item For any finite extension $K'/K$, there exists $K'\subseteq F\in\mathrm{Loc}$
and a non-negative smooth, compactly supported measure $\mu$ on $X(F)$
that does not vanish at $x$ such that $(\varphi|_{X(F)})_{*}(\mu)$
has continuous density.
\end{enumerate}
\end{thm}
\section{Properties of convolutions of morphisms \label{sec:Properties of convolutions of morphisms} }
In this section we discuss properties of the convolution operation
(as defined in Definition \ref{def:convolution}). We first recall
the following proposition from \cite{GH}, which is a consequence
of Proposition \ref{prop:convpreservesgoodproperties}.
\begin{prop}[{{\cite[Corollary 3.3]{GH}}}]
\label{prop: properties preserved under convolution} Let $X$ and
$Y$ be smooth algebraic $K$-varieties, $G$ be an algebraic $K$-group
and let $S$ be any of the following properties of morphisms:
\begin{enumerate}
\item Smoothness.
\item (FRS).
\item Flatness (or flatness with reduced fibers).
\item Dominance.
\end{enumerate}
Suppose $\varphi:X\to G$ has property $S$ and $\psi:Y\to G$ is
any morphism, then the maps $\varphi*\psi:X\times Y\to G$ and $\psi*\varphi:Y\times X\to G$
have the property $S$.
\end{prop}
\begin{rem}
Dominance is not preserved under base change, but is still preserved
under the convolution operation.
\end{rem}
\begin{defn}
\label{def:normality and (FAI)} Let $\varphi:X\to Y$ be a morphism
between $K$-schemes.
\begin{enumerate}
\item $\varphi$ is called \textit{normal at $x\in X$} if $\varphi$ is
flat at $x$ and the fiber $X_{\varphi(x)}$ is geometrically normal
at $x$ over $k(\{\varphi(x)\})$. $\varphi$ is called \textit{normal
}if it is normal at any $x\in X$ (see \cite[Definition 36.18.1]{Sta}).
\item $\varphi$ is called \textit{(FAI)} if it is flat with absolutely
irreducible fibers.
\end{enumerate}
\end{defn}
\begin{lem}
The following properties of morphisms are preserved under base change
and composition:
\begin{enumerate}
\item Normality between smooth $K$-varieties.
\item (FAI).
\end{enumerate}
\end{lem}
\begin{proof}
We start by showing that these properties are preserved under base
change; given a normal morphism $\varphi:X\rightarrow Y$, and a base
change $\widetilde{\varphi}:X\times_{Y}Z\rightarrow Z$ of $\varphi$
with respect to a morphism $\psi:Z\rightarrow Y$, the fibers of $\widetilde{\varphi}$
are base changes of the fibers of $\varphi$ by a field extension,
and hence they are geometrically normal (see e.g. \cite[Lemma 32.10.4]{Sta}).
The proof for the (FAI) property is similar.
Let $\varphi_{1}:X\rightarrow Y$ and $\varphi_{2}:Y\rightarrow Z$
be two normal morphisms between smooth $K$-varieties. By Serre's
criterion $(S_{2}+R_{1})$ for normality (see e.g. \cite[Lemma 10.151.4]{Sta})
and the fact that fibers of flat morphisms between smooth varieties
are Cohen-Macaulay (as they are local complete intersections), it
is enough to show that the fibers of $\varphi_{2}\circ\varphi_{1}$
are regular in codimension $1$. Let $W$ be a codimension one subvariety
of $X_{z,\varphi_{2}\circ\varphi_{1}}$, the fiber of $\varphi_{2}\circ\varphi_{1}$
at $z\in Z$. By flatness of $\varphi_{1}$, the set $\varphi_{1}(W)$
is of codimension at most one in $Y_{z,\varphi_{2}}$, and hence there
exists $y\in Y_{z,\varphi_{2}}$ such that $X_{y,\varphi_{1}}\cap W$
is nonempty, and $y$ is a smooth point of $Y_{z,\varphi_{2}}$,
or equivalently, $y$ is a smooth point of $\varphi_{2}$. But $X_{y,\varphi_{1}}\cap W$
is of codimension at most one in $X_{y,\varphi_{1}}$ and hence, by
assumption, there exists $x\in X_{y,\varphi_{1}}\cap W$ such that
$\varphi_{1}$ is smooth at $x$. This implies that $\varphi_{2}\circ\varphi_{1}$
is smooth at $x$ and hence $x$ is a smooth point of $X_{z,\varphi_{2}\circ\varphi_{1}}$,
so $W\cap X_{z,\varphi_{2}\circ\varphi_{1}}^{\mathrm{sm}}$ is not
empty as required.
Now let $\varphi_{1}:X\rightarrow Y$ and $\varphi_{2}:Y\rightarrow Z$
be two (FAI) morphisms. In order to prove that $\varphi_{2}\circ\varphi_{1}$
is (FAI) we need the following lemma:
\begin{lem}[{{See \cite[Lemma 5.8.12]{Sta}}}]
\label{Lemma 3.3-Auxilary} Let $f:X_{1}\rightarrow X_{2}$ be a
continuous map between topological spaces, such that $f$ is open,
$X_{2}$ is irreducible, and there exists a dense collection of points
$y\in X_{2}$ such that $f^{-1}(y)$ is irreducible. Then $X_{1}$
is irreducible.
\end{lem}
Now let $z\in Z$, set $K'=\overline{k(\{z\})}$ and denote $X_{2}:=(Y_{z,\varphi_{2}})_{K'}$,
$X_{1}:=\left(X_{z,\varphi_{2}\circ\varphi_{1}}\right)_{K'}$,
and $f:=\varphi_{1}|_{X_{1}}:X_{1}\rightarrow X_{2}$. Notice that
$f$ is flat as a base change of a flat map, and thus open. Moreover,
by our assumption, $X_{2}$ is irreducible and all the fibers of $f$
are irreducible. Since $f$ satisfies the conditions of the above
lemma, we deduce that $X_{1}$ is irreducible, as required.
\end{proof}
As a consequence, we arrive at the following:
\begin{cor}
\label{Cor 3.5} Proposition \ref{prop: properties preserved under convolution}
holds if the property $S$ is either normality or (FAI).
\end{cor}
\begin{prop}
\label{prop: upper bounds for properties} Let $m\in\nats$, let $X_{1},{\ldots},X_{m}$
be smooth $K$-varieties, let $G$ be a connected algebraic $K$-group,
and let $\{\varphi_{i}:X_{i}\rightarrow G\}_{i=1}^{m}$ be a collection
of strongly dominant morphisms.
\begin{enumerate}
\item For any $1\leq i,j\leq m$ the morphism $\varphi_{i}*\varphi_{j}$
is surjective.
\item We have $(X_{1}\times{\ldots}\times X_{m})^{\mathrm{ns},\varphi_{1}*{\ldots}*\varphi_{m}}\subseteq X_{1}^{\mathrm{ns},\varphi_{1}}\times{\ldots}\times X_{m}^{\mathrm{ns},\varphi_{m}}$,
and in particular the non-smooth locus of $\varphi_{1}*\ldots*\varphi_{m}$
is of codimension at least $m$ in $X_{1}\times{\ldots}\times X_{m}$.
\item If $m\geq\mathrm{dim}G$ then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat.
\item If $m\geq\mathrm{dim}G+1$ then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat with reduced fibers.
\item If $m\geq\mathrm{dim}G+2$ then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat with normal fibers (i.e. it is a normal morphism).
\item If $m\geq\mathrm{dim}G+k$, with $k>2$, then $\varphi_{1}*{\ldots}*\varphi_{m}$
is flat with normal fibers which are regular in codimension $k-1$.
\end{enumerate}
\end{prop}
\begin{proof}
~
\begin{enumerate}
\item Follows from the fact that every two dense open sets $U_{1},U_{2}\subseteq G$
satisfy $U_{1}\cdot U_{2}=G$; indeed, since $G$ is connected and hence
irreducible, for any $g\in G$ the dense open sets $gU_{2}^{-1}$ and
$U_{1}$ must intersect.
\item This holds since smoothness is preserved under convolution and the
non-smooth locus $X_{i}^{\mathrm{ns},\varphi_{i}}$
is of codimension at least $1$ for any $1\leq i\leq m$.
\item It is enough to show that every fiber of $\varphi_{1}*...*\varphi_{m}$
is of codimension $\mathrm{dim}G$ (cf. \cite[III, Exercise 10.9]{Har77}).
Since $(X_{1}\times{\ldots}\times X_{m})^{\mathrm{ns},\varphi_{1}*{\ldots}*\varphi_{m}}$
is of codimension at least $m$ and the irreducible components of
any fiber of $\varphi_{1}*...*\varphi_{m}$ are of codimension at
most $\dim G$, it is sufficient to choose $m\geq\mathrm{dim}G$ to
guarantee that any irreducible component of a given fiber of $\varphi_{1}*...*\varphi_{m}$
contains a smooth point of $\varphi_{1}*...*\varphi_{m}$ and hence
is of codimension $\mathrm{dim}G$.
\item Let $m\geq\mathrm{dim}G+1$ and let $Z$ be a fiber of $\varphi_{1}*...*\varphi_{m}$.
By (2) and (3) it follows that $(X_{1}\times{\ldots}\times X_{m})^{\mathrm{ns},\varphi_{1}*{\ldots}*\varphi_{m}}\cap Z$
is of codimension at least $m-\mathrm{dim}G$ in $Z$. In particular
$Z$ is generically reduced (by e.g. \cite[III, Theorem 10.2]{Har77}),
and since $\varphi_{1}*{\ldots}*\varphi_{m}$ is flat,
it follows that $Z$ is reduced as well (see e.g. \cite[Lemma 10.151.3]{Sta}).
\item Similar to the proof of (4), where we now use Serre's criterion $(S_{2}+R_{1})$
for normality.
\item Follows from the proof of (4).
\end{enumerate}
\end{proof}
\begin{prop}
\label{prop:The-bounds-are tight} The bounds in Proposition \ref{prop: upper bounds for properties}
are tight.
\end{prop}
\begin{proof}
Let $G=(\mathbb{A}^{m},+)$ and consider the map $\varphi:\mathbb{A}^{m}\rightarrow G$,
defined by
\[
\varphi(x_{1},{\ldots},x_{m})=(x_{1}^{2},\left(x_{1}x_{2}\right)^{2},\left(x_{1}x_{3}\right)^{2},\ldots,\left(x_{1}x_{m}\right)^{2}).
\]
Notice that the fibers of $\varphi$ are zero dimensional except the
fiber over $0$, which is $(m-1)$-dimensional. Now, $(\varphi^{*d})^{-1}(0)$
contains the $d(m-1)$-dimensional subvariety $\underbrace{\varphi^{-1}(0)\times\ldots\times\varphi^{-1}(0)}_{d\text{ times}}$.
As long as $d(m-1)>m(d-1)$, or equivalently $d<m=\mathrm{dim}G$,
the map $\varphi^{*d}:\mathbb{A}^{dm}\to\mathbb{A}^{m}$ cannot be
flat.
The map $\varphi^{*m}$ is flat, hence the fiber $(\varphi^{*m})^{-1}(0)$
is reduced if and only if it is generically reduced. Moreover, $(x_{1},{\ldots},x_{m^{2}})$
is a smooth point of $(\varphi^{*m})^{-1}(0)$ if and only if $\varphi^{*m}$
is smooth at $(x_{1},{\ldots},x_{m^{2}})$. But since $\varphi$ is
not smooth at any point of $\varphi^{-1}(0)$, the morphism $\varphi^{*(m+k)}$ is
not smooth at $\underbrace{\varphi^{-1}(0)\times\ldots\times\varphi^{-1}(0)}_{m+k\text{ times}}$,
so $(\varphi^{*(m+k)})^{-1}(0)$ is not regular in codimension $k$.
In particular, $(\varphi^{*m})^{-1}(0)$ is not reduced and $(\varphi^{*(m+1)})^{-1}(0)$
is not normal.
\end{proof}
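The smallest case $m=1$ already exhibits the whole pattern:
\begin{example}
For $G=(\mathbb{A}^{1},+)$ and $\varphi(x)=x^{2}$, the fiber $(\varphi^{*1})^{-1}(0)=\{x^{2}=0\}$
is non-reduced, the fiber $(\varphi^{*2})^{-1}(0)=\{x^{2}+y^{2}=0\}$
is reduced (a union of two lines over $\overline{K}$) but not normal
at the origin, and $(\varphi^{*3})^{-1}(0)=\{x^{2}+y^{2}+z^{2}=0\}$
is a normal quadric cone, in accordance with Proposition \ref{prop: upper bounds for properties}.
\end{example}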
We conclude the section with the following observation:
\begin{prop}
\label{prop:not smooth after convolutions} Let $\varphi:X\rightarrow G$
be a morphism from a smooth algebraic $K$-variety $X$ to an algebraic
$K$-group $G$. Assume that $\varphi$ is not smooth at $x\in X(\overline{K})$
with $\varphi(x)$ in the center of $G(\overline{K})$. Then for any
$t\in\nats$ we have that $\varphi^{*t}:X^{t}\rightarrow G$ is not
smooth at $(x,...,x)$.
\end{prop}
\begin{proof}
Write $m:G^{t}\rightarrow G$ for the multiplication map. Since $\varphi(x)$
is central, the following holds for any $Y_{1},...,Y_{t}\in T_{x}(X)$:
\begin{align*}
d\varphi_{(x,...,x)}^{*t}(Y_{1},...,Y_{t}) & =dm_{(\varphi(x),...,\varphi(x))}\circ(d\varphi_{x},...,d\varphi_{x})(Y_{1},...,Y_{t})\\
& =\sum_{i=1}^{t}d\varphi_{x}(Y_{i})\in\mathrm{Im}(d\varphi_{x}),
\end{align*}
so $d\varphi_{(x,...,x)}^{*t}$ is not surjective.
\end{proof}
We therefore see that by convolving a non-smooth morphism sufficiently
many times, we may achieve certain singularity properties as in Proposition
\ref{prop: properties preserved under convolution} and in Theorem
\ref{Main result}, but one cannot hope in general to obtain a smooth morphism.
That said, the following example shows that such a situation might
still occur.
\begin{example}
{Let $G=U_3(\complex)$
be the group of upper triangular unipotent matrices with complex entries and
consider the morphism $\varphi:\mathbb{A}_{\complex}^{3}\rightarrow G$ given
by
\[
\varphi(x_{1},x_{2},x_{3})=\left(\begin{array}{ccc}
1 & x_{1}-1 & x_{1}x_{3}\\
0 & 1 & x_{2}\\
0 & 0 & 1
\end{array}\right).
\]
Note that $\varphi$ is not smooth at any point of the plane $\{x_{1}=0\}$, but $\varphi^{*2}$ is already a smooth morphism:
\[
\varphi^{*2}(x_{1},x_{2},x_{3},y_{1},y_{2},y_{3})=\left(\begin{array}{ccc}
1 & x_{1}+y_{1}-2 & \left(x_{1}x_{3}+y_{1}y_{3}+(x_{1}-1)y_{2}\right)\\
0 & 1 & x_{2}+y_{2}\\
0 & 0 & 1
\end{array}\right).
\]}
\end{example}
\section{Main analytic results \label{sec:Main-analytic-result} }
In this section we prove Theorems \ref{Main model theoretic result for vector spaces},
\ref{Main model theoretic result-Varieties}, \ref{Main model theoretic result for families of vector spaces},
and \ref{Model theoretic result- families of varieties}.
In \cite[Theorem 5.2]{GH} we showed that for a compactly supported
motivic function $h\in\mathcal{C}(\mathbb{A}_{\rats}^{n})$ with
$\left|h_{F}\right|$ integrable for any $F\in\mathrm{Loc}_{>}$,
the Fourier transform $\mathcal{F}(h_{F})(y)$ decays faster than $\left|y\right|^{\alpha}$
for some $\alpha\in\reals_{<0}$. We deduced that any compactly supported
motivic measure $\mu=\{\mu_{F}\}_{F\in\mathrm{Loc}_{>}}$ on $\{F^{n}\}_{F\in\mathrm{Loc}_{>}}$
has a continuous density after enough self-convolutions. Using the
analytic criterion for the (FRS) property (Theorem \ref{Analytic condition for (FRS)}),
we were then able to prove Theorem \ref{Main result} for the case
where $G$ is a vector space.
If $G$ is not abelian, a problem arises as the non-commutative Fourier
transform is not as well behaved with respect to the class $\mathcal{C}(G)$
of motivic functions. Thus, a direct generalization of the proof given
in \cite[Theorem 5.2]{GH} by bounding the decay rate of the Fourier
transform is harder. Hence, we would like to show that compactly supported,
$L^{1}$, motivic functions become continuous after sufficiently many
self-convolutions, without having to estimate the decay rate of their
Fourier transform. Theorems \ref{Main model theoretic result for vector spaces}
and \ref{Main model theoretic result-Varieties} solve this problem.
Indeed, Theorem \ref{Main model theoretic result-Varieties} easily
implies the following corollary:
\begin{cor}
\label{Cor:Main model theoretic result for algebraic groups} Let
$G$ be an algebraic $\rats$-group, and let $\mu$ be a motivic measure
on $G$, such that $\mu_{F}$ is supported on $G(\mathcal{O}_{F})$
and is absolutely continuous with respect to the Haar measure $\nu_{F}$
on $G(\mathcal{O}_{F})$ with density $f_{F}$. Then there exists
$\epsilon>0$ such that $f_{F}\in L^{1+\epsilon}(G(\mathcal{O}_{F}),\nu_{F})$
for any $F\in\mathrm{Loc}_{>}$.
\end{cor}
Notice that in the setting of Corollary \ref{Cor:Main model theoretic result for algebraic groups},
since $f_{F}\in L^{1+\epsilon}(G(\mathcal{O}_{F}),\nu_{F})$ for any
$F\in\mathrm{Loc}_{>}$, it follows that $\mu_{F}$ has a continuous density
after enough self-convolutions (see Lemma \ref{lem: continuous function after enough convolutions}),
as desired. Corollary \ref{Cor:Main model theoretic result for algebraic groups}
will be used in Section \ref{sec:Proof-of-the main result} to deduce
Theorem \ref{Main result}.
\subsection{\label{subsec:Proof-of-Theorem}Proof of Theorem \ref{Main model theoretic result for families of vector spaces}}
We now prove Theorem \ref{Main model theoretic result for families of vector spaces}
(which directly implies Theorem \ref{Main model theoretic result for vector spaces}):
\begin{thm}[Theorem \ref{Main model theoretic result for families of vector spaces}]
Let $h\in\mathcal{C}(\mathbb{A}_{\rats}^{n}\times Y)$ be a family
of motivic functions parameterized by a $\rats$-variety $Y$, and
assume that for any $F\in\mathrm{Loc}_{>}$ we have $h_{F}|_{F^{n}\times\{y\}}\in L^{1}(F^{n})$
for any $y\in Y(F)$. Then there exists $\epsilon>0$, such that for
any $F\in\mathrm{Loc}_{>}$ we have $h_{F}|_{F^{n}\times\{y\}}\in L^{1+\epsilon}(F^{n})$,
for any $y\in Y(F)$.
\end{thm}
\begin{proof}
We may assume that $Y$ is affine and embedded in $\mathbb{A}_{\rats}^{n'}$.
By choosing a $\ints$-model we may assume that $Y\subseteq\VF^{n'}$
is an $\Ldp$-definable set. By Definition \ref{def:motivic function},
$h_{F}(x,y)$ can be written as
\[
h_{F}(x,y)=\sum_{i=1}^{N}\left|Y_{i,F,x,y}\right|q_{F}^{\alpha_{i,F}(x,y)}\left(\prod_{j=1}^{N_{1}}\beta_{ij,F}(x,y)\right)\left(\prod_{j=1}^{N_{2}}\frac{1}{1-q_{F}^{a_{ij}}}\right),
\]
for $(x,y)\in F^{n}\times Y(F)\subseteq F^{n}\times F^{n'}$. By quantifier
elimination (see e.g. \cite[Theorem 2.8]{GH}), there exists a collection
$\{g_{i}\}_{i=1}^{N_{3}}$ of polynomials $g_{i}\in\mathbb{Z}[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n'}]$,
such that each $Y_{i}\subseteq\VF^{n}\times Y\times\mathrm{RF}^{r}$
can be defined by a finite disjunction of formulas of the form
\[
\chi(\ac(g_{1}(x,y)),\ldots,\ac(g_{N_{3}}(x,y)),\vec{\xi}')\wedge\theta(\val(g_{1}(x,y)),\ldots,\val(g_{N_{3}}(x,y))),
\]
where $\vec{\xi}'\in\mathrm{RF}^{r}$, $\chi$ is an $\mathcal{L}_{\mathrm{Res}}$-formula
and $\theta$ is an $\mathcal{L}_{\mathrm{Pres}}$-formula.
Define the following level sets (parameterized by the value group
variables $\vec{m},\vec{n},\vec{k}$ and the residue field variables
$\vec{\xi}$):
\begin{gather*}
A_{\vec{m},\vec{n}}(y):=\{x\in\VF^{n}:(\alpha_{i}(x,y),\beta_{ij}(x,y))=(m_{i},n_{ij})\text{ for all }1\leq i\leq N,~1\leq j\leq N_{1}\},\\
A'_{\vec{\xi},\vec{k}}(y):=\{x\in\VF^{n}:(\ac(g_{i}(x,y)),\val(g_{i}(x,y)))=(\xi_{i},k_{i})\text{ for all }1\leq i\leq N_{3}\}.
\end{gather*}
Clearly, for every $y\in Y(F)$ the functions $\alpha_{i,F}$ and
$\beta_{ij,F}$ are constant on each $A_{\vec{m},\vec{n},F}(y)$ and
$|Y_{i,F,x,y}|$ is constant on each $A'_{\vec{\xi},\vec{k},F}(y)$.
Set $a_{F}(\vec{m},\vec{n},\vec{k},\vec{\xi},y)=\int_{A_{\vec{m},\vec{n},F}(y)\cap A'_{\vec{\xi},\vec{k},F}(y)}dx$
and write $N'=N+NN_{1}+N_{3}$. Integrating $|h_{F}|^{s}$ over $F^{n}$
we get:
\begin{equation}
\int_{F^{n}}\left|h_{F}(x,y)\right|^{s}dx=\sum_{\vec{\xi}\in k_{F}^{N_{3}}}\sum_{(\vec{m},\vec{n},\vec{k})\in\ints^{N'}}a_{F}(\vec{m},\vec{n},\vec{k},\vec{\xi},y)\cdot\left|\sum_{i=1}^{N}c_{i}(q_{F},\vec{\xi},y)\cdot q_{F}^{m_{i}}\cdot\prod_{j=1}^{N_{1}}n_{ij}\right|^{s},\label{eq:(4.1)}
\end{equation}
where $c_{i}(q_{F},\vec{\xi},y)=\left|Y_{i,F,x,y}\right|\cdot\prod_{j=1}^{N_{2}}\frac{1}{1-q_{F}^{a_{ij}}}$.
Note that $c_{i}$ depends only on $y$, on $\vec{\xi}$ (by elimination
of quantifiers) and on $q_{F}$.
Using \cite[Theorem 4.1]{BDOP13} repeatedly, as was done in \cite[proof of Theorem 5.1]{Pas89}
and \cite[proof of Theorem B]{BDOP13}, we can write $a_{F}(\vec{m},\vec{n},\vec{k},\vec{\xi},y)$
as a finite sum of terms of the form
\begin{equation}
q_{F}^{-n}\sum_{\vec{\eta}\in k_{F}^{r'}}\sum_{\begin{array}{c}
\vec{l}\in\mathbb{Z}^{n}\\
\sigma(\vec{l},\vec{m},\vec{n},\vec{k},\vec{\xi},\vec{\eta},y)
\end{array}}q_{F}^{-l_{1}-\ldots-l_{n}},\label{eq:(4.2)}
\end{equation}
where $\sigma$ is an $\mathcal{L}_{\mathrm{DP}}$-formula. By quantifier
elimination, the formula $\sigma$ is equivalent to
\[
\bigvee_{i=1}^{L}\chi_{i}(\vec{\xi},\vec{\eta},\ac(g'(y)))\wedge\theta_{i}(\vec{l},\vec{m},\vec{n},\vec{k},\val(g'(y))),
\]
where $\ac(g'(y)):=\ac(g'_{1}(y)),\ldots,\ac(g'_{N_{4}}(y))$ and
similarly for $\val(g'(y))$, and each $\chi_{i}$ is an $\mathcal{L}_{\mathrm{Res}}$-formula,
each $\theta_{i}$ is an $\mathcal{L}_{\mathrm{Pres}}$-formula and
$g'_{j}\in\ints[y_{1},\ldots,y_{n'}]$. Given $I\in\{0,1\}^{L}$,
set
\[
B_{I}(\vec{\xi},y):=\{\vec{\eta}\in\RF^{r'}|\forall1\leq i\leq L:\chi_{i}(\vec{\xi},\vec{\eta},y)\text{ holds }\Longleftrightarrow I(i)=1\}
\]
and $\theta_{I}:=\bigvee_{I(i)=1}\theta_{i}$. Then for each $y$
and $\vec{\xi}$, we have $\mathrm{RF}^{r'}=\underset{I\in\{0,1\}^{L}}{\bigcup}B_{I}(\vec{\xi},y)$,
and we can write (\ref{eq:(4.2)}) as
\[
\sum_{I\in\{0,1\}^{L}}\left|B_{I,F}(\vec{\xi},y)\right|\cdot\sum_{\begin{array}{c}
\vec{l}\in\mathbb{Z}^{n}\\
\theta_{I}(\vec{l},\vec{m},\vec{n},\vec{k},y)
\end{array}}q_{F}^{-n-l_{1}-{\ldots}-l_{n}}.
\]
Hence, (\ref{eq:(4.1)}) can be written as a sum of finitely many
expressions of the form
\[
\sum_{\vec{\xi}\in k_{F}^{N_{3}}}\sum_{(\vec{m},\vec{n},\vec{k})\in\ints^{N'}}\sum_{I\in\{0,1\}^{L}}\left|B_{I,F}(\vec{\xi},y)\right|\sum_{\begin{array}{c}
\vec{l}\in\mathbb{Z}^{n}\\
\theta_{I}(\vec{l},\vec{m},\vec{n},\vec{k},y)
\end{array}}q_{F}^{-n-l_{1}-{\ldots}-l_{n}}\cdot\left|\sum_{i=1}^{N}c_{i}(q_{F},\vec{\xi},y)\cdot q_{F}^{m_{i}}\cdot\prod_{j=1}^{N_{1}}n_{ij}\right|^{s}.
\]
Since each $|B_{I,F}(\vec{\xi},y)|$ is a non-negative integer bounded
by $q_{F}^{r'}$, the sum over $\vec{\xi}$ is finite and $q_{F}^{-n}$
is constant, it is enough to prove that for every $I\in\{0,1\}^{L}$
there exists $\epsilon>0$, not depending on $y\in Y(F)$, on $\vec{\xi}\in k_{F}^{N_{3}}$
or on $F$, such that the following sum converges:
\begin{equation}
\sum_{\begin{array}{c}
(\vec{l},\vec{m},\vec{n},\vec{k})\in\mathbb{Z}^{n+N'}\\
\theta_{I}(\vec{l},\vec{m},\vec{n},\vec{k},y)
\end{array}}\left|\sum_{i=1}^{N}c_{i}(q_{F},\vec{\xi},y)\cdot q_{F}^{m_{i}-\frac{l_{1}+{\ldots}+l_{n}}{1+\epsilon}}\cdot\prod_{j=1}^{N_{1}}n_{ij}\right|^{1+\epsilon}.\label{eq:(4.3)}
\end{equation}
Furthermore, since (\ref{eq:(4.1)}) converges for $s=1$ (by assumption
$\int_{F^{n}}|h_{F}(x,y)|dx<\infty$ for every $y\in Y(F)$), the
sum (\ref{eq:(4.3)}) converges for $\epsilon=0$ for every $y\in Y(F)$.
Fix $I_{0}\in\{0,1\}^{L}$ and set $\theta:=\theta_{I_{0}}$. By Theorem
\ref{thm:-(Uniform Rectilinearization)} (uniform rectilinearization),
we have the following decomposition:
\[
\{(\vec{l},\vec{m},\vec{n},\vec{k},y)\in\ints^{n+N'}\times Y:\theta(\vec{l},\vec{m},\vec{n},\vec{k},y)\}=\bigcup\limits _{j=1}^{L'}C_{j},
\]
where for each $j$ there exists an $\mathcal{L}_{\mathrm{DP}}$-definable
isomorphism $\rho_{j}:C_{j}\to B_{j}'\subset Y\times\ints^{n+N'}$
over $Y$, such that for each $y\in Y(F)$,
\begin{equation}
\rho_{j}|_{C_{j,y}}:C_{j,y}:=\{(\vec{l},\vec{m},\vec{n},\vec{k})\in\ints^{n+N'}:(\vec{l},\vec{m},\vec{n},\vec{k},y)\in C_{j}\}\xrightarrow{\sim}\Lambda_{y}\times\mathbb{Z}_{\geq0}^{s'_{j}}=B'_{j,y}\label{eq:(4.4)}
\end{equation}
is $\mathcal{L}_{\mathrm{Pres}}$-linear (see Definition \ref{def:-Linear function})
for some integer $s'_{j}\geq0$ depending only on $C_{j}$, and a
finite set $\Lambda_{y}\subset\ints_{\geq0}^{n+N'-s'_{j}}$ depending
on $y$. By applying quantifier elimination to the function $\rho_{j}$,
we may choose a finite partition $\mathcal{P}$ of $Y$ into $\mathcal{L}_{\mathrm{DP}}$-definable
subsets, such that on each such subset $Z\in\mathcal{P}$, and any
$y\in Z(F)$, the restriction $\rho_{j}|_{C_{j,y}}$ is of the form
$\widetilde{\rho_{j}}(\vec{l},\vec{m},\vec{n},\vec{k},\val(g'(y)))$,
with $\widetilde{\rho_{j}}:\ints^{n+N'}\times\ints^{N_{4}}\rightarrow\ints^{n+N'}$
an $\mathcal{L}_{\mathrm{Pres}}$-definable function, which is $\mathcal{L}_{\mathrm{Pres}}$-linear
in the first $n+N'$ coordinates. We can therefore write
\begin{equation}
\rho_{j}|_{C_{j,y}}(\vec{l},\vec{m},\vec{n},\vec{k})=\beta_{j}(\vec{l},\vec{m},\vec{n},\vec{k})+\gamma_{j}(\val(g'(y))),\label{eq:(4.5)}
\end{equation}
where $\beta_{j}$ is $\mathcal{L}_{\mathrm{Pres}}$-linear and $\gamma_{j}$
is $\mathcal{L}_{\mathrm{Pres}}$-definable (functions to $\ints^{n+N'}$).
By Theorem \ref{Presburger Cell decomposition}, we may further assume
that $\gamma_{j}$ is $\mathcal{L}_{\mathrm{Pres}}$-linear. Since
the collection $\{C_{j}\}_{j=1}^{L'}$ is finite, it is enough to
show that for each $C_{j}$, there exists $\epsilon>0$ such that
for every $y\in Y(F)$ the sum (\ref{eq:(4.3)}) converges when summing
over $C_{j,y}$. Since $\mathcal{P}$ is finite, we may assume that
$\rho_{j}|_{C_{j,y}}$ has the form of (\ref{eq:(4.5)}).
Fix $C:=C_{j}$, set $s':=s'_{j}$ and let $y\in Y(F)$. We can reparameterize
according to (\ref{eq:(4.4)}) and use (\ref{eq:(4.5)}) to write
the sum (\ref{eq:(4.3)}) restricted to $C_{y}$ as follows:
\begin{equation}
\sum_{\lambda\in\Lambda_{y}}\sum_{(e_{1},\ldots,e_{s'})\in\mathbb{Z}_{\geq0}^{s'}}\left|\sum_{i=1}^{{N_5}}c_{i}(q_{F},\vec{\xi},y)q_{F}^{\frac{1}{1+\epsilon}T_{i}(\vec{e},y,\epsilon,\lambda)}P_{i,\lambda}(e_{1},\ldots,e_{s'},\val(g'(y)))\right|^{1+\epsilon},\label{eq:4.6}
\end{equation}
where $P_{i,\lambda}$ are polynomials with rational coefficients,
$c_{i}(q_{F},\vec{\xi},y)\neq0$ and
\[
T_{i}(\vec{e},y,\epsilon,\lambda)=\tau_{i}(\epsilon,\lambda)+\sum\limits _{j=1}^{s'}d_{ij}(\epsilon)e_{j}+\sum\limits _{j=1}^{N_{4}}d'_{ij}(\epsilon)\val(g'_{j}(y)),
\]
where $d_{ij}(\epsilon)$ and $d'_{ij}(\epsilon)$ (resp. $\tau_{i}(\epsilon,\lambda)$)
are affine functions in $\epsilon$ (resp. $\epsilon,\lambda$) with rational
coefficients. Write (\ref{eq:4.6}) as
\[
\sum_{\lambda\in\Lambda_{y}}\sum_{(e_{1},\ldots,e_{s'})\in\mathbb{Z}_{\geq0}^{s'}}\left|\sum_{i=1}^{{N_5}}c_{i}'(q_{F},\vec{\xi},y,\epsilon,\lambda)q_{F}^{\frac{1}{1+\epsilon}(d_{i1}(\epsilon)e_{1}+\ldots+d_{is'}(\epsilon)e_{s'})}P_{i,\lambda}(e_{1},\ldots,e_{s'},\val(g'(y)))\right|^{1+\epsilon}.
\]
By Lemma \ref{lem: convergence of sums}, since (\ref{eq:4.6}) converges
for $\epsilon=0$, we must have $d_{ij}(0)<0$ for all $i$ and $j$.
Lemma \ref{lem: convergence of sums} furthermore implies that for
every $1\leq i\leq{N_5}$ each of the inner terms is summable:
\[
\sum_{(e_{1},\ldots,e_{s'})\in\mathbb{Z}_{\geq0}^{s'}}\left|c_{i}'(q_{F},\vec{\xi},y,\epsilon,\lambda)q_{F}^{\frac{1}{1+\epsilon}(d_{i1}(\epsilon)e_{1}+\ldots+d_{is'}(\epsilon)e_{s'})}P_{i,\lambda}(e_{1},\ldots,e_{s'},\val(g'(y)))\right|<\infty.
\]
Since $\left|\cdot\right|^{1+\epsilon}$ is a quasinorm (it satisfies
$\left|x+y\right|^{1+\epsilon}\leq2^{\epsilon}\left(\left|x\right|^{1+\epsilon}+\left|y\right|^{1+\epsilon}\right)$
instead of the usual triangle inequality), it follows that the summability
of (\ref{eq:4.6}) is implied by summability of
\[
\sum_{\lambda\in\Lambda_{y}}\sum_{(e_{1},\ldots,e_{s'})\in\mathbb{Z}_{\geq0}^{s'}}\sum_{i=1}^{{N_5}}\left|c_{i}'(q_{F},\vec{\xi},y,\epsilon,\lambda)q_{F}^{\frac{1}{1+\epsilon}(d_{i1}(\epsilon)e_{1}+\ldots+d_{is'}(\epsilon)e_{s'})}P_{i,\lambda}(e_{1},\ldots,e_{s'},\val(g'(y)))\right|^{1+\epsilon},
\]
so it is enough to find $\epsilon>0$ such that each
\[
\left|c_{i}'(q_{F},\vec{\xi},y,\epsilon,\lambda)q_{F}^{\frac{1}{1+\epsilon}(d_{i1}(\epsilon)e_{1}+\ldots+d_{is'}(\epsilon)e_{s'})}P_{i,\lambda}(e_{1},\ldots,e_{s'},\val(g'(y)))\right|^{1+\epsilon}
\]
is summable over ${\ints}_{\geq0}^{s'}$ for any $i$ and any $\lambda\in\Lambda_{y}$.
The following lemma is immediate:
\begin{lem}
\label{lemma 4.3}For every $\alpha>0$ and $\epsilon>0$, and every
polynomial $P'\in\rats[x_{1},\ldots,x_{m}]$ there exists $C>0$ such
that
\[
\left|P'(e_{1},\ldots,e_{m})\right|^{1+\epsilon}\leq C\cdot q^{\alpha(e_{1}+\ldots+e_{m})}
\]
for every $q\geq2$ and all $(e_{1},\ldots,e_{m})\in\mathbb{Z}_{\geq0}^{m}$.
\end{lem}
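Indeed, $\left|P'(e_{1},\ldots,e_{m})\right|$ grows at most polynomially in $e_{1}+\ldots+e_{m}$, while $q^{\alpha(e_{1}+\ldots+e_{m})}\geq2^{\alpha(e_{1}+\ldots+e_{m})}$ grows exponentially, so that
\[
\frac{\left|P'(e_{1},\ldots,e_{m})\right|^{1+\epsilon}}{q^{\alpha(e_{1}+\ldots+e_{m})}}\leq\frac{C'\left(1+e_{1}+\ldots+e_{m}\right)^{(1+\epsilon)\deg P'}}{2^{\alpha(e_{1}+\ldots+e_{m})}}\leq C
\]
for a constant $C$ depending only on $P'$, $\alpha$ and $\epsilon$.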
Thus, it is enough to find $\epsilon>0$ and $\alpha>0$ such that
the following converges:
\[
\sum_{(e_{1},\ldots,e_{s'})\in\mathbb{Z}_{\geq0}^{s'}}\left|c_{i}'(q_{F},\vec{\xi},y,\epsilon,\lambda)\right|^{1+\epsilon}q_{F}^{(d_{i1}(\epsilon)+\alpha)e_{1}+\ldots+(d_{is'}(\epsilon)+\alpha)e_{s'}}.
\]
Choose $\epsilon>0$ such that $d_{ij}(\epsilon)<0$ for all $i$
and $j$, and then choose $\alpha=-\frac{1}{2}\max\limits _{i,j}\{d_{ij}(\epsilon)\}$.
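Explicitly, this choice gives $d_{ij}(\epsilon)+\alpha\leq\frac{1}{2}\max\limits_{k,l}\{d_{kl}(\epsilon)\}<0$ for all $i,j$, so the sum above factors as a product of convergent geometric series:
\[
\sum_{(e_{1},\ldots,e_{s'})\in\mathbb{Z}_{\geq0}^{s'}}q_{F}^{(d_{i1}(\epsilon)+\alpha)e_{1}+\ldots+(d_{is'}(\epsilon)+\alpha)e_{s'}}=\prod_{j=1}^{s'}\frac{1}{1-q_{F}^{d_{ij}(\epsilon)+\alpha}}\leq\prod_{j=1}^{s'}\frac{1}{1-2^{\frac{1}{2}\max\limits_{k,l}\{d_{kl}(\epsilon)\}}},
\]
using $q_{F}\geq2$.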
Since $\{d_{ij}(\epsilon)\}$ does not depend on $F$ or $y$, we
are done.
\end{proof}
\subsection{Proof of Theorems \ref{Main model theoretic result-Varieties} and
\ref{Model theoretic result- families of varieties}}
For the proof of Theorem \ref{Model theoretic result- families of varieties}
we need the following lemma:
\begin{lem}
\label{lem:etale preserves Lp} Let $X$ and $Y$ be smooth $F$-varieties,
with $F\in\mathrm{Loc}$, $\omega_{X}$ and $\omega_{Y}$ be invertible
top forms on $X$ and $Y$ respectively, and let $\varphi:X\rightarrow Y$
be an \'etale map. Let $\mu_{F}$ be a compactly supported non-negative
measure on $X(F)$, which is absolutely continuous with respect to
$|\omega_{X}|_{F}$, with density $f$. Then $\varphi_{*}\mu_{F}$
is absolutely continuous with respect to $|\omega_{Y}|_{F}$ with
density $\widetilde{f}$, and for any $1\leq s<\infty$, we have that
$\widetilde{f}\in L^{s}(Y(F),|\omega_{Y}|_{F})$ if and only if $f\in L^{s}(X(F),|\omega_{X}|_{F})$.
\end{lem}
\begin{proof}
Recall $f$ is positive. Since $\varphi$ is \'etale, it is quasi-finite and smooth, and we
have
\[
\widetilde{f}(y)=\sum_{x\in\varphi^{-1}(y)(F)}f(x)\cdot(\omega_{\varphi,F})_{y}(x),
\]
where $(\omega_{\varphi,F})_{y}(x):=\left|\frac{\omega_{X}}{\varphi^{*}\omega_{Y}}|_{\varphi^{-1}(y)}\right|_{F}(x)$
is an invertible function. Let $B$ be a compact open subset of $X(F)$
such that $\mathrm{supp}(f)\subseteq B$ and $1 \leq s < \infty$. Then there exist constants $c(s),C(s)>0$
such that $c(s)<(\omega_{\varphi,F})_{y}^{s-1}(x)<C(s)$ for every $x\in B$. For
any $y\in Y(F)$ we may find an open compact neighborhood $V\subseteq Y(F)$,
such that $\varphi^{-1}(V)$ is a disjoint union of $U_{1},...,U_{N}$,
where each $U_{i}$ is diffeomorphic to $V$.
Now, since $\varphi$ is quasi-finite, it holds that $\underset{y\in Y(F)}{\mathrm{sup}}\left|\varphi^{-1}(y)(F)\right|<M$
for some $M\in\nats$, and since $|\cdot|^{s}$ is a quasi-norm,
there exists $d(s)>0$ such that
\[
\sum_{x\in\varphi^{-1}(y)(F)}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s}\leq\widetilde{f}(y)^{s}\leq d(s)\sum_{x\in\varphi^{-1}(y)(F)}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s}.
\]
Since $f$ and $\widetilde{f}$ are compactly supported, the following
implies the claim:
\begin{align*}
c(s)\int_{\varphi^{-1}(V)}f(x)^{s}\left|\omega_{X}\right|_{F} & <\int_{\varphi^{-1}(V)}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s-1}\left|\omega_{X}\right|_{F}=\int_{\varphi^{-1}(V)}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s}\left|\varphi^{*}\omega_{Y}\right|_{F}\\
& =\sum_{i=1}^{N}\int_{U_{i}}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s}\left|\varphi^{*}\omega_{Y}\right|_{F}\leq\int_{V}\widetilde{f}(y)^{s}\left|\omega_{Y}\right|_{F}\\
& \leq d(s)\sum_{i=1}^{N}\int_{U_{i}}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s}\left|\varphi^{*}\omega_{Y}\right|_{F}\\
& =d(s)\int_{\varphi^{-1}(V)}f(x)^{s}\cdot(\omega_{\varphi,F})_{y}(x)^{s-1}\left|\omega_{X}\right|_{F}\leq C(s)d(s)\int_{\varphi^{-1}(V)}f(x)^{s}\left|\omega_{X}\right|_{F}.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorems \ref{Main model theoretic result-Varieties} and
\ref{Model theoretic result- families of varieties}]
Theorem \ref{Main model theoretic result-Varieties} follows from
Theorem \ref{Model theoretic result- families of varieties} by choosing
$Y=\mathrm{Spec}(\rats)$, so it is left to prove Theorem \ref{Model theoretic result- families of varieties}.
Since $\varphi$ is smooth we may assume that $Y$ is affine and that
$\varphi:X\rightarrow Y$ factors as follows (where $n:=\mathrm{dim}X-\mathrm{dim}Y$),
\[
\varphi:X\overset{\psi}{\rightarrow}\mathbb{A}_{\rats}^{n}\times Y\overset{\pi}{\rightarrow}Y
\]
with $\pi$ the projection to the second coordinate and $\psi$ \'etale.
For any $y\in Y(F)$ we can consider the base change $\psi|_{X_{y}}:X_{y}\rightarrow\mathbb{A}_{F}^{n}\times\{y\}$
which is an \'etale $F$-morphism. Set $f:=\psi_{*}(h)\in\mathcal{C}(\mathbb{A}_{\rats}^{n}\times Y)$
and notice that $f_{F}|_{F^{n}\times\{y\}}\in L_{\mathrm{Loc}}^{1}(F^{n})$.
We would like to find $\epsilon>0$ such that for $F\in\mathrm{Loc}_{>}$,
$f_{F}|_{F^{n}\times\{y\}}\in L_{\mathrm{Loc}}^{1+\epsilon}(F^{n})$
for any $y\in Y(F)$.
Let $\widetilde{f}\in\mathcal{C}(\mathbb{A}_{\rats}^{n}\times Y\times\mathbb{A}_{\rats}^{1})$
be the pullback of $f$ by the projection to $\mathbb{A}_{\rats}^{n}\times Y$,
and consider $\widetilde{f}\cdot1_{B}\in\mathcal{C}(\mathbb{A}_{\rats}^{n}\times Y\times\mathbb{A}_{\rats}^{1})$
with $B=\{(x,y,t):\val(x)>\val(t)\}\subseteq\VF^{n}\times Y\times\VF$,
where $\val(x)=\underset{1 \leq i \leq n}{\mathrm{min}}\val(x_{i})$
for $x=(x_{1},...,x_{n})$. By Theorem \ref{Main model theoretic result for families of vector spaces},
since $(\widetilde{f_{F}}\cdot1_{B(F)})|_{F^{n}\times\{(y,t)\}}\in L^{1}(F^{n})$
for any $(y,t)\in Y(F)\times F$, then there exists $\epsilon>0$
such that for $F\in\mathrm{Loc}_{>}$ we have $(\widetilde{f_{F}}\cdot1_{B(F)})|_{F^{n}\times\{(y,t)\}}\in L^{1+\epsilon}(F^{n})$
for any $(y,t)$. But this implies that $f_{F}|_{F^{n}\times\{y\}}\in L_{\mathrm{Loc}}^{1+\epsilon}(F^{n})$
for any $y\in Y(F)$. By Lemma \ref{lem:etale preserves Lp}, $h_{F}|_{X_{y}(F)}\in L_{\mathrm{Loc}}^{1+\epsilon}$
for any $y\in Y(F)$.
\end{proof}
\section{Proof of the main algebro-geometric results \label{sec:Proof-of-the main result}}
In this section we prove Theorems \ref{Main result}, \ref{main result for families}
and \ref{FRS on complexity}.
\begin{thm}[{Theorem \ref{Main result}}]
\label{Main result v2} Let $X$ be a smooth $K$-variety, $G$ be
a $K$-algebraic group and let $\varphi:X\to G$ be a strongly dominant
morphism. Then there exists $N\in\mathbb{N}$ such that for any $n>N$
the $n$-th convolution power $\varphi^{*n}$ is (FRS).
\end{thm}
We prove Theorem \ref{Main result v2} by first reducing to the case
$K=\rats$, using the same strategy as in the proof of \cite[Proposition 6.1]{GH}.
Hence we want to show the following:
\begin{prop}
\label{prop:reduction to Q} It is enough to prove Theorem \ref{Main result v2}
for $K=\rats$.
\end{prop}
The proof of Proposition \ref{prop:reduction to Q} is very similar
to the proof in the case where $G$ is a vector space (\cite[Proposition 6.1]{GH}).
The proof of \cite[Proposition 6.1]{GH} consists of four statements
\cite[Lemmas 6.2, 6.4 and 6.5 and Proposition 6.3]{GH}. Lemmas 6.2,
6.4 and 6.5 of \cite{GH} hold if $G$ is any algebraic group (i.e.
not necessarily a vector space). \cite[Proposition 6.3]{GH} can also
be generalized to an algebraic group $G$, only this time one needs
to use a non-commutative Fourier transform. For completeness, we prove
the generalization of \cite[Proposition 6.3]{GH}, and thus finish
the proof of Proposition \ref{prop:reduction to Q}.
\begin{prop}[{{Generalization of \cite[Proposition 6.3]{GH}}}]
\label{prop: generalization of 6.3} Let $\varphi:X\rightarrow G$
be a morphism over a finitely generated field $K'/\rats$. Assume
there exists $N\in\nats$ such that the $N$-th convolution power
$\varphi^{*N}$ is (FRS) at $(x,{\ldots},x)$ for any $x\in X(\overline{K'})$.
Then $\varphi^{*2N}$ is (FRS).
\end{prop}
\begin{proof}
Let $x_{1},{\ldots},x_{2N}\in X(K'')$ for some finite extension $K''/K'$
and let $K'''$ be a finite extension of $K''$. Let $p$ be a prime
such that $x_{1},{\ldots},x_{2N}\in X(\Zp)$ and $K'''\subseteq\Qp$
(there exist infinitely many such primes, see \cite[Lemma 3.15]{GH}).
Since $\varphi^{*N}$ is (FRS) at $(x_{i},{\ldots},x_{i})$ for any
$i$, there exist Zariski open neighborhoods $(x_{i},{\ldots},x_{i})\in U_{i}\subseteq X^{N}$
such that $\varphi^{*N}$ is (FRS) on each $U_{i}$. Note that $U_{i}(\Zp)$
contains an analytic neighborhood of the form $V_{i}\times\ldots\times V_{i}$,
where $x_{i}\in V_{i}$ is open in $X(\Zp)$. By Theorem \ref{Analytic condition for (FRS)},
since $\varphi^{*N}$ is (FRS) at $V_{i}\times\ldots\times V_{i}$,
there exists a non-negative smooth measure $\mu_{i}$, with $\mathrm{supp}(\mu_{i})=V_{i}$
such that the measure
\[
\varphi_{*}^{*N}(\mu_{i}\times{\ldots}\times\mu_{i})=\varphi_{*}(\mu_{i})*{\ldots}*\varphi_{*}(\mu_{i})
\]
has continuous density with respect to the normalized Haar measure
on $G(\Zp)$. Consider the non-commutative Fourier transform $\mathcal{F}(\varphi_{*}(\mu_{i}))$
of the measure $\varphi_{*}(\mu_{i})$ on $G(\Zp)$. Since the density
of $\varphi_{*}^{*N}(\mu_{i}\times{\ldots}\times\mu_{i})$ is continuous,
it lies in $L^{2}(G(\ints_{p}))$. By Theorem \ref{Proposition Fourier L2}(2),
we have that $\mathcal{F}(\varphi_{*}^{*2N}(\mu_{i}\times{\ldots}\times\mu_{i}))$
is in $\mathcal{H}_{1}(\hat{G(\Zp)})$ for any $i$. This implies
$\mathcal{F}(\varphi_{*}(\mu_{i}))\in\mathcal{H}_{2N}(\hat{G(\Zp)})$.
By a generalization of H\"older's inequality (Proposition \ref{prop:(Generalization-of-Holder}),
we have
\[
\mathcal{F}(\varphi_{*}^{*N}(\mu_{1}\times{\ldots}\times\mu_{2N}))=\prod_{i=1}^{2N}\mathcal{F}(\varphi_{*}(\mu_{i}))\in\mathcal{H}_{1}(\hat{G(\Zp)}).
\]
By Theorem \ref{Fourier transform of functions} it follows that $\varphi_{*}^{*2N}(\mu_{1}\times{\ldots}\times\mu_{2N})$
has continuous density, and by Theorem \ref{Analytic condition for (FRS)}
it follows that $\varphi^{*2N}$ is (FRS) at $(x_{1},{\ldots},x_{2N})$,
as required.
\end{proof}
We can now assume that $\varphi:X\rightarrow G$ is a strongly dominant
$\rats$-morphism. The following analytic statement, which is a straightforward
generalization of \cite[Proposition 3.16]{GH}, is the final ingredient
needed in order to prove Theorem \ref{Main result v2}:
\begin{prop}[{{See \cite[Proposition 3.16]{GH}}}]
\label{prop:reduction to an analytic} Let $X$ be a smooth
$K$-variety, $G$ be an algebraic $K$-group, $\varphi:X\to G$ be
a strongly dominant morphism and let $\mu=\{\mu_{F}\}_{F\in\mathrm{Loc}}$
be a motivic measure on $X$ such that for every $F\in\mathrm{Loc}$,
$\mu_{F}$ is a non-negative Schwartz measure and $\supp(\mu_{F})=X(\mathcal{O}_{F})$
(for existence of such a measure, see \cite[Proposition 3.14]{GH}).
Assume that there exists $n\in\mathbb{N}$ such that the measure $\varphi_{*}^{*n}(\mu_{F}\times\ldots\times\mu_{F})$
has continuous density with respect to the normalized Haar measure
on $G(F)$ for $F\in\mathrm{Loc}_{>}$. Then the map $\varphi^{*n}:X\times\ldots\times X\to G$
is (FRS).
\end{prop}
\begin{proof}[Proof of Theorem \ref{Main result v2}]
Let $\varphi:X\rightarrow G$ and let $\mu=\{\mu_{F}\}_{F\in\mathrm{Loc}}$
be a motivic measure on $X$ as in Proposition \ref{prop:reduction to an analytic}.
By Corollary \ref{Cor:Main model theoretic result for algebraic groups}
we can find $\epsilon>0$ such that the density of $\varphi_{*}(\mu_{F})$
lies in $L^{1+\epsilon}(G(\mathcal{O}_{F}),\lambda_{F})$, where $\lambda_{F}$
is the normalized Haar measure on $G(\mathcal{O}_{F})$, for any $F\in\mathrm{Loc}_{>}$.
By Lemma \ref{lem: continuous function after enough convolutions},
this implies that there exists $N(\epsilon)\in\nats$ such that $\varphi_{*}^{*N(\epsilon)}(\mu_{F}\times\ldots\times\mu_{F})$
has continuous density. By Proposition \ref{prop:reduction to an analytic}
we are done.
\end{proof}
\begin{proof}[Proof of Theorems \ref{main result for families} and \ref{FRS on complexity}]
Theorem \ref{main result for families} is a direct generalization
of \cite[Theorem 1.9]{GH} (see \cite[Section 7]{GH}). Part (1) follows
from \cite[Lemma 7.4, Lemma 7.5]{GH}. The proof of (2) is essentially
the same as the proof of \cite[Theorem 7.1 (2)]{GH}, where we replace
the vector space $V$ with $G$, and use Theorem \ref{Main result}
instead of \cite[Theorem 1.7]{GH}. Theorem \ref{FRS on complexity}
easily follows from \cite[Corollary 7.8]{GH}.
One can also deduce Theorem \ref{main result for families}(2)
from Theorem \ref{Model theoretic result- families of varieties}
(assuming $K=\rats$). Indeed, let $\widetilde{\varphi}:\widetilde{X}\rightarrow G\times Y$
be a $Y$-morphism as in Theorem \ref{main result for families}.
Denote by $\pi_{\widetilde{X}}:\widetilde{X}\rightarrow Y$ and $\pi:G\times Y\rightarrow Y$
the structure maps. By Theorem \ref{main result for families}(1),
generic smoothness, and by Noetherian induction, we may assume that
$\pi_{\widetilde{X}}$ is a smooth morphism, with $\widetilde{X}$
and $Y$ smooth with invertible top forms $\omega_{\widetilde{X}}$
and $\omega_{Y}$ respectively, and that $\widetilde{\varphi}_{y}:\widetilde{X}_{y}\rightarrow G$
is strongly dominant for any $y\in Y$. Let $\omega_{G}$ be an invertible
top form on $G$. Let $\mu=\{1_{\widetilde{X}(\mathcal{O}_{F})}\left|\omega_{\widetilde{X}}\right|_{F}\}_{F\in\mathrm{Loc}}$
and consider the following family of motivic measures $\mu_{y}:=\{1_{\widetilde{X}_{y}(\mathcal{O}_{F})}\left|\frac{\omega_{\widetilde{X}}}{\pi_{\widetilde{X}}^{*}\omega_{Y}}\right|_{F}\}_{F\in\mathrm{Loc}}$.
In order to prove Theorem \ref{main result for families}(2),
it is enough by Proposition \ref{prop:reduction to an analytic} to
find an $\epsilon>0$, which is independent of $y$, such that $(\widetilde{\varphi}_{y})_{*}\mu_{y,F}=h_{y,F}\left|\omega_{G}\right|_{F}$,
with $h_{y,F}\in L^{1+\epsilon}(G(\mathcal{O}_{F}),\left|\omega_{G}\right|_{F})$
for any $F\in\mathrm{Loc}_{>}$. Indeed, if we choose $N:=2N(\epsilon)$
as in Lemma \ref{lem: continuous function after enough convolutions},
then Part (2) follows by Proposition \ref{prop:(Generalization-of-Holder}.
By \cite[Corollary 3.6]{AA16} the measure $\widetilde{\varphi}_{*}\mu$
is absolutely continuous with respect to $\left|\omega_{G}\wedge\omega_{Y}\right|_{F}$
and $\widetilde{\varphi}_{*}\mu_{F}=f_{F}\cdot\left|\omega_{G}\wedge\omega_{Y}\right|_{F}$,
with
\begin{align*}
f_{F}(g,y) & =\int_{(\widetilde{X}^{\mathrm{sm},\widetilde{\varphi}}\cap\widetilde{\varphi}^{-1}(g,y))(\mathcal{O}_{F})}\left|\frac{\omega_{\widetilde{X}}}{\widetilde{\varphi}^{*}(\omega_{G}\wedge\omega_{Y})}\right|_{F}\\
& =\int_{(\widetilde{X}_{y}^{\mathrm{sm},\widetilde{\varphi}}\cap\widetilde{\varphi}_{y}^{-1}(g))(\mathcal{O}_{F})}\left|\frac{\omega_{\widetilde{X}}}{\widetilde{\varphi}^{*}\omega_{G}\wedge\pi_{\widetilde{X}}^{*}\omega_{Y}}\right|_{F}.
\end{align*}
Since $\widetilde{\varphi}_{y}$ is strongly dominant, we further
have by \cite[Corollary 3.6]{AA16} that $h_{y,F}\in L^{1}(G(\mathcal{O}_{F}),\left|\omega_{G}\right|_{F})$.
Setting $\eta:=\left(\frac{\omega_{\widetilde{X}}}{\pi_{\widetilde{X}}^{*}\omega_{Y}}\right)$, we get
\[
h_{y,F}(g)=\int_{(\widetilde{X}_{y}^{\mathrm{sm},\widetilde{\varphi}}\cap\widetilde{\varphi}_{y}^{-1}(g))(\mathcal{O}_{F})}\left|
\frac{\eta}
{\widetilde{\varphi}^{*}\omega_{G}}
\right|_{F}
=f_{F}(g,y).
\]
By Theorem \ref{Model theoretic result- families of varieties}, we
get $h_{y,F}=f_{F}|_{G\times\{y\}}\in L^{1+\epsilon}(G(\mathcal{O}_{F}),\left|\omega_{G}\right|_{F})$
where $\epsilon>0$ does not depend on $y$.
\end{proof}
\section{Introduction}
Deciding which feature to build is a difficult problem for software development organizations. The effect of an idea and its return-on-investment might not be clear before its launch. Moreover, the evaluation of an idea might be expensive. Thus, decisions are based on experience or the opinion of the highest paid person~\cite{kohavi2007practical}.
Similarly difficult is the assessment of technical changes on products. It can be difficult to predict the effect of a change on software quality, as evidenced by the extensive research on e.g. defect prediction~\cite{fenton1999critique,wahono2015systematic} or software reliability estimation~\cite{ronchieri2018metrics}. Moreover, there are cases in which it is not feasible to test for all necessary scenarios, e.g. in all relevant software and hardware combinations.
Continuous experimentation (CE) addresses these problems.
It provides a method to derive information about the effect of a change by comparing different variants of the product to the unmodified product (i.e. A/B testing). This is done by exposing different users to different product variants and collecting data about their behavior on the individual variants. Thereafter, the gathered information allows making data-driven decisions and thereby reducing the amount of guesswork in the decision making.
In 2007, Kohavi et al. \citer{kohavi2007practical} published an experience report on experimentation at Microsoft and provided guidelines on how to conduct so-called \emph{controlled experiments}. It is the seminal paper about continuous experimentation and thus represents the start of the academic discussion on the topic. Three years later, a talk by the Etsy engineer Dan McKinley~\cite{mckinley2012design} added momentum to the discussion. In the talk, the term \emph{continuous experimentation} was used to describe Etsy's experimentation practices. Other large organizations, like Facebook \citer{feitelson2013development} and Netflix \citer{uribe2015netflix}, which adopted data-driven decision making~\citer{kohavi2013online}, shared their experiences~\citer{borodovsky2011ab} and lessons learned~\citer{kohavi2009online} about experimentation over the years with the research community. In addition, researchers from industry as well as academia developed methods, models and optimizations of techniques that advanced the knowledge on experimentation.
After more than ten years of research, a large body of work has been published in the field of continuous experimentation, including work on problems like the definition of an experimentation process~\citer{fagerholm2017right}, how to build infrastructure for large-scale experimentation~\citer{gupta2018anatomy}, how to select or develop metrics \citer{machmouchi2016principles}, or the considerations necessary for various specific application domains~\citer{eklund2012architecture}.
The purpose of this systematic literature review is threefold. First, to synthesize the models suggested by the research community to find characteristics of an essential framework for experimentation. This framework can be used by practitioners to identify elements in their experimentation framework. Second, to synthesize the various technical solutions that have been applied. In this inquiry, we also examine to what degree the solutions are validated.
Finally, to summarize and categorize the challenges and benefits with continuous experimentation. Based on this, the following four research questions are addressed in this work:
\begin{itemize}
\item[]\textbf{RQ1:} What are the core constituents of a CE framework?
\item[]\textbf{RQ2:} What technical solutions are applied in what phase within CE?
\item[]\textbf{RQ3:} What are the challenges with CE?
\item[]\textbf{RQ4:} What are the benefits with CE?
\end{itemize}
The research method of this study is based on two independently conducted mapping studies~\citer{auer2018current, ros2018continuous2}. We extended and validated the studies by cross-examining the included papers. Thereafter, we applied two qualitative narrative syntheses and a thematic synthesis to the resulting set of papers.
In the following Section \ref{sec:background} an overview of continuous experimentation and related software practices is given. Next, Section \ref{sec:research_method} describes the research method applied and Section \ref{sec:results} presents the results of the research. In Section \ref{sec:discussion} the findings are discussed. Finally, Section \ref{sec:conclusions} summarizes the research.
\section{Background}
\label{sec:background}
In this section we present an overview of continuous experimentation and related continuous software engineering practices. Further, we summarize our two previously published mapping studies.
For the novice reader, we recommend Fagerholm et al.'s descriptive model of continuous experimentation~\citer{fagerholm2017right}, or Kohavi et al.'s tutorial on controlled experiments~\citer{kohavi2008controlled}, which is a more hands on introduction for continuous experimentation.
\subsection{Continuous software engineering}
\label{sec:background_continuous_software_engineering}
In their seminal paper on controlled experiments on the web from 2007, Kohavi et al.~\citer{kohavi2007practical} explain how the ability to continuously release new software to users is crucial for efficient and continuous experimentation, which is now known as continuous delivery and continuous deployment. Together with continuous integration, these are the three software engineering practices that allow software companies to release software to users rapidly and reliably~\cite{shahin2017continuous} and are fundamental requirements for continuous experimentation.
\emph{Continuous integration} entails automatically merging and integrating software from multiple developers. This includes testing and building an artifact, often multiple times per day.
\emph{Continuous delivery} is the practice of ensuring that the software is always in a state in which it is ready to be deployed to production. Successful implementation of continuous integration and delivery should join the incentives of development and operations teams, such that developers can release often and operations get access to powerful tools. This has introduced the DevOps~\cite{ebert2016devops} role in software engineering with responsibility for numerous activities: testing, delivery, maintenance, etc.
Finally, with \emph{continuous deployment}, the software changes that successfully make it through continuous integration and continuous delivery can be deployed automatically or with minimal human intervention. Continuous deployment facilitates collection of user feedback through faster release cycles~\cite{fabijan2015customer,yaman2016customer}. With faster release cycles comes the ability to release smaller changes, the smaller the changes are the easier it becomes to trace feedback to specific changes.
Fitzgerald and Stol~\citer{fitzgerald2017continuous} describe many more continuous practices that encompass not only development and operations, but also business strategy; among them continuous innovation and continuous experimentation. Experiments are means to tie development, quality assurance, and business together, because experiments provide a causal link between software development, software testing, and actual business value. Holmstr{\"o}m Olsson et al.~\cite{olsson2012climbing} describe how ``R\&D as an experiment system'' is the final step in a process that moves through the continuous practices.
\subsection{Continuous experimentation}
\label{sec:background_process}
The process of conducting experiments in a cycle is called \emph{continuous experimentation}. The reasoning is that the results of an experiment often beget further inquiries. Whether the original hypothesis was right or wrong, the experimenter learns something either way. This learning can lead to a new hypothesis which is subject to a new experiment. This idea of iterative improvement has long been known from the engineering cycle and from iterative process improvement, as explained in the models Plan-Do-Check-Act~\cite{deming1986out} or the quality improvement paradigm (QIP)~\cite{basili1985quantitative}. The term ``continuous experimentation'' as used by software engineering researchers refers to a holistic approach~\citer{fitzgerald2017continuous} which spans a whole organization. It considers the whole software life-cycle, from business strategy and planning, through development, to operations.
Some authors have included many methods of gathering feedback in continuous experimentation~\citer{fagerholm2017right,lindgren2016raising}, including qualitative methods and data mining. These methods are not the focus of this work, though they are also valuable forms of feedback~\cite{bosch2015user,yaman2016customer}. For example, qualitative focus groups in person with selected users can be used early in development on sketches or early prototypes. The human-computer interaction research field has studied this extensively---recently under the name of \emph{user experience research}---and it has also been the subject of software engineering literature reviews in combination with agile development~\cite{jurca2014integrating,salah2014systematic}. In contrast to the qualitative methods, a controlled experiment requires a completed feature before it can be conducted. It is focused on quantitative data, and thus cannot easily answer questions on the rationale behind the results, as qualitative methods can. As such, these methods complement each other, but they are different in terms of methodology, infrastructure, and process. We discuss the qualitative methods through the lens of controlled experimentation in Section~\ref{sec:results_qualitative}.
A \emph{randomized controlled experiment} (or A/B test, bucket test, or split test) is a test of an idea or a \emph{hypothesis} in which variables are systematically changed to isolate the effects. Because the outcome of an experiment is non-deterministic, the experiment is repeated with multiple \emph{subjects}. Each subject is randomly assigned to some of the variable settings. The goal of the experiment is to investigate whether changes in the variables have a causal effect on some output value, usually in order to optimize it. In statistical terminology, the variable that is manipulated is called the independent variable and the output value is called the dependent variable. The effect that changing the independent variables has on the dependent variable can be expressed with a statistical \emph{hypothesis test}. A significance test involves calculating a p-value\footnote{T-test is often used to compare whether the mean of two groups are equal, based on the t-score $t=\frac{\bar{x}_1-\bar{x}_2}{s/\sqrt{n}}$, where $\bar{x}$ is mean, $s$ is the standard deviation, and $n$ is the number of data points. The p-value is derived from the t-score through the t-distribution.} and the hypothesis is validated if the p-value is below a given \emph{confidence level}, often $95\%$. In addition, properly conducting a controlled experiment requires a power calculation\footnote{A simple approximate power calculation~\cite{kohavi2007practical} for fixed $95\%$ confidence level and $90\%$ statistical power is $n = (4rs/\Delta)^2$, where $n$ is the number of users, $r$ is the number of groups, $s$ is the standard deviation, and $\Delta$ is the effect to detect.} to decide experiment duration.
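As a concrete illustration of the footnoted formulas, the sketch below (Python, standard library only; the function names, the example data, and the use of a normal approximation for the p-value are our own choices, not prescribed by the cited guidelines) computes the rule-of-thumb sample size and a two-sample t-score:

```python
import math
import statistics

def sample_size(r, s, delta):
    """Approximate number of users for ~95% confidence and ~90% power,
    following the rule of thumb n = (4*r*s/delta)**2 from the footnote."""
    return math.ceil((4 * r * s / delta) ** 2)

def ab_test(a, b):
    """t-score as in the footnote, t = (mean(a) - mean(b)) / (s / sqrt(n)),
    with s the pooled sample standard deviation, and a two-sided p-value
    via the normal approximation (adequate for large samples).
    Assumes equal group sizes for simplicity."""
    n = len(a)
    s = math.sqrt((statistics.stdev(a) ** 2 + statistics.stdev(b) ** 2) / 2)
    t = (statistics.mean(a) - statistics.mean(b)) / (s / math.sqrt(n))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Detecting a 0.1 change with standard deviation 1.0 in a two-group A/B
# test requires about 6400 users:
print(sample_size(2, 1.0, 0.1))       # → 6400
t, p = ab_test([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(round(t, 3), round(p, 3))       # → -1.414 0.157
```

With a p-value of about $0.16$, this toy experiment would not reach the $95\%$ confidence level; in practice a dedicated statistics library should be preferred over the normal approximation.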
In software engineering, a controlled experiment is often used to validate a new product feature, in that case the independent variable is whether a previous baseline feature or the new feature should be used. These are sometimes called control and test group, or the A and B group, in which case the \emph{experiment design} is called an A/B test. In an A/B test, only one variable is changed; other experiment designs are possible~\citer{fisher1937design,kohavi2008controlled} but rarely used~\cite{ros2018continuous2}. To optimize software configuration settings is another use of controlled experiments in software engineering~\citer{letham2019constrained}. The dependent variable of the experiment is some measurable metric, designed with input from some business or customer needs. If there are multiple metrics involved with the experiment, then an overall evaluation criteria (OEC)~\cite{roy2001design} can be used, which is the most important metric for deciding on the outcome of the experiment. The subjects of the experiments are usually users, that is, each user provides one or more data points. In some cases the subjects are hardware or software parameters, for example, when testing optimal compiler settings.
The process of continuous experimentation (see Fig.~\ref{fig:ce_process}) has similarities to the tradition from science in software engineering research~\cite{wohlin2012experimentation} and elsewhere~\cite{fisher1937design}. However, we base the following process on the RIGHT model by Fagerholm et al.~\citer{fagerholm2017right}. There are five main phases of the process. 1) In the \emph{ideation} phase hypotheses are elicited and prioritized. 2) \emph{Implementation} of a minimum viable product or feature (MVP) that fulfill the hypothesis follows. 3) Then, a suitable \emph{experiment design} with an OEC is selected. 4) \emph{Execution} involves release engineers deploying the product into production and operations engineers monitoring the experiment in case something goes wrong. Finally, 5) an \emph{analysis} is conducted with either statistical methods by data scientists or by qualitative methods by user researchers. If the results are satisfactory the feature is included in the product and a new hypothesis is selected so the product can be further refined. Otherwise, a decision must be made if to persevere and continue the process or if a pivot should be made to some other feature or hypothesis. Lastly, the results should be generalized into knowledge so the experience gained can be used to inform future hypotheses and development on other features.
\begin{figure}
\centering
\small
\input{figure-ce-process.tex}
\caption{Continuous experimentation process overview in five phases. A hypothesis is prioritized and implemented as a minimum viable product, then an experiment is designed and conducted that evaluates the software change, finally a decision is made to continue or pivot to another feature. This simplified process is based on the RIGHT model. The roles involved in each phase are shown to the left.}
\label{fig:ce_process}
\end{figure}
Many of the papers included in this study are on improved analysis methods. One such direction that needs additional explanation is \emph{segmentation}. It is used in marketing to create differentiated products for different segments of the market. In the context of experiments it is used to calculate metrics for various slices of the data, e.g. by gender, age group, or country. Experimentation tools usually perform automated segmentation~\citer{gupta2018anatomy} and can, for example, send out alerts if a change affects a particular user group adversely.
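To make segmentation concrete, the following sketch (Python, standard library; the event schema with country, variant, and converted fields is illustrative and not taken from any particular experimentation tool) computes a conversion-rate metric per segment and variant:

```python
from collections import defaultdict

def segment_conversion(events, segment_key):
    """Conversion rate per (segment value, variant) slice of the data.

    Each event is a dict such as
    {"country": "SE", "variant": "A", "converted": 1}.
    """
    totals = defaultdict(lambda: [0, 0])  # key -> [conversions, count]
    for e in events:
        key = (e[segment_key], e["variant"])
        totals[key][0] += e["converted"]
        totals[key][1] += 1
    return {key: conv / n for key, (conv, n) in totals.items()}

events = [
    {"country": "SE", "variant": "A", "converted": 1},
    {"country": "SE", "variant": "A", "converted": 0},
    {"country": "SE", "variant": "B", "converted": 1},
    {"country": "DE", "variant": "B", "converted": 0},
]
print(segment_conversion(events, "country"))
# → {('SE', 'A'): 0.5, ('SE', 'B'): 1.0, ('DE', 'B'): 0.0}
```

An alerting tool would compare such per-segment rates between variants and flag segments where the treatment performs markedly worse than the control.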
\subsection{Previous Work}
\label{sec:previous_work}
Prior to this literature review, two independent mapping studies~\citer{ros2018continuous,auer2018current} were conducted by the authors. Although both studies were in the context of continuous experimentation, their objectives differed.
In their mapping study \citer{ros2018continuous}, Ros and Runeson provided a short thematic synthesis of the topics in the published research and examined the context of the research in terms of reported organisations and types of experiments that were conducted. They found that there is a diverse spread of organisations in terms of company size, sector, etc., although continuous experimentation for software that does not require installation (e.g. websites) was more frequently reported. Concerning the experimentation treatment types, the authors found more reports about visual changes than algorithmic changes. In addition, the least common type of treatment encountered in literature was new features. Finally, it was observed that the standard A/B test was by far the most commonly used experiment design.
The mapping study \cite{auer2018current} by Auer and Felderer investigated the characteristics of the state of research on continuous experimentation. They observed that the intensity of research activities increased from year to year and that there is a high amount of collaboration between industry and academia. In addition, the authors observed that industrial and academic experts contributed equally to the field of continuous experimentation. Concerning the most influential publications (in terms of citations), the authors found that the most common research type among them is experience report. Another observation of the authors was that in total ten different terms were used for the concept of continuous experimentation.
To summarize, the two previous studies discussed continuous experimentation in terms of its applicability in industry sectors, the treatment types and experimentation designs reported, as well as the characteristics of the research in the field. In contrast to these two mapping studies, this study provides a far more comprehensive synthesis and improves on the rigor and completeness of their search and synthesis procedures.
\section{Research Method}
\label{sec:research_method}
Based on these two independently published systematic mapping studies~\cite{auer2018current,ros2018continuous2}, we conducted a joint systematic literature review. The sets of papers presented in these two studies were used as starting sets. Forward snowballing was applied, following the assumption from Wohlin~\cite{Wohlin2016} that publications acknowledge previous research. Relevant research publications were identified in the resulting sets. Next, the two sets were merged and the resulting set was studied to answer the respective research questions. To this end, qualitative narrative syntheses~\cite{huang2018synthesizing} and a thematic synthesis~\cite{cruzes2011recommended} were conducted based on the found literature.
In the following, the research objective and the forward snowballing procedures are presented. Thereafter, the syntheses used to answer the research questions are described.
Finally, the threats to validity are discussed.
\iffalse
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth, trim=0cm 11cm 6cm 0.5cm,clip]{figure-research-process.pdf}
\caption{Applied research method. The set of papers A and B were individually processed until the data integration. Therefore, half of the authors worked on Set A and half of them on Set B. }
\label{fig:research-process}
\end{figure}
\fi
\subsection{Research objective}
The aim of this research is to give an overview of the current state of knowledge about specific aspects of continuous experimentation. The research questions as stated in the introduction concern: 1) core constituents of a CE framework, 2) technical solutions within CE, 3) challenges of CE, and 4) benefits of CE. Based on the prior mapping studies, we observed that there were many papers on models for processes and infrastructure, technical solutions, and challenges for CE, and identified these as suitable targets for a systematic review.
\subsection{Forward snowballing}
The two existing sets of papers emerging from the previous literature reviews~\citer{auer2018current,ros2018continuous2} were used as starting sets for forward snowballing. They were selected because both studies were in the field of continuous experimentation and had similar research directions. Moreover, both studies were conducted within a short time of each other and had similar inclusion criteria. Hence, the authors are confident that the union of both selected paper sets is a good representation of the field of continuous experimentation up to 2017.
The forward snowballing was executed independently for each starting set. After having elaborated a protocol to follow, half of the authors worked on Set A (based on \citer{auer2018current}, with 82 papers) and the other half on Set B (based on \citer{ros2018continuous2}, with 62 papers).
In total, the starting sets contained 100 distinct papers, of which 44 papers were shared among both starting sets.
The citations were looked up on Google Scholar\footnote{\url{https://scholar.google.com/}}. Since the two previous mapping studies covered publications until 2017, the forward snowballing was conducted by considering papers within the time span 2017--2019. The snowballing was executed until no new publications were found.
In the process of snowballing, we used a joint set of inclusion and exclusion criteria. A paper was included if \emph{any} of the inclusion criteria applied, unless \emph{any} of the exclusion criteria applied. The decision was based primarily on the abstract of a paper. If this was insufficient to make a decision, the full paper was examined. In case of doubt, the selection of a paper was discussed with at least one other author. The criteria were defined as follows:
\paragraph{Inclusion criteria}
\begin{itemize}
\item Any aspect of continuous experimentation (process, infrastructure, technical considerations, etc.)
\item Any aspect of controlled experiments (designs, statistics, guidelines, etc.)
\item Techniques that complement controlled experiments
\end{itemize}
\paragraph{Exclusion criteria}
\begin{itemize}
\item Not written in English
\item Not accessible in full-text
\item Not peer reviewed or not a full paper
\item Not a research paper: track, tutorial, workshop, talk, keynote, poster, book
\item Duplicated study (the latest version is included)
\item Primary focus on business-side of experimentation, advertisement, user interface, recommender system
\end{itemize}
The quality and validity of the included research publications were ensured through the inclusion and exclusion criteria. For instance, publications that did not go through a scientific peer-reviewing process were not considered according to the exclusion criteria. Moreover, to ensure that only mature work was included, both vision papers with no evidence-based contribution and short papers with preliminary results were excluded.
To summarize, the forward snowballing based on the starting Set A~\citer{auer2018current} resulted in 100 papers (Set A') and that based on the starting Set B~\citer{ros2018continuous2} resulted in 88 papers (Set B'). After merging the two paper sets, a total of 128 distinct papers represent the result of the applied forward snowballing.
\subsection{Synthesis}
To answer each research question, the collection of found papers was studied in more detail with respect to that question. To this end, two qualitative narrative syntheses~\citer{huang2018synthesizing} and one thematic synthesis~\cite{cruzes2011recommended} were conducted.
For each of the first two research questions, a narrative synthesis was conducted. This type of synthesis aggregates recurring qualitative themes within papers and provides a foundation for evidence-based interpretations of the themes in a narrative manner.
Thus, the collected set of papers was studied under the heading of the two respective research questions (RQ1, RQ2) to identify relevant themes. Next, the found themes were summarized and the patterns identified within them were reported. In addition, all papers were classified in terms of their \emph{research type} according to Wieringa et al.~\cite{wieringa2005requirements} to identify what solutions were applied (RQ2).
As a result, the findings represent an aggregated view on the components of a continuous experimentation framework (RQ1) and the technical solutions that are applied during experimentation (RQ2). The found components of a continuous experimentation framework are described in Section \ref{subsec:results_rq1}. An overview of the identified solutions can be found in Section~\ref{sec:results-rq2}.
For the third research question, a thematic synthesis following the steps and checklist proposed by Cruzes and Dyb\r{a} \citer{cruzes2011recommended} was conducted (see Fig.~\ref{fig:thematic-process}). In addition, the examples given in Cruzes et al.~\citer{Cruzes14empirical} were consulted. As an initial step, all 128 selected papers were read and in total 154 segments of text were identified.
Next, each text segment was labeled with a code. A total of 84 codes were used to characterize the text segments. These codes were loosely based on terms that were identified in previous literature studies \citer{auer2018current,ros2018continuous2} and evolved during the labeling of the text segments. Thereafter, the codes that had overlapping themes were reduced into 17 themes. In the last step, these 17 themes were arranged according to 6 higher-order themes. The result of this analysis can be found in Section~\ref{sec:results:challenges}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth, trim=0cm 10.5cm 10cm 0cm,clip ]{fig-thematic-analysis.pdf}
\caption{Applied thematic synthesis process (adapted from Cruzes et al. \cite{cruzes2011recommended,Cruzes14empirical})}
\label{fig:thematic-process}
\end{figure}
Fig.~\ref{fig:thematic-process-example} illustrates the thematic analysis process with the theme ``low impact''. Based on the reading of five papers, four text segments were extracted. These segments were labeled with the codes \emph{benefits}, \emph{budget}, and \emph{experiment prioritization}. In the next step, the common theme among the codes was identified and the codes were reduced to the theme ``low impact''. During the creation of the model of higher-order themes, this theme was assigned to the higher-order theme ``business challenges''. All text segments and codes can be found in the results of the study that are available online (see Section~\ref{subsec:threats-to-validity}).
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth, trim=0cm 7cm 0cm 0cm,clip ]{fig-thematic-analysis-example.pdf}
\caption{Schematic example of the thematic synthesis.}
\label{fig:thematic-process-example}
\end{figure}
\subsection{Threats to validity}\label{subsec:threats-to-validity}
In every step of this research, possible threats to its validity were considered and minimized where possible. In the following, the potential threats are discussed to provide guidance in the interpretation of this work. This section is structured according to the four criteria of Easterbrook et al.~\cite{Easterbrook_2008}: construct validity, internal validity, external validity, and reliability.
\paragraph{Construct validity} This threat concerns the validity of the identification and selection of publications. A challenging threat to overcome is the completeness of the literature search without a biased view of the subject. To mitigate this threat, all papers from the starting sets were used without any further exclusion. The larger starting set (in comparison to applying the exclusion criteria from the start) was expected to lead to a broader coverage of the literature during the forward snowballing. Furthermore, the forward snowballing process was adapted such that the candidate selection was tolerant about which papers to include, which increases the coverage of the literature search. However, publications may have been falsely excluded because of misjudgment. To mitigate this threat, we conducted two parallel forward snowballing searches by different authors based on slightly different starting sets.
\paragraph{Internal validity} Threats caused by faulty conclusions could arise from author bias in the selection and synthesis of publications and in the interpretation of the findings. To mitigate this threat, a second author was consulted in case of any doubt. Nevertheless, activities like paper inclusion/exclusion and thematic synthesis inevitably suffer from subjective decisions.
\paragraph{External validity} Threats to external validity cover to which extent the generalization of the results is justified. As the aim of this study is to give an overview of continuous experimentation and to explore the future work items in continuous experimentation, the results should not be generalized beyond continuous experimentation. Therefore, this threat is negligible.
\paragraph{Reliability} This threat focuses on the reproducibility of the study and its results. To mitigate it, every step and decision of the study was recorded carefully and the most important decisions are reported. The results of the study are available online~\cite{auer_ros_kaltenbrunner_runeson_felderer_2021}. This enables other researchers to validate the decisions made on the data. Furthermore, it allows the study to be repeated.
\section{Results}
\label{sec:results}
In this section the results of the literature review are presented according to the research questions.
\subsection{What are the core constituents of a CE framework (RQ1)?}\label{subsec:results_rq1}
To conduct continuous experimentation, an organization needs some constituents of a framework for experimentation. There is some process involved (implicit or explicit) and some infrastructure is required, which includes a toolchain as well as organizational processes. In the following, both aspects of an experiment, the process and its supporting infrastructure, are discussed in detail.
\subsubsection{Experiment process}
The experiment process can be described in a model that gives a holistic view of the phases and environment around experimentation. Most studies on experiment processes present qualitative models based on interview data. Two models describe the overall process of experimentation. First, the reference model RIGHT (Rapid Iterative value creation Gained through High-frequency Testing) by Fagerholm et al.~\citer{fagerholm2017right} contains both an infrastructure architecture and a process model for continuous experimentation. The process model builds on the Build, Measure, Learn~\cite{ries2011lean} cycle of Lean Startup. The process in Figure~\ref{fig:ce_process} is a simplified view of RIGHT. Second, the HYPEX (Hypothesis Experiment Data-Driven Development) model is an earlier process model by Holmstr{\"o}m Olsson and Bosch~\citer{olsson2014from}. In comparison to the RIGHT model, it is less complete in scope; however, it goes into more detail on hypothesis prioritization using a gap analysis.
Kevic et al.~\citer{kevic2017characterizing} present concrete numbers on the experiment process used at Microsoft Bing through a source code analysis. They have three main findings: 1) code associated with an experiment is larger in terms of files in a changeset, number of lines, and number of contributors; 2) experiments are conducted in a sequence of experiments lasting on average 42 days, where each experiment is on average conducted for one week; and 3) only a third of such sequences are eventually shipped to users.
In addition to the general models described above, several models deal with a specific part of the experiment cycle.
The CTP (Customer Touchpoint) model by Sauvola et al.~\citer{sauvola2018continuous} focuses on user collaboration and describes the various ways that user feedback can be involved in the experimentation stages.
Amatriain~\citer{amatriain2013beyond} and Gomez-Uribe and Hunt~\citer{uribe2015netflix} describe the process for experimentation on the recommendation system at Netflix, in particular how offline simulation studies are combined with online controlled experiments.
In the ExG Model (Experimentation Growth), by Fabijan et al.~\citer{fabijan2017evolution,fabijan2018online2}, organizations can quantitatively gauge their experimentation on technical, organizational, and business aspects.
In another model, Fabijan et al.~\citer{fabijan2018effectivePAPER} describe the process of analyzing the results of experiments and present a tool that can make the process more effective, e.g. by segmenting the participants automatically and highlighting the presence of outliers.
Finally, Mattos et al.~\citer{mattos2018activity} present a model that discusses activities and metrics of experiments in detail.
\subsubsection{Infrastructure}
Depending on what type of experimentation is conducted, different infrastructure is required. For controlled experimentation in particular, technical infrastructure in the form of an \emph{experimentation platform} is critical to increase the scale of experimentation. At the bare minimum, it needs to divide users into experiment groups and report statistics. Gupta et al.~\citer{gupta2018anatomy} at Microsoft have detailed the additional functionality of their experimentation platform. Schermann et al.~\citer{schermann2018doing} have described attributes of system and software architecture suitable for experimentation; micro-service-based architectures seem to be favored. Some experimentation platforms are specialized for specific needs: automation~\citer{mattos2017your}, describing deployment through formal models~\citer{schermann2016bifrost}, or supporting experimentation by non-software engineers~\citer{koukouvis2016ab,firmenich2018usability}.
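The bare-minimum platform capability of dividing users into experiment groups is commonly implemented with deterministic hash-based bucketing, so that a user always sees the same variant within an experiment while assignments stay independent across experiments. The sketch below is a minimal, hypothetical illustration of that idea; the names and hashing scheme are our assumptions, not the design of any cited platform.

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   variants=("control", "treatment")):
    """Deterministically assign a user to an experiment group.
    Hashing the user and experiment IDs together keeps the assignment
    stable for a user within one experiment, yet uncorrelated with the
    same user's assignment in other experiments."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

With a good hash, the resulting split is close to uniform, which is what makes the two groups statistically comparable.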
There are also non-technical infrastructure requirements, regardless of the type of experimentation in use. The required roles are~\citer{fitzgerald2017continuous}: data scientists, release engineers, user researchers, and the standard software engineering roles. Also, an organizational culture~\citer{kohavi2009online,xu2015from} that is open towards experimentation is needed. For example, Kohavi et al.~\citer{kohavi2009online} explain that managers can hinder experimentation if they overrule results with their opinions. They call this phenomenon the highest paid person's opinion (HiPPO).
While experimentation is typically associated with large companies, like Microsoft or Facebook, three interview studies discuss experimentation at startups specifically~\citer{bjorklund2013lean,gutbrod2017how,fagerholm2017right}. As argued by Gutbrod et al.~\citer{gutbrod2017how}, startup companies often only guess or roughly estimate the problems and customers they are addressing. Thus, there is a need for startup companies to be more involved with experimentation, although they have less infrastructure in place.
Finally, we would like to call attention to some of the few case studies and experience reports on experimentation at ``ordinary'' software companies, which are neither multi-national corporations nor startups~\citer{ros2018continuous,rissanen2015continuous,yaman2018continuous}, in the e-commerce, customer relations, and gaming industries, respectively. None of these papers are focused on infrastructure, but they do mention that infrastructure needs to be implemented. Rissanen et al.~\citer{rissanen2015continuous} mention additional challenges when infrastructure must be implemented on top of a mature software product. In summary, this indicates that infrastructure requirements are modest unless scaling up to multi-national corporation levels with millions of users.
\subsection{What technical solutions are applied in what phase within CE (RQ2)?}\label{sec:results-rq2}
The study of the selected publications revealed many different types of solutions that were summarized by common themes. Figure \ref{fig:rq2-overview} gives an overview of the identified solutions organized in the phases of experimentation in Figure~\ref{fig:ce_process}.
\begin{figure}
\centering
\small
\input{solutions.tex}
\caption{Solutions applied within continuous experimentation arranged by main phase of experimentation from Section~\ref{sec:background_process}.}
\label{fig:rq2-overview}
\end{figure}
\subsubsection{Data mining}
Data from previous experiments can be used to make predictions or mine insights, either to improve the reliability of the experiment or for ideation. There were three specific solutions for data mining in continuous experimentation: 1)~calculating the variance of metrics through data sets larger than a single experiment at Netflix~\citer{xie2016improving}, Microsoft~\citer{deng2015objective,deng2013improving}, Google~\citer{hohnhold2015focusing}, and Oath~\citer{appiktala2017demystifying}; 2)~mining for invalid tests through automatic diagnosis rules at LinkedIn~\citer{chen2019how} and Sevenval Technologies~\citer{nolting2016context}; and 3)~extracting insights from user segments by detecting whether a treatment is more suitable for those specific circumstances~\citer{duivesteijn2017have}; this technique is applied at Snap~\citer{xie2018false} and Microsoft~\citer{fabijan2018effectivePAPER}.
\subsubsection{Metric specification}
Defining measurements for software is difficult. At Microsoft they have hundreds of metrics in place for each experiment, they recommend organizing metrics in a hierarchy~\citer{machmouchi2016principles} and evaluating how well metrics work~\citer{deng2016data,dmitriev2016measuring}. At Yandex, they pair OEC metrics with a statistical significance test to create an overall acceptance criteria (OAC) instead~\citer{drutsa2015practical}. Several pieces of work are on defining and improving usability metrics, especially from Yandex~\citer{budylin2018consistent,drutsa2017using,kharitonov2017learning}. Also at Microsoft they have a rule-based classifier where each user action is either a frustration or benefit signal~\citer{machmouchi2017beyond}.
Some general guidelines for defining metrics follow. At Microsoft~\citer{deng2016data,dmitriev2016measuring,machmouchi2016principles}, they have hundreds of metrics for each experiment (in addition to a few OEC). Machmouchi and Buscher~\citer{machmouchi2016principles} from Microsoft describe how their metrics are interpreted in a hierarchy in their tool (similar to Fabijan et al.~\citer{fabijan2018effectivePAPER}, also at Microsoft). At the top of the hierarchy are statistically robust metrics (meaning they tend not to give false positives) and at the bottom are feature-specific metrics that are allowed to be more sensitive. They have also developed methods to evaluate how well metrics work. Dmitriev et al.~\citer{dmitriev2016measuring} give an experience report on how metrics are evaluated at Microsoft in practice. Deng et al.~\citer{deng2016data} define metrics for evaluating metrics: directionality and sensitivity. These measure, respectively, whether a change in the metric aligns with good user experience and how often it detects a change in user experience.
Usability metrics are hard to define since they are not directly measurable without specialized equipment, such as eye-tracking hardware or focus groups. The measurements that are available, such as clicks or time spent on the site, do not directly inform on whether a change is an improvement or degradation in user experience. In addition, good user experience does not necessarily correlate positively with business value, e.g. clickbait titles for news articles are bad user experience but generate short term revenue. Researchers from Yandex~\citer{budylin2018consistent,drutsa2015sign,drutsa2015future,drutsa2017using,kharitonov2017learning,poyarkov2016boosted} are active in this area, with the following methods focused on usability metrics: detecting whether a change in a metric is a positive or negative user experience~\citer{drutsa2015sign}; learning sensitive combinations of metrics~\citer{kharitonov2017learning}; quantifying and detecting trends in user learning~\citer{drutsa2017using}; predicting future behavior to improve sensitivity~\citer{drutsa2015future}; applying machine learning for variance reduction~\citer{poyarkov2016boosted}; and finally correcting misspecified usability metrics~\citer{budylin2018consistent}. Machmouchi et al.~\citer{machmouchi2017beyond}, at Microsoft, designed a rule-based classifier where each user action is either a frustration or benefit signal; the tool then aggregates all such user actions taken during a session into a single robust utility metric.
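As one concrete example of the kind of variance reduction mentioned above, the sketch below implements a simple covariate adjustment using pre-experiment data, in the style of the well-known CUPED technique. This is a generic illustration under our own assumptions, not the boosted machine-learning method of the cited Yandex paper.

```python
def cuped_adjust(y, x):
    """Reduce the variance of metric values y by subtracting the part
    explained by a pre-experiment covariate x (e.g. each user's activity
    in the month before the experiment). The adjusted metric has the same
    mean structure but variance shrunk by a factor (1 - corr(x, y)^2)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    varx = sum((xi - mx) ** 2 for xi in x) / (len(x) - 1)
    theta = cov / varx  # regression slope of y on x
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]
```

Lower variance means smaller experiments can detect the same effect size, which is why such adjustments are attractive for usability metrics with weak signals.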
\subsubsection{Variants of controlled experiments design}
Most documented experiments conducted in industry are univariate A/B/n-tests~\citer{ros2018continuous2}, where one or more treatments are tested against a control. Extensions to classical designs include a two-staged approach to A/B/n tests~\citer{deng2014statistical} and a design to estimate causal effects between variables in a multivariate test (MVT)~\citer{peysakhovich2018learning}. MVTs are cautioned against~\citer{kohavi2014seven} because of their added complexity. In contrast, other researchers take an optimization approach using many variables (see Section~\ref{subsec:automated_controlled}) with multi-armed bandits~\citer{claeys2017regression,hill2017efficient,mattos2018optimization,ros2018continuous} or search-based methods~\citer{miikkulainen2017conversion,ros2017automated,tamburrelli2014towards}. Mixed methods research is also used to combine quantitative and qualitative data. Controlled experiments require deployment; feedback from users at earlier stages of development can thus be cheaper. There are works on combining results of such qualitative methods~\citer{bosch2016speed} and on collecting them in parallel with A/B tests~\citer{speicher2014ensuring}.
\subsubsection{Quasi-experiments}
A quasi-experiment (or natural experiment) is an experiment that is done sequentially instead of in parallel; this definition is the same as in empirical research in general~\cite{wohlin2012experimentation}. The reason for using quasi-experiments is their lower technical complexity. In fact, any software deployment can have its impact measured by observing the effect before and after deployment. The drawback is that analyzing the results can be difficult due to the high risk of external changes affecting the result. That is, if anything extraordinary happens roughly at the same time as the release, it might not be possible to properly isolate the results. Since the world of software is in constant change, the use of quasi-experiments is challenging. The research directions on quasi-experiments involve how to eliminate external sources of noise to get more reliable results. This is studied at Amazon~\citer{hill2015measuring} and LinkedIn~\citer{xu2016evaluating}, particularly for environments where continuous deployment is hard (such as mobile app development).
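One standard way to remove external noise from such before/after comparisons is a difference-in-differences estimate, sketched below. This is a generic textbook device shown for illustration, not the specific method of the cited papers; the existence of a suitable comparison group is an assumption we introduce.

```python
from statistics import mean

def difference_in_differences(before_t, after_t, before_c, after_c):
    """Estimate a deployment's effect in a quasi-experiment.
    The change observed in an unaffected comparison group (e.g. a market
    where the release did not ship) is subtracted from the change in the
    treated group, cancelling external events that hit both groups around
    the release."""
    treated_change = mean(after_t) - mean(before_t)
    comparison_change = mean(after_c) - mean(before_c)
    return treated_change - comparison_change
```

The estimate is only as good as the assumption that both groups would have trended alike without the release (the parallel-trends assumption).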
\subsubsection{Automated controlled experimentation with optimization algorithms}
\label{subsec:automated_controlled}
With an optimization approach, the allocation of users to the treatment groups is dynamically varied to optimize an OEC, such that treatments that perform well continuously receive more and more traffic over time. With sufficient automation, these techniques can be applied to many treatment variables simultaneously. This is not a replacement for classical designs; in an interview study by Ros and Bjarnason~\citer{ros2018continuous}, practitioners explain that such techniques are often themselves validated using A/B tests. In addition, based on the studies included here, only certain parameters are eligible, such as the design and layout of components in a GUI, or parameters to machine learning algorithms or recommender systems. Some of these optimizations are black-box methods, where multiple variables are changed simultaneously and with little opportunity to make statistical inferences from the experiments.
Tamburrelli and Margara~\citer{tamburrelli2014towards} proposed search-based methods (i.e. genetic algorithms) for optimization of software, and Iitsuka and Matsuo~\citer{iitsuka2015website} demonstrated a local search method with a proof of concept on web sites. Miikkulainen~\citer{miikkulainen2017conversion}, at Sentient Technologies, has a commercial genetic algorithm profiled for optimizing e-commerce web sites.
Bandit optimization algorithms are also used in industry, at Amazon~\citer{hill2017efficient} and AB Tasty~\citer{claeys2017regression}; they are a more rigorous formalism that requires the specification of a statistical model of how the OEC behaves. Ros et al.~\citer{ros2017automated} suggested a unified approach of genetic algorithms and bandit optimization.
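To illustrate how a bandit dynamically shifts traffic toward better-performing treatments, the following sketch implements Thompson sampling for a binary OEC such as conversion. It is a generic textbook algorithm shown for illustration; the function names and simulation setup are our assumptions, not the systems of the cited companies.

```python
import random

def thompson_assign(successes, failures):
    """Sample a conversion rate from each variant's Beta posterior and
    route the user to the variant with the highest sample. Variants that
    perform well thus receive more and more traffic over time."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def simulate(true_rates, n_users=5000, seed=1):
    """Simulate users arriving one by one; returns traffic per variant."""
    random.seed(seed)
    k = len(true_rates)
    succ, fail, pulls = [0] * k, [0] * k, [0] * k
    for _ in range(n_users):
        arm = thompson_assign(succ, fail)
        pulls[arm] += 1
        if random.random() < true_rates[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return pulls
```

In a simulation with two variants converting at 5% and 10%, the better variant ends up receiving most of the traffic, which is exactly the opportunity-cost advantage over a fixed 50/50 split.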
Similar algorithms exist to handle continuous variables, as is needed for hardware parameters~\citer{gerostathopoulos2018tool,mattos2018optimization} and for optimizing machine learning and compiler parameters~\citer{letham2019constrained}.
Two studies apply optimization~\citer{kharitonov2015optimised,schermann2018search} to scheduling multiple standard A/B tests to users, where only a single treatment is administered to each user. The idea is to optimize an OEC without sacrificing statistical inference.
\subsubsection{Variability management}
Experimentation incurs increased variability\textemdash by design\textemdash in a software system. This topic deals with solutions in the form of tools and techniques to manage said variability. In terms of an experiment platform, this can be part of the \emph{experiment execution service} and/or the \emph{experimentation portal}~\citer{gupta2018anatomy}.
There have been attempts at using formal methods to impose systematic constraints and structure on the configuration of how the variables under experimentation interact. C{\'a}mara and Kobsa~\citer{cmara2009facilitating} suggest using a feature model of the software parameters in all experiments. This work has not advanced beyond a proof-of-concept stage.
Neither in our study nor in the survey by Schermann et al.~\citer{schermann2018doing} is there any evidence of formal methods in a dynamic and constantly changing experimentation environment. The focus of the tools in actual use is rather on flexibility and robustness~\citer{bakshy2014designing,tang2010overlapping}.
Rahman et al.~\citer{rahman2016feature} studied how feature toggles are used in industry. Feature toggles are ways of enabling and disabling features after deployment; as such, they can be used to implement A/B testing. They were found to be efficient and easy to manage but add technical debt.
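A minimal sketch of how feature toggles can double as A/B-test plumbing follows; the class and flag names are hypothetical, and the hash-based 50% split is one common convention, not the specific designs studied by Rahman et al.

```python
import zlib

class FeatureToggles:
    """Minimal toggle registry. A toggle lets a feature be switched on or
    off after deployment; a toggle predicate can also route a stable
    fraction of users into an A/B experiment."""

    def __init__(self):
        self._flags = {}

    def set_flag(self, name, enabled_for):
        """Register a flag; `enabled_for` is a predicate over a user id."""
        self._flags[name] = enabled_for

    def is_enabled(self, name, user_id):
        pred = self._flags.get(name)
        return bool(pred and pred(user_id))

toggles = FeatureToggles()
# Kill switch: the feature stays off for everyone until it is ready.
toggles.set_flag("new_checkout", lambda uid: False)
# A/B toggle: a stable ~50% of users, assigned by a deterministic hash.
toggles.set_flag("new_search",
                 lambda uid: zlib.crc32(uid.encode()) % 2 == 0)
```

The technical debt Rahman et al. observe stems from such predicates accumulating in the code base long after the corresponding experiments have concluded.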
A middle ground between formal methods and total flexibility has evolved in the tools employed in practice. Google has proprietary tools in place to manage overlapping experiments at large scale~\citer{tang2010overlapping}. In their tools, each experiment can claim resources used during experimentation and a scheduler ensures that experiments can run in parallel without interference. Facebook has published an open-source framework (PlanOut) specialized for configuring and managing experiments~\citer{bakshy2014designing}; it features a namespace management system for experiments running iteratively and in parallel. SAP has a domain-specific language~\citer{westermann2013experiment} for configuring experiments that aims at increasing automation.
Finally, Microsoft has the ExP platform, but none of the selected papers focus solely on the variability management aspect of it.
\subsubsection{Improved statistical methods}
The challenges with experimentation motivate improved statistical techniques specialized for A/B testing. There are many techniques for fixing specific biases, sources of noise, etc.: a specialized test for count data at SweetIM~\citer{borodovsky2011ab}; fixing errors with dependent data at Facebook~\citer{bakshy2013uncertainty}; improved diagnostic capabilities of A/A testing (which tests control against control, expecting no effect) at Yahoo~\citer{zhao2016online} and Oath~\citer{chen2017faster}; better calculation of the overall effect for features with low coverage at Microsoft~\citer{deng2015diluted}; fixing errors from personalization interference at Yahoo~\citer{das2013when}; fixing tests under telemetry loss at Microsoft~\citer{gupchup2018trustworthy}; correcting for selection bias at Airbnb~\citer{lee2018winners}; and algorithms for improved gradual ramp-up at Google~\citer{medina2018online} and LinkedIn~\citer{xu2018sqr}.
\subsubsection{Continuous monitoring}
Aborting controlled experiments prematurely in case of outstanding or poor results is a hotly debated topic on the internet and in academia, under the names of continuous monitoring, early stopping, or continuous testing. The reason for wanting to stop early is to reduce opportunity costs and to increase development speed. It is studied by Microsoft~\citer{deng2016continuous}, Yandex~\citer{kharitonov2015sequential}, Optimizely~\citer{johari2017peeking}, Walmart~\citer{abhishek2017nonparametric}, and Etsy~\citer{ju2019sequential}. This concept is similar to the continuous monitoring used in the DevOps community and continuous software engineering~\citer{fitzgerald2017continuous}, where it refers to the practice of monitoring a software system and sending alerts in case of faults. The issue with continuous monitoring of experiments is the increased chance of getting wrong results if it is carried out incorrectly.
Traditionally, the sample size of an experiment is fixed beforehand through a power calculation. If the experiment is instead continuously monitored and stopped as soon as a significant result appears, without any statistical adjustment, the false positive rate is inflated well beyond the nominal significance level.
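The inflation can be made concrete with a small simulation (an illustrative sketch, not taken from the reviewed papers): in an A/A test there is no true effect, so a fixed-horizon test at the 5\% level should flag significance roughly 5\% of the time, whereas an analyst who peeks repeatedly and stops at the first significant z-statistic errs far more often.

```python
import math
import random

def z_stat(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-statistic with a pooled standard error."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return 0.0 if se == 0 else (p_a - p_b) / se

def run_aa_test(n, peek_every, rng, z_crit=1.96):
    """Simulate an A/A test (both groups share the same conversion rate).

    Returns True if a 'peeking' analyst, testing after every `peek_every`
    users per group, would ever declare significance before reaching n."""
    a = b = 0
    for i in range(1, n + 1):
        a += rng.random() < 0.5
        b += rng.random() < 0.5
        if i % peek_every == 0 and abs(z_stat(a, i, b, i)) > z_crit:
            return True  # stopped early on a false positive
    return False

rng = random.Random(42)
trials = 1000
# With no true effect, a fixed-horizon test errs ~5% of the time;
# peeking every 100 users inflates this error rate several-fold.
false_positives = sum(run_aa_test(2000, 100, rng) for _ in range(trials))
print(f"false positive rate with peeking: {false_positives / trials:.2f}")
```

The sequential testing procedures from the cited work control this inflation with adjusted decision boundaries rather than by forbidding peeking altogether.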
\subsubsection{Qualitative feedback}
\label{sec:results_qualitative}
While the search strategy in this work was focused on controlled experiments, research on qualitative feedback was also included, drawn from experience reports on using many different types of feedback collection methods, for example at Intuit~\citer{bosch2012building,bosch2016speed} and Facebook~\citer{feitelson2013development}. The qualitative methods are used as complements to quantitative methods, either as a way to better explain results or as a way to obtain feedback earlier in the process, before a full implementation is built. That is, qualitative feedback can be collected on early user experience sketches or mock-ups. Another use of qualitative methods is to elicit hypotheses that can serve as a starting point for an experiment. Examples of methods include focus groups, interviews, and user observations.
In addition, at Unister~\citer{speicher2014ensuring} the authors explain how they collect qualitative user feedback in parallel with A/B tests, such that the feedback is split by experiment group. According to the authors, this is a way to get the best of both the quantitative and the qualitative worlds. It does require implementing a user interface for collecting the feedback in a non-intrusive way in the product. Also, the qualitative feedback will not be of as high quality as feedback gathered in person through, e.g., user observation or focus groups.
\subsection{What are the challenges with continuous experimentation (RQ3)?}
\label{sec:results:challenges}
Continuous experimentation encompasses much of the software engineering process: it requires both infrastructure support and a rigorous experimentation process that connects the software product with business value. As such, many things can go wrong, and the challenges presented here are an attempt at describing such instances. Most of the research on challenges is evaluation research, based on interviews or experience reports.
Many of the challenges are severe, in that they present a hurdle that must be overcome to conduct continuous experimentation. A failure in any of the respective categories of challenges will make an experiment: unfeasible due to technical reasons, not considered by unresponsive management, untrustworthy due to faulty use of statistics, or without a business case.
The analysis of the papers revealed six categories of challenges (see Table \ref{tab:challenges}) that are discussed in the following in more detail.
\begin{table}[tpbh]
\centering
\renewcommand{\arraystretch}{1.2}
\caption{Summary of challenges with continuous experimentation per category with description and key references that focus on them.}
\label{tab:challenges}
\footnotesize
\vspace{-15pt}
\begin{tabularx}{\textwidth}{@{}>{\raggedright}p{7.5em}Xr@{}}
\toprule
\textbf{Challenge} & \textbf{Description} & \textbf{References} \\
\midrule
\multicolumn{2}{l}{\emph{1. Cultural, organizational, and managerial challenges}} \vspace{-2pt}\\
\midrule
Knowledge building & There are many roles and skills required, so staff need continuous training. & \citer{kohavi2013online,rissanen2015continuous,yaman2017introducing} \\
Micromanagement & Experimentation requires management to focus on the process (cf. HiPPO in Section \ref{sec:challenge_soft}). & \citer{kohavi2009online} \\
Lack of adoption & Engineers need to be onboarded on the process as well as managers. & \citer{lindgren2016raising} \\
Lack of communication & Departments and teams should share their results to aid each other. & \citer{rissanen2015continuous,yaman2017introducing} \\
\midrule
\multicolumn{2}{l}{\emph{2. Business challenges}} \vspace{-2pt}\\
\midrule
Low impact & Experimentation might focus efforts on incremental development with insufficient impact. & \citer{fitzgerald2017continuous,olsson2017experimentation} \\
Relevant metrics & The business model of a company might not facilitate easy measurement. & \citer{dmitriev2017dirty,fabijan2018online2,lindgren2016raising} \\
Data leakage & Companies expose internal details about their product development with experimentation. & \citer{conti2018spot} \\
\midrule
\multicolumn{2}{l}{\emph{3. Technical challenges}} \vspace{-2pt}\\
\midrule
Continuous delivery & The CI/CD pipeline should be efficient to obtain feedback fast. & \citer{fabijan2018online2,lindgren2016raising,schermann2018doing} \\
Continuous deployment & Obstacles exist to putting deliveries in production, e.g. on-premise installations in B2B. & \citer{rissanen2015continuous}\\
Experimental control & Dividing users into experimental groups has many subtle failure possibilities. & \citer{crook2009seven,dmitriev2016pitfalls,kohavi2008controlled} \\
\midrule
\multicolumn{2}{l}{\emph{4. Statistical challenges}} \vspace{-2pt}\\
\midrule
Exogenous effects & Changes in environment can impact experiment results, e.g. trend effects in fashion. & \citer{dmitriev2016pitfalls,kohavi2012trustworthy}\\
Endogenous effects & Experimentation itself causes effects, such as carry-over or novelty effects. & \citer{dmitriev2016pitfalls,lu2014separation} \\
\midrule
\multicolumn{2}{l}{\emph{5. Ethical challenges}} \vspace{-2pt}\\
\midrule
Data privacy & GDPR gives users extensive rights to their data which companies must comply with. & \citer{yaman2017notifying} \\
Dark patterns & A narrow focus on numbers only can lead to misleading user interfaces. & \citer{jiang2019whos} \\
\midrule
\multicolumn{2}{l}{\emph{6. Domain specific challenges}} \vspace{-2pt}\\
\midrule
Mobile & The app marketplaces impose constraints on deployment and variability. & \citer{lettner2013enabling,xu2016evaluating,yaman2018continuous} \\
Cyber-physical systems & Making continuous deployments can be infeasible for cyber-physical systems. & \citer{bosch2016data,giaimo2017considerations,mattos2018challenges} \\
Social media & Users of social media influence each other which impacts the validity of experiments. & \citer{azevedo2018estimatino,backstrom2011network,choi2017estimation} \\
E-commerce & Experimentation needs to be able to differentiate effects from products and software changes. & \citer{goswami2015controlled,wang2018designing} \\
\bottomrule
\end{tabularx}
\end{table}
\subsubsection{Cultural, organizational, and managerial challenges}
\label{sec:challenge_soft}
The challenges to organizations and management are broad in scope, including: difficulty in changing the organizational culture to embrace experimentation~\citer{lindgren2016raising}; building experimentation skills among employees across the whole organization~\citer{kohavi2013online,yaman2017introducing}; and finally communicating results and coordinating experiments in business to business, where there are stakeholders involved across multiple organizations~\citer{rissanen2015continuous,yaman2017introducing}.
A fundamental challenge that has to be faced by organizations adopting continuous experimentation is the shift from the highest-paid person's opinion (HiPPO)~\citer{kohavi2007practical,kohavi2009online} to data-driven decision making. If managers are used to making decisions about the product, they might not take into account experimental results that run counter to their intuition. Thus, decision-makers must be open to having their opinions changed by data; otherwise the whole endeavor of experimentation is futile.
\subsubsection{Business challenges}
The premise behind continuous experimentation is to increase the business value of software development efforts. The most frequent challenge in realizing this is defining relevant metrics that measure business value~\citer{dmitriev2016pitfalls,dmitriev2017dirty,fabijan2018online2,lindgren2016raising,yaman2017introducing}. In some instances the metric is only indirectly connected to business value: for example, in a business-to-business (B2B) company with a revenue model that is not affected by the software product, improving product quality and user experience will not have a direct business impact. Also, the impact of experiments might not be sufficient in terms of actual effect~\citer{kohavi2014seven,olsson2017experimentation}. Fitzgerald and Stol~\citer{fitzgerald2017continuous} argue that continuous experimentation and innovation can lead to incremental improvements only, at the expense of more innovative changes that could have had a bigger impact. Another business challenge of continuous experimentation was highlighted by Conti et al.~\citer{conti2018spot}; they crawled web sites repeatedly and tried to automatically detect differences in server responses. Thereby they showed how easily such data leakage can facilitate industrial espionage on what competitors are developing.
\subsubsection{Technical challenges}
Efficient continuous deployment facilitates efficient experimentation. Faster deployment speed shortens the delay between a hypothesis and the result of an experiment. The ability to have an efficient continuous delivery cycle is cited as a challenge both for large~\citer{kohavi2008controlled} and small companies~\citer{fabijan2018online2,lindgren2016raising,schermann2018doing}. In addition, continuous deployment is further complicated in companies involved in business to business (B2B)~\citer{rissanen2015continuous}, where deployment has multiple stakeholders involved over multiple organizations.
In a laboratory experiment setting, it is possible to control variables, such as ensuring homogeneous computer equipment for all groups and ensuring that all groups have an equal distribution in terms of gender, age, education, etc. For online experiments, such controls are much harder to achieve due to subtle technical reasons. Examples include: users assigned incorrectly to groups due to various bugs~\citer{kohavi2008controlled}; users changing groups because they delete their browsing history or because multiple persons share the same computer~\citer{coey2016people,deng2017trustworthy,dmitriev2016pitfalls,kohavi2008controlled}; and robots from search engines causing abnormal traffic that affects the results~\citer{crook2009seven,kohavi2011unexpected}.
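A common mitigation for unstable group assignment (sketched here generically; this is not a technique attributed to any specific cited paper) is deterministic bucketing: hashing a persistent user identifier together with an experiment-specific salt makes the assignment reproducible across sessions and statistically independent across experiments.

```python
import hashlib

def assign_group(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to an experiment group.

    Hashing the user id together with an experiment-specific salt means the
    same user always lands in the same group for a given experiment, while
    different experiments get effectively independent splits."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# Assignment is stable across repeated visits of the same user.
assert assign_group("user-123", "new-checkout") == assign_group("user-123", "new-checkout")

# With enough users the realized split approaches the configured share.
groups = [assign_group(f"user-{i}", "new-checkout") for i in range(10000)]
print(groups.count("treatment") / len(groups))
```

This only addresses assignment stability on the server side; the reported problems of deleted cookies or shared computers concern the user identifier itself and need separate handling.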
\subsubsection{Statistical challenges}
Classical experimental design as advocated by the early work on continuous experimentation and A/B-testing~\citer{kohavi2007practical} does not account for time series. Not only can it be hard to detect the presence of effects related to trends, but they can also have an effect on the results. Some of these trend effects occur due to outside influence, so-called \emph{exogenous effects}, for example, due to seasonality caused by fashion or other events which can affect traffic~\citer{dmitriev2016pitfalls,kohavi2012trustworthy}. With domain knowledge, these effects can be accounted for. For example in e-commerce, experiment results obtained during Christmas shopping week might not transfer to the following weeks.
Other statistical challenges are caused by the experimentation itself, called \emph{endogenous effects}, such as the carryover effect~\citer{kohavi2012trustworthy,lu2014separation}, where the result of an experiment can affect the result of a following experiment. There are also endogenous effects caused intentionally, through what is known as ramp-up, where the traffic to the test group is initially low (such as a 5\%/95\% split) and incrementally increased to the full 50\%/50\% split. This is done to minimize the opportunity cost of a faulty experiment design. It can be difficult to analyze the results of such experiments~\citer{crook2009seven,kohavi2011unexpected}. Furthermore, learning and novelty effects, where users change their impression of a feature after using it for a while, are also challenging~\citer{dmitriev2016pitfalls,lu2014separation}.
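One way ramp-up is commonly kept analyzable can be sketched as follows (our illustration, not an algorithm from the cited work): if admission to the treatment is decided by comparing a fixed per-user hash bucket against the current ramp fraction, then raising the fraction only adds users to the treatment cohort and never reshuffles existing assignments.

```python
import hashlib

def bucket(user_id: str, salt: str) -> float:
    """Fixed, uniform pseudo-random bucket in [0, 1] per user and experiment."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def in_treatment(user_id: str, experiment: str, ramp: float) -> bool:
    """Admit users whose bucket falls below the current ramp fraction.

    Because buckets are fixed, raising `ramp` from 0.05 to 0.50 only adds
    users to the treatment cohort; nobody is moved back to control."""
    return bucket(user_id, experiment) < ramp

users = [f"user-{i}" for i in range(10000)]
early = {u for u in users if in_treatment(u, "new-ranking", 0.05)}
late = {u for u in users if in_treatment(u, "new-ranking", 0.50)}
print(len(early), len(late), early <= late)
```

The monotone-cohort property avoids one source of carryover, but as the cited papers note, the early cohort has still been exposed longer, which is part of what makes ramped experiments harder to analyze.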
Endogenous effects will be hard to foresee until experimentation is implemented in a company. As such, handling statistical challenges is an ongoing process that will require more and more attention as experimentation is scaled up.
\subsubsection{Ethical challenges}
Whenever user data is involved there is a potential for ethical dilemmas.
When Yaman et al.~\citer{yaman2017notifying} surveyed software engineering practitioners, the only question they agreed on was that users should be notified if personal information is collected. Since GDPR went into effect in 2018, this is now a requirement.
Jiang et al.~\citer{jiang2019whos} investigate how A/B testing tools are used to illegally discriminate against certain demographics, e.g., by adjusting prices or filtering job ads. These are examples of what is known as \emph{dark patterns} in the user experience (UX) research community~\cite{gray2018dark}. The study was limited to sites exposing front-end metadata from Optimizely (a commercial experimentation platform).
\subsubsection{Domain specific challenges}
Some software sectors have domain-specific challenges or require specialized techniques for experimentation. The analysis of the papers revealed four prominent domains: 1) mobile apps, 2) cyber-physical systems, 3) social media, and 4) e-commerce. Whether all of these concerns are truly domain-specific is debatable. However, these studies were all clear about the domain in which their challenges occurred.
Continuous deployment of mobile apps must pass through the proprietary app stores (Google Play and Apple's App Store), which imposes a bottleneck on experimentation. Lettner et al.~\citer{lettner2013enabling} and Adinata and Liem~\citer{adinata2014ab} have developed libraries that load new user interfaces at run time, which would otherwise (at the time of writing, in 2013 and 2014 respectively) require a new deployment through the store. Xu et al.~\citer{xu2016evaluating} at LinkedIn instead advocate the use of quasi-experimental designs. Finally, Yaman et al.~\citer{yaman2018continuous} have conducted an interview study on continuous experimentation, in which they emphasize user feedback in the earlier stages of development (which do not require deployment).
Embedded systems, cyber-physical systems, and smart systems face challenges similar to those of mobile apps, notably continuous deployment. None of the publications studied in this review claims widespread adoption of experimentation at an organizational level in this domain. This suggests that research on experimentation for embedded software is at an early stage.
Mattos et al.~\citer{mattos2018challenges} and Bosch and Holmstr{\"o}m Olsson~\citer{bosch2016data} outline challenges and research opportunities in this domain, among them are: continuous deployment, metric definition, and privacy concerns. Bosch and Eklund~\citer{bosch2012eternal,eklund2012architecture} describe required architecture for experimentation in this domain with a proof-of-concept on vehicle entertainment systems. Giaimo et al.~\citer{giaimo2017considerations,giaimo2016continuous} cite safety concerns and resource constraints for the lack of continuous experimentation.
The cyber-physical systems domain also includes experimentation where the source of noise is not human users, but rather hardware. The research on self-adaptive systems overlaps with continuous experimentation: Gerostathopoulos et al.~\citer{gerostathopoulos2016architectural} have described an architecture for how self-adaptive systems can perform experimentation, with optimization algorithms~\citer{gerostathopoulos2018adapting} that can handle non-linear interactions between hardware parameters~\citer{gerostathopoulos2018cost}. In addition, two pieces of work~\citer{buchert2015survey,jayasinghe2013automated} on distributed systems focus on experimentation, with a survey and a tool on how distributed computing can support experimentation for, e.g., cloud providers.
Backstrom et al.~\citer{backstrom2011network} from Facebook describe how users of social media influence each other across experiment groups (thus violating the independence assumption of statistical tests); they call this the network effect. It is also present at Yahoo~\citer{katzir2012framework} and LinkedIn~\citer{gui2015network,saveski2017detecting,xu2015from}. The research on the network effect includes: ways of detecting it~\citer{saveski2017detecting}, estimating its effect on cliques in the graph~\citer{azevedo2018estimatino,choi2017estimation}, and reducing the interference caused by it~\citer{eckles2016design}.
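The basic idea behind several of these mitigations can be sketched as graph cluster randomization (a deliberately simplified illustration; the cited papers use considerably more sophisticated designs and estimators): users are first grouped into clusters of densely connected friends, and each cluster is then assigned to a single arm, so that most social ties stay within one treatment condition.

```python
import random

def cluster_randomize(clusters, rng):
    """Assign every user in a cluster to the same experiment arm.

    Friends mostly share a cluster, so randomizing at the cluster level
    keeps most edges within one arm and limits cross-arm interference."""
    assignment = {}
    for cluster in clusters:
        arm = rng.choice(["treatment", "control"])
        for user in cluster:
            assignment[user] = arm
    return assignment

# Toy friendship clusters, as produced by some community-detection step.
clusters = [{"ann", "bob"}, {"eve", "dan", "kim"}, {"lee"}]
assignment = cluster_randomize(clusters, random.Random(7))
# All members of a cluster share the same arm.
print(all(len({assignment[u] for u in c}) == 1 for c in clusters))
```

The trade-off is fewer effective randomization units (clusters instead of users), which reduces statistical power; the estimation papers cited above address exactly this tension.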
The final domain considerations come from e-commerce. At Walmart, Goswami et al.~\citer{goswami2015controlled} describe the challenges caused by seasonality effects during holidays and how they strive to minimize the opportunity cost caused by experimentation. At Ebay, according to Wang et al.~\citer{wang2018designing}, the challenges are caused by the large number of auctions that they need to group with machine learning techniques for the purpose of experimental control.
\subsection{What are the benefits with continuous experimentation (RQ4)?}
Many authors mention the benefits of continuous experimentation only in passing as motivation~\citer{bosch2012building,kohavi2009online}; few papers address them explicitly (e.g. \citer{fabijan2017benefits}).
Bosch~\citer{bosch2012building} mentions the reduced cost of collecting passive customer feedback with continuous experimentation in comparison with active techniques like surveys. Also, Bosch claims that customers have come to expect software services to continuously improve themselves and that experimentation can provide the means to do that in a process that can be visible to users.
Kohavi et al.~\citer{kohavi2009online} claim that edge cases that are only relevant for a small subset of users can take a disproportionate amount of the development time. Experimentation is argued for as a way to focus development, by first ensuring that a feature solves a real need with a small experiment and then optimizing the respective feature for the edge cases with iterative improvement experiments. In this way, unnecessary development on edge cases can be avoided if a feature is discarded early on.
Fabijan et al.~\citer{fabijan2017benefits} focus solely on benefits, differentiated between three levels as follows. 1) At the \emph{portfolio level}, the impact of changes on the customer as well as on business value can be measured, which is of great benefit to company-wide product portfolio development. 2) At the \emph{product level}, the product quality is incrementally improved and its complexity reduced by removing unnecessary features.
Finally, 3) at the \emph{team level}, the findings of experiments help the related teams prioritize their development activities given the lessons learned from the conducted experiments. Another benefit for teams is that team goals can be expressed in terms of metric changes, which makes their progress measurable.
\section{Discussion}
\label{sec:discussion}
This study builds on two prior independent mapping studies to provide an overview of the conducted research. This review has been conducted to answer four research questions that can guide practitioners. In the following, the results of the study are discussed for each research question, in the form of recommendations to practitioners and implications for researchers.
\subsection{Required frameworks (RQ1)}
The first research question (RQ1) about the core constituents of a framework for continuous experimentation revealed two integral parts of experimentation, the \emph{experimentation process} and the technical as well as organizational \emph{infrastructure}.
\subsubsection{Process for continuous experimentation}
In the literature, several experimentation process models were found on the phases of conducting online controlled experimentation. They describe the overall process~\citer{fagerholm2017right}, represent the established experiment process of organizations~\citer{kevic2017characterizing}, or cover specific parts of the experiment cycle~\citer{fabijan2018effectivePAPER}. Given that all models describe a process with the same overall objective of experimentation, it can become difficult to decide between them. Two reference models are published \citer{fagerholm2017right,olsson2014from}, which may be used as a basis for future standardization of the field. Future research is needed to give guidance in the selection between models and variants.
Many of the experience reports~\citer{kohavi2014seven,kohavi2011unexpected} warn about conducting experiments with too broad a scope; instead, they recommend that all experiments be done on a minimum viable product or feature~\citer{fagerholm2017right}. However, the warnings all come from real lessons learned from having conducted such expensive experiments. We believe that the current process models do not put sufficient emphasis on conducting small experiments. For example, they could make a distinction between prototype experiments and controlled experiments on a completed project. That way, if the prototype reveals flaws in the design, a full implementation is avoided.
As such, our recommendation to practitioners in regards to process is to follow one of the reference experimentation processes~\citer{fagerholm2017right,olsson2014from} and in addition add the following two steps to minimize the cost of experiments. First, to spend more time before experimentation to ensure that experiments are really on a minimum viable feature by being diligent about what requirements are strictly needed at the time. Second, that experiments should be pre-validated with prototypes, data analysis, etc.
\subsubsection{Infrastructure for continuous experimentation}
The research on the infrastructure required to enable continuous experimentation was primarily focused on large scale applications within mature organizations (e.g. Microsoft \citer{gupta2018anatomy}). One reason for this focus may be the large number of publications (e.g. experience reports) from researchers associated with large organizations. The large number of industrial authors indicates a high interest in the topic among practitioners.
However, the community's focus should not be restricted to large scale applications only. The application of continuous experimentation within smaller organizations has many open research questions. These organizations pose additional challenges for experimentation because they likely have less existing infrastructure and a smaller user base. For example, the development of sophisticated experimentation platforms may not be feasible to the extent that it is for large organizations. Thus, lightweight approaches to experimentation that do not require large up-front investments could make experimentation more accessible to smaller organizations.
Technical infrastructure has not been reported as a significant hurdle by any of the organizations in this study in which continuous experimentation was introduced. The technical challenges seem to appear later, when the continuous experimentation process has matured and the scale of experimentation needs to ramp up. Rather, the organizational infrastructure seems to be what might cause an inability to conduct experimentation. The challenges presented in Section~\ref{sec:results:challenges} support this claim as well, so the more severe infrastructural requirements appear to be organizational~\citer{fitzgerald2017continuous} and culture oriented~\citer{kohavi2009online,xu2015from}, at least to get started with experimentation. The reason for this is that experimentation often involves decision making that traditionally falls outside the software development organization. For example, deciding on which metric the software should be optimized for might even need to involve the company's board of directors. Hence, the recommendation to practitioners is not to treat continuous experimentation as a project that can be solved by software development alone. The whole organization needs to be involved, e.g., to find metrics and to ensure that the user data to measure them can be acquired. Otherwise, if the software development organization conducts experimentation in isolation, the soft aspects of infrastructure might be lacking or the software might be optimized with the wrong goal in mind.
\subsection{Solutions applied (RQ2)}
Concerning the solutions that are applied within continuous experimentation (RQ2), the literature analysis revealed solutions for qualitative feedback, variants of controlled experiment design, quasi-experiments, automated controlled experimentation with optimization algorithms, statistical methods, continuous monitoring, data mining, variability management, and metric specification. For each of these solutions, themes were proposed. One observation was that the validation of most proposed solutions could be further improved by providing the used data sets, a context description, or the steps necessary to reproduce the presented results. Also, many interesting solutions would benefit from further applications that demonstrate their applicability in practice.
Another observation was that many solutions are driven by practical problems of the author's associated organization (e.g. evaluation of mobile apps \citer{hill2015measuring}). This has the advantage that the problems are of relevance for practice and the provided solutions are assumed to be applicable in similar contexts. Publications of this kind are guidelines for practitioners and valuable research contributions.
There are many solutions for practitioners to choose from; most of them solve a very specific problem that has been observed at a company. In Figure~\ref{fig:rq2-overview}, the solutions are arranged by phase of the experimentation process. What follows is additional guidance for practitioners on which solution to apply for a given problem, expressed in what is known in the design science tradition as technological rules~\cite{engstrom2020software}:
\begin{itemize}
\item to achieve \emph{additional insights} in concluded experiments apply \emph{1) data mining} that automatically segments results for users' context;
\item to achieve \emph{more relevant results} in difficult to measure software systems apply \emph{2) metric specification techniques};
\item to achieve \emph{richer experiment feedback} in continuous experimentation apply \emph{3) variants of controlled experiments design} or \emph{9) qualitative feedback};
\item to achieve \emph{quantitative results} in environments where parallel deployment is challenging apply \emph{4) quasi-experiments};
\item to achieve \emph{optimized user interfaces} in software systems that can be evaluated on a single metric apply \emph{5) automated controlled experimentation with optimization algorithms};
\item to achieve \emph{higher throughput of experiments} in experimentation platforms apply \emph{6) variability management techniques} to specify overlapping experiments;
\item to achieve \emph{trustworthy results} in online controlled experiments apply \emph{7) improved statistical methods} or \emph{1) data mining} to calibrate the statistical tests;
\item to achieve \emph{faster results} in online controlled experiments apply \emph{8) continuous monitoring} to help decide when experiments can be stopped early.
\end{itemize}
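To illustrate rule 5), automated experimentation with optimization algorithms can be as simple as a multi-armed bandit that shifts traffic toward the better-performing variant while the experiment is still running. The following epsilon-greedy sketch is a minimal illustration under assumed conversion rates, not one of the (far more sophisticated) algorithms used in the cited platforms:

```python
import random

def epsilon_greedy(conversion_rates, steps, epsilon, rng):
    """Minimal epsilon-greedy bandit over simulated variants.

    Mostly exploit the best-looking variant, occasionally explore a
    random one, so traffic gradually concentrates on the winner."""
    n = len(conversion_rates)
    pulls, wins = [0] * n, [0] * n

    def estimate(i):
        # Untried arms get an optimistic estimate so each is tried once.
        return wins[i] / pulls[i] if pulls[i] else float("inf")

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)             # explore
        else:
            arm = max(range(n), key=estimate)  # exploit
        pulls[arm] += 1
        wins[arm] += rng.random() < conversion_rates[arm]
    return pulls

rng = random.Random(1)
# Hypothetical true conversion rates for two variants (assumed values).
pulls = epsilon_greedy([0.02, 0.30], steps=20000, epsilon=0.1, rng=rng)
print(pulls)  # the better variant tends to receive most of the traffic
```

Note the precondition stated in the rule: this only works when the software can be evaluated on a single metric, since the bandit needs one scalar reward to optimize.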
\subsection{Challenges (RQ3)}
Many authors of the studied literature mentioned challenges with continuous experimentation in their papers. The thematic analysis of the challenges identified six fundamental challenge themes. Here they are presented along with the recommendations to mitigate the risks.
The \emph{cultural, organizational and managerial} challenges seem to indicate that the multi-disciplinary character of continuous experimentation introduces new requirements for the team. Among other things, it requires the collaboration of cross-functional stakeholders (i.e. business, design, and engineering). This can represent a fundamental cultural change within an organization. Hence, the adoption of continuous experimentation involves technical as well as cultural changes. Challenges like the lack of adoption support this interpretation. Mitigating these challenges involves taking a whole-organization approach to continuous experimentation, so that both engineers and managers are in agreement about conducting experimentation.
Another theme among the challenges is \emph{business}. The challenges assigned to this theme highlight that continuous experimentation faces difficulties in its economic application with respect to the financial return on investment. The focus of experimentation needs to be managed appropriately in order to prevent investing in incremental development with insufficient impact. Another business challenge is that changes cannot always be measured with a relevant metric.
One possible approach for further research on these challenges could be the transfer of solutions from other disciplines to continuous experimentation. An example of this is the overall evaluation criterion \cite{van2002design} that was adapted to continuous experimentation by Kohavi et al.~\citer{kohavi2007practical}. As with the previous challenge theme, this theme of challenges does not have an easy fix. It might be the case that experimentation is simply not applicable for all software companies, but further research is needed to determine this.
Concerning the \emph{technical} challenges, the literature review showed that there are challenges related to continuous deployment/delivery and experimental control. The delivery of changes to production is challenging, especially for environments that are used to no or infrequent updates, like embedded devices. For such edge cases, new deployment strategies have to be found that are suitable for continuous experimentation. Although solutions from continuous deployment seem fitting, they need to be extended with mechanisms to control the experiment at run-time (e.g. to stop an experiment). This can be challenging in environments for which frequent updates are difficult. There is proof-of-concept research~\citer{eklund2012architecture} addressing these challenges, so they do not seem to be insurmountable blockers to getting started with experimentation.
The \emph{statistical} challenges mentioned in the studied literature indicate that there is a need for solutions to cope with the various ways in which the statistical assumptions made in a controlled experiment are broken by changes in the real world. There are both changes in the environment (exogenous) and changes caused by the experimentation itself (endogenous). Changes in the environment (e.g. the effect of an advertisement campaign run by the marketing department) can alter the initial situation of an experiment and thus may lead to wrong conclusions about the results. Therefore, the knowledge about an experiment's environment and possible influences needs to be systematically advanced, and the experiments themselves should be designed to be more robust. Mitigating these challenges involves identifying and applying the correct solution for the specific problem. There is a further research opportunity to document and synthesize such problem--solution pairs.
\emph{Ethical} aspects are not investigated by many studies. The experience reports and lessons learned publications do not, for example, mention user consent or user's awareness of participation. Furthermore, ethical considerations about which experiments should be conducted or not were seldom discussed in the papers. There were still two challenges identified in this study, involving data privacy and dark patterns.
However, examples like the Facebook emotional manipulation study, which changed the user's news feed to determine whether it affects the subsequent posts of a user, show the need for ethical considerations in experimentation \cite{flick2016informed}. Although this was an experiment in the context of an academic study in psychology, the case nevertheless shows that there are open challenges on the topic of ethics and continuous experimentation. There is not enough research conducted for a concrete recommendation other than raising awareness of the existence of ethical dilemmas involving experimentation.
Continuous experimentation is applied in various domains that require \emph{domain-specific} solutions. The challenges of continuous experimentation range from infrastructure challenges through measurement challenges to social challenges.
Examples are the challenge to deploy changes in cyber-physical systems (infrastructural challenge), to differentiate the effects of one change from another (measurement challenge), and the influence of users on each other across treatment groups (social challenge).
Each challenge is probably only relevant for certain domains; however, the developed solutions may be adaptable to other domains. Thus, research on domain-specific challenges could turn solutions optimized for one domain into solutions for others.
\subsection{Benefits (RQ4)}
In many publications about continuous experimentation the benefits of experimentation are mentioned as motivation only, e.g., that it increases the quality of the product based on the chosen metrics.
The two publications on explicit benefits~\citer{bosch2012building,fabijan2017benefits} mention improvements not only to the product, in business-related metrics and usability, but also to the product portfolio offering, as well as generic benefits for the whole organization (better collaboration, prioritization, etc.). More studies are needed to determine, e.g., if there are more benefits, whether the benefits apply for all companies involved with experimentation, or whether the benefits could be obtained through other means.
Another benefit is the potential usage of continuous experimentation for software quality assurance. Continuous experimentation could support or even change the way quality assurance is done for software. A software change, for example, could only be deployed if key metrics are not degraded in the related change experiment. Thus, quality degradation could become quantifiable and measurable. Although some papers, like \citer{fabijan2017benefits}, mention the usage of continuous experimentation for software quality assurance, a dedicated investigation of this potential is still missing.
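The idea of gating a deployment on experiment metrics can be sketched as follows; the metric, the relative tolerance, and the decision rule are illustrative assumptions rather than a prescription from the reviewed papers:

```python
def metrics_not_degraded(baseline, candidate, tolerance=0.02):
    """Decide whether a change may be deployed: every key metric of the
    candidate must stay within a relative tolerance of the baseline.

    baseline, candidate: dicts mapping metric name -> mean value,
    where larger values are assumed to be better.
    """
    for name, base_value in baseline.items():
        cand_value = candidate[name]
        if cand_value < base_value * (1 - tolerance):
            return False  # degradation beyond tolerance -> block deployment
    return True
```

A gate like this makes quality degradation quantifiable: the tolerance is an explicit, reviewable quality requirement.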
\section{Conclusions}
\label{sec:conclusions}
This paper presents a systematic literature review of the current state of controlled experiments in continuous experimentation. Forward snowballing was applied on the selected paper sets of two previous mapping studies in the field. The 128 papers that were finally selected were qualitatively analyzed using thematic analysis.
The study found two constituents of a continuous experimentation framework (RQ1): an experimentation process and a supportive infrastructure. Based on experience reports that discuss failed experiments in the context of large-scale software development, the recommendation to practitioners is to apply one of the published processes, but also expand it by placing more emphasis on the ideation phase by making prototypes. As for the infrastructure, several studies discuss requirements for controlled experiments to ramp up the scale and speed of experimentation. Our recommendation for infrastructure is to consider the organizational aspects to ensure that, e.g., the necessary channels for communicating results are in place.
Ten themes of solutions (RQ2) were found that were applied in the various phases of controlled experimentation: data mining, metric specification, variants of controlled experiment design, quasi-experiments, automated controlled experimentation, variability management, continuous monitoring, improved statistical methods, and qualitative feedback. We have provided recommendations on what problem each solution theme solves for what context in the discussion.
Finally, the analysis of challenges (RQ3) and benefits (RQ4) of continuous experimentation revealed that only two papers focused explicitly on the benefits of experimentation. In contrast, multiple papers focused on challenges. The analysis identified six themes of challenges: cultural/organizational, business, technical, statistical, ethical, and domain-specific challenges. While the papers on challenges do outnumber the papers on benefits, there is no cause for concern, as the benefits to product quality are also mentioned in many papers as motivation to conduct the research. The challenges to experimentation also come with recommendations in the discussion on how to mitigate them.
As a final remark, we encourage practitioners to investigate the large body of highly industry-relevant research that exists for controlled experimentation in continuous experimentation, and researchers to pursue the many remaining gaps in the literature revealed within.
\section*{Acknowledgements}
This work was partially supported by the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation and the Austrian Science Fund (FWF): I 4701-N.
\section*{\refname}
\bibliographystyle{elsarticle-num}
\section{Introduction}
Nekrasov's instanton partition function \cite{Nekrasov:2002qd} for 4d $\mathcal{N}=2$ gauge theories
has uncovered various non-perturbative phenomena in these
theories. For instance, the Seiberg-Witten prepotential was
derived from the path integral \cite{Nekrasov:2002qd, Nekrasov:2003rj},
a relation to
integrable systems was discovered \cite{Nekrasov:2009rc}, and a novel 2d/4d
correspondence called the AGT correspondence was found
\cite{Alday:2009aq, Wyllard:2009hg}.
A generalization of the above success to theories
coupled to
strongly-coupled superconformal field theories (SCFTs) has partially been
studied. In particular, the AGT correspondence has been generalized in
\cite{Bonelli:2011aa, Gaiotto:2012sf} to gauge theories coupled to
Argyres-Douglas (AD) theories. We call these gauge theories ``gauged AD
theories.'' Since
AD theories have no weak-coupling limit, supersymmetric localization is not available for these
theories. As a result, the generalized AGT correspondence has been
the only promising way of evaluating the
instanton partition function of these theories.
One restriction of the generalized AGT correspondence was, however, that
it
was only applied to non-conformally gauged AD
theories.\footnote{Here, by ``non-conformally gauged,'' we mean that the
beta function of the gauge coupling is asymptotically free.} The reason
for this is that conformally gauged AD theories have no known
realization from 6d (2,0) $A_1$ theory, and therefore the AGT
correspondence is not directly applied to them. As a result, until recently, the instanton
partition function of conformally gauged AD theories was not evaluated.
A first idea for computing the instanton partition function of
conformally gauged AD theories was
provided in \cite{Kimura:2020krd}. A key ingredient is the $U(2)$-version of the generalized AGT
correspondence, which is stated in terms of irregular states of the direct sum of Virasoro and Heisenberg
algebras $Vir\oplus H$.
For instance,
let us consider $SU(2)$ gauge theory coupled to a fundamental
hypermultiplet and two copies of AD theory called $(A_1,D_4)$ (Fig.~\ref{fig:quiver1}). Here, the ``matter'' sector is precisely chosen so
that the beta function of the $SU(2)$ gauge coupling
vanishes. This coupled theory is also known as the
``$(A_3,A_3)$ theory.'' While the AGT correspondence cannot be directly
applied to the $(A_3,A_3)$ theory,
one can apply it to a factor in the following decomposition
of the partition function:
\begin{align}
\mathcal{Z} =
\mathcal{Z}_\text{pert}\sum_{Y_1,Y_2}q^{|Y_1|+|Y_2|}\mathcal{Z}^\text{vec}_{Y_1,Y_2}(a)\mathcal{Z}^\text{fund}_{Y_1,Y_2}(a,M)\prod_{i=1}^2\mathcal{Z}^{(A_1,D_{4})}_{Y_1,Y_2}(a,m_i,d_i,u_i)~,
\label{eq:U2-1}
\end{align}
where $a$ is the vacuum expectation value (VEV) of the Coulomb branch
operator in the vector multiplet, $q$ is the exponential of
the gauge coupling, the sum runs over pairs of Young
diagrams $(Y_1,Y_2)$, $|Y|$ stands for the number of boxes in a Young
diagram $Y$,
and $\mathcal{Z}^\text{vec}_{Y_1,Y_2}$ and
$\mathcal{Z}^\text{fund}_{Y_1,Y_2}$ are the contributions from the vector and hypermultiplets.\footnote{Here $\mathcal{Z}_\text{pert}$ is a prefactor that makes the
$q$-series start with $1$.}
The factor
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_4)}$ is the contribution from an
$(A_1,D_4)$ theory, which is hard to evaluate via localization but can
be evaluated via the $U(2)$-version of the generalized AGT
correspondence \cite{Kimura:2020krd}.
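To illustrate the structure of the sum in \eqref{eq:U2-1}: at order $q^n$ it runs over all pairs of Young diagrams with $|Y_1|+|Y_2|=n$, i.e.~the fixed points of the $n$-instanton moduli space for $U(2)$. A small script enumerating these pairs (an illustration of the index structure of the sum only; it does not compute the $\mathcal{Z}$ factors themselves):

```python
def partitions(n, max_part=None):
    """Yield all Young diagrams (partitions) of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def young_diagram_pairs(n):
    """All pairs (Y1, Y2) with |Y1| + |Y2| = n, indexing the fixed points
    summed over at order q^n of the instanton expansion."""
    return [(y1, y2)
            for k in range(n + 1)
            for y1 in partitions(k)
            for y2 in partitions(n - k)]
```

The number of such pairs at order $q^n$ is $\sum_{k=0}^{n}p(k)\,p(n-k)$ with $p$ the partition-counting function, e.g.~$1,2,5,10,\dots$ for $n=0,1,2,3,\dots$.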
\begin{figure}
\begin{center}
\begin{tikzpicture}[gauge/.style={circle,draw=black,inner sep=0pt,minimum size=8mm},flavor/.style={rectangle,draw=black,inner sep=0pt,minimum size=8mm},AD/.style={rectangle,draw=black,fill=red!20,inner sep=0pt,minimum size=8mm},auto]
\node[AD] (0) at (-1.8,0) {\;$(A_1,D_{4})$\;};
\node[gauge] (1) at (0,0) [shape=circle] {\;$2$\;} edge (0);
\node[AD] (2) at (1.8,0) {\;$(A_1,D_{4})$\;} edge (1);
\node[flavor] (3) at (0,1.3) {\;1\;} edge (1);
\end{tikzpicture}
\caption{The $(A_3,A_3)$ theory is
identical to the conformal $SU(2)$ gauge theory coupled to two
$(A_1,D_4)$ theories and a fundamental hypermultiplet of
$SU(2)$. Here, the middle circle with $2$ inside stands for an
$SU(2)$ vector multiplet, and the top box with $1$ inside stands
for a fundamental hypermultiplet.}
\label{fig:quiver1}
\end{center}
\end{figure}
The reason for the ``$U(2)$-version'' is that the decomposition
\eqref{eq:U2-1} is possible only when the gauge group is $U(2)$ instead
of $SU(2)$.
The difference between $U(2)$ and
$SU(2)$ gives rise to a prefactor of the partition function, known as the
$U(1)$-factor. By factoring out the $U(1)$-factor, one can read off the
partition function and the prepotential of
the $(A_3,A_3)$ theory from \eqref{eq:U2-1}. As discussed in \cite{Kimura:2020krd}, when dimensionful parameters are turned off except for $a$, the prepotential
$\mathcal{F}_{(A_3,A_3)}(q;a)$ of the $(A_3,A_3)$ theory read off as
above is in a surprising relation to the prepotential
$\mathcal{F}_{SU(2)}^{N_f=4}(q;a)$ of $SU(2)$ gauge theory with four
fundamental flavors, i.e.,
\begin{align}
2\mathcal{F}_{(A_3,A_3)}(q;a) = \mathcal{F}_{SU(2)}^{N_f=4}(q^2;a)~.
\label{eq:rel1}
\end{align}
This remarkable relation was then used to read off how the S-duality of
$(A_3,A_3)$ acts on the UV gauge coupling $q$.
While the above $U(2)$-version of the generalized AGT correspondence provides a novel way of
evaluating the instanton partition function of conformally gauged AD theories, one of its restrictions is that the formula
provided in \cite{Kimura:2020krd} is only for $(A_1,D_\text{even})$ theories. The
reason for this is that only irregular states of {\it integer}
ranks were constructed in \cite{Kimura:2020krd}, and those of {\it half-integer} ranks
are still to be identified.\footnote{As explained in the next section,
the rank of an irregular state $|I\rangle$ is defined by the maximal
$n\in \mathbb{N}/2$ such that $L_{2n}|I\rangle \neq 0$.}
In this paper, we extend the result of \cite{Kimura:2020krd} to the case of
$(A_1,D_\text{odd})$ theories, under the condition that all couplings
and VEVs of Coulomb branch operators in $(A_1,D_\text{odd})$ are turned off. This is done by explicitly
identifying the action of $Vir\times H$ on irregular states of
half-integer ranks. This action turns out to be very simple in the
classical limit $\epsilon_1,\epsilon_2\to 0$, when the above condition
is satisfied.
As an application of our extension, we evaluate the prepotential
of the
$(A_2,A_5)$ theory, which is the conformal $SU(2)$ gauge theory coupled
to a fundamental hypermultiplet and AD theories called $(A_1,D_6)$ and
$(A_1,D_3)$ (Fig.~\ref{fig:quiver2}).\footnote{See
\cite{Giacomelli:2020ryy} for a recent discussion on
the conformal manifold of $(A_n, A_m)$ theories.} To compute the
partition function $\mathcal{Z}_{(A_2,A_5)}$ of this theory, one needs to know the contribution
of the $(A_1,D_3)$ and $(A_1,D_6)$ theories at each fixed point on the instanton moduli
space, i.e., $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_3)}$
and $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_6)}$. While the latter can be
evaluated via the method of \cite{Kimura:2020krd}, computing the former
needs a prescription that we develop in this paper. We then read off
from $\mathcal{Z}_{(A_2,A_5)}$ an expression for the
prepotential $\mathcal{F}_{(A_2,A_5)}$ of the $(A_2,A_5)$ theory, which
turns out to be in a
surprising relation to
$\mathcal{F}_{SU(2)}^{N_f=4}$:
\begin{align}
3\mathcal{F}_{(A_2,A_5)}(q;a) = \mathcal{F}_{SU(2)}^{N_f=4}(q^3;a)~.
\end{align}
Note that this relation is quite similar to \eqref{eq:rel1} but different. From
this relation, we read off how the S-duality group acts on the UV gauge
coupling $q$ of the $(A_2,A_5)$ theory.
A generalization of our result to the case of
all dimensionful parameters turned on is left for future work.
The organization of this paper is the following. In
Sec.~\ref{sec:review}, we review the generalized AGT correspondence and
its $U(2)$-version. In Sec.~\ref{sec:U2}, we consider the generalization
of the $U(2)$-version to $(A_1,D_\text{odd})$. In Sec.~\ref{sec:A2A5},
we apply a formula developed in Sec.~\ref{sec:U2} to the $(A_2,A_5)$
theory and show that the prepotential of $(A_2,A_5)$ is related to that
of $SU(2)$ superconformal QCD by a change of variables. In
Sec.~\ref{sec:SW}, we show that the prepotential relation found in
Sec.~\ref{sec:A2A5} is consistent with the Seiberg-Witten curve.
\begin{figure}
\begin{center}
\begin{tikzpicture}[gauge/.style={circle,draw=black,inner sep=0pt,minimum size=8mm},flavor/.style={rectangle,draw=black,inner sep=0pt,minimum size=8mm},AD/.style={rectangle,draw=black,fill=red!20,inner sep=0pt,minimum size=8mm},auto]
\node[AD] (1) at (-1.8,0) {\;$(A_1,D_{3})$\;};
\node[gauge] (2) at (0,0) [shape=circle] {\;$2$\;} edge (1);
\node[AD] (3) at (1.8,0) {\;$(A_1,D_{6})$\;} edge (2);
\node[flavor] (4) at (0,1.3) {\;1\;} edge (2);
\end{tikzpicture}
\caption{The $(A_2,A_5)$ theory is an $\mathcal{N}=2$ SCFT, which is identical to a conformal $SU(2)$
gauging of $(A_1,D_3)$ and $(A_1,D_6)$ theories together with a
fundamental hypermultiplet of $SU(2)$.}
\label{fig:quiver2}
\end{center}
\end{figure}
\section{$U(2)$-version of generalized AGT for $(A_1,D_\text{even})$}
\label{sec:review}
In this section, we give a brief review of the $U(2)$-version of the
generalized AGT correspondence for $(A_1,D_{N})$ theories with even $N$.
\subsection{Generalized AGT correspondence}
\label{subsec:GAGT}
We first review the original generalized AGT correspondence.
Recall that the $(A_1,D_{N})$ theory for a positive integer $N\geq 2$ is realized by compactifying the 6d
(2,0) $A_1$ theory on a sphere with two punctures, one of which is a
regular puncture and the other is an irregular puncture of rank
$N/2$ \cite{Bonelli:2011aa,Gaiotto:2012sf, Xie:2012hs}. These punctures specify how the Higgs field $\Phi(z)$ in the
corresponding Hitchin system behaves
around them; $\Phi(z)$ has a simple pole at a regular puncture
while it behaves as $\Phi(z) \sim 1/z^{N/2+1}$ around an irregular
puncture of rank $N/2$, where we take $z=0$ as the
locus of the puncture.
According to the generalized AGT correspondence \cite{Bonelli:2011aa, Gaiotto:2012sf}, the regular puncture
corresponds to a Virasoro primary state $|a\rangle$, and the
irregular puncture corresponds to an irregular state $|I^{(N/2)}\rangle$ of
Virasoro algebra at central charge $c=1+6Q^2$.
While there are two different characterizations of $|I^{(N/2)}\rangle$, we will use the one discussed in \cite{Gaiotto:2012sf}.
Here, the irregular state
is not a primary state but
a simultaneous eigenstate of $L_k$ for $k\geq \lceil N/2
\rceil$, with vanishing eigenvalues for $k> N$.
Therefore, an irregular state $|I^{(N/2)}\rangle$ satisfies
\begin{align}
L_k|I^{(N/2)}\rangle =
\left\{
\begin{array}{l}
0 \qquad \text{for}\qquad N<k
\\[2mm]
\lambda_k|I^{(N/2)}\rangle\qquad
\text{for}\qquad \left\lceil \frac{N}{2}
\right\rceil\leq k\leq N\\
\end{array}
\right.~,
\label{eq:eigen1}
\end{align}
for a set of eigenvalues $\{\lambda_{\lceil
N/2\rceil},\cdots,\lambda_N\}$.\footnote{While $\lceil
N/2\rceil = N/2$ for even $N$, we here write things so that they can be
easily generalized to odd $N$ in the next section.}
This characterization of the irregular
state is such that
\begin{align}
x^2 = -\frac{\langle a | T(z)|I^{(N/2)}\rangle}{\langle
a|I^{(N/2)}\rangle}
\label{eq:curve0}
\end{align}
is equivalent to the Seiberg-Witten (SW) curve of the 4d theory. Indeed,
from \eqref{eq:eigen1}, we see that \eqref{eq:curve0} is evaluated as
\begin{align}
x^2 = -\frac{\lambda_N}{z^{N+2}} - \frac{\lambda_{N-1}}{z^{N+1}} - \cdots - \frac{a(Q-a)}{z^2}~,
\label{eq:curve1}
\end{align}
which is identical to the SW curve of the $(A_1,D_N)$ theory.
Given the above regular state $|a\rangle$ and the irregular state
$|I^{(N/2)}\rangle$, the generalized AGT correspondence states that
\begin{align}
\mathcal{Z}_{(A_1,D_N)} = \langle a |I^{(N/2)}\rangle~,
\label{eq:ADN}
\end{align}
is identified with the Nekrasov partition function of the $(A_1,D_N)$
theory. Note here that, since no weakly-coupled description is known for
this theory, the above partition function cannot be evaluated by
supersymmetric localization.
\begin{figure}
\begin{center}
\begin{tikzpicture}[gauge/.style={circle,draw=black,inner sep=0pt,minimum size=8mm},flavor/.style={rectangle,draw=black,inner sep=0pt,minimum size=8mm},AD/.style={rectangle,draw=black,fill=red!20,inner sep=0pt,minimum size=8mm},auto]
\node[AD] (1) at (-1.8,0) {\;$(A_1,D_{N})$\;};
\node[gauge] (2) at (0,0) [shape=circle] {\;$2$\;} edge (1);
\node[AD] (3) at (1.8,0) {\;$(A_1,D_{N})$\;} edge (2);
\end{tikzpicture}
\caption{$SU(2)$ gauge theory coupled to two $(A_1,D_N)$ theories}
\label{fig:quiver4}
\end{center}
\end{figure}
Similarly, 4d $\mathcal{N}=2$ $SU(2)$ gauge theory coupled to two copies
of $(A_1,D_N)$ (Fig.~\ref{fig:quiver4}) is constructed by compactifying the 6d (2,0) $A_1$ theory on
a sphere with two irregular singularities of rank $N/2$. The generalized
AGT correspondence then implies that the Nekrasov partition function of this
theory is given by
\begin{align}
\mathcal{Z}_{SU(2)}^{2\times (A_1,D_N)} = \langle I^{(N/2)}|
I^{(N/2)}\rangle~.
\label{eq:AD-MN}
\end{align}
Note here that the characterization \eqref{eq:eigen1} does not fix the irregular state $|I^{(N/2)}\rangle$. In
particular, the actions of $L_0,\cdots,L_{\lceil N/2\rceil-1}$ are not
specified there. When $N$ is even, these
actions are expressed in terms of differential operators with respect to
$(N/2 +1)$ parameters, $c_0,\cdots,c_{N/2}$ \cite{Gaiotto:2012sf}:
\begin{align}
L_k|I^{(N/2)}\rangle = \left\{
\begin{array}{l}
0 \qquad \text{for}\qquad N<k
\\[2mm]
\lambda_k|I^{(N/2)}\rangle \qquad \text{for} \quad \frac{N}{2}\leq k\leq N
\\[2mm]
\left(\lambda_k + \sum_{\ell = 1}^{N/2-k}\ell\, c_{\ell+k} \frac{\partial
}{\partial c_{\ell}}\right)|I^{(N/2)}\rangle \qquad \text{for} \quad
0\leq k < \frac{N}{2}
\\
\end{array}
\right.~,
\label{eq:eigen2}
\end{align}
where the non-vanishing eigenvalues, $\lambda_k$, of $L_{N/2},\cdots,L_N$ are fixed by $c_0,\cdots,c_{N/2}$
as
\begin{align}
\lambda_k =
\left\{
\begin{array}{l}
-\sum_{\ell=k-N/2}^{N/2} c_\ell c_{k-\ell} \quad \text{for}\quad \frac{N}{2}<k\leq N
\\[2mm]
-\sum_{\ell=0}^{k} c_\ell c_{k-\ell} + (k+1)Qc_k \quad \text{for}\quad k\leq \frac{N}{2}
\end{array}
\right.~.
\label{eq:lambda}
\end{align}
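As a sanity check of \eqref{eq:lambda}, the eigenvalues can be computed directly from the parameters $c_0,\cdots,c_{N/2}$; for instance, the leading one is always $\lambda_N = -c_{N/2}^2$ and $\lambda_{N-1} = -2c_{N/2-1}c_{N/2}$. A small implementation of \eqref{eq:lambda} (ours, for illustration):

```python
from fractions import Fraction

def lam(k, c, Q):
    """Eigenvalue lambda_k of L_k on the rank-N/2 irregular state,
    following eq. (lambda); c = [c_0, ..., c_{N/2}] for even N."""
    half = len(c) - 1      # N/2
    N = 2 * half
    assert 0 <= k <= N
    if k > half:           # N/2 < k <= N
        return -sum(c[l] * c[k - l] for l in range(k - half, half + 1))
    # 0 <= k <= N/2
    return -sum(c[l] * c[k - l] for l in range(k + 1)) + (k + 1) * Q * c[k]
```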
The above actions of $L_0,\cdots,L_{N/2}$ follow from the
construction of $|I^{(N/2)}\rangle$ for even $N$ as a colliding limit of
regular primary operators.
A similar colliding limit is not known for odd $N$, and therefore the
actions of $L_0,\cdots,L_{\frac{N-1}{2}}$ have not been identified in
the literature.\footnote{There is another characterization of
the irregular state $|I^{(N/2)}\rangle$ \cite{Bonelli:2011aa}, where
$|I^{(N/2)}\rangle$ has an explicit expression and is an eigenstate of
$L_N$ and $L_1$ for even and odd $N$, respectively. In this paper, we use the one discussed in
\cite{Gaiotto:2012sf} since it can easily be extended to the $U(2)$-version
that we will review in the next sub-section. It would be an interesting
open problem to consider the $U(2)$-version of the one discussed in \cite{Bonelli:2011aa}.}
\subsection{$U(2)$-version for even $N$}
\label{subsec:U2}
In this sub-section, we discuss the $U(2)$-version of the generalized AGT
correspondence. Here, we focus on irregular states $|I^{(N/2)}\rangle$
for even $N$, and
therefore on $(A_1,D_\text{even})$ theories.
Such a $U(2)$-version was considered in \cite{Kimura:2020krd} in order to
compute the instanton partition function of the $(A_3,A_3)$
theory. Here, the $(A_3,A_3)$ theory is an $\mathcal{N}=2$ superconformal $SU(2)$ gauge theory coupled
to two $(A_1,D_4)$ theories and a fundamental hypermultiplet (Fig.~\ref{fig:quiver1}).
When the fundamental hypermultiplet is absent, one can compute the
partition function via the generalized AGT correspondence as in
\eqref{eq:AD-MN}, but its generalization to the $(A_3,A_3)$ theory is
not straightforward. The reason for this is that $(A_3,A_3)$ has no known realization from 6d
(2,0) $A_1$ theory.
Therefore, a more indirect route was taken in \cite{Kimura:2020krd} to compute the
partition function of the $(A_3,A_3)$ theory. First, the generalized AGT
correspondence was extended to the case of $U(2)$ gauge
group. Corresponding to the extra $U(1)$ part of the gauge group, the
Virasoro algebra on the 2d side is now accompanied with an extra Heisenberg
algebra \cite{Alba:2010qc}. The Virasoro irregular state $|I^{(N/2)}\rangle$ is then promoted to an irregular state $|\hat{I}^{(N/2)}\rangle$
of the direct sum of
Virasoro and Heisenberg algebras $Vir\oplus H$. This state is generally
decomposed as
\begin{align}
|\hat{I}^{(N/2)}\rangle = |I^{(N/2)}\rangle \otimes |I^{(N/2)}_H\rangle~,
\end{align}
where $|I^{(N/2)}\rangle$ is the Virasoro irregular state satisfying \eqref{eq:eigen2}, and $|I^{(N/2)}_H\rangle$ is an
irregular state of the Heisenberg algebra characterized by
\begin{align}
a_k|I^{(N/2)}_H\rangle = \left\{
\begin{array}{l}
0 \qquad \text{for}\qquad N/2<k~,
\\[2mm]
-ic_k|I^{(N/2)}_H\rangle\qquad \text{for}\qquad 1\leq k\leq \frac{N}{2}~.
\end{array}
\right.~,
\label{eq:irreg-H}
\end{align}
where the $a_k$ are the generators of the Heisenberg algebra, satisfying
$[a_k,a_\ell] = \frac{k}{2}\delta_{k+\ell,0}$~.
Given the irregular state $|\hat{I}^{(N/2)}\rangle$ of $Vir \oplus
H$, the partition function of $U(2)$ gauge theory coupled to two $(A_1,D_N)$ theories is identified as
\begin{align}
\mathcal{Z}_{U(2)}^{2\times (A_1,D_N)} = \langle
\hat{I}^{(N/2)}|\hat{I}^{(N/2)}\rangle~,
\label{eq:U2}
\end{align}
which is a natural generalization of \eqref{eq:AD-MN} to the $U(2)$
gauge group.
A nice feature of this generalization is that the
highest-weight module of $Vir\oplus H$ has an orthogonal basis
$|a;Y_1,Y_2\rangle$ labeled by two Young diagrams, $Y_1$ and $Y_2$,
that satisfies \cite{Alba:2010qc}
\begin{align}
{\bf 1} =
\sum_{Y_1,Y_2}\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)\,|a;Y_1,Y_2\rangle
\langle a;Y_1,Y_2|~,
\label{eq:decomp}
\end{align}
where $\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)$ is the contribution from a
$U(2)$ vector multiplet to the Nekrasov partition function, at the fixed
point corresponding to $(Y_1,Y_2)$ on the moduli space of $U(2)$ instantons. Here
$|a;Y_1,Y_2\rangle$ is a linear combination of states of the form
$L_{-n_1}^{p_1}\cdots L_{-n_k}^{p_k} a_{-m_1}^{q_1}\cdots
a_{-m_\ell}^{q_\ell}|a\rangle$, and $\langle a;Y_1,Y_2|$ is obtained by
replacing each of these states with $\langle a |a_{m_\ell}^{q_{\ell}}
\cdots a_{1}^{q_1} L_{n_k}^{p_k}\cdots L_{n_1}^{p_1}$ {\it without
changing the coefficients of the linear combination.}
One
can use \eqref{eq:decomp} to decompose \eqref{eq:U2} as
\begin{align}
\mathcal{Z}_{U(2)}^{2\times (A_1,D_N)} =
\mathcal{Z}_\text{pert}\sum_{Y_1,Y_2}\Lambda^{b_0(|Y_1|+|Y_2|)}\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_N)}(a,m,\pmb{d},\pmb{u})\tilde{\mathcal{Z}}_{Y_1,Y_2}^{(A_1,D_N)}(a,\tilde{m},\tilde{\pmb{d}},\tilde{\pmb{u}})~,
\label{eq:U2A1DN}
\end{align}
where $\mathcal{Z}_\text{pert} \equiv \langle
\hat{I}^{(N/2)}|a\rangle\langle a|\hat{I}^{(N/2)}\rangle,\, \Lambda
\equiv -\zeta^2c_{N/2}\tilde{c}^{*}_{N/2}$ and
\begin{align}
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_N)}(a,m,\pmb{d},\pmb{u}) &\equiv (\zeta c_{N/2})^{-\frac{2(|Y_1|+|Y_2|)}{N}}
\frac{\langle a;Y_1,Y_2|\hat{I}^{(N/2)}\rangle}{\langle a|
\hat{I}^{(N/2)}\rangle}~,
\label{eq:AD1}
\\[2mm]
\tilde{\mathcal{Z}}_{Y_1,Y_2}^{(A_1,D_N)}(a,\tilde{m},\tilde{\pmb{d}},\tilde{\pmb{u}}) &\equiv (-\zeta \tilde{c}^{*}_{N/2})^{-\frac{2(|Y_1|+|Y_2|)}{N}}
\frac{\langle \hat{I}^{(N/2)}|a;Y_1,Y_2\rangle}{\langle\hat{I}^{(N/2)}|
a\rangle}~.
\label{eq:AD2}
\end{align}
Here, $m,\,\pmb{d}\equiv(d_1,\cdots, d_{\frac{N}{2}-1})$ and $\pmb{u} \equiv
(u_1,\cdots,u_{\frac{N}{2}-1})$ are, respectively, a mass parameter, relevant couplings, and the VEVs of Coulomb branch
operators. These are related to the two-dimensional parameters by
\begin{align}
d_k &=
\sum_{\ell=\frac{N}{2}-k}^{\frac{N}{2}}\frac{c_{\ell}c_{N-k-\ell}}{(c_{\frac{N}{2}})^{2-\frac{2k}{N}}}~,\qquad
m =
\sum_{\ell=0}^{\frac{N}{2}}\frac{c_{\ell}c_{\frac{N}{2}-\ell}}{c_{\frac{N}{2}}}~,
\label{eq:dk}
\\
u_k &=
\sum_{\ell=0}^{\frac{N}{2}-k}\frac{c_{\ell}c_{\frac{N}{2}-k-\ell}}{(c_{\frac{N}{2}})^{1-\frac{2k}{N}}}-
\sum_{\ell=1}^k \ell
\frac{c_{\frac{N}{2}+\ell-k}}{(c_{\frac{N}{2}})^{1-\frac{2k}{N}}}\frac{\partial
\mathcal{F}_{(A_1,D_N)}}{\partial
c_\ell}~,
\label{eq:uk}
\end{align}
where $\mathcal{F}_{(A_1,D_N)} \equiv \lim_{\epsilon_i\to 0}
\left(-\epsilon_1\epsilon_2 \log \langle a |I^{(N/2)}\rangle\right)$ is
the prepotential of the $(A_1,D_N)$ theory.
The parameter $\zeta$ is a free parameter that can
be absorbed into the dynamical scale $\Lambda$ by a rescaling.
Given the expression \eqref{eq:U2A1DN}, the factors \eqref{eq:AD1} and \eqref{eq:AD2} are interpreted as the contribution of
the $(A_1,D_{N})$ theories at the fixed point corresponding to
$(Y_1,Y_2)$ on the $U(2)$ instanton moduli space. Note that the gauge group is
now $U(2)$ instead of $SU(2)$, and the difference between \eqref{eq:AD1}
and \eqref{eq:AD2} is how the $U(1)\subset U(2)$ is coupled to the
$(A_1,D_N)$ theory.
An advantage of the expression \eqref{eq:U2A1DN} is that one can easily
introduce an extra fundamental hypermultiplet by multiplying
$\mathcal{Z}^\text{fund}_{Y_1,Y_2}(a,M)$ to the summand, where $M$
is the mass of the hypermultiplet. In particular, setting $N=4$ in
\eqref{eq:U2A1DN} and introducing an extra fundamental hypermultiplet,
the partition function is now
\begin{align}
\mathcal{Z}_{U(2)} = \mathcal{Z}_\text{pert}\sum_{Y_1,Y_2}q^{|Y_1|+|Y_2|}\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_4)}(a,b,u)\tilde{\mathcal{Z}}^{(A_1,D_4)}_{Y_1,Y_2}(a,\tilde{b},\tilde{u})\mathcal{Z}_{Y_1,Y_2}^\text{fund}(a,M)~,
\end{align}
where $\Lambda^{b_0}$ is now replaced by $q$ since the $SU(2)$ gauge
coupling is exactly marginal.
This is almost equivalent to the instanton partition function of the
$(A_3,A_3)$ theory. The only difference from the $(A_3,A_3)$ is that the
$SU(2)$ gauge group in Fig.~\ref{fig:quiver1} is replaced by
$U(2)$, which gives rise to an extra prefactor, $\mathcal{Z}_{U(1)}$, of
the partition function. Therefore, the partition function of the
$(A_3,A_3)$ theory is evaluated as
\begin{align}
\mathcal{Z}_{(A_3,A_3)} = \frac{\mathcal{Z}_{U(2)}}{\mathcal{Z}_{U(1)}}~.
\end{align}
\section{$U(2)$-version of generalized AGT for $(A_1,D_\text{odd})$}
\label{sec:U2}
In this section, we will extend the $U(2)$-version of the generalized
AGT correspondence reviewed in Sec.~\ref{sec:review} to the case of
$(A_1,D_N)$ theories for odd $N$. Specifically, we will generalize
\eqref{eq:AD1} to the case of odd $N$.\footnote{The same generalization
is possible for \eqref{eq:AD2}, but we will focus on generalizing
\eqref{eq:AD1} here to make our argument concise.}
Even when $N$ is odd, the $(A_1,D_{N})$ theory is still realized by
compactifying the 6d (2,0) $A_1$ theory on a sphere with an irregular and a
regular puncture. Therefore, exactly the same discussion as in
Sec.~\ref{subsec:GAGT} leads us to identifying
\begin{align}
\mathcal{Z}_{(A_1,D_N)} = \langle a | I^{(N/2)}\rangle~,
\end{align}
as the partition function of the $(A_1,D_N)$ theory.
From the equivalence between
\eqref{eq:curve0} and \eqref{eq:curve1}, we see that the non-vanishing
eigenvalues, $\lambda_N,\cdots,\lambda_{\frac{N+1}{2}}$, in
\eqref{eq:eigen1} appear as the coefficients of the first
$\frac{N-1}{2}$ non-trivial terms in the SW curve:\footnote{Here we
absorbed $\lambda_N$ in front of $z^{N-2}$ by rescaling $z$ and $x$ so that
$xdz$ is kept fixed. The fact that we can absorb $\lambda_N$ this way
reflects the conformal invariance of $(A_1,D_{N})$.}
\begin{align}
x^2 &= \frac{1}{z^{N+2}} -
\frac{\lambda_{N-1}}{(-\lambda_N)^{\frac{N-1}{N}}}\frac{1}{z^{N+1}} -
\frac{\lambda_{N-2}}{(-\lambda_N)^{\frac{N-2}{N}}}\frac{1}{z^N} - \cdots -
\frac{\lambda_{\frac{N+1}{2}}}{(-\lambda_N)^{\frac{N+1}{2N}}}\frac{1}{z^{\frac{N+5}{2}}}
+ \cdots~,
\label{eq:A1DN}
\end{align}
which are identified as the relevant couplings
of $(A_1,D_{N})$ for odd $N$ \cite{Cecotti:2010fi, Xie:2012hs}. Therefore the relevant couplings of $(A_1,D_N)$ theories
are all encoded in the eigenvalues of $L_{\frac{N+1}{2}},\cdots,L_{N-2}$
and $L_{N-1}$ (normalized by that of $L_N$). This is a straightforward generalization of what we reviewed
in Sec.~\ref{subsec:GAGT} to odd $N$.
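Explicitly, the rescaling in the footnote is $z\to \alpha z$ and $x\to x/\alpha$ with $\alpha^N = -\lambda_N$, which leaves $xdz$ invariant and maps each term of \eqref{eq:curve1} as
\begin{align}
 -\frac{\lambda_k}{z^{k+2}} \;\longrightarrow\;
 -\frac{\lambda_k}{\alpha^k}\frac{1}{z^{k+2}} =
 -\frac{\lambda_k}{(-\lambda_N)^{\frac{k}{N}}}\frac{1}{z^{k+2}}~,
\end{align}
so that the leading coefficient becomes $-\lambda_N/(-\lambda_N)=1$, reproducing \eqref{eq:A1DN}.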
One difficulty for odd $N$ is, however, that the
irregular state $|I^{(N/2)}\rangle$ cannot be obtained in a colliding limit of regular
primary operators. As such, any result derived via the colliding limit for
even $N$ is not available for odd $N$. For instance, while $\lambda_k$
are translated into $c_k$ through \eqref{eq:lambda} for even $N$,
a similar translation is not available for odd $N$. As a result, an explicit expression for the action of
$L_1,\cdots,L_{\frac{N-1}{2}}$ on $|I^{(N/2)}\rangle$ has not been identified for
odd $N$.
The lack of a colliding-limit construction gives rise to another difficulty when
considering the $U(2)$-version of the generalized AGT
correspondence. Generalizing the argument in Sec.~\ref{subsec:U2}, it is
natural to expect that there exists an irregular state $|\hat{I}^{(N/2)}\rangle$ of $Vir\oplus H$ such
that
\begin{align}
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_N)} \sim \frac{\langle
a;Y_1,Y_2|\hat{I}^{(N/2)}\rangle}{\langle a|\hat{I}^{(N/2)}\rangle}~,
\label{eq:conj}
\end{align}
is identified, even for odd $N$, as the contribution from an $(A_1,D_N)$ sector at each
fixed point on the $U(2)$ instanton moduli space for the gauge theory
described by the quiver in Fig.~\ref{fig:quiver4}. Here, the irregular state
$|\hat{I}^{(N/2)}\rangle$ is decomposed as $|\hat{I}^{(N/2)}\rangle =
|I^{(N/2)}\rangle \otimes |I_H^{(N/2)}\rangle$, where
$|I^{(N/2)}\rangle$ is the irregular state of Virasoro algebra discussed
in the previous two paragraphs, and $|I^{(N/2)}_H\rangle$ is a rank-$\frac{N}{2}$
irregular state of the Heisenberg algebra. For even $N$,
$|I^{(N/2)}_H\rangle$ is completely characterized by \eqref{eq:irreg-H},
which was derived via the colliding-limit construction of
$|I^{(N/2)}_H\rangle$. However, for odd $N$, the lack of a
colliding-limit construction makes it difficult to find a similar
characterization of $|I^{(N/2)}_H\rangle$.
The above discussions imply that, due to the lack of a colliding-limit construction, we do not know how $L_1,\cdots,
L_{\frac{N-1}{2}}$ and $a_{k>0}$ act on the irregular state
$|\hat{I}^{(N/2)}\rangle = |I^{(N/2)}\rangle \otimes
|I^{(N/2)}_H\rangle$ when $N$ is odd.
Without knowing these actions, one cannot compute
\begin{align}
\frac{\langle a |a_{m_\ell}^{q_{\ell}}
\cdots a_{1}^{q_1} L_{n_k}^{p_k}\cdots L_{n_1}^{p_1}
|\hat{I}^{(N/2)}\rangle}{\langle a|\hat{I}^{(N/2)}\rangle}~,
\label{eq:matrix-element}
\end{align}
for $n_i>0$ and $m_i>0$. This generically makes it hard
to compute \eqref{eq:conj} since $\langle a;Y_1,Y_2|$ is a linear
combination of vectors of the form $\langle a |a_{m_\ell}^{q_{\ell}}
\cdots a_{1}^{q_1} L_{n_k}^{p_k}\cdots L_{n_1}^{p_1}$.
In the next four sub-sections, however, we will argue that this difficulty can be overcome
when we focus on the classical limit $\epsilon_1,\epsilon_2\to 0$ and
turn off relevant couplings and the VEVs of Coulomb branch operators in the $(A_1,D_{N})$-sector.
\subsection{Classical limit as the commutative limit}
While the irregular state $|I^{(N/2)}\rangle$ is an eigenstate of
$L_{\frac{N+1}{2}},\cdots,L_N$ with non-vanishing eigenvalues, it is not an eigenstate of
$L_1,\cdots,L_{\frac{N-1}{2}}$. Indeed, the Virasoro algebra
\begin{align}
[L_n,L_m] = (n-m)L_{n+m} + \frac{n(n^2-1)}{12}\delta_{n+m,0}
\label{eq:Vir}
\end{align}
forbids $L_{1},\cdots,L_N$ from simultaneously having non-vanishing eigenvalues when
$N>2$. This is the
main reason that \eqref{eq:eigen2} (which holds only for even $N$) involves differential operators on
the RHS for $0\leq k<\frac{N}{2}$.
However, when computing the matrix element \eqref{eq:matrix-element} in the classical limit $\epsilon_1,\epsilon_2\to
0$, the sub-algebra formed by $\{L_{n> 0}\}$ reduces to a commutative
algebra. The reason for this is the following. First, in the context of
the generalized AGT correspondence, the SW curve \eqref{eq:curve1} of a 4d theory is
identified as \eqref{eq:curve0} on the 2d side. This and the fact that the SW 1-form,
$xdz$, has scaling dimension $1$ imply that $z$ and $T(z)$ in \eqref{eq:curve0} have
four-dimensional scaling dimensions $\Delta_\text{4d}(z)=-2/N$ and
$\Delta_\text{4d}(T(z)) = \Delta_\text{4d}(x^2) = 2(1+2/N)$, respectively. Since the stress tensor is
expanded as
\begin{align}
T(z) = \sum_{n\in\mathbb{Z}}\frac{L_n}{z^{n+2}}~,
\end{align}
this implies that, when acting on $|I^{(N/2)}\rangle$, $L_n$ is
associated with four-dimensional scaling dimension
\begin{align}
\Delta_\text{4d}(L_n)= 2\left(1-\frac{n}{N}\right)~.
\label{eq:dimLn}
\end{align}
Recall here that, in the AGT correspondence, the 4d scaling dimensions
are invisible since we set $\epsilon_1\epsilon_2=1$, as explained around Eq.~(3.2) of \cite{Alday:2009aq}. To recover the
correct scaling dimensions, we need to multiply every quantity of
dimension $\Delta_\text{4d}$ by
$(\epsilon_1\epsilon_2)^{\Delta_\text{4d}/2}$. This particularly means
the replacement $L_n \to (\epsilon_1\epsilon_2)^{1-\frac{n}{N}}L_n$, and
therefore
\begin{align}
[L_n,L_m] = (n-m)(\epsilon_1\epsilon_2)L_{n+m}~,
\end{align}
for $m,n>0$.
This implies that, when focusing on the leading term in
the limit $\epsilon_1,\epsilon_2\to0$, the sub-algebra formed
by $\{L_{n>0}\}$ reduces to a commutative algebra.
Therefore, in the computation of \eqref{eq:matrix-element} in the
classical limit, one can regard all $L_n$ and $a_m$ as
commutative and simultaneously diagonalizable.
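As a quick sanity check of this exponent bookkeeping, one can verify symbolically that the rescaled structure constant of $[L_n,L_m]$ carries exactly one power of $\epsilon_1\epsilon_2$. The following sympy sketch (ours, purely illustrative) does so:

```python
import sympy as sp

# Rescaling dictated by the 4d scaling dimensions:
# L_n -> (eps1*eps2)^(1 - n/N) L_n, with eps := eps1*eps2.
eps, N, n, m = sp.symbols('epsilon N n m', positive=True)
f = lambda k: eps**(1 - k/N)

# [f(n)L_n, f(m)L_m] = (n-m) * (f(n)f(m)/f(n+m)) * f(n+m) L_{n+m},
# so the rescaled structure constant carries the factor below,
# which is a single power of eps and hence vanishes as eps -> 0.
factor = sp.simplify(f(n)*f(m)/f(n + m))
```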
This suggests the following conjecture: {\it in the classical limit $\epsilon_1,\epsilon_2\to 0$,
the irregular state $|I^{(N/2)}\rangle$ approaches a simultaneous
eigenstate of $\{L_{n>0}\}$ and $\{a_{m>0}\}$.} As seen in
\eqref{eq:eigen2}, this is indeed
the case when $N$ is even; in the third line of the RHS of \eqref{eq:eigen2},
$\sum_{\ell=1}^{N/2-k}\ell \,c_{\ell+k}\frac{\partial}{\partial c_\ell}$
is sub-leading in the classical limit, and therefore $|I^{(N/2)}\rangle$
approaches a simultaneous eigenstate of $\{L_k\}$ and $\{a_k\}$ in
the classical limit. We here assume that the above conjecture is
also satisfied for odd $N$. Then the matrix
element \eqref{eq:matrix-element} can be evaluated in the classical
limit as
\begin{align}
\frac{\langle a |a_{m_\ell}^{q_{\ell}}
\cdots a_{1}^{q_1} L_{n_k}^{p_k}\cdots L_{n_1}^{p_1}
|\hat{I}^{(N/2)}\rangle}{\langle a|\hat{I}^{(N/2)}\rangle} \;=\; \left(\prod_{i=1}^\ell(\mathfrak{a}_{m_i})^{q_i}\right)\left(\prod_{j=1}^{k}(\mathfrak{b}_{n_j})^{p_j}\right)~,
\label{eq:reduced-matrix-element}
\end{align}
where $\mathfrak{a}_m$ and $\mathfrak{b}_n$ are defined by
\begin{align}
\mathfrak{a}_m \equiv \frac{\langle a |
a_m|\hat{I}^{(N/2)}\rangle}{\langle a|\hat{I}^{(N/2)}\rangle}~,\qquad
\mathfrak{b}_n \equiv \frac{\langle
a|L_n|\hat{I}^{(N/2)}\rangle}{\langle a|\hat{I}^{(N/2)}\rangle}~,
\label{eq:eigens}
\end{align}
for $m,n>0$.\footnote{The reduction of
\eqref{eq:matrix-element} to \eqref{eq:reduced-matrix-element} was
explicitly observed in the case of $N=4$ in Sec.~5.1 of \cite{Kimura:2020krd}.}
Note here that, from \eqref{eq:eigen1}, we see that $\mathfrak{b}_{n} =
0$ for $n>N$. Therefore,
\eqref{eq:reduced-matrix-element} is a function of $\mathfrak{b}_n$ for $n=1,\cdots,N$ and $\mathfrak{a}_{m}$
for $m>0$. Note also that, for $\frac{N+1}{2}\leq n \leq N$,
$\mathfrak{b}_n$ is identical to the eigenvalue $\lambda_n$ in \eqref{eq:eigen1}.
\subsection{4d scaling dimensions of 2d parameters}
Here we evaluate the 4d scaling dimensions of the parameters
$\{\mathfrak{a}_m\}$ and $\{\mathfrak{b}_n\}$ defined above. We will use
them in the next sub-section to argue that, when all the couplings and VEVs
of Coulomb branch operators of $(A_1,D_N)$ are turned off, one has $\mathfrak{b}_n =
\mathfrak{a}_m=0$ for all $n\neq N$ and $m>0$.
To that end, we first see from \eqref{eq:dimLn} that
\begin{align}
\Delta_\text{4d}\left(\mathfrak{b}_n\right) =
2\left(1-\frac{n}{N}\right)~,
\label{eq:dimb}
\end{align}
which implies that $\mathfrak{b}_n$ for $n>N$ are of negative dimensions and
therefore irrelevant in the infrared. Since the Nekrasov partition
function is the quantity defined in the infrared,
\eqref{eq:conj} must be independent of such parameters. This is
consistent with
the condition $\mathfrak{b}_n =0$ for $n>N$.
Let us now turn to the scaling dimensions of $\mathfrak{a}_m$. To evaluate them, one needs to
use explicit expressions for the basis $|a;Y_1,Y_2\rangle$ of the
highest weight module of $Vir\oplus H$. As shown in \cite{Alba:2010qc}, the state $|a;Y_1,Y_2\rangle$ is generally a linear combination of descendants of the
highest weight state $|a\rangle$ of degree $(|Y_1|+|Y_2|)$. Here, the
degree is defined by the sum of the degrees in the sense of Virasoro and
Heisenberg algebras; for instance, the degree of
$(L_{-1})^2a_{-5}|a\rangle$ is evaluated as seven. A few
examples of $|a;Y_1,Y_2\rangle$ are shown below:
\begin{align}
|a;\emptyset,\emptyset\rangle &= |a\rangle~,
\label{eq:basis1}
\\
|a;{\tiny \yng(1)},\emptyset\rangle &=
\Big(-i\left(\epsilon_1+\epsilon_2+2a\right)a_{-1}-L_{-1}\Big)|a\rangle~,
\\
|a;{\tiny \yng(2)},\emptyset\rangle
&=\Big(-i\epsilon_1(\epsilon_1+\epsilon_2+2a)(2\epsilon_1+\epsilon_2)a_{-2}-(\epsilon_1+\epsilon_2+2a)(2\epsilon_1+\epsilon_2+2a)a_{-1}^2
\nonumber\\
&\qquad \qquad +2i(2\epsilon_1+\epsilon_2+2a) a_{-1}L_{-1} -
\epsilon_1(\epsilon_1+\epsilon_2+2a)L_{-2} + L_{-1}^2
\Big)|a\rangle~,
\\
|a;{\tiny \yng(1)},{\tiny \yng(1)}\rangle
&=\Big(-i(\epsilon_1+\epsilon_2)a_{-2}-
(\epsilon_1^2+\epsilon_2^2+\epsilon_1\epsilon_2-4a^2)a^2_{-1}+2i(\epsilon_1+\epsilon_2)a_{-1}L_{-1}
-L_{-2}+L_{-1}^2\Big)|a\rangle~,
\label{eq:basis4}
\end{align}
where we recovered the complete $\epsilon_i$-dependence.
In the context of the AGT correspondence, the highest weight
$a$ of $|a\rangle$ is identified as the mass of the W-boson that arises on
the Coulomb branch of $SU(2)$ gauge theory, and therefore has scaling
dimension one. Similarly the
$\Omega$-deformation parameters
$\epsilon_i$ have scaling dimension one, i.e.,
\begin{align}
\Delta_\text{4d}(a) = \Delta_\text{4d}(\epsilon_1) =
\Delta_\text{4d}(\epsilon_2)=1~.
\label{eq:dim-aeps}
\end{align}
Combining this with the expressions for
$|a;Y_1,Y_2\rangle$ shown in \eqref{eq:basis1}--\eqref{eq:basis4},
one can read off the 4d scaling dimensions of $\mathfrak{a}_m =
\langle a|a_m|\hat{I}^{(N/2)}\rangle/\langle a
|\hat{I}^{(N/2)}\rangle$.
For instance, we see from \eqref{eq:basis1}--\eqref{eq:basis4} and \eqref{eq:eigens} that $\mathcal{Z}_{{\tiny
\yng(1)},\emptyset}^{(A_1,D_N)} \sim \langle a;{\tiny
\yng(1)},\emptyset|\hat{I}^{(N/2)}\rangle/\langle
a|\hat{I}^{(N/2)}\rangle$ is evaluated as\footnote{Here, we recall that $\langle a;Y_1,Y_2|$ is obtained by expanding $|a;Y_1,Y_2\rangle$ as
a linear combination of vectors $L_{-n_1}^{p_1}\cdots
L_{-n_k}^{p_k}a_{-m_1}^{q_1}\cdots a_{-m_\ell}^{q_\ell}|a\rangle$ and replacing each of these vectors with $\langle a |a_{m_\ell}^{q_{\ell}}
\cdots a_{1}^{q_1} L_{n_k}^{p_k}\cdots L_{n_1}^{p_1}$, with the expansion
coefficients kept fixed.}
\begin{align}
\mathcal{Z}_{{\tiny \yng(1)},\emptyset}^{(A_1,D_N)} \sim
-i(\epsilon_1+\epsilon_2+2a)\mathfrak{a}_1 - \mathfrak{b}_1~.
\label{eq:ex1}
\end{align}
Since the two terms in \eqref{eq:ex1} must have the same scaling dimension,
we see that
\begin{align}
\Delta_\text{4d}(\mathfrak{a}_1) = \Delta_\text{4d}(\mathfrak{b}_1)-1 =
1- \frac{2}{N}~.
\end{align}
The same analysis for $\mathcal{Z}_{{\tiny
\yng(2)},\emptyset}^{(A_1,D_N)}$ implies
\begin{align}
\Delta_\text{4d}(\mathfrak{a}_{2}) = 2\Delta_\text{4d}(\mathfrak{b}_1)
-3 = 1- \frac{4}{N}~.
\end{align}
It is straightforward to do the same analysis for
all $\mathcal{Z}_{Y_1,\emptyset}^{(A_1,D_N)}$ with $Y_1= [1,\cdots,1]$. As
shown in \cite{Alba:2010qc}, the state $|a;Y_1,\emptyset\rangle$ is concisely
expressed as
\begin{align}
|a;Y_1,\emptyset\rangle &= \Omega_{Y_1}(a)\text{\bf
J}_{Y_1}^{(-\epsilon_2^2)}(x)|a\rangle~,
\label{eq:left}
\end{align}
where $\Omega_{Y_1}(a) \equiv (-\epsilon_1)^{|Y_1|}\prod_{(j,k)\in Y_1}(2a +j\epsilon_1+k\epsilon_2)$, and $\text{\bf J}_{Y_1}^{(1/g)}(x)$ is the
normalized Jack polynomial of variables $x\equiv
(x_1,x_2,\cdots)$.\footnote{Note that we have $1/g=-\epsilon_2^2$ here.}
Here, the variables $(x_1,x_2,\cdots)$ are related to the $\{L_n\}$ and
$\{a_m\}$ as follows. First, write the Virasoro generators $L_{n\neq 0}$ as
\begin{align}
L_n = \sum_{k\neq 0,n}c_kc_{k-n} + i(nQ -2a)c_n
\end{align}
in terms of $\{c_k\}$ such that $[c_k,c_\ell] =
\frac{k}{2}\delta_{k+\ell,0}$. Then $x=(x_1,x_2,\cdots)$ is related to $\{c_k\}$
and $\{a_m\}$ by the identifications
\begin{align}
a_{-n} - c_{-n} = -i\epsilon_1 p_n(x)~,
\end{align}
where $p_n(x) \equiv \sum_{i=1}^{|Y_1|}x_i^n$. Therefore, to express
\eqref{eq:left} in terms of $\{a_m\}$ and $\{L_m\}$, one first needs to write
$\text{\bf J}_{Y_1}^{(-\epsilon_2^2)}(x)$ in terms of $\{p_n(x)\}$, and
then replace $p_n(x)$ with $i(a_{-n}-c_{-n})/\epsilon_1$.
When $Y_1=[1,\cdots,1]$, the Jack polynomial is simply ${\bf J}_{Y_1}^{(1/g)}(x)
= |Y_1|! \, \prod_{i=1}^{|Y_1|}x_i$. Rewriting this in terms of
$p_n(x) = i(a_{-n}-c_{-n})/\epsilon_1$ for $n\in\mathbb{N}$, one finds
that the expression \eqref{eq:left} for $Y_1= [1,\cdots,1]$ is of the form
\begin{align}
|a;\,Y_1 = [1,\cdots,1],\,\emptyset\rangle &=
\Bigg(
\mathcal{N}(Y_1)\,
\epsilon_1^{|Y_1|-1}\left(\prod_{j=1}^{|Y_1|}(2a+j\epsilon_1+\epsilon_2)\right)a_{-|Y_1|}
\nonumber\\
& \qquad +\; \left(-L_{-1}\right)^{|Y_1|} \;+\;
\cdots \Bigg)|a\rangle~,
\label{eq:special}
\end{align}
where $\mathcal{N}(Y_1)$ is a numerical factor independent of $\epsilon_1$ and $\epsilon_2$.
Note here that the presence of $(-L_{-1})^{|Y_1|}$ on the right-hand
side of \eqref{eq:special} was already stressed in \cite{Alba:2010qc}.
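The rewriting of $|Y_1|!\prod_i x_i$ in power sums that leads to \eqref{eq:special} is a direct application of Newton's identities. The sympy sketch below (ours, purely illustrative) computes $|Y_1|!\,e_{|Y_1|}$ in terms of $\{p_n\}$ and confirms the two extreme monomials, $p_1^{|Y_1|}$ with coefficient one and $p_{|Y_1|}$ with coefficient $(-1)^{|Y_1|-1}(|Y_1|-1)!$, which give rise to the $(-L_{-1})^{|Y_1|}$ and $a_{-|Y_1|}$ terms after the substitution $p_n \to i(a_{-n}-c_{-n})/\epsilon_1$:

```python
import sympy as sp

def column_jack_in_power_sums(m):
    """m! * e_m for the column partition Y = [1,...,1] of length m,
    expressed in power sums p_1..p_m via Newton's identity
    e_k = (1/k) * sum_{i=1}^{k} (-1)^(i-1) p_i e_{k-i}."""
    p = sp.symbols(f'p1:{m + 1}')
    e = [sp.Integer(1)]
    for k in range(1, m + 1):
        e.append(sp.expand(sp.Rational(1, k)
                 * sum((-1)**(i - 1) * p[i - 1] * e[k - i]
                       for i in range(1, k + 1))))
    return sp.expand(sp.factorial(m) * e[m]), p
```

For $m=3$ this returns $p_1^3-3p_1p_2+2p_3$: the $p_1^m$ monomial has coefficient one, and the $p_m$ monomial has coefficient $(-1)^{m-1}(m-1)!$.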
The expression \eqref{eq:special} implies that, for $Y_1=[1,\cdots,1]$,
\begin{align}
\mathcal{Z}_{Y_1=[1,\cdots,1], \emptyset}^{(A_1,D_N)} \sim
\mathcal{N}(Y_1)\,\epsilon_1^{|Y_1|-1}\prod_{j=1}^{|Y_1|}(2a+j\epsilon_1+\epsilon_2)\mathfrak{a}_{|Y_1|}
+ (-\mathfrak{b}_{1})^{|Y_1|} + \cdots~.
\end{align}
For the first two terms on the right-hand side to be of the same scaling
dimension, we must have
\begin{align}
\Delta_\text{4d}(\mathfrak{a}_m) = m\Delta_\text{4d}(\mathfrak{b}_1) -2m+1
= 1-\frac{2m}{N}~.
\label{eq:dima}
\end{align}
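The two routes to $\Delta_\text{4d}(\mathfrak{a}_m)$, term matching in \eqref{eq:special} and the closed form $1-2m/N$, can be checked to agree symbolically; the sketch below also confirms that the dimension vanishes only at $m=N/2$, a fact used later for odd $N$:

```python
import sympy as sp

m, N = sp.symbols('m N', positive=True)

dim_b1 = 2*(1 - 1/N)           # Delta_4d(b_1), from the dimension formula
dim_am = m*dim_b1 - 2*m + 1    # from matching the two terms of eq. (special)

closed_form = 1 - 2*m/N        # the closed form quoted in the text
zero_locus = sp.solve(sp.Eq(dim_am, 0), m)  # dimension vanishes at m = N/2
```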
\subsection{Computation of matrix elements for odd $N$}
In the rest of this paper, we focus on the classical limit
$\epsilon_1,\epsilon_2\to 0$ so that
\eqref{eq:reduced-matrix-element} is valid. In this case, it is sufficient to identify
the values of \eqref{eq:eigens} for the computation of \eqref{eq:conj}.
Here we argue that, when the relevant couplings and VEVs of Coulomb
branch operators of $(A_1,D_N)$ are all turned off, the only non-vanishing parameter among
\eqref{eq:eigens} is $\mathfrak{b}_N$ and therefore
\eqref{eq:reduced-matrix-element} reduces to
\begin{align}
\frac{\langle a |a_{m_\ell}^{q_{\ell}}
\cdots a_{1}^{q_1} L_{n_k}^{p_k}\cdots L_{n_1}^{p_1}
|\hat{I}^{(N/2)}\rangle}{\langle a | \hat{I}^{(N/2)}\rangle} &=\left\{
\begin{array}{l}
1 \qquad \text{for}\quad \ell=k=0
\\[2mm]
\delta_{n_1,N} (\mathfrak{b}_N)^{p_1} \qquad \text{for}\quad
\ell=0,\;\;k=1
\\[2mm]
0\qquad \text{for the others}\\
\end{array}
\right.~.
\label{eq:presc}
\end{align}
To derive \eqref{eq:presc}, we first note that all parameters of $(A_1,D_N)$ on the Coulomb branch are
encoded in the SW curve \eqref{eq:curve0}. Through the equivalence of
\eqref{eq:curve0} and \eqref{eq:curve1}, these are related to $a$ and the
non-vanishing components of $\mathfrak{b}_n$.
The interpretation of non-vanishing $\mathfrak{b}_n$ in four dimensions is as
follows. From \eqref{eq:dimb}, we see that
$\mathfrak{b}_1,\cdots,\mathfrak{b}_{\frac{N-1}{2}}$ are identified as
the VEVs of Coulomb branch operators since they have scaling dimensions
larger than one \cite{Argyres:1995jj, Argyres:1995xn, Eguchi:1996vu}. Similarly, $\mathfrak{b}_{\frac{N+1}{2}},\cdots,\mathfrak{b}_{N-1}$
are identified as relevant couplings since their dimensions are smaller
than one. Note that the $(A_1,D_N)$ theory has no exactly
marginal coupling, and therefore the dimensionless parameter $\mathfrak{b}_N$
has no counterpart in four dimensions. This implies that the final result
must be independent of $\mathfrak{b}_N$, as discussed in the next sub-section.
Since the Coulomb branch of $(A_1,D_N)$ is completely
characterized by $\{\mathfrak{b}_n\}$ and $a$, any physical quantity of the $(A_1,D_N)$ theory (on the Coulomb
branch) should be determined by these parameters.
In particular, $\mathfrak{a}_m$ must be a function of $\{\mathfrak{b}_n\}$
and $a$. When $N$ is even, this function was
identified in \cite{Kimura:2020krd} via the colliding-limit construction of
$|\hat{I}^{(N/2)}\rangle$, where $\mathfrak{a}_m$ turned out to be
independent of $a$. Here we assume this independence to hold for odd
$N$ as well, and therefore $\mathfrak{a}_m$ is a function
only of
$\{\mathfrak{b}_n\}$.
While it is beyond the scope of this paper to compute
$\mathfrak{a}_m$ for generic values of $\{\mathfrak{b}_n\}$, one can easily compute it when all the relevant
couplings and VEVs of Coulomb branch operators are turned off in
the $(A_1,D_N)$ theory. Indeed, turning off these couplings and VEVs
implies that
\begin{align}
\mathfrak{b}_n = 0~,\qquad \text{for}\qquad n\neq N~.
\end{align}
Note that this is equivalent to the condition that $\mathfrak{b}_n=0$
unless $\Delta_\text{4d}(\mathfrak{b}_n) = 0$. Since $\mathfrak{a}_m$ is
assumed to be
a function only of $\{\mathfrak{b}_n\}$, this implies that $\mathfrak{a}_m =
0$ unless $\Delta_\text{4d}(\mathfrak{a}_m) = 0$.\footnote{Note here that, since we are already taking the classical
limit $\epsilon_1,\epsilon_2\to 0$, the only non-vanishing dimensionful
parameter in the $(A_1,D_N)$ sector is now $a$. Since
$\mathfrak{a}_m$ is assumed to be independent of $a$, we see that $\mathfrak{a}_m =
0$ unless $\Delta_\text{4d}(\mathfrak{a}_m)=0$.}
From \eqref{eq:dima}, we see
that $\Delta_{\text{4d}}(\mathfrak{a}_m) = 0$ occurs if and only
if $m=N/2$, but this condition is never satisfied for odd $N$. Hence, we conclude that
\begin{align}
\mathfrak{a}_m = 0~,
\end{align}
for all $m$, when the relevant couplings and the VEVs of Coulomb
branch operators of $(A_1,D_N)$ are turned off.
The above discussion implies that the matrix element \eqref{eq:reduced-matrix-element}
reduces to \eqref{eq:presc} when focusing on the classical limit
$\epsilon_1,\epsilon_2\to 0$ and turning off all the relevant couplings
and VEVs of Coulomb branch operators of the
$(A_1,D_N)$ theory.
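The prescription \eqref{eq:presc} is simple enough to state algorithmically. The following Python sketch (function names are ours) evaluates the reduced matrix element for given lists of Heisenberg and Virasoro modes:

```python
def reduced_matrix_element(a_modes, L_modes, b_N, N):
    """Classical-limit matrix element of eq. (presc), with all relevant
    couplings and Coulomb-branch VEVs of the (A_1, D_N) sector turned off:
    any a_m insertion gives zero, and L_n contributes only for n = N."""
    if a_modes:                        # fraktur a_m = 0 for all m > 0
        return 0
    if any(n != N for n in L_modes):   # fraktur b_n = 0 for n != N
        return 0
    return b_N ** len(L_modes)         # (b_N)^p for p insertions of L_N
```

In particular, the empty insertion returns one, reproducing the first line of \eqref{eq:presc}.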
\subsection{Removing an unphysical degree of freedom}
Suppose that we turn off all the relevant couplings and VEVs of Coulomb
branch operators in $(A_1,D_N)$. Then one can compute the RHS of
\eqref{eq:conj} using \eqref{eq:reduced-matrix-element}
and \eqref{eq:presc}. From \eqref{eq:presc}, we
see that the result depends on $\mathfrak{b}_N$.
Note that \eqref{eq:dimb} implies
$\Delta_\text{4d}\left(\mathfrak{b}_N\right)=0$, and therefore
$\mathfrak{b}_N$ must be an exactly marginal coupling if it is a
physical degree of freedom. However, the $(A_1,D_N)$ theory has no such coupling. This
means that $\mathfrak{b}_N$, which appears on the RHS of \eqref{eq:presc},
is not a physical parameter in four dimensions. The fact that $\mathfrak{b}_N$ is unphysical can also be seen in the SW curve \eqref{eq:A1DN} of the $(A_1,D_N)$
theory; $\lambda_N = \mathfrak{b}_N$ can be absorbed by a change of
variables.
Hence, to make the relation \eqref{eq:conj} more
precise, one has to introduce a prefactor on the RHS to remove this
unphysical degree of freedom.\footnote{This is exactly the same situation as in
\eqref{eq:AD1} for even $N$, where $(\zeta
c_{N/2})^{-\frac{2(|Y_1|+|Y_2|)}{N}}$ removes a degree of freedom
that has no physical meaning in the corresponding four-dimensional
theory.}
As shown in \cite{Alba:2010qc}, the basis $|a;Y_1,Y_2\rangle$ is a
descendant at level $|Y_1|+|Y_2|$.\footnote{Here, the level of a
descendant means the sum of the level of the Virasoro descendant and that
of a Heisenberg descendant. For instance, $L_{-1}a_{-3}|a\rangle$ is a
descendant at level four.} Combining this fact with \eqref{eq:presc}, we
find that $\langle a;Y_1,Y_2|\hat{I}^{(N/2)}\rangle/\langle
a|\hat{I}^{(N/2)}\rangle$ is proportional to
$(\mathfrak{b}_N)^{\frac{|Y_1|+|Y_2|}{N}}$. This means that the following
expression is independent of $\mathfrak{b}_N$:
\begin{align}
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{N})}(a) = \left(\xi \mathfrak{b}_{N}\right)^{-\frac{|Y_1|+|Y_2|}{N}}\frac{\langle
a;Y_1,Y_2|\hat{I}^{(N/2)}\rangle}{\langle a | \hat{I}^{(N/2)}\rangle}~,\label{eq:AD3}
\end{align}
where $\xi$ is a numerical free parameter that can be absorbed by
rescaling the dynamical scale.
We therefore identify
\eqref{eq:AD3} as the precise expression for the contribution from
$(A_1,D_N)$ to the instanton partition function. Note that this is the
``odd-$N$ version'' of \eqref{eq:AD1}.
We will apply the above formula in the next
section to the computation of the instanton partition function of the $(A_2,A_5)$ theory.
\section{Application to the $(A_2,A_5)$ theory}
\label{sec:A2A5}
In this section, we compute the instanton partition function of the
$(A_2,A_5)$ theory using the method described in the previous section.
\subsection{Partition function}
Recall that the $(A_2,A_5)$ theory is an $SU(2)$ gauge theory
described by the quiver diagram in Fig.~\ref{fig:quiver2}.
We first replace the gauge group with $U(2)$, and then the partition function of
the theory is evaluated as
\begin{align}
\mathcal{Z}_{U(2)} = \mathcal{Z}^{U(2)}_\text{pert}\sum_{Y_1,Y_2}q^{|Y_1|+|Y_2|}
\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)\mathcal{Z}_{Y_1,Y_2}^\text{fund}(a,M)\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_3)}(a,d,u)
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_6)}(a,m,\pmb{d},\pmb{u})~.
\label{eq:Z-A2A5}
\end{align}
Here $\mathcal{Z}_{Y_1,Y_2}^\text{vec}$ and
$\mathcal{Z}_{Y_1,Y_2}^\text{fund}$ are contributions respectively from
the vector multiplet and fundamental hypermultiplet \cite{Nekrasov:2002qd,Nekrasov:2003rj}, which have simple product expressions \cite{Flume:2002az,
Bruzzo:2002xf, Fucito:2004gi} as reviewed in (A.1) and (A.3) of \cite{Kimura:2020krd}.
On the other hand, $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_3)}$ and
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_6)}$ are contributions respectively from the
$(A_1,D_3)$ and $(A_1,D_6)$ sectors in Fig.~\ref{fig:quiver2}. Here, $q$
is the exponential of the exactly marginal gauge coupling, $d$ and $u$ are respectively the relevant coupling and VEV of Coulomb branch operator
in the $(A_1,D_3)$ theory, and $m$, $\pmb{d}=(d_1,d_2)$ and
$\pmb{u}=(u_1,u_2)$ are respectively the mass parameter,
relevant couplings and VEVs of Coulomb branch operators in the
$(A_1,D_6)$ theory.
The scaling dimensions of these parameters are as follows:
\begin{align}
[q]=0,\qquad [d_1]=\frac{1}{3}~,\qquad
[d]=[d_2]=\frac{2}{3}~,\qquad [u]=[u_1]=\frac{4}{3}~, \qquad [u_2]=\frac{5}{3}~.
\end{align}
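These dimensions are consistent with \eqref{eq:dimb}, $\Delta_\text{4d}(\mathfrak{b}_n)=2(1-n/N)$, applied to the two sectors. A quick check with exact fractions (the pairing of parameters with $\mathfrak{b}_n$ in the comments is our reading, obtained by matching dimensions):

```python
from fractions import Fraction

def dim_b(n, N):
    # Delta_4d(b_n) = 2(1 - n/N) for an (A_1, D_N) sector
    return 2*(1 - Fraction(n, N))

# (A_1, D_3): u <-> b_1, d <-> b_2
assert dim_b(1, 3) == Fraction(4, 3)   # [u]
assert dim_b(2, 3) == Fraction(2, 3)   # [d]

# (A_1, D_6): u_2 <-> b_1, u_1 <-> b_2, m <-> b_3, d_2 <-> b_4, d_1 <-> b_5
assert dim_b(1, 6) == Fraction(5, 3)   # [u_2]
assert dim_b(2, 6) == Fraction(4, 3)   # [u_1]
assert dim_b(3, 6) == 1                # mass parameter m
assert dim_b(4, 6) == Fraction(2, 3)   # [d_2]
assert dim_b(5, 6) == Fraction(1, 3)   # [d_1]
```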
In the rest of this section, we set $d=u=0$ so that the formula
derived in the previous section is applicable.
Using \eqref{eq:AD3}, we identify the contribution of the $(A_1,D_3)$ theory as
\begin{align}
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_3)}(a)=\left(\xi\mathfrak{b}_3\right)^{-\frac{|Y_1|+|Y_2|}{3}}\frac{\langle
a;Y_1,Y_2|\hat{I}^{\left(3/2\right)}\rangle}{\langle
a|\hat{I}^{\left(3/2\right)}\rangle}~.
\label{eq:piece1}
\end{align}
Since we turn off the relevant coupling and the VEV of the Coulomb
branch operator, the RHS of \eqref{eq:piece1} can be computed via
\eqref{eq:presc}.
The contribution of the $(A_1,D_6)$
theory was already identified in \cite{Kimura:2020krd} and reviewed
in \eqref{eq:AD1}; substituting $N=6$, we find
\begin{align}
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_6)}(a,m,\pmb{d},\pmb{u})=\left(\zeta c_3\right)^{-\frac{|Y_1|+|Y_2|}{3}}\frac{\langle
a;Y_1,Y_2|\hat{I}^{(3)}\rangle}{\langle a|\hat{I}^{(3)}\rangle}~,
\label{eq:piece2}
\end{align}
where $\pmb{d}=(d_1,d_2)$ and $\pmb{u}=(u_1,u_2)$ are identified as in
\eqref{eq:dk} and \eqref{eq:uk}.
We choose the free parameter $\zeta$ to be $\zeta = 2/\xi$ so that the
expressions in the next sub-section are simple. Changing the value of
$\zeta$ or $\xi$ just corresponds to rescaling $q$.
The RHS of \eqref{eq:piece2} can be evaluated by using $|\hat{I}^{(3)}\rangle = |I^{(3)}\rangle \otimes
|I^{(3)}_H\rangle$ and the following equations:
\begin{align}
L_k|I^{(3)}\rangle &= 0\quad \text{for}\quad k\geq 7~,
\\
L_6|I^{(3)}\rangle &= -c_3^2|I^{(3)}\rangle~,
\label{eq:L6}
\\
L_5|I^{(3)}\rangle &= -2c_2c_3|I^{(3)}\rangle~,
\\
L_4|I^{(3)}\rangle &= -\left(c_2^2+2c_3c_1\right)|I^{(3)}\rangle~,
\\
L_3|I^{(3)}\rangle &= -2\left(c_1c_2+c_3(c_0-2Q)\right)|I^{(3)}\rangle~,
\\
L_2|I^{(3)}\rangle &=
\left(c_3\frac{\partial}{\partial c_1}-c_2(2c_0-3Q)-c_1^2\right)|I^{(3)}\rangle~,
\label{eq:L2}
\\
L_1|I^{(3)}\rangle &= \left(2c_3\frac{\partial}{\partial c_2}+c_2\frac{\partial}{\partial c_1}
-2c_1(c_0-Q)\right)|I^{(3)}\rangle~,
\label{eq:L0}
\end{align}
and
\begin{align}
a_k|I_H^{\left(3\right)}\rangle =
\left\{
\begin{array}{l}
-ic_k|I_H^{\left(3\right)}\rangle \quad \text{for} \quad k=1,2,3
\\[2mm]
0 \quad \text{for} \quad k>3
\\
\end{array}~.\right.
\label{eq:ak}
\end{align}
Using \eqref{eq:Z-A2A5}, \eqref{eq:piece1} and \eqref{eq:piece2},
one can evaluate $\mathcal{Z}_{U(2)}/\mathcal{Z}_\text{pert}^{U(2)}$ order by order in $q$.
Recall that we have replaced the $SU(2)$ gauge group in
Fig.~\ref{fig:quiver2} with $U(2)$. This induces an extra prefactor of
the partition function, $\mathcal{Z}_{U(1)}$, that is called the
``$U(1)$-factor.'' The partition function of the original $(A_2,A_5)$
theory is then recovered by removing $\mathcal{Z}_{U(1)}$ from
$\mathcal{Z}_{U(2)}$, i.e.,
\begin{align}
\mathcal{Z}_{(A_2,A_5)} = \frac{\mathcal{Z}_{U(2)}}{\mathcal{Z}_{U(1)}}~.
\end{align}
Since $a$ is the VEV of a scalar field
in the $SU(2)$ vector multiplet, $a$ is neutral under $U(1)$. Therefore
we expect that $\mathcal{Z}_{U(1)}$ is
independent of $a$. This means that, up to an $a$-independent prefactor,
$\mathcal{Z}_{U(2)}$ and $\mathcal{Z}_{(A_2,A_5)}$ are identical.
\subsection{S-duality from the prepotential relation}
\label{subsec:prepotential}
We here focus on the prepotential of the $(A_2,A_5)$ theory:
\begin{align}
\mathcal{F}^{(A_2,A_5)} &\equiv \lim_{\epsilon_i\to 0}
\left(-\epsilon_1\epsilon_2\log \mathcal{Z}_{(A_2,A_5)}\right)~.
\label{eq:F}
\end{align}
Up to the $a$-independent term $\lim_{\epsilon_i\to
0}(-\epsilon_1\epsilon_2 \log \mathcal{Z}_{U(1)})$, this is identical to
\begin{align}
\lim_{\epsilon_i\to
0}\left(-\epsilon_1\epsilon_2\log\mathcal{Z}_{U(2)}\right).
\end{align}
The prepotential \eqref{eq:F} is generally decomposed into the perturbative and instanton parts as
\begin{align}
\mathcal{F}^{(A_2,A_5)}=\mathcal{F}^{(A_2,A_5)}_{\text{pert}} +\mathcal{F}^{(A_2,A_5)}_{\text{inst}}~.
\end{align}
Again, up to $a$-independent terms affected by the $U(1)$-factor, the instanton part
$\mathcal{F}_\text{inst}^{(A_2,A_5)}$ is identical to
\begin{align}
\lim_{\epsilon_i\to 0}\left(-\epsilon_1\epsilon_2\log
\frac{\mathcal{Z}_{U(2)}}{\mathcal{Z}_\text{pert}^{U(2)}}\right)~,
\label{eq:F2}
\end{align}
which one can compute using the formula \eqref{eq:Z-A2A5}.\footnote{Note here that we are setting $d=u=0$ in \eqref{eq:Z-A2A5}, and therefore
\eqref{eq:F2} can be unambiguously computed via
\eqref{eq:Z-A2A5} with \eqref{eq:piece1} and \eqref{eq:piece2}.}
Below, we will compute this instanton part, and read off from it how the
S-duality group acts on the UV gauge coupling of the $(A_2,A_5)$ theory.
To study the S-duality of the theory, it is useful to turn off the
couplings and VEVs in the
$(A_1,D_6)$ sector as well, i.e., $\pmb{d}=(0,0)$ and $\pmb{u}=(0,0)$ in
\eqref{eq:Z-A2A5}.
In this case, $\mathcal{F}^{(A_2,A_5)}_\text{inst}$ is a function
of $q, a$ and two mass parameters $M$ and $m$. Using \eqref{eq:piece1}
and \eqref{eq:piece2}, one obtains
\begin{align}
\mathcal{F}_\text{inst}^{(A_2,A_5)}(q;a,m,M) &\sim
\frac{1}{6}\left(a^2 + \frac{mM^3}{2}a^{-2}\right)q^3 \notag
\\
&\qquad + \frac{1}{192} \Bigg[13a^2 + \left(\frac{3}{4}m^2 M^2 + 8 mM^3 + 3 M^4\right)a^{-2} \notag
\\
&\qquad - \left(\frac{9}{4}m^2 M^4 + 3 M^6\right)a^{-4} + \frac{5}{4}m^2M^6 a^{-6}\Bigg]q^6 + \mathcal{O}(q^9)~,
\label{eq:F-A2A5}
\end{align}
where ``$\sim$'' means that the LHS and RHS are identical up to $a$-independent terms affected by the $U(1)$-factor.
Remarkably, the above expression bears a striking resemblance to the
instanton part $\mathcal{F}_\text{inst}^{N_f=4}$ of the prepotential of $SU(2)$ gauge theory with four fundamental
flavors. Indeed, comparing \eqref{eq:F-A2A5} with \eqref{eq:F-Nf=4} in
appendix \ref{app:Nf=4}, we see that the relation
\begin{align}
3\mathcal{F}_\text{inst}^{(A_2,A_5)}\left(q;a,m,M\right) =
\mathcal{F}_\text{inst}^{N_f=4}\left(q^3;a,\frac{m}{2},M,M,M\right)
\label{eq:relation-M}
\end{align}
holds, at least up to $\mathcal{O}(q^9)$!\footnote{To be precise, we
have only checked this relation up to $\mathcal{O}(q^9)$, and also up to terms affected by the $U(1)$-factor. }
Note that one of the four mass parameters on the RHS is
related to the mass parameter $m$ in the $(A_1,D_6)$ sector on the
LHS, while the remaining
three masses on the RHS are identified with the mass $M$ of the single
fundamental hypermultiplet on the LHS.
In the next sub-section, we will show that these mass relations
are consistent with the SW curves of $(A_2,A_5)$ and $SU(2)$
gauge theory with four flavors.
In the same spirit as \cite{Kimura:2020krd},
we
conjecture that the relation \eqref{eq:relation-M} extends
to the full prepotential. This
particularly implies
that, when the mass parameters are also turned off, one finds
\begin{align}
3\mathcal{F}^{(A_2,A_5)}(q;a) =
\mathcal{F}^{N_f=4}(q^3,a)~.
\label{eq:relation}
\end{align}
This prepotential relation is extremely powerful since one
can study the S-duality of the $(A_2,A_5)$ theory via that of $SU(2)$
gauge theory with four flavors.
To see this, first note that the prepotentials of the two theories must
be written as
\begin{align}
\mathcal{F}^{(A_2,A_5)}(q;a) = \left(\log q_\text{IR}\right)a^2~,\qquad
\mathcal{F}^{N_f=4}(q;a) = \left(\log \tilde{q}_\text{IR}\right)a^2~,
\label{eq:FqIR}
\end{align}
for dimensional reasons, where $q_\text{IR}$ and $\tilde{q}_\text{IR}$
are functions of the UV gauge coupling $q$. One can regard $q_\text{IR}$
and $\tilde{q}_\text{IR}$ as IR gauge couplings of these theories on the
Coulomb branch. Indeed, in the weak coupling limit, both $q_\text{IR}$ and
$\tilde{q}_\text{IR}$ coincide with the UV gauge coupling $q$.
For $SU(2)$ gauge theory
with four flavors, the IR and UV gauge couplings are known to be related
by \eqref{eq:UV-IR-Nf=4} in appendix \ref{app:Nf=4} \cite{Grimm:2007tm}. This
theory is known to be invariant under an action of
$PSL(2,\mathbb{Z})$. Its action on the IR gauge coupling is written as
\begin{align}
T: \tilde{\tau}_\text{IR} \to \tilde{\tau}_\text{IR} + 1~,\qquad
S:\tilde{\tau}_\text{IR} \to -\frac{1}{\tilde{\tau}_\text{IR}}~,
\label{eq:TS1}
\end{align}
where $\tilde{\tau}_\text{IR} \equiv \frac{1}{\pi i}\log \tilde{q}_\text{IR}$. Through
\eqref{eq:UV-IR-Nf=4}, one can translate the above as
\begin{align}
T: q\to \frac{q}{q-1}~,\qquad S: q\to 1-q~.
\label{eq:TS2}
\end{align}
Similarly, the $(A_2,A_5)$ theory is known to be invariant under
$PSL(2,\mathbb{Z})$ \cite{DelZotto:2015rca, Cecotti:2015hca, Buican:2015tda}. Indeed, the SW curve of the $(A_2,A_5)$ theory reduces to a
genus-one curve when dimensionful parameters except for $a$ are all
turned off. One difference from the previous paragraph is that
the action of
$PSL(2,\mathbb{Z})$ on $q$ has not been identified, since the relation between
$q$ and $q_\text{IR}$ has been unclear for $(A_2,A_5)$. However, from the prepotential relation
we found above, one can now identify the
explicit relation between $q$ and $q_\text{IR}$ for the $(A_2,A_5)$
theory. Specifically, we see from \eqref{eq:relation} that
$\mathcal{F}^{(A_2,A_5)}(q;a)$ is obtained from
$\mathcal{F}^{N_f=4}(q;a)$ by the replacement
\begin{align}
q \longrightarrow q^3~,\qquad \tilde{q}_\text{IR} \longrightarrow
q_\text{IR}^3~.
\label{eq:replacement}
\end{align}
Applying this replacement to \eqref{eq:UV-IR-Nf=4}, we find the
following relation between the UV and IR gauge couplings of the
$(A_2,A_5)$ theory:
\begin{align}
q^3=\frac{\theta_2(q_\text{IR}^3)^4}{\theta_3(q_\text{IR}^3)^4}~.
\label{eq:UV-IR-A2A5}
\end{align}
This suggests that $PSL(2,\mathbb{Z})$ acts on the IR gauge coupling
$\tau_\text{IR}\equiv \frac{3}{\pi i}\log q_\text{IR}$
as
\begin{align}
T:\tau_\text{IR} \to \tau_\text{IR} + 1~,\qquad S:\tau_\text{IR} \to
-\frac{1}{\tau_\text{IR}}~,
\label{eq:TS3/2}
\end{align}
and on the UV gauge coupling as
\begin{align}
T: q^3\to\frac{q^3}{q^3-1}~,\quad S: q^3\to 1-q^3~.
\label{eq:TS3/2-2}
\end{align}
Indeed, applying \eqref{eq:replacement} to \eqref{eq:TS1} and
\eqref{eq:TS2}, one obtains \eqref{eq:TS3/2} and \eqref{eq:TS3/2-2}.
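As a numerical cross-check of \eqref{eq:UV-IR-A2A5}, one can expand the theta-constant ratio as a series in the nome. Assuming the standard Jacobi theta constants $\theta_2(q)=2\sum_{n\geq 0}q^{(n+1/2)^2}$ and $\theta_3(q)=1+2\sum_{n\geq 1}q^{n^2}$, the ratio $\theta_2^4/\theta_3^4$ is the modular lambda function, whose expansion $16q-128q^2+704q^3+\cdots$ can be reproduced with sympy:

```python
import sympy as sp

q = sp.symbols('q')
K = 8  # truncation of the theta series; more than enough for order q^4

# theta_2(q)^4 = 16 q (sum_{n>=0} q^{n(n+1)})^4,  theta_3(q) = 1 + 2 sum q^{n^2}
theta2_4 = 16*q*sp.expand(sum(q**(n*(n + 1)) for n in range(K))**4)
theta3 = 1 + 2*sum(q**(n**2) for n in range(1, K))

# modular lambda = theta_2^4 / theta_3^4 as a series in the nome q
lam = sp.expand(sp.series(theta2_4/theta3**4, q, 0, 5).removeO())
```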
Remarkably, the above $PSL(2,\mathbb{Z})$-action on the $(A_2,A_5)$ theory
can be extended to a more non-trivial situation. Let us now turn on
$u_1$ and $u_2$ in \eqref{eq:Z-A2A5} while keeping $d,d_1,d_2$ and $u$ vanishing. Then
the resulting $\mathcal{F}_\text{inst}^{(A_2,A_5)}$ is a function of
$a,m,M,u_1,u_2$ and $q$. We find that this
$\mathcal{F}_\text{inst}^{(A_2,A_5)}$ is invariant under the following
change of variables:
\begin{align}
q&\to \frac{e^{\frac{\pi i}{3}}q}{(1-q^3)^{\frac{1}{3}}}~,\qquad
m \to -m~,\qquad u_1 \to e^{\frac{2\pi i}{3}}u_1~,\qquad u_2 \to e^{\frac{\pi i}{3}}u_2~,
\label{eq:T-trans}
\end{align}
where $M$ and $a$ are kept fixed.
We checked this invariance up to $\mathcal{O}(q^6)$.
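That \eqref{eq:T-trans} indeed cubes to the $T$-action \eqref{eq:TS3/2-2} on $q^3$ is a one-line check; the sympy sketch below verifies it:

```python
import sympy as sp

q = sp.symbols('q')

# T-action on the UV coupling q from eq. (T-trans):
Tq = sp.exp(sp.I*sp.pi/3) * q / (1 - q**3)**sp.Rational(1, 3)
cubed = sp.simplify(Tq**3)   # should equal q^3/(q^3 - 1), cf. eq. (TS3/2-2)
```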
Note that the transformation \eqref{eq:T-trans} is a natural
extension of the $T$-transformation in \eqref{eq:TS3/2-2}. We believe
this can be further extended to the case of non-vanishing $d,d_1,d_2$ and
$u$. In particular, we believe the $T$-transformations for non-vanishing
$d,d_1$ and $d_2$ involve a non-trivial $q$-dependence as in the case of
$(A_3,A_3)$ theory studied in \cite{Kimura:2020krd}.
We leave a careful study of it for future work.\footnote{As
discussed in Sec.~\ref{sec:U2},
our formula for $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_3)}$ is
only for vanishing $d$ and $u$. Therefore, our
discussion on the S-duality here is limited to the case of $d=u=0$.
Since $d$ and $d_2$
are of the same dimension, the $T$-transformation is expected to mix
them, which is why we turn off $d_2$ as well in the main text.
}
\section{Consistency with the Seiberg-Witten curve}
\label{sec:SW}
In this section, we show that the surprising relation
\eqref{eq:relation} is consistent with the SW curve of
the $(A_2,A_5)$ theory. In particular, we will show that the relation
between the two sets of mass parameters can also be seen in the SW curve.
We also show that
the $T$-transformation \eqref{eq:T-trans} corresponds to a symmetry of
the curve.
The SW curve of the $(A_2,A_5)$ theory can be written as \cite{Cecotti:2010fi, Xie:2012hs}
\begin{align}
0 = x^{3}&+z^{6}- \frac{1}{\mathsf{q}}x^{2}z^{2}-\mathsf{q}xz^{4}+c_{20}x^{2}+x(c_{11}z+c_{10}) \notag \\
& +c_{05}z^{5}+c_{04}z^{4}+c_{03}z^{3}+c_{02}z^{2}+c_{01}z-c_{00}~,
\label{eq:curve}
\end{align}
where $\mathsf{q}$ corresponds to the exactly marginal gauge coupling,
and is a non-trivial function of $q_{\text{IR}}$.
The SW 1-form is given by $\lambda=xdz$.
Since the mass of a BPS state is given by $\oint \lambda$, the 1-form
$\lambda$ has scaling dimension one, which fixes the dimensions of the
parameters in \eqref{eq:curve} as
\begin{align}
[x]=\frac{2}{3}~,\qquad [z]=\frac{1}{3}~,\qquad [c_{ij}]=2-\frac{2i+j}{3}~,\qquad [\mathsf{q}]=0~.
\end{align}
The coefficients $c_{ij}$ with $0<[c_{ij}]<1$ are regarded as relevant
couplings, while those with $[c_{ij}]>1$
are regarded as the VEVs of Coulomb branch operators.
The remaining parameters, $c_{11}$ and $c_{03}$, are the two mass parameters.
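The dimension assignments can be tabulated mechanically. The following short script (exact rational arithmetic; added here purely as an illustration) reproduces the classification quoted above:

```python
from fractions import Fraction

# Coefficients c_ij appearing in the (A_2, A_5) curve.
coeffs = [(2, 0), (1, 1), (1, 0),
          (0, 5), (0, 4), (0, 3), (0, 2), (0, 1), (0, 0)]

# [c_ij] = 2 - (2i + j)/3, fixed by [x] = 2/3 and [z] = 1/3.
dim = {(i, j): Fraction(2) - Fraction(2 * i + j, 3) for (i, j) in coeffs}

relevant = {c for c in coeffs if 0 < dim[c] < 1}   # relevant couplings
masses = {c for c in coeffs if dim[c] == 1}        # mass parameters
vevs = {c for c in coeffs if dim[c] > 1}           # Coulomb branch VEVs
```

Running it confirms that $c_{20},c_{05},c_{04}$ are relevant, $c_{11},c_{03}$ are masses, and the rest are Coulomb branch VEVs.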
\subsection{Three sectors in the $(A_2,A_5)$ theory}
We first show that the curve \eqref{eq:curve} splits into three sectors in
the weak gauge coupling limit $\mathsf{q}\to 0$.
To see this, let us study the behavior of the curve
for $\mathsf{q}\sim 0$. As discussed in \cite{Buican:2014hfa},
the coefficients $c_{ij}$ of the curve must be renormalized so that as
many periods as possible are kept finite in the limit $\mathsf{q}\to
0$. We find that the correctly renormalized coefficients are as follows:
\begin{align}
C_{ij}\equiv \mathsf{q}^{\frac{[c_{ij}]}{2}}c_{ij}\quad \text{for} \quad i\ne
j~,
\qquad C_{11}\equiv \mathsf{q} c_{11}~,
\qquad C_{00}\equiv \mathsf{q} c_{00}~.
\label{eq:normalize}
\end{align}
In terms of these renormalized parameters, the curve (\ref{eq:curve}) is
written as
\begin{align}
0 = x^{3}&+z^{6}- \frac{1}{\mathsf{q}}x^{2}z^{2}-\mathsf{q}xz^{4}+\mathsf{q}^{-\frac{1}{3}}C_{20}
x^{2}+x(\mathsf{q}^{-1}C_{11}z+\mathsf{q}^{-\frac{2}{3}}C_{10}) \notag \\
& +\mathsf{q}^{-\frac{1}{6}}C_{05}z^{5}+\mathsf{q}^{-\frac{1}{3}}C_{04}z^{4}
+\mathsf{q}^{-\frac{1}{2}}C_{03}z^{3}+\mathsf{q}^{-\frac{2}{3}}C_{02}z^{2}
+\mathsf{q}^{-\frac{5}{6}}C_{01}z^{1}-\mathsf{q}^{-1}C_{00}~.
\label{eq:curve3}
\end{align}
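As a cross-check of \eqref{eq:normalize}, the power of $\mathsf{q}$ multiplying each $C_{ij}$ in \eqref{eq:curve3} should be $-[c_{ij}]/2$ for $i\neq j$ and $-1$ for $C_{11}$ and $C_{00}$. A few lines of Python (illustrative, using exact fractions) confirm the printed exponents:

```python
from fractions import Fraction

def dim(i, j):
    # [c_ij] = 2 - (2i + j)/3
    return Fraction(2) - Fraction(2 * i + j, 3)

def q_exponent(i, j):
    # c_ij = q^{-[c_ij]/2} C_ij for i != j; c_11 and c_00 carry q^{-1}.
    return Fraction(-1) if i == j else -dim(i, j) / 2

# Exponents of q read off from the renormalized curve (eq:curve3).
printed = {(2, 0): Fraction(-1, 3), (1, 1): Fraction(-1), (1, 0): Fraction(-2, 3),
           (0, 5): Fraction(-1, 6), (0, 4): Fraction(-1, 3), (0, 3): Fraction(-1, 2),
           (0, 2): Fraction(-2, 3), (0, 1): Fraction(-5, 6), (0, 0): Fraction(-1)}

assert all(q_exponent(i, j) == e for (i, j), e in printed.items())
```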
One can show that the curve \eqref{eq:curve3} splits into the following
three sectors when we take $\mathsf{q}\to 0$ with $C_{ij}$ kept finite.
\begin{itemize}
\item
In the region $|z/x|\sim \mathsf{q}^{-1/3}$, one has the curve
\begin{align}
0=-\tilde{x}^{2}\tilde{z}^2+\tilde{z}^{6}+C_{11}\tilde{x}\tilde{z} + C_{05}\tilde{z}^{5}
+C_{04}\tilde{z}^{4}+C_{03}\tilde{z}^3+C_{02}\tilde{z}^2+C_{01}\tilde{z}-C_{00}~,
\end{align}
where we defined $\tilde{x}=\mathsf{q}^{-\frac{1}{6}}x$ and $\tilde{z}=\mathsf{q}^{\frac{1}{6}}z$.
One can shift $\tilde{x}$ as $\tilde{x}\to
\tilde{x}+C_{11}/(2\tilde{z})$ so that the curve coincides
with a known expression for the $(A_1,D_6)$ theory:
\begin{align}
\tilde{x}^{2}=\tilde{z}^{4}+C_{05}\tilde{z}^{3}
+C_{04}\tilde{z}^{2}+C_{03}\tilde{z}+C_{02}+\frac{C_{01}}{\tilde{z}}-\frac{C_{00}-\frac{C_{11}^2}{4}}{\tilde{z}^{2}}~.
\label{eq:A1D6curve}
\end{align}
Note that the above shift of $\tilde{x}$ preserves the SW 1-form up to
exact terms. Here, we see that $C_{05}$ and $C_{04}$ are relevant
couplings, $C_{02}$ and $C_{01}$ are the VEVs of Coulomb
branch operators, and $C_{03}$ and $\sqrt{C_{00} - C_{11}^2/4}$ are
mass parameters of the $(A_1,D_6)$ theory. In particular, $\sqrt{C_{00}-C_{11}^2/4}$ is associated with the $SU(2)$
flavor sub-group that is gauged by the $SU(2)$ vector
multiplet in Fig.~\ref{fig:quiver2}.
\item
In
the region $|z/x|\sim \mathsf{q}^{2/3}$, the curve reduces to
\begin{align}
0= \tilde{x}^3-\tilde{x}^2\tilde{z}^2 + C_{20}\tilde{x}^2 + \tilde{x}(C_{11}\tilde{z}+C_{10}) - C_{00}~,
\end{align}
where we defined
$\tilde{x}=\mathsf{q}^{-\frac{1}{3}}x$ and
$\tilde{z}=\mathsf{q}^{\frac{1}{3}}z$.
By shifting and rescaling
the coordinates, this curve
is further rewritten as
\begin{align}
0=X^{2}+Z^{4}+2^{\frac{1}{3}}C_{20}Z^{2}+4\sqrt{C_{00}-\frac{C_{11}^{2}}{4}}Z-2^{\frac{2}{3}}\left(C_{10}-\frac{C_{20}^{2}}{4}\right)~,
\label{eq:A1D3curve}
\end{align}
where we defined $X\equiv 2^{\frac{1}{3}} i \big(\tilde{x}+\frac{1}{2}(\tilde{z}^2 -
C_{20})\big)$ and $Z\equiv -2^{-\frac{1}{3}} i \tilde{z}$.
We note that
this coincides with the curve of the $(A_1,D_3)$ theory. In particular,
$C_{20}$ is the relevant coupling, $(C_{10}-C_{20}^2/4)$ is
the VEV of the Coulomb branch operator, and
$\sqrt{C_{00}-C_{11}^2/4}$ is the mass parameter associated
with the $SU(2)$ flavor symmetry.
\item
In the region $|z/x|\sim 1$,
the curve reduces to
\begin{align}
0 = -x^2 z^2 + C_{11}xz -C_{00}~,
\end{align}
which describes a weak coupling limit of the $SU(2)$
superconformal QCD as discussed in \cite{Buican:2014hfa}. In
particular, $C_{11}$ is identified as the mass parameter of a
fundamental hypermultiplet.
\end{itemize}
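The shift $\tilde{x}\to\tilde{x}+C_{11}/(2\tilde{z})$ used in the first sector is easy to verify numerically: after the shift, the sector curve equals $-\tilde{z}^2$ times $\tilde{x}^2$ minus the right-hand side of \eqref{eq:A1D6curve}. The complex values below are arbitrary test points, not physical parameters:

```python
# Arbitrary complex test values for the couplings (not physical parameters).
C05, C04, C03, C02, C01, C00, C11 = (0.3 + 0.1j, -0.2 + 0.4j, 0.5 - 0.3j,
                                     0.1 + 0.2j, -0.4 - 0.1j, 0.7 + 0.6j,
                                     0.9 - 0.5j)

def sector_curve(x, z):
    # First-sector curve of the (A_2, A_5) theory before the shift of x-tilde.
    return (-x**2 * z**2 + z**6 + C11 * x * z + C05 * z**5 + C04 * z**4
            + C03 * z**3 + C02 * z**2 + C01 * z - C00)

def a1d6_rhs(z):
    # Right-hand side of the (A_1, D_6) curve, eq. (A1D6curve).
    return (z**4 + C05 * z**3 + C04 * z**2 + C03 * z + C02
            + C01 / z - (C00 - C11**2 / 4) / z**2)

# After x -> x + C11/(2 z), the sector curve equals -z^2 (x^2 - a1d6_rhs(z)).
x, z = 0.3 + 0.7j, 1.2 - 0.4j
shifted = sector_curve(x + C11 / (2 * z), z)
assert abs(shifted - (-z**2) * (x**2 - a1d6_rhs(z))) < 1e-12
```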
As seen above, in the limit $\mathsf{q}\to 0$, the curve of the
$(A_2,A_5)$ theory splits into the curves of the three
sectors shown in Fig.~\ref{fig:quiver2}.
Moreover, we have identified the physical meaning of each $C_{ij}$ in these
three sectors, which leads to
the following identification of the parameters in
\eqref{eq:Z-A2A5} in terms of those in the SW curve
\eqref{eq:curve3}:\footnote{Here, numerical factors in front of $C_{03}$
and $C_{11}$ are not physical. They are introduced here just to avoid
unimportant numerical coefficients below.}
\begin{align}
d_1=C_{05}~&,\qquad d_2=C_{04}~,\qquad m=-\frac{C_{03}}{6}~,\qquad u_1=C_{02}~,\qquad u_2=C_{01}~, \notag
\\
&d=C_{20}~,\qquad u=C_{10}-\frac{C_{20}^2}{4}~,\qquad M=-\frac{C_{11}}{12}~.
\label{eq:identify}
\end{align}
\subsection{S-duality from the curve}
We now show that the $T$-transformation \eqref{eq:T-trans} that we
identified in Sec.~\ref{subsec:prepotential} corresponds to a symmetry
of the SW curve \eqref{eq:curve}. We first note that the curve \eqref{eq:curve} is invariant under the following transformation:\footnote{
At the same time, we take the change of coordinates in the curve \eqref{eq:curve}
\begin{align}
(x,z) \to (e^{-\frac{2\pi i}{9}}x,e^{\frac{2\pi i}{9}}z)~.
\end{align}
}
\begin{align}
&\mathsf{q}\to e^{\frac{2\pi i}{3}}\mathsf{q}~,\qquad c_{10} \to e^{-\frac{4\pi i}{9}}c_{10}~,\qquad c_{11} \to e^{-\frac{2\pi i}{3}}c_{11}~,\qquad c_{20} \to e^{-\frac{2\pi i}{9}}c_{20}~, \notag \\
& c_{01} \to -e^{\frac{\pi i}{9}}c_{01}~,\qquad c_{02} \to -e^{-\frac{\pi i}{9}}c_{02}~,\qquad c_{03} \to e^{\frac{2\pi i}{3}}c_{03}~, \notag\\
&c_{04} \to e^{\frac{4\pi i}{9}}c_{04}~,\qquad c_{05} \to e^{\frac{2\pi i}{9}}c_{05}~,\qquad c_{00} \to -e^{\frac{\pi i}{3}}c_{00}~.
\label{eq:T-trans2}
\end{align}
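This invariance can be checked directly at a numerical test point: under \eqref{eq:T-trans2} combined with the coordinate rotation in the footnote, every monomial of \eqref{eq:curve} acquires the same overall phase $e^{-2\pi i/3}$, so the zero locus is unchanged. A minimal check (all parameter values are arbitrary):

```python
import cmath

u = cmath.exp(1j * cmath.pi / 9)   # e^{i pi/9}; all phases below are powers of u

def curve(x, z, q, c):
    # Full (A_2, A_5) curve, eq. (eq:curve); c is a dict indexed by (i, j).
    return (x**3 + z**6 - x**2 * z**2 / q - q * x * z**4
            + c[(2, 0)] * x**2 + x * (c[(1, 1)] * z + c[(1, 0)])
            + c[(0, 5)] * z**5 + c[(0, 4)] * z**4 + c[(0, 3)] * z**3
            + c[(0, 2)] * z**2 + c[(0, 1)] * z - c[(0, 0)])

# Arbitrary test point (not physical values).
x, z, q = 0.4 + 0.3j, 0.9 - 0.2j, 0.5 + 0.1j
c = {(2, 0): 0.2 + 0.5j, (1, 1): -0.3 + 0.1j, (1, 0): 0.6 - 0.4j,
     (0, 5): 0.1 + 0.1j, (0, 4): -0.5 + 0.2j, (0, 3): 0.4 + 0.7j,
     (0, 2): 0.3 - 0.6j, (0, 1): -0.2 - 0.1j, (0, 0): 0.8 + 0.3j}

# T-transformation (eq:T-trans2) together with (x, z) -> (u^{-2} x, u^{2} z).
ct = {(2, 0): u**-2 * c[(2, 0)], (1, 1): u**-6 * c[(1, 1)],
      (1, 0): u**-4 * c[(1, 0)], (0, 5): u**2 * c[(0, 5)],
      (0, 4): u**4 * c[(0, 4)], (0, 3): u**6 * c[(0, 3)],
      (0, 2): -u**-1 * c[(0, 2)], (0, 1): -u * c[(0, 1)],
      (0, 0): -u**3 * c[(0, 0)]}

before = curve(x, z, q, c)
after = curve(u**-2 * x, u**2 * z, u**6 * q, ct)
# Every monomial picks up the same overall phase e^{-2 pi i/3} = u^{-6}.
assert abs(after - u**-6 * before) < 1e-12
```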
In the weak coupling limit $\mathsf{q}\to 0$, one can translate the
above transformation into a transformation of parameters in the three sectors. Indeed, \eqref{eq:normalize}
and \eqref{eq:identify} imply that \eqref{eq:T-trans2} is equivalent to
\begin{align}
\mathsf{q}\to e^{\frac{2\pi i}{3}}\mathsf{q}~,\qquad &d_{1}\to
-e^{\frac{2\pi i}{3}}d_{1}~,\qquad d_{2}\to -e^{\frac{\pi i}{3}}d_{2}~,
\qquad m\to -m~,\notag\\
& u_{1}\to e^{\frac{2\pi i}{3}}u_{1}~,\qquad u_{2}\to e^{\frac{\pi i}{3}}u_{2}~,
\end{align}
in the weak coupling limit. Note that this is in perfect agreement with
our $T$-transformation \eqref{eq:T-trans}.\footnote{Recall
that we have set $d_2=0$ in Sec.~\ref{subsec:prepotential}, which is trivially
consistent with $d_2\to -e^{\frac{\pi i}{3}}d_2$.}
This means that our $T$-transformation \eqref{eq:T-trans} corresponds
to a symmetry of the SW curve.
One can show that the above symmetry transformation \eqref{eq:T-trans2}
coincides with an S-duality transformation of the theory.
To see this,
let us turn off $c_{ij}$ except for $c_{00}$.
In this case, the curve is written as
\begin{align}
0=(x-\sqrt{\mathsf{q}}z^2)(x+\sqrt{\mathsf{q}}z^2)\left(x-\frac{z^2}{\mathsf{q}}\right)-c_{00}~.
\label{eq:curve2}
\end{align}
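Expanding the factorized form indeed gives $x^3 - x^2z^2/\mathsf{q} - \mathsf{q}xz^4 + z^6 - c_{00}$, i.e., \eqref{eq:curve} with all other $c_{ij}$ set to zero; a numerical spot-check at an arbitrary point:

```python
import cmath

def factored(x, z, q):
    # (x - sqrt(q) z^2)(x + sqrt(q) z^2)(x - z^2/q), eq. (curve2) with c00 = 0.
    s = cmath.sqrt(q)
    return (x - s * z**2) * (x + s * z**2) * (x - z**2 / q)

def expanded(x, z, q):
    # x^3 - x^2 z^2/q - q x z^4 + z^6: eq. (curve) with all c_ij = 0.
    return x**3 - x**2 * z**2 / q - q * x * z**4 + z**6

x, z, q = 0.7 + 0.2j, 1.1 - 0.3j, 0.6 + 0.4j   # arbitrary test point
assert abs(factored(x, z, q) - expanded(x, z, q)) < 1e-12
```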
By changing the coordinates,\footnote{
In terms of $w=x/z^2$ and $v=z^3$, the curve (\ref{eq:curve2}) is written as
\begin{align}
v^2=\frac{c_{00}}{(w^2-\mathsf{q})\left(w-\frac{1}{\mathsf{q}}\right)}~.
\end{align}
We consider the following
change of variables:
\begin{align}
w\to \frac{w\mathsf{q}^{\frac{1}{2}}\sqrt{1+\sqrt{f}}+\mathsf{q}^{\frac{1}{2}}\sqrt{\frac{1-\sqrt{f}}{1+\sqrt{f}}}}{w\sqrt{1-\sqrt{f}}+1}~, \quad
v\to \frac{\sqrt{1+\sqrt{f}}}{2\mathsf{q}^{\frac{1}{2}}\sqrt{f}}v\left(w\sqrt{1-\sqrt{f}}+1\right)^2~,
\end{align}
where $f$ is defined by $f\equiv 1-\mathsf{q}^3$.
The curve is now
written as
\begin{align}
v^2=\frac{\tilde{u}}{(w^2+1)^2-fw^4}~,
\label{eq:curve5}
\end{align}
where $\tilde{u}$ is defined by $\tilde{u}\equiv\frac{2(1-f)^\frac{1}{3}}{\sqrt{1+\sqrt{f}}}c_{00}$.
The SW 1-form is now written as $\frac{1}{3}w\,dv$ up to exact terms.
In terms of
$\tilde{x}\equiv i\sqrt{\tilde{u}}w$ and
$y\equiv \tilde{u}^{\frac{3}{2}}/v$, the curve \eqref{eq:curve5} is expressed as \eqref{eq:4flavors}.
}
the curve is expressed as
\begin{align}
y^2=(\tilde{x}^2-\tilde{u})^2-f\tilde{x}^4~,
\label{eq:4flavors}
\end{align}
where $f$ is defined by $f\equiv 1-\mathsf{q}^3$ and
the SW 1-form is now written as $\frac{i\tilde{u}}{3}\frac{d\tilde{x}}{y}$ up to exact terms.
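The substitutions in the footnote can be spot-checked numerically: with $\tilde{x}=i\sqrt{\tilde{u}}\,w$ and $y=\tilde{u}^{3/2}/v$, a point on $v^2=\tilde{u}/\big[(w^2+1)^2-fw^4\big]$ lands on \eqref{eq:4flavors}. The test values below are arbitrary:

```python
import cmath

# Arbitrary test values (not physical parameters).
u_t, f, w = 0.8 + 0.3j, 0.5 - 0.2j, 1.3 + 0.4j

v = cmath.sqrt(u_t / ((w**2 + 1)**2 - f * w**4))   # eq. (curve5)
x_t = 1j * cmath.sqrt(u_t) * w                     # x-tilde = i sqrt(u) w
y = u_t**1.5 / v                                   # y = u^{3/2} / v

# y^2 = (x_t^2 - u_t)^2 - f x_t^4, i.e. eq. (4flavors)
assert abs(y**2 - ((x_t**2 - u_t)**2 - f * x_t**4)) < 1e-9
```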
This is a standard expression for the curve of SU(2) conformal QCD. As
discussed in \cite{Seiberg:1994aj, Argyres:1995wt}, there is an S-duality
transformation involving
\begin{align}
\sqrt{1-f} \to -\sqrt{1-f}~, \qquad \tilde{u}\to \tilde{u}~,
\end{align}
which is equivalent in our case to
\begin{align}
\mathsf{q} \to e^{\frac{2\pi i}{3}}\mathsf{q}~,\qquad c_{00}\to
-e^{\frac{\pi i}{3}}c_{00}~.
\end{align}
Since this is precisely the action of \eqref{eq:T-trans2} on
$\mathsf{q}$ and $c_{00}$, we conclude that our T-transformation
\eqref{eq:T-trans2} (or equivalently \eqref{eq:T-trans}) is an extension
of this S-duality transformation to the case of
generic values of $c_{ij}$.
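Tracking the branch $(1-f)^{1/3}=\mathsf{q}$ used in the footnote, one can verify numerically that $\mathsf{q}\to e^{2\pi i/3}\mathsf{q}$, $c_{00}\to -e^{\pi i/3}c_{00}$ flips $\sqrt{1-f}=\mathsf{q}^{3/2}$ while leaving $\tilde{u}$ fixed (sample values are arbitrary):

```python
import cmath

def u_tilde(q, c00):
    # u-tilde = 2 (1-f)^{1/3} c00 / sqrt(1 + sqrt(f)), with the branch (1-f)^{1/3} = q.
    f = 1 - q**3
    return 2 * q * c00 / cmath.sqrt(1 + cmath.sqrt(f))

q, c00 = 0.3, 0.7 + 0.2j
q2 = cmath.exp(2j * cmath.pi / 3) * q        # q -> e^{2 pi i/3} q
c00_2 = -cmath.exp(1j * cmath.pi / 3) * c00  # c00 -> -e^{pi i/3} c00

assert abs(q2**1.5 + q**1.5) < 1e-12                  # sqrt(1-f) = q^{3/2} flips sign
assert abs(u_tilde(q2, c00_2) - u_tilde(q, c00)) < 1e-12   # u-tilde is invariant
```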
\subsection{Relation between mass parameters}
We have shown in \eqref{eq:relation-M} that the prepotential of
$(A_2,A_5)$ and that of $SU(2)$ gauge theory with four flavors are in a
surprising relation. In particular, one of the four mass parameters of
the latter is identified with the mass of the fundamental hypermultiplet
of the former, and the other three masses of the latter are identified
with the mass in the $(A_1,D_6)$ sector. In this sub-section, we
rederive this mass relation from the SW curve.
As seen above, the curve of the $(A_2,A_5)$ theory is identical to that
of $SU(2)$ conformal QCD when $c_{ij}=0$ except for
$c_{00}$. This can be generalized to the case of non-vanishing mass
parameters. When we turn on the two mass parameters $c_{03}$ and $c_{11}$, the curve
\eqref{eq:curve2} of the $(A_2,A_5)$ theory is slightly modified.
In terms of $w\equiv x/z^2$ and $v\equiv z^3$, the modified curve is
written as
\begin{align}
0 &=
v^2\left(w-\sqrt{\mathsf{q}}\right)(w+\sqrt{\mathsf{q}})\left(w-\frac{1}{\mathsf{q}}\right)
+ v\left(c_{03}+c_{11}w\right) - c_{00}~.
\end{align}
Defining $P_3(w)\equiv
(w-\sqrt{\mathsf{q}})(w+\sqrt{\mathsf{q}})(w-1/\mathsf{q})$ and shifting
$v$ as $v\to v-(c_{03}+c_{11}w)/(2P_3(w))$, we can rewrite the above as
\begin{align}
v^2 &=
\frac{c_{00}}{P_3(w)}
+ \frac{\left(c_{03}+c_{11}w\right)^2}{4P_3(w)^2}~,
\label{eq:curve6}
\end{align}
where the SW 1-form is now $\lambda = -\frac{1}{3}vdw$ up to exact
terms.
We see that
\eqref{eq:curve6} is precisely of the same form as the mass-deformed
curve of $SU(2)$ conformal QCD with four flavors
\cite{Gaiotto:2009we}:
\begin{align}
v^2 = \frac{U}{P_3(w)} + \frac{M_4(w)}{P_3(w)^2}~,
\label{eq:curve7}
\end{align}
where $U$ stands for a coordinate of the Coulomb branch, and $M_4(w)$
is a fourth-order polynomial of $w$ and related to the mass
parameters of the theory. Since there exists one constraint on the
coefficients of $M_4(w)$, there are four independent coefficients of
$M_4(w)$. These four independent degrees of freedom are encoded in the residues of the SW 1-form at
$w=\pm \sqrt{\mathsf{q}},\,1/\mathsf{q}$ and $\infty$. These residues
are known to be identified with the following linear combinations of the
mass parameters, $m_1,\cdots,m_4$, of fundamental hypermultiplets:
\begin{align}
m_1\pm m_2~,\qquad m_3\pm m_4~.
\label{eq:residueNf=4}
\end{align}
Comparing \eqref{eq:curve6} and \eqref{eq:curve7}, we see that $(c_{03}
+ c_{11}w)^2$ in \eqref{eq:curve6} is identified with $M_4(w)$ in \eqref{eq:curve7}.
This implies that the four
mass parameters of the latter theory are related to
the two
mass parameters of the former.
To see more concretely the relation between the mass parameters, let us compute the residues
of the SW 1-form of the $(A_2,A_5)$ theory. From \eqref{eq:curve6}, we see that the residues of the 1-form
$\lambda = -\frac{1}{3}vdw$ at $w=\pm \sqrt{\mathsf{q}}, 1/\mathsf{q}$
and $\infty$
are respectively
\begin{align}
-\frac{c_{03}\pm c_{11}\sqrt{\mathsf{q}}}{12
(\mathsf{q}-1/\sqrt{\mathsf{q}})}~,\qquad
-\frac{c_{03} + \frac{c_{11}}{\mathsf{q}}}{6\left(\frac{1}{\mathsf{q}} -
\sqrt{\mathsf{q}}\right)\left(\frac{1}{\mathsf{q}}+\sqrt{\mathsf{q}}\right)}~,\qquad 0~,
\end{align}
which reduce in the weak-coupling limit $\mathsf{q}\to 0$ to
\begin{align}
\frac{m \pm 2M}{2}~,\qquad
2M~,\qquad 0~.
\end{align}
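The residue at $w=+\sqrt{\mathsf{q}}$ quoted above comes from the simple-pole part $v\simeq (c_{03}+c_{11}w)/(2P_3(w))$; a numerical contour integral around the pole reproduces the closed form. The parameter values below are illustrative, not physical:

```python
import cmath

# Illustrative parameter values (not tied to the physical theory).
q, c03, c11 = 0.09, 0.5 + 0.1j, 0.3 - 0.2j
sq = cmath.sqrt(q)                       # pole at w = +sqrt(q)

def lam(w):
    # Simple-pole part of lambda = -(1/3) v dw, with v ~ (c03 + c11 w)/(2 P3).
    P3 = (w - sq) * (w + sq) * (w - 1 / q)
    return -(c03 + c11 * w) / (6 * P3)

# Residue via a numerical contour integral: (1/2 pi i) * closed integral of lam(w) dw.
N, r = 4000, 0.05
res = sum(lam(sq + r * cmath.exp(2j * cmath.pi * k / N))
          * r * cmath.exp(2j * cmath.pi * k / N) for k in range(N)) / N

target = -(c03 + c11 * sq) / (12 * (q - 1 / sq))   # closed form quoted above
assert abs(res - target) < 1e-8
```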
We see that these residues coincide with \eqref{eq:residueNf=4} if we identify
\begin{align}
m_1 = \frac{m}{2}~,\qquad m_2=m_3=m_4=M~.
\label{eq:mass-relation}
\end{align}
This implies that the mass-deformed SW curve of $(A_2,A_5)$ is
identical to that of the $SU(2)$ conformal QCD when the four
mass parameters of the latter are restricted as in
\eqref{eq:mass-relation}. Note here that the restriction
\eqref{eq:mass-relation} of mass parameters is
precisely equivalent to the one observed in the relation \eqref{eq:relation-M} for
the prepotentials of these theories!\footnote{The coincidence of the
numerical factor $1/2$ in front of $m$ is a consequence of our
identification \eqref{eq:identify}, and therefore is not non-trivial. What
is non-trivial here is the coincidence that, both in
\eqref{eq:relation-M} and \eqref{eq:mass-relation}, three of the four mass parameters of
the $SU(2)$ conformal QCD are equal and proportional to $M$, and the
remaining one is proportional to $m$.} This is a very non-trivial
consistency check of \eqref{eq:F-A2A5} and our formula for
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_3)}$ that we developed in Sec.~\ref{sec:U2}.
\section{Conclusion and Discussions}
In this paper, we have considered the $U(2)$-version of the generalized AGT
correspondence for $(A_1,D_N)$ theories for odd $N$, in terms of irregular states of the direct sum
of Virasoro and Heisenberg algebras $Vir\oplus H$. In contrast to the
$(A_1,D_\text{even})$ case, the action of $Vir\oplus H$ on the irregular
state cannot be obtained in a colliding limit of primary operators,
which makes it very difficult to compute the (normalized) inner product of the form
in \eqref{eq:conj}. However, we have shown that, when the relevant couplings and the VEVs of
Coulomb branch operators of the $(A_1,D_N)$ theory are turned off, one can
compute the inner product as in \eqref{eq:AD3}.
Using the formula \eqref{eq:AD3}, we have computed the instanton
partition function of the $(A_2,A_5)$ theory, i.e., the coupled system
of an $SU(2)$ vector multiplet, a fundamental hypermultiplet,
$(A_1,D_6)$ and $(A_1,D_3)$ as described in Fig.~\ref{fig:quiver2}. Our
result implies a surprising relation \eqref{eq:relation-M} between the
prepotential of the $(A_2,A_5)$ theory and that of the $SU(2)$
superconformal QCD. A similar relation was found in
\cite{Kimura:2020krd} for the $(A_3,A_3)$ theory. Using the relation
\eqref{eq:relation-M}, we have read off how the S-duality group acts on
parameters including the UV gauge coupling. We have also checked in
Sec.~\ref{sec:SW} that the relation \eqref{eq:relation-M} is consistent
with the Seiberg-Witten curves of the $(A_2,A_5)$ theory and the $SU(2)$
superconformal QCD.
One can also apply our formula for
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_\text{odd})}$ to other gauged AD
theories. For instance, let us consider the $SU(2)$ gauge theory coupled to three copies of the $(A_1,D_3)$
theory. As in the case of $(A_2,A_5)$, the $SU(2)$ gauge coupling
of this theory is exactly marginal. Using our formula for
$\mathcal{Z}^{(A_1,D_3)}_{Y_1,Y_2}$, one can then compute the prepotential of
this theory, up to terms affected by the $U(1)$-factor, at least when the relevant
coupling and the VEV of Coulomb branch operator of the $(A_1,D_3)$
sectors are turned off. We have done this computation and checked up to
$\mathcal{O}(q^6)$ that the resulting prepotential has no instanton
correction at all.
Note that the same situation occurs for the prepotential of
$\mathcal{N}=4$ super Yang-Mills theories (SYMs). Indeed, a peculiar connection
between the $SU(2)$ gauge theory coupled to three $(A_1,D_3)$ theories and
$\mathcal{N}=4$\, $SU(2)$ SYM has already been pointed out in \cite{Buican:2020moo};
the Schur indices of these two theories are related by a simple change of
variables. It would be very interesting to study this connection further.
There are clearly many future directions. One of the most important
directions is to understand the reason for the peculiar relation
\eqref{eq:relation-M} for the prepotentials.
Another interesting
direction is to study the Nekrasov-Shatashvili limit of the instanton
partition function \cite{Nekrasov:2009rc}, which should be combined with
the recent results on the quantum periods of AD theories
\cite{Ito:2017iba, Ito:2018hwp, Ito:2019twh, Ito:2020lyu}.
The uplift of
our formula \eqref{eq:AD3} to five dimensions would also be an
interesting direction. It would also be interesting to search for a
matrix model description of the instanton partition function of
$(A_2,A_5)$, generalizing the ones studied in
\cite{Nishinaka:2012kn, Grassi:2018spf, Itoyama:2018wbh, Itoyama:2018gnh, Itoyama:2021nmj, Oota:2021qky}.
\section*{Acknowledgements}
We are grateful to Matthew Buican, Kazunobu Maruyoshi, Jaewon Song, Yuji
Sugawara, Yuji Tachikawa and Takahiro Uetoko for discussions.
T.~N. is especially grateful to Matthew Buican for helpful discussions in
many collaborations on related topics.
T.~N.'s research is partially supported by
JSPS KAKENHI Grant Numbers JP18K13547 and JP21H04993. This work was also
partly supported by Osaka Central Advanced Mathematical Institute: MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics JPMXP0619217849.
\section{Introduction}
Almost a century ago, the work of Einstein~\cite{einstein1956investigations} and Perrin~\cite{perrin1916} laid the foundations of modern physics of colloids---liquids containing structures on the scale of roughly 10~nm to 1~$\mu$m that are stable against sedimentation. Since then, colloids with well-defined particle size, shape and interaction lengths have been widely used as model systems in fundamental studies of statistical physics phenomena~\cite{hunter2001foundations}, phase transitions and optical trapping~\cite{Dholakia2011}, to name a few. Propagation of light beams through some common colloidal media such as fog, clouds, smoke, paints, and milk finds increasingly important applications in science and technology, ranging from optical bar-coding for applications in genomics, proteomics and drug discovery~\cite{B200038P}, free-space communication technologies~\cite{Wu:08} and weather control~\cite{Rohwetter2010} to security and defense~\cite{mcaulay2011military}.
Recent progress in the development of artificial materials, or metamaterials, with fundamentally new physical properties opens new opportunities for tailoring the properties of colloids. Metamaterials are built of resonant elements with dimensions much smaller than the wavelength of light, sometimes referred to as meta-atoms, enabling light-matter interactions that are difficult or impossible to realize using naturally available materials. The majority of photonic metamaterials demonstrated to date have been solid-state materials. However, the concept of meta-atoms can be extended further to realize artificial media with novel electromagnetic properties in liquid~\cite{ADMA:ADMA201670049} or gaseous~\cite{Kudyshev2013a} phases at frequencies ranging from microwave to visible. In particular, at optical frequencies, engineered colloidal suspensions offer a promising platform for engineering polarizabilities and realizing large and tunable nonlinearities. Recent studies have shown that the nonlinearity of colloidal suspensions has an exponential character and can be either supercritical, in the case of particles with positive polarizability, or saturable, for negative polarizability particles~\cite{El-Ganainy:07,El-Ganainy:07b,PhysRevLett.111.218302,Kelly:16}.
To date, such engineered colloidal systems have been studied using simple Gaussian beams. However, recent progress in structuring amplitude and phase properties of optical beams opens remarkable new opportunities for manipulating and controlling light-matter interactions in such engineered media. Compared to the conventionally used Gaussian beams, optical vortices, which are characterized by a doughnut-shaped intensity profile and a helical phase front, offer even more degrees of freedom for optical trapping~\cite{95302849a5214942adf2c240ebe9bee6} or imaging applications~\cite{Xie:13}. Optical vortices can be used to trap and circulate colloidal particles, constituting a model test-bed for studying many-body hydrodynamic coupling and instabilities in mesoscopic, many-particle systems with potential applications in lab-on-a-chip systems~\cite{Dholakia2011,Ladavac:04,Lee:06,Reichert2006Hydro-9458}.
In this letter, we experimentally investigate the evolution of optical vortex beams of different topological charges in engineered nano-colloidal suspensions with saturable nonlinearities, in which the particles with negative polarizability are repelled away from the high-intensity region. As the high-intensity vortex beam propagates in such a medium, the modulation instability (MI) phenomenon leads to an exponential growth of weak perturbations. As we predicted in our linear stability analysis and numerical simulations~\cite{Silahli:15}, the perturbation with an orbital angular momentum (OAM) of a particular charge is amplified, leading to the formation of a necklace beam with a well-defined number of peaks. The experimental results are in excellent agreement with the analytical and numerical predictions. Besides contributing to the fundamental science of light-matter interactions in engineered soft-matter media, our work might bring about new possibilities for dynamic optical manipulation and transmission of light through scattering media as well as formation of complex optical patterns and light filamentation~\cite{PhysRevLett.95.193901,Vincotte2006163,Walasik2017} in naturally existing colloids such as fog and clouds.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\textwidth,clip=true,trim= 0 0 0 0]{Fig1.png}
\caption{Propagation of a charge one optical vortex beam in a colloidal solution with negative polarizability. In free space, the helical wave front (right) and the doughnut intensity profile of the beam (left) are schematically shown. Inside the colloidal medium, the input vortex beam transforms into a rotating necklace beam. The repulsion of the particles in the path of the high intensity beam leads to a local nonlinear index change.}
\label{fig:scheme}
\end{figure}
Let us consider an optical vortex beam propagating along the $z$-direction in a nano-colloidal system consisting of dielectric particles with refractive index $n_p$ lower than the refractive index of the background medium $n_b$. If $n_p<n_b$, the colloidal suspension has a negative polarizability and, as schematically illustrated in \cref{fig:scheme}, the nano-particles are driven away from the high intensity region of the beam, resulting in a change of the local refractive index of the suspension, which exhibits a focusing nonlinearity. For large input intensities, the beam becomes unstable due to the well-known phenomenon of MI. This effect reveals itself as the exponential growth of weak perturbations or noise in the presence of an intense pump beam propagating in a nonlinear medium. As a result of the MI, the original vortex beam of a doughnut shape may split into a necklace-like beam with several bright spots, whose number is intrinsically determined by the topological charge of the vortex beam. This process is described by the nonlinear Schr\"odinger equation (NLSE) (see Methods). Following the standard linear stability analysis~\cite{Silahli:15,Vincotte2006163}, we assume that the high-intensity optical beam with a topological vortex charge $m\in\mathbb{Z}$ is accompanied by an azimuthal perturbation:
\begin{equation}
E(\theta,z) = \left[ |E_0| + a_1 e^{-i(M\theta + \mu z)} + a_2^* e^{i(M\theta + \mu^* z)} \right] e^{i(m\theta + \lambda z)}
\end{equation}
where $E_0 = E(r=r_m,z=0)$ is the electric field amplitude of the rotationally invariant steady state solution of \cref{eq:NLSE} (see Methods) with charge $m$, taken at the average radius $r_m$, $a_1$, $a_2$ are the amplitudes of the small perturbations, and $M\in \mathbb{Z}$ is the deviation from $m$ of the perturbation charge. The topological charge of the perturbation is given by $m \pm M$, $\lambda$ is the propagation constant of the steady state solution, and $\mu$ is the propagation constant correction for the perturbation.
The linear stability analysis allows us to calculate the MI gain for the perturbations with the charge $m \pm M$ imposed atop the main beam with the charge $m$ and the corresponding averaged radius $r_m$. The gain is given by~\cite{Silahli:15}:
\begin{equation}
\mathrm{Im}(\mu) = g_m(M) = \frac{M}{2 k_0 n_b r_m} \times \mathrm{Im}\sqrt{ \frac{M^2}{r_m^2} - \frac{|\alpha|}{2 k_B T L^2} |E_0|^2 \exp\left(\frac{\alpha}{2 k_B T} |E_0|^2\right) }
\label{eq:gain}
\end{equation}
where $L^2 = (2 k_0^2 n_b |n_p - n_b| V_p \rho_0)^{-1} $. Here, the particle polarizability is denoted by $\alpha$, and $k_B T$ is the thermal energy, with the Boltzmann constant $k_B$ and at temperature $T$. $V_p$ is the volume of a particle, $\rho_0$ is the unperturbed particle concentration, $k_0=\frac{2\pi}{\lambda_0}$ is the wave number, and $\lambda_0$ is the free-space wavelength.
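Scanning \eqref{eq:gain} over integer $M$ is straightforward. In the sketch below, the whole intensity-dependent bracket is lumped into a single constant $A$ (units m$^{-2}$) instead of being evaluated from $\alpha$ and $|E_0|^2$, so the numbers are illustrative rather than the experimental ones; the fastest-growing azimuthal index sits near $r_m\sqrt{A/2}$:

```python
import cmath
import math

lam0 = 532e-9                    # free-space wavelength (m)
k0 = 2 * math.pi / lam0
n_b = 1.44
r_m = 10e-6                      # average ring radius (m), illustrative
A = 1e12                         # lumped bracket (|alpha|/2 k_B T L^2)|E_0|^2 e^{...}, m^-2

def gain(M):
    # g_m(M) = M / (2 k0 n_b r_m) * Im sqrt(M^2 / r_m^2 - A), eq. (gain)
    return M / (2 * k0 * n_b * r_m) * cmath.sqrt(M**2 / r_m**2 - A).imag

# Only indices with M < r_m sqrt(A) are unstable; find the fastest-growing one.
M_star = max(range(1, 40), key=gain)
```

For these illustrative values $r_m\sqrt{A}=10$, so the gain vanishes for $M\geq 10$ and peaks near $M\approx r_m\sqrt{A/2}\approx 7$.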
\begin{figure}[!b]
\centering
\includegraphics[width=\textwidth,clip=true,trim= 90 0 90 0]{Fig2.pdf}
\caption{Azimuthal modulation instability gain. (a) Analytically computed instability gain $g_m(M)$ as a function of the perturbation azimuthal index deviation $M$, for negative polarizability particle-based systems for different topological charges $m$ of the initial steady-state vortex solution. (b)--(c) Inverse of the beam breakup distance recorded in numerical simulations of seeded MI. Dashed lines show analytical curves with rescaled magnitude that help guide the eye.}
\label{fig:analytical}
\end{figure}
\Cref{eq:gain} is used in the following to predict the MI gain for vortices propagating in the nano-colloidal media. We study the propagation of light with the free-space wavelength $\lambda_0 = 532$~nm in a negative-polarizability suspension made of low-refractive-index polytetrafluoroethylene (PTFE) particles ($n_p=1.35$) dispersed in glycerin water ($n_b = 1.44$) with a volume filling fraction of $\rho_0 = 0.7\%$. The radius of the particles is assumed to be $150$~nm and the experiments were performed at room temperature. For this set of parameters, \cref{fig:analytical}(a) shows the gain curves $g_m(M)$ as a function of the perturbation azimuthal index deviation $M$ for different values of the vortex charge $m$. The analytical predictions are only valid when the perturbation intensity is significantly lower than that of the main beam. Above this limit, the dynamics of MI has to be studied using numerical simulations of a three-dimensional NLSE (see Methods).
In order to confirm the analytical prediction for the number of maxima and the shape of the gain curves, we have numerically solved the NLSE in the absence of scattering losses ($\sigma=0$) using the split-step Fourier method~\cite{Feit:78,doi:10.1063/1.328442}. First, based on the theoretical predictions, we have found the parameters of the stationary vortex solitons with charges $m=2$ and $m=4$. We have numerically confirmed that for the given parameters of the stable beam (power and average radius $r_m$) the vortex propagates in a stable manner, provided that the medium is lossless. Addition of random noise on top of the stable solution resulted in the MI-induced beam breakup into a necklace beam with the number of maxima predicted by the analytical results ($N=4$ for the main vortex charge $m=2$, and $N=7$ for the main vortex charge $m=4$).
Simulations of the MI allow us to determine the rate at which the pattern with $N$ maxima grows. In order to seed the growth of a pattern with $N$ maxima, we add only the perturbation with charge $m \pm M = N$. The distance $z_0$ at which the pattern with $N$ maxima emerges is inversely proportional to the modulation gain $g_m(M)$. The distance $z_0$ is read from the light intensity maps $I(r, \theta, z)$, and its choice is somewhat arbitrary. We have chosen $z_0$ to be the distance at which the contrast between the $N$ maxima and the minima in between them is the highest. The values of $1/z_0$ for main vortex charges $m=2$ and $m=4$ are shown in \cref{fig:analytical}(b), (c). We see excellent agreement with the rescaled analytical curves for the MI gain.
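For concreteness, a minimal sketch of a split-step Fourier propagator of the type used here, written for a dimensionless NLSE $i\partial_z E+\nabla_\perp^2 E+(1-e^{-|E|^2})E=0$ with a saturable focusing nonlinearity; the grid, step size and noise seeding are illustrative and are not the parameters of our simulations:

```python
import numpy as np

# Grid, propagation step and number of steps (illustrative values).
N, L, dz, steps = 256, 40.0, 0.01, 200
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k)
half_linear = np.exp(-1j * (KX**2 + KY**2) * dz / 2)   # e^{-i k^2 dz/2}

# Charge-2 vortex with a weak multiplicative noise seed.
r, theta = np.hypot(X, Y), np.arctan2(Y, X)
m = 2
E = (r / 2)**abs(m) * np.exp(-r**2 / 8) * np.exp(1j * m * theta)
E = E * (1 + 0.01 * np.random.default_rng(0).standard_normal((N, N)))
P0 = np.sum(np.abs(E)**2)

def step(E):
    # Symmetric split step: both sub-steps are pure phases, so power is conserved.
    E = np.fft.ifft2(half_linear * np.fft.fft2(E))
    E = E * np.exp(1j * dz * (1 - np.exp(-np.abs(E)**2)))  # saturable focusing
    return np.fft.ifft2(half_linear * np.fft.fft2(E))

for _ in range(steps):
    E = step(E)
```

Since each sub-step is unitary, the total power is conserved to machine precision, which makes a convenient sanity check of the implementation.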
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth,clip=true,trim= 0 0 0 0]{Fig3.png}
\caption{Experimental setup used to study (seeded) modulation instability of vortex beams in colloidal media. Collimated beam from Verdi V6 laser ($\lambda_0 =532$~nm) is initially split into two beams using beam splitters with reflectivity varying in the range from $0.6\%$ to $8\%$ of the total power. The high intensity beam is transmitted through a spiral phase plate (SPP) to generate the main vortex beam with lower charge. In the seeded configuration, the low intensity beam is transmitted through a SPP with a higher charge to generate the perturbation beam. The beams are then recombined at the second beam splitter and focused onto the cuvette by a lens. The longitudinal beam profile inside the cuvette and the transverse beam profile behind the cuvette are recorded by a camera and shown in the insets.}
\label{fig:setup}
\end{figure}
In our experiments, the beam from a 532~nm, 6~W, continuous wave Coherent Verdi 6 laser was first converted into an optical vortex beam using a spiral phase plate and then focused inside a 10-mm-long cuvette filled with the colloidal suspension consisting of PTFE particles [Laurel, Ultraflon AD-10] dispersed in glycerin/water solution (3:1, v/v), as shown in \cref{fig:setup}. The filling ratio of the PTFE is 0.7\%. Since the refractive index of the PTFE particles is lower than that of glycerin water~\cite{Silahli:15}, the particles have negative polarizability. First, we observed MI growing from noise, i.e. without a well-defined perturbation. \Cref{fig:res}(a)--(c) shows different optical vortices of charge 1, 2, and 4, generated using the spiral phase plates.
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\textwidth,clip=true,trim= 0 0 0 0]{Fig4.png}
\caption{Experimental results showing the formation of the necklace beam from an initial vortex beam propagating in a nonlinear colloidal suspension with negative polarizability particles. (a)--(c) Intensity profiles of the incident vortex beams of charges 1, 2, and 4. (d)--(f) Interference patterns corresponding to vortex beams with topological charges in (a)--(c), respectively. (g)--(i) Intensity distributions of the resulting necklace beams after the propagation in the colloidal medium corresponding to the incident beams (a)--(c), respectively.}
\label{fig:res}
\end{figure}
Interference experiments were performed to confirm the topological charges of the generated vortex beams, as shown in \cref{fig:res}(d)--(f). Due to the MI, the original doughnut-shaped beam splits into several bright spots after passing through the colloidal suspension, depending on its initial charge. Here, we performed two series of experiments, with and without the seed, as shown in \cref{fig:setup}. For the incident beam with $m=1$, which was directly focused into the cuvette without adding any induced perturbation, the beam after passing through the colloidal solution splits into 2 bright spots, as shown in \cref{fig:res}(g). In the case of seeded (or induced) MI, we investigated the propagation of the vortex beams of $m=2$ and $m=4$. First, we explored the case of a focused beam with $m=2$ in the presence of weak seeded perturbations. Perturbations of charge 4 and charge 8 were added separately and then together to the main beam. The intensity ratio between the perturbations and the main beam was adjusted from 0 to 3\%. By carefully testing all these cases with different perturbation charges and intensities, we find that in such a competition between the perturbations originating from the noise and those seeded by the low intensity beam, the final beam pattern on the screen always shows 3 maxima. This result is qualitatively consistent with our analytical predictions, revealing that only the perturbation with the charge close to the maximum of the gain curve is amplified, as it grows faster than the other perturbations, even if they are seeded. Second, a similar test was performed for the vortex with charge 4 in the presence of the perturbation with charge 8. The final pattern observed on the screen shows a necklace beam with 7 maxima, which also corresponds to the maximum of the gain curve shown in \cref{fig:analytical}(c).
In summary, we have experimentally and numerically studied both seeded and unseeded modulation instability in colloidal suspensions of negative-polarizability nano-particles. The experimental results are in good agreement with the numerical predictions. In particular, in the case of seeded modulation instability, the observed necklace beam patterns were identical to the patterns obtained without the seed. This shows that the perturbation with the largest growth rate predicted by the analytical and numerical calculations prevails over all the other perturbations introduced to the beam, either through the noise or as a seeded perturbation. These results are likely to enable a new platform for fundamental studies of nonlinear optical phenomena in engineered media, as well as for imaging and light manipulation in scattering media, such as biological and chemical systems.
\section{Methods}
The nonlinear Schr\"odinger equation governing the evolution of the slowly varying electric field envelope $E$ can be written as~\cite{El-Ganainy:07,Silahli:15}:
\begin{equation}
i \frac{\partial E}{\partial z} + \frac{1}{2 k_0 n_b} \nabla^2_{\perp} E + k_0 (n_b - n_p) V_p \rho_0 e^{\frac{\alpha}{4 k_B T}|E|^2} E + \frac{i}{2} \sigma \rho_0 e^{\frac{\alpha}{4 k_B T}|E|^2} E = 0
\label{eq:NLSE}
\end{equation}
where $\nabla^2_{\perp} = \frac{1}{r}\frac{\partial }{\partial r}\left( r \frac{\partial }{\partial r} \right) +\frac{1}{r^2}\frac{\partial^2 }{\partial \theta^2}$ is the transverse Laplacian. The particle polarizability is denoted by $\alpha$, and $k_B T$ is the thermal energy, with the Boltzmann constant $k_B$ and at temperature $T$, $V_p$ is the volume of a particle, $\rho_0$ is the unperturbed particle concentration, $\sigma$ is the scattering cross-section, $k_0=\frac{2\pi}{\lambda_0}$ is the wave number, and $\lambda_0$ is the free-space wavelength. This equation was analyzed in detail using the linear stability analysis~\cite{Silahli:15} and solved numerically using the split-step Fourier algorithm.
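The split-step Fourier algorithm used to solve \cref{eq:NLSE} can be sketched as follows. This is an illustrative reduction to one transverse dimension, not the code used for the simulations in this work; the function name, the 1D geometry, and all parameter values are our own assumptions, while the exponential nonlinear and scattering-loss factors mirror the structure of \cref{eq:NLSE}.

```python
import numpy as np

def split_step_nlse(E0, dz, nz, dx, k, dn_coeff, g, loss=0.0):
    """Propagate a 1D transverse field E(x) over nz steps of size dz with
    the (symmetrized) split-step Fourier method for a reduced equation:
        i dE/dz + (1/2k) d2E/dx2 + k*dn_coeff*exp(g*|E|^2)*E
                + (i/2)*loss*exp(g*|E|^2)*E = 0.
    The exponential factor models the intensity-dependent particle
    concentration of the colloidal nonlinearity."""
    kx = 2.0 * np.pi * np.fft.fftfreq(E0.size, d=dx)
    # Half-step diffraction operator applied in Fourier space
    half_linear = np.exp(-1j * kx**2 * dz / (4.0 * k))
    E = E0.astype(complex)
    for _ in range(nz):
        E = np.fft.ifft(half_linear * np.fft.fft(E))             # D/2
        nl = np.exp(g * np.abs(E)**2)                            # concentration factor
        E *= np.exp((1j * k * dn_coeff - 0.5 * loss) * nl * dz)  # nonlinearity + loss
        E = np.fft.ifft(half_linear * np.fft.fft(E))             # D/2
    return E
```

With `loss = 0` both sub-steps are pure phase multiplications, so the total power is conserved; this is a useful sanity check of any split-step implementation.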
\begin{acknowledgement}
Army Research Office [W911NF-11-1-0297, W911NF-15-1-0146].
The authors thank Professor D. Christodoulides from the University of Central Florida for fruitful discussions.
\end{acknowledgement}
\providecommand{\latin}[1]{#1}
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{26}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Einstein(1926)]{einstein1956investigations}
Einstein,~A. \emph{Investigations on the Theory of the Brownian Movement};
Dover, New York, 1926\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Perrin(1916)]{perrin1916}
Perrin,~J. \emph{Les Atomes}; Constable, London, 1916\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hunter(2001)]{hunter2001foundations}
Hunter,~R.~J. \emph{Foundations of colloid science}; Oxford University Press,
2001\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dholakia and {\v{C}}i{\v{z}}m{\'a}r(2011)Dholakia, and
{\v{C}}i{\v{z}}m{\'a}r]{Dholakia2011}
Dholakia,~K.; {\v{C}}i{\v{z}}m{\'a}r,~T. \emph{Nat. Photon.} \textbf{2011},
\emph{5}, 335--342\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Battersby \latin{et~al.}(2002)Battersby, Lawrie, Johnston, and
Trau]{B200038P}
Battersby,~B.~J.; Lawrie,~G.~A.; Johnston,~A. P.~R.; Trau,~M. \emph{Chem.
Commun.} \textbf{2002}, 1435--1441\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wu \latin{et~al.}(2008)Wu, Hajjarian, and Kavehrad]{Wu:08}
Wu,~B.; Hajjarian,~Z.; Kavehrad,~M. \emph{Appl. Opt.} \textbf{2008}, \emph{47},
3168--3176\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rohwetter \latin{et~al.}(2010)Rohwetter, Kasparian, Stelmaszczyk, Hao,
Henin, Lascoux, Nakaema, Petit, Quei{\ss}er, Salame, Salmon, W\"oste, and
Wolf]{Rohwetter2010}
Rohwetter,~P.; Kasparian,~J.; Stelmaszczyk,~K.; Hao,~Z.; Henin,~S.;
Lascoux,~N.; Nakaema,~W.~M.; Petit,~Y.; Quei{\ss}er,~M.; Salame,~R.;
Salmon,~E.; W\"oste,~L.; Wolf,~J.-P. \emph{Nat. Photon.} \textbf{2010},
\emph{4}, 451--456\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[McAulay(2011)]{mcaulay2011military}
McAulay,~A.~D. \emph{Military laser technology for defense: Technology for
revolutionizing 21st century warfare}; John Wiley \& Sons, New Jersey,
2011\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2016)Liu, Fan, Padilla, Powell, Zhang, and
Shadrivov]{ADMA:ADMA201670049}
Liu,~M.; Fan,~K.; Padilla,~W.; Powell,~D.~A.; Zhang,~X.; Shadrivov,~I.~V.
\emph{Adv. Mater.} \textbf{2016}, \emph{28}, 1525--1525\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kudyshev \latin{et~al.}(2013)Kudyshev, Richardson, and
Litchinitser]{Kudyshev2013a}
Kudyshev,~Z.~A.; Richardson,~M.~C.; Litchinitser,~N.~M. \emph{Nat. Commun.}
\textbf{2013}, \emph{4}, 2557\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[El-Ganainy \latin{et~al.}(2007)El-Ganainy, Christodoulides,
Musslimani, Rotschild, and Segev]{El-Ganainy:07}
El-Ganainy,~R.; Christodoulides,~D.~N.; Musslimani,~Z.~H.; Rotschild,~C.;
Segev,~M. \emph{Opt. Lett.} \textbf{2007}, \emph{32}, 3185--3187\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[El-Ganainy \latin{et~al.}(2007)El-Ganainy, Christodoulides, Rotschild,
and Segev]{El-Ganainy:07b}
El-Ganainy,~R.; Christodoulides,~D.~N.; Rotschild,~C.; Segev,~M. \emph{Opt.
Express} \textbf{2007}, \emph{15}, 10207--10218\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Man \latin{et~al.}(2013)Man, Fardad, Zhang, Prakash, Lau, Zhang,
Heinrich, Christodoulides, and Chen]{PhysRevLett.111.218302}
Man,~W.; Fardad,~S.; Zhang,~Z.; Prakash,~J.; Lau,~M.; Zhang,~P.; Heinrich,~M.;
Christodoulides,~D.~N.; Chen,~Z. \emph{Phys. Rev. Lett.} \textbf{2013},
\emph{111}, 218302\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kelly \latin{et~al.}(2016)Kelly, Ren, Samadi, Bezryadina,
Christodoulides, and Chen]{Kelly:16}
Kelly,~T.~S.; Ren,~Y.-X.; Samadi,~A.; Bezryadina,~A.; Christodoulides,~D.;
Chen,~Z. \emph{Opt. Lett.} \textbf{2016}, \emph{41}, 3817--3820\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rubinsztein-Dunlop \latin{et~al.}(2017)Rubinsztein-Dunlop, Forbes,
Berry, Dennis, Andrews, Mansuripur, Denz, Alpmann, Banzer, Bauer, Karimi,
Marrucci, Padgett, Ritsch-Marte, Litchinitser, Bigelow, Rosales-Guzmán,
Belmonte, Torres, Neely, Baker, Gordon, Stilgoe, Romero, White, Fickler,
Willner, Xie, McMorran, and Weiner]{95302849a5214942adf2c240ebe9bee6}
Rubinsztein-Dunlop,~H. \latin{et~al.} \emph{J. Opt.} \textbf{2017}, \emph{19},
013011\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xie \latin{et~al.}(2013)Xie, Liu, Jin, Santangelo, and Xi]{Xie:13}
Xie,~H.; Liu,~Y.; Jin,~D.; Santangelo,~P.~J.; Xi,~P. \emph{J. Opt. Soc. Am. A}
\textbf{2013}, \emph{30}, 1640--1645\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ladavac and Grier(2004)Ladavac, and Grier]{Ladavac:04}
Ladavac,~K.; Grier,~D.~G. \emph{Opt. Express} \textbf{2004}, \emph{12},
1144--1149\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lee \latin{et~al.}(2006)Lee, Garc\'{e}s-Ch\"{a}vez, and
Dholakia]{Lee:06}
Lee,~W.~M.; Garc\'{e}s-Ch\"{a}vez,~V.; Dholakia,~K. \emph{Opt. Express}
\textbf{2006}, \emph{14}, 7436--7446\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Reichert(2006)]{Reichert2006Hydro-9458}
Reichert,~M. Hydrodynamic Interactions in Colloidal and Biological Systems.
Ph.D.\ thesis, Universit\"at Konstanz, Konstanz, 2006\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Silahli \latin{et~al.}(2015)Silahli, Walasik, and
Litchinitser]{Silahli:15}
Silahli,~S.~Z.; Walasik,~W.; Litchinitser,~N.~M. \emph{Opt. Lett.}
\textbf{2015}, \emph{40}, 5714--5717\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vin\ifmmode~\mbox{\c{c}}\else \c{c}\fi{}otte and
Berg\'e(2005)Vin\ifmmode~\mbox{\c{c}}\else \c{c}\fi{}otte, and
Berg\'e]{PhysRevLett.95.193901}
Vin\ifmmode~\mbox{\c{c}}\else \c{c}\fi{}otte,~A.; Berg\'e,~L. \emph{Phys. Rev.
Lett.} \textbf{2005}, \emph{95}, 193901\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vin\ifmmode~\mbox{\c{c}}\else \c{c}\fi{}otte and
Berg\'e(2006)Vin\ifmmode~\mbox{\c{c}}\else \c{c}\fi{}otte, and
Berg\'e]{Vincotte2006163}
Vin\ifmmode~\mbox{\c{c}}\else \c{c}\fi{}otte,~A.; Berg\'e,~L. \emph{Physica D:
Nonlinear Phenomena} \textbf{2006}, \emph{223}, 163 -- 173\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Walasik \latin{et~al.}(2017)Walasik, Silahli, and
Litchinitser]{Walasik2017}
Walasik,~W.; Silahli,~S.~Z.; Litchinitser,~N.~M. \emph{Sci. Rep.}
\textbf{2017}, \emph{7}, 11709\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Feit and Fleck(1978)Feit, and Fleck]{Feit:78}
Feit,~M.~D.; Fleck,~J.~A. \emph{Appl. Opt.} \textbf{1978}, \emph{17},
3990--3998\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lax \latin{et~al.}(1981)Lax, Batteh, and
Agrawal]{doi:10.1063/1.328442}
Lax,~M.; Batteh,~J.~H.; Agrawal,~G.~P. \emph{J. Appl. Phys.} \textbf{1981},
\emph{52}, 109--125\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction}
\par
Low mass X-ray binaries display remarkable physical phenomena such as accretion, winds, plasma heating, time variability and magnetic fields. An accretion disk of X-ray-irradiated plasma encircles the compact object at the center of the system. Some of these systems present a thickened accretion disk with material that extends above the disk midplane, the so-called accretion disk corona (ADC; hereafter extended disk atmosphere). The nature and exact geometry of this extended atmosphere are not yet fully understood.\par
To this end, different theoretical models predict the existence of a disk atmosphere. \citet{holt1982} studied the accretion disk corona and suggested that it is probably generated by evaporation of hot material from the surface of the accretion disk. Later on, \citet{miller2000} showed with magnetohydrodynamic (MHD) models that an initially weak magnetic field in the core of the disk can be amplified by MHD turbulence driven by magnetorotational instabilities. This field rises out of the disk through buoyancy, creating a magnetised heated corona above the disk at 2-5 scale heights, with a temperature of $\sim 10^{8}$ K. \citet{garate2002} modelled both the accretion disk atmosphere and the corona, photoionized by a central X-ray source. They found that the vertical scale height of the accretion disk atmosphere is enlarged by illumination heating; in this case the atmosphere is orders of magnitude less dense than the disk itself. \par
Observational studies have also revealed the existence of an extended atmosphere for a number of sources. Using high-resolution X-ray spectroscopy it is possible to identify emission and absorption lines within the disk plasma and around it. \citet{cottam2001_1} studied the XMM-$\textit{Newton}$ RGS spectrum of EXO 0748-676 and found emission and absorption features ($\ion{O}{viii}\xspace$, $\ion{O}{vii}\xspace$, $\ion{Ne}{ix}\xspace$, $\ion{N}{vii}\xspace$) from an extended, oblate structure above the accretion disk.
Further, \citet{garate2003} examined the $\textit{Chandra}$ spectrum of the same source, and found photoionized plasma located above the disk midplane. Also insights have been gained from the spectrum of other accretion disk corona (ADC) sources. 4U 1822-37 \citep{cottam2001_2}, 2S 0921-63 \citep{kallman2003} and Hercules X-1 \citep{garate2005} have been studied using $\textit{Chandra}$ observations. For all these sources spectral signatures of an extended disk atmosphere were found. Noticeably, all the systems above are at high inclination angles. \par
Furthermore, the light curves of low mass X-ray binaries can reveal events such as dips, bursts and eclipses. Dips and eclipses appear in the light curve as abrupt drops in the count rate (see Fig. \ref{fig:lc}). Eclipses are caused when the companion star passes in front of the compact object and hides the emission of the disk, while dips can be created by over-densities at the impact point of the accretion stream on the disk \citep{garate2002}. Only a group of low mass X-ray binaries at high inclination (\textit{i} $\sim 70^{\circ}-90^{\circ}$) present dips and eclipses (\citealt{king1987}, \citealt{trigo2006}). Among the 13 listed high-inclination LMXBs, only a few show eclipses, e.g. EXO 0748-676 \citep{parmar1986} and MXB 1659-298 \citep{sidoli2001}. \par
EXO 0748-676 is a LMXB discovered in 1985 by the EXOSAT satellite. This source was used as a calibration source of the XMM-$\textit{Newton}$ satellite and it has been observed and studied extensively by different satellites (\citealt{parmar1991}, \citealt{hertz1995}, \citealt{hertz1997}, \citealt{thomas1997}, \citealt{church1998}, \citealt{garate2003}, \citealt{sidoli2005}, \citealt{wolff2005}, \citealt{peet2017}). \par
\citet{parmar1986} discovered periodic intensity dips from this source. The eclipses recur with a period of 3.82 hours and last 8.3 minutes. The mass of the companion is reported to be $0.08M_{\odot}<M_{c}<0.45M_{\odot}$ and the inclination of the system is $75^{\circ}<i<83^{\circ}$ \citep{parmar1986}. The radial velocity is 20 $\ensuremath{\mathrm{km\ s^{-1}}}\xspace$ \citep{duflot1995}. The distance of the source was derived to be $5.8\pm0.9$ kpc or $7.7\pm0.9$ kpc, depending on whether the X-ray bursts of the source are hydrogen dominated or helium dominated, respectively \citep{wolff2005}. Here, we use the average value of 6.8 kpc.\par
In this work, we have analysed the XMM-$\textit{Newton}$ data of EXO 0748-676 during the eclipses. This is the first time that the spectrum from only the extended atmosphere of the disk is studied for this source. In this way we can probe the emission that comes from the upper disk atmosphere and constrain its geometry. In previous studies, the density of the plasma has been derived by confronting line ratios of the $\ion{O}{vii}\xspace$ triplet with theoretical calculations \citep[see][]{porquet2000}. In our study we constrain the density directly through photoionization modelling in SPEX, as described in Section \ref{modeling}. This paper is organised as follows. In Section \ref{datared} we present the data and the XMM-$\textit{Newton}$ data reduction, and we explain the methodology used to obtain the RGS spectrum of the eclipses. In Section \ref{modeling} we present the Gaussian line fitting and the photoionization modelling of the eclipsed spectrum. Finally, in Section \ref{discussion} we discuss the results, proposing a geometry for the X-ray emitting gas.
\section{XMM-\textit{Newton} data reduction}
\label{datared}
\par
We use data from the EPIC-pn \citep{struder2001} and EPIC-MOS \citep{turner2001} cameras and the RGS spectrometer \citep{herder2001}, taken from the XMM-$\textit{Newton}$ public archive\footnote{http://nxsa.esac.esa.int/nxsa-web/}. We reduce the data using the Science Analysis Software, SAS (ver. 16). First, we filter the EPIC event lists for flaring particle background, excluding the observations with a flaring particle background exceeding 0.4 counts/s for pn and 0.35 counts/s for MOS.
\par
To extract the light curve we use a circular aperture (R $\sim$ $30^{''}$) around the source. For the background we use a circular aperture of the same size, away from the source but within the same chip. We select energies between 5 and 10 keV, which allows us to clearly identify the eclipse events; at softer energies the contribution of the dipping events to the light curve becomes significant, possibly confusing the selection. The light curve of the source is corrected for various effects on the detection efficiency, such as vignetting, quantum efficiency and bad pixels, using the SAS task \textit{epiclccorr}. An example of a corrected light curve is shown in Fig. \ref{fig:lc}. In the upper panel we show the light curve of the eclipses and bursts with a 5 < E (keV) < 10 energy selection. In the lower panel we also present the dipping events, obtained in the energy range 0.3-5 keV, which will be discussed in Section \ref{atm}. We call the periods outside these events ``persistent emission''.
\par
We use in total 11 observations obtained in 3 different years, presented in Table \ref{tab:data}. Depending on the availability of the data and the quality of the light curve, we use the EPIC-pn or EPIC-MOS data to obtain the Good Time Intervals (GTIs) for the emission during the eclipses. We use these GTIs to extract the RGS spectrum at the times of the eclipses. We choose the best observations according to the quality of the light curves: the count rate (counts/s) of the persistent emission should be at least 2 times higher than that during the eclipses. After that, we process the RGS data using the SAS task \textit{rgsproc}. We also apply a geometrical binning by a factor of 3, which provides a bin size of about 1/3 of the RGS resolution.
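The eclipse GTI selection described above can be illustrated with a toy threshold filter. This Python sketch is our own simplification, not the SAS-based procedure actually used: it flags intervals where the count rate drops well below the persistent (median) level, assuming evenly sampled time bins.

```python
import numpy as np

def eclipse_gtis(time, rate, frac=0.5, min_bins=3):
    """Return (start, stop) intervals where the count rate drops below
    `frac` times the median (persistent) level. A toy stand-in for the
    SAS GTI selection; assumes evenly sampled, contiguous time bins."""
    dt = time[1] - time[0]
    low = rate < frac * np.median(rate)
    gtis, start = [], None
    for t, flag in zip(time, low):
        if flag and start is None:
            start = t                       # eclipse ingress
        elif not flag and start is not None:
            gtis.append((start, t))         # eclipse egress
            start = None
    if start is not None:                   # eclipse runs to the end
        gtis.append((start, time[-1] + dt))
    # discard spurious short intervals (noise fluctuations)
    return [g for g in gtis if g[1] - g[0] >= min_bins * dt]
```

In practice the hard band (5-10 keV) is used for this selection, precisely because the dipping events would contaminate a soft-band threshold.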
\par
The eclipsed RGS spectra from different years do not present significant variability within the errors. Therefore, we combine the observations using the SAS task \textit{rgscombine} to obtain a better signal-to-noise ratio. We also use a single EPIC-pn observation to create the spectrum of the pn persistent emission and to obtain the continuum parameters. This is useful in order to obtain the correct Spectral Energy Distribution (SED) of the illuminating source which will be discussed in Section \ref{modeling}.
\begin{table*}[htbp]
\begin{minipage}[t]{\hsize}
\setlength{\extrarowheight}{3pt}
\centering
\caption{Log of XMM-$\textit{Newton}$ observations used in this paper.}
\newcommand{\head}[1]{\textnormal{\textbf{#1}}}
\begin{tabular}{cccc}
\hline
\hline
obs. ID &instruments& obs. date (year)& observing time (sec)\\
\hline
0123500101 & mos/rgs & 2000 & 62068 \\
\hline
0134561101 & mos/rgs & 2001 & 8312 \\
0134562101 & mos/rgs & 2001 & 8678 \\
0134562401 & mos/rgs & 2001 & 6875 \\
0134562501 & mos/rgs & 2001 & 6882 \\
\hline
0160760101 & pn/rgs & 2003 & 99246 \\
0160760201 & pn/rgs & 2003 & 98950 \\
0160760301 & pn/rgs & 2003 & 108703 \\
0160760401 & pn/rgs & 2003 & 83951 \\
0160760601 & pn/rgs & 2003 & 55351 \\
0160760801 & pn/rgs & 2003 & 69750 \\
\hline
\label{tab:data}
\end{tabular}
\end{minipage}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{fig1.pdf}
\caption{$\textit{Upper panel)}$ Light curve of a single observation showing the eclipses and bursts in the hard band (5-10 keV). $\textit{Lower panel)}$ Light curve for the same observation including dips, eclipses and bursts in the soft band (0.3-5 keV). (Obs. ID: 0160760201) }
\label{fig:lc}
\end{figure*}
\section{Modelling the eclipsed spectrum}
\label{modeling}
We fit the combined RGS spectrum using the SPEX fitting package (\citealt{kaastra1996}, ver. 3.04). We perform a time averaged analysis of the spectrum. In Figure \ref{fig:gaus} we present the eclipsed spectrum of the source obtained as described in Section \ref{datared}. The observed continuum is extremely low due to the fact that the persistent continuum from the disk and the neutron star is blocked during the eclipse. From the spectrum we can clearly see interesting features such as the emission lines in $\ion{O}{viii}\xspace$ (18.97 \AA) and the $\ion{O}{vii}\xspace$ triplet (21.6-22.1 {\AA}). For a detailed study of the spectrum we perform a Gaussian line detection and then we model the eclipsed spectrum of the source using a global modelling. \par
\subsection{Line detection}
First, we apply an RGS line detection in the 7-37 {\AA} wavelength range following the method described in \citet{pinto2016}. This helps us to identify fainter lines in our low-flux spectrum. We scan the spectrum using Gaussians (component $\textit{gaus}$ in SPEX) in order to identify the lines with the highest significance, with a scanning step of 0.025 {\AA} and the width of the Gaussian line fixed at 0.005 {\AA}. In Figure \ref{fig:scan} we present the result of the line scanning, where $\rm \Delta C_{stat}$ is multiplied by the sign of the Gaussian normalization. The red line indicates the 3 $\sigma$ detection level. From the scan we clearly see the lines at 18.97 {\AA} and at 21.6-22.1 {\AA}, which correspond to $\ion{O}{viii}\xspace$ and the $\ion{O}{vii}\xspace$ triplet, respectively, and have the highest significance. At a lower significance level (but still $> 3\sigma$) we also detect the $\ion{Ne}{ix}\xspace$ line at 13.4 {\AA} and $\ion{N}{vii}\xspace$ at 24.7 {\AA}.
\subsection{Gaussian line fitting}
We now apply a Gaussian line fitting to parametrise our emission lines in a model-independent way and determine the kinematics of the gas. We begin with the simple approach of a power-law continuum and Gaussian line profiles, from which we obtain the strengths of the lines and their velocities. We use a power-law model for the continuum and the $\textit{hot}$ model in SPEX to take into account the Galactic absorption, for which we adopt the Galactic column density $3.5 \cdot 10^{21}$ $\ensuremath{\mathrm{cm^{-2}}}\xspace$ \citep{kalberla2005}. In Table \ref{tab:gaussians} we present the results of the fitting and in Fig. \ref{fig:gaus} the best fit. \par
In some cases the widths of the lines could not be resolved. For this reason we couple the widths of the forbidden and recombination lines to the width of the intercombination line for each of our triplets, $\ion{O}{vii}\xspace$ and $\ion{Ne}{ix}\xspace$. We also keep the theoretical values of the wavelengths for most of our lines, but apply a redshift component in SPEX to take into account a possible line shift, with different redshifts for the $\ion{O}{vii}\xspace$ and $\ion{Ne}{ix}\xspace$ triplets. We observe our lines slightly redshifted. In Table \ref{tab:gaussians} we present the velocity shift ($v_{flow}$) and broadening ($\sigma$) for each line. The velocity shifts of the lines are comparable; in particular, the low ionization ($\ion{O}{vii}\xspace$) and high ionization ($\ion{O}{viii}\xspace$) lines present the same kinematics. Within the errors, a moderate inflow is detected. \par
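The conversion from an observed wavelength shift to a line-of-sight velocity can be sketched as follows. This is a simple non-relativistic estimate in Python (our own illustration; the $v_{flow}$ values in Table \ref{tab:gaussians} come from the full spectral fit and can differ slightly).

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_shift(lam_obs, lam_rest):
    """Non-relativistic line-of-sight velocity (km/s) implied by a
    wavelength shift: v = c * (lam_obs - lam_rest) / lam_rest.
    Positive values indicate a redshift."""
    return C_KMS * (lam_obs - lam_rest) / lam_rest

# O VIII Ly-alpha: rest wavelength 18.97 A, observed near 19.01 A
v = velocity_shift(19.01, 18.97)
print(f"v_flow ~ {v:.0f} km/s")
```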
From the fluxes of the lines in the $\ion{O}{vii}\xspace$ triplet, we calculate the G and R ratios. These ratios are plasma diagnostics and can be used to identify photoionized plasma (\citealt{liedahl1999}, \citealt{mewe1999}). The G ratio is sensitive to the electron temperature, while the R ratio is sensitive to the electron density \citep{gabriel1969}; they are given by:
\begin{align}
& & G = \frac{f+i}{r} \\
& & R = \frac{f}{i}
\end{align}
where $\it f$, $\it i$ and $\it r$ represent the fluxes of the forbidden, intercombination and resonance lines, respectively. The G ratio is $\sim$ 4, which indicates photoionized gas \citep{porquet2000}. The R ratio is $\sim$ 0.02, indicating a high electron density $> 10^{12}$ $ \rm cm^{-3}$ (see Fig. 8 of \citealt{porquet2000}).
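The propagation of the flux uncertainties into the G and R diagnostics can be sketched with a simple Monte-Carlo draw. This Python illustration assumes Gaussian, uncorrelated flux errors; the flux values below are purely illustrative placeholders, not the measured ones.

```python
import numpy as np

rng = np.random.default_rng(42)

def triplet_ratios(f, df, i, di, r, dr, n=100_000):
    """Monte-Carlo propagation of Gaussian flux errors into the He-like
    triplet diagnostics G = (f + i)/r and R = f/i.
    Returns ((G_mean, G_std), (R_mean, R_std))."""
    fs = rng.normal(f, df, n)   # forbidden line flux draws
    is_ = rng.normal(i, di, n)  # intercombination line flux draws
    rs = rng.normal(r, dr, n)   # resonance line flux draws
    G = (fs + is_) / rs
    R = fs / is_
    return (G.mean(), G.std()), (R.mean(), R.std())

# Illustrative fluxes in arbitrary units (NOT the fitted values):
(G, dG), (R, dR) = triplet_ratios(0.1, 0.02, 4.0, 0.8, 1.0, 0.2)
print(f"G = {G:.2f} +/- {dG:.2f},  R = {R:.3f} +/- {dR:.3f}")
```

A suppressed forbidden line (small $f$) drives R toward zero, which is the high-density signature discussed in the text.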
\begin{table*}[htbp]
\begin{minipage}[t]{\hsize}
\setlength{\extrarowheight}{4pt}
\caption{Gaussian modelling parameters of the emission lines detected in the RGS spectrum. The symbol (c) indicates the coupled parameters and (t) the parameters that were set to the theoretical value.}
\centering
\small
\renewcommand{\footnoterule}{}
\begin{tabular}{c c c c c c c }
\hline \hline
Line & norm $10^{41}$ (ph/s/keV) &$\lambda_{observed} (\AA) $&$ \lambda_{theory} (\AA) $ & Width (\AA) &$\sigma (\ensuremath{\mathrm{km\ s^{-1}}}\xspace$) & $v_{flow}$ ($\ensuremath{\mathrm{km\ s^{-1}}}\xspace$) \\
\hline
$\ion{O}{viii}\xspace$ & $2.7^{ +1.7}_{ -0.4}$ & $19.01 \pm 0.01$ & 18.97 & $0.06 \pm 0.03$ &$ 870 \pm {480}$ & $+584 \pm 126 $ \\
\hline
$\ion{O}{vii}\xspace_{r}$ & $2.4 \pm 0.6$ & 21.6 (t) & 21.6 & 0.13 (c) & 1750 (c) & +300 (c) \\
$\ion{O}{vii}\xspace_{i}$ & $4.2 \pm 0.8$ & 21.8 (t) & 21.8 & $0.1^{ + 3.1 \times 10^{-2}}_{ - 2.5\times 10^{-5}} $ & $1750 \pm 400 $ & $ +300 \pm 165 $ \\
$\ion{O}{vii}\xspace_{f}$ & $ 4.1^{+ 4.1\times 10^{-5}}_{- 3.8\times 10^{-5}}$ & 22.01 (t) & 22.01 & 0.13 (c) & 1750 (c) & +300 (c) \\
\hline
$\ion{Ne}{ix}\xspace_{r}$ & $ 8.5^{ + 4.4 \times10^{-5}}_{ - 3.7\times 10^{-5}}$ & 13.44 (t) & 13.44 & 0.2 (c) & 4400 (c) & 0 (c) \\
$\ion{Ne}{ix}\xspace_{i}$ & $1.1^{ + 4.2 \times 10^{-4}}_{ - 3.6\times 10^{-4}}$ & 13.55 (t) & 13.55 & $ 0.2^{ + 1.6 \times 10^{-1}}_{ - 5.3 \times 10^{-2}} $ & $4400^{+ 3500}_{- 1200} $ &$ 0 \pm 2000$ \\
$\ion{Ne}{ix}\xspace_{f}$ & $ 1.3 \pm {0.5}$ & 13.69 (t) & 13.69 & 0.2 (c) & 4400 (c) & 0 (c) \\
\hline
$\ion{N}{vii}\xspace$ & $1.2 \pm 0.3$ & $24.84 \pm 0.01 $ & 24.77 & 0 & - & $ +714 \pm{121} $\\
\hline
\label{tab:gaussians}
\end{tabular}
\end{minipage}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{fig2.pdf}
\caption{Eclipsed spectrum of EXO 0748-676 with the Gaussian model and residuals. The feature at 23~\AA\ corresponds to a bad pixel.}
\label{fig:gaus}
\end{figure*}
\begin{figure} [htbp]
\centering
\includegraphics[width=0.5\textwidth]{fig3.pdf}
\caption{Gaussian line scanning to detect the strongest lines. The red dashed lines correspond to the 3 $\sigma$ detection level. The very narrow absorption-like features correspond to bad pixels.}
\label{fig:scan}
\end{figure}
\subsection{Photoionization modelling}
Next, we perform a detailed spectral analysis using a photoionization model. We apply the models for the RGS wavelength range, 7-35 {\AA}. In order to fit the emission features we use the \textit{pion} model in SPEX\footnote{http://var.sron.nl/SPEX-doc/manualv3.04.00.pdf} (see \citealt{mehdipour2016}). \textit{Pion} is a photoionization plasma model where the photoionization equilibrium is calculated self-consistently.
\subsubsection{Spectral Energy Distribution}
\label{sed}
To obtain the correct Spectral Energy Distribution (SED) of the central source we use the following approach. During the eclipses the ionizing continuum is shielded and is not visible in the eclipsed spectrum. Thus, we fit the EPIC-pn spectrum of a single observation, taken during the persistent phase, to obtain the parameters of the ionizing continuum. The best fit of the persistent emission is shown in Fig. \ref{fig:persistent}. We apply a power law and a black body, and for a better fit of the absorption and emission features we add a $\textit{hot}$ component for the Galactic absorption. We also use a $\textit{xabs}$ component, a photoionized absorption model that calculates the transmission of a slab of material; it is needed to fit the ionized gas seen in the persistent spectrum of this source, as described in \citet{peet2017}. $\textit{Xabs}$ is a fast fitting model which contains only absorption lines, while \textit{pion} also contains emission lines. Thus, it is convenient to fit the pn spectrum with $\textit{xabs}$, since the emission lines cannot be seen at the moderate resolution of the instrument. The parameters of the fit are listed in Table \ref{tab:persistent}. \par
The emission lines present in the eclipsed RGS spectrum are photoionized by the continuum that is seen during the persistent phase. Therefore, for our $ \it pion$ modelling we use the continuum model that we derived from the EPIC-pn spectrum taken during the persistent phase. This continuum is of course different from the low-level observed continuum during the eclipsed phase. Thus, in our SPEX modelling of the eclipsed RGS spectrum, we prevent the persistent continuum from being observed by incorporating an $\textit{etau}$ model. We note that additional $\textit{etau}$\footnote{http://var.sron.nl/SPEX-doc/manualv3.04.00.pdf} components are used to create the low-energy and high-energy exponential cut-offs of the power-law component of the persistent continuum: a high-energy cut-off is applied at 100 keV and a low-energy one at 13.6 eV.
\subsubsection{Fitting the eclipsed emission}
We fit the following parameters of the \textit{pion} model in SPEX. First, we fit the ionization parameter $\xi = L/(n_{\rm H}\, r^{2})$, where $L$ is the source luminosity and $r$ the distance from the ionizing source. The hydrogen column density $N_{\rm H}$ in $ \rm cm^{-2}$ and the plasma density $n_{\rm H}$ in $ \rm cm^{-3}$ are also fitted. Furthermore, we fit the parameter $\Omega/4\pi$, which gives the opening angle of our plasma material divided by $4\pi$, the velocity shift $v_{flow}$ and the velocity broadening $\sigma$ in $\ensuremath{\mathrm{km\ s^{-1}}}\xspace$. \par
The $\ion{O}{viii}\xspace$ line prevents the $\ion{O}{vii}\xspace$ line from being properly fitted when only one photoionization component is applied. For this reason, we need two \textit{pion} components with different ionization parameters to fit the highly ionized and the weakly ionized lines (hereafter, components A and B, respectively). A comparison between the modelling with one and two $\textit{pion}$ components is presented in Figure \ref{fig:bestfit}, lower panel. \par
The best fit is shown in Fig. \ref{fig:bestfit} and the best-fit parameters for the two photoionization components in Table \ref{tab:pions}. The $\ion{O}{vii}\xspace$ triplet (component B) is very sensitive to the density of the plasma. In our model, we applied a density grid starting from a low value of $\sim 1$ $\rm cm^{-3}$ to see the effect of using lower and higher densities. In Figure \ref{fig:oviidensities} we present a zoom-in of the $\ion{O}{vii}\xspace$ region using the models with different densities. The best fit to the data gives a rather high density of $2 \cdot 10^{13} \rm \ cm^{-3}$ for component B (upper limit). In our case, the $\ion{O}{vii}\xspace$ intercombination line is the strongest and the forbidden line is suppressed, which is what we expect for a high-density gas \citep{porquet2000}. On the other hand, for component A we cannot constrain the density due to the poor signal-to-noise around the $\ion{Ne}{ix}\xspace$ region. The $\ion{O}{viii}\xspace$ and $\ion{Ne}{ix}\xspace$ lines are not affected by the density changes. For this reason the densities of components A and B are coupled. \par
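The density sensitivity of the $\ion{O}{vii}\xspace$ triplet described above can be sketched with the standard forbidden-to-intercombination ratio $R = f/i$ of \citet{porquet2000}, which follows the generic form $R(n_e) = R_0/(1 + n_e/n_c)$. The values of $R_0$ and of the critical density $n_c$ below are indicative numbers for $\ion{O}{vii}\xspace$, not the atomic data used in the SPEX fit.

```python
# Sketch of the O VII forbidden-to-intercombination (R = f/i) density
# diagnostic. R0 and nc are indicative values for O VII, not the atomic
# data used in the SPEX fit.
R0 = 3.95    # low-density limit of R (indicative)
nc = 3.4e10  # critical density in cm^-3 (indicative)

def r_ratio(n_e):
    """R = f/i as a function of electron density n_e (cm^-3)."""
    return R0 / (1.0 + n_e / nc)

# At the best-fit density of ~2e13 cm^-3 the forbidden line is strongly
# suppressed relative to the intercombination line, as observed.
print(r_ratio(1e9))   # low-density regime: f dominates
print(r_ratio(2e13))  # high-density regime: f suppressed
```

At $n_e \sim 2\cdot 10^{13}$ $\rm cm^{-3}$ the ratio drops by more than two orders of magnitude with respect to the low-density limit, consistent with the suppressed forbidden line seen in Figure \ref{fig:oviidensities}.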
We observe a net gas velocity of $ \sim +800$ $\rm km \ s^{-1}$ (from component B), which indicates gas flowing in towards the disk. For component A the velocity cannot be constrained. We derive a different ionization parameter and opening angle ($\Omega$) for the two components. In our spectrum, we also detect the $\ion{O}{vii}\xspace$ and $\ion{O}{viii}\xspace$ radiative recombination continuum (RRC) emission features at 16.75 {\AA} and 14.20 {\AA}, respectively. These narrow features are an indication of photoionized gas (\citealt{paradijs1998}, \citealt{garate2003}). \par
Furthermore, knowing the ionization parameter and the density, we calculate the emission measure (EM) for the two photoionization components (see Tables \ref{tab:pions}, \ref{tab:EM}). We first obtain the distance from the ionizing source from the equation:
\begin{equation}
r=\sqrt{\frac{L}{n_H \cdot \xi} }
\end{equation}
we then calculate the emission measure using the formula (see e.g. \citealt{mao2017}):
\begin{equation}
EM= n_e \cdot n_H \cdot 4 \pi \cdot \Omega \cdot r^{2} \cdot \frac{N_H}{n_H}
\end{equation}
where $n_{H}$, $N_{H}$ and $n_{e}$ are the hydrogen density, the hydrogen column density and the electron density, respectively, and $\Omega$ is the opening angle. \par
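As a sanity check, the two formulas above can be evaluated directly with the best-fit values of component A from Tables \ref{tab:pions} and \ref{tab:EM}. The electron density is taken as $n_e \approx 1.2\, n_H$, an assumption appropriate for a fully ionized plasma with cosmic abundances (it is not a fitted quantity here).

```python
import math

# Evaluate r = sqrt(L/(n_H*xi)) and EM = n_e*n_H*4*pi*(Omega/4pi)*r^2*(N_H/n_H)
# with the tabulated best-fit values of component A.
L = 1.83e32           # luminosity, erg s^-1
xi = 10**2.5          # ionization parameter, erg cm s^-1
n_H = 2e13            # hydrogen density, cm^-3
N_H = 0.75e21         # hydrogen column density, cm^-2
omega_over_4pi = 0.5  # covering fraction Omega/4pi

r = math.sqrt(L / (n_H * xi))   # distance from the ionizing source, cm
n_e = 1.2 * n_H                 # assumed for a fully ionized plasma
EM = n_e * n_H * 4 * math.pi * omega_over_4pi * r**2 * (N_H / n_H)

print(f"r  = {r:.2e} cm")       # ~1.7e8 cm, consistent with Table 3
print(f"EM = {EM:.1e} cm^-3")   # ~3e51 cm^-3, consistent with Table 3
```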
\subsubsection{Testing for alternative models}
\label{cie}
Here, we test whether collisional-ionization equilibrium provides a better description of the data. Therefore, we fitted our eclipsed spectrum with the $\textit{cie}$ model in SPEX, which fits the spectrum of a plasma in collisional ionization equilibrium. We use the same continuum (as described in Section \ref{sed}) and fit two $\textit{cie}$ components. We also add a $\textit{reds}$ component in SPEX to take into account the possible redshift of the lines. We find a different plasma temperature for each $\textit{cie}$ component, 5 keV and 0.15 keV, respectively. In this model, we notice that the forbidden line is strong and the intercombination line is suppressed, which is not what we observe. We conclude that a low-density collisionally ionized gas is not valid in this case. We also test a high-density collisionally ionized gas, which does not fit our spectrum better either (C-stat\,/\,Exp. C-stat= 1842\,/\,1067). \par
Furthermore, in the $\textit{cie}$ model, one can test the case of a non-equilibrium plasma. We leave free the parameter $\textit{rt}$ (the ratio of ionization balance to electron temperature). The electron density for both components is $\sim 3 \cdot 10^{13} \rm \ cm^{-3}$. In this case, the intercombination line becomes stronger than the resonance line and the forbidden line is suppressed, but we still do not obtain a better fit than in the photoionized case, as we see in Fig. \ref{fig:cie} (C-stat\,/\,Exp. C-stat= 1738\,/\,1067). Lastly, we tested a more complicated model, the combination of a collisionally ionized component and a photoionization component, in order to reduce our residuals. It still does not improve our fit, so we conclude that the model that best represents our case is the photoionization modelling.
\begin{figure} [!tbp]
\centering
\includegraphics[width=0.55\textwidth]{fig4.pdf}
\caption{Best fit of the EPIC-pn persistent spectrum and residuals (Obs ID: 0160761301). }
\label{fig:persistent}
\end{figure}
\begin{table*}[!tbp]
\begin{minipage}[t]{\hsize}
\centering
\setlength{\extrarowheight}{3pt}
\caption{The continuum parameters of the spectrum during the persistent emission and the power law parameters resulting from the eclipsed spectrum. The symbol 'pow' refers to the power law parameters and 'bb' to the black body.}
\small
\renewcommand{\footnoterule}{}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c c c c c c }
\hline \hline
Continuum & Parameter (Unit) & Value \\
Component & & \\
\hline
$pow_{persistent} $ & norm $10^{44}$ (ph/s/keV at 1 keV) & $1.6 \pm 0.02 $ \\
& $\Gamma$ (photon index) & $1.4 \pm 0.01$ \\
$bb_{persistent} $ & norm ($\rm cm^{2}$) & $1.13^{ + 8.0 \times 10 ^{-8}}_{ - 5.5 \times 10^{-8}} $ \\
& Temperature (keV) &$ 0.12 ^{+ 1.8 \times 10 ^{-3}}_{- 1.7 \times 10^{-2}} $ \\
\hline
$pow_{eclipsed} $ & norm $10^{44}$ (ph/s/keV at 1 keV) &$ 0.032 \pm 0.003$\\
&$ \Gamma$ &$ 1.9 \pm 0.2 $\\
\hline
\label{tab:persistent}
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}[!tbp]
\begin{minipage}[t]{\hsize}
\setlength{\extrarowheight}{3pt}
\caption{Best-fit parameters of the pion photoionization model components fitted to the stacked XMM-$\textit{Newton}$ data. The symbol (c) indicates the coupled parameters.}
\centering
\small
\renewcommand{\footnoterule}{}
\begin{tabular}{c c c c c c c c c c c }
\hline \hline
Comp & $\log~\xi$ &$ \ensuremath{N_{\mathrm{H}}}\xspace$ & $\sigma $ & $\Omega\,/\,4 \pi$ & $n_{H}$ & $v_{flow}$ \\
& ($\rm erg \cdot \rm s^{-1} \cdot \rm cm$) & ($ \rm cm^{-2} \cdot 10^{21}$) & ($\ensuremath{\mathrm{km\ s^{-1}}}\xspace$) & - & $( \rm cm^{-3} \cdot 10^{13})$ & ($\ensuremath{\mathrm{km\ s^{-1}}}\xspace$) \\
\hline
A & $2.5\pm 0.1$ & $0.75^{ + 0.53}_{ - 0.25}$ & $591 \pm 350$ & $< 0.5^{+ 0.0}_{- 0.3}$ & 2 (c) & 0 (fixed) \\
B & $1.3 \pm 0.1$ & $18^{ + 7}_{ - 6}$ & $ 504^{+ 190}_{- 120} $ & $0.007\pm0.002 $ & $ < 2^{+ 0.0}_{- 0.6}$ & $880 ^{+28}_{-91} $ \\
\hline
\multicolumn{8}{c}{C-stat\,/\,Exp. C-stat = 1637\,/\,1067 \footnote{For the Exp. C-stat see \citet{kaastra2017}} } \\
\hline
\label{tab:pions}
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}[!tbp]
\begin{minipage}[t]{\hsize}
\setlength{\extrarowheight}{3pt}
\caption{Calculated parameters according to the best-fit parameters of the pion photoionization model components fitted to the stacked XMM-$\textit{Newton}$ data.}
\centering
\small
\renewcommand{\footnoterule}{}
\begin{tabular}{c c c c c c }
\hline \hline
Comp & L & EM & r & $n_e$ & Thickness \\
& ($ \rm erg \cdot s^{-1}$) & $( \rm cm^{-3})$ & (cm) &$ (\rm 10^{9} cm^{-3} )$ & (cm) \\
\hline
A &$1.83\times 10^{32}$& $ 3\times 10^{51} $ & $1.67\times 10^{8} $ & 2.4& $ 3.5 \times 10^7$\\
B &$7.68\times 10^{30}$& $ 9\times 10^{46}$&$ 2.4 \times 10^{10}$ & 2.4 &$8.2 \times 10^8$ \\
\hline
\label{tab:EM}
\end{tabular}
\end{minipage}
\end{table*}
\begin{figure*} [htbp]
\begin{centering}
\includegraphics[width=1\textwidth]{fig5.pdf}
\caption{$\textit{Upper panel)}$ Best-fit spectrum of EXO 0748-676 using 2 pion components. The feature at 23 {\AA} corresponds to a bad pixel. $\textit{Middle panel)}$ Residuals. $\textit{Lower panel)}$ Pion emission components. The combination of 2 components best fits both $\ion{O}{vii}\xspace$ and $\ion{O}{viii}\xspace$. }
\label{fig:bestfit}
\end{centering}
\end{figure*}
\begin{figure} [htbp]
\begin{centering}
\includegraphics[width=0.5\textwidth]{fig6.pdf}
\caption{$\ion{O}{vii}\xspace$ spectral range and model calculations with different densities. The best fit is given by the highest density value, displayed in red.}
\label{fig:oviidensities}
\end{centering}
\end{figure}
\begin{figure*} [!tbp]
\centering
\includegraphics[width=0.95\textwidth]{fig7.pdf}
\caption{Modelling the eclipsed spectrum of EXO 0748-676 with collisionally ionized gas and comparison to the best-fit photoionized gas. In both cases we have a high-density gas.}
\label{fig:cie}
\end{figure*}
\subsection{Are the two photoionization components thermally stable?}
\label{scurve}
\begin{figure*} [h]
\centering
\includegraphics[width=0.5\textwidth]{fig8.pdf}
\caption{Thermal stability curve of the photoionization modelling. The two components are not in pressure equilibrium. }
\label{fig:scurve}
\end{figure*}
A photoionized plasma can be unstable at some ionization states. We test whether our photoionization components are in pressure equilibrium. This can be done by creating a stability curve (also called an S-curve or cooling curve). The stability curve shows the change in the electron temperature as a function of the pressure form of the ionization parameter, $\Xi$, introduced by \citet{krolik1981}, which is defined as the radiation pressure divided by the gas pressure.
\par
We produce the S-curves as follows. We first create a grid of electron temperatures and ionization parameters $\xi$ according to our photoionization modelling and the SED in SPEX. We then calculate the pressure ionization parameter $\Xi$. The parameter $\Xi$ characterises the ionization equilibrium and can be expressed as $\Xi =F/(n_{\rm H}\, c\, kT)$, where $n_{\rm H}$ is the hydrogen density in $ \rm cm^{-3}$, $\it k$ is the Boltzmann constant and $\it T$ the gas temperature. Since $F=L/(4 \pi r^{2})$ and $\xi=L/(n_{\rm H}\, r^{2})$, $\Xi$ can be written as:
\begin{equation}
\Xi=\frac{L}{4 \pi r^{2} \cdot n_{H} \cdot c \cdot kT}=\frac{\xi}{4 \pi \cdot c \cdot kT}=19222 \cdot \frac{\xi}{T}
\end{equation}
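The numerical prefactor in the last equality can be checked directly from the CGS values of the physical constants; the small difference with respect to 19222 reflects rounding in the adopted constants.

```python
import math

# Check the prefactor in Xi = xi/(4*pi*c*k*T) = 19222 * xi/T (CGS units).
c = 2.99792458e10  # speed of light, cm s^-1
k = 1.380649e-16   # Boltzmann constant, erg K^-1

prefactor = 1.0 / (4 * math.pi * c * k)
print(f"{prefactor:.0f}")  # ~19226, matching the quoted 19222 to <0.1%
```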
In Figure \ref{fig:scurve} we present the stability curve. We also overplot, with an empty and a filled circle, the locations of our photoionization components A and B, respectively. The two components are not in pressure equilibrium. Component B is cooler and belongs to a thermally stable part of the curve. Component A is hotter and may reside on a thermally unstable branch, although from the position of the component on the curve this is difficult to constrain.
\section{Discussion}
\label{discussion}
In this work, we performed a spectroscopic study of the eclipsed spectrum of the low-mass X-ray binary EXO 0748-676. We fit the spectrum using both Gaussian line modelling and photoionization modelling. By modelling the eclipsed spectrum we isolate only the material that comes from the area that surrounds the disk, from which we confirm the existence of an upper extended disk atmosphere.
\subsection{Geometry of the system}
\label{geometry}
\subsubsection{The observed two-phase gas}
\label{equil}
In our analysis we performed photoionization modelling using two components in order to fit both the lower-ionization ($\ion{O}{vii}\xspace$) and higher-ionization ($\ion{O}{viii}\xspace$) lines (see Figure \ref{fig:bestfit}). The Gaussian line fitting and the G ratio indicate the existence of photoionized emission from the upper disc atmosphere. Also, with our global modelling, we found narrow RRC features in the spectrum. The RRC emission features show that photoionization is the dominant ionizing mechanism in the plasma. This supports our initial assumption in using photoionization modelling. \par
From the stability curve (see Section \ref{scurve}) we see that we have a two-phase gas out of pressure equilibrium. This may suggest that components A and B (see Table \ref{tab:pions}) are independent gas components. Looking at the fitting parameters in Table \ref{tab:pions}, the two components have different ionization parameters ($\xi$) and opening angles ($\Omega$). The higher-ionization component (A) comes from a region with an opening angle of $2\pi$, while the lower-ionization one (B) comes from a region two orders of magnitude smaller. We also find a rather high density for component B ($\sim 10^{13} \rm \ cm^{-3}$), while for component A the density could not be determined due to the poor signal-to-noise around the $\ion{Ne}{ix}\xspace$ region. Component B also seems to be inflowing. For component A the velocity cannot be constrained. \par
\subsubsection{The structure of the atmosphere}
\label{atm}
In Fig. \ref{fig:exo} we present an illustration that shows the position of the two-phase gas and the shape of the upper atmosphere which most likely fits our case. From our observational results we see a two-phase gas covering different areas along the line of sight. The emission measure of the lower-ionization component (B) is smaller than that of the higher-ionization one (A) (see Table \ref{tab:EM}). From the definition of the emission measure ($EM=\int n_{e} n_{H} dV$), we can conclude that the gas of component B comes from a small volume, which is consistent with the small value of $\Omega$, while component A covers a broader area. \par
A likely scenario that explains the above results is the following. The high $\xi $ and larger $\Omega$ material comes from the extended atmosphere that is located above the accretion disk. This atmosphere can be created by illumination that comes from the disk which is heated by the central X-ray source (see \citealt{garate2003}). \par
The material that we see during the eclipse, and attribute to component B, can be related to the material impinging on the disk. Interestingly, the eclipses systematically occur during the dipping events, almost at their end (see Fig. \ref{fig:lc}, lower panel). This applies to all our observations. Most dips are attributed to over-densities above the disk region (at the outer disk edge), which has been thickened as an effect of the impact of the accretion stream (\citealt{white1982}, \citealt{king1987}). \citet{trigo2009} found that in the low-mass X-ray binary system XB 1254-690, the gas in the line of sight causing the dips is clumpy. Clumps have also been suggested to explain the phenomenology seen in Her X-1 \citep{schandl1996} or to explain the high optical luminosity of supersoft X-ray sources \citep{suleimanov2003}. In our case, the lower-$\xi$ gas with the small volume may come from a clumpy region above the disk. This emission could come from clumpy gas created by pressure instabilities during the impact of the accretion stream on the disk. The clumps seem to have a rather high density ($\sim 10^{13} \rm \ cm^{-3}$) and they might also have an inflowing velocity. This geometrical picture can also explain our findings of a two-phase gas. \par
The atmosphere (component A in Fig. \ref{fig:exo}) most likely extends over the outer region of the disk. We can explain the existence of an extended corona in the frame of X-ray irradiation of the upper layers of an accretion disk (see e.g. \citealt{garate2002}). In this context, in the inner region of the accretion disk, close to the neutron star, viscous heating is the dominant heating mechanism. The outer region of the accretion disk is dominated by external illumination, and thus radiative heating exceeds viscous heating. The X-ray field of the neutron star photoionizes and heats the gas. Additional heating is provided by the illumination from the accretion stream. The gas tends to move towards hydrostatic equilibrium, convection is suppressed, and the scale height of the disk increases. For this reason the atmosphere is more extended in the outer part of the disk.
\begin{figure*} [htbp]
\includegraphics[width=0.95\textwidth]{fig9.png}
\caption{Illustration of EXO 0748-676 with the clumps in the extended atmosphere. Symbols A and B show the emission regions of our two photoionization components. A is the hotter component while B is the cooler one. The components in the figure are not to scale.}
\label{fig:exo}
\end{figure*}
\subsection{Comparison with previous results}
\label{comparison}
We compare the spectroscopic results of EXO 0748-676 with results from other sources that present an extended accretion disk corona. \citet{kallman2003} analysed $\textit{Chandra}$ observations of the high-inclination ADC source 2S 0921-63. From the density diagnostics of the $\ion{O}{vii}\xspace$ line, they constrain a density for the ADC of $10^{9}-10^{11} \rm \ cm^{-3}$ and an ionization parameter of $\log \xi=2$.\par
Further, \citet{cottam2001_2} found from the $\textit{Chandra}$ spectrum of the ADC source 4U 1822-37 an opening angle of $\frac{\Omega}{2\pi}=0.11$ which, according to the authors, constrains an extended area around the source. Also they derive an electron density of $10^{11}$ $\rm cm^{-3}$. Further, \citet{garate2005}, for the ADC source Hercules X-1, obtained using the R-ratio a high density of $10^{13} \rm cm^{-3}$, similar to the value obtained here. According to the authors, this density is consistent also with predicted atmospheric models. Also, in both sources, the presence of the RRC emission features is evident. \par
\citet{cottam2001_1} studied the XMM-$\textit{Newton}$ spectrum during the persistent emission intervals of EXO 0748-676 and found emission lines from $\ion{O}{vii}\xspace$, $\ion{O}{viii}\xspace$, $\ion{Ne}{ix}\xspace$ and $\ion{N}{vii}\xspace$. They conclude that these features come from the extended atmosphere above the disk. They estimate the density using the line ratios and found lower limits from $\ion{O}{vii}\xspace$ and $\ion{Ne}{ix}\xspace$ of $ 2 \cdot 10^{12} \rm \ cm^{-3}$ and $ 7 \cdot 10^{12} \rm \ cm^{-3}$, respectively. They also found a velocity broadening of the $\rm Ly\alpha$ line of $\ion{O}{viii}\xspace$ of $\sim 1400$ $\rm km \ s^{-1}$ and a systemic velocity of the emitting plasma of $< 300$ $\rm km \ s^{-1}$. \par
\citet{garate2003} analysed the persistent $\textit{Chandra}$ spectrum of EXO 0748-676 and confirmed the existence of an upper atmosphere around the disk. They find the upper atmosphere to extend $8^{\circ}-15^{\circ}$ with respect to the disk midplane. Furthermore, the density measured from the $\ion{O}{vii}\xspace$ lines is $\sim 10^{11}$ $\rm cm^{-3}$, with a mean broadening of the brightest lines of $\sim 700$ $\rm km \ s^{-1}$. Their lines seem to be at rest, while our modelling shows a slight redshift. It has to be noted that those authors studied the total emission of the disk in the persistent interval, while in our case we obtain the emission only from the upper disk atmosphere and during a specific time interval of the accretion event. \par
\section{Summary}
\label{Summary}
In this study we have analysed the XMM-\textit{Newton} RGS spectrum of the low-mass X-ray binary EXO 0748-676 during the eclipses. This allowed us to study for the first time the gas coming only from the upper disk atmosphere. In our work, we modelled the emission lines using Gaussian line fitting to estimate the gas velocity and line shift in a model-independent way. Furthermore, we used photoionization modelling and constrained the density and the structure of the atmosphere. Our conclusions are the following: \\
\begin{itemize}
\item We confirm the existence of an extended atmosphere above the accretion disk of EXO 0748-676. We detect $\ion{O}{vii}\xspace$ and $\ion{O}{viii}\xspace$ lines with high significance, but also $\ion{Ne}{ix}\xspace$ and $\ion{N}{vii}\xspace$ lines. We measure positive velocity shifts for the strongest lines, which is evidence for gas inflowing towards the central source.
\item From the line ratios of the $\ion{O}{vii}\xspace$ triplet and the photoionization modelling, we estimate the density of the gas and we obtain a rather high value of $\sim 10^{13} \rm cm^{-3}$.
\item In our modelling we use two photoionization components. Our thermal stability analysis shows that the two components are out of equilibrium with each other. This means that we probably observe two distinct gas components. One displays a smaller opening angle and most likely comes from clumps created by the impact of the accretion stream on the disk, inflowing towards the source. The other belongs to the atmosphere and, from our modelling, covers an opening angle of at most $2\pi$.
\item The results support the scenario that the extended disk atmosphere is created due to heating of the outer part of the disk from the central compact source and the accretion stream and it is most likely photoionized.
\end{itemize}
\begin{acknowledgements}
The authors thank the anonymous referee for the useful comments. IP, DR and EC are supported by the Netherlands Organisation
for Scientific Research (NWO) through The Innovational Research Incentives
Scheme Vidi grant 639.042.525. The Space Research Organization of the
Netherlands is supported financially by the NWO. We would like to thank R. Waters for constructive suggestions on the manuscript and C. Done for useful discussions on the shape of the disk atmosphere.
We also thank I. Urdampilleta for providing help with $\textit{python}$ and C. Pinto for advice on the line detection. \end{acknowledgements}
\vspace{-0.4cm}
In this paper we will need three properties of iteration strategies, namely \textit{Skolem-hull condensation}, \textit{pullback condensation} and \textit{generically universal Bairness}. We now define these notions.
We say $({\mathcal{P} }, \Psi)$ is an \textit{iterable pair} if ${\mathcal{P} }$ is a pre-iterable structure and $\Psi$ is a strategy for it. Suppose $({\mathcal{P} }, \Psi)$ is an iterable pair. If ${\mathcal{T}}$ is a smooth iteration of ${\mathcal{P} }$ according to $\Psi$ with last model ${\mathcal{ Q}}$ then we write $\Psi_{{\mathcal{T}}, {\mathcal{ Q}}}$ for the strategy of ${\mathcal{ Q}}$ induced by $\Psi$. Namely, $\Psi_{{\mathcal{T}}, {\mathcal{ Q}}}({\mathcal{U}})=\Psi({\mathcal{T}}^\frown {\mathcal{U}})$. When $\Psi_{{\mathcal{T}}, {\mathcal{ Q}}}$ is independent of ${\mathcal{T}}$ we will drop it from our notation. Given a ${\mathcal{P} }$-cardinal $\xi$, we write $\Psi_{{\mathcal{P} }|\xi}$ for the fragment of $\Psi$ that acts on smooth iterations based on ${\mathcal{P} }|\xi$. Here recall that ${\mathcal{P} }|\xi=H_\xi^{\mathcal{P} }$.
Continuing with $({\mathcal{P} }, \Psi)$, suppose $\pi:{\mathcal{N}}\rightarrow {\mathcal{P} }$ is elementary. Given a smooth iteration ${\mathcal{T}}$ of ${\mathcal{N}}$ we can define the copy $\pi{\mathcal{T}}$ on ${\mathcal{P} }$ which may or may not have well-founded models. The construction of $\pi{\mathcal{T}}$ was introduced in \cite{IT} on page 17. Suppose now that ${\mathcal{T}}$ is such that $\pi{\mathcal{T}}$ is according to $\Psi$ and ${\mathcal{T}}$ is of limit length. Let $b=\Psi(\pi{\mathcal{T}})$. It follows from the construction of $\pi{\mathcal{T}}$ that $b$ yields a well-founded branch of ${\mathcal{T}}$.
We then say that $\Lambda$ is the $\pi$-pullback of $\Psi$ if for any smooth iteration ${\mathcal{T}}$ on ${\mathcal{N}}$ that is according to $\Lambda$, $\pi{\mathcal{T}}$ is according to $\Psi$. It is customary to denote $\Lambda$ by $\Psi^\pi$.
\begin{definition}\label{skolem hull condensation} Suppose $({\mathcal{P} }, \Psi)$ is an iterable pair.
We say $\Psi$ has \textbf{Skolem-hull condensation} if whenever ${\mathcal{T}}$ is an iteration according to $\Psi$, $\xi$ is such that ${\mathcal{T}}\in V_\xi$ and $\pi: M\rightarrow V_\xi$ is elementary such that $({\mathcal{P} }|\xi, \Psi_{{\mathcal{P} }|\xi}, {\mathcal{T}})\in rng(\pi)$ then $\pi^{-1}({\mathcal{T}})$ is according to $\Psi^\pi_{{\mathcal{P} }|\xi}$.
\end{definition}
\begin{definition}\label{pullback condensation} Suppose $({\mathcal{P} }, \Psi)$ is an iterable pair. We say $\Psi$ has \textbf{pullback condensation} if whenever ${\mathcal{T}}$ is an iteration according to $\Psi$ with last model ${\mathcal{ Q}}$ and ${\mathcal{U}}$ is an iteration of ${\mathcal{ Q}}$ according to $\Psi_{{\mathcal{T}}, {\mathcal{ Q}}}$ with last model ${\mathcal R}$ then $\Psi^{\pi^{\mathcal{U}}}_{{\mathcal{T}}^\frown {\mathcal{U}}, {\mathcal R}}=\Psi_{{\mathcal{T}}, {\mathcal{ Q}}}$.
\end{definition}
The following theorems are easy consequences of $\sf{UBH}$ ($\sf{gUBH}$), and are probably not due to the authors.
\begin{theorem}\label{easy consequence0} Assume $\sf{UBH}$ and suppose $\lambda$ is inaccessible. Then $V_\lambda\models \sf{UBH}$.
\end{theorem}
\begin{theorem}\label{easy consequence1} Assume self-iterability and suppose $\Psi$ is the unique strategy of ${\mathcal{V}}$. Then $\Psi$ has Skolem-hull condensation and pullback condensation.
\end{theorem}
Suppose $({\mathcal{P} }, \Psi)$ is an iterable pair. Given a strong limit cardinal $\kappa$ and $F\subseteq Ord$, set
\begin{center}
$W^{\Psi, F}_\kappa=(H_{\kappa}, F\cap \kappa, {\mathcal{P} }|\kappa, \Psi_{{\mathcal{P} }|\kappa}\restriction H_\kappa, \in)$.
\end{center}
Given a structure $Q$ in a language extending the language of set theory with a transitive universe, and an $X\prec Q$, we let $ M_X$ be the transitive collapse of $X$ and $\pi_X: M_X\rightarrow Q$ be the inverse of the transitive collapse. In general, the preimages of objects in $X$ will be denoted by using $X$ as a subscript, e.g. $\pi_X^{-1}({\mathcal{P} }) = {\mathcal{P} }_X$. Suppose in addition $Q=(R,...{\mathcal{P} },\Phi,...)$ where ${\mathcal{P} }$ is a pre-iterable structure and $\Phi$ is an iteration strategy of ${\mathcal{P} }$. We will then write $X\prec (Q|\Phi)$ to mean that $X\prec Q$ and the strategy of ${\mathcal{P} }_X$ that we are interested in is $\Phi^{\pi_X}$. We set $\Lambda_X=\Phi^{\pi_X}$.
Motivated by the definition of universally Baire sets that involves club of generically correct hulls, we make the following definition.
\begin{definition}\label{ub strategy} We say $\Psi$ is a \textbf{generically universally Baire (guB) strategy} for a pre-iterable ${\mathcal{P} }=(P, \vec{E})$ if there is a formula $\phi(x)$ in the language of set theory augmented by three relation symbols and $F\subseteq Ord$ such that for every inaccessible cardinal $\kappa$ and for every countable
\begin{center}
$X\prec (W^{\Psi, F}_\kappa| \Psi_{{\mathcal{P} }|\kappa})$
\end{center}
whenever
\begin{enumerate}[(a)]
\item $g\in V$ is $M_{X}$-generic for a poset of size $<\kappa_X$ and
\item ${\mathcal{T}}\in M_X[g]$ is such that for some $M_X$-inaccessible $\eta<\kappa_X$, ${\mathcal{T}}$ is an iteration of ${\mathcal{P} }_X|\eta$,
\end{enumerate}
the following conditions hold:
\begin{enumerate}
\item if $lh({\mathcal{T}})$ is a limit ordinal and ${\mathcal{T}}\in dom(\Lambda_X)$ then $\Lambda_X({\mathcal{T}})\in M_X[g]$,
\item ${\mathcal{T}}$ is according to $\Lambda_X$ if and only if $M_X[g]\models \phi[{\mathcal{T}}]$.
\end{enumerate}
We say that $(\phi, F)$ is a generic prescription of $\Psi$.
\end{definition}
In \rdef{ub strategy}, we could demand that there is a club of $X$ with the desired properties. However that would be equivalent to our definition as we can let $F$ above code the desired club. In the next section our goal is to prove some basic facts about $guB$-strategies.
\section{Generic interpretability of guB strategies}\label{sec: gen-it}
As we said in the introduction, from this point on we work under the hypothesis of \rthm{main theorem}. However, we will not use the existence of a strong cardinal until \rsec{sec:der model}.
Let $\Psi$ be the guB-strategy of ${\mathcal{V}}=(V, \sf{ile}(V))$ and fix a generic prescription $(\phi, F)$ for $\Psi$ (see Definition \ref{ub strategy}). We will omit $\Psi, F$ from our notation and just write $W_\kappa$ instead of $W_\kappa^{\Psi, F}$. Given a cardinal $\alpha$ we will write $\Psi_\alpha$ for the fragment of $\Psi$ that acts on iterations based on ${\mathcal{V}}|\alpha$. Often we will treat $\Psi_\alpha$ as a strategy for ${\mathcal{V}}|\alpha$ rather than a strategy for ${\mathcal{V}}$. Similarly, given an interval $(\alpha, \beta)$ we will write $\Psi_{\alpha, \beta}$ for the fragment of $\Psi$ on iterations based on ${\mathcal{V}}|\beta$ above $\alpha$. To make the notation simpler, often we will not specify the domain of $\Psi_\alpha$ that we have in mind (as in \rlem{simple capturing}).
Let $\delta$ be a Woodin cardinal of ${\mathcal{V}}$. We first prove that $\Psi_\delta$ has canonical extensions in generic extensions of $V$. As a first step, we prove the following useful capturing result.
\begin{lemma}\label{simple capturing} Suppose $\lambda$ is an inaccessible cardinal and let $X\prec (W_\lambda|\Psi_\delta)$ be countable. Set $\Phi=\pi_X^{-1}(\Psi_\delta)$. Then $\Lambda_X\restriction M_X=\Phi$.
\end{lemma}
\begin{proof} Let ${\mathcal{U}}\in M_X$ be such that ${\mathcal{U}}\in dom(\Phi)\cap dom(\Lambda_X)$. Set $b=\Lambda_X({\mathcal{U}})$. It follows from (2) of Definition \ref{ub strategy} that $b\in M_X$. Because $M_X\models {\sf{gUBH}}$, it follows that $\Phi({\mathcal{U}})=b$.
\end{proof}
\begin{theorem}\label{strategies can be extended}
Suppose $\delta$ is a Woodin cardinal and $\eta\geq \delta$ is an inaccessible cardinal. Let $g\subseteq Coll(\omega, \eta)$ be generic. Then, in $V[g]$, there is an $Ord$-strategy $\Sigma$\footnote{Recall that we are assuming self-iterability.} for ${\mathcal{V}}|\delta$ such that the following hold.
\begin{enumerate}
\item $\Psi_\delta \subseteq \Sigma$,
\item Letting $\Delta$ be the $\omega_1$-fragment of $\Sigma$, $V[g]\models ``\Delta$ is universally Baire".
\item For all $V[g]$-generic $h$, letting $\Delta^h$ be the canonical extension of $\Delta$ to $V[g*h]$, $\Delta^h\restriction V[g]\subseteq \Sigma$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $\lambda > \eta$ be an inaccessible cardinal. Set $W=W_\lambda$, ${\mathcal{P} }={\mathcal{V}}|\delta$ and, given an iteration ${\mathcal{T}}$ of ${\mathcal{P} }$ of limit length and a cofinal well-founded branch $b$ of ${\mathcal{T}}$, set $\psi[{\mathcal{T}}, b]= \phi[{\mathcal{T}}^\frown \{b\}] \wedge \forall \alpha<lh({\mathcal{T}})\ \phi[{\mathcal{T}}\restriction \alpha+1]$.
Working in $V_\lambda[g]$, let $\Sigma$ be the strategy given by $\psi$. More precisely, let $\Sigma$ be defined as follows.
\begin{enumerate}
\item ${\mathcal{T}}\in dom(\Sigma)$ if and only if $lh({\mathcal{T}})$ is a limit ordinal and for every limit $\alpha<lh({\mathcal{T}})$, if $b=[0, \alpha)_{\mathcal{T}}$ then $V_\lambda[g]\models \psi[{\mathcal{T}}\restriction \alpha, b]$.
\item $\Sigma({\mathcal{T}})=b$ if and only if $V_\lambda[g]\models \psi[{\mathcal{T}}, b]$.
\end{enumerate}
The following is an immediate consequence of our definitions.
\begin{lemma}\label{capturing1} Suppose $X\prec (W |\Psi_\delta)$ is countable. Let $k\in V$ be $M_X$-generic. Suppose $({\mathcal{U}}, b)\in M_X[k]$ is such that $M_{X}[k]\models \psi[ {\mathcal{U}}, b]$. Then ${\mathcal{U}}\in dom(\Lambda_X)$ and $\Lambda_X({\mathcal{U}})=b$.
\end{lemma}
We now work towards showing that $\Sigma$ is a total strategy.
\begin{lemma}\label{step1} Suppose ${\mathcal{T}}\in dom(\Sigma)$. Then there is at most one branch $b$ such that $V_\lambda[g]\models \psi[{\mathcal{T}}, b]$.
\end{lemma}
\begin{proof} Towards a contradiction assume not. Let $X\prec (W|\Psi_\delta)$ be countable and $k\subseteq Coll(\omega, \eta_X)$ be $M_X$-generic with $k\in V$. Reflecting our assumption, fix ${\mathcal{U}}, b, c\in M_{X}[k]$ with $b\neq c$ such that $M_{X}[k]\models \psi[{\mathcal{U}}, b]\wedge \psi[{\mathcal{U}}, c]$. It follows from \rlem{capturing1} that $b=\Lambda_X({\mathcal{U}})=c$, a contradiction.
\end{proof}
\begin{lemma}\label{step2} Suppose ${\mathcal{T}}\in dom(\Sigma)$. Then there is a branch $b$ such that $V_\lambda[g]\models \psi[{\mathcal{T}}, b]$.
\end{lemma}
\begin{proof} Towards a contradiction assume not. Let $X\prec (W|\Psi_\delta)$ be countable and $k\subseteq Coll(\omega, \eta_X)$ be $M_X$-generic with $k\in V$. It follows that there is an iteration ${\mathcal{U}}\in M_X[k]$ of ${\mathcal{P} }_X$ such that\\\\
(a) for every $\alpha<lh({\mathcal{U}})$, letting $b_\alpha=[0, \alpha)_{\mathcal{U}}$, $M_X[k]\models \psi[{\mathcal{U}}\restriction \alpha, b_\alpha]$ but\\
(b) for no well-founded cofinal branch $b\in M_X[k]$ of ${\mathcal{U}}$, $M_X[k]\models \psi[{\mathcal{U}}, b]$. \\\\
It follows from (a) and \rlem{capturing1} that ${\mathcal{U}}\in dom(\Lambda_X)$. Hence, setting $b=\Lambda_X({\mathcal{U}})$, we have $b\in M_X[k]$ and $M_{X}[k]\models \phi[{\mathcal{U}}^\frown \{b\}]$. Together with (a), this gives $M_X[k]\models \psi[{\mathcal{U}}, b]$, contradicting (b).
\end{proof}
\begin{lemma}\label{lemma1 towards uB} Let $X\prec (W| \Psi_\delta)$ be countable and let $k\in V$ be $M_X$-generic for $Coll(\omega, \eta_X)$. Let $\Phi$ be the strategy of ${\mathcal{P} }_X$ defined by $\psi$ in $M_X[k]$. Then $\Lambda_X\restriction M_X[k]=\Phi$.
\end{lemma}
\begin{proof}
Suppose that ${\mathcal{T}}\in M_X[k]$ is according to both $\Lambda_X$ and $\Phi$. Set $b=\Phi({\mathcal{T}})$. Because $\Phi({\mathcal{T}})=b$ we have that $M_X[k]\models \psi[{\mathcal{T}}, b]$. Hence, by \rlem{capturing1}, $\Lambda_X({\mathcal{T}})=b$.
\end{proof}
\begin{corollary}\label{total strategy}
$V_\lambda[g]\models ``\Sigma$ is a total strategy extending $\Psi_{\delta}\restriction V_\lambda$".
\end{corollary}
\begin{proof}
\rlem{step1} and \rlem{step2} imply that $\Sigma$ is a total strategy. To show that it extends $\Psi_\delta\restriction V_\lambda$, we reflect. Let $X\prec (W|\Psi_\delta)$ be countable and let $k\subseteq Coll(\omega, \eta_X)$ be $M_X$-generic such that $k\in V$. Let $\Phi$ be the strategy of ${\mathcal{P} }_X$ defined by $\psi$ over $M_X[k]$. It follows from \rlem{lemma1 towards uB} that $\Phi=\Lambda_X\restriction (M_X[k])$. It follows from \rlem{simple capturing} that $\Lambda_X\restriction M_X=\pi_X^{-1}(\Psi_\delta)$. Hence, $\pi^{-1}_X(\Psi_\delta)\subseteq \Phi$, and by elementarity $\Psi_\delta\restriction V_\lambda\subseteq \Sigma$.
\end{proof}
We now work towards showing that $\Delta=_{def}\Sigma\restriction HC^{V[g]}$ is universally Baire. For this it is enough to show that $\psi$ is generically correct. More precisely, it is enough to show that in $V[g]$, for a club of $X\prec (W|\Psi_\delta)$ such that $V_\eta\cup \{\eta\}\subseteq X$, whenever $k\in V[g]$ is $M_X[g]$-generic and $({\mathcal{T}}, b)\in M_X[g][k]$,
\begin{center}
$M_X[g][k]\models \psi[{\mathcal{T}}, b] \mathrel{\leftrightarrow} V[g]\models \psi[{\mathcal{T}}, b]$.\footnote{See \cite[Lemma 4.1]{DMT} for a proof of the equivalence.}
\end{center}
Working in $V$, fix a countable $X\prec H_{\lambda^+}$ such that $W, \Psi_\delta\in X$. It is enough to show that our claim holds in $M_X$. Let $k\in V$ be $M_X$-generic for $Coll(\omega, \eta_X)$. Let $\Phi$ be the strategy defined by $\psi$ over $M_X[k]$ and $\Psi=\pi_X^{-1}(\Psi_\delta)$. Let $Y\prec (W_X| \Psi)$ be any countable substructure in $M_X[k]$ such that $V^{M_X}_{\eta_X}\cup \{\eta_X\} \subseteq Y$ and let $h\in M_X[k]$ be $M_Y[k]$-generic. Fix $({\mathcal{T}}, b)\in M_Y[k][h]$.
Suppose now that $M_Y[k][h]\models \psi[{\mathcal{T}}, b]$. Because $\pi_X[Y]\in V$ we have that ${\mathcal{T}}$ is according to $\Lambda_Y$ and $\Lambda_Y({\mathcal{T}})=b$. But because $\pi_Y\restriction \eta_X=id$, we have that $\Lambda_Y=\Lambda_X$. Therefore, ${\mathcal{T}}$ is according to $\Lambda_X$ and $\Lambda_X({\mathcal{T}})=b$. It follows from \rlem{lemma1 towards uB} that $\Phi({\mathcal{T}})=b$, i.e. $M_X[k] \models \psi[{\mathcal{T}},b]$. The reader can easily verify that these implications are reversible, and so if $\Phi({\mathcal{T}})=b$ then $M_Y[k][h]\models \psi[{\mathcal{T}}, b]$.
Finally, we need to verify that if $h$ is $V[g]$-generic for a poset of size $<\lambda$ then $\Delta^h\restriction V_\lambda[g]\subseteq \Sigma$. This again can be verified by first reflecting in $V$. Indeed, working in $V$, fix a countable $X\prec H_{\lambda^+}$ such that $W, \Psi_\delta\in X$. Let $(k, \Phi, \Psi)$ be as above. Let $\Gamma=\Phi\restriction HC^{M_X[k]}$. Let $h\in V$ be any $M_X[k]$-generic. We want to see that $\Gamma^h\restriction M_X[k]\subseteq \Phi$. To see this, let ${\mathcal{T}}\in M_X[k]$ be according to both $\Gamma^h$ and $\Phi$. Let $b=\Gamma^h({\mathcal{T}})$. It follows that $M_X[k][h]\models \psi[{\mathcal{T}}, b]$. Hence, ${\mathcal{T}}\in dom(\Lambda_X)$ and $\Lambda_X({\mathcal{T}})=b$. It follows from \rlem{lemma1 towards uB} that $\Phi({\mathcal{T}})=b$.
Thus far we have shown that \rthm{strategies can be extended} holds in $V_\lambda[g]$ for any inaccessible $\lambda>\eta$. Let $\Sigma_\lambda$ be the strategy defined above. To finish the proof of \rthm{strategies can be extended} it is enough to show that if $\lambda_0<\lambda_1$ are two inaccessible cardinals bigger than $\eta$ then $\Sigma_{\lambda_1}\restriction V_{\lambda_0}[g]=\Sigma_{\lambda_0}$. This can be verified by a reflection argument similar to the ones given above.
Indeed, let $X\prec H_{\lambda_1^+}$ be countable such that $W_{\lambda_0}, W_{\lambda_1} \in X$. Let $k\subseteq Coll(\omega, \eta_X)$ be $M_X$-generic such that $k\in V$. Let $\Phi_0$ and $\Phi_1$ be the versions of $\Sigma_{\lambda_0}$ and $\Sigma_{\lambda_1}$ in $M_X[k]$. It follows from \rlem{lemma1 towards uB} that for $i\in 2$, $\Phi_i=\Lambda_X\restriction M_{X\cap W_{\lambda_i}}[k]$. Therefore, $\Phi_0\subseteq \Phi_1$. This completes the proof of Theorem \ref{strategies can be extended}.
\end{proof}
We record a useful corollary to the proof of \rthm{strategies can be extended}. We let $\psi$ be the formula used in the proof of \rthm{strategies can be extended}. If $g, \Sigma$ are as in \rthm{strategies can be extended} and $k$ is $V[g]$-generic then we let $\Sigma^k$ be the extension of $\Sigma$ to $V[g][k]$.
\begin{corollary}\label{definable extension} Suppose $\delta, g, \Sigma$ are as in Theorem \ref{strategies can be extended}. Suppose $\lambda$ is an inaccessible cardinal and $k$ is $V[g]$-generic for a poset in $V_\lambda[g]$. Then $\Sigma^k\restriction V_\lambda[g][k]$ is defined via $\psi$. More precisely, the following conditions hold.
\begin{enumerate}
\item ${\mathcal{T}}\in dom(\Sigma^k)\cap V_\lambda[g*k]$ if and only if for every limit $\alpha<lh({\mathcal{T}})$, setting $b_\alpha=[0, \alpha)_{\mathcal{T}}$, $V_\lambda[g*k]\models \psi[{\mathcal{T}}\restriction \alpha, b_\alpha]$.
\item For ${\mathcal{T}}\in dom(\Sigma^k)\cap V_\lambda[g*k]$, $\Sigma^k({\mathcal{T}})=b$ if and only if $V_\lambda[g*k]\models \psi[{\mathcal{T}}, b]$.
\end{enumerate}
\end{corollary}
As the definition of $\Sigma$ uses only parameters from $V$, it follows that in all generic extensions $V[h]$ of $V$, $\Psi_\delta$ has an extension $\Psi_\delta^h$. For instance, we can define $\Psi^h_\delta({\mathcal{U}})$ by first selecting some inaccessible $\eta$ such that $h$ is generic for a poset in $V_\eta$ and ${\mathcal{U}}\in V_\eta[h]$, then picking a generic $g\subseteq Coll(\omega, \eta)$ such that $V[h]\subseteq V[g]$, and finally setting $\Psi^h_\delta({\mathcal{U}})=\Sigma({\mathcal{U}})$ where $\Sigma$ is as in \rthm{strategies can be extended}.
\section{Some correctness results}
Say $u=(\eta,\delta, \lambda)$ is a \textit{good triple} if it is increasing, $\delta$ is a Woodin cardinal, and $\lambda$ is an inaccessible cardinal. The assumption that $\delta$ is a Woodin cardinal will not be needed in this section, but it will be used extensively in subsequent sections. Fix a good triple $u=(\eta, \delta, \lambda)$ and set $\Phi=\Psi_\delta\restriction H_\lambda$. The goal of this section is to show that many Skolem hulls of $\Phi$ are computed correctly. The forcing posets in some of the main claims of this section will be in $V_\eta$. We start by showing that a stronger form of \rlem{simple capturing} holds.
\begin{lemma}\label{first step} Suppose $X\prec ((W_\lambda, u) | \Phi)$ is countable and $k\in V$ is $M_X$-generic. Then
\begin{center}
$\Phi_X^k\restriction (M_X[k])=\Lambda_X\restriction (M_X[k])$\footnote{Here $\Phi^k_X$ is the generic interpretation of $\Phi_X$ in $M_X[k]$ using the definition of $\Phi$ given in \rthm{strategies can be extended}.}.
\end{center}
\end{lemma}
\begin{proof} Fix ${\mathcal{T}}\in dom(\Phi_X^k)\cap dom(\Lambda_X)$ and set $\Phi_X^k({\mathcal{T}})=b$. It follows from \rcor{definable extension} that $M_X[k]\models \psi[{\mathcal{T}}, b]$. Therefore, $\Lambda_X({\mathcal{T}})=b$.
\end{proof}
The following is a straightforward corollary of \rlem{first step} and can be proven by a reflection argument like the one in the proof of Theorem \ref{strategies can be extended}.
\begin{corollary}\label{first step generically} Suppose $g$ is generic for a poset in $V_\eta$ and $X\prec ((W_\lambda, u) | \Phi^g)$ is countable in $V[g]$. Let $k\in V[g]$ be $M_X$-generic. Then
\begin{center}
$\Phi_X^k\restriction (M_X[k])=\Lambda_X\restriction (M_X[k])$.
\end{center}
\end{corollary}
\begin{corollary}\label{generic interpretability claim} Suppose $g$ is generic for a poset in $V_\eta$ and $i: {\mathcal{V}}\rightarrow {\mathcal{P} }$ is an iteration embedding via a normal iteration ${\mathcal{T}}$ of length $<\lambda$ that is based on ${\mathcal{V}}|\delta$ and is according to $\Phi$. Then $i(\Phi)=\Phi^g_{{\mathcal{P} }|i(\delta)}\restriction {\mathcal{P} }$. \footnote{Recall that $\Phi_{{\mathcal{P} }|i(\delta)} =_{\textrm{def}} \Phi_{{\mathcal{T}}, {\mathcal{P} }|i(\delta)}$ is the tail strategy of ${\mathcal{P} }|i(\delta)$ induced by $\Phi$.}
\end{corollary}
\begin{proof} It is enough to prove the claim in some $M_Z$ where $Z\prec ((H_{\lambda^{++}}, W_\lambda, u, \Phi)|\Phi)$ is countable. Let $h\in V$ be $M_Z$-generic for a poset in $M_Z|\eta_Z$, and let ${\mathcal{U}}\in M_Z|\lambda_Z[h]$ be a normal iteration of ${\mathcal{V}}_Z$ based on ${\mathcal{V}}_Z|\delta_Z$ according to $\Phi_Z$ with last model ${\mathcal{ Q}}$. We want to see that $\pi^{\mathcal{U}}(\Phi_Z)=(\Phi_Z)_{{\mathcal{ Q}}|\pi^{\mathcal{U}}(\delta_Z)}^h\restriction {\mathcal{ Q}}$.
Let ${\mathcal R}$ be the last model of $\pi_Z{\mathcal{U}}$ and $\sigma:{\mathcal{ Q}}\rightarrow {\mathcal R}$ come from the copying construction. It follows from \cite[Theorem 4.9.1]{Neeman} that $\sigma$ is generic over ${\mathcal R}$ and ${\mathcal R}[\sigma]\in V$. It then follows from \rcor{first step generically} that $\pi^{\mathcal{U}}(\Phi_Z)=(\pi^{\pi_Z{\mathcal{U}}}(\Phi))^\sigma$. It again follows from \rcor{first step generically} that $\Phi_Z^h=\Lambda_Z\restriction M_Z[h]$, and hence
\begin{center}
$(\Phi_Z)_{{\mathcal{ Q}}|\pi^{\mathcal{U}}(\delta_Z)}^h\restriction {\mathcal{ Q}}=(\Lambda_Z)_{{\mathcal{ Q}}|\pi^{\mathcal{U}}(\delta_Z)}\restriction {\mathcal{ Q}} = (\pi^{\pi_Z{\mathcal{U}}}(\Phi))^\sigma = \pi^{\mathcal{U}}(\Phi_Z)$.
\end{center}
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}[node distance=2.5cm, auto]
\node (A) {${\mathcal{V}}$};
\node (B) [right of=A] {${\mathcal{P} }$};
\node (C) [below of=A] {$M_X$};
\node (D) [right of=C] {${\mathcal{ Q}}$};
\draw[->] (A) to node {$i$}(B);
\draw[->] (C) to node {$\sigma$}(D);
\draw[->] (C) to node {$\pi_X$} (A);
\draw[->] (D) to node {$\tau$} (B);
\end{tikzpicture}
\caption{Corollary \ref{generic interpretability}}
\label{fig:comm_diag}
\end{figure}
\begin{corollary}[Figure \ref{fig:comm_diag}]\label{generic interpretability} Suppose $g$ is generic for a poset in $V_\eta$ and $i: {\mathcal{V}}\rightarrow {\mathcal{P} }$ is an iteration embedding via a normal iteration ${\mathcal{T}}$ of length $<\lambda$ that is based on ${\mathcal{V}}|\delta$ and is according to $\Phi$. Let $X\prec ((W_\lambda, u) | \Phi^g)$ be countable in $V[g]$ and let ${\mathcal{ Q}}\in HC^{V[g]}$ be such that there are embeddings $\sigma: M_X\rightarrow {\mathcal{ Q}}$ and $\tau:{\mathcal{ Q}}\rightarrow{\mathcal{P} }$ with the property that $i\circ \pi_X=\tau\circ \sigma$. Then for any ${\mathcal{ Q}}$-generic $k\in V[g]$,
\begin{center}
$(\sigma(\Phi_X))^k_{{\mathcal{ Q}}|\sigma(\delta_X)}=(\tau$-pullback of $\Phi^g_{{\mathcal{P} }|i(\delta)})\restriction {\mathcal{ Q}}[k]$.
\end{center}
\end{corollary}
\begin{proof} It is enough to prove the claim assuming $g$ is trivial. The more general claim will then follow by using the proof of \rcor{generic interpretability claim}. It follows from \cite[Corollary 4.9.2]{Neeman} that $\tau$ is generic over ${\mathcal{P} }$ and that ${\mathcal{P} }[\tau]$ is a definable class of $V$; here, to apply \cite[Corollary 4.9.2]{Neeman}, we need the fact that ${\mathcal{ Q}}$ is countable in $V[g]$. Applying \rcor{first step generically} and \rcor{generic interpretability claim} in ${\mathcal{P} }$, we get that
\begin{center}
$(\sigma(\Phi_X))^k_{{\mathcal{ Q}}|\sigma(\delta_X)}=(\tau$-pullback of $\Phi^g_{{\mathcal{P} }|i(\delta)})\restriction {\mathcal{ Q}}[k]$.
\end{center}
\end{proof}
\begin{corollary}\label{generic interpretability claim1} Suppose $i: {\mathcal{V}}\rightarrow {\mathcal{P} }$ is an iteration embedding via a normal iteration ${\mathcal{T}}$ of length $<\lambda$ that is based on ${\mathcal{V}}|\delta$ and is according to $\Phi$. Let $h\in V$ be ${\mathcal{P} }$-generic for a poset in ${\mathcal{P} }|\lambda$. Then $i(\Phi)^h=\Phi_{{\mathcal{P} }|i(\delta)}\restriction {\mathcal{P} }[h]$.
\end{corollary}
\begin{proof} It is enough to prove the claim in some $M_Z$ where $Z\prec ((H_{\lambda^{++}}, W_\lambda, u, \Phi)|\Phi)$. Let ${\mathcal{U}}\in M_Z$ be an iteration of length $< \lambda_Z$ on $M_Z$ based on $M_Z|\delta_Z$ and $j: M_Z\rightarrow{\mathcal{ Q}}$ be the iteration embedding. Let $G\in M_Z$ be ${\mathcal{ Q}}$-generic for a poset in ${\mathcal{ Q}}|\lambda_Z$. We want to see that $j(\Phi_Z)^{G}=(\Phi_Z)_{{\mathcal{ Q}}|j(\delta_Z)}\restriction {\mathcal{ Q}}[G]$. Let ${\mathcal{T}}=\pi_Z{\mathcal{U}}$, ${\mathcal{P} }$ the last model of ${\mathcal{T}}$, $k=\pi^{\mathcal{T}}$ and $\tau:{\mathcal{ Q}}\rightarrow {\mathcal{P} }$ be the copy map.
It follows from \rcor{generic interpretability} that $j(\Phi_Z)^G=(\tau$-pullback of $\Phi_{{\mathcal{P} }|k(\delta)})\restriction {\mathcal{ Q}}[G]$. But because $\Phi_Z=\Lambda_Z\restriction M_Z$ (see \rlem{first step}),
\begin{center}
$(\tau$-pullback of $\Phi_{{\mathcal{P} }|k(\delta)})\restriction {\mathcal{ Q}}[G]=(\Phi_Z)_{{\mathcal{ Q}}|j(\delta_Z)}\restriction {\mathcal{ Q}}[G]$.
\end{center}
Therefore,
\begin{center}
$j(\Phi_Z)^G = (\Phi_Z)_{{\mathcal{ Q}}|j(\delta_Z)}\restriction {\mathcal{ Q}}[G]$.
\end{center}
\end{proof}
Suppose $M$ is a transitive model of set theory and $\nu$ is its least strong cardinal. Suppose $M\models ``u=(\eta,\delta, \lambda)$ is a good triple" and suppose ${\mathcal{T}}$ is a normal iteration of $M$. We say ${\mathcal{T}}$ is a \textit{sealed} iteration if ${\mathcal{T}}={\mathcal{T}}_0^\frown\{E_0\}$ is such that
\begin{enumerate}
\item ${\mathcal{T}}_0$ is a normal iteration of $M$ of successor length based on $M|{\delta}$ with last model $N$,
\item ${\mathcal{T}}_0$ is above $\nu$ (this implies that $\delta > \nu$),
\item $E_0\in N$ is an extender such that ${\rm crit }(E_0)=\nu$, $lh(E_0)>\pi^{{\mathcal{T}}_0}(\delta)$,
\item $N$ has an inaccessible cardinal in the interval $(\pi^{{\mathcal{T}}_0}(\delta), lh(E_0))$.
\end{enumerate}
Clearly the last model of ${\mathcal{T}}$ is $Ult(M, E_0)$. We say that a normal iteration ${\mathcal{T}}$ is a \textit{stack of sealed iterations} if for some $n<\omega$, ${\mathcal{T}}=\oplus_{i\leq n}{\mathcal{T}}_i$ where each ${\mathcal{T}}_i$ is a sealed iteration of its first model.
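In summary, a sealed iteration ${\mathcal{T}}={\mathcal{T}}_0^\frown\{E_0\}$ can be pictured as follows (this merely restates clauses (1)-(4) above):
\begin{center}
$\pi^{{\mathcal{T}}_0}:M\rightarrow N$ based on $M|\delta$ and above $\nu$, \quad $E_0\in N$ with ${\rm crit }(E_0)=\nu$ and $lh(E_0)>\pi^{{\mathcal{T}}_0}(\delta)$, \quad $\pi^M_{E_0}: M\rightarrow Ult(M, E_0)$.
\end{center}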
\begin{corollary}\label{coherence} Suppose $u=(\eta, \delta, \lambda)$ is a good triple, $g$ is generic for a poset in $V_\eta$ and ${\mathcal{T}}\in V_\lambda[g]$ is a normal iteration of ${\mathcal{V}}$ that is a stack of sealed iterations and is according to $\Phi^g$ where $\Phi=\Psi_\delta$. Set ${\mathcal{T}}=\oplus_{i\leq n}{\mathcal{T}}_i$ and let ${\mathcal{P} }$ be the last model of ${\mathcal{T}}_{n-1}$ if $n>0$ and ${\mathcal{V}}$ otherwise. Let ${\mathcal{T}}_n=({\mathcal{U}}, E)$ and let ${\mathcal{ Q}}$ be the last model of ${\mathcal{U}}$. Set $\nu=\pi^{{\mathcal{U}}}(\pi^{\oplus_{i<n}{\mathcal{T}}_i}(\delta))$. Then $\Phi^g_{Ult({\mathcal{P} }, E)|\nu}=\Phi^g_{{\mathcal{ Q}}|\nu}$.
\end{corollary}
\begin{proof} We prove the claim in some $M_Z$ where $Z\prec ((H_{\lambda^+}, W_\lambda, u, \Phi)|\Phi)$ is countable. Let $h\in V$ be $M_Z$-generic for a poset in $M_Z|\eta_Z$ and let $({\mathcal{W} }, {\mathcal R}, {\mathcal{W} }_n, {\mathcal{S}}, {\mathcal{X}}, F, \xi)\in M_Z[h]$ play the role of $({\mathcal{T}}, {\mathcal{P} }, {\mathcal{T}}_n, {\mathcal{ Q}}, {\mathcal{U}}, E, \nu)$.
We will redefine the objects ${\mathcal{P} }$ etc. in the following; this will not cause any confusion as we have no more use for the original objects. Let ${\mathcal{P} }$ be the last model of the $\pi_Z$-copy of $\oplus_{i<n}{\mathcal{W} }_i$ and let $\sigma:{\mathcal R}\rightarrow {\mathcal{P} }$ be the copy map. We have that $\sigma$ is generic over ${\mathcal{P} }$ (see \cite[Corollary 4.9.2]{Neeman}) and ${\mathcal{P} }[\sigma]$ is a definable class of $V$. Let ${\mathcal{ Q}}$ be the last model of $\sigma{\mathcal{X}}$ and let $\tau_0:{\mathcal{S}}\rightarrow {\mathcal{ Q}}$ and $\tau_1:Ult({\mathcal R}, F)\rightarrow Ult({\mathcal{P} }, \tau_0(F))$ come from the copying construction. Notice that
\begin{center}
$\tau_0\restriction ({\mathcal{S}}|lh(F))=\tau_1\restriction ({\mathcal{S}}|lh(F))$.
\end{center}
We then let $\tau$ be this common embedding. Set $\tau_0(F)=E$ and $\nu=\tau_0(\xi)$. We have that $\tau_0$ and $\tau_1$ are generic over ${\mathcal{ Q}}$ and $Ult({\mathcal{P} }, E)$ respectively.
We now want to see that in $M_Z[h]$,
\begin{center}
$(\Phi_Z^h)_{Ult({\mathcal R}, F)|\xi}=(\Phi_Z^h)_{{\mathcal{S}}|\xi}$.
\end{center}
Notice that it follows from \rlem{first step} that $\Phi^h_Z=\Lambda_Z\restriction M_Z[h]$. Let $\Gamma_0=(\tau$-pullback of $\Phi_{{\mathcal{ Q}}|\nu})$ and $\Gamma_1=(\tau$-pullback of $\Phi_{Ult({\mathcal{P} }, E)|\nu})$. It follows that \\\\
(0) $\Gamma_0\restriction M_Z[h]=(\Phi_Z^h)_{{\mathcal{S}}|\xi}$ and $\Gamma_1\restriction M_Z[h]=(\Phi_Z^h)_{Ult({\mathcal R}, F)|\xi}$.\\\\
Let $i:{\mathcal{V}}\rightarrow {\mathcal{ Q}}$ and $j: {\mathcal{V}}\rightarrow Ult({\mathcal{P} }, E)$ be the iteration maps. It follows from \rcor{generic interpretability claim} that\\\\
(1) $\Phi_{{\mathcal{ Q}}|\nu}\restriction {\mathcal{ Q}}=i(\Phi)_{{\mathcal{ Q}}|\nu}$ and $\Phi_{Ult({\mathcal{P} }, E)|\nu}=j(\Phi)_{Ult({\mathcal{P} },E)|\nu}$.\\\\
Because $Ult({\mathcal{P} }, E)|lh(E)={\mathcal{ Q}}|lh(E)$, $lh(E)>\nu$ is an inaccessible cardinal in ${\mathcal{ Q}}$, and ${\mathcal{ Q}}|lh(E)\models \sf{gUBH}$, we have that\\\\
(2) $i(\Phi)_{{\mathcal{ Q}}|\nu}\restriction ({\mathcal{ Q}}|lh(E))=j(\Phi)_{{\mathcal{ Q}}|\nu}\restriction ({\mathcal{ Q}}|lh(E))=_{def}\Sigma$\\\\
implying, by way of (1), that\\\\
(3) $\Phi_{{\mathcal{ Q}}|\nu}\restriction ({\mathcal{ Q}}|lh(E))=\Phi_{Ult({\mathcal{P} }, E)|\nu}\restriction ({\mathcal{ Q}}|lh(E))$.\\\\
Using \cite[Corollary 4.9.2]{Neeman} we can find $H\in V$ that is ${\mathcal{ Q}}$-generic for a poset in ${\mathcal{ Q}}|\nu$ and is such that $\tau_0\in {\mathcal{ Q}}[H]$. It now follows that $\tau\in Ult({\mathcal{P} }, E)[H]$ as $\tau\in {\mathcal{ Q}}|lh(E)[H]$. We now have that\\\\
(4) $(\Sigma^H)^{Ult({\mathcal{P} }, E)[H]}\restriction ({\mathcal{ Q}}|lh(E)[H])=(\Sigma^H)^{{\mathcal{ Q}}[H]}\restriction ({\mathcal{ Q}}|lh(E)[H])$.\\\\
Applying \rcor{generic interpretability claim1} to (4) we get that\\\\
(5) $\Phi_{{\mathcal{ Q}}|\nu}\restriction ({\mathcal{ Q}}|lh(E)[H])=\Phi_{Ult({\mathcal{P} }, E)|\nu}\restriction ({\mathcal{ Q}}|lh(E)[H])$.\\\\
It follows from (5) that \\\\
(6) $\Gamma_0\restriction M_Z[h]$ and $\Gamma_1\restriction M_Z[h]$ are equal.\\\\
(6) then implies, by way of (0), that $(\Phi_Z^h)_{Ult({\mathcal R}, F)|\xi}=(\Phi_Z^h)_{{\mathcal{S}}|\xi}$.
\end{proof}
\section{Capturing universally Baire sets}\label{sec: cap ub}
The following is a useful corollary of \rthm{strategies can be extended}. We say that a pair of trees $T,S$ are \textit{$\delta$-absolutely complementing} if for any poset $\mathbb{P}$ of size $\leq \delta$, for any generic $g\subseteq \mathbb{P}$, $V[g]\models ``p[T]={\mathbb{R}}-p[S]"$. Similarly, we say that $T,S$ are \textit{$<\delta$-absolutely complementing} if for any poset $\mathbb{P}$ of size $< \delta$, for any generic $g\subseteq \mathbb{P}$, $V[g]\models ``p[T]={\mathbb{R}}-p[S]"$. Given a limit of Woodin cardinals $\nu$ and $g\subseteq Coll(\omega, <\nu)$, let
\begin{enumerate}
\item ${\mathbb{R}}^*_g=\bigcup_{\alpha<\nu}{\mathbb{R}}^{V[g\cap Coll(\omega, \alpha)]}$,
\item $\Delta_g$ be the set of sets of reals $A\in V({\mathbb{R}}^*_g)$ such that for some $\alpha<\nu$, there is a pair $(T, S)\in V[g\cap Coll(\omega, \alpha)]$ such that $V[g\cap Coll(\omega, \alpha)]\models ``(T, S)$ are $<\nu$-absolutely complementing trees" and $p[T]^{V({\mathbb{R}}^*_g)}=A$, and
\item $DM(g)=L(\Delta_g, {\mathbb{R}}^*_g)$.
\end{enumerate}
The following is immediate from results of the previous sections.
\begin{corollary}\label{strategy in der model} Suppose $\nu$ is a limit of Woodin cardinals. Let $\delta<\nu$ be a Woodin cardinal, and let $g\subseteq Coll(\omega, <\nu)$ be $V$-generic. Then $\Psi^g_\delta\in DM(g)$.
\end{corollary}
We next need a characterization of universally Baire sets via strategies. We show this in \rlem{ub to strategy}. The lemma is standard.
If $\nu$ is a Woodin cardinal we let $\sf{EA}_\nu$ be the $\omega$-generator version of the extender algebra associated with $\nu$ (see e.g. \cite{steel2010outline} for a detailed discussion of Woodin's extender algebras). We say the triple $(M, \delta, \Phi)$ \textit{Suslin, co-Suslin captures}\footnote{This notion is probably due to Steel, see \cite{DMATM}.} the set of reals $B$ if there is a pair $(T, S)\in M$ such that $M\models ``(T, S)$ are $\delta$-absolutely complementing" and
\begin{enumerate}
\item $M$ is a countable transitive model of some fragment of $\sf{ZFC}$,
\item $\Phi$ is an $\omega_1$-strategy for $M$,
\item $M\models ``\delta$ is a Woodin cardinal",
\item for $x\in {\mathbb{R}}$, $x\in B$ if and only if there is an iteration ${\mathcal{T}}$ of $M$ according to $\Phi$ with last model $N$ such that $x$ is generic over $N$ for $\textsf{EA}_{\pi^{\mathcal{T}}(\delta)}^{N}$ and $x\in p[\pi^{\mathcal{T}}(T)]$.
\end{enumerate}
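Let us record a standard observation about this definition: if ${\mathcal{T}}$ and $N$ are as in clause (4) and $x$ is ${\sf{EA}}^N_{\pi^{\mathcal{T}}(\delta)}$-generic over $N$, then, because $N\models ``(\pi^{\mathcal{T}}(T), \pi^{\mathcal{T}}(S))$ are $\pi^{\mathcal{T}}(\delta)$-absolutely complementing" and ${\sf{EA}}^N_{\pi^{\mathcal{T}}(\delta)}$ has size $\pi^{\mathcal{T}}(\delta)$ in $N$,
\begin{center}
$x\in p[\pi^{\mathcal{T}}(T)]\mathrel{\leftrightarrow} x\not\in p[\pi^{\mathcal{T}}(S)]$
\end{center}
(the forward direction uses the absoluteness of the statement $p[\pi^{\mathcal{T}}(T)]\cap p[\pi^{\mathcal{T}}(S)]=\emptyset$ between $N[x]$ and $V$).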
The next lemma is standard and originates in \cite{IT}.
\begin{lemma}\label{realizability} Suppose $u=(\eta, \delta, \lambda)$ is a good triple and $g$ is $V$-generic for a poset in $V_\eta$. Suppose $X\prec (W_\lambda[g]|\Psi_{\eta, \delta}^g)$ is countable in $V[g]$. Then whenever ${\mathcal{T}}$ is a countable iteration of $M_X$ according to $\Lambda_X$ with last model $N$, there is $\sigma:N\rightarrow W_\lambda[g]$ such that $\pi_X=\sigma\circ \pi^{\mathcal{T}}$.
\end{lemma}
\begin{proof} Let ${\mathcal{P} } = W_\lambda[g]$. Let ${\mathcal{U}} =_{def} \pi_X{\mathcal{T}}$ be the copy of ${\mathcal{T}}$, considered as a tree on $V[g]$. Let $W$ be the last model of ${\mathcal{U}}$. There is then $\tau:N\rightarrow \pi^{{\mathcal{U}}}({\mathcal{P} })$ such that $\pi^{\mathcal{U}}\circ \pi_X=\tau\circ \pi^{\mathcal{T}}$. It follows by absoluteness, noting that $N\in W$ is countable and $\pi^{\mathcal{U}}({\mathcal{P} })\in W$, that there is $m:N\rightarrow \pi^{\mathcal{U}}({\mathcal{P} })$ with $m\in W$ such that $\pi^{\mathcal{U}}(\pi_X)=m\circ \pi^{\mathcal{T}}$. The existence of $\sigma$ follows from elementarity.
\end{proof}
The next lemma is also standard, but we do not know its origin. To state it we need to introduce some notation. Suppose $M$ is a countable transitive model of set theory and $\Phi$ is a strategy of $M$. Let $(\eta, g)$ be such that $g$ is $M$-generic for a poset in $M|\eta$. Let $\Phi'$ be the fragment of $\Phi$ that acts on iterations that are above $\eta$. Then $\Phi'$ can be viewed as an iteration strategy of $M[g]$. This is because if ${\mathcal{T}}$ is an iteration of $M[g]$ above $\eta$, then there is an iteration ${\mathcal{U}}$ of $M$ that is above $\eta$ and such that
\begin{enumerate}
\item $lh({\mathcal{T}})=lh({\mathcal{U}})$,
\item ${\mathcal{T}}$ and ${\mathcal{U}}$ have the same tree structure,
\item for each $\alpha<lh({\mathcal{T}})$, $M^{\mathcal{T}}_\alpha=M^{\mathcal{U}}_\alpha[g]$,
\item for each $\alpha<lh({\mathcal{T}})$, $E_\alpha^{\mathcal{T}}$ is the extension of $E_\alpha^{\mathcal{U}}$ onto $M_\alpha^{\mathcal{U}}[g]$.
\end{enumerate}
Let $\Phi''$ be the strategy of $M[g]$ with the above properties. We then say that $\Phi''$ is induced by $\Phi'$. We will often confuse $\Phi''$ with $\Phi'$.
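In other words, the clauses above set up a correspondence ${\mathcal{T}}\mapsto {\mathcal{U}}$ under which
\begin{center}
$M^{\mathcal{T}}_\alpha=M^{\mathcal{U}}_\alpha[g]$ and $E_\alpha^{\mathcal{T}}$ is the extension of $E_\alpha^{\mathcal{U}}$ onto $M_\alpha^{\mathcal{U}}[g]$, for all $\alpha<lh({\mathcal{T}})$,
\end{center}
and $\Phi''$ is given by $\Phi''({\mathcal{T}})=\Phi'({\mathcal{U}})$.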
\begin{corollary}\label{strategy for the generic extension} Suppose $(\eta, \delta, \lambda)$ is a good triple, $g$ is generic for a poset of size $<\eta$ and $h\subseteq Coll(\omega, \lambda)$ is generic over $V$ such that $V[g]\subseteq V[h]$. Let $\Sigma$ be as in \rthm{strategies can be extended} applied to $h$ and $\Psi_\delta$, and let $\Phi$ be the fragment of $\Sigma\restriction V[g]$ that acts on iterations that are above $\eta$. Then $\Phi$ induces a strategy $\Phi'$ for ${\mathcal{V}}|\delta[g]$, and $\Phi'$ is projective in $\Phi$. \footnote{This just means in $V[h]$, $\Phi'\restriction HC$ is definable over the structure $(HC, \in, \Phi\restriction HC)$ perhaps with parameters in $HC$.}
\end{corollary}
We can now state our lemma.
\begin{lemma}\label{ub to strategy} Suppose $u=(\eta, \delta, \lambda)$ is a good triple and $g$ is $V$-generic for a poset in $V_\eta$. Let $A\in \Gamma^\infty_g$. Then, in $V[g]$, there is a club of countable $X\prec (W_\lambda[g]| \Psi_{\eta, \delta}^g)$ such that $(M_X, \delta_X, \Lambda^g_X)$ Suslin, co-Suslin captures $A$.\footnote{To conform with the above setup, we tacitly assume $\Lambda^g_X$ to be the iteration strategy acting on trees above $\eta_X$.} For each such $X$, let $X' = X\cap W_\lambda \prec W_\lambda$, and $(M_{X'}, \Lambda_{X'})$ be the transitive collapse of $X'$ and its strategy. Then $A$ is projective in $\Lambda_{X'}$. Moreover, these facts remain true in any further generic extension by a poset in $V_\eta[g]$.
\end{lemma}
\begin{proof}
Let ${\mathcal{P} } = W_\lambda[g]$. Work in $V[g]$. Let $(T, S)$ be $\lambda$-absolutely complementing trees such that $A=p[T]$. Let $X\prec W_\lambda[g]$ be countable such that $(T, S)\in X$. We claim that $(M_X, \delta_X, \Lambda^g_X)$ Suslin, co-Suslin captures $A$. Let $\delta = \delta_X$. To see this, fix a real $x$. Let ${\mathcal{T}}$ be any countable normal iteration of $M_X$ such that
\begin{enumerate}
\item ${\mathcal{T}}$ is according to $\Lambda^g_X$,
\item ${\mathcal{T}}$ has a last model $N$,
\item $x$ is generic for ${\sf{EA}}_{\pi^{\mathcal{T}}(\delta)}^N$.
\end{enumerate}
Using \rlem{realizability}, we can find $\sigma:N\rightarrow {\mathcal{P} }$ such that $\pi_X=\sigma\circ \pi^{\mathcal{T}}$.
Assume first $x\in A$. Then $x\in p[T]$. If now $x\not \in p[\pi^{\mathcal{T}}(T_X)]$ then $x\in p[\pi^{\mathcal{T}}(S_X)]$ (this uses the fact that $T_X,S_X$ are $\lambda_X$-absolutely complementing in $M_X$) and hence $x\in p[S]$ (this follows from the fact that $\sigma[\pi^{\mathcal{T}}(S_X)]\subseteq S$), contradicting $x\in p[T]$. Thus, $x\in p[\pi^{\mathcal{T}}(T_X)]$.
Next suppose $x \in p[\pi^{\mathcal{T}}(T_X)]$. Then because $\sigma[\pi^{\mathcal{T}}(T_X)]\subseteq T$, $x\in p[T]$ implying that $x\in A$.
That $\Lambda^g_X$ is projective in $\Lambda_{X'}$ follows from \rcor{strategy for the generic extension}; hence $A$ is projective in $\Lambda_{X'}$. We leave it to the reader to verify that these facts remain true in a further generic extension by a poset in $V_\eta[g]$.
\end{proof}
\section{A derived model representation of $\Gamma^\infty$}\label{sec:der model}
In this section our goal is to establish a derived model representation of $\Gamma^\infty$. We set $\iota=\kappa^+$ and fix $g\subseteq Coll(\omega, \iota)$.
We say $u=(\eta, \delta, \delta', \lambda)$ is a \textit{good quadruple} if $(\eta, \delta, \lambda)$ and $(\eta, \delta', \lambda)$ are good triples with $\delta<\delta'$. Suppose $u=(\eta, \delta, \delta', \lambda)$ is a good quadruple and $h$ is $V[g]$-generic such that $g*h$ is generic for a poset in $V_\eta$. Working in $V[g*h]$, let $D(h, \eta, \delta, \lambda)$ be the club of countable
\begin{center}
$X\prec ((W_\lambda[g*h], u)| \Psi^g_{\eta, \delta})$
\end{center}
such that $H_{\iota}^V\cup\{g\}\subseteq X$.
Suppose $A\in \Gamma^\infty_{g*h}$. Then for a club of $X\in D(h, \eta, \delta, \lambda)$, $A$ is Suslin, co-Suslin captured by $(M_X, \delta_X, \Lambda^{g*h}_X)$ and $A$ is projective in $\Lambda_{X'}$ where $X' =X\cap W_\lambda$ (see \rlem{ub to strategy}). Given such an $X$, we say $X$ \textit{captures} $A$.
Let $k\subseteq Coll(\omega, \Gamma^\infty_{g*h})$ be generic, and let $(A_i:i<\omega),(w_i: i<\omega)$ be generic enumerations of $\Gamma^\infty_{g*h}$ and ${\mathbb{R}}_{g*h}$ respectively in $V[g*h*k]$. Let $(X_i: i<\omega)\in V[g*h*k]$ be such that for each $i$
\begin{enumerate}
\item $X_i \in D(h,\eta,\delta,\lambda)$, and
\item $X_i$ captures $A_i$.
\end{enumerate}
In particular, $A_i$ is projective in $\Lambda_{X'_i}$, where $X'_i = X_i\cap W_\lambda$.
We set $M^0_n=M_{X'_n}$, $\pi^0_n=\pi_{X'_n}$, $\kappa_0=\kappa_{X_0}$, $\nu_0=\delta_{X_0}$, $\nu_0'=\delta_{X_0}'$, $\eta_0=\eta_{X_0}$, $\delta_0=\delta$, and ${\mathcal{P} }_0={\mathcal{V}}$.
Next we inductively define sequences $(M^i_n: i, n<\omega)$, $(\pi^i_n: i, n<\omega)$, $(\Lambda_i: i\leq \omega)$, $(\tau^{i, i+1}_n: i, n<\omega)$, $(\nu_n: n<\omega)$, $(\nu_n': n<\omega)$, $(\eta_n: n<\omega)$, $(\kappa_i:i<\omega)$, $(\theta_i: i<\omega)$, $({\mathcal{T}}_i, E_i: i<\omega)$, $(M_i': i<\omega)$, $({\mathcal{U}}_i, F_i: i<\omega)$, $({\mathcal{P} }_i: i\leq \omega)$, $({\mathcal{P} }_i': i<\omega)$, and $(\sigma_i: i<\omega)$ satisfying the following conditions (see Diagram \ref{Full diagram}).
\begin{enumerate}[(a)]
\item For all $i, n<\omega$, $\pi^i_n: M^i_n\rightarrow {\mathcal{P} }_i$ and $rng(\pi^i_n)\subseteq rng(\pi^i_{n+1})$.
\item $\tau^{i, i+1}_n: M^i_n\rightarrow M^{i+1}_n$. Let $\tau_n: M^0_n\rightarrow M^n_n$ be the composition of $\tau^{j, j+1}_n$'s for $j<n$.
\item For all $n<\omega$, $\kappa_n=\tau_n(\kappa_0)$, $\eta_n=\tau_n(\eta_{0})$, $\nu_n=\tau_n(\nu_0)$ and $\nu_n'=\tau_n(\nu_0')$.
\item For all $n<\omega$, ${\mathcal{T}}_n$ is an iteration of $M^n_n|\nu_n'$ above $\nu_n$ that makes $w_n$ generic and $M_n'$ is its last model.
\item $\theta_n=\pi^{{\mathcal{T}}_n}(\nu_n')$ and $E_n\in \vec{E}^{M_n'}$ is such that $lh(E_n)>\theta_n$ and ${\rm crit }(E_n)=\kappa_n$.
\item for all $m,n$, $M^{n+1}_m=Ult(M^n_m, E_n)$ and $\tau_m^{n, n+1}=\pi_{E_n}^{M^n_m}$.
\item ${\mathcal{U}}_n=\pi^n_n{\mathcal{T}}_n$, ${\mathcal{P} }_n'$ is the last model of ${\mathcal{U}}_n$, $\sigma_n: M_n'\rightarrow {\mathcal{P} }_n'$ is the copy map and $F_n=\sigma_n(E_n)$.\footnote{So $\oplus_{i\leq n}{\mathcal{T}}_i$ and $\oplus_{i\leq n} {\mathcal{U}}_i$ are stacks of sealed iterations based on $\kappa$.}
\item ${\mathcal{P} }_{n+1}=Ult({\mathcal{P} }_n, F_n)$ and $\pi^{n+1}_m:M^{n+1}_m\rightarrow {\mathcal{P} }_{n+1}$ is given by $\pi^{n+1}_m(\pi_{E_n}^{M^n_m}(f)(a))=\pi_{F_n}^{{\mathcal{P} }_n}(\pi^n_m(f))(\sigma_n(a))$.
\item $\Lambda_n=(\pi^n_n$-pullback of $(\Psi^{g*h}_\lambda)_{{\mathcal{P} }_n|\pi^n_n(\nu_n)})_{\eta_n, \nu_n}=(\sigma_n$-pullback of $(\Psi^{g*h}_\lambda)_{{\mathcal{P} }_n'|\sigma_n(\nu_n)})_{\eta_n, \nu_n}$ (see \rcor{coherence}).
\end{enumerate}
\begin{figure}
\centering
\resizebox{0.85\textheight}{!}{
\begin{tikzpicture}[align=center, node distance= 3cm, auto]
\node (A) {${\mathcal{P} }_0$};
\node (B) [right of=A] {${\mathcal{P} }^{'}_{0},F_0$};
\draw[->] (A) to node {${\mathcal{U}}_0$} (B);
\node (C) [right of=B] {${\mathcal{P} }_1$};
\draw[->, bend left=33] (A) to node {$F_0$} (C);
\node (D) [right of = C] {${\mathcal{P} }^{'}_{1}, F_1$};
\draw[->] (C) to node {${\mathcal{U}}_1$} (D);
\node (E) [right of=D] {${\mathcal{P} }_2$};
\draw [->, bend left=33] (C) to node {$F_1$} (E);
\node (F) [right of=E] {${\mathcal{P} }^{'}_{2}, F_2$};
\draw[->] (E) to node {${\mathcal{U}}_2$} (F);
\node (G) [right of=F] {};
\node (H) [right of=F]{};
\node (K) [right of=H]{${\mathcal{P} }_\omega$};
\path (H) -- (K) node [font=\Huge, midway, sloped] {$\dots$};
\node (AA) [below of=A]{$M^0_0$};
\node (AB) [right of=AA]{$M^{'}_{0}, E_0$};
\node (AC) [right of=AB]{$M^1_0$};
\node (AD) [below of=D]{};
\node (AE) [below of=E]{$M^2_0$};
\node (AF) [right of=AE]{};
\node (AG) [right of=AF]{$M^3_0$};
\node (AH) [right of=AF]{};
\node (AK) [below of=K]{$M^\omega_0$};
\path (AH)--(AK) node [font=\Huge, midway, sloped]{$\dots$};
\draw[->] (AA) to node {${\mathcal{T}}_0$} (AB);
\draw[->] (AA) to node {$\pi^0_0$} (A);
\draw[->] (AB) to node {$\sigma_0$}(B);
\draw[->, bend right=25] (AA) to node {$E_0$}(AC);
\draw[->] (AC) to node {$E_1$}(AE);
\node (BA) [below of=AA]{$M^0_1$};
\node (BB) [below of=AB]{};
\node (BC) [right of=BB]{$M_1^1$};
\node (BD) [right of=BC]{$M^{'}_{1}, E_1$};
\node (BE) [right of=BD]{$M^2_1$};
\node (BF) [right of=BE]{};
\node (BG) [below of=AG]{$M^3_1$};
\node (BH) [right of=BF]{};
\node (BK) [right of=BH]{$M^\omega_1$};
\path (BH)--(BK) node [font=\Huge, midway, sloped]{$\dots$};
\draw[->] (BA) to node {$E_0, \tau^{0,1}_1=\tau_1$} (BC);
\draw[->] (BC) to node {${\mathcal{T}}_1$} (BD);
\draw[->, bend left=33] (BA) to node {$\pi^0_1$}(A);
\draw[->, bend left=20] (BC) to node {$\pi^1_1$}(C);
\draw[->] (AA) to node {} (BA);
\draw[->] (AC) to node {} (BC);
\draw[->, bend left=20] (BD) to node {$\sigma_1$} (D);
\draw[->, bend right=25] (BC) to node {$E_1$}(BE);
\draw[->] (AE) to node {}(BE);
\node (CA) [below of=BA]{$M^0_2$};
\node (CB) [below of=BB]{};
\node (CC) [below of=BC]{$M^{1}_{2}$};
\node (CD) [right of=CC]{};
\node (CE) [right of=CD]{$M^2_2$};
\node (CF) [right of=CE]{$M^{'}_{2}, E_2$};
\node (CG) [right of=CF]{$M^3_2$};
\node (CH) [right of=CF]{};
\node (CK)[below of=BK]{$M^\omega_2$};
\draw[->] (BA) to node {}(CA);
\draw[->, bend left=45] (CA) to node {$\pi^0_2$}(A);
\draw[->] (CA) to node {$E_0, \tau^{0,1}_2$}(CC);
\draw[->] (CC) to node {$E_1, \tau^{1,2}_2$}(CE);
\draw[->] (CE) to node {${\mathcal{T}}_2$}(CF);
\draw[->, bend left=25] (CE) to node {$\pi^2_2$}(E);
\draw[->, bend left=25] (CF) to node {$\sigma_2$}(F);
\draw[->,bend right=25] (CE) to node {$E_2$}(CG);
\draw[->] (BC) to node {}(CC);
\draw[->] (BE) to node {}(CE);
\draw[->] (BE) to node {$E_2$}(BG);
\draw[->] (AE) to node {$E_2$}(AG);
\draw[->] (AG) to node {}(BG);
\draw[->] (BG) to node {}(CG);
\path (CH)--(CK) node [font=\Huge, midway, sloped]{$\dots$};
\node (DA) [below of=CA]{$M^0_n$};
\node (DB) [right of=DA]{};
\node (DC) [right of=DB]{$M^1_n$};
\node (DD) [right of=DC]{};
\node (DE) [right of=DD]{$M^2_n$};
\node (DF) [right of=DE]{};
\node (DG) [right of=DF]{$M^3_n$};
\node (DH) [right of=DF]{};
\node (DK) [right of=DH]{$M^\omega_n$};
\path (CA) -- (DA) node [font=\Huge, midway, sloped] {$\dots$};
\draw[->] (DA) to node {$E_0, \tau^{0,1}_n$}(DC);
\draw[->] (DC) to node {$E_1,\tau^{1,2}_n$}(DE);
\draw[->] (DE) to node {$E_2, \tau^{2,3}_n$}(DG);
\path (DH) -- (DK) node [font=\Huge, midway, sloped] {$\dots$};
\draw[->, bend right=33] (DK) to node {$\pi^\omega_n$}(K);
\node (EA) [below of=DA]{};
\node (EG) [below of=DG]{};
\path (DA) -- (EA) node [font=\Huge, midway, sloped] {$\dots$};
\path (DG) -- (EG) node [font=\Huge, midway, sloped] {$\dots$};
\path (CG) -- (DG) node [font=\Huge, midway, sloped] {$\dots$};
\path (CK) -- (DK) node [font=\Huge, midway, sloped] {$\dots$};
\draw[->, bend left=55] (DA) to node {$\pi^0_n$}(A);
\end{tikzpicture}
}
\caption{Diagram of the main argument}
\label{Full diagram}
\end{figure}
Let $M^{\omega}_{n}$ be the direct limit of $(M^m_n: m<\omega)$ under the maps $\tau^{m, m+1}_n$. Letting ${\mathcal{P} }_\omega$ be the direct limit of $({\mathcal{P} }_n: n<\omega)$ and the compositions of $\pi_{F_n}^{{\mathcal{P} }_n}$, we have natural maps $\pi^\omega_n:M^\omega_n\rightarrow {\mathcal{P} }_\omega$. Notice that\\\\
(1) for each $n<\omega$, $\kappa_n<\omega_1^{V[g*h]}$ and $\sup_n\kappa_n=\omega_1^{V[g*h]}$.\\\\
It follows that if $\tau^m_n: M_n^m\rightarrow M^\omega_n$ is the direct limit embedding then\\\\
(2) $\tau^m_n(\kappa_n)=\omega_1^{V[g*h]}$. \\\\
Next, notice that\\\\
(3) for each $m, n, p$, letting $\iota_n=\tau_n(\iota_{X_0})=\tau_n(\iota)$, $M^n_m|\iota_n=M^n_p|\iota_n$ and $\iota_n=(\kappa_n^+)^{M^n_m}$.\\
(4) for each $m,n, p$, $\pi^n_m\restriction (M^n_m|\iota_n)=\pi^n_p\restriction (M^n_p|\iota_n)$\\
(5) for each $m$, $n>1$ and $p>n$, $M^n_m|\theta_{n-1}=M^p_m|\theta_{n-1}$.\\
(6) for each $m$, $n>1$ and $p$ with $p>n$, $\pi^n_m\restriction (M^n_m|\theta_{n-1})=\pi^p_m\restriction (M^p_m|\theta_{n-1})$.\\\\
Because of condition (d) above we can find $G\subseteq Coll(\omega, <\omega_1^{V[g*h]})$ generic over $M^\omega_0$ such that ${\mathbb{R}}^{M^\omega_0[G]}={\mathbb{R}}_{g*h}$ and $G\in V[g*h*k]$. By construction, $\omega_1^{V[g*h]}$ is a limit of Woodin cardinals in $M^\omega_0$. It then follows from the results of \rsec{sec: gen-it} and \rsec{sec: cap ub} that
\begin{lemma}\label{der model rep} $DM(G)^{M^\omega_0[G]}=L(\Gamma^\infty_{g*h}, {\mathbb{R}}_{g*h})$.
\end{lemma}
\begin{proof} It follows from \rcor{coherence} and \rlem{ub to strategy} that $A_n$ is projective in $\Lambda_n$. It follows from Corollary \ref{generic interpretability} that $\Lambda_n\restriction HC^{V[g*h]} \in M^\omega_0[G]$ and it follows from \rcor{strategy in der model} that $\Lambda_n\restriction HC^{V[g*h]}\in DM(G)^{M^\omega_0[G]}$. It follows that $\Gamma^\infty_{g*h}\subseteq DM(G)^{M^\omega_0[G]}$.
Moreover, it follows from \rcor{ub to strategy} that any set in $DM(G)^{M^\omega_0[G]}$ is projective in some $\Lambda_n\restriction HC^{V[g*h]}$ and it follows from \rthm{strategies can be extended} that $\Lambda_n\restriction HC^{V[g*h]}\in \Gamma^\infty_{g*h}$. Thus, $DM(G)^{M^\omega_0[G]}\subseteq L(\Gamma^\infty_{g*h}, {\mathbb{R}}_{g*h})$.
\end{proof}
We can also show variations of the above lemma for $M^\omega_n$ for each $n<\omega$. \rlem{der model rep} implies that in order to prove that $\sf{Sealing}$ holds, it is enough to establish clause 2 of $\sf{Sealing}$ as clause 1 immediately follows from \rlem{der model rep} and standard results about derived models (see \cite{DMT}).
To continue, it will be easier to introduce some terminology. We say that the sequence $(X_i: i<\omega)$ is cofinal in $\Gamma^\infty_{g*h}$ as witnessed by $(A_i: i\in \omega)$ and $(w_i: i<\omega)$. We also say that $(M^n_0, \Lambda_n, \theta_n, \tau_{n, m}: n<m<\omega)$ is a $\Gamma^\infty_{g*h}$-genericity iteration induced by $(X_i: i<\omega)$ where $\tau_{n, m}: M^n_0\rightarrow M^m_0$ is the composition of $\tau^{i, i+1}_0$ for $i\in [n, m)$.
\section{A proof of \rthm{main theorem}}\label{sec:main}
We now put together the results of the previous sections to obtain a proof of \rthm{main theorem}. Fix $h$ and $h'$ such that $h$ is $V[g]$-generic and $h'$ is $V[g*h]$-generic. We have shown that clause (1) of $\sf{Sealing}$ holds in $V[g]$. We now show that clause (2) of $\sf{Sealing}$ holds in $V[g]$. We want to show that there is an embedding
\begin{center}
$j: L(\Gamma_{g*h}, {\mathbb{R}}_{g*h})\rightarrow L(\Gamma_{g*h*h'}, {\mathbb{R}}_{g*h*h'})$
\end{center}
such that for $A\in \Gamma_{g*h}$, $j(A)=A^{h'}$. Let $(\xi_i: i<\omega)$ be an increasing sequence of cardinals such that $g*h*h'$ is generic for a poset in $V_{\xi_0}$. Let $u_n=(\xi_i: i< n)$. Set $W=L(\Gamma_{g*h}, {\mathbb{R}}_{g*h})$ and $W'=L(\Gamma_{g*h*h'}, {\mathbb{R}}_{g*h*h'})$.
Because $(\Gamma_{g*h})^{\#}$ exists, there is only one possibility for $j$ as above. Namely, given a term $\tau$, $n\in \omega$, $x\in {\mathbb{R}}_{g*h}$ and $A\in \Gamma^\infty_{g*h}$, we must have that
\begin{center}
$j(\tau^W(u_n, A, x))=\tau^{W'}(u_n, A^{h'}, x)$.
\end{center}
What we must show is that $j$ is elementary. The next lemma finishes the proof.
\begin{lemma}\label{j elem} $j$ is elementary.
\end{lemma}
\begin{proof} Let $u=(\eta, \delta, \delta', \lambda)$ be a good quadruple such that $\sup_{i<\omega}\xi_i<\eta$. Let $k\subseteq Coll(\omega, \Gamma^\infty_{g*h})$ be $V[g*h]$-generic and $k'\subseteq Coll(\omega, \Gamma^\infty_{g*h*h'})$ be $V[g*h*h']$-generic.
We have that $\Gamma^\infty_{g*h}$ is the Wadge closure of strategies of the countable substructures of $W_\lambda$. More precisely, given $A\in \Gamma^\infty_{g*h}$, there is an $X\prec (W_\lambda|\Psi_{\eta, \delta}^{g*h})$ such that $A$ is Wadge reducible to $\Lambda_X$. It follows that to show that $j$ is elementary it is enough to show that given a formula $\phi$, $m\in \omega$, $X\prec ((W_\lambda, u)|\Psi_{\eta, \delta}^{g*h})$ and a real $x\in {\mathbb{R}}_{g*h}$,
\begin{center}
$W\models \phi[u_m, \Lambda_X, x]\Rightarrow W'\models \phi[u_m, \Lambda^{h'}_X, x]$.\footnote{The $\Leftarrow$ is similar as will be evident by the following proof.}
\end{center}
Now fix a tuple $(\phi, m, X, x)$ as above.
Working inside $V[g*h*k]$, let $(Y_i: i<\omega)$ be a cofinal sequence in $\Gamma^\infty_{g*h}$ as witnessed by some $\vec{A}$ and $\vec{w}$ such that $A_0=\emptyset$, $w_0=x$ and $Y'_0=X$.
Working inside $V[g*h*h'*k']$, let $(Z_i: i<\omega)$ be a cofinal sequence in $\Gamma^\infty_{g*h*h'}$ as witnessed by some $\vec{B}$ and $\vec{v}$ such that $B_0=\emptyset$, $v_0=x$ and $Z'_0=X$.
Let $(M_n, \Lambda_n, \theta_n, \tau_{n, l}: n<l<\omega)$ be a $\Gamma^\infty_{g*h}$-genericity iteration induced by $(Y_i: i<\omega)$ and $(N_n, \Phi_n, \nu_n, \sigma_{n, l}: n<l<\omega)$ be a $\Gamma^\infty_{g*h*h'}$-genericity iteration induced by $(Z_i: i<\omega)$. It is not hard to see that we can make sure that $M_1=N_1$ by simply selecting the same extender $E_0$ after ${\mathcal{T}}_0$; by our assumptions, $M_0 = N_0$ and $w_0 = v_0$.
Let $\zeta=\eta_X$ and $\Gamma=(\Psi_{\eta, \delta})_X$. Let $M_\omega$ be the direct limit along $(M_n: n<\omega)$ and $N_\omega$ the direct limit along $(N_n: n<\omega)$. For $n<\omega$, let $\kappa_n$ be the least strong cardinal of $M_n$ and $\kappa_n'$ be the least strong cardinal of $N_n$. Let $s_m^n$ be the first $m$ (cardinal) indiscernibles of $(M_n|\kappa_n)$ and $t_m^n$ be the first $m$ (cardinal) indiscernibles of $(N_n|\kappa_n')$. Notice that $(M_n|\kappa_n)^\#\in M_n$ and $(N_n|\kappa_n')^\#\in N_n$. It follows that
$\tau_{n, l}(s^n_m)=s^l_m$ and $\sigma_{n, l}(t^n_m)=t^l_m$ for $n<l\leq \omega$.
We then have the following sequence of implications. Below we let $\Gamma^*$ be the name for the generic extension of $\Gamma$ in the relevant model and $\dot{DM}$ be the name for the derived model. The third implication below uses the fact that $M_1 = N_1$.
\begin{align*}
W\models \phi[u_m, \Lambda_X, x] & \Rightarrow M_\omega[x]\models \emptyset \Vdash_{Coll(\omega, <\kappa_\omega)}\dot{DM}\models \phi[s^\omega_m, \Gamma^*, x]\\ &\Rightarrow M_1[x]\models \emptyset \Vdash_{Coll(\omega, <\kappa_1)}\dot{DM}\models \phi[s^1_m, \Gamma^*, x] \\ &\Rightarrow N_\omega[x]\models \emptyset \Vdash_{Coll(\omega, <\kappa'_\omega)}\dot{DM}\models \phi[t^\omega_m, \Gamma^*, x] \\ &\Rightarrow W'\models \phi[u_m, \Lambda^{h'}_X, x].
\end{align*}
\end{proof}
\section{$\sf{LSA-over-UB}$ may fail} \label{sec:LSAoverUBfails}
In this section, we prove Theorem \ref{thm:not_equiv}. We assume the hypotheses of \rthm{thm:not_equiv}. Here is the main consequence of the hypotheses that we need (see Lemma \ref{ub to strategy}):
\begin{enumerate}[(i)]
\item \label{one} letting $\lambda$ be an inaccessible cardinal which is a limit of Woodin cardinals, and $g\subseteq Coll(\omega, <\lambda)$ be $V$-generic, any set $A$ which is Suslin co-Suslin in the derived model $DM(g)$ given by $g$ (see Section \ref{sec: cap ub}) is Wadge reducible to $\Psi^g_{\delta}\restriction HC^{V(\mathbb{R}^*_g)}$ for some Woodin cardinal $\delta < \lambda$. Furthermore, $\Psi^g_{\delta}\restriction HC^{V(\mathbb{R}^*_g)}\in DM(g)$; in fact, $\Psi^g_{\delta}\restriction HC^{V(\mathbb{R}^*_g)}=\Psi^g_{\delta}\restriction HC^{V[g]}\in \Gamma^\infty_g$.
\end{enumerate}
Suppose for contradiction that $\sf{LSA-over-UB}$ holds. Let $\lambda$ be an inaccessible cardinal which is a limit of Woodin cardinals in $V$. Let $h\subseteq Coll(\omega, <\lambda)$ be $V$-generic. By our assumption, in $V[h]$, there is some set $A$ such that
\begin{itemize}
\item $A\in V(\mathbb{R}^{V[h]})$;
\item $L(A,\mathbb{R})\models \sf{LSA}$;
\item $\Gamma^\infty_{h}$ consists of the Suslin co-Suslin sets of $L(A,\mathbb{R})$;
\item $\Gamma^\infty_{h} = \Delta_{h}$, where $\Delta_{h}$ is defined at the beginning of Section \ref{sec: cap ub}.
\end{itemize}
We note that the last item follows from (\ref{one}).
Recall that the notion of lbr hod mice is defined in \cite{normalization_comparison}. We will not need the precise definition of these objects. However, we need some notions related to short-tree strategies. Let ${\mathcal{P} }$ be a premouse (or hod premouse), $\tau$ a cut point cardinal of ${\mathcal{P} }$ (typically the $\tau$ we consider will be a Woodin cardinal or a limit of Woodin cardinals of ${\mathcal{P} }$), and $\Sigma$ an iteration strategy of ${\mathcal{P} }$ acting on trees based on ${\mathcal{P} }|\tau$. Suppose ${\mathcal{T}}$, according to $\Sigma$, has successor length $\xi + 1$. Then we say ${\mathcal{T}}$ is \textit{short} if either $[0,\xi]_{\mathcal{T}}$ drops in model or else, letting $i$ be the branch embedding, $i(\tau) > \delta({\mathcal{T}})$; otherwise, we say ${\mathcal{T}}$ is \textit{maximal}. We let $\Sigma^{sh}$ be the short part of $\Sigma$; so $\Sigma^{sh}$ is a partial strategy. In the following, we may not have a (total) iteration strategy, but a partial strategy $\Lambda$ such that whenever ${\mathcal{T}}$ is according to $\Lambda$, if $\Lambda({\mathcal{T}})$ is defined, then either $\Lambda({\mathcal{T}})$ drops in model or else the branch embedding satisfies $i_{\Lambda({\mathcal{T}})}(\tau) > \delta({\mathcal{T}})$. We call such a $\Lambda$ a \textit{short-tree strategy}.\footnote{An example of a short-tree strategy is $\Sigma^{sh}$ for some total strategy $\Sigma$.} We may turn $\Lambda$ into a total strategy by assigning $\Lambda({\mathcal{T}})$ to be ${\mathcal{M}}({\mathcal{T}})^\sharp$ whenever a branch of ${\mathcal{T}}$ is not defined by $\Lambda$. Short-tree strategies may be defined on stacks of normal trees as usual.
The proof of \cite[Theorem 0.5]{HPC} gives us a pair $({\mathcal{P} },\Sigma)$ such that the following hold in $V(\mathbb{R}^{V[h]})$ (here the hypothesis $\sf{HPC+NLE}$ is applied in the model $L(A,\mathbb{R})$):
\begin{enumerate}
\item ${\mathcal{P} }$ is a least-branch hod premouse (lpm) (cf. \cite[Section 5]{normalization_comparison});
\item ${\mathcal{P} }$ has a largest Woodin cardinal $\delta = \delta^{\mathcal{P} }$ and letting $\kappa^{\mathcal{P} }$ be the least $<\delta$-strong cardinal in ${\mathcal{P} }$, then $\kappa^{\mathcal{P} }$ is a limit of Woodin cardinals;
\item \label{three}$\Sigma$ is a short-tree strategy of ${\mathcal{P} }$ and $\Sigma\in L(A,\mathbb{R})\backslash \Gamma^\infty_{h}$; furthermore, $\Sigma$ is Suslin in $L(A,\mathbb{R})$;
\item \label{four} for every $A\in \Gamma^\infty_{h}$, there is an iteration map $i:{\mathcal{P} }\rightarrow {\mathcal{ Q}}$ according to $\Sigma$ such that $A <_w \Sigma_{{\mathcal{ Q}}|\kappa^{\mathcal{ Q}}}$, where $\kappa^{\mathcal{ Q}}$ is the least $\delta^{\mathcal{ Q}} = i(\delta^{\mathcal{P} })$-strong cardinal in ${\mathcal{ Q}}$;
\item whenever ${\mathcal{T}}$ is according to $\Sigma$ and either $\Sigma({\mathcal{T}}) = b$ is nondropping with last model ${\mathcal{ Q}}$ or $\Sigma({\mathcal{T}})$ is not a branch with ${\mathcal{ Q}} = \Sigma({\mathcal{T}})$, $\Sigma_{{\mathcal{T}},{\mathcal{ Q}}}$ satisfies (\ref{three}) and (\ref{four});\footnote{The above properties follow from the proof of Step 1 in \cite[Theorem 0.5]{HPC}, which can be applied to our hypothesis.}
\setcounter{nameOfYourChoice}{\value{enumi}}
\end{enumerate}
General properties of sets of reals in derived models give:
\begin{enumerate}
\setcounter{enumi}{\value{nameOfYourChoice}}
\item \label{six} there is some $\gamma < \lambda$ such that $({\mathcal{P} },\Sigma\restriction V[h\restriction \gamma]) \in V[h\restriction \gamma]$.
\end{enumerate}
\begin{lemma}\label{lem:smaller}
Fix a $\gamma$ as in (\ref{six}). In $V[h\restriction \gamma]$, there is a Woodin cardinal $\delta < \lambda$ with $\delta > \gamma$ and a tree ${\mathcal{T}}$ according to $\Sigma$ such that either $\Sigma({\mathcal{T}}) = b$ is a branch, ${\mathcal{ Q}} = {\mathcal{M}}^{\mathcal{T}}_b$, and the branch embedding $i:{\mathcal{P} }\rightarrow {\mathcal{ Q}}$ exists, or $\Sigma({\mathcal{T}})$ is not a branch and ${\mathcal{ Q}} = \Sigma({\mathcal{T}})$; in either case, $\Sigma_{{\mathcal{T}},{\mathcal{ Q}}}$ satisfies (\ref{three}) and (\ref{four}) and is Wadge reducible to $\Psi^{h\restriction \gamma}_\delta$.
\end{lemma}
\begin{proof}
Let $\delta$ be the least Woodin cardinal $> \gamma$. Let $\Psi = \Psi^{h\restriction \gamma}_\delta$. Let $({\mathcal{M}}_\xi, \Lambda_\xi: \xi \leq \delta)$ be the models and strategies of the fully backgrounded (lbr) hod mouse construction over $W^\Psi_\delta$ (cf. \cite{normalization_comparison}), where the background extenders used have critical points $> max(\gamma, |{\mathcal{P} }|)$. Let ${\mathcal{T}}$, according to $\Sigma$, be the comparison tree of ${\mathcal{P} }$ against the above construction. By universality, there is $\xi\leq \delta$ such that
\begin{enumerate}[(i)]
\item either $\Sigma({\mathcal{T}}) = b$ exists and there is an iteration map $i: {\mathcal{P} } \rightarrow {\mathcal{M}}_\xi$ and $\Sigma_{{\mathcal{T}},{\mathcal{M}}_\xi} = \Lambda_\xi^{sh}$,
\item or $\Sigma({\mathcal{T}})$ is not a branch (${\mathcal{T}}$ is $\Sigma$-maximal), ${\mathcal{M}}_\xi = \Sigma({\mathcal{T}})$, and $\Sigma_{{\mathcal{T}},{\mathcal{M}}_\xi} = \Lambda_\xi^{sh}$.
\end{enumerate}
In either case, we get that $\Sigma_{{\mathcal{T}},{\mathcal{M}}_\xi}$ satisfies (\ref{three}) and (\ref{four}) above and $\Sigma_{{\mathcal{T}},{\mathcal{M}}_\xi} = \Lambda_\xi^{sh}$ is Wadge reducible to $\Psi$ in $V(\mathbb{R}^{V[h]})$.\footnote{The fact that $\Lambda_\xi^{sh}$ is Wadge reducible to $\Psi$ is a standard property of fully backgrounded constructions. We abuse notations here, identifying for example $\Psi$ with its canonical extension in $V(\mathbb{R}^{V[h]})$.}
\end{proof}
Let $\delta, {\mathcal{T}}, {\mathcal{ Q}}$ be as in Lemma \ref{lem:smaller}. Applying (\ref{one}) in $DM(h)$, we get that $\Psi^{h\restriction \gamma}_\delta\restriction HC^{V[h]}\in \Gamma^\infty_{h}$. Lemma \ref{lem:smaller} then implies that $\Sigma_{{\mathcal{T}},{\mathcal{ Q}}}\in \Gamma^\infty_{h}$. This contradicts (\ref{three}). This completes the proof of Theorem \ref{thm:not_equiv}.
\bibliographystyle{amsalpha}
The relationship between the X-ray and optical/UV luminosity of active galactic nuclei (AGNs) is usually described in terms of the index $\alpha_{ox}=0.3838 \log(L_X/L_{UV})$, i.e., the slope of a hypothetical power law between 2500 \AA\ and 2 keV rest-frame frequencies. The X-ray and UV monochromatic luminosities are correlated over 5 decades as $L_X \propto L_{UV}^k$, with $k\sim0.5-0.7$, and this provides an anticorrelation $\alpha_{ox} = a \log L_{UV}$ + const, with $-0.2\la a\la -0.1$ \citep[e.g.,][]{avni86,vign03,stra05,stef06,just07,gibs08}. One of the main results of these analyses is that QSOs are universally X-ray luminous and that X-ray weak QSOs are very rare \citep[e.g.,][]{avni86,gibs08}, but it is not yet known if the same is true for moderate luminosity AGNs.
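As a concrete illustration of the definition above, the 0.3838 prefactor is simply the inverse of the logarithmic frequency ratio between the 2 keV and 2500 \AA\ rest-frame frequencies. A minimal Python sketch (ours, not part of the original analysis; the function name is arbitrary):

```python
import math

# Rest-frame frequencies (Hz) of the two monochromatic luminosities:
# 2500 A in the UV and 2 keV in X-rays.
NU_UV = 2.998e18 / 2500.0    # c / lambda, with c in Angstrom/s
NU_X = 2.0e3 * 2.418e14      # E / h, with 1 eV corresponding to 2.418e14 Hz

def alpha_ox(L_x, L_uv):
    """Slope of a hypothetical power law between L_nu(2500 A) and L_nu(2 keV),
    both in erg/s/Hz."""
    return math.log10(L_x / L_uv) / math.log10(NU_X / NU_UV)

# 1 / log10(NU_X / NU_UV) reproduces the 0.3838 prefactor quoted in the text.
```

The frequency ratio is $\sim 403$, whose decimal logarithm is $\sim 2.606$, so $1/2.606 \simeq 0.3838$ as in the definition above.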
UV photons are generally believed to be radiated from the QSO accretion disk, while X-rays are supposed to originate in a hot coronal gas of unknown geometry and disk-covering fraction. The X-ray/UV ratio provides information about the balance between the accretion disk and the corona, which is not yet understood in detail. The $\alpha_{ox}-L_{UV}$ anticorrelation implies that AGNs redistribute their energy in the UV and X-ray bands depending on the overall luminosity, with more luminous AGNs emitting fewer X-rays per unit UV luminosity than less luminous AGNs \citep{stra05}. It has been proposed that the anticorrelation can be caused by
the larger dispersion in the UV than in the X-ray luminosities for a population with intrinsically uniform $\alpha_{ox}$ \citep{la-f95,yuan98}; however, more recent analyses based on samples with wider luminosity ranges confirm the reality of the relationship \citep{stra05}. \citet{gibs08} stressed the quite large scatter in the X-ray brightness of individual sources about the average relation and investigated the possible causes of the dispersion. Part of this scatter, usually removed \citep[e.g.,][]{stra05,stef06,just07,gibs08}, is due to radio-loud quasars, which are relatively X-ray bright because of the enhanced X-ray emission associated with their jets \citep[e.g.,][]{worr87}, and to broad absorption line (BAL) quasars, which are relatively X-ray faint \citep[e.g.,][]{bran00} because of X-ray absorption associated with the UV BAL outflows.
Additional causes of deviations from the average $\alpha_{ox}-L_{UV}$ relation include: i) X-ray absorption not associated with BALs, ii) intrinsic X-ray weakness, and iii) UV and X-ray variability, possibly in association with non-simultaneous UV and X-ray observations. In particular, \citet{gibs08} estimate that variability may be responsible for 70\%-100\% of the $\alpha_{ox}$ dispersion, and that a few percent ($<2$\%) of all quasars are intrinsically X-ray weak by a factor of 10, compared to the average value at the same UV luminosity. A large fraction of intrinsically X-ray weak sources would suggest that coronae may frequently be absent or disrupted in QSOs. An extreme case is PHL 1811, which is X-ray weak by a factor $\sim$70, studied in detail by \citet{leig07}, who propose various scenarios, including disk/corona coupling by means of magnetic reconnections, Compton cooling of the corona by unusually soft optical/UV spectrum, and the photon trapping of X-ray photons and their advection to the black hole. The influence of variability on the $\alpha_{ox}-L_{UV}$ relation can be divided into two different effects: i) non-simultaneity of X-ray and UV measurements, which we call ``artificial $\alpha_{ox}$ variability'', and ii) true variability in the X-ray/UV ratio, which we refer to as ``intrinsic $\alpha_{ox}$ variability''. It is beneficial to analyse simultaneously acquired X-ray and UV data to eliminate the effect of the artificial variability and search for the intrinsic X-ray/UV ratio and/or its variability. On a rest-frame timescale of a few years, the optical/UV variability of QSOs has been estimated to be $\sim$30\% \citep[e.g.,][]{gial91,vand04}, while X-ray variability has been estimated to be $\sim$40\% for Seyfert 1 AGNs \citep{mark03}. 
On intermediate timescales, the relation between X-ray and optical/UV variability may be due to either: i) the reprocessing of X-rays into thermal optical emission, by means of irradiation and heating of the accretion disk, or ii) Compton up-scattering, in the hot corona, of optical photons emitted by the disk. In the former case, variations in the X-ray flux would lead optical/UV ones, and vice versa in the latter case. Cross-correlation analyses of X-ray and optical/UV light curves allow us to constrain models for the origin of variability. The main results obtained so far, on the basis of simultaneous X-ray and optical observations, indicate a cross-correlation between X-ray and UV/optical variation on the timescale of days, and in some cases delays of the UV ranging from 0.5 to 2 days have been measured \citep{smit07}. Simultaneous X-ray/UV data can be obtained by the XMM-Newton satellite, which carries the co-aligned Optical Monitor (OM). The second XMM-Newton serendipitous source catalogue (XMMSSC) \citep{wats09} is available online in the updated incremental version 2XMMi\footnote{http://heasarc.gsfc.nasa.gov/W3Browse/xmm-newton/xmmssc.html}. The XMM-Newton Optical Monitor Serendipitous UV Source Survey catalogue (XMMOMSUSS) is also available online\footnote{http://heasarc.gsfc.nasa.gov/W3Browse/xmm-newton/xmmomsuss.html}. We look for simultaneous measurements of the $\alpha_{ox}$ index from XMM/OM catalogues, to provide at least partial answers to the following questions: how large is the effect of non-simultaneous X-ray/UV observations on the dispersion about the average $\alpha_{ox}-L_{UV}$ relationship? Is there any spectral X-UV variability for individual objects? Do their $\alpha_{ox}$ harden in the bright phases or vice versa? Which constraints do these measurements place on the relationship between the accretion disk and the corona?
The paper is organised as follows. Section 2 describes the data extracted from the archival catalogues. Section 3 describes the SEDs of the sources and the evaluation of the specific UV and X-ray luminosities. Section 4 discusses the $\alpha_{ox}-L_{UV}$ anticorrelation and its dispersion. In Sect. 5, we present the multi-epoch data and discuss the intrinsic X/UV variability of individual sources. Section 6 provides notes about individual peculiar sources. Section 7 discusses and summarises the results.
Throughout the paper, we adopt the cosmology H$_{o}$=70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}$=0.3, and $\Omega_{\Lambda}$=0.7.
\section{The data}
The updated incremental version 2XMMi of the second XMM-Newton serendipitous source catalogue (XMMSSC) \citep{wats09} is available online and contains 289083 detections between 2000 February 3 and 2008 March 28\footnote{After the submission of the article, a note has been distributed about ``Incorrect EPIC band-4 fluxes in the 2XMM and 2XMMi catalogues'' (XMM-Newton News \#105, http://xmm.esac.esa.int/external/xmm\_news/news\_list/). This affects 83 observations among the 315 in Table 1, which is corrected in agreement with the new data released by the XMM-Newton Survey Science Centre. All our analysis is also corrected with the new data.}. The net sky area covered by the catalogue fields is $\sim 360$ deg$^2$.
XMMOMSUSS is a catalogue of UV sources detected serendipitously by the Optical Monitor (OM) onboard the XMM-Newton observatory and is a partner resource to the 2XMM serendipitous X-ray source catalogue. The catalogue contains source detections drawn from 2417 XMM-OM observations in up to three broad-band UV filters, performed between 2000 February 24 and 2007 March 29. The net sky area covered is between 29 and 54 square degrees, depending on UV filter. The XMMOMSUSS catalogue contains 753578 UV source detections above a signal-to-noise ratio threshold limit of 3-$\sigma$, which relate to 624049 unique objects.
We first correlated the XMMSSC with the XMMOMSUSS catalogue to search for X-ray and UV sources with a maximum separation of 1.5 arcsec, corresponding to $\sim 1 \sigma$ uncertainty in the X-ray position. This yields 22061 matches. To obtain simultaneous X-ray and UV data, we searched for data from the same XMM-Newton observations, comparing the parameters OBS\_ID and OBSID of the XMMSSC and XMMOMSUSS catalogue, respectively, that identify uniquely the XMM-Newton pointings. This reduces the set to 8082 simultaneous observations. For the correlations, we used the Virtual Observatory application TOPCAT\footnote{http://www.star.bris.ac.uk/$\sim$mbt/topcat/}.
We then correlated this table with the Sloan Digital Sky Survey (SDSS) Quasar Catalogue, Data Release 5, to provide optical classifications and redshifts for the matched objects \citep{schn07}. Using again a maximum distance of 1.5 arcsec (uncertainty in the X-ray position), we found 310 matches. Increasing the maximum distance up to 5 arcsec, we add only 5 matches, none of which has a separation $>2$ arcsec. This indicates that, in spite of the relatively small (1.5 arcsec $\sim 1\sigma$) cross-correlation radius adopted to reduce the contamination, the resulting incompleteness (at the present flux limit) is negligible.
The X-ray to optical ratios of the added 5 sources are not peculiar; therefore, we used the entire sample of 315 matches. This also includes multi-epoch data for 46 sources (from 2 to 9 epochs each) and single-epoch observations for 195 more sources, for a total number of 241 sources.
To estimate the probability of false identifications, we applied an arbitrary shift of 1 arcmin in declination to the X-ray coordinates of the 8082 simultaneous observations, and we found 219 UV/X-ray spurious associations, i.e., 2.7\%. This would correspond to $\sim 8$ spurious matches among the 315 observations of our final sample.
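The matching and spurious-fraction test can be illustrated with a minimal, self-contained sketch (the actual cross-correlations were performed with TOPCAT; the function names and the list-of-(RA, Dec) representation below are ours):

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation (arcsec) between two sky positions given in degrees,
    via the haversine formula (robust at small separations)."""
    d2r = math.pi / 180.0
    dra = (ra2 - ra1) * d2r
    ddec = (dec2 - dec1) * d2r
    a = (math.sin(ddec / 2.0) ** 2
         + math.cos(dec1 * d2r) * math.cos(dec2 * d2r) * math.sin(dra / 2.0) ** 2)
    return 2.0 * math.asin(math.sqrt(a)) / d2r * 3600.0

def match(cat_x, cat_uv, radius=1.5):
    """Pair each X-ray position with UV positions closer than `radius` arcsec;
    catalogues are lists of (RA, Dec) tuples in degrees."""
    return [(i, j) for i, (ra1, de1) in enumerate(cat_x)
                   for j, (ra2, de2) in enumerate(cat_uv)
                   if ang_sep_arcsec(ra1, de1, ra2, de2) < radius]

def spurious_fraction(cat_x, cat_uv, radius=1.5, shift=1.0 / 60.0):
    """Re-match after shifting the X-ray declinations by `shift` degrees
    (1 arcmin, as in the text) to estimate the chance-coincidence rate."""
    shifted = [(ra, dec + shift) for ra, dec in cat_x]
    return len(match(shifted, cat_uv, radius)) / max(len(cat_x), 1)
```

The brute-force double loop stands in for the tree-based matching a real tool would use; it is only meant to make the 1.5 arcsec criterion and the 1 arcmin shift test explicit.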
The relevant data of the sources are reported in Table 1, where:
Col. 1: source serial number;
Col. 2: observation epoch serial number;
Col. 3: source name;
Col. 4: epoch (MJD);
Col. 5: redshift;
Col. 6: radio-loud flag (1=radio-loud, 0=radio-quiet, -1=unclassified);
Col. 7: BAL flag (1=BAL, 0=non-BAL);
Col. 8: log of the specific luminosity at 2500\AA\ in erg s$^{-1}$ Hz$^{-1}$;
Col. 9: log of the specific luminosity at 2 keV in erg s$^{-1}$ Hz$^{-1}$;
Col. 10: UV to X-ray power law index $\alpha_{ox}$;
Col. 11: residual of $\alpha_{ox}$ w.r.t. the adopted $\alpha_{ox}$-$L_{UV}$ correlation;
Col. 12: hardness ratio between the bands 1-2 keV and 2-4.5 keV.
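Assuming the standard XMM-style convention for hardness ratios between adjacent energy bands (our assumption; the text does not state the formula explicitly), Col. 12 can be computed from the band count rates as:

```python
def hardness_ratio(rate_soft, rate_hard):
    """Hardness ratio between two adjacent energy bands, here 1-2 keV (soft)
    and 2-4.5 keV (hard), in the usual (H - S)/(H + S) convention;
    it ranges from -1 (all counts soft) to +1 (all counts hard)."""
    return (rate_hard - rate_soft) / (rate_hard + rate_soft)
```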
The sources span a region in the luminosity-redshift plane with $0.1\la z\la 3$ and $10^{29}$ erg s$^{-1}$ Hz$^{-1} \la L_{UV} \la 10^{32}$ erg s$^{-1}$ Hz$^{-1}$, as shown in Fig. 1.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f1.pdf}}
\caption{The sources in the luminosity-redshift plane. All the sources in Table 1 are reported. Small dots correspond to single-epoch data, while open circles indicate average luminosity values of multi-epoch sources.}
\end{figure}
\section{Evaluation of the specific luminosities}
\subsection{UV}
The Optical Monitor onboard XMM-Newton is described in detail in \citet{maso01}. The set of filters included within the XMMOMSUSS catalogue is described in a dedicated page at MSSL\footnote{http://www.mssl.ucl.ac.uk/$\sim$mds/XMM-OM-SUSS/SourcePropertiesFilters.shtml.}. The filters are called UVW2, UVM2, UVW1, U, B, and V, with central wavelengths 1894\AA, 2205\AA, 2675\AA, 3275\AA, 4050\AA, and 5235\AA, respectively. The last three filters are similar, but not identical, to the Johnson UBV set.
In the evaluation of the rest-frame luminosities, it is inadvisable to apply k-corrections using fixed power laws, because the local slope of the power law\footnote{We adopt spectral indices following the implicit sign convention, $L_\nu\propto\nu^{\alpha}$.} at the emission frequency corresponding to the observed bandpasses changes as a function of the source redshift, between $\sim -0.5$ and $\sim -2$ \citep[see, e.g.,][]{rich06}. The effective slope to compute specific luminosity at 2500\AA\ is an appropriate average of the slopes between the emission frequency and the frequency corresponding to 2500\AA.
One or more specific fluxes, up to six, are reported in XMMOMSUSS for the filters effectively used for each source, depending on observational limitations at each pointing. We were therefore able to compute optical-UV spectral energy distributions (SEDs) for each source. We derived specific luminosities at the different emission frequencies of the SEDs according to the classical formula
\begin{equation}
L_\nu(\nu_e)=F_\nu(\nu_o) {4\pi D_L^2\over 1+z}\, .
\end{equation}
\noindent
The result is plotted in Fig. 2, where SEDs with 2-6 frequency points are shown as lines, while small circles represent sources with only 1 frequency point. Black lines and circles refer to sources with data at a single epoch, while colours are used for multi-epoch sources. Data from the same source are plotted with the same colour, though a single colour may be shared by more than one source. The continuous curve covering the entire range of the plot is the average SED computed by \citet{rich06} for Type 1 quasars from the SDSS.
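The luminosity computation above reduces to a one-line formula once the luminosity distance is known; a minimal sketch (the Mpc-to-cm conversion factor is our own value, and $D_L$ is taken as an input, e.g. from a standard flat $\Lambda$CDM calculator):

```python
import math

MPC_IN_CM = 3.0857e24  # one megaparsec in centimetres (our conversion value)

def specific_luminosity(f_nu, z, d_l_mpc):
    """Rest-frame specific luminosity L_nu(nu_e), with nu_e = nu_o * (1 + z),
    from the observed specific flux f_nu (erg s^-1 cm^-2 Hz^-1), following
    L_nu = F_nu * 4 pi D_L^2 / (1 + z)."""
    d_l_cm = d_l_mpc * MPC_IN_CM
    return f_nu * 4.0 * math.pi * d_l_cm**2 / (1.0 + z)
```

The $(1+z)$ factor in the denominator accounts for the compression of the observed frequency interval relative to the emitted one.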
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f2.pdf}}
\caption{Spectral energy distributions from the available OM data. Sources with 2 or more frequency points are shown as lines, while small circles represent sources with only 1 frequency point. Black lines and circles refer to sources with data at a single epoch, while coloured data refer to multi-epoch sources. Data from the same source are plotted with the same colour, though a single colour may be shared by several sources. The continuous curve covering the entire range of the plot is the average SED computed by \citet{rich06} for Type 1 quasars from the SDSS.}
\end{figure}
The specific luminosity at 2500\AA\ ($\log\nu_e=15.08$), called $L_{UV}$ for brevity, is evaluated as follows: i) if the SED of the source extends across a sufficiently wide frequency range, crossing the $\log\nu_e=15.08$ line (see Fig. 2), $L_{UV}$ is computed by interpolating between the two nearest frequency points of the SED; ii) otherwise, i.e., if $\log\nu_e>15.08$ for the whole SED, we use a curvilinear extrapolation, adopting the shape of the average SED by \citet{rich06}: we shift it vertically to match the specific luminosity of the source at the lowest available frequency point, say $\nu_1$, and apply the corresponding correction factor between $\log\nu_1$ and 15.08. An alternative would be to extrapolate the source's SED with a power law of the same slope as that between its two lowest frequency points; however, this is not applicable when there is only 1 frequency point, and is inappropriate when $\log\nu_1\ga 15.3$, a region where the average SED by \citet{rich06} steepens. We therefore use the curvilinear extrapolation (ii) throughout. We nevertheless tested the power law extrapolation for the subset of SEDs to which it can be applied, and recomputed the $\alpha_{ox}-L_{UV}$ relation as described in Sect. 4, finding similar slopes (within 0.010) and dispersions (within 0.005); this does not influence our final conclusions.
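Case (i) reduces to a linear interpolation in the $\log\nu$-$\log L_\nu$ plane between the two SED points bracketing $\log\nu_e=15.08$; a minimal sketch (function and argument names are ours, not from any pipeline):

```python
import math

LOG_NU_2500 = 15.08  # log10 of the frequency corresponding to 2500 A

def interp_log_luminosity(log_nu, log_lnu, log_nu_target=LOG_NU_2500):
    """Linear interpolation of log L_nu at log_nu_target between the two
    SED points that bracket it (case i in the text).
    log_nu must be sorted in increasing order."""
    for i in range(len(log_nu) - 1):
        if log_nu[i] <= log_nu_target <= log_nu[i + 1]:
            frac = (log_nu_target - log_nu[i]) / (log_nu[i + 1] - log_nu[i])
            return log_lnu[i] + frac * (log_lnu[i + 1] - log_lnu[i])
    raise ValueError("target frequency not bracketed by the SED; "
                     "use the curvilinear extrapolation instead")
```

When the target frequency falls outside the SED, the exception signals that case (ii), the curvilinear extrapolation, must be used instead.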
\subsection{X-ray}
X-ray fluxes are provided by the XMMSSC catalogue integrated in 5 basic energy bands, 0.2-0.5 keV (band 1), 0.5-1 keV (band 2), 1-2 keV (band 3), 2-4.5 keV (band 4), and 4.5-12 keV (band 5) \citep{wats09}. Power law distributions with photon index\footnote{With the usual convention of explicit minus sign for the photon index, $P(E)\propto E^{-\Gamma}$ and with the implicit sign adopted by us for the energy index $\alpha$, the relation between the two indices is $\Gamma=1-\alpha$.} $\Gamma=1.7$ and absorbing column density $N_H=3\times 10^{20}$ cm$^{-2}$ are assumed in the computation of the fluxes.
To evaluate the specific luminosity at 2 keV (which we call $L_X$ for brevity), we can use the flux in one of the two adjacent bands, 3 or 4. Since the fluxes are computed assuming negligible absorption, we prefer band 4, which is less affected by absorption than band 3 in type-2 obscured AGNs. It would also be possible to measure the rest-frame 2 keV flux directly from the observed low-energy bands 1 or 2, but, again, this would in some cases yield an absorbed flux. We therefore use the power law integral
\begin{equation}
F_X({\rm 2-4.5\,keV})=\int_{\rm 2\,keV}^{\rm 4.5\,keV} F_\nu({\rm 2\,keV})\left({\nu\over\nu_{\rm 2\,keV}}\right)^{1-\Gamma} d\nu
\end{equation}
and determine the specific flux at 2 keV (observed frame) to be:
\begin{equation}
F_\nu({\rm 2\,keV})={F_X({\rm 2-4.5\,keV})\over \nu_{\rm 2\,keV}} {2-\Gamma\over 2.25^{2-\Gamma}-1}\quad .
\end{equation}
\noindent We then apply a standard power law k-correction
\begin{equation}
L_\nu({\rm 2\,keV})=F_\nu({\rm 2\,keV}) {4\pi D_L^2\over (1+z)^{2-\Gamma}}\quad ,
\end{equation}
adopting $\Gamma=1.7$ as assumed in the catalogue.
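The chain from the integrated band-4 flux to the rest-frame 2 keV luminosity can be sketched in a few lines (a minimal sketch; the keV-to-Hz conversion is our own value, and the luminosity distance is taken as an input in cm):

```python
import math

KEV_TO_HZ = 2.418e17  # 1 keV in Hz (our conversion value)
GAMMA = 1.7           # photon index assumed in the XMMSSC catalogue

def lnu_2kev(f_band4, z, d_l_cm):
    """Rest-frame specific luminosity at 2 keV from the integrated
    2-4.5 keV flux f_band4 (erg s^-1 cm^-2)."""
    nu_2kev = 2.0 * KEV_TO_HZ
    # observed-frame specific flux at 2 keV from the band integral
    f_nu = f_band4 / nu_2kev * (2.0 - GAMMA) / (2.25**(2.0 - GAMMA) - 1.0)
    # power law k-correction with the same Gamma
    return f_nu * 4.0 * math.pi * d_l_cm**2 / (1.0 + z)**(2.0 - GAMMA)
```

The factor $2.25^{2-\Gamma}-1$ comes from integrating the power law between 2 and 4.5 keV, i.e. between $\nu/\nu_{\rm 2\,keV}=1$ and 2.25.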
\section{The $\alpha_{ox}-L_{UV}$ anticorrelation}
We define, as usual
\begin{equation}
\alpha_{ox}={\log({L_{2{\rm\,keV}}/L_{2500{\rm\,\AA}}})\over \log({\nu_{2{\rm\,keV}}/\nu_{2500{\rm\,\AA}}})}=0.3838 \log\left({L_{2{\rm\,keV}}\over L_{2500{\rm\,\AA}}}\right)
\end{equation}
\noindent
and show in Fig. 3 $\alpha_{ox}$ as a function of $L_{UV}$ for all the sources in Table 1, including multi-epoch measurements where available. Radio-loud quasars and BAL quasars are shown with different symbols, and are subsequently removed from the main correlation.
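The definition of $\alpha_{ox}$ is straightforward to evaluate; as a sketch (the numerical frequencies used to verify the coefficient are our own rounded values):

```python
import math

ALPHA_OX_COEFF = 0.3838  # = 1 / log10(nu_2keV / nu_2500A)

def alpha_ox(l_2kev, l_2500):
    """Optical-to-X-ray spectral index alpha_ox."""
    return ALPHA_OX_COEFF * math.log10(l_2kev / l_2500)

# Sanity check of the coefficient:
# nu(2 keV) ~ 4.836e17 Hz, nu(2500 A) ~ 1.199e15 Hz.
COEFF_CHECK = 1.0 / math.log10(4.836e17 / 1.199e15)
```

A typical quasar with $L_X/L_{UV}\sim 10^{-4}$ thus has $\alpha_{ox}\sim -1.5$.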
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f3.pdf}}
\caption{$\alpha_{ox}$ as a function of the 2500\AA\ specific luminosity $L_{UV}$, for all the 315 measurements of the sources in our sample, including multi-epoch measurements. Filled and open circles are, respectively, single-epoch and multi-epoch measurements of radio-quiet, non-BAL AGNs. BAL AGNs are plotted as filled (radio-quiet) and open (radio-loud) squares. Filled triangles represent radio-loud, non-BAL, sources, and open triangles are radio-unclassified sources. The $\times$ symbol indicates the anomalously X-ray-weak source \#130. Linear fits are represented by a thin continuous line (all the sources), a dotted line (radio-quiet non-BAL sources), and a thick continuous line (radio-quiet non-BAL sources, also excluding source \#130). The short-dashed line is the best fit reported by \citet{just07}. The dot-dashed and long-dashed lines are the best-fit relations found by \citet{gibs08} and \citet{grup10}, respectively, plotted over the limited luminosity ranges analysed in the corresponding works.}
\end{figure}
Radio flux densities at 1.4 GHz from the FIRST radio survey \citep{beck95} are directly available in the SDSS-DR5 Quasar Catalog, where radio sources are associated with SDSS positions adopting a cross-correlation radius of 2 arcsec \citep{schn07}. In a few cases, additional radio information is taken from the NVSS survey \citep{cond98} and/or from the NASA Extragalactic Database (NED). In total, radio information is available for 228 of the 241 sources in Table 1. Following \citet{gibs08}, we assume a radio spectral index $\alpha=-0.8$ to estimate the specific luminosity at 5 GHz. We then calculate the radio-loudness parameter \citep[e.g.,][]{kell89},
\begin{equation}
R^*=L_\nu(5{\rm\,GHz})/L_{2500\AA}\, ,
\end{equation}
\noindent
and classify sources with $\log(R^*)\geq 1$ as radio-loud (RL), marking them with $f_{RL}=1$ in Table 1. Sources without detected radio flux or with $\log(R^*)< 1$ are classified as radio-quiet (RQ) and marked with $f_{RL}=0$. Sources without radio information from FIRST, NVSS, or NED are marked with $f_{RL}=-1$.
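The radio classification step can be sketched as follows (assuming, as in the text, a radio spectral index $\alpha=-0.8$ for the 1.4 GHz $\to$ 5 GHz extrapolation; function names are ours):

```python
import math

def log_radio_loudness(l_1p4ghz, l_2500, alpha_r=-0.8):
    """log10 of the radio-loudness parameter R* = L_nu(5 GHz) / L_nu(2500 A),
    extrapolating from 1.4 GHz with L_nu ~ nu^alpha_r."""
    l_5ghz = l_1p4ghz * (5.0 / 1.4)**alpha_r
    return math.log10(l_5ghz / l_2500)

def is_radio_loud(l_1p4ghz, l_2500):
    """Radio-loud classification: log10(R*) >= 1."""
    return log_radio_loudness(l_1p4ghz, l_2500) >= 1.0
```

With $\alpha=-0.8$ the extrapolation lowers the 1.4 GHz luminosity by a factor $(5/1.4)^{0.8}\simeq 2.8$.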
Eight sources are present in the \citet{gibs08} and \citet{gibs09} catalogues as BAL quasars, and are accordingly marked in Table 1 with $f_{BAL}=1$.
As a first step, we show in Fig. 3 linear least squares fits in which all the available measurements enter with equal weight, so that multi-epoch measurements of the same source are treated as if they belonged to different sources. The thin continuous line is a fit to all the sources, regardless of their radio-loudness and/or BAL characteristics, given by
\begin{equation}
\alpha_{ox} =(-0.137\pm0.013)\log L_{UV}+(2.610\pm0.401)\, .
\end{equation}
\noindent
A second fit, shown as a dotted line, corresponds to the radio-quiet non-BAL sources, which number 193 of the 241 in our sample:
\begin{equation}
\alpha_{ox} =(-0.157\pm0.013)\log L_{UV}+(3.212\pm0.386)\, .
\end{equation}
\noindent
Radio-unclassified sources, marked in Table 1 with $f_{RL}=-1$, are not included in this fit. We have verified that including them would make only a minor difference.
Most of the radio-loud sources in Fig. 3 are located above the fits, as expected, since radio-loud quasars are known to have jet-linked X-ray emission components that generally lead to higher X-ray-to-optical ratios than those of radio-quiet quasars \citep[e.g.,][]{worr87}.
One source, \#130 in Table 1, appears to be very X-ray weak relative to the average correlation, as quantified in Sect. 4.1. This source is discussed further in Sect. 6, and we believe there are reasons to consider it anomalous. We therefore exclude it, obtaining a reference sample of 192 radio-quiet non-BAL sources. The corresponding fit is indicated by a thick continuous line:
\begin{equation}
\alpha_{ox} =(-0.166\pm0.012)\log L_{UV}+(3.489\pm0.377)\, .
\end{equation}
These correlations can be compared with that reported by \citet{gibs08}, shown in Fig. 3 as a dot-dashed line:
\begin{equation}
\alpha_{ox} =(-0.217 \pm 0.036) \log L_{UV} + (5.075 \pm 1.118)\, ,
\end{equation}
\noindent
and with those found by previous authors, which are usually flatter, e.g. that of \citet{just07}, whose fit is shown in Fig. 3 as a dashed line:
\begin{equation}
\alpha_{ox} =(-0.140 \pm 0.007) \log L_{UV} + (2.705 \pm 0.212)\, .
\end{equation}
The analysis of \citet{grup10} is also interesting because it uses simultaneous X-ray and optical measurements from {\it Swift}; it yields a still flatter slope:
\begin{equation}
\alpha_{ox} =(-0.114 \pm 0.014) \log L_{UV} + (1.177 \pm 0.305)\, .
\end{equation}
We note that the relations of \citet{gibs08} and \citet{grup10} are obtained from analyses in limited ranges of UV luminosity and redshift, respectively ($30.2<\log L_{UV}<31.8$, $1.7<z<2.7$) and ($26<\log L_{UV}<31$, $z<0.35$). This suggests a possible dependence of the slope of the $\alpha_{ox}-L_{UV}$ relation on luminosity, redshift, or both, which is discussed further in Sect. 4.2.
We now limit ourselves to our reference sample of 192 sources, and show in Fig. 4 (as open circles) the average values of $L_{UV}$ and $\alpha_{ox}$ for 41 multi-epoch sources, together with the corresponding values for 151 single-epoch sources (black dots). Source \#45 is a known gravitational lens \citep{koch97}. \citet{char00} estimated that its luminosity is amplified by a factor $\sim 15$. We plot this source in Fig. 4 as an open square at the observed luminosity, and deamplified by a factor of 15 as an open circle, connected to the observed point by a dotted line. The parameter $\alpha_{ox}$ is not affected, as gravitational lensing is achromatic.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f4.pdf}}
\caption{$\alpha_{ox}$ as a function of the 2500\AA\ specific luminosity $L_{UV}$, for the 192 radio-quiet non-BAL sources of the reference sample. Multi-epoch measurements of the same sources are averaged and shown as open circles, while black dots refer to single-epoch sources. Source \#45 is a gravitational lens and is shown with both its observed luminosity (as an open square) and its deamplified luminosity (as an open circle). The continuous line shows the least squares fit to the points. Dashed lines show separate fits to the single-epoch (long-dash) and multi-epoch (short-dash) sources.}
\end{figure}
The best-fit relation to the data in Fig. 4, including source \#45 with its deamplified luminosity, is
\begin{equation}
\alpha_{ox} =(-0.178\pm0.014)\log L_{UV}+(3.854\pm0.420)\, .
\end{equation}
\noindent
Separate fits for single-epoch and multi-epoch sources give, respectively, $\alpha_{ox} =(-0.179\pm0.016)\log L_{UV}+(3.863\pm0.482)$ and $\alpha_{ox} =(-0.171\pm0.029)\log L_{UV}+(3.657\pm0.877)$.
\subsection{Dispersion in $\alpha_{ox}$}
We adopt Eq. (13) as our reference $\alpha_{ox}(L_{UV})$ relation and investigate the dispersion of the sources around it. We therefore define the residuals
\begin{equation}
\Delta\alpha_{ox}=\alpha_{ox}-\alpha_{ox}(L_{UV})\, .
\end{equation}
We show in Fig. 5 the histograms of $\Delta\alpha_{ox}$, using the average values of multi-epoch measurements as in Fig. 4. The contour histogram represents all the sources, the filled histogram the reference sample; source \#130 is marked by a cross. The two histograms have standard deviations $\sigma=0.158$ and $\sigma=0.122$, respectively. Source \#130, with $\Delta\alpha_{ox}=-0.60$, deviates by about $5\sigma$ from the reference relation, and appears to be X-ray weaker by a factor of $\sim 40$ than AGNs of the same UV luminosity.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f5.pdf}}
\caption{Histograms of the residuals $\Delta\alpha_{ox}$ (Eq. (14)) for all the sources (contour histogram), and for the reference sample (filled histogram). Source \#130 is marked by a cross. Dispersions for the two samples are $\sigma=0.158$ and $\sigma=0.122$, respectively.}
\end{figure}
The dispersion in our $\Delta\alpha_{ox}$ distribution is comparable to those obtained by, e.g., \citet{stra05}, \citet{just07}, and \citet{gibs08} on the basis of non-simultaneous X-ray and UV data, with values between 0.10 and 0.14. Our result, based on simultaneous data, eliminates a possible cause of dispersion, ``artificial $\alpha_{ox}$ variability''. Since the dispersion is not lower than the previous non-simultaneous estimates, it is probably dominated by other factors affecting the X-ray/UV ratio. These could include: (i) ``intra-source dispersion'', caused by ``intrinsic $\alpha_{ox}$ variability'', i.e., true temporal change in the X-ray/UV ratio of individual sources; and/or (ii) ``inter-source dispersion'', due to intrinsic differences in the average $\alpha_{ox}$ values from source to source, perhaps related to different conditions in the emitting regions.
\subsection{Dependence on $z$ and $L$}
To estimate the possible dependence of $\alpha_{ox}$ on redshift, we perform a partial correlation analysis, correlating $\alpha_{ox}$ with $L_{UV}$ taking account of the effect of $z$, and correlating $\alpha_{ox}$ with $z$ taking account of the effect of $L_{UV}$.
For our reference sample of 192 radio-quiet, non-BAL, sources, we find a Pearson partial correlation coefficient $r_{\alpha L,z}=-0.51$, with a probability $P(>r)=1.3\times 10^{-12}$ for the null hypothesis that $\alpha_{ox}$ and $L_{UV}$ are uncorrelated. The other partial correlation coefficient is $r_{\alpha z,L}=0.05$ with $P(>r)=0.52$, which implies that there is no evidence of a correlation with $z$.
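The first-order partial correlation coefficients used here follow the standard formula $r_{xy\cdot z}=(r_{xy}-r_{xz}r_{yz})/\sqrt{(1-r_{xz}^2)(1-r_{yz}^2)}$; a minimal sketch (stdlib only, without the associated significance test):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1.0 - rxz**2) * (1.0 - ryz**2))
```

Here $x$, $y$, and $z$ would be the per-source values of $\alpha_{ox}$, $\log L_{UV}$, and $z$, in the appropriate order for each of the two tests.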
Our results agree with previous studies \citep{avni86,stra05,stef06,just07}, which also found no evidence of a dependence of $\alpha_{ox}$ on redshift \citep[see however][]{kell07}.
In the upper panel of Fig. 6, we plot the residuals $\alpha_{ox}-\alpha_{ox}(L_{UV})$, Eq. (14), as a function of $z$, which show no correlation ($r=0.027$, $P(>r)=0.703$, $\Delta\alpha_{ox}=(0.005\pm0.014)z+(-0.006\pm0.018)$). In the lower panel, we plot the residuals $\alpha_{ox}-\alpha_{ox}(z)$ as a function of $\log L_{UV}$, after computing the average $\alpha_{ox}-z$ relation, $\alpha_{ox}(z)=(-0.139\pm0.016)z+(-1.394\pm0.022)$. These residuals clearly decrease with luminosity ($r=-0.305$, $P(>r)=2.4\times 10^{-5}$, $\Delta\alpha_{ox}=(-0.067\pm0.015)\log L_{UV}+(2.050\pm0.465)$). Similar results were obtained by \citet{stef06}. These results suggest that the dependence of $\alpha_{ox}$ on $z$ is induced by the intrinsic dependence on $L_{UV}$ through the $L_{UV}-z$ correlation.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f6.pdf}}
\caption{Upper panel: residuals $\alpha_{ox}-\alpha_{ox}(L_{UV})$, as a function of $z$. Lower panel: residuals $\alpha_{ox}-\alpha_{ox}(z)$ as a function of $\log L_{UV}$.}
\end{figure}
According to the fits by \citet{gibs08} and \citet{grup10}, shown in Fig. 3 together with the results of \citet{just07} and our own, the slope of the $\alpha_{ox}-L_{UV}$ relation may be flatter at lower luminosity and/or redshift. We divide our reference sample into two equally populated subsamples, $\log L_{UV}\lessgtr 30.43$, finding $\alpha_{ox} =(-0.137\pm0.029)\log L_{UV}+(2.639\pm0.878)$ for the low luminosity sources and $\alpha_{ox} =(-0.193\pm0.038)\log L_{UV}+(4.319\pm1.182)$ for the high luminosity ones, while the fit for the entire sample is given by Eq. (13). A Student's t-test applied to the low-$L_{UV}$ and high-$L_{UV}$ subsamples gives a 12\% probability that they are drawn from the same parent distribution. A similar result was found by \citet{stef06}.
We similarly divide our sample into two redshift subsamples, $z\lessgtr 1.2$, finding $\alpha_{ox} =(-0.166\pm0.022)\log L_{UV}+(3.491\pm0.650)$ for the low $z$ sources, and $\alpha_{ox} =(-0.225\pm0.033)\log L_{UV}+(5.305\pm1.015)$ for the high $z$ sources. In this case, the Student's t-test gives a 7\% probability that the low-$z$ and high-$z$ subsamples are drawn from the same parent distribution.
This suggests that the slope of the $\alpha_{ox}-L_{UV}$ relation may be $L_{UV}$- and/or $z$-dependent. However, the apparent dependence on $z$ may be an artefact of a true dependence on $L_{UV}$, or vice versa. A sample of sources more evenly distributed in the $L-z$ plane is required to distinguish between these dependences.
\section{Multi-epoch data}
We show in Fig. 7 the tracks of individual sources in the $\alpha_{ox}-L_{UV}$ plane, for the reference sample. Only 41 of 192 sources have multi-epoch information, and most of them exhibit small variations. Some sources (\#73, \#168) have strong variations in both $\alpha_{ox}$ and $L_{UV}$, but nearly parallel to the average $\alpha_{ox}-L_{UV}$ relation, therefore not contributing appreciably to the dispersion in $\Delta\alpha_{ox}$. A few sources (e.g., \#90, \#157, \#225) have appreciable or strong variations perpendicular to the average relation, possibly contributing to the overall dispersion. Figure 8 shows a histogram of the individual dispersions in $\Delta\alpha_{ox}$ for these 41 sources.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f7.pdf}}
\caption{Behaviour of individual radio-quiet non-BAL sources in the $\alpha_{ox}-L_{UV}$ plane. Connected segments show the tracks of multi-epoch sources, while open circles represent the average values of the same sources, labelled with their serial numbers as in Table 1. Small dots refer to single-epoch sources. The straight line is the adopted $\alpha_{ox}-L_{UV}$ relation, Eq. (13).}
\end{figure}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f8.pdf}}
\caption{Histogram of the individual dispersions of $\Delta\alpha_{ox}$ for the 41 radio-quiet non-BAL sources with multi-epoch information.}
\end{figure}
Most sources have data at only 2 epochs; only 9 sources have more, up to a maximum of 9 epochs. The individual variations occur on different timescales, from hours to years, and cannot be directly compared with each other. It is however possible to build an ensemble structure function (SF) describing the variability of a given quantity $A(t)$ at different rest-frame time lags $\tau$. We define it as in \citet{dicl96}
\begin{equation}
SF(\tau)=\sqrt{\pi\over 2}\,\langle|A(t+\tau)-A(t)|\rangle\, .
\end{equation}
\noindent
The factor $\sqrt{\pi/2}$ is introduced to measure the SF in units of standard deviations, and the angular brackets indicate the ensemble average over appropriate bins of time lag. The function $A(t)$ is usually a flux or luminosity in a given spectral band, or its logarithm. Here, we apply the definition of Eq. (15) to both $\alpha_{ox}(t)$ and the residuals $\Delta\alpha_{ox}(t)$. The result is illustrated in Fig. 9 for both functions. Both SFs clearly increase, reaching average variations of up to $\sim 0.07$ at $\sim 1$ yr rest-frame. Unfortunately, the sampling is quite irregular: most sources contribute single points (corresponding to 2 epochs), while the few sources with more epochs have a greater weight in the ensemble statistic. To check whether the increase in the SFs may be due to a single highly variable source, we computed new SFs after removing source \#157, which has a relatively high number of epochs ($n=6$, therefore contributing $n(n-1)/2=15$ SF points), and found a slightly smaller (but still relevant) increase ($\sim 0.06$ at $\sim 1$ yr).
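The ensemble SF computation can be sketched as follows (a minimal sketch; the data layout and the $\log\tau$ binning are our own assumptions, and the normalisation factor of Eq. (15) is left as an argument):

```python
import math
from collections import defaultdict

def ensemble_sf(light_curves, norm=1.0, bin_width=1.0):
    """Ensemble structure function: the mean absolute variation
    |A(t+tau) - A(t)| over all epoch pairs of all sources, binned in
    log10(tau) and multiplied by the chosen normalisation factor.
    light_curves: one [(t, A), ...] sequence per source, t in days."""
    bins = defaultdict(list)
    for lc in light_curves:
        for i in range(len(lc)):
            for j in range(i + 1, len(lc)):
                tau = abs(lc[j][0] - lc[i][0])
                if tau > 0.0:
                    key = math.floor(math.log10(tau) / bin_width)
                    bins[key].append(abs(lc[j][1] - lc[i][1]))
    return {k: norm * sum(v) / len(v) for k, v in sorted(bins.items())}
```

Because every epoch pair of a source contributes one point, a source with $n$ epochs enters with weight $n(n-1)/2$, which is exactly the uneven weighting discussed in the text.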
These values can be compared with the dispersion in the residuals shown in Fig. 5, which is $\sigma=0.122$ for the reference sample. We note that the ensemble variability of $\Delta\alpha_{ox}$ was computed for only the 41 multi-epoch sources, while the filled histogram in Fig. 5 also includes the 151 single-epoch sources. We verified that the dispersions in the residuals for the single-epoch and multi-epoch subsamples are indeed similar: $\sigma=0.122$ and $\sigma=0.119$, respectively.
It therefore appears that variability in $\alpha_{ox}$ could account for a large part of the observed dispersion around the average $\alpha_{ox}-L_{UV}$ correlation. It is reasonable to expect that sources measured at a single epoch have temporal behaviours similar to those described by the SFs of Fig. 9, and that the variations of individual sources during their lifetimes are similar to the variations measured from source to source at random epochs. However, the time-averaged values of individual sources may differ, so that ``inter-source dispersion'' may be present in addition to ``intra-source dispersion'' (see Sect. 4.1). Assuming that other factors contributing to the dispersion can be neglected, the overall variance would then be:
\begin{equation}
\sigma^2=\sigma^2_{\rm intra-source}+\sigma^2_{\rm inter-source}\, .
\end{equation}
Our structure function analysis yields a value of 0.07 for the intra-source dispersion at 1 yr (or 0.06 if we remove the highly variable source \#157), while the total dispersion in the residuals shown in Fig. 5 is $\sigma\sim0.12$. This indicates a $\sim 30\%$ contribution of intra-source dispersion to the total variance $\sigma^2$. However, the SF may increase further at longer time delays, in which case the contribution of intra-source dispersion would be higher, and that of inter-source dispersion correspondingly constrained toward lower values.
Other factors may affect the dispersion, for example: (i) errors in the extrapolations of the UV and X-ray luminosities, (ii) differences in galactic absorption, (iii) spurious inclusion of unknown BAL sources. From Fig. 2, it appears that a few sources have SEDs with anomalous slopes, and extrapolations based on the average SED of \citet{rich06} yield poor luminosity estimates in these cases; however, this applies only to a small fraction of the sample. For the X-rays, we adopted $\Gamma=1.7$ to be consistent with the fluxes catalogued in the XMMSSC; a distribution of $\Gamma$ values would introduce extra dispersion. All these factors would probably contribute an additional term to Eq. (16), constraining more tightly the contribution of the inter-source dispersion and therefore increasing the relative weight of variability and intra-source dispersion.
A finer sampling of the SF and a homogeneous weighting of the individual sources are however needed to quantify more precisely the contribution of variability, and the fraction that remains to be explained by other factors. Simultaneous UV and X-ray observations of a homogeneous sample no larger than ours would be sufficient, provided each source is observed at $\sim 10$ epochs spanning a monitoring time of a few years.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f9.pdf}}
\caption{Rest-frame structure function of $\alpha_{ox}(t)$ (upper panel) and the residuals $\Delta\alpha_{ox}(t)$ (lower panel) for the 41 radio-quiet non-BAL sources with multi-epoch data. The crosses represent the contributions of the variations in individual sources. All the points marked by open squares refer to the variable source \#157. The filled circles connected by continuous lines represent the ensemble structure function for the set of 41 sources, in bins of $\Delta\log\tau=1$. The open circles connected by dashed lines correspond to the remaining set of 40 sources, after removing source \#157.}
\end{figure}
\section{Peculiar sources}
\subsection{2XMM J112611.6+425245}
We computed the X-ray luminosity and the $\alpha_{ox}$ spectral index starting from the X-ray flux in the 2-4.5 keV band (XMM-Newton band 4), as described in Sect. 3. Since 2XMM J112611.6+425245 (source \#130) is X-ray weak by a factor of $\sim 40$, we analysed the X-ray information in the various XMM-Newton bands available in the XMMSSC catalogue, and found this source to be even weaker in the softer 1-2 keV band (band 3), with a very high hardness ratio between the two bands, $HR3=(CR4-CR3)/(CR4+CR3)=0.52$, where $CR3$ and $CR4$ are the count rates in the two bands. We then plot the sources of Table 1 in the $\Delta\alpha_{ox}-HR3$ plane, to see whether X-ray weak sources are related to particular values of the X-ray hardness ratio. This is shown in Fig. 10: most sources are concentrated in a region with ``standard'' values around $\Delta\alpha_{ox}=0$ and $HR3\simeq -0.4$, while a few sources lie at greater distances, along tails in various directions. Source \#130, indicated by a $\times$ sign in the figure, is the most extreme, being both very X-ray weak and very hard.
\citet{hu08} report this source (which has a redshift $z=0.156$) in their study of FeII emission in quasars, where it is shown that the systematic inflow velocities of FeII emitting clouds are inversely correlated with the Eddington ratio. The source 2XMM J112611.6+425245 has one of the highest measured inflow velocities, $v_{Fe}\sim 1700$ km s$^{-1}$. \citet{ferl09} discuss the high column densities, $N_H\sim 10^{22}-10^{23}$ cm$^{-2}$, needed to account for the inflows in this class of quasars, and the possibility that UV or X-ray absorption is associated with the infalling component.
The source 2XMM J112611.6+425245 also has a high $HR4=(CR5-CR4)/(CR5+CR4)=0.63$, $CR5$ being the count rate in the 4.5-12 keV energy band. High values of $HR3$ and $HR4$ are used by \citet{nogu09} to select, on the basis of a modelling of the direct and scattered emission, a sample of AGNs hidden by geometrically thick tori. The hardness ratios of this source imply that it is a good candidate for that class of AGNs.
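The hardness ratios used in this section are simple count-rate contrasts between adjacent bands; as a sketch:

```python
def hardness_ratio(cr_hard, cr_soft):
    """XMM-style hardness ratio between two adjacent bands,
    e.g. HR3 = (CR4 - CR3) / (CR4 + CR3)."""
    return (cr_hard - cr_soft) / (cr_hard + cr_soft)

# An HR of 0.52 corresponds to a count-rate ratio between the two bands
# of (1 + HR) / (1 - HR) ~ 3.2.
```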
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{f10.pdf}}
\caption{Plot of the sources of Table 1 in the plane $\Delta\alpha_{ox}-HR3$. Symbols as in Fig. 3. Sources with multi-epoch data are represented by their average values, except source \#157, whose strong variations are also shown by the connected segments.}
\end{figure}
\subsection{2XMM J123622.9+621526}
Source \#157 is one of those with the greatest variance in $\Delta\alpha_{ox}$, and it exhibits even more extraordinary variations in $HR3$, shown in Fig. 10 by a broken line. It lies in the Chandra Deep Field North, and its X-ray spectrum, analysed by \citet{baue04}, classifies it as an unobscured quasar. While its UV luminosity has remained nearly constant, its $\Delta\alpha_{ox}$ and $HR3$ have varied by 0.22 and 0.76, respectively, between epochs 5 and 6 in Table 1, which are separated by 20 days in the observed frame, i.e., less than a week in the rest frame at redshift $z=2.597$.
\section{Discussion}
The behaviour of $\alpha_{ox}$, i.e., its dependence on luminosity and redshift, its dispersion, and its variability, is to be considered a symptom of the relation between disk and corona emission and of their variability.
It is generally believed that variable X-ray irradiation can drive optical variations by means of variable heating of the internal parts of the disk on relatively short timescales, days to weeks, while intrinsic disk instabilities in the outer parts of the disk dominate on longer timescales, months to years, propagating inwards and modulating X-ray variations in terms of Compton up-scattering in the corona \citep{czer04,arev06,arev09,papa08,mcha10}.
The structure functions of the light curves increase on long timescales both in the optical \citep[e.g.,][]{dicl96,vand04,baue09} and X-rays \citep[e.g.,][Vagnetti et al., in prep.]{fior98}. This, however, does not imply that the $\alpha_{ox}$ SF also increases with time lag. Larger changes (on long time lags) in both X-ray and UV fluxes may occur without changes in the spectral shape (i.e., with constant $\alpha_{ox}$). Our results shown in Fig. 9 indicate that this is not the case, i.e., that slope changes are indeed larger on longer timescales.
Moreover, it is evident from Fig. 9 that most of the dispersion around the $\alpha_{ox}-L_{UV}$ relation is due, in the present sample, to variations on timescales from months to years, which, according to the general picture outlined above, are associated with optically driven variations.
The $\alpha_{ox}$ structure function does not distinguish between the hardening or softening of the optical to X-ray spectrum during brightening. This is instead described by the spectral variability parameter $\beta=\partial\alpha/\partial\log F$ \citep{trev01,trev02}, which can be adapted to the optical-X-ray case to become
\begin{equation}
\beta_{ox}={\delta\alpha_{ox}\over\delta\log L_{UV}}\,,
\end{equation}
\noindent
where $\beta_{ox}$ is the slope of the correlated variations $\delta\alpha_{ox}$ and $\delta\log L_{UV}$, and describes whether a source hardens or softens when it brightens, i.e., whether the X-ray luminosity increases more or less than the optical. For example, variations of a single source parallel to the $\alpha_{ox}-L_{UV}$ anticorrelation have a negative $\beta_{ox}$, while variations perpendicular to the correlation have $\beta_{ox}>0$. Both behaviours can be seen in Fig. 7.
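An ensemble estimate of $\langle\beta_{ox}\rangle$ can be built from the pairwise variations of the multi-epoch sources; a minimal sketch (the data layout, a list of $(\log L_{UV}, \alpha_{ox})$ epochs per source, is our own assumption):

```python
def beta_ox_pairs(epochs):
    """Pairwise estimates of beta_ox = delta(alpha_ox) / delta(log L_UV)
    for one source, given as a list of (log_luv, alpha_ox) epochs."""
    estimates = []
    for i in range(len(epochs)):
        for j in range(i + 1, len(epochs)):
            dlog_l = epochs[j][0] - epochs[i][0]
            if dlog_l != 0.0:  # skip pairs with no luminosity change
                estimates.append((epochs[j][1] - epochs[i][1]) / dlog_l)
    return estimates

def ensemble_beta_ox(sources):
    """Ensemble average <beta_ox> over all sources and epoch pairs."""
    all_est = [b for src in sources for b in beta_ox_pairs(src)]
    return sum(all_est) / len(all_est)
```

A negative pairwise estimate corresponds to a track parallel to the anticorrelation (softening when brighter), a positive one to a perpendicular track.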
Of course, the different behaviours of the sources in the $\alpha_{ox}-L_{UV}$ plane may simply correspond to different time samplings.
Constraining physical models of the primary variability source and of the disk-corona coupling would require the analysis of $\beta_{ox}$ as a function of the time lag.
With the present sparse sampling, however, such an analysis cannot attain statistical reliability. We can nevertheless consider more conventional scenarios, and note that since most of the variability in the present sample occurs on long ($\sim 1$ yr) timescales, it is presumably associated with optically driven variations.
Considering all the measured variations $\delta\alpha_{ox}$ and $\delta\log L_{UV}$, we obtain the ``ensemble'' average $\langle\beta_{ox}\rangle=-0.240$.
The negative sign implies that, on average, a spectral steepening occurs in the brighter phase. This is, in fact, consistent with larger variations in the UV band, driving the X-ray variability.
The value of $\langle\beta_{ox}\rangle$ can be compared with the average slope of the $\alpha_{ox}-L_{UV}$ relation, Eq. (13): the UV excess in the brighter phase (steepening) is larger than the average UV excess of bright objects with respect to faint ones.
Finally, we emphasise that, despite its limitations, the present analysis illustrates the feasibility of an ensemble analysis of the $\alpha_{ox}-L_{UV}$ correlation, e.g., by considering the $\beta_{ox}$ parameter as a function of time lag. What is presently missing is adequate simultaneous X-ray and UV sampling, at relatively short time lags, of a statistical AGN sample. An ensemble analysis may provide important constraints even when the total number of observations does not allow a cross-correlation analysis of the X-ray and UV variations of individual sources.
We summarise our main results as follows:
\begin{itemize}
\item we have studied the $\alpha_{ox}-L_{UV}$ anticorrelation with simultaneous data extracted from the XMM-Newton Serendipitous Source catalogues;
\item we confirm the anticorrelation, with a slope ($-0.178$) slightly steeper than that of \citet{just07};
\item we do not find evidence for a dependence of $\alpha_{ox}$ on redshift, in agreement with previous authors \citep[e.g.][]{avni86,stra05,stef06,just07};
\item there appears to be a flatter slope to the anticorrelation at low luminosities and low redshifts, in agreement with previous results by \citet{stef06};
\item the dispersion in our simultaneous data ($\sigma\sim 0.12$) is not significantly smaller than in previous non-simultaneous studies \citep{stra05,just07,gibs08}, indicating that ``artificial $\alpha_{ox}$ variability'' introduced by non-simultaneity is not the main cause of dispersion;
\item ``intrinsic $\alpha_{ox}$ variability'', i.e., true variability in the X-ray to optical ratio, is important, and accounts for $\sim 30\%$ of the total variance, or more;
\item ``inter-source dispersion'', due to intrinsic differences in the average $\alpha_{ox}$ values from source to source, is also important;
\item the dispersion introduced by variability is mostly caused by the long timescale variations, which are expected to be dominated by the optical variations; the average spectral softening observed in the bright phase is consistent with this view;
\item distinguishing the trends produced by optical or X-ray variations may be achievable using the ensemble analysis of the spectral variability parameter $\beta_{ox}$ as a function of time lag; crucial information would be provided by wide field simultaneous UV and X-ray observations with relatively short (days-weeks) time lags.
\end{itemize}
\begin{acknowledgements}
We thank P. Giommi and A. Paggi for useful discussions.
This research has made use of the XMM-Newton Serendipitous Source Catalogue, which is a collaborative project involving the whole Science Survey Center Consortium.
This research has made use of the XMM-OM Serendipitous Ultra-violet Source Survey (XMMOMSUSS), which has been created at the University College London's (UCL's) Mullard Space Science Laboratory (MSSL) on behalf of ESA and is a partner resource to the 2XMM serendipitous X-ray source catalogue.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This work makes use of EURO-VO software, tools or services. The EURO-VO has been funded by the European Commission through contract numbers RI031675 (DCA) and 011892 (VO-TECH) under the 6th Framework Programme and contract number 212104 (AIDA) under the 7th Framework Programme.
S.T. acknowledges financial support through Grant ASI I/088/06/0.
\end{acknowledgements}
\bibliographystyle{aa}
\def\section{
\setcounter{equation}{0}
\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\large\bf}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus
-.2ex}{1.5ex plus .2ex}{\normalsize\bf}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-3.25ex plus
-1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize}}
\newsavebox{\eqlabel}
\makeatletter
\newlength{\numblen}
\newsavebox{\eqnumb}
\def\@eqnnum{\savebox{\eqnumb}{\rm (\theequation)}%
\settowidth{\numblen}{\usebox{\eqnumb}}%
\makebox[\numblen][l]{\usebox{\eqnumb}~~~\usebox{\eqlabel}}}
\makeatother
\newenvironment{equationwithlabel}[1]{ %
\begin{equation}\label{#1} }{\end{equation}}
\newcommand{\beql}[1]{\begin{equationwithlabel}{#1}}
\newcommand{\eeql}{\end{equationwithlabel}}
\newcommand{{\bar\psi}}{{\bar\psi}}
\newcommand{{\psi^\ad}}{{\psi^\dagger}}
\newcommand{{\psi_L}}{{\psi_L}}
\newcommand{{\psi_R}}{{\psi_R}}
\newcommand{{\psi_L^\ad}}{{\psi_L^\dagger}}
\newcommand{{\psi_R^\ad}}{{\psi_R^\dagger}}
\newcommand{{\dot a}}{{\dot a}}
\newcommand{{\dot b}}{{\dot b}}
\newcommand{{\dot c}}{{\dot c}}
\newcommand{{\bf p}}{{\bf p}}
\newcommand{{\bf k}}{{\bf k}}
\newcommand{\mathfrak{F}} % if this won't work, try \frak{F} or {{\cal F}}{\mathfrak{F}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{m_s}{m_s}
\newcommand{M_s}{M_s}
\newcommand{{M^{-1}}}{{M^{-1}}}
\newcommand{{q_+}}{{q_+}}
\newcommand{{q_-}}{{q_-}}
\newcommand{{q\!\!\!/}_+}{{q\!\!\!/}_+}
\newcommand{{q\!\!\!/}_-}{{q\!\!\!/}_-}
\newcommand{{\vec q}^{\,2}}{{\vec q}^{\,2}}
\begin{document}
\title{\bf Unlocking Color and Flavor\\
in\\ Superconducting Strange Quark Matter}
\newcommand{\normalsize}{\normalsize}
\author{
Mark Alford, J\"urgen Berges, Krishna Rajagopal \\[0.5ex]
{\normalsize Center for Theoretical Physics,}\\
{\normalsize Massachusetts Institute of Technology, Cambridge, MA 02139 }
}
\newcommand{\preprintno}{\normalsize
MIT-CTP-2844
}
\date{\preprintno}
\begin{titlepage}
\maketitle
\def\thepage{}
\begin{abstract}
We explore the phase diagram of strongly interacting matter with
massless $u$ and $d$ quarks as a function of the strange quark mass
$m_s$ and the chemical potential $\mu$ for baryon number. Neglecting
electromagnetism, we describe the different baryonic and quark matter
phases at zero temperature. For quark matter, we support our
model-independent arguments with a quantitative analysis of a model
which uses a four-fermion interaction abstracted from single-gluon
exchange. For any finite $m_s$, at sufficiently large $\mu$ we find
quark matter in a color-flavor locked state which leaves a global
vector-like $SU(2)_{{\rm color}+L+R}$ symmetry unbroken. As a
consequence, chiral symmetry is always broken in sufficiently dense
quark matter. As the density is reduced, for sufficiently large $m_s$
we observe a first order transition from the color-flavor locked phase
to a color superconducting phase analogous to that in two flavor QCD.
At this unlocking transition chiral symmetry is restored. For
realistic values of $m_s$ our analysis indicates that chiral symmetry
breaking may be present for all densities down to those characteristic
of baryonic matter. This supports the idea that quark matter and
baryonic matter may be continuously connected in nature. We map
the gaps at the quark Fermi surfaces in the high density color-flavor
locked phase onto gaps at the baryon Fermi surfaces at
low densities.
\end{abstract}
\end{titlepage}
\renewcommand{\thepage}{\arabic{page}}
\section{Introduction and Phase Diagram}
\label{sec:int}
Strongly interacting matter at high baryon number density and low
temperature is far less well understood than strongly interacting
matter at high temperature and zero baryon number density. At high
temperatures, the symmetry of the lowest free energy state is not in
dispute, calculations using lattice gauge theory are bringing the
equilibrium thermodynamics under reasonable quantitative control, and
even non-equilibrium dynamical questions are being addressed.
At high densities, we are still learning about the symmetries
of the lowest free energy state. In addition to being of
fundamental interest, an understanding of the symmetry properties
of dense matter can be expected to inform our understanding
of neutron star astrophysics and perhaps also heavy
ion collisions which achieve high baryon densities without
reaching very high temperatures.
At high densities and low
temperatures, the relevant degrees of freedom are those which involve
quarks with momenta near the Fermi surface(s). The presence of an
arbitrarily weak attraction between pairs of quarks results in the
formation of a condensate of quark Cooper pairs. Pairs of quarks cannot
be color singlets, and in QCD with two
flavors of massless quarks the Cooper pairs form in the
color ${\bf \bar 3}$
channel.\cite{Barrois,BailinLove,ARW2,RappETC,BergesRajagopal}
The resulting condensate gives gaps to quarks with two of three
colors and
breaks
the local color symmetry
$SU(3)_{\rm color}$ to an $SU(2)_{\rm color}$
subgroup.
The breaking of a gauge symmetry
cannot be characterized by a gauge invariant local
order parameter which vanishes on one side of
a phase boundary. The superconducting phase can
be characterized rigorously only by its global symmetries.
In QCD with two flavors, the Cooper pairs are $ud-du$ flavor
singlets and, in particular, the global flavor
$SU(2)_L \times SU(2)_R$
symmetry is left intact.
There is an
unbroken global symmetry which plays the role of baryon number
symmetry, $U(1)_B$.
Thus, no global symmetries are broken and the
only putative Goldstone bosons are those five which become the
longitudinal parts of the five gluons which
acquire masses.\cite{ARW2}\footnote{There is
also an unbroken gauged symmetry which plays the role of electromagnetism.
Also, the third color quarks can condense,\cite{ARW2} as we
discuss below, but the resulting gap is much smaller.}
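Schematically, up to normalization and with the unpaired color direction conventionally chosen as the third, the two-flavor condensate has the form

```latex
\begin{equation}
\langle q^\alpha_i \, C\gamma_5\, q^\beta_j \rangle
\;\propto\; \epsilon^{\alpha\beta 3}\,\epsilon_{ij}\ ,
\end{equation}
```

antisymmetric in color, so that $SU(3)_{\rm color}\to SU(2)_{\rm color}$, and a flavor singlet, so that $SU(2)_L\times SU(2)_R$ is untouched.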
In QCD with three flavors of massless quarks
the Cooper pairs {\it cannot}
be flavor singlets, and both color and flavor symmetries are
necessarily broken. The symmetries of the phase which
results have been analyzed in Ref.~\cite{ARW3}.
The attractive channel favored by one-gluon
exchange exhibits ``color-flavor locking''.
It locks $SU(3)_L$ flavor rotations to $SU(3)_{\rm color}$,
in the sense that the condensate is not symmetric under
either alone, but is symmetric under the
simultaneous $SU(3)_{{\rm color}+L}$ rotations.
The condensate
also locks $SU(3)_R$ rotations to $SU(3)_{\rm color}$,
and since color is a vector symmetry
the chiral $SU(3)_{L-R}$ symmetry is broken.
Thus, in quark matter with three massless quarks, the
$SU(3)_{\rm color}\times SU(3)_{L}\times
SU(3)_{R}\times U(1)_B$ symmetry is broken down to the global
diagonal $SU(3)_{{\rm color}+V}$ subgroup.
There is also an unbroken gauged $U(1)$ symmetry (under which
all quarks have integer charges) which plays the role
of electromagnetism.
All nine quarks have a gap.
All eight gluons get a mass. There are nine
massless Nambu Goldstone excitations of the condensate of Cooper pairs
which result from the breaking of the axial $SU(3)_A$ and
baryon number $U(1)_B$.
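Schematically, the color-flavor locked condensate responsible for this pattern can be written, following Ref.~\cite{ARW3} (up to normalization, with $\kappa_1$, $\kappa_2$ labelling the two allowed color-flavor contractions), as

```latex
\begin{equation}
\langle q^\alpha_i \, C\gamma_5\, q^\beta_j \rangle
\;\propto\; \kappa_1\, \delta^\alpha_i \delta^\beta_j
\;+\; \kappa_2\, \delta^\alpha_j \delta^\beta_i\ ,
\end{equation}
```

which is invariant under neither color nor flavor rotations separately, but only under the simultaneous vector rotations of the unbroken $SU(3)_{{\rm color}+V}$.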
We see that cold
dense quark matter has rather different
global symmetries for $m_s=0$ than
for $m_s=\infty$.
A nonzero strange quark mass explicitly breaks the
flavor $SU(3)_V$ symmetry. As a consequence, color-flavor locking
with an unbroken global $SU(3)_{{\rm color}+V}$ occurs only for
$m_s\equiv 0$. Instead, for nonzero but sufficiently small strange
quark mass we expect, and find, color-flavor locking
which leaves a global $SU(2)_{{\rm color}+V}$ group unbroken.
As $m_s$ is increased from zero to infinity, there has to be some
value $m_s^{\rm unlock}$ at which color and flavor rotations
are unlocked, and
the full $SU(2)_L \times SU(2)_R$ symmetry is restored.
We argue on general grounds in Section \ref{sec:general} that
this unlocking phase transition must be of first order.
In subsequent Sections, we
analyze this transition
quantitatively in a model using a four-fermion interaction
with quantum numbers abstracted from single-gluon exchange.
{}From our analysis of the unlocking transition, we conclude that
for realistic values of the strange quark mass
chiral symmetry breaking may be present for
densities all the way down to those characteristic of baryonic matter.
This raises the possibility that
quark matter and baryonic matter may be continuously
connected in nature, as Sch\"afer and Wilczek
have conjectured
for QCD with three massless quarks.\cite{SchaeferWilczek}
We use our calculations of the properties of
color-flavor locked superconducting
strange quark matter to map the gaps due to pairing at the quark
Fermi surfaces onto gaps
due to pairing at the baryon Fermi surfaces in
superfluid baryonic matter consisting of nucleons,
$\Lambda$'s, $\Sigma$'s,
and $\Xi$'s. (See Section~\ref{sec:cont}).
We argue that color-flavor
locking will always occur
for sufficiently large chemical potential, for
any nonzero, finite $m_s$. We make
this argument by first using our results as a guide
to quark matter at moderate
densities and then using them to normalize
Son's model-independent analysis valid at
very high densities.\cite{Son}
As a consequence of color-flavor locking,
chiral symmetry is spontaneously broken even at asymptotically
high densities, in sharp contrast to the well established
restoration of chiral symmetry at high temperature.
\begin{figure}[thb]
\epsfxsize=5in
\begin{center}
\hspace*{0in}
\epsffile{phasediagram.eps}
\end{center}
\caption{
Conjectured phase diagram for QCD with two massless quarks
and a strange quark at zero temperature.
The global symmetries of each phase are labelled.
The regions of the phase diagram labelled
2SC, 2SC+s and CFL denote color superconducting quark matter phases.
The Figure is described at length in the text.
}
\label{fig:phasediagram}
\end{figure}
Our goal in this paper is a consistent
picture of the symmetries of cold dense quark matter with
massless $u$ and $d$ quarks as a function of
the strange quark mass $m_s$ and the chemical potential $\mu$.
We take $\mu$ to be the chemical potential for quark number,
one third that for baryon number. $m_s$ is the current
quark mass; we will refer to the $\mu$-dependent constituent
quark mass as $M_s(\mu)$.
Figure 1 summarizes our conjecture for the zero temperature
phase diagram of QCD as a function
of $m_s$ and $\mu$. (We work at zero temperature throughout
this paper.)
The rest of this section can be read as a description
of this Figure. Lines in the diagram separate phases which
differ in their global symmetries. This means that each line
describes a distinction which can be associated with
a local order parameter which vanishes
on one side of the line.
In each region of the diagram, we list
the unbroken global symmetries of the corresponding phase.
We characterize the phases using the $SU(2)_L\times SU(2)_R$
flavor rotations of the light quarks, and the
$U(1)_S$ rotations of the strange quarks.\footnote{The
$U(1)_B$ symmetry associated
with baryon number is a combination of $U(1)_S$, a $U(1)$ subgroup of
isospin, and the gauged $U(1)_{\rm EM}$ of electromagnetism.
Therefore, in our analysis of the global symmetries,
once we have analyzed isospin and strangeness,
considering baryon number adds nothing new.}
In subsequent Sections of this paper, we
explore those
phases (labelled 2SC, 2SC+s and CFL)
which extend to high enough density that they
can be described as superconducting quark matter.
We analyze these phases, and the unlocking phase transition
which separates them, quantitatively in a
model in which quarks interact via a four-quark interaction
modelled on that induced
by single-gluon exchange. Certainly, our model is not
expected to be a valid
description for QCD at nuclear matter density,
where confinement plays an important role. That part of the
diagram which describes the symmetry properties
of different phases of baryonic matter is
therefore not derived from our model. It is
conjectural but plausible.
In Figure 1 and throughout this paper, we
neglect the small $u$ and $d$ current quark masses.
The light quark masses have no
substantial influence on the condensation of quark Cooper
pairs.\cite{BergesRajagopal,PisarskiRischke1OPT} We also ignore the
effects of electromagnetism throughout.
We assume that wherever a baryon Fermi surface is
present, baryons always pair at zero temperature.
To simplify our analysis, we assume that baryons always
pair in channels which preserve rotational invariance,
breaking internal symmetries such as isospin if necessary.
We now explain the features shown in Figure 1
by beginning at $\mu=0$ (in vacuum) and describing the
phase transitions which occur as $\mu$ is increased at constant $m_s$.
We do this twice, first with a value of $m_s$ large enough that immediately above deconfinement $\mu$ is still less than $m_s$ and no strange quarks are yet present.
For $\mu=0$ the density is
zero; isospin and strangeness are unbroken; Lorentz symmetry is
unbroken; chiral symmetry is broken.
Above a first order
transition\footnote{Discussed in Ref.\ \cite{Halasz}.}
at an onset chemical potential $\mu_{\rm o}\sim 300~{\rm MeV}$,
one finds nuclear matter (``nuclear'' in Figure~\ref{fig:phasediagram}).
Lorentz symmetry is broken, leaving only
rotational symmetry manifest.
Chiral symmetry is thought to be
broken, although the
chiral condensate $\langle \bar q q \rangle$ is expected to be reduced
from its vacuum value.
In the nuclear matter phase, we expect an instability of the nucleon
Fermi surfaces to lead to Cooper pairing. We assume that
(as is observed in nuclei) the pairing is
$pp$ and $nn$, breaking
isospin.
Since there are no
strange baryons present, $U(1)_S$
is unbroken.
At the large value of $m_s$ we are currently describing,
when $\mu$ is increased above $\mu_{\rm V}$, we
find the ``2SC'' phase of color-superconducting
matter consisting of up and down quarks only, described
in Refs.~\cite{Barrois,BailinLove,ARW2,RappETC}.
A nucleon description is no longer appropriate.
The light quarks pair in isosinglet channels. $SU(2)_V$
is unbroken. $SU(2)_A$ is unbroken.
Quarks of two colors
pair in a Lorentz singlet channel; quarks carrying the
third color
can form an axial vector (ferromagnetic) condensate, which
breaks rotational invariance. The associated gap
is of order a keV or much less\cite{ARW2} and we neglect
this condensate in Figure 1.\footnote{In a mean-field
treatment, one gluon exchange
is neither attractive nor repulsive in the channel
which leads to the ferromagnetic condensate while the instanton
interaction is only weakly attractive. The
ferromagnetic condensate is therefore fragile in the sense
that this channel could easily be rendered repulsive
by other interactions.\cite{ARW2} If that were the case,
the third color quarks would presumably form Cooper pairs
with nonzero orbital angular momentum, and an even smaller gap.}
The phase transition at $\mu_V$ is first
order
\cite{ARW2,RappETC,BergesRajagopal,BJW2,PisarskiRischke1OPT,CarterDiakonov}
and is characterized by a competition between the chiral
$\langle \bar q q\rangle$ condensate and the superconducting
$\langle q q \rangle$ condensate.\cite{BergesRajagopal,CarterDiakonov}
As the chemical potential is increased further, when $\mu$
exceeds the constituent strange quark mass $M_s(\mu)$
a strange quark Fermi surface forms, with a Fermi momentum
far below that for the light quarks. We denote the resulting
phase ``2SC+s''.
Light and strange quarks do not pair with each other, because
their respective Fermi momenta are so different
(see Section \ref{sec:general}).
The strange Fermi
surface is presumably nevertheless unstable. The resulting
$ss$ condensate must be constructed from Cooper pairs which are
either color ${\bf 6}$, or have spin $1$, or have nonzero
angular momentum, or must be $\langle s C \gamma_4 s \rangle$,
which is symmetric in Dirac indices.
Each of these options leads to small gaps, for different
reasons: One-gluon exchange
is repulsive in rotationally invariant
color ${\bf 6}$ channels and the $\langle s C \gamma_4 s \rangle$
channel, and although
other interactions may overcome this and result in a net
attraction, this is likely to be much weaker than the
attraction in the dominant color ${\bf \bar 3}$ channels. Gaps
involving Cooper pairs with nonzero $J$ tend to be
significantly suppressed because not all quarks at the Fermi surface
participate.\cite{ARW2}
Thus, although $U(1)_S$ will be broken
in the ``2SC+s'' phase by an $ss$ condensate in {\it some}
channel, we expect that the resulting gaps will be very small.
In our analysis below we therefore
neglect the difference between the 2SC and 2SC+s
phases.
Finally, when the chemical potential is high enough
that the Fermi momenta for the strange and light quarks
become comparable, we cross the first order locking
transition described in detail
in the next Sections, and find
the color-flavor locked (CFL) phase.
There is an unbroken global symmetry constructed
by locking the $SU(2)_V$ isospin rotations and an
$SU(2)$ subgroup of color. Chiral symmetry is once again broken.
We now describe the sequence of phases which arise
as $\mu$ is increased,
this time for a value of $m_s$
small enough that strange baryonic matter
forms below the deconfinement density.
At $\mu_{\rm o}$, one enters the nuclear matter phase,
with the familiar
$nn$ and $pp$ pairing at the neutron and proton Fermi surfaces
breaking isospin.
The $\Lambda$, $\Sigma$ and $\Xi$ densities are
still zero, and strangeness is unbroken. At a somewhat larger
chemical potential, we enter the strange baryonic matter phase, with
Fermi surfaces for the $\Lambda$ and $\Sigma$. These pair with themselves in
spin singlets, breaking $U(1)_S$. This phase is labelled ``strange
baryon'' in Figure 1. The global symmetries $SU(2)_L\times SU(2)_R$
and $U(1)_S$ are all broken.
As $\mu$ rises, one
finds yet another onset at which the $\Xi$ density becomes
nonzero. This breaks no new symmetries, and so is not shown in the Figure.
Note that kaon condensation\cite{KaplanNelson} breaks $U(1)_S$, and $SU(2)_V$.
The only phase in the diagram in which this occurs is that which
we have labelled the
strange baryon phase. Thus, if kaon condensation occurs, by definition it
occurs within this region of the diagram.
If kaon condensation is favored, this will
tend to enlarge the region of the diagram within which $U(1)_S$ and
$SU(2)_V$ are both broken.
We can imagine two possibilities for what happens next as $\mu$ increases
further.
(1) Deconfinement: the baryonic Fermi surface is replaced by
$u,d,s$ quark Fermi
surfaces, which are unstable against pairing, and
we enter the CFL phase, described above. Isospin is locked to color and
$SU(2)_{{\rm color}+V}$ is restored, but chiral symmetry remains broken.
(2) No deconfinement: the Fermi momenta of all of the octet
baryons are now similar enough that pairing between baryons with
differing strangeness becomes possible. At this point,
isospin is restored: the baryons pair in rotationally
invariant isosinglets
($p\Xi^-$, $n\Xi^0$, $\Sigma^+ \Sigma^-$, $\Sigma^0\Sigma^0$, $\Lambda \Lambda$).
The interesting point is that scenario (1) and scenario (2) are
indistinguishable.
Both look like the ``CFL'' phase of the figure:
$U(1)_S$ and chirality are broken, and there is an
unbroken vector $SU(2)$. This is the ``continuity of quark and hadron matter''
described by Sch\"afer and Wilczek \cite{SchaeferWilczek}.
We conclude that for low enough strange quark mass, $m_s<m_s^{\rm cont}$, there
may be a region where sufficiently dense baryonic matter has the same
symmetries as quark
matter, and there need not be any
phase transition between them. In Section 6 we use this observation to
construct a mapping between
the gaps we have calculated at the Fermi surfaces in
the quark matter phase and gaps at the baryonic
Fermi surfaces at lower densities.
All of the
qualitative features shown in Figure 1
follow from the above discussion of the small and large $m_s$
regimes, with one exception. In Figure 1, a transition between
two flavor nuclear matter and three flavor quark matter (in the 2SC+s
phase) occurs only for a range of values of $m_s$ above $m_s^{\rm cont}$.
However, it may be that the strange baryon
phase ends {\it below} $m_s^{\rm cont}$. One would then have
a transition
between two flavor nuclear matter and three flavor quark
matter (in either the 2SC+s phase or the CFL phase)
for a range of $m_s$ values which extends both above and below
$m_s^{\rm cont}$.
Determining
the extent of the strange baryon phase requires a detailed
analysis of strange baryonic matter, including the possibility
of kaon condensation. This is not our goal here.
This concludes our overview of the
qualitative features of the phase diagram for low temperature
strongly interacting matter. In subsequent sections
we analyze the 2SC and CFL phases and
the unlocking transition more quantitatively,
and use the properties of the CFL phase to make
predictions for properties of baryonic
matter in which $U(1)_S$ is broken while
$SU(2)_V$ is unbroken.
\section{Model Independent Features of the Unlocking Phase Transition}
\label{sec:general}
In this section, we give a model independent argument that the
unlocking phase transition between the CFL and 2SC phases
in Figure 1 must be first order.
For any $m_s\neq 0$, transformations in $SU(3)_A$ which
involve the strange quark are explicitly not symmetries.
The CFL and 2SC phases are distinguished by whether
the chiral $SU(2)_A$ rotations involving
only the $u$ and $d$ quarks are or are not spontaneously
broken. As we will make clear in subsequent
sections, the unlocking transition
is associated with the vanishing of those diquark
condensates which pair a strange quark with either an up or
a down quark. We denote the
resulting gaps $\Delta_{us}$ for simplicity.
In the absence of any $\Delta_{us}$ gap,
the only
Cooper pairs are those involving pairs of
light quarks, or pairs of strange quarks.
The light quark
condensate is unaffected by the strange quarks, and
behaves as in a theory with only two flavors of
quarks. Chiral symmetry is unbroken.
We will show explicitly below that when $\Delta_{us}\neq 0$,
the interaction between the light quark condensates and
the mixed (light and strange) condensate results in the
breaking of two-flavor chiral symmetry $SU(2)_A$ via the locking of
$SU(2)_L$ and $SU(2)_R$
flavor symmetries to an $SU(2)$ subgroup of color. This
color-flavor locking mechanism leaves a global
$SU(2)_{{\rm color}+V}$ group unbroken.
\begin{figure}[t]
\epsfxsize=3.5in
\begin{center}
\hspace*{0in}
\epsffile{fermi.eps}
\end{center}
\caption{
How the strange quark mass disrupts a $u$-$s$ condensate. The strange
quark (upper curve) and light quark (straight line) dispersion
relations are shown, with their Fermi seas filled up to the Fermi
energy $E_F$. The horizontal axis is the magnitude of the
spatial momentum; pairing occurs between particles (or holes)
with the same $p$ and opposite $\vec p$.
For $p<p_F^s$, hole-hole pairing ($\bar s$-$\bar u$) is
possible (two examples are shown). For $p>p_F^u$, particle-particle
pairing ($s$-$u$) is possible (one example is shown). Between the
Fermi momenta for the $s$ and $u$ quarks, no such pairing is possible.}
\label{fig:fermi}
\end{figure}
The unlocking transition is a transition
between a phase with $\Delta_{us}\neq 0$ at $M_s<M_s^{\rm unlock}$
and a phase with
$\Delta_{us}=0$. ($M_s(\mu)$
is the constituent strange quark mass at chemical potential $\mu$.)
The BCS mechanism guarantees superconductivity in the
presence of an arbitrarily weak attractive interaction,
and there is certainly an attraction between $u$ and $s$
quarks (with color ${\bf \bar 3}$) for any $M_s$.
How, then, can $\Delta_{us}$ vanish above $M_s^{\rm unlock}$?
The BCS result relies on a singularity
which arises for pairs of fermions with zero total momentum
{\it at} the Fermi surface.
We see from Figure 2 that no pairing is possible for quarks
with momenta between the $u$ and $s$ Fermi momenta, and that
at most
one of the quarks in a $u$-$s$ Cooper pair
can be at its respective Fermi surface.
The BCS singularity therefore does not arise if $M_s\neq 0$,
and a $u$-$s$ condensate is not mandatory.
A $u$-$s$ condensate involves pairing of quarks with momenta
within about $\Delta_{us}$ of the Fermi surface,
and we therefore expect that
$\Delta_{us}$ can only be nonzero if the mismatch
between the up and strange Fermi momenta is less than
or of order $\Delta_{us}$:
\begin{equation}
\sqrt{\mu^2-M_u(\mu)^2} - \sqrt{ \mu^2-M_s(\mu)^2}
\approx \frac{M_s(\mu)^2-M_u(\mu)^2}{2\mu}
\lesssim \Delta_{us}\ .
\label{criterion}
\end{equation}
Here $M_s(\mu)$ and $M_u(\mu)$ are the constituent quark masses
in the CFL phase. We neglect $M_u(\mu)$ in the following.
Equation (\ref{criterion})
implies that arbitrarily small values of $\Delta_{us}$ are
impossible.
As $m_s$ is increased from zero, $\Delta_{us}$ decreases
until it is comparable to $M_s(\mu)^2/2\mu$. At this point,
smaller nonzero values of $\Delta_{us}$ are not possible,
and $\Delta_{us}$ must therefore
vanish discontinuously. This simple
physical argument leads us to conclude that the
unlocking phase transition at $M_s=M_s^{\rm unlock}$
must be first order.
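The size of the Fermi momentum mismatch in Eq.~(\ref{criterion}) is easy to estimate; a sketch (the values $\mu=500$ MeV and $M_s=250$ MeV are purely illustrative, not fitted):

```python
import math

def fermi_mismatch(mu, M_s, M_u=0.0):
    """Exact difference of the u and s Fermi momenta (MeV)."""
    return math.sqrt(mu**2 - M_u**2) - math.sqrt(mu**2 - M_s**2)

def mismatch_estimate(mu, M_s, M_u=0.0):
    """Leading-order expansion (M_s^2 - M_u^2) / (2 mu) of the mismatch."""
    return (M_s**2 - M_u**2) / (2.0 * mu)
```

For $\mu=500$ MeV and $M_s=250$ MeV this gives a mismatch of roughly 60--70 MeV, of the same order as the gaps quoted below, so the comparison in (\ref{criterion}) is indeed the relevant one at moderate densities.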
Below, we confirm by
explicit calculation in a model that the
unlocking phase transition is first order.
We find that $\Delta_{us}$ is of order
$50-100~{\rm MeV}$ on the CFL side
of the transition if the coupling is calibrated to give a
reasonable magnitude for the
chiral condensate in vacuum.
\section{Superconducting Condensates in a Model}
\label{sec:con}
We study the physics of the quark matter phases in Figure 1
in a toy model
in which we replace the full
interactions between quarks by a four-fermion interaction
with the quantum numbers of single-gluon exchange,
\begin{equation}
{\cal L}_{\rm interaction} = G\int d^4 x \left( \bar q \lambda^A
\gamma^\mu q\right)\left( \bar q \lambda^A
\gamma_\mu q\right)\ ,
\label{con:interaction}
\end{equation}
and work in a mean-field approximation.
All four-fermion interactions involving fermions at the Fermi surface
are equally relevant, in the renormalization group sense, so
why use only the one with the quantum numbers of one-gluon exchange?
Renormalization group analyses\cite{SchaeferWilczekRG,EHS} show that other
interactions are important, but confirm that in QCD with two and three
massless quarks the most attractive
channels for condensation are those generated by (\ref{con:interaction}).
There is one important caveat.
The single-gluon-exchange interaction is symmetric under
$U(1)_A$, and so it sees no distinction between
condensates of the form $\<q C q \>$ and $\<q C \gamma_5 q \>$.
However, once instantons are included the Lorentz scalar
$\<q C \gamma_5 q \>$ is favored,\cite{ARW2,RappETC}
so we neglect the other form.
One of the
central qualitative lessons of past work is that superconducting
gaps of order 100 MeV are obtained in
the quark matter phase independent of the details
of the interaction which is used, as long as the strength
of the interaction is chosen in a way which is roughly
consistent with what we know about the vacuum chiral condensate.
Superconducting
gaps of this magnitude can be obtained using a number of different
treatments based upon single-gluon exchange.\cite{BailinLove, IwaIwa, ARW3}
Refs.~\cite{ARW2, RappETC, BergesRajagopal, CarterDiakonov} find gaps
of this magnitude
by making approximations of varying sophistication in which
the interaction between
quarks is modeled by
that induced by instantons. Further confirmation of this
lesson comes from recent work using the NJL model.\cite{Klevansky}
As in any model which uses a four-fermion interaction,
in addition to choosing the quantum numbers of the interaction
we must introduce an ad hoc form factor.
The explicit lesson of Refs.~\cite{ARW2,ARW3,Klevansky},
implicit in other work, is that results are rather insensitive to
the choice of form factor, again
as long as $G$ is suitably normalized.
We therefore
use a simple sharp cutoff in momentum integrals at
a spatial momentum $\Lambda=800$~MeV.
Our results should therefore not be extended above $\mu\sim\Lambda$.
The form factor (which we
describe by the single parameter $\Lambda$) is a crude
representation of physics not described
by the model, and $\Lambda$ should not be taken to infinity.
For $\Lambda=0.8$ GeV, setting
$G=7.5$ GeV$^{-2}$ in (\ref{con:interaction})
results in a vacuum constituent mass for the
light quarks of 400 MeV, and we use these values
of $\Lambda$ and $G$ throughout.
We have however checked that if we instead take $\Lambda=1$ GeV,
and change $G$ accordingly, the superconducting gaps and
the location of the unlocking phase transition are not
significantly affected.
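For orientation, the way such a mean-field gap equation is solved numerically can be illustrated with the textbook single-channel BCS gap equation; this is a schematic stand-in for, not a version of, the coupled gap equations derived in the appendix, and the coupling $g$ and starting guess are illustrative:

```python
import math

def bcs_gap_closed_form(g, cutoff):
    """Solve 1 = g * asinh(cutoff / Delta) in closed form:
    Delta = cutoff / sinh(1/g)."""
    return cutoff / math.sinh(1.0 / g)

def bcs_gap_iterative(g, cutoff, tol=1e-12, max_iter=100000):
    """Solve Delta = g * Delta * asinh(cutoff / Delta) by fixed-point
    iteration, as one would treat coupled gap equations numerically."""
    delta = 0.1 * cutoff  # starting guess
    for _ in range(max_iter):
        new = g * delta * math.asinh(cutoff / delta)
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta
```

Because the gap depends exponentially on the inverse coupling, moderate couplings with a cutoff of order $\Lambda=800$ MeV naturally produce gaps at the tens-to-hundred MeV scale, consistent with the magnitudes quoted above.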
An analysis of the phase diagram requires
condensates
\begin{equation}
\label{con:conds}
\<q^\alpha_i C\gamma_5 q^\beta_j\>\ ,\ \ \
\<q^\alpha_i C\gamma_5\gamma_4 q^\beta_j\>\ ,\ \ \
\< \bar q^{\,i}_\alpha q^\beta_j \>\ ,
\end{equation}
leading to gap parameters
\begin{equation}
\label{con:gaps}
\Delta^{\alpha\beta}_{ij}\ ,\ \ \ \ \ \ \ \ \ \
\kappa^{\alpha\beta}_{ij}\ ,\ \ \ \ \ \ \ \ \ \
\phi^{i\beta}_{\alpha j}
\end{equation}
with corresponding quantum numbers. Each
gap matrix is a
symmetric $9\times 9$ matrix describing the color (Greek indices)
and flavor (Roman indices) structure.
In \ref{sec:appendix} we derive the gap equation for these condensates
in full generality, assuming nothing about their color-flavor
structure. However, it is very difficult to solve the general gap
equation, so at this point we make several simplifying assumptions to
obtain easily soluble gap equations. These are:
\begin{enumerate}
\item We fix $\phi^{i\beta}_{\alpha j} = M_s \delta^i_3 \delta^3_j \delta_\alpha^\beta$.
For more details see below.
\item We use the simplest form for $\Delta^{\alpha\beta}_{ij}$ which
allows an interpolation between the color-flavor locking favored
by single-gluon exchange at $m_s=0$ and the ``2SC'' phase
favored at $m_s\to\infty$ (see \eqn{sol:ansatz}).
This ansatz, which requires
five independent superconducting gap parameters, leads to
consistent gap equations in the presence of the one-gluon
exchange interaction. We describe this ansatz in detail
in Section 4.
\item We neglect $\kappa^{\alpha\beta}_{ij}$. This condensate
pairs left-handed and right-handed quarks, and so breaks
chiral symmetry. It can be shown to vanish in the absence
of quark masses.\cite{PisarskiRischke}
In \ref{sec:appendix}
we show that it {\it must} be nonzero in the presence
of a nonzero $\langle q C \gamma_5 q \rangle$ condensate and
nonzero quark masses. In the 2SC phase, the
$\langle q C \gamma_5 q \rangle$ condensate involves only the massless
quarks and $\langle q C \gamma_5 \gamma_4 q \rangle$
therefore vanishes and chiral symmetry remains unbroken.
In the CFL phase, however, the
$\langle q C \gamma_5 q \rangle$ condensate involves the
massive strange quarks and itself breaks chiral
symmetry by color-flavor locking.
No symmetry argument precludes $\kappa\neq 0$, and in
\ref{sec:appendix} we have confirmed by direct calculation at one
value of $\mu$
that the gaps $\kappa^{\alpha\beta}_{ij}$ are nonzero in the CFL phase.
However, we find that these gaps are much
smaller than the corresponding gaps generated by the $\langle q C
\gamma_5 q \rangle$ condensate. Including $\kappa$ in the gap equations
for $\Delta$
makes finding solutions prohibitively slow.
Therefore, once we have convinced ourselves that these condensates are
present but small, we neglect them.
\end{enumerate}
We now give a more detailed discussion of
our assumptions about the quark mass matrix $\phi$.
Since all our calculations are in the quark matter regime ($\mu>\mu_V$
in Figure~\ref{fig:phasediagram}), we will assume that $\mu$ is high
enough that we can neglect
chiral condensates for the $u$ and $d$ quarks,\footnote{
Because chiral symmetry is broken in the CFL phase,
there is no reason for the ordinary $\langle \bar u u \rangle$ and
$\langle \bar d d\rangle$ chiral condensate to vanish. Indeed,
Ref.~\cite{ARW3} demonstrates that the $\langle \bar u u \rangle$ and
$\langle \bar d d\rangle$ condensates must be nonvanishing. However,
an explicit calculation \cite{Schaefer} shows them to be small,
and we neglect them. \Eqn{criterion} shows that nonzero $\<\bar u u\>$
and $\<\bar d d\>$ condensates allow color-flavor locking to persist
to higher strange quark masses.}
but
because the current quark mass $m_s$ is nonzero there may
be a nonzero $\langle \bar s s \rangle$ chiral condensate.
In this case the quark mass matrix
is $\phi={\rm diag}(0,0,M_s)$, where $M_s$ is the $\mu$-dependent
constituent strange quark mass, satisfying a gap equation of its
own. At sufficiently high densities, it is given by the current mass
$m_s$, of order 100 MeV. At lower densities, it receives an
additional contribution from $\langle \bar s s \rangle$.
In the interests of simplicity, however, we will not solve the $M_s$
gap equation simultaneously with the superconductivity gap equations.
Instead, we treat $M_s$ as a parameter in the
superconductivity gap equations, and do not
determine what value of $M_s$ corresponds to given values
of $m_s$ and $\mu$. For simplicity, we choose the
same value of the parameter $M_s$ for strange quarks
of all three colors. This would not be the case in a
treatment in which the $M_s$ gap equations are solved
simultaneously with the superconductivity gap
equations.\cite{CarterDiakonov}
With our simplifying assumptions, we are left needing only a gap equation for
the quark-quark condensate $\Delta^{\alpha\beta}_{ij}$.
The simplest ansatz that
interpolates between the two flavor case ($m_s=\infty$)
and the three flavor case ($m_s=0$) is
\addtocounter{equation}{3}
\beql{sol:ansatz}
\begin{array}{l}
\Delta^{\alpha\beta}_{ij} =
\left(
\begin{array}{ccccccccc}
b+e & b & c \\
b & b+e & c \\
c & c & d \\
& & & & e \\
& & & e & \\
& & & & & & f\\
& & & & & f &\\
& & & & & & & & f\\
& & & & & & & f &\\
\end{array}
\right) \\
\hbox{basis vectors:} \\
\begin{array}{rcl@{\,\,}l@{\,\,}l@{\,\,\,\,}
l@{\,\,}l@{\,\,\,\,} l@{\,\,}l@{\,\,\,\,}l@{\,\,}l}
(\alpha,i) &=& (1,1),&(2,2),&(3,3),&(1,2),&(2,1),&(1,3),&(3,1),&(2,3),&(3,2) \\
&=& (r,u),&(g,d),&(b,s),&(r,d),&(g,u),&(r,s),&(b,u),&(g,s),&(b,d)
\end{array}
\end{array}
\end{equationwithlabel}
where the color indices are $\alpha,\beta$ and the flavor indices are $i,j$.
The strange quark is $i=3$. The rows are labelled by $(\alpha,i)$ and the
columns by $(\beta,j)$.
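The color-flavor structure of this ansatz can be checked numerically. The Python sketch below (an illustrative cross-check, not part of the original analysis) builds the $9\times 9$ matrix from the five gap parameters, verifies that at $M_s=0$ with $c=b$, $f=e$, $d=b+e$ it reduces to the index form $b\,\delta^\alpha_i\delta^\beta_j + e\,\delta^\alpha_j\delta^\beta_i$, and confirms that a simultaneous color rotation $U$ and flavor rotation $U^*$ leaves it invariant:

```python
import numpy as np

# Build the 9x9 ansatz from the five gap parameters, in the basis
# (alpha,i) = (1,1),(2,2),(3,3),(1,2),(2,1),(1,3),(3,1),(2,3),(3,2)
# (0-indexed below).
basis = [(0,0),(1,1),(2,2),(0,1),(1,0),(0,2),(2,0),(1,2),(2,1)]

def ansatz(b, c, d, e, f):
    D = np.zeros((9, 9))
    D[0:3, 0:3] = [[b+e, b, c], [b, b+e, c], [c, c, d]]
    D[3, 4] = D[4, 3] = e
    D[5, 6] = D[6, 5] = f
    D[7, 8] = D[8, 7] = f
    return D

b_, e_ = 1.0, 0.5
D = ansatz(b_, b_, b_ + e_, e_, e_)        # CFL values at M_s = 0

# Repack as a tensor Delta^{alpha beta}_{ij} and compare with
# b * delta^a_i delta^b_j + e * delta^a_j delta^b_i:
T = np.zeros((3, 3, 3, 3))
for r, (a, i) in enumerate(basis):
    for s, (g, j) in enumerate(basis):
        T[a, i, g, j] = D[r, s]
I3 = np.eye(3)
target = b_ * np.einsum('ai,bj->aibj', I3, I3) \
       + e_ * np.einsum('aj,bi->aibj', I3, I3)

# Color rotation U together with flavor rotation U* leaves T invariant:
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
Trot = np.einsum('ag,ik,bd,jl,gkdl->aibj',
                 U, U.conj(), U, U.conj(), T.astype(complex))
print(np.allclose(T, target), np.allclose(Trot, T))   # -> True True
```

The invariance holds for any unitary $U$ because each $\delta^\alpha_i$-type contraction produces a factor $UU^\dagger=1$.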
\begin{table}[hbt]
\def\st{\rule[-1.5ex]{0em}{4ex}}
\begin{center}
\begin{tabular}{llll}
\hline
\st description & condensate & symmetry \\
\hline
$\begin{array}{l}\hbox{2SC: 2-flavor} \\ \hbox{\phantom{2SC: }superconductivity}\end{array}$
& $\begin{array}{l} c=d=f=0,\\ b=-e \end{array}$
& $SU(2)_L\times SU(2)_R$ \\[2ex]
$\begin{array}{l} \hbox{CFL: color-flavor locking}\end{array}$ & &
$SU(2)_{{\rm color}+L+R}$ \\[0.5ex]
$\begin{array}{l}\hbox{CFL: color-flavor locking} \\ \hbox{\phantom{CFL: }with $m_s=0$}\end{array}$
& $\begin{array}{l} c=b,f=e,\\ d=b+e \end{array} $
& $SU(3)_{{\rm color}+L+R}$ \\
\hline
\end{tabular}
\end{center}
\caption{
Symmetries of the condensate ansatz \eqn{sol:ansatz} in various
regimes.
}
\label{tab:syms}
\end{table}
The properties of the ansatz are summarized in Table \ref{tab:syms}.
In its general form, this condensate locks color and flavor.
This is because of the condensates $c$ and $f$, referred to
collectively as $\Delta_{us}$ above,
that
combine a strange quark with a light one.
It is straightforward
to confirm by direct calculation that if either $c$ or $f$ or $b+e$ is
nonzero, then the matrix $\Delta^{\alpha\beta}_{ij}$ of (\ref{sol:ansatz}) is
not invariant under separate flavor or color rotations but is
left invariant by simultaneous rotations of $SU(2)_V$ and the $SU(2)$
subgroup of color corresponding to
the colors 1 and 2. Thus, color-flavor locking occurs whenever one
or more of $c$, $f$, or $b+e$ is nonzero.
We discuss $b+e$ below.
Although the standard electromagnetic symmetry is broken
in the CFL phase, as are all the color gauge symmetries,
there is a combination of electromagnetic and color
symmetry that is preserved.\cite{ARW3} Consider
the gauged $U(1)$ under which the charge $Q'$
of each quark is the sum of its electromagnetic
charge $(2/3,-1/3,-1/3)$ (depending on the flavor
of the quark) and its color hypercharge $(-2/3,1/3,1/3)$ (depending
on the color of the quark). It is easy to confirm that the
sum of the $Q'$ charges of each pair of quarks corresponding
to a nonzero entry in (\ref{sol:ansatz}) is zero. This modified
electromagnetism is therefore not broken by the condensate.
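This $Q'$-neutrality can be verified entry by entry. A small Python check (illustrative; we take the hypercharge assignment $(-2/3,1/3,1/3)$ to follow the color ordering of the basis above):

```python
# Q' = electromagnetic charge (by flavor) + color hypercharge (by color).
em  = [ 2/3, -1/3, -1/3]                    # flavors u, d, s
hyp = [-2/3,  1/3,  1/3]                    # colors 1, 2, 3
basis = [(0,0),(1,1),(2,2),(0,1),(1,0),(0,2),(2,0),(1,2),(2,1)]  # (color, flavor)
Qp = [hyp[a] + em[i] for a, i in basis]

# Nonzero entries of the ansatz (row, column) in the basis above:
nonzero = [(0,0),(1,1),(2,2),(0,1),(1,0),(0,2),(2,0),(1,2),(2,1),
           (3,4),(4,3),(5,6),(6,5),(7,8),(8,7)]
pair_charges = [Qp[r] + Qp[s] for r, s in nonzero]
print(max(abs(q) for q in pair_charges))    # -> 0.0
```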
At $M_s=0$, one has color-flavor locking: $c$, $f$ and $b+e$
are all
nonzero. In addition,
$c=b$, $f=e$, and $d=b+e$, and
the matrix is invariant under simultaneous
rotations of $SU(3)_V$ and $SU(3)_{\rm color}$.
For $M_s$ nonzero but sufficiently small, $c$, $f$ and $b+e$
remain nonzero but are no longer equal to $b$, $e$ and $d$.
Color and flavor are locked, and the matrix is invariant
under simultaneous
rotations of $SU(2)_V$ and $SU(2)_{\rm color}$.
As described in Section 2,
once $M_s$ becomes large enough, $c$ and $f$ both
vanish (and so does $b+e$) and the symmetry
group enlarges, unlocking color and flavor, and restoring chiral symmetry
(see Table \ref{tab:syms}).
If $\Delta$ were nonzero only in color
${\bf \bar 3}$ channels, as one might expect since
single-gluon exchange is repulsive in the color ${\bf 6}$ channels,
we would have $b=-e$, $d=0$ and $c=-f$.
However, in the CFL phase where $c$ and $f$ are
nonzero, the only consistent solutions to the gap
equations have $b\neq -e$, $c\neq -f$, and $d\neq 0$.
This means that the single-gluon-exchange interaction {\it requires}
a small color ${\bf 6}$ admixture
along with the favored color ${\bf \bar 3}$ condensate, even
though it prohibits color ${\bf 6}$ condensates alone.
This feature
occurs both for $M_s=0$\cite{ARW3} and for $0< M_s < M_s^{\rm unlock}$.
Note that not all color ${\bf 6}$
terms arise in (\ref{sol:ansatz}). For example, only three of the nine
diagonal elements of (\ref{sol:ansatz}) are nonzero.
These are the only three diagonal elements
which can be nonzero without
breaking the $SU(2)_{{\rm color}+V}$
symmetry, and upsetting color-flavor locking.
Furthermore, if the two upper-most diagonal elements were not
equal to $b+e$, the $SU(2)_{{\rm color}+V}$ symmetry
would also be broken.
We have therefore learned that
the color ${\bf 6}$ terms which are induced in the presence of a
color ${\bf \bar 3}$ condensate
are those which do not change the global symmetry of the
color ${\bf \bar 3}$ condensate. Those color ${\bf 6}$ terms
which are ``allowed'' in this sense are generated by the interaction.
The corresponding analysis of the 2SC phase,
where only two flavors participate in the
condensate, is similar in logic but
leads to different conclusions.
In the 2SC phase, where $c$ and $f$ are zero, we do in fact find $b=-e$
and $d=0$, namely a pure color ${\bf \bar 3}$ condensate.
In this case,
if $b+e$ were nonzero, one {\em would} have
color-flavor locking and broken chiral symmetry.
In this case, then, the pure color ${\bf \bar 3}$
condensate with $b+e=0$ is ``protected'' by the fact
that any admixture of color ${\bf 6}$ condensate like $b+e$
would break a global symmetry which the color ${\bf \bar 3}$ condensate
does not break.
In the absence of
$c$ and $f$, a nonzero $d$ condensate ($\langle ss \rangle$)
is still allowed, but there is no way for the
color ${\bf \bar 3}$ condensate to induce it. Therefore the simple
argument that one-gluon exchange is repulsive in this channel
holds, and in our model
we find $d=0$ in the 2SC phase even where the strange quark density
is nonzero.
As discussed above, a more complete analysis would include
a small $\langle ss \rangle$ condensate in some channel, resulting
in a distinction between the 2SC and 2SC+s phases which our analysis
does not see.
\section{The Gap Equation \ldots }
\label{sec:sol}
Our task now is to obtain the gap equation for $\Delta$,
and verify the
behavior described in Sections 1 and 2. In particular, we want to
show that the unlocking phase transition is first-order, and
calculate $M_s^{\rm unlock}(\mu)$ and compare
to the criterion (\ref{criterion}) which we derived
on model-independent grounds.
In \ref{sec:appendix} we have derived the
mean field gap equation and given its general
solution. It takes the form (see \Eqn{app:gap1})
\beql{sol:gap1}
\Delta = {G\over (2\pi)^4} \int \! d^4q \,
\lambda^T_A \gamma_\mu \,P_1(\mu,M_s,\Delta,q) \,\lambda_A \gamma^\mu,
\end{equationwithlabel}
where the function $P_1$ is given by \eqn{app:X} and \eqn{app:Xinv},
and we are assuming, as described in Section~\ref{sec:con} that
$\phi^{i\beta}_{\alpha j} = M_s \delta^i_3 \delta^3_j \delta_\alpha^\beta$
and that $\kappa=0$. Note that strictly speaking the gap equations do not
close for $\kappa=0$. The $\kappa$ gap equation is just like \eqn{sol:gap1}
but with $-P_4$ instead of $P_1$ on the right hand side, and for $M_s>0$,
$P_4$ is nonzero even when $\kappa=0$. This means that it is
inconsistent to set $\kappa=0$.
However, as discussed in the Appendix,
$\kappa$ turns out to be small and we use (\ref{sol:gap1}) with
$\kappa=0$ in $P_1$.
In principle we could just evaluate $P_1(\mu,M_s,\Delta,q)$
by substituting \eqn{sol:ansatz} into \eqn{app:Xinv}, but that would
lead to a very complicated expression.
To turn \eqn{sol:gap1} into a tractable set of gap equations, we transform
to a slightly different basis from \eqn{sol:ansatz}.
In the new basis, the first two vectors are
$(\alpha,i)= 1/\sqrt{2}\Bigl( (1,1)-(2,2) \Bigr)$ and
$1/\sqrt{2}\Bigl( (1,1)+(2,2) \Bigr)$.
Now the top left $3\times 3$ block of \eqn{sol:ansatz} looks like
\begin{equation}
\left(
\begin{array}{ccc}
e \\
& a_{11} & a_{12} \\
& a_{12} & a_{22}
\end{array}
\right)
=
\left(
\begin{array}{ccc}
e \\
& 2b+e & \sqrt{2}c \\
& \sqrt{2}c & d
\end{array}
\right)
\end{equation}
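The effect of this rotation on the upper $3\times 3$ block can be checked directly; a short numerical sketch (with arbitrary illustrative gap values):

```python
import numpy as np

# Conjugate the upper 3x3 corner of the ansatz by the basis rotation whose
# first two columns are ((1,1)-(2,2))/sqrt(2) and ((1,1)+(2,2))/sqrt(2).
b, c, d, e = 0.9, 0.55, 0.04, -0.88        # illustrative values
M = np.array([[b+e, b,   c],
              [b,   b+e, c],
              [c,   c,   d]])
s = 1/np.sqrt(2)
R = np.array([[ s, s, 0],
              [-s, s, 0],
              [ 0, 0, 1]])
Mnew = R.T @ M @ R
expected = np.array([[e, 0,            0],
                     [0, 2*b+e,        np.sqrt(2)*c],
                     [0, np.sqrt(2)*c, d]])
print(np.allclose(Mnew, expected))          # -> True
```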
$\Delta$ is now block-diagonal, consisting of $1\times 1$ and $2\times 2$
blocks. We find that
$P_1(\mu,M_s,\Delta,q)$ takes a corresponding
block-diagonal form,
\begin{equation}
\begin{array}{l}
P_1(\mu,M_s,\Delta,q) = \\[2ex]
\left(
\begin{array}{c@{\!\!\!\!\!}c@{\,\,}c@{\!\!\!}c@{\!}c@{\!\!\!}c@{\!}c@{\!\!\!}c@{\!}c}
E(e,q) \\
& A_{11}(a,q) & A_{12}(a,q) \\
& A_{12}(a,q) & A_{22}(a,q) \\
& & & & E(e,q) \\
& & & E(e,q) & \\
& & & & & & F(f,q)\\
& & & & & F(f,q) &\\
& & & & & & & & F(f,q)\\
& & & & & & & F(f,q) &\\
\end{array}
\right) \\
\end{array}
\end{equation}
where
\beql{sol:RHS}
\begin{array}{r@{\,\,}c@{\,\,}l@{\,\,}c@{\,\,}l}
F(f,q) &=& P_1(\mu,M_s,f,q) &=& \displaystyle
{f w \over w^2 - (4\mu^2-M_s^2)\,{\vec q}^{\,2} - (\mu- iq_0)^2 M_s^2 } \\[2ex]
&& \phantom{ P_1(\mu,M_s,q)}(w &=& f^2 + \mu^2 + {\vec q}^{\,2} + q_0^2) \\[0.5ex]
E(e,q) &=& P_1(\mu,\,0\,,e,q) &=&
\hbox{as above, with $f\to e$ and $M_s=0$} \\[0.5ex]
A(a,q) &=& P_1(\mu,M_s,a,q)
\end{array}
\end{equationwithlabel}
Note that the function $P_1$ simplifies considerably when its argument
is a symmetric off-diagonal $2\times 2$ matrix, as in the $e$ and $f$
blocks. The resulting expressions for $E$ and $F$ are given on the
right-hand side of \eqn{sol:RHS}.
The only difficult thing in \eqn{sol:RHS} is evaluating $P_1$
for the $2\times 2$ matrix $a$, in which case Eqns.~\eqn{app:X} and
\eqn{app:Xinv} must be used.
The only nonzero matrix elements in $P_1$,
on the right hand side of the gap equation, are those
which are nonzero in our ansatz for $\Delta$, on the left
hand side of the gap equation. This confirms that the
only color ${\bf 6}$ terms which are induced by the
color ${\bf \bar 3}$ condensate are those which we
have included in our ansatz, in agreement with the
symmetry arguments given above. Although more complicated
ans\"atze may be worth considering in future work,
this ansatz is the minimal one which suffices.
Taking into account the $\lambda$ and $\gamma$ matrices from the gluon vertex,
we end up with the 5 gap equations
\beql{sol:gap2}
\begin{array}{rcl}
e &=& {4G/ (2\pi)^4} \int\!d^4q\,\,
\Bigl(A_{11}(a,q) - {5\over 3} E(e,q) \Bigr) \\[1ex]
f &=& {4G/ (2\pi)^4} \int\!d^4q\,\,
\Bigl(\sqrt{2}A_{12}(a,q) - {2\over 3} F(f,q) \Bigr)\\[1ex]
a_{11} &=& {4G/ (2\pi)^4} \int\!d^4q\,\,
\Bigl({1\over 3} A_{11}(a,q) + 3 E(e,q) \Bigr)\\[1ex]
a_{12} &=& {4G/ (2\pi)^4} \int\!d^4q\,\,
\Bigl(-{2\over 3} A_{12}(a,q) + 2\sqrt{2} F(f,q) \Bigr)\\[1ex]
a_{22} &=& {4G/ (2\pi)^4} \int\!d^4q\,\,
{4\over 3} A_{22}(a,q)
\end{array}
\end{equationwithlabel}
To obtain the gap parameters as a function of $\mu$ and $M_s$, we solve the
5 simultaneous equations \eqn{sol:gap2} numerically, using
{\em Mathematica}. The right-hand sides are evaluated by numerical
integration of the integrands given in \eqn{sol:RHS}.
For $A(a,q)$, we must evaluate $P_1(\mu,M_s,a,q)$, which corresponds to
\eqn{app:Xinv} with $\Delta$ being the $2\times 2$ matrix $a$,
$\kappa=0$, and $\phi=\hbox{diag} (0,M_s)$. The gap parameters $b,c,d,e,f$
determine the poles of the propagator $P_1$ and hence
determine the quasiparticle dispersion relations and the gaps
at the Fermi surface.
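The full system \eqn{sol:gap2} requires the numerical machinery described above, but the structure of the calculation can be illustrated with a one-channel toy model. The sketch below (Python; the toy kernel and couplings are ours, not the model's) solves a BCS-like gap equation with a sharp cutoff by bisection, and yields a gap of order 100 MeV for a coupling of roughly the strength used in the text:

```python
import numpy as np

# Toy single-channel gap equation with a sharp cutoff:
#   1 = G * I(Delta),  I(Delta) = int_0^Lambda dq / (2 sqrt((q-mu)^2 + Delta^2))
# The integrand peaks at the Fermi surface q ~ mu, as in BCS theory.
MU, LAM, G = 0.4, 0.8, 0.478                # GeV, GeV, GeV^-1 (toy values)

def residual(delta):
    q = np.linspace(0.0, LAM, 20001)
    f = 0.5 / np.sqrt((q - MU)**2 + delta**2)
    return 1.0 - G * np.sum(0.5 * (f[1:] + f[:-1])) * (q[1] - q[0])

# residual < 0 at small Delta (the integral diverges logarithmically as
# Delta -> 0) and > 0 at large Delta, so a root is bracketed:
lo, hi = 1e-4, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if residual(mid) > 0 else (mid, hi)
delta = 0.5 * (lo + hi)
print(f"toy gap: {1e3 * delta:.0f} MeV")    # of order 100 MeV
```

The logarithmic divergence at small $\Delta$ guarantees a solution for any attractive $G>0$, which is why a nonzero gap is generic at a Fermi surface.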
\section{\ldots and Its Solutions}
\label{sec:res}
\begin{figure}[t]
\epsfxsize=4in
\begin{center}
\hspace*{0in}
\epsffile{pd2.eps}
\end{center}
\caption{
The part of the phase diagram in which we have
done calculations using our model.
The thin solid lines show the slices along which
we have solved the gap equations. Note that a horizontal
line in Figure 1 at some $m_s$
corresponds to a curve in this figure which
approaches the horizontal line $M_s=m_s$ from above as
$\mu\rightarrow\infty$.
}
\label{fig:pd2}
\end{figure}
In this section, we present solutions to the gap equations
\eqn{sol:gap2}. We demonstrate explicitly that the
unlocking phase transition is first order in our model,
as we have argued on general grounds in Section 2.
Furthermore, we confirm that the simple criterion
(\ref{criterion}) is a good guide.
We determine the
value of $M_s^{\rm cont}$, the highest strange quark constituent
mass at which, in the model, no 2SC phase intrudes.
We use our results to draw tentative conclusions for
the behavior of strongly interacting matter as a
function of density in nature.
We plot the solutions $b,c,d,e,f$ to the gap equations along two
different
lines of constant $M_s$ and one line of constant $\mu$, all shown
in Figure~\ref{fig:pd2}. Recall that $b=-e$
and $c=d=f=0$ in the 2SC phase whereas all the gaps are nonzero
in the CFL phase. For $M_s=0$, $b=c$ and $e=f$.
If the condensate were purely color ${\bf \bar 3}$
in the CFL phase, one would have $b=-e$, $c=-f$ and $d=0$.
\begin{figure}[t]
\epsfxsize=4.5in
\begin{center}
\hspace*{0in}
\epsffile{GapPlot_ms0.15.eps}
\end{center}
\vspace{-15ex}
\caption{
Solutions to the gap equations
as a function of $\mu$, at $M_s=150~{\rm MeV}$. At this value of $M_s$, the
CFL phase persists down to $\mu=400~{\rm MeV}$, and below.
The two solid lines are $b$ and $-e$; the dashed lines are
$c$ and $-f$. At large $\mu$, the gaps approach the $M_s=0$ pattern.
}
\label{fig:gap-ms0.15}
\end{figure}
\begin{figure}[hbt]
\epsfxsize=4.75in
\begin{center}
\hspace*{0in}
\epsffile{GapPlot_ms0.35.eps}
\end{center}
\vspace{-15ex}
\caption{
Solutions to the gap equations
as a function of $\mu$, at $M_s=350~{\rm MeV}$.
At this large value of $M_s$, the CFL phase exists only for
$\mu> 530~{\rm MeV}$. That is, $M_s^{\rm unlock}=350~{\rm MeV}$ for $\mu=530~{\rm MeV}$.
}
\label{fig:gap-ms0.35}
\end{figure}
Figures 4 and 5 show the solutions to the gap equations
as a function of $\mu$,
for $M_s=150~{\rm MeV}$ and $M_s=350~{\rm MeV}$.
We see that for $M_s=150~{\rm MeV}$,
the color-flavor-locked phase continues down to
$\mu=400~{\rm MeV}$, below which we expect a baryonic description
to become appropriate.
In contrast, in Figure 5 we see that for $M_s=350~{\rm MeV}$,
the CFL phase only exists for $\mu>530~{\rm MeV}$. At lower
densities, below a first order unlocking phase transition,
the 2SC phase is favored.\footnote{In the calculations
for our plots we assume $M_s$ is the same on either side of
the phase transition. In reality $M_s$ will be somewhat smaller in the 2SC
phase because there is no chiral symmetry breaking there, but since
the 2SC condensates do not involve the $s$ quark this is of no
consequence.}
This transition occurs at high
enough densities that a quark matter description is justified,
and our model can therefore be used to describe it.
At lower densities
still, there is another first order phase transition
to a baryonic
phase.\cite{ARW2,RappETC,BergesRajagopal,BJW2,PisarskiRischke1OPT,CarterDiakonov}
\begin{figure}[htb]
\epsfxsize=4.75in
\begin{center}
\hspace*{0in}
\epsffile{GapPlot_mu0.4.eps}
\end{center}
\vspace{-15ex}
\caption{
Solutions to the gap equations
as a function of $M_s$, at $\mu=400~{\rm MeV}$, along the vertical
line in Figure 3.
When the strange quark becomes massive enough, $s$-$u$ condensates
become too expensive, and there is a first-order phase transition from
CFL to 2SC.
}
\label{fig:gap-mu0.4}
\end{figure}
In order to estimate $M_s^{\rm cont}$, in Figure 6 we plot
the solutions to the gap equations as a function of $M_s$ with
$\mu$ fixed at $400~{\rm MeV}$.
This is the lowest $\mu$ at which we expect a quark matter
description to be valid.
We find that the CFL phase
exists for $M_s$ below a first order transition at
$M_s^{\rm unlock}=235~{\rm MeV}$. Our model therefore suggests that
$M_s^{\rm cont}$ is of order $250~{\rm MeV}$.
The question, then, is whether Figure 4 or Figure 5 is
closer to the behavior of strongly interacting matter
as a function of density in nature. If Figure 5 shows
the correct qualitative behavior, we would expect two first order phase
transitions as the density is increased above nuclear
density. After the first transition, one would have
quark matter in the 2SC phase. In this quark matter
phase, the constituent strange quark mass must
be greater than $M_s^{\rm cont}\sim 250~{\rm MeV}$. Then,
above the next transition, a color-flavor-locked
quark matter phase is obtained.
If Figure 4 is the better
guide to nature, as we surmise, then there is no
window of $\mu$ in which the 2SC phase intrudes, and
nature may choose a continuous transition between baryonic
and quark matter with the symmetries of the CFL phase.
\begin{figure}[htb]
\epsfxsize=4in
\begin{center}
\hspace*{0in}
\epsffile{FirstOrder.eps}
\end{center}
\vspace{-0ex}
\caption{
The effective potential $\Omega$ in units
of $10^{-5}~{\rm GeV}^4$ in the vicinity
of the unlocking phase
transition, for $\mu=400~{\rm MeV}$, $M_s=235~{\rm MeV}$. The plot corresponds
to a slice of the potential in the five-dimensional parameter
space along a line which linearly interpolates
between the minima in the 2SC phase ($x=0$) and
the CFL phase ($x=1$).
The phase transition is clearly first order.
}
\label{fig:firstorder}
\end{figure}
We have analyzed the unlocking phase transition quantitatively
in our model, and can now confirm the predictions
of the model-independent but qualitative arguments of Section 2.
We first demonstrate that the transition is first order,
and justify the location of the discontinuities
shown in Figures 5 and 6.
To find the first order transition in our model, we need
the effective potential $\Omega$.
We use the fact that the gap equations
are equivalent to the requirement that
the derivative of $\Omega$ with respect
to the gap parameters vanishes.
The
set of gap equations (\ref{sol:gap2})
corresponds to the five stationarity conditions
$\partial\Omega/\partial \vec{l}=0$, where
$\vec{l}=(b,c,d,e,f)$, evaluated at the potential minimum; we therefore
have analytic expressions for
$\partial\Omega/\partial \vec{l}$
for any value of the gap parameters $\vec{l}$.
At a first order phase transition,
$\Omega$ should have
two degenerate minima, one corresponding to the CFL phase and
the other to the 2SC phase.
We have found two minima at $M_s=235~{\rm MeV}$, $\mu=400~{\rm MeV}$. (See
Figure 6.) We define a straight line $\vec l(x)$ in the
five-dimensional space of gap parameters which goes from the 2SC
minimum $\vec{l}(0)=(b=100,c=0,d=0,e=-100,f=0)~{\rm MeV}$ to the CFL
minimum $\vec{l}(1)=(91,56,4,-88,-52)~{\rm MeV}$. We evaluate $(\partial
\Omega/ \partial \vec{l}\, )$ along this line, and then obtain the
potential itself by integrating $(\partial \Omega/ \partial \vec{l}\,
) (\partial \vec{l}/\partial x)$ with respect to $x$. The result is
shown in Figure~\ref{fig:firstorder}. The phase transition is first
order.
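The reconstruction step, recovering $\Omega$ along the line from its known gradient, is easily illustrated. The Python sketch below applies the same procedure to a toy double-well potential whose gradient is known analytically (the toy potential is ours, chosen only to mimic the two degenerate minima of a first-order transition):

```python
import numpy as np

# Toy potential Omega(x) = x^2 (1-x)^2: degenerate minima at x = 0 ("2SC")
# and x = 1 ("CFL"), with a barrier between them.  We integrate its gradient
# numerically, as done for the effective potential in the text.
def dOmega(x):
    return 2 * x * (1 - x) * (1 - 2 * x)

x = np.linspace(-0.2, 1.2, 1401)
steps = 0.5 * (dOmega(x[1:]) + dOmega(x[:-1])) * np.diff(x)   # trapezoid rule
Omega = np.concatenate(([0.0], np.cumsum(steps)))
Omega -= Omega[np.argmin(np.abs(x))]        # normalize Omega(0) = 0
err = np.max(np.abs(Omega - x**2 * (1 - x)**2))
print(f"max reconstruction error: {err:.1e}")
```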
The model-independent arguments of Section 2 lead to
the criterion (\ref{criterion}) for the
location of the unlocking phase transition. We now
compare this prediction to the quantitative results
we have obtained in our model. For $\mu=400~{\rm MeV}$, at the critical
strange quark mass $M_s^{\rm unlock}=235~{\rm MeV}$ (see Figure 6) the
gaps $c$ and $-f$, which vanish
in the 2SC phase, are within a few
MeV of each other in the CFL phase, with $c\simeq -f \simeq 55~{\rm MeV}$.
Taking $\Delta_{us}\approx 55~{\rm MeV}$, we find
\begin{equation}
\label{results:criterion}
(M_s^{\rm unlock})^2 \approx 2.5 \mu \Delta_{us}\ .
\end{equation}
The unlocking phase transition in Figure 5 occurs at
$M_s=350~{\rm MeV}$, $\mu=530~{\rm MeV}$ and has $c\approx -f\approx 85~{\rm MeV}$;
this yields a relation as in (\ref{results:criterion}),
with 2.5 replaced by 2.7.
We conclude that (\ref{criterion}), derived by simple
physical arguments based on comparing the mismatch between
the up and strange Fermi momenta with the $\Delta_{us}$
gap, is a very good guide to
the location of the unlocking phase transition.
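The numerical coefficients quoted above follow directly from the transition data; a one-line Python check:

```python
# (Ms_unlock, mu, Delta_us) in MeV at the two transitions quoted in the text:
points = [(235, 400, 55), (350, 530, 85)]
coeffs = [Ms**2 / (mu * D) for Ms, mu, D in points]
print([round(n, 2) for n in coeffs])        # -> [2.51, 2.72]
```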
\section{Quark-hadron continuity}
\label{sec:cont}
As has been emphasized in Section~\ref{sec:sol}, for low enough
$m_s$ the CFL phase may consist of hadronic matter at low $\mu$, and
quark matter at high $\mu$. This raises the exciting possibility
\cite{SchaeferWilczek} that properties of sufficiently dense
hadronic matter could be found by extrapolation from the quark matter
regime where models like the one considered in this paper can be used
as a guide at moderate densities, and where the QCD gauge coupling
becomes small at very high densities.
\begin{table}[htb]
\newlength{\wid}\settowidth{\wid}{XXX}
\def\st{\rule[-1.5ex]{0em}{4ex}}
\begin{tabular}{lccc|cccc}
\hline
\st Quark & $SU(2)_{{\rm color}+V}$ & $Q'$ & gap &
Hadron & $SU(2)_{V}$ & $Q$ & gap \\
\hline
\multirow{2}{\wid}{$\left(\begin{array}{c} bu\\[1ex] bd \end{array}\right)$} &
\multirow{2}{2em}{\bf 2} &
$+1$ &
\multirow{4}{2em}[-1ex]{$ f$} &
\multirow{2}{4em}{$\left(\begin{array}{c} p\\[1ex] n \end{array}\right)$} &
\multirow{2}{2em}{\bf 2} &
$+1$ &
\multirow{4}{2em}[-1ex]{$\Delta^B_4$} \st \\
& & 0 & & & & 0 \st \\
\multirow{2}{\wid}{$\left(\begin{array}{c} gs\\[1ex] rs \end{array}\right)$} &
\multirow{2}{2em}{\bf 2} &
0 & &
\multirow{2}{4em}{$\left(\begin{array}{c} \Xi^0\! \\[1ex] \Xi^-\!\! \end{array}\right)$} &
\multirow{2}{2em}{\bf 2} &
0 \st \\
& & $-1$ & & & & $-1$ \st \\
\hline
\multirow{3}{\wid}{$\left(\begin{array}{c} ru-gd\\[1ex] gu\\[1ex] rd \end{array}\right)$} &
\multirow{3}{2em}{\bf 3} &
0 &
\multirow{3}{2em}{$ e$}&
\multirow{3}{4em}{$\left(\begin{array}{c} \Sigma^0 \\[1ex] \Sigma^+ \\[1ex] \Sigma^- \end{array}\right)$} &
\multirow{3}{2em}{\bf 3} &
0 &
\multirow{3}{2em}{$\Delta^B_3$}\st \\
& & $+1$ & & & & $+1$ \st \\
& & $-1$ & & & & $-1$ \st \\
\hline
$ru+gd+\xi_- bs$\hspace{-1em} & \hspace{-2em} {\bf 1} & 0 & $\Delta_-$ &
$\Lambda$ \hspace{-1em} & \hspace{-2em} {\bf 1} & 0 & $\Delta^B_1$ \st \\
\hline
$ru+gd-\xi_+ bs$ & \hspace{-2em}{\bf 1} & 0 & $\Delta_+$ &
--- & \st \\
\hline
\end{tabular}
\[
\begin{array}{rcr@{}l}
\Delta_\pm &=& \half & \Bigl( 2b+e+d \pm \sqrt{(2b+e-d)^2+8c^2}\Bigr) \\
\xi_\pm &=& -{1\over 2c} & \Bigl( 2b+e-d \mp \sqrt{(2b+e-d)^2+8c^2}\Bigr)
\end{array}
\]
\caption{Comparison of states and gap parameters in high density quark
and hadronic matter.}
\label{tab:qm}
\end{table}
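The expressions for $\Delta_\pm$ in the table follow from diagonalizing the $2\times 2$ singlet block with entries $2b+e$, $\sqrt{2}c$, $d$ obtained in the rotated basis of Section~\ref{sec:sol}; a quick numerical confirmation, using the CFL gap values quoted in the text:

```python
import numpy as np

# Delta_+/- should be the eigenvalues of the 2x2 singlet block
# [[2b+e, sqrt(2) c], [sqrt(2) c, d]].  Values: the CFL minimum in the text.
b, c, d, e = 91.0, 56.0, 4.0, -88.0         # MeV
block = np.array([[2*b + e,      np.sqrt(2)*c],
                  [np.sqrt(2)*c, d          ]])
disc = np.sqrt((2*b + e - d)**2 + 8*c**2)
delta_minus = 0.5 * (2*b + e + d - disc)
delta_plus  = 0.5 * (2*b + e + d + disc)
eigs = np.linalg.eigvalsh(block)            # ascending order
print(np.allclose(eigs, [delta_minus, delta_plus]))   # -> True
```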
The most straightforward application of this idea is to relate the
quark/gluon description of the spectrum to the hadron
description of the spectrum in the CFL
phase.\cite{SchaeferWilczek}
As $\mu$ is decreased from the regime in which
a quark/gluon
description is convenient to one in which a baryonic
description is convenient, there is no change in symmetry
so there need be no transition: the spectrum of the theory may
change continuously. Under this mapping,
the massive gluons in the CFL phase
map to the octet of vector
bosons;\footnote{The singlet vector boson in the hadronic phase
does not correspond to a massive gluon in the CFL phase. This
has been discussed in Ref.~\cite{SchaeferWilczek}.}
the Goldstone bosons associated with chiral symmetry breaking
in the CFL phase
map to the pions;
and the quarks map onto baryons. Pairing
occurs at the Fermi surfaces, and we therefore expect the gap
parameters in the
various quark channels, calculated in Section~\ref{sec:res}, to map to
the gap parameters due to baryon pairing.
In Table~\ref{tab:qm}
we show how this works for the fermionic states in 2+1 flavor QCD.
There are nine states in the quark matter phase. We show how they
transform under the unbroken ``isospin'' of $SU(2)_{{\rm color}+V}$ and their
charges under the unbroken ``rotated electromagnetism'' generated
by $Q'$, as described in Section 4.
Table~\ref{tab:qm} also shows the baryon octet,
and their transformation properties under the symmetries
of isospin and electromagnetism that are unbroken in sufficiently
dense hadronic matter. Clearly there is a correspondence between
the two sets of particles.\footnote{The
one exception is the final isosinglet.
In the $\mu\to\infty$ limit, where the
full 3-flavor symmetry is restored, it becomes an $SU(3)$ singlet,
so it is not expected to map to any member of the baryon octet.
We discuss this further below.
The gap $\Delta_+$ in this channel is twice as large as the others
(it corresponds to $\Delta_1$ in Ref.~\cite{ARW3}).}
As $\mu$ increases,
the spectrum described in Table 2 may evolve continuously
even as the language used to describe it changes from baryons,
$SU(2)_{V}$ and $Q$ to quarks, $SU(2)_{{\rm color}+V}$ and $Q'$.
If the spectrum changes continuously, then in particular so must the
gaps. As discussed above, and displayed explicitly in
\eqn{sol:ansatz}, the quarks pair into rotationally invariant,
$Q'$-neutral, $SU(2)_{{\rm color}+V}$ singlets. The two doublets of
Table \ref{tab:qm} pair with each other, with
gap parameter $f$. The triplet pairs with itself, with gap
parameter $e$. Finally, the two singlets pair with themselves.
When we map the quark states onto baryonic states,
we can predict that the baryonic pairing scheme that will occur
is the one conjectured in Sect.~\ref{sec:int} for sufficiently
dense baryonic matter:
\begin{equation}
\begin{array}{ll}
\<p\Xi^-\>,\<\Xi^-p\>,\<n\Xi^0\>,\<\Xi^0n\>
& \rightarrow \hbox{4 quasiparticles, with gap parameter~~} \Delta^B_4 \\[0.3ex]
\<\Sigma^+\Sigma^-\>,\<\Sigma^-\Sigma^+\>,\<\Sigma^0\Sigma^0\>
& \rightarrow\hbox{3 quasiparticles, with gap parameter~~} \Delta^B_3 \\[0.3ex]
\<\Lambda\Lambda\> & \rightarrow\hbox{1 quasiparticle,\phantom{s} with
gap parameter~~} \Delta^B_1
\end{array}
\end{equation}
The baryon pairs are rotationally-invariant, $Q$-neutral, $SU(2)_V$
singlets. It seems reasonable to conclude that as $\mu$ is increased
the baryonic gap parameters $(\Delta^B_4, \Delta^B_3, \Delta^B_1)$ may evolve
continuously to become the quark matter gap parameters $(f, e, \Delta_-)$,
which we have calculated in this paper. Assuming continuity, the magnitude of
the gaps will change as the density is increased but if their ratios
change less we can use our results for the gap parameters in the CFL phase at
$\mu=400~{\rm MeV}$ and $M_s=150~{\rm MeV}$ to suggest baryonic gap parameters with
ratios
\begin{equation}
\Delta^B_4:\Delta^B_3:\Delta^B_1\sim 1.06 : 1.26 : 1
\label{prediction}
\end{equation}
in matter which is sufficiently dense
but still conveniently described as baryonic.
As mentioned above, the ninth quark corresponds to a singlet baryon
which is very heavy, for reasons which have nothing to do with our
considerations. We therefore expect that as $\mu$ is increased
and a quark matter description takes over from a baryonic description,
the density of the singlet quark/baryon becomes nonzero at some $\mu$,
and begins to increase. Deep into the quark matter phase, there is
a gap $\Delta_+$ for this ninth quark, but this does not correspond
to any gap deep in the baryonic phase, because the density of the
corresponding baryon is zero. There is no change in symmetry
at the $\mu$ at which the density of the ninth quark/baryon
becomes nonzero, just as there was no change in symmetry
at the lower $\mu$ (within the ``strange baryon'' phase of
Figure 1) at which the $\Xi$ density became nonzero. Both
these onsets are smooth transitions at arbitrarily small
but nonzero temperature. We do not expect the onset
of a nonzero density for the ninth quark/baryon to upset
the continuity between the baryonic and quark matter phases,
but symmetry arguments can only demonstrate
the possibility of continuity; they cannot prove that there
is no transition.
The analysis leading to (\ref{prediction})
provides the first example of the use of the
hypothesis of quark-hadron continuity in the color-flavor
locked phase to map a quark matter calculation onto
quantitative properties of baryonic matter.
\section{Color-Flavor Locking at Asymptotic Densities}
\begin{figure}[t]
\epsfxsize=4.5in
\begin{center}
\hspace*{0in}
\epsffile{songap.eps}
\end{center}
\caption{The upper curve shows Son's result for the
superconducting gap $\Delta$ as
a function of $\log_{10}\mu$, for $\mu$
from $0.8$ GeV to $10^6$ GeV. The vertical scale
has been normalized so that $\Delta=0.1$ GeV at $\mu=0.8$ GeV.
We have taken $g(\mu)$ from the two-loop beta function
for three flavor QCD with $\Lambda_{\rm QCD}=200~{\rm MeV}$. Color-flavor
locking occurs whenever $\Delta \gtrsim M_s^2/2\mu$.
The lower
curve is $M_s^2/2\mu$, taking $M_s=150~{\rm MeV}$.
We conclude that QCD at very high
densities is in the CFL phase.}
\label{fig:son}
\end{figure}
Son \cite{Son} has studied color superconductivity
at much higher densities
than those we can treat with our model,
using a resummed gluon propagator rather than a
point-like four-quark interaction, and has determined the
leading behavior of the gap in the large $\mu$, small $g(\mu)$
limit.
Here, $g(q)$ is the QCD gauge coupling at momentum transfer $q$.
The resulting expression,
\begin{equation}
\Delta \sim \mu
\frac{1}{g(\mu)^5}
\exp\Bigl(-\frac{3\pi^2}{\sqrt{2}}
\frac{1}{g(\mu)}\Bigr)\ ,
\label{songapequation}
\end{equation}
describes the $\mu$-dependence of the gap(s), but not
their absolute normalization.
$\Delta$ could be any of our gaps. The distinction between
$b$, $c$, $d$, $e$ and $f$ is in the
prefactor which should appear in (\ref{songapequation}), but which
has so far not been calculated. We take our model estimates
as a guide at moderate densities, around $\mu\sim 400-800$ MeV,
and use them to normalize the calculation of Ref.~\cite{Son}
by fixing the prefactor in (\ref{songapequation}) so that
$\Delta=100~{\rm MeV}$
at $\mu=800~{\rm MeV}$. We show the result in Figure \ref{fig:son}.
Note that $\Delta$ is plotted versus $\log\mu$; it changes
very slowly. It decreases by about a factor of three
as $\mu$ is increased to around 100 GeV (!) and then
begins to rise without bound at even higher densities.
We conclude that independent of any details (like
the precise value of $M_s$, for example) at asymptotically
high densities $\Delta$ is
far above $M_s^2/2\mu$. For any finite value of the strange
quark mass $m_s$, quark matter is in the color-flavor locked
phase, with broken chiral symmetry, at arbitrarily high
densities where the gauge coupling becomes small.
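The construction behind Figure \ref{fig:son} can be reproduced with a short script. The following sketch (not the code used for the figure) implements the two-loop running coupling for three flavors with $\Lambda_{\rm QCD}=200~{\rm MeV}$ and fixes the prefactor of (\ref{songapequation}) so that $\Delta=100~{\rm MeV}$ at $\mu=800~{\rm MeV}$; the function names are illustrative:

```python
import math

LAMBDA_QCD = 0.2   # GeV, as in the text
M_S = 0.15         # GeV, strange quark mass used for the M_s^2/2mu curve

def g_two_loop(mu, nf=3):
    """Two-loop running coupling g(mu) for nf flavors."""
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    L = math.log(mu**2 / LAMBDA_QCD**2)
    alpha = (4.0 * math.pi / (b0 * L)) * (1.0 - (b1 / b0**2) * math.log(L) / L)
    return math.sqrt(4.0 * math.pi * alpha)

def gap_unnormalized(mu):
    """mu-dependence of the gap from Son's formula; prefactor undetermined."""
    g = g_two_loop(mu)
    return mu * g**-5 * math.exp(-3.0 * math.pi**2 / (math.sqrt(2.0) * g))

# Fix the prefactor so that Delta = 0.1 GeV at mu = 0.8 GeV.
C = 0.1 / gap_unnormalized(0.8)

def gap(mu):
    return C * gap_unnormalized(mu)

def unlocking_scale(mu):
    """The CFL phase requires Delta >~ M_s^2 / 2 mu."""
    return M_S**2 / (2.0 * mu)

for mu in (0.8, 1e2, 1e6):
    print(f"mu = {mu:>9.1f} GeV:  Delta = {gap(mu):.3e} GeV,"
          f"  M_s^2/2mu = {unlocking_scale(mu):.3e} GeV")
```

With these inputs the gap dips by roughly a factor of three near $\mu\sim 100$ GeV and then grows, while $M_s^2/2\mu$ falls off like $1/\mu$, so the locking condition is comfortably satisfied asymptotically.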
\section{Conclusions}
\label{sec:concl}
We have discussed
a conjectured phase diagram for
$2+1$ flavor QCD as a function of $\mu$, the chemical potential for quark
number, and $m_s$, the strange quark mass. We
have worked at zero temperature
and ignored electromagnetism and the $u$ and $d$
quark masses throughout.
Our quantitative analysis
is restricted to the high density quark matter phases, where we
use a model with an effective four-fermion interaction which has the
index structure of single gluon exchange.
Our analysis shows that for any finite $m_s$ the quarks
will always pair in a color-flavor-locked (CFL) state
at arbitrarily large chemical potential, breaking chiral
symmetry. What happens at lower $\mu$ depends on the strange quark
mass. If $m_s$ is greater than a critical value $m_s^{\rm cont}$,
then as $\mu$ is reduced, there will be an unlocking phase transition
to a color superconductor phase analogous to
that in two flavor QCD, with restoration of chiral
symmetry. We give model-independent arguments that
the unlocking transition between the CFL and 2SC phases
is controlled by the mismatch between the light and strange
quark Fermi momenta and is therefore first order.
We confirm this quantitatively in our model.
If $\mu$ is reduced further, there will presumably be a phase transition to
nuclear matter, and chiral symmetry will be broken again. However, if
$m_s<m_s^{\rm cont}$ then the 2SC state never occurs.
The quark matter stays in the color-flavor locked phase
all the way down until the transition to hadronic matter. Chiral
symmetry is never restored at any $\mu$. In this case there may be
continuity between the CFL quark matter and sufficiently dense
hadronic matter.
Assuming such continuity, we use our calculation
of quark gaps to make predictions about the gaps in hadronic matter
\eqn{prediction}.
At arbitrarily high densities, where the QCD gauge coupling is small,
quark matter is always in the CFL phase with broken chiral symmetry. This is
true independent of whether the ``transition'' to quark matter is
continuous (as may occur for small $m_s$, including, we surmise,
realistic $m_s$) or whether, as for larger $m_s$, there are two first
order transitions, from nuclear matter to the 2SC phase, and
then to the CFL phase.
There are many directions in which this work can be developed. We
have worked at zero temperature, so a natural extension would be to
study the effects of finite temperature. The phase diagram of
two-flavor QCD as a function of baryon density, temperature
and quark mass has been explored in Ref.~\cite{BergesRajagopal}.
A nonperturbative approach beyond the mean field approximation
that we have employed can be performed along the lines of
Ref.~\cite{BJW} using the exact renormalization group,
or by doing a lattice calculation \cite{HandsMorrison}.
Within our model, one could
study more exotic channels that we have ignored, such as
those with $S=1$ and/or $L=1$, or channels that would lead to
pairing of the strange quarks in
the 2SC+s phase.
There are also improvements that could be made to the model,
such as including four- and six-fermion
interactions induced by the three-flavor
instanton vertex, or
using sum rules to include nonperturbative gluons \cite{AKS}, or
including perturbative gluons \cite{Son}. It would also be desirable
to include the effects of electromagnetism, and of different chemical
potentials for the different flavors.
Finally, it is of great importance to investigate the consequences of
our findings for the phenomenology of neutron/quark stars, which are
the only naturally occurring example of cold matter at the densities we
have studied. We are confident that, with the basic symmetry
properties of the phase diagram now at hand, a whole new
phenomenology waits to be uncovered as the role of the strange quark
in dense matter is fully elucidated.
\bigskip
\begin{center}
Acknowledgments
\end{center}
We thank E. Shuster and D. T. Son for helpful discussions. Related
issues are discussed, with a different emphasis, in ``Quark
Description of Hadronic Phases'' by T. Sch\"afer and F. Wilczek, IAS
preprint IASSNS-HEP 99/32. We thank these authors for showing us
their work prior to publication and for informative discussions.
This work is supported in part by the U.S. Department
of Energy (D.O.E.) under cooperative research agreement \#DF-FC02-94ER40818.
\section{Introduction}
Since the introduction of the gauge/gravity duality \cite{Maldacena:1997re, Witten:1998qj}, there has been an immense effort from the theoretical physics community to use holographic methods in order to gain enhanced understanding about various physical phenomena. The duality between a thermal state in the boundary theory and a black hole in the bulk has formed the backbone of
these studies
and enabled the researchers to associate the process of thermalization in a unitary field theory with the formation of a black hole in the dual side. Explaining the dynamics of thermalization in isolated quantum systems, which is a central problem in many-body physics \cite{srednicki:1994, deutsch:1991}, has also been studied within the context of AdS/CFT
by focusing on certain string theory-inspired constructions such as the BFSS \cite{Banks:1996vh} and BMN \cite{Berenstein:2002jq} matrix models.
Over a decade ago, the remaining mysteries about the nature of black holes led to several speculations related to their quantum mechanical structure, some of which could be tested in matrix model environments.
To elaborate, motivated by the arguments of \cite{Hayden:2007cs}, Sekino and Susskind have conjectured that black holes are fast scramblers, i.e., they scramble information at a rate proportional to the logarithm of the number of degrees of freedom \cite{Sekino:2008he}.
The thermalization processes observed in the BMN model have been numerically investigated and reported in a series of papers \cite{Asplund:2011qj, Berenstein:2010bi, Asplund:2012tg}.
In particular,
the results obtained in \cite{Asplund:2011qj} are intriguing as they are broadly consistent with the fast scrambling conjecture. Berenstein et al. have shown that
simulations of thermalization in the BMN model
provide numerical evidence for fast thermalization, which may also be interpreted as an indication of fast scrambling.
On the other hand, extensive thermodynamic simulations of the BFSS model,
including detailed numerical studies of thermalization times,
have been performed in references \cite{Riggins:2012qt,Aoki:2015uha}.
Furthermore, the relation between quantum chaos and thermalization has been recently explored in \cite{Buividovich:2018scl}. Besides these developments, it is essential to note
that, due to the large number of degrees of freedom interacting through a quartic Yang-Mills potential, it does not appear possible to determine general solutions of the BFSS/BMN models. Even the smallest Yang-Mills matrix model with two $2 \times 2$ matrices and with $SU(2)$ gauge symmetry has not been completely solved to date \cite{Berenstein:2016zgj}. Thus, in order to reach meaningful results, instead of considering the whole matrix theory it seems reasonable to
concentrate on simplified structures with fewer degrees of freedom.
A convenient way of achieving this is to
place prior constraints on the system at hand by starting the
simulations with specified sets of initial conditions. Although
one would ideally prefer to choose initial conditions with the aim of setting up a configuration that resembles the phenomenon
of scattering gravitons at high energies, currently this is not possible
in the BFSS case due to the insufficient understanding of graviton states in this matrix model \cite{Berenstein:2010bi}. Nevertheless, as it will be discussed shortly, valuable information regarding the thermalization phenomenon can still be gathered from certain gauge invariant massive deformations of the BFSS model.
In this paper, our main interest is to analyze the dynamics of thermalization in a Yang-Mills matrix model with two distinct mass deformation terms, whose emerging chaotic motions have been investigated in \cite{Baskan:2019qsb}. This model has the same matrix content as the bosonic part of the BFSS matrix model, but also contains mass deformation terms that keep the gauge invariance intact. The paper is structured as follows. Section \ref{secYM} starts out with a brief introduction of the model, which is followed by the description of the initial conditions that are used in the simulations. In section \ref{secNum}, we investigate the thermalization processes observed in the matrix model with massive deformations by performing a detailed numerical analysis of its classical evolution.
This is followed by an examination of the variation of thermalization time with respect to matrix size.
Then, by introducing a configuration of matrices,
we obtain reduced actions from the full matrix model and subsequently
explore the change of thermalization time with the energies of these reduced actions. Lastly, section \ref{Concs} is devoted to conclusions and outlook.
\section{Yang-Mills matrix model with double mass deformation} \label{secYM}
The BFSS matrix model is a Yang-Mills theory in $0+1$ dimensions which arises from the dimensional reduction of the Yang-Mills theory in $9+1$ dimensions with $\mathcal{N}=1$ supersymmetry \cite{Banks:1996vh}. In this paper, we focus upon a
gauge invariant double mass deformation of the bosonic part of the BFSS action which may be specified as \cite{Baskan:2019qsb}
\begin{equation}
\label{MD}
S = \frac{1}{g^2} \int dt \, \tr( \frac{1}{2}(D_t B_I)^2 + \frac{1}{4}
{\lbrack B_I, B_J \rbrack}^2 - \frac{1}{2} \mu_1^2 B_i^2 - \frac{1}{2} \mu_2^2 B_k^2 ) \,,
\end{equation}
where the indices $i$ and $k$ take on the values $i = 1,2,3$ and $k = 4,5,6$, respectively.
In (\ref{MD}), $B_I$ $(I=1,\dots,9)$ are $N \times N$ Hermitian matrices and $ \tr$ stands for the trace. The covariant derivatives are defined by
\begin{equation}
D_t {B}_I=\partial_t B_I - i \lbrack A, B_I \rbrack \, .
\end{equation}
When the deformation parameters $\mu_1$ and $\mu_2$ are both equal to zero, (\ref{MD}) reduces to the bosonic part of the classical BFSS action. Since, we are going to be essentially concerned with the classical dynamics of~\eqref{MD}, we
absorb the coupling constant in the definition of $\hbar$,
as it only determines the overall scale of energy classically.
In the Weyl gauge, $A=0$, the equations of motion for $B_I$ take the form
\begin{subequations}
\begin{align}
\ddot{B_i} + \lbrack B_I , \lbrack B_I , B_i \rbrack \rbrack + \mu_1^2 B_i &= 0 \,, \\
\ddot{B_k} + \lbrack B_I , \lbrack B_I , B_k \rbrack \rbrack + \mu_2^2 B_k &= 0 \,, \\
\ddot{B_r} + \lbrack B_I , \lbrack B_I , B_r \rbrack \rbrack &= 0 \,,
\end{align}
\end{subequations}
where the index $r$ runs through the values $7,8,$ and $9$.
Similarly,
the Weyl gauge Hamiltonian reads
\begin{equation}
\label{Ham1}
H = \tr( \dfrac{{P_I}^2}{2} - \frac{1}{4}
{\lbrack B_I, B_J \rbrack}^2 + \frac{1}{2} \mu_1^2 B_i^2 + \frac{1}{2} \mu_2^2 B_k^2 ) \,.
\end{equation}
Due to gauge invariance, $B_I$ matrices and conjugate momenta should also satisfy the Gauss Law constraint given by
\begin{equation}
\label{GaussLaw}
\lbrack B_I, P_I \rbrack = 0 \,.
\end{equation}
The Hamilton's equations of motion can easily be derived from (\ref{Ham1}). However, in order to obtain relations that are more convenient for numerical simulations, we rename a subset of phase space coordinates (namely $B_I$ and $P_I$ for $I \geqslant 4$) and subsequently change the indices labelling the aforementioned coordinates so that all indices can range over the same set of integer values. The resulting equations of motion can be written out as follows
\begin{subequations}
\label{HeomNw}
\begin{align}
\dot{P_i} &= \lbrack \lbrack B_j , B_i \rbrack, B_j \rbrack +
\lbrack \lbrack C_l , B_i \rbrack, C_l \rbrack +
\lbrack \lbrack D_s , B_i \rbrack, D_s \rbrack - \mu_1^2 B_i \,, \label{YMDMeomA1} \\
\dot{R_l} &= \lbrack \lbrack B_i , C_l \rbrack, B_i \rbrack +
\lbrack \lbrack C_{l^\prime} , C_l \rbrack, C_{l^\prime} \rbrack +
\lbrack \lbrack D_s , C_l \rbrack, D_s \rbrack - \mu_2^2 C_l \,, \label{YMDMeomB1} \\
\dot{W_s} &= \lbrack \lbrack B_i , D_s \rbrack, B_i \rbrack +
\lbrack \lbrack C_l , D_s \rbrack, C_l \rbrack +
\lbrack \lbrack D_{s^\prime} , D_s \rbrack, D_{s^\prime} \rbrack \,, \label{YMDMeomC1} \\
P_i &= \dot{B_i} \,, \quad R_l = \dot{C_l} \,, \quad W_s = \dot{D_s} \,,
\end{align}
\end{subequations}
where $j,l,l^\prime,s,s^\prime = 1,2,3$.
Furthermore, in this new notation (\ref{GaussLaw}) becomes
\begin{equation}
\label{GaussLawNew}
G = \lbrack B_i, P_i \rbrack + \lbrack C_l, R_l \rbrack + \lbrack D_s, W_s \rbrack = 0 \,.
\end{equation}
One of the primary purposes of this study is to examine the dependence of thermalization on the choice of initial conditions. To this end, we adopt an approach similar to the one suggested in \cite{Asplund:2011qj} and set up the initial conditions as follows
\begin{align}
\label{IniConds}
B_1 &=
\begin{pmatrix}
J_1 & 0 \\
0 & 0
\end{pmatrix} \,, \quad
B_2 =
\begin{pmatrix}
J_2 & q_1 \\
{q_1}^\dagger & 0
\end{pmatrix} \,, \quad
B_3 =
\begin{pmatrix}
J_3 & q_2 \\
{q_2}^\dagger & 0
\end{pmatrix} \,, \quad
C_l =
\begin{pmatrix}
J_l & 0 \\
0 & 0
\end{pmatrix} \nonumber \\
P_1 &=
\begin{pmatrix}
0 & 0 \\
0 & p_0
\end{pmatrix} \,, \quad P_2 = P_3 = 0 \,, \quad D_s = 0 \,, \quad R_l = W_s = 0 \,,
\end{align}
where $J_i$'s are $(N-1)$-dimensional Hermitian matrices. They denote the spin-$j$
$(j=(N-2)/2)$
irreducible representation of $SU(2)$ and form the fuzzy two-sphere at level $j$
\cite{Madore:1991bw,Balachandran:2005ew}.
While the diagonal modes of $B_i$ and $C_l$ matrices start from the fuzzy sphere configurations, $P_1$ initiates with a single eigenvalue, $p_0$, on the main diagonal. Besides their diagonal modes, the off-diagonal elements of $B_2$ and $B_3$ are also excited with the addition of $q_1$ and $q_2$ blocks consisting of randomly generated initial conditions. These blocks, which serve as sources of small fluctuations, are formed by utilizing a complex normal distribution with a spread proportional to
${(\hbar/(N-1))}^\frac{1}{2}$.
After completing the essential procedure of specifying initial conditions, we may now proceed to the stage of numerical simulations.
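For concreteness, the assembly of the configuration (\ref{IniConds}) can be sketched as follows. The fuzzy-sphere generators are built from the standard angular momentum ladder operators; the function names are illustrative, and $D_s$, $R_l$, $W_s$ (all zero initially) are omitted:

```python
import numpy as np

def su2_spin_j(dim):
    """Spin-j generators (dim = 2j+1) obeying [J_i, J_j] = i eps_{ijk} J_k."""
    j = (dim - 1) / 2.0
    m = j - np.arange(dim)                    # weights j, j-1, ..., -j
    J3 = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)  # raising operator J+
    for k in range(1, dim):
        # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    J1 = (Jp + Jp.conj().T) / 2.0
    J2 = (Jp - Jp.conj().T) / 2.0j
    return J1, J2, J3

def initial_conditions(N, p0, hbar=1e-3, rng=None):
    """Assemble the N x N matrices B_i, C_l, P_1 of the initial configuration."""
    rng = rng if rng is not None else np.random.default_rng()
    d = N - 1
    J = su2_spin_j(d)
    spread = np.sqrt(hbar / (N - 1))
    def q():  # random off-diagonal fluctuation block (column vector)
        return (rng.normal(scale=spread, size=(d, 1))
                + 1j * rng.normal(scale=spread, size=(d, 1)))
    def embed(top_left, off=None):
        M = np.zeros((N, N), dtype=complex)
        M[:d, :d] = top_left
        if off is not None:
            M[:d, d:], M[d:, :d] = off, off.conj().T
        return M
    B = [embed(J[0]), embed(J[1], q()), embed(J[2], q())]
    C = [embed(Ji) for Ji in J]
    P1 = np.zeros((N, N), dtype=complex)
    P1[-1, -1] = p0                           # single eigenvalue on the diagonal
    return B, C, P1
```

By construction all matrices are Hermitian, and the diagonal blocks of $B_i$ and $C_l$ realize the fuzzy two-sphere at level $j=(N-2)/2$.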
\section{Numerical results} \label{secNum}
This section is devoted to an investigation of the thermalization processes observed in the Yang-Mills theory with massive deformations. In order to provide a comprehensive analysis, we carry out numerical simulations for the time evolution of (\ref{HeomNw}).
After discretizing the equations of motion, an iterative algorithm can be developed to solve the discretized equations numerically.
By saving the contents of the eighteen matrices every few iterations,
one can gain valuable insight into the dynamics of thermalization as will be discussed shortly.
In the computations, a simulation code implemented in Matlab is used. The code is executed with a constant time step of $0.004$ and we run it for a sufficient amount of time to clearly observe the values that the eigenvalues converge to. Due to truncation of digits, errors are inevitable in numerical calculations. In this regard, although the initial conditions given by (\ref{IniConds}) fulfill the Gauss law constraint, the cumulative effect of rounding errors could cause the violation of (\ref{GaussLawNew}). However, by constantly monitoring $G$ during the trial runs of the simulation,
we made sure that no such effect is present.
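A minimal sketch of such an iterative scheme (not the Matlab code used in this work) is the leapfrog integrator below, acting on the nine Hermitian matrices with squared masses $(\mu_1^2,\mu_1^2,\mu_1^2,\mu_2^2,\mu_2^2,\mu_2^2,0,0,0)$. A kick-drift-kick step preserves the Gauss constraint $G$ exactly up to rounding, since $\sum_I [X_I, F_I] = 0$ and $\sum_I [P_I, P_I] = 0$:

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

def forces(X, mu2):
    """RHS of the equations of motion: F_i = sum_J [[X_J, X_i], X_J] - mu_i^2 X_i."""
    F = []
    for i, Xi in enumerate(X):
        Fi = -mu2[i] * Xi
        for Xj in X:                       # the J = i term vanishes identically
            Fi += comm(comm(Xj, Xi), Xj)
        F.append(Fi)
    return F

def energy(X, P, mu2):
    kin = sum(np.trace(Pi @ Pi).real for Pi in P) / 2.0
    pot = -sum(np.trace(comm(Xi, Xj) @ comm(Xi, Xj)).real
               for Xi in X for Xj in X) / 4.0
    mass = sum(m * np.trace(Xi @ Xi).real for m, Xi in zip(mu2, X)) / 2.0
    return kin + pot + mass

def gauss(X, P):
    """Gauss law constraint G = sum_I [X_I, P_I]."""
    return sum(comm(Xi, Pi) for Xi, Pi in zip(X, P))

def leapfrog(X, P, mu2, dt=0.004, steps=1000):
    F = forces(X, mu2)
    for _ in range(steps):
        P = [Pi + 0.5 * dt * Fi for Pi, Fi in zip(P, F)]   # half kick
        X = [Xi + dt * Pi for Xi, Pi in zip(X, P)]         # drift
        F = forces(X, mu2)
        P = [Pi + 0.5 * dt * Fi for Pi, Fi in zip(P, F)]   # half kick
    return X, P
```

Because the integrator is symplectic, the energy error stays bounded over long runs, which makes monitoring $H$ and $\|G\|$ a useful consistency check.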
Having now introduced the basic features of numerical computations, we move on to the details of obtained results. When the random fluctuation terms are not added to the system, i.e. $\hbar=0$,
the starting configurations keep evolving periodically in time
and thermalization does not occur. Thus, to avoid such a scenario, we set the value of $\hbar$ to $0.001$, which will remain fixed for the rest of this work. In order to discover various intriguing properties of the thermalization process, we first vary the $p_0$ parameter. Figure \ref{fig:fig11} shows the evolution of the eigenvalues of $B_1$ with simulation time for six different $p_0$ values.
\begin{figure}[!htb]
\centering
\begin{subfigure}[!htb]{.32\textwidth}
\centering
\includegraphics[width= 1\linewidth]{p0_0.jpg}
\caption{$p_0 = 0$}
\label{fig:fig1a1}
\end{subfigure}
\begin{subfigure}[!htb]{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{p0_5.jpg}
\caption{$p_0 = 5$}
\label{fig:fig1b1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{p0_7.jpg}
\caption{$p_0 = 7$}
\label{fig:fig1c1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{p0_10.jpg}
\caption{$p_0 = 10$}
\label{fig:fig1d1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width= 1\linewidth]{p0_19_5.jpg}
\caption{$p_0 = 19.5$}
\label{fig:fig1e1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=1\linewidth]{p0_30.jpg}
\caption{$p_0 = 30$}
\label{fig:fig1f1}
\end{subfigure}
\caption{Eigenvalues of $B_1$ vs. Time at $N=8$, $\mu_1=1$, and $\mu_2=1.5$}
\label{fig:fig11}
\end{figure}
The first thing we can immediately observe from the plots is that the oscillatory behavior of eigenvalues,
which can be most clearly seen from the last two figures,
becomes more apparent with increasing $p_0$. In Figures \ref{fig:fig1e1} and \ref{fig:fig1f1}, after a series of oscillations,
the amplitudes of the oscillations
decrease considerably and the frequencies tend to synchronize, which results in the emergence of collective oscillations.
\begin{figure}[!htb]
\centering
\begin{subfigure}[!htb]{.48\textwidth}
\centering
\includegraphics[width= 1\linewidth]{P1hist_p0_19_5_noAxs.jpg}
\caption{$p_0 = 19.5$}
\label{fig:fig2_a}
\end{subfigure}
\begin{subfigure}[!htb]{.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{P1hist_p0_30_noAxs.jpg}
\caption{$p_0 = 30$}
\label{fig:fig2_b}
\end{subfigure}
\caption{Histograms of eigenvalues of $P_1$ at $N=8$, $\mu_1=1$, and $\mu_2=1.5$}
\label{fig:fig2_1}
\end{figure}
On the other hand, as it is described in detail in subsection \ref{Ssec_Thrm}, thermalization occurs at all six $p_0$ values that are used in preparation of Figure \ref{fig:fig11}. In order to probe the presence of thermalization occurring at the $p_0$ values of $19.5$ and $30$, let us consider the results shown in Figure \ref{fig:fig2_1}. In Figures \ref{fig:fig2_a} and \ref{fig:fig2_b}, the eigenvalue distributions of the momentum matrix $P_1$ are illustrated. The histograms are generated by sampling the eigenvalues of $P_1$ on the time interval\footnote[2]{A detailed discussion of the determination of thermalization times is given in the next subsection.}
$[758,2500]$ during which
the system resides in potentially thermalized states.
The bin size is set to 40 and the dots in the figure correspond to the midpoints of the top edges of histogram bars. As expected from thermalized configurations, the semicircle distribution model fits the data nicely in both cases. Furthermore, in order to compare the eigenvalue distributions of the momenta matrices, the histograms of the eigenvalues of $P_1$ and $R_1$ are plotted together in Figure \ref{fig:fig31}. This time the histograms are generated by sampling the eigenvalues on the time interval $[758,3000]$ with a bin size equal to $30$.
Let us also note that we now set the value of $p_0$ to $30$, which
will remain fixed for the rest of this work unless otherwise stated.
It appears
\begin{figure}[!htb]
\centering
\includegraphics[width= 0.7\linewidth]{DataP1R1p0_30v3.jpg}
\caption{Histograms of eigenvalues of $P_1$ and $R_1$ at $N=8$, $\mu_1=1$, and $\mu_2=1.5$}
\label{fig:fig31}
\end{figure}
that the semicircle model gives a good fit to both $P_1$ and $R_1$ distributions, which implies that after $t = 758$ the momenta temperatures become essentially the same. Thus, it is safe to conclude that thermalization has occurred.
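The semicircle comparison can be illustrated with the following sketch. Here the eigenvalues of a GUE matrix serve as a stand-in for the sampled momentum spectra, and the radius is fixed by moment matching, using ${\rm Var}=R^2/4$ for the Wigner semicircle law; this is not a fit to the data of Figures \ref{fig:fig2_1} and \ref{fig:fig31}:

```python
import numpy as np

def semicircle_density(x, R):
    """Wigner semicircle pdf, rho(x) = 2/(pi R^2) sqrt(R^2 - x^2) on [-R, R]."""
    y = np.clip(R**2 - x**2, 0.0, None)
    return 2.0 / (np.pi * R**2) * np.sqrt(y)

def fit_semicircle(eigenvalues):
    """Moment-matched radius: Var = R^2/4 for the semicircle law."""
    lam = np.asarray(eigenvalues) - np.mean(eigenvalues)
    return 2.0 * np.std(lam)

# Stand-in data: eigenvalues of a GUE matrix, whose spectral density
# approaches a semicircle of radius 2 with this normalization.
rng = np.random.default_rng(0)
n = 1000
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / (2.0 * np.sqrt(n))
lam = np.linalg.eigvalsh(H)
R_hat = fit_semicircle(lam)
print(f"fitted radius: {R_hat:.3f} (semicircle prediction: 2.000)")
```

A histogram of `lam` plotted against `semicircle_density(x, R_hat)` then gives the same kind of comparison as in the figures.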
\subsection{Thermalization time} \label{Ssec_Thrm}
The main results concerning the
presence of thermalization
have been discussed up to this point. As it is central to the understanding of the thermalization process, let us consider a method that will help us in both determining the thermalization time of the system and providing evidence for the presence of thermalization. This method
relies on the evaluation of the relative size of changes in both $B_i$ and $C_l$ eigenvalues \cite{Asplund:2011qj}.
Figure \ref{fig:fig41} displays how the standard deviations of the eigenvalues for $B_1$, $B_2$, $C_1$, and $C_2$ matrices evolve with simulation time.
As seen in the legend, std($B_1$) denotes the standard deviation of the eigenvalues for $B_1$ and so on. Starting from oscillatory behavior with nearly constant
\begin{figure}[!htb]
\centering
\includegraphics[width= 1.1\linewidth]{stds_N8_mu1_1_mu2_3bolu2.jpg}
\caption{Standard deviations of eigenvalues vs. Time at $N=8$, $\mu_1=1$, and $\mu_2=1.5$}
\label{fig:fig41}
\end{figure}
amplitude, std($B_1$) undergoes a change
at $t \gtrapprox 500$
and its amplitude decreases considerably with time. In addition, as time progresses, the standard deviations tend to converge on a narrow band of values, and the system reaches a seemingly stable configuration in which only minor fluctuations are observed.
Among the different notions of thermalization time, we choose to focus on the one that defines it as the timescale of thermalization from a given set of initial conditions. Using the signal processing toolbox of Matlab, we have developed a code that detects the time instants at which the variance of a signal changes significantly and run it on the standard deviation data graphed in Figure \ref{fig:fig41}. The approximate time when the standard deviations, and hence the system, reach an equilibrium size is determined to be equal to $758$. In Figure \ref{fig:fig41}, this approximate time instant is
marked with a dashed vertical line and
$t_{th}$ denotes the thermalization time of the system.
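A simple stand-in for this variance change-point detection (the Matlab signal processing toolbox is not assumed here) is a two-segment Gaussian log-likelihood cost, shown below on a synthetic signal mimicking the qualitative behavior of std($B_1$); the threshold and segment sizes are illustrative choices:

```python
import numpy as np

def variance_changepoint(x, min_seg=30):
    """Index minimizing the two-segment variance cost
    t*log(var(x[:t])) + (n-t)*log(var(x[t:]))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_t, best_cost = None, np.inf
    for t in range(min_seg, n - min_seg):
        v1 = np.var(x[:t]) + 1e-12     # small floor avoids log(0)
        v2 = np.var(x[t:]) + 1e-12
        cost = t * np.log(v1) + (n - t) * np.log(v2)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Synthetic stand-in: large fluctuations that settle down after step 400.
rng = np.random.default_rng(2)
signal = np.concatenate([rng.normal(scale=1.0, size=400),
                         rng.normal(scale=0.1, size=600)])
print("detected change near step:", variance_changepoint(signal))
```

Applied to the standard-deviation series of the eigenvalues, the detected index plays the role of $t_{th}$.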
The procedure detailed above can be generalized for $N>8$. In Figure \ref{fig:fig51}, we present
plots of thermalization time versus $N$ at four distinct
mass combinations,
where the matrix size $N$ takes the values $N = 8,\dots,100$.
Let us immediately note that the models at $\mu_1 = \mu_2 = 1$ have different features from the rest in the sense that data values tend to decrease with increasing $N$. We find that the function
\begin{equation}
\label{kokfit}
T_1(N) = \frac{3404}{\sqrt{N}} \,,
\end{equation}
provides an adequate fit
to the data as can be seen from Figure \ref{fig:fig5a1}.
In addition,
a logarithmic fit of the form
\begin{equation}
\label{logfit1}
T_a(N) = c_a \log(N) + d_a \,,
\end{equation}
with
\begin{table}[H]
\centering
\begin{tabular}{ | c | c | c |}
\cline{2-3}
\multicolumn{1}{c |}{} & $c_a$ & $d_a$ \\ \hline
$T_2(N)$ &$439.5$ &$-284.9$ \\ \hline
$T_3(N)$ &$380.1$ &$49.04$ \\ \hline
$T_4(N)$ &$419.4$ & $-60.1$ \\ \hline
\end{tabular}
\caption{$c_a$ and $d_a$ values for the fitting curve (\ref{logfit1})}
\label{table:fitvalues1}
\end{table}
\noindent appears to be well-suited
for the remaining models as can be observed from Figures \ref{fig:fig5b1} - \ref{fig:fig5d1}. In equation (\ref{logfit1}), the index $a$ ranges from $2$ to $4$. Besides, it is important to note that expressions (\ref{kokfit}) and (\ref{logfit1}) are quite sufficient to fit the data as the minimum recorded adjusted R-squared value is equal to $0.938$.
\begin{figure}[!htb]
\centering
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width= 1\linewidth]{ThrTmDt_mu1_1mu2_1.jpg}
\caption{$\mu_2 = 1$}
\label{fig:fig5a1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{ThrTmDt_mu1_1mu2_1_5.jpg}
\caption{$\mu_2 = 1.5$}
\label{fig:fig5b1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{ThrTmDt_mu1_1mu2_2.jpg}
\caption{$\mu_2 = 2$}
\label{fig:fig5c1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{ThrTmDt_mu1_1mu2_3.jpg}
\caption{$\mu_2 = 3$}
\label{fig:fig5d1}
\end{subfigure}
\caption{Thermalization time vs. $N$ at $\mu_1=1$}
\label{fig:fig51}
\end{figure}
At a slight tangent to the analysis of thermalization times, let us return back to the study of Figure \ref{fig:fig41}. The method used in the preparation of this figure can be applied with some arbitrary $p_0$ value of our choosing to produce a similar graph. In Appendix \ref{AppAddFgs}, we display Figure \ref{fig:figApp1}, which shows the variations of the standard deviations of the eigenvalues for $B_1$, $B_2$, $C_1$, and $C_2$ matrices at the $p_0$ values that are already utilized in the preparation of Figure \ref{fig:fig11}. Similar to the behavior observed when $p_0$ is equal to $30$, in Figures \ref{fig:figAppa1}-\ref{fig:figAppe1}, after periods of decrease in oscillation amplitudes, standard deviations converge on narrow bands, which implies that thermalization occurs at all six $p_0$ values. To test this hypothesis, we may pick $p_0=7$ and examine the eigenvalue distributions of the momenta matrices. In Figure \ref{fig:figAppDst}, the histograms of the eigenvalues of $P_1$ and $R_1$ at $p_0=7$ are depicted together for the sake of comparison. Since the semicircle curve provides a good fit to both distributions, we can infer that the momenta temperatures are essentially the same and thermalization has occurred.
\subsubsection{Energy dependence of thermalization time}
Apart from its dependence on matrix size, we can also explore the variation of the thermalization time with respect to energy.
In this subsection, by performing simulations of the matrix model (\ref{Ham1}),
the dependence of thermalization time on energy is depicted at several distinct mass combinations and matrix size values.
We launch the discussion by introducing a matrix configuration in the form
\begin{align}
\label{IniCndsFLc1}
B_1 =
\begin{pmatrix}
v(t) J_1 & 0 \\
0 & 0
\end{pmatrix} \,, \quad
B_2 &=
\begin{pmatrix}
v(t) J_2 & q_1 \\
{q_1}^\dagger & 0
\end{pmatrix} \,, \quad
B_3 =
\begin{pmatrix}
v(t) J_3 & q_2 \\
{q_2}^\dagger & 0
\end{pmatrix} \,,
\nonumber \\
C_l =
\begin{pmatrix}
z(t) J_l & 0 \\
0 & 0
\end{pmatrix} \,, \quad
P_1 &=
\begin{pmatrix}
0 & 0 \\
0 & p_0
\end{pmatrix} \,, \quad
P_2 = P_3 = 0 \,, \quad R_l = 0 \,, \nonumber \\
D_s &= 0 \,, \quad W_s = 0 \,,
\end{align}
where $v(t)$ and $z(t)$ are real functions of time and $J_i$ satisfy the commutation relations given by $[J_i,J_j] = i \hbar^{}_{\!J}
\epsilon_{i j l} J_{l}$.
After substituting configuration (\ref{IniCndsFLc1}),
at an arbitrary time $t$, into the Hamiltonian (\ref{Ham1}), we evaluate the traces using Matlab and arrive at a set of effective Hamiltonians\footnote[3]{It is important to remark that we employ this method only for producing initial configurations. Unlike the reduced models in matrix model settings studied in \cite{Baskan:2019qsb,Arefeva:1997oyf,Asano:2015eha},
the matrix configuration defined by (\ref{IniCndsFLc1}) does not satisfy the equations of motion (\ref{HeomNw}).}.
A generic member of this set can be expressed as follows
\begin{align}
\label{EqHs}
H_s &= \frac{1}{2} p_0^2 + {\hbar^{}_{\!J}}^{\!\!4} c^{}_{N} \Big(v^2+z^2 \Big)^2 + {\hbar^{}_{\!J}}^{\!\!2} \Big[\Big(c^{}_{N} \mu_1^2 + \Delta_1 \Big) v^2 +
\Big(c^{}_{N} \mu_2^2 + \Delta_2\Big) z^2 \Big] \nonumber \\
&+ \Delta_3 \mu_1^2 \,,
\end{align}
where the coefficients $c^{}_{N}$ are defined by $c^{}_{N} = \frac{N(N-1)(N-2)}{8}$.
Here, it is essential to note that, due to the presence of fluctuation blocks $q_1$ and $q_2$, unlike $c^{}_{N}$, the $\Delta_i$ coefficients are random numbers that change with every new substitution of the configuration (\ref{IniCndsFLc1}) into (\ref{Ham1}). With the purpose of listing and examining $\Delta_i$ values, we have repeated the procedure utilized in obtaining $H_s$ by running a code $500$ times and determined the reduced Hamiltonians. For $N=8$, the maxima of the absolute values of $\Delta_1$, $\Delta_2$, and $\Delta_3$ were recorded as $0.0018$, $0.0014$, and $0.0003$, respectively, which indicates that the extent of change in the coefficients of the quadratic terms is small (in comparison to $c^{}_{N}$) but not negligible. Let us also add that we set $\hbar^{}_{\!J}$ to $1$ in this subsection.
Another point to emphasize is that analyzing the classical dynamics of equation (\ref{EqHs}) is not a purpose of this study.
$H_s$ is solely employed to generate initial conditions for the simulations of equation (\ref{Ham1}). In order to give a detailed description of the initial condition selection process, let us first denote by $(v_b,z_b)\equiv \big(v(t_b),z(t_b)\big)$ a generic set of initial conditions at the start time $t_b$ of a classical simulation of $H$. Then, at $t=t_b$, (\ref{EqHs}) can be expressed as shown below
\useshortskip
\begin{align}
E &= c^{}_{N} v_b^4 + c^{}_{N} z_b^4 + \Big(c^{}_{N} \mu_1^2 + \Delta_1 \Big) v_b^2 + \Big(c^{}_{N} \mu_2^2 + \Delta_2\Big) z_b^2
+ 2 c^{}_{N} v_b^2 z_b^2 \nonumber \\
&+ \frac{1}{2} p_0^2 + \Delta_3 \mu_1^2 \,,
\end{align}
where $E$ is the energy of the reduced action.
With the aim of investigating the variation of thermalization time with energy, we run another Matlab code, which determines the thermalization time at several different values of the energy. We run the code with randomly selected initial conditions satisfying a given energy condition and detect the thermalization time of the system for a specified matrix size and mass combination. To make the random initial condition selection process effective, we developed a simple approach, which we briefly explain next. To start with, we generate two uniformly distributed random numbers $\phi_\nu$ over the interval $0 \leq \phi_\nu \leq E$ satisfying the constraint
$E = \phi_1 + \phi_2$. Subsequently, the real roots of the equation
\useshortskip
\begin{equation}
\label{iniCon1}
c^{}_{N} z_b^4 + \big( c^{}_{N} \mu_2^2 + \Delta_2 \big) z_b^2
+ \frac{1}{2} {p_0}^2 + \Delta_3 \mu_1^2 - \phi_1 = 0 \,,
\end{equation}
are found. Our code randomly selects one of these roots, which is later used to solve for $v_b$ in the equation
\useshortskip
\begin{equation}
\label{iniCon2}
c^{}_{N} v_b^4 + \big( c^{}_{N} \mu_1^2 +2 c^{}_{N} z_b^2
+ \Delta_1 \big) v_b^2 - \phi_2 = 0 \,.
\end{equation}
Lastly, one of the real roots of equation (\ref{iniCon2}) is randomly picked by our code. Having thus determined the $(v_b,z_b)$ pair, we move on to discuss the simulation stage. In order to measure the thermalization time at the energy $E$, we perform a classical simulation of the matrix model (\ref{Ham1}). This simulation is started with the initial configuration given by
\begin{align}
B_1 =
\begin{pmatrix}
v_b J_1 & 0 \\
0 & 0
\end{pmatrix} \,, \quad
B_2 &=
\begin{pmatrix}
v_b J_2 & q_1 \\
{q_1}^\dagger & 0
\end{pmatrix} \,, \quad
B_3 =
\begin{pmatrix}
v_b J_3 & q_2 \\
{q_2}^\dagger & 0
\end{pmatrix} \,,
\nonumber \\
C_l =
\begin{pmatrix}
z_b J_l & 0 \\
0 & 0
\end{pmatrix} \,, \quad
P_1 &=
\begin{pmatrix}
0 & 0 \\
0 & p_0
\end{pmatrix} \,, \quad
P_2 = P_3 = 0 \,, \quad R_l = 0 \,, \nonumber \\
D_s &= 0 \,, \quad W_s = 0 \,.
\end{align}
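As a concrete illustration, the two-step root-selection procedure above can be sketched in Python as follows. This is a minimal sketch of ours, not the actual Matlab code: the values of $c_N$, the masses, $p_0$ and the $\Delta_i$ are taken as inputs, the variable names are our own, and we simply retry the random split of $E$ until both quartics (quadratics in $z_b^2$ and $v_b^2$) admit admissible real roots.

```python
import numpy as np

def pick_initial_pair(E, cN, mu1, mu2, p0, d1=0.0, d2=0.0, d3=0.0,
                      rng=np.random.default_rng(0), max_tries=1000):
    """Randomly select (v_b, z_b) on the energy surface E = phi_1 + phi_2.

    Solves the two quartics (quadratic in z_b^2 and in v_b^2):
      cN z^4 + (cN mu2^2 + d2) z^2 + p0^2/2 + d3 mu1^2 - phi1 = 0,
      cN v^4 + (cN mu1^2 + 2 cN z^2 + d1) v^2 - phi2 = 0.
    """
    const = 0.5 * p0**2 + d3 * mu1**2   # z-independent part of phi_1
    for _ in range(max_tries):
        phi1 = rng.uniform(0.0, E)
        phi2 = E - phi1
        # quadratic in u = z_b^2: cN u^2 + (cN mu2^2 + d2) u + const - phi1 = 0
        u = np.roots([cN, cN * mu2**2 + d2, const - phi1])
        u = u[np.isreal(u) & (u.real >= 0.0)].real
        if u.size == 0:
            continue                     # retry the random split of E
        zb2 = rng.choice(u)
        zb = rng.choice([-1.0, 1.0]) * np.sqrt(zb2)
        # quadratic in w = v_b^2
        w = np.roots([cN, cN * mu1**2 + 2.0 * cN * zb2 + d1, -phi2])
        w = w[np.isreal(w) & (w.real >= 0.0)].real
        if w.size == 0:
            continue
        vb = rng.choice([-1.0, 1.0]) * np.sqrt(rng.choice(w))
        return vb, zb
    raise RuntimeError("no admissible roots found")
```

Substituting the selected pair back into the reduced energy expression reproduces $E$, which provides a quick consistency check on the root selection.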
Following the completion of the classical simulation,
the thermalization time is measured by the method described at the beginning of subsection \ref{Ssec_Thrm}.
By setting $p_0$ equal to $12$ and repeating the procedure detailed above for a range of energy values,
we prepare the data depicted in
Figures \ref{fig:fig61} and \ref{fig:fig71}.
\begin{figure}[!htb]
\centering
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width= 1\linewidth]{TvE_N8_mu1_1_mu2_0_5.jpg}
\caption{$\mu_2 = 0.5$}
\label{fig:fig6a1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{TvE_N8_mu1_1_mu2_1_5.jpg}
\caption{$\mu_2 = 1.5$}
\label{fig:fig6b1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{TvE_N8_mu1_1_mu2_3.jpg}
\caption{$\mu_2 = 3$}
\label{fig:fig6c1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{TvE_N8_mu1_1_mu2_4.jpg}
\caption{$\mu_2 = 4$}
\label{fig:fig6d1}
\end{subfigure}
\caption{Thermalization time vs. Energy at $\mu_1=1$ and $N=8$}
\label{fig:fig61}
\end{figure}
Figure \ref{fig:fig61} shows the plots of thermalization time versus energy at four different $\mu_2$ values.
The best-fitting function for the numerical data displayed in Figure \ref{fig:fig61} is found to be
a power law in the form
\begin{equation}
\label{pwrFit1}
\Lambda_m(E) = \alpha_m E^{\beta_{m}} + \xi_m \,.
\end{equation}
The fitting parameters of the best fit equations (\ref{pwrFit1}) are listed in Table \ref{tab:pwFitvals1}. Due to the obvious increase in the variance of the data, the fits describing the thermalization times of
\begin{table}[H]
\centering
\begin{tabular}{ | c | c | c | c |}
\cline{2-4}
\multicolumn{1}{c |}{} & $\alpha_m$ & $\beta_m$ & $\xi_m$ \\ \hline
$\Lambda_1(E)$ &$859.8$ &$-0.1491$ & $-8.2$ \\ \hline
$\Lambda_2(E)$ &$1007$ &$-0.1737$ & $6.9$ \\ \hline
$\Lambda_3(E)$ &$1850$ & $-0.2353$ & $0.4$ \\ \hline
$\Lambda_4(E)$ &$2153$ & $-0.240$ & $3.1$ \\ \hline
\end{tabular}
\caption{$\alpha_m$, $\beta_m$ and $\xi_m$ values for the fitting curve (\ref{pwrFit1}) \label{tab:pwFitvals1}}
\end{table}
\noindent Figure \ref{fig:fig61} are not as good as the fits displayed in Figure \ref{fig:fig51}. The four fitting curves $\Lambda_m$ ($m = 1,2,3,4$) have adjusted R-squared statistics of $0.8681$, $0.8654$, $0.897$, and $0.8703$, respectively.
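Shifted power-law fits of the form (\ref{pwrFit1}) can be reproduced with a standard least-squares routine. The sketch below is our own illustration, with synthetic data standing in for the numerical thermalization times (only the fitted parameters are tabulated here); it also computes the adjusted R-squared statistic quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def shifted_power_law(E, alpha, beta, xi):
    """Lambda(E) = alpha * E**beta + xi, the form of the fitting function."""
    return alpha * E**beta + xi

# Synthetic stand-in data: a noisy decreasing power law over a scanned range.
rng = np.random.default_rng(1)
E = np.linspace(50.0, 5000.0, 120)
Lam = 1000.0 * E**-0.17 + 5.0 + rng.normal(0.0, 5.0, E.size)

popt, _ = curve_fit(shifted_power_law, E, Lam, p0=(1000.0, -0.2, 0.0))

# Adjusted R-squared: 1 - (1 - R^2)(n - 1)/(n - p - 1), with p = 3 parameters.
res = Lam - shifted_power_law(E, *popt)
ss_res = np.sum(res**2)
ss_tot = np.sum((Lam - Lam.mean())**2)
r2 = 1.0 - ss_res / ss_tot
n, p = E.size, 3
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```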
On the other hand, in order to take the effects of matrix size into consideration, we illustrate in Figure \ref{fig:fig71} the evolution of thermalization times with energy at $N=6,8,10,12$. From the profiles of thermalization time with respect to energy shown in Figure \ref{fig:fig71}, we observe that the numerical data exhibit a decreasing trend, which can again be modelled with a power law of the form
\begin{equation}
\label{pwrFit2}
\Gamma_m(E) = \theta_m E^{\epsilon_{m}} + \delta_m \,,
\end{equation}
with the fitting parameters displayed in Table \ref{tab:pwFitvals2}. The adjusted R-squared values of the
\begin{table}[H]
\centering
\begin{tabular}{ | c | c | c | c |}
\cline{2-4}
\multicolumn{1}{c |}{} & $\theta_m$ & $\epsilon_{m}$ & $\delta_m$ \\ \hline
$\Gamma_1(E)$ &$1237$ &$-0.1942$ & $-3.4$ \\ \hline
$\Gamma_2 (E)$ &$1007$ &$-0.1737$ & $6.9$ \\ \hline
$\Gamma_3(E)$ &$1291$ & $-0.1825$ & $33.7$ \\ \hline
$\Gamma_4(E)$ &$1128$ & $-0.1796$ & $1.8$ \\ \hline
\end{tabular}
\caption{$\theta_m$, $\epsilon_m$ and $\delta_m$ values for the fitting curve (\ref{pwrFit2}) \label{tab:pwFitvals2}}
\end{table}
\noindent fitting curves depicted in Figures \ref{fig:fig7a1} - \ref{fig:fig7d1} are given by $0.8548$, $0.8654$, $0.8722$ and $0.855$ respectively, which essentially indicates that $\Gamma_m$ curves provide adequate fits to the numerical data.
\begin{figure}[!htb]
\centering
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{sMu_TvE_N6_mu1_1_mu2_1_5.jpg}
\caption{$N=6$}
\label{fig:fig7a1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{sMu_TvE_N8_mu1_1_mu2_1_5.jpg}
\caption{$N=8$}
\label{fig:fig7b1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{sMu_TvE_N10_mu1_1_mu2_1_5.jpg} \caption{$N=10$}
\label{fig:fig7c1}
\end{subfigure}
\begin{subfigure}[!htb]{.495\textwidth}
\centering
\includegraphics[width=1\linewidth]{sMu_TvE_N12_mu1_1_mu2_1_5.jpg} \caption{$N=12$}
\label{fig:fig7d1}
\end{subfigure}
\caption{Thermalization time vs. Energy at $\mu_1=1$ and $\mu_2=1.5$}
\label{fig:fig71}
\end{figure}
\section{Conclusions and outlook} \label{Concs}
In this paper, we have considered the dynamics of thermalization in a Yang-Mills matrix model with two distinct mass deformation terms, which may be regarded as a double mass deformation of the bosonic part of the BFSS model. We have performed a detailed numerical analysis of the classical evolution of this model and determined that, when the simulations are started from a certain set of initial conditions, thermalization occurs. Although small background fluctuations are required to initiate thermalization, the numerical simulations clearly show that the thermalization times are independent of these fluctuations.
This extends the result given in \cite{Riggins:2012qt} for the BFSS model.
From the results concerning the change in thermalization times with respect to matrix size,
we were able to demonstrate through an appropriate fitting function that thermalization times vary logarithmically with
matrix size when the mass parameters $\mu_1$ and $\mu_2$ differ.
It is worth mentioning that in reference \cite{Sekino:2008he}, the thermalization (or scrambling) time of a black hole is conjectured to be proportional to $\log(N)$, where $N$ is the number of degrees of freedom. Even though
we have adopted a different definition of thermalization time, it is still interesting to note that
the findings obtained for the Hamiltonian (\ref{Ham1})
are in accord with this conjecture. In subsection \ref{Ssec_Thrm}, we have also presented plots depicting the variation of thermalization times with
the energies of the reduced actions, and the best-fitting functions for the data were subsequently determined to be power laws. A common feature of all the fitting functions is that
the thermalization times converge to finite values in the large energy or matrix size limit.
Let us also mention some recent developments in related subjects.
Although calculating entanglement entropy in ordinary field theories is a rather difficult task, calculations in noncommutative theories such as the scalar field theory on the fuzzy sphere were already carried out in \cite{Dou:2006ni, Karczmarek:2013jca, Okuno:2015kuc}. Moreover, numerical computations of entanglement entropy in the BFSS matrix model were recently performed in \cite{Buividovich:2018scl}.
In addition, the behavior of entanglement entropy during thermalization was
studied in holographic systems in references \cite{Liu:2013qca, Arefeva:2017pho, Arefeva:2020uec}. Based on these considerations, a valuable direction of research would be to investigate the time dependence of entanglement entropy in the system defined by (\ref{Ham1}). Particularly, it would be interesting to explore the possible use of entanglement entropy as a probe of thermalization.
Another challenging direction of development is to analyze the dynamics of quantum chaos with emphasis on the measurements of Lyapunov exponents and check whether our model saturates the Maldacena-Shenker-Stanford bound \cite{Maldacena:2015waa} or not. We hope that these issues will produce useful results to be reported soon.
\section{Introduction}
In the mechanics of continua, including rheology, microfluidics and fluid mechanics, phenomena incorporating several physical properties are frequently observed. Viscoelasticity exhibits both fluidity and solidity, and a dimensionless number called the Deborah number $De=\tau/T$ \cite{Reiner}, defined as the ratio of the relaxation time of the material $\tau$ to the observation time $T$, characterizes the property: $De \ll 1$ qualifies the material as a fluid while $De \gg 1$ qualifies it as a solid \cite{Barenblatt2014}. Note that dimensionless numbers represent the proportion between the properties or forces which govern a phenomenon (e.g. the Reynolds number is the ratio of inertial force to viscous force). In these two limiting cases a homogeneous physical property can be assumed, and the problems generally turn out to be simple. The intermediate range, however, reveals characteristic behavior (e.g. viscoelasticity for $De \sim 1$) in which two physical properties are fundamentally mixed; such problems are often difficult to formalize and solve, even though they are attractive and important for the mechanics of continua.
On the other hand, these phenomena can be understood in terms of {\it intermediate asymptotics} \cite{Barenblatt2006,Barenblatt1972}, which are defined as asymptotic representations of a function valid in a certain range of the independent variables. They are occasionally found as simple power-law relations through dimensional analysis when some dimensionless parameters can be considered negligible. More or less all theories can be regarded as intermediate asymptotics valid in a certain scale range \cite{IA}. This concept was formalized by Barenblatt \cite{Barenblatt2014, Barenblatt2006,Barenblatt1972} with the method of dimensional analysis, supplying a universal and coherent view on physical theory, with applications in various areas \cite{Banetta,Boscolo,Goldenfeld,Chorin}. This methodology is expected to be effective for complex problems involving plural physical properties, although attention is usually focused on the scale ranges in which a dimensionless number takes extremely large or small values. The method is not always applicable and is limited to some extent, particularly when a problem turns out to be a self-similar solution of the second kind: a problem whose dimensionless parameters have power-law behaviors that in general cannot be clarified within dimensional analysis, but can occasionally be deduced by technical means such as renormalization group theory or methods for nonlinear eigenvalue problems.
The present work focuses on the intermediate range of dimensionless parameters in which several physical properties are incorporated, based on the concepts of dimensional analysis and intermediate asymptotics. I aim to discuss the relation between dimensionless numbers and complex behaviors. The problem considered is the dynamical impact of a solid sphere onto a milli-textured elastic surface. Dynamical collisions are abundant in daily life and are of interest for industry \cite{Goryacheva} and sports \cite{Carpick,Nathan}. Since Hertz described the collisional dynamics between two elastic bodies \cite{Hertz}, the theory has developed into contact mechanics \cite{Johnson}. Recently, the collision dynamics between a macro-textured surface and an immersed sphere was studied by Chastel {\it et al.} \cite{Chastel2016,Chastel2019}. A milli-textured surface can be described by the elastic-foundation model \cite{EFM}, in which the stress profile is simplified.
Chastel {\it et al.} have already obtained the scaling behavior of the dynamical impact of a sphere onto a milli-textured surface. However, I will show by dimensional analysis that this scaling behavior is an intermediate asymptotic valid in a certain scale range, and that the problem belongs to the self-similar solutions of the second kind. Finally, I obtain the fundamental dimensionless functions describing the global power-law behaviors of this problem by referring, complementarily, to the solution obtained from energy conservation. These theoretical predictions are compared with experimental results to verify the validity of the method.
\section{Experiment}
The experiments were performed using a milli-textured surface made of polydimethylsiloxane (PDMS) (kit SYLGARD 180, DOW CORNING) as the elastic surface, with elastic modulus $E \simeq {\rm 1.6~MPa}$ (see Fig.~\ref{fig:F1}). Periodic, striped-patterned pillars were engraved on the surface, with pillar height $h = {\rm 3.5~mm}$, a square base of side $b={\rm 2.5~mm}$, channel interdistance $c={\rm 1.5~mm}$, and surface fraction $\phi = b/(b+c) = 0.625$. A metallic sphere (BEARINGOPTION LTD, Steel balls) of density $\rho = {\rm 7800~kg \cdot m^{-3}}$ is suspended by an electromagnet (MECALECTRO, F91300 Massy, ${\rm N^{\circ}}$5,18,01) whose magnetic force is controlled, allowing the sphere to be dropped at an arbitrary time. The collision impacts were recorded by a high-speed camera (Phantom V7.3). The collision velocity is varied by changing the height from which the sphere is dropped ($1.5 \sim 50~{\rm cm}$). The sphere radius $R$ is varied as 3.0, 4.0, 4.5, 5.0 and 7.0 mm. The collision experiments were repeated $30 \sim 40$ times in each condition, shifting the position of the elastic surface by 2.0 mm with a motorized actuator between drops, so that the effect of the particular alignment between the pillars and the sphere was averaged out. The velocity, deformation and other quantities were extracted from the movies by image analysis.
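The range of impact conditions scanned here can be estimated with a short sketch of ours (assuming free fall with air drag neglected, which is reasonable for steel spheres over these heights):

```python
import numpy as np

g = 9.81          # gravitational acceleration, m/s^2
rho = 7800.0      # sphere density, kg/m^3
E = 1.6e6         # elastic modulus of the PDMS surface, Pa

def impact_velocity(drop_height):
    """Free-fall impact velocity (air drag neglected), m/s."""
    return np.sqrt(2.0 * g * drop_height)

heights = np.array([0.015, 0.50])   # scanned drop-height range, m
v = impact_velocity(heights)
eta = rho * v**2 / E                 # Cauchy-like number rho v^2 / E
```

For the drop heights above, $v$ ranges over roughly $0.5$--$3.1~{\rm m/s}$ and $\eta=\rho v^2/E$ over about $1.4\times 10^{-3}$--$4.8\times 10^{-2}$, so this dimensionless number stays well below unity throughout the experiments.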
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]{Figure1_HM.pdf}
\caption{(Color online) Sketch of experimental set-up. The solid sphere is suspended by electromagnet which is capable of dropping the ball in arbitrary timing. The velocity of impact can be adjusted by changing the height of the part in which the sphere is suspended. The position of elastic surface made of PDMS can be changed by motorized actuator.}
\label{fig:F1}
\end{center}
\end{figure}
\section{The scaling relation: the result by Chastel et al.}
First, I recall the scaling solutions of this problem obtained by Chastel {\it et al.}.
The collision of a sphere falling with velocity $v$ onto the surface, generating the deformation $\delta$, is sketched in Fig.~\ref{fig:F2}. Assuming a Hertzian pressure profile, the deformation is described as $\delta(r)=\delta\left[1-\left(r/a\right)^2\right]$. According to the theory of Hertz, the contact radius $a$ is an important parameter, obtained geometrically as $a^2 = R^2-\left(R-\delta\right)^2 \simeq 2R\delta$. The kinetic energy of the sphere is easily obtained as
\begin{equation}
E_{ki}=\frac{2}{3} \pi R^3 \rho v^2.
\label{1a}
\end{equation}
Following the procedure of Chastel {\it et al.} \cite{Chastel2016}, the normal stress is $\sigma(r)=E\delta(r)/h$, so the deformation force is $F=\int_{0}^{a}\phi \sigma(r)\, 2 \pi r\, dr=\pi E \phi R \delta^2 /h$, where $a$ has been eliminated using $a^2=2R\delta$. The elastic energy is thus obtained as
\begin{equation}
E_{el}=\int_{0}^{\delta}F(\delta^{'})d \delta^{'}=\frac{\pi E \phi \delta^3 R}{3h}.
\label{1b}
\end{equation}
The conservation of kinetic and elastic energy at instant $t$ after the collision then reads
\begin{equation}
\frac{2}{3} \pi R^3 \rho v\left(t\right)^2+ \frac{\pi E \phi R \delta\left(t \right)^3 }{3h}= \frac{2}{3} \pi R^3 \rho v^2.
\label{eq:E1c}
\end{equation}
The maximum penetration $\delta$ is reached when $v\left(t\right)=0$, and the following relation is obtained,
\begin{equation}
\frac{\delta}{R} = \left(\frac{2}{\phi}\right)^{\frac{1}{3}}\left(\frac{h}{R}\right)^{\frac{1}{3}}\left(\frac{\rho v^2}{E} \right)^{\frac{1}{3}}.
\label{eq:E1}
\end{equation}
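As a quick numerical check (our own sketch, not part of the original analysis), the maximum penetration predicted by the closed form (\ref{eq:E1}) can be compared against a direct root-finding solution of the energy balance (\ref{eq:E1c}) with $v(t)=0$, using the parameter values of the setup described above:

```python
import numpy as np
from scipy.optimize import brentq

# Parameter values from the experimental setup.
rho, Emod, h, phi = 7800.0, 1.6e6, 3.5e-3, 0.625
R, v = 5.0e-3, 2.0   # a 5 mm sphere impacting at 2 m/s

# At maximum penetration the elastic energy equals the initial kinetic energy.
def residual(delta):
    return (np.pi * Emod * phi * R * delta**3 / (3.0 * h)
            - (2.0 / 3.0) * np.pi * R**3 * rho * v**2)

delta_num = brentq(residual, 1e-9, R)                      # numerical root
delta_closed = R * (2.0 / phi)**(1/3) * (h / R)**(1/3) \
                 * (rho * v**2 / Emod)**(1/3)              # Eq. (E1)
```

The two values agree to machine precision, and for these parameters the penetration is a fraction of the sphere radius.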
The compression time $\tau_c$, defined as the duration of contact between the sphere and the surface \cite{comp}, is obtained as follows,
\begin{equation}
\tau_{c} = 2~\frac{\delta}{v}\int_{0}^{1} \frac{d (\delta^{'}/ \delta)}{\sqrt{1-(\delta^{'}/\delta)^3 }} = \frac{2}{3}B\left(\frac{1}{3},\frac{1}{2}\right)\frac{\delta}{v}
\label{eq:E1d}
\end{equation}
where $B\left(x,y\right)$ is the Beta function. Thus the following equation is obtained from Eq.~\ref{eq:E1},
\begin{equation}
\frac{\tau_{c}}{R} = \frac{C_0}{\phi^{\frac{1}{3}}}\left(\frac{h}{R}\right)^{\frac{1}{3}}\left(\frac{\rho}{E v} \right)^{\frac{1}{3}}
\label{eq:E1e}
\end{equation}
where $C_0 = 2\sqrt[3]{2}/3 \cdot B\left(\frac{1}{3},\frac{1}{2}\right) \simeq 3.533$.
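The value of the prefactor $C_0$ can be checked numerically (a small verification sketch of ours): after the substitution $u=(\delta'/\delta)^3$, the integral in (\ref{eq:E1d}) equals $\tfrac{1}{3}B(\tfrac{1}{3},\tfrac{1}{2})$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

# Integral in the compression-time formula: I = int_0^1 dx / sqrt(1 - x^3).
I, _ = quad(lambda x: 1.0 / np.sqrt(1.0 - x**3), 0.0, 1.0)

# Substituting u = x^3 turns I into (1/3) B(1/3, 1/2).
I_beta = beta(1.0 / 3.0, 0.5) / 3.0

# Prefactor of the compression-time scaling: C0 = 2 * 2^(1/3) / 3 * B(1/3, 1/2).
C0 = 2.0 * 2.0**(1.0 / 3.0) / 3.0 * beta(1.0 / 3.0, 0.5)
```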
These are the results of Chastel {\it et al}. Next, however, I show that these scaling relations are intermediate asymptotics valid only in a certain range.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]{Figure2_HM.pdf}
\caption{(Color online) The geometrical parameters involved in the collision between elastic surface and solid sphere. Deformation $\delta$ and diameter of contact $a$ are generated by the collision onto the elastic PDMS surface.}
\label{fig:F2}
\end{center}
\end{figure}
\section{Dimensional analysis : scaling between ${\bf \Pi}$ and ${\bf \eta}$}
Following the recipe of Barenblatt \cite{FN1}, the function to study, $\delta=f(a, R, \rho, E, v, h, \phi)$, is first proposed. Working in the $LMT$ unit system, dimensionless parameters are constructed. Here I selected $R,\rho, v$ as the independent parameters, i.e. those whose dimensions cannot be represented as a product of powers of the dimensions of the remaining parameters. Since $[\delta] = L,~[R] = L,~[a]=L, ~[\rho] = M/L^3, ~[E] = M/LT^2,~[v] = L/T,~[h]=L,~[\phi] = 1$, the following dimensionless parameters are obtained,
\begin{equation}
\Pi = \frac{\delta}{R},~\xi = \frac{a}{R},~\eta = \frac{\rho v^2}{E},~\kappa = \frac{h}{R}.
\label{eq:E2}
\end{equation}
Thus the function is transformed to $\Pi = \Phi(\xi, \eta, \kappa, \phi)$, where $\Phi$ is an arbitrary dimensionless function. Let us now assume that the problem belongs to the self-similar solutions of the second kind \cite{SSSK}, as follows,
\begin{equation}
\Pi = \phi^{\gamma_{1}} \kappa^{\gamma_{2}} \eta^{\gamma_{3}} \Phi \left(\xi^{\zeta_1}\kappa^{\zeta_2}\phi^{\zeta_3}\eta^{\zeta_4}\right).
\label{eq:E3}
\end{equation}
Self-similar solutions of the second kind are dimensional-analysis solutions expressed as products of dimensionless parameters raised to powers, although the power exponents cannot, in principle, be obtained within dimensional analysis itself. In our case, however, the exponents $\gamma_1\cdots \gamma_3$ can be deduced via Eq.~\ref{eq:E1}, and $\zeta_1 \cdots\zeta_4$ are obtained by utilizing Eq.~\ref{eq:E1} together with $a^2=2R\delta$, which gives $\xi \sim \phi^{-1/6}\kappa^{1/6}\eta^{1/6}$. Therefore Eq.~\ref{eq:E3} leads to
\begin{equation}
\Pi = \left(\frac{\kappa}{\phi}\right)^{\frac{1}{3}} \eta^{\frac{1}{3}} \Phi \left[\left(\frac{\phi}{\kappa}\right)^{\frac{1}{6}}\xi / \eta^{\frac{1}{6}}\right].
\label{eq:E4}
\end{equation}
Eq.~\ref{eq:E4} is the fundamental dimensionless function describing the dynamical impact of a solid sphere on the milli-textured surface. Introducing the new parameters $\Psi=\Pi\phi^{1/3}\kappa^{-1/3}\eta^{-1/3}$ and $\Xi= \xi \phi^{1/6} \kappa^{-1/6} \eta^{-1/6}$, Eq.~\ref{eq:E4} reads $\Psi = \Phi(\Xi)$; that is, $\Psi$ is a function of the single dimensionless parameter $\Xi$. This shows that the scaling relation derived from the result of Chastel {\it et al.} holds as long as $\Phi$ does not interfere, in which case the following intermediate asymptotic is obtained,
\begin{equation}
\Pi = {\rm const}~\left(\frac{\kappa}{\phi}\right)^{\frac{1}{3}}\eta^{\frac{1}{3}}
\label{eq:E5}
\end{equation}
which corresponds to Eq.~\ref{eq:E1}. This condition holds when $\Xi$ is small enough for $\Phi$ to be considered constant. We have thus recognized that $\Xi$ is the important parameter governing the power-law behavior.
Next let us move on to the case in which $\Xi$ contributes to the behavior. It is quite interesting to ask what kind of intermediate asymptotic is obtained in the other scale region. Eq.~\ref{eq:E4} was obtained from the dimensionless parameters following the selection of $R, \rho, v$ as the independent parameters. This choice, however, is arbitrary. Barenblatt suggested that numerical estimation of the dimensionless parameters can guide the choice: if a dimensionless parameter is extremely small or large, it can be considered negligible. The choice of $R, \rho, v$ is appropriate when these three parameters play the dominant role. However, in the scale range in which $\Xi$ contributes, the contact radius $a$ must be large, since $\xi=a/R$ increases $\Xi$. In this case, $a$ should be treated as a dominant parameter.
Now let us apply the dimensional analysis using another selection of independent parameters as $a, \rho, v$. In this case following dimensionless parameters are finally obtained,
\begin{equation}
\Pi^{'} = \frac{\delta}{a},~\xi = \frac{a}{R},~\eta = \frac{\rho v^2}{E},~\kappa^{'} = \frac{h}{a}.
\label{eq:E6}
\end{equation}
The difference from Eq.~\ref{eq:E2} is that $\Pi$ and $\kappa$ are replaced by $\Pi^{'}$ and $\kappa^{'}$. Similarly, assuming self-similarity of the second kind and using Eq.~\ref{eq:E1} and $a^2 \sim R\delta$, the following intermediate asymptotic is obtained in the other scale region,
\begin{equation}
\Pi^{'} = {\rm const} \left(\frac{\xi \kappa^{'}\eta}{\phi}\right)^{\frac{1}{6}}
\label{eq:E7}
\end{equation}
where $\kappa = \xi\kappa^{'}$. Eq.~\ref{eq:E7} is the other intermediate asymptotic, valid in the case where $a$ is comparatively large.
The following calculation justifies this interpretation. In order to see the behavior of Eq.~\ref{eq:E4} over a wider scale region starting from small $\Xi$, a series expansion of $\Phi$ in powers of $\Xi$ is applied as follows,
\begin{eqnarray}
\Pi &=& \left(\frac{\kappa}{\phi}\right)^{\frac{1}{3}} \eta^{\frac{1}{3}} \left\{A_1+A_2~\Xi+A_3~\Xi^2+\cdots\right\} \nonumber \\
&=& A_1~\left(\frac{\kappa}{\phi}\right)^{\frac{1}{3}} \eta^{\frac{1}{3}} + A_2~\xi\left(\frac{\kappa}{\phi}\right)^{\frac{1}{6}} \eta^{\frac{1}{6}} +A_3~\xi^2+\cdots
\label{eq:E8}
\end{eqnarray}
where $A_1, A_2, A_3$ are constants. Note that two dimensionless parameters with different power exponents appear in Eq.~\ref{eq:E8}. Suppose Eq.~\ref{eq:E8} is fitted with an arbitrary power of $\eta$, $\eta^{\nu} \sim A_1~\phi^{-1/3}\kappa^{1/3}\eta^{1/3} + A_2~\xi\phi^{-1/6}\kappa^{1/6}\eta^{1/6} +A_3 \xi^{2}$; then the exponent $\nu$ is {\it locally} determined and varies in the range $1/6 \leq \nu \leq 1/3$, depending on the relative contributions of the first and second terms in Eq.~\ref{eq:E8}. This balance depends critically on the parameters $\eta$ and $\xi$. For small $\eta$ and large $\xi$, the exponent 1/6 of the second term is dominant. On the other hand, for large $\eta$ with small $\xi$, the first term is large and dominant, and $\nu$ should be fitted with 1/3. This interpretation corresponds to the intermediate asymptotics Eq.~\ref{eq:E5} and Eq.~\ref{eq:E7}, since small $\Xi$ means that the contribution of the second term is extremely small.
This is naturally understood, since $\Xi$ is precisely the ratio of the second term to the first: $\xi\phi^{-1/6}\kappa^{1/6}\eta^{1/6} / \big(\phi^{-1/3}\kappa^{1/3}\eta^{1/3}\big) = \xi \phi^{1/6} \kappa^{-1/6} \eta^{-1/6} = \Xi$. In the end, the series expansion of $\Phi(\Xi)$ gives the two intermediate asymptotics obtained by the different selections of independent parameters, and $\Xi$ represents the ratio between them.
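The crossover of the local exponent $\nu$ between $1/6$ and $1/3$ can be illustrated with a short numerical sketch (our own illustration, with arbitrary coefficients $A_1=A_2=1$, $A_3=0$ and unit prefactors): the logarithmic slope $\nu(\eta)=d\ln\Pi/d\ln\eta$ of the two-term model interpolates monotonically between the two exponents.

```python
import numpy as np

def local_exponent(eta, a1=1.0, a2=1.0, prefac1=1.0, prefac2=1.0):
    """Logarithmic slope of Pi(eta) = a1*prefac1*eta**(1/3) + a2*prefac2*eta**(1/6).

    d ln Pi / d ln eta = (t1/3 + t2/6) / (t1 + t2), with t1, t2 the two terms.
    """
    t1 = a1 * prefac1 * eta**(1.0 / 3.0)
    t2 = a2 * prefac2 * eta**(1.0 / 6.0)
    return (t1 / 3.0 + t2 / 6.0) / (t1 + t2)

eta = np.logspace(-12, 12, 25)
nu = local_exponent(eta)   # rises monotonically from near 1/6 to near 1/3
```

In terms of the collapsed variables, $t_2/t_1$ is just $\Xi$ (up to the coefficient ratio), so small $\Xi$ drives $\nu\to 1/3$ and large $\Xi$ drives $\nu\to 1/6$, exactly as observed in the experiments below.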
\section{Dimensional analysis : scaling between ${\bf \tau_c}$ and ${\bf v}$}
Next let us apply the same approach to construct the dimensionless function associated with Eq.~\ref{eq:E1d}. The function to study is $\tau_c=f_{\tau}(a, R, \rho, E, v, h, \phi)$. Taking $R,\rho,v$ as the independent parameters, the following dimensionless parameters are prepared,
\begin{equation}
\omega = \frac{\tau_c v}{R},~\xi = \frac{a}{R},~\eta = \frac{\rho v^2}{E},~\kappa = \frac{h}{R}
\label{eq:E9}
\end{equation}
to obtain $\omega = \Phi_{\tau}(\xi,\eta,\kappa,\phi)$. Assuming again a self-similar solution of the second kind, we find the following fundamental dimensionless function,
\begin{equation}
\omega = \left(\frac{\kappa}{\phi}\right)^{\frac{1}{3}} \eta^{\frac{1}{3}} \Phi_{\tau} \left[\left(\frac{\phi}{\kappa}\right)^{\frac{1}{6}}\xi / \eta^{\frac{1}{6}}\right]
\label{eq:E10}
\end{equation}
by referring to Eq.~\ref{eq:E1d} and $a^2 = 2\delta R$. Defining $\Omega = \omega\phi^{1/3}\kappa^{-1/3}\eta^{-1/3}$, we find the relation $\Omega = \Phi_{\tau}(\Xi)$, showing the dependence of $\Omega$ on $\Xi$. Eq.~\ref{eq:E10} gives an intermediate asymptotic corresponding to Eq.~\ref{eq:E1e} as long as $\Xi$ is uninfluential, in which case $\tau_c \sim v^{-1/3}$. However, in the scale range in which $a$ starts to play a role and $\Xi$ is large enough, another intermediate asymptotic appears,
\begin{equation}
\omega = {\rm const}~\xi\left(\frac{\kappa}{\phi}\right)^{\frac{1}{6}} \eta^{\frac{1}{6}}
\label{eq:E11}
\end{equation}
which is obtained as the second term of the series expansion of $\Phi_{\tau}$, or, equivalently, corresponds to the solution obtained through dimensional analysis with $a,\rho,v$ selected as the independent parameters. In this case, the scaling relation $\tau_c \sim v^{-2/3}$ appears.
\section{Comparison with experimental results}
Now let us compare these theoretical results with the experimental ones. Fig.~\ref{fig:F3a} shows the plots of $\Pi$ versus $\eta$ for the different sphere sizes. The power-law behavior clearly varies with the size of the sphere. The largest sphere, $R={\rm 7.0~mm}$, follows the 1/3 power law, corresponding to Eq.~\ref{eq:E5}. On the other hand, the smallest sphere, $R={\rm 3.0~mm}$, reveals a different power-law behavior, following the 1/6 power law, which corresponds to Eq.~\ref{eq:E7}.
Fig.~\ref{fig:F3b} shows the plots of $\Psi=\Phi(\Xi)$ using the experimental data; it is useful for seeing to which scale range each set of plots belongs. The plots for the small sphere ($R={\rm 3.0~mm}$), which follow the 1/6 power law, belong to larger $\Xi$, while higher velocities decrease $\Xi$. Conversely, the large sphere ($R={\rm 7.0~mm}$) belongs to smaller $\Xi$, although its plots at small velocity belong to comparatively larger $\Xi$ and reveal different behavior. Meanwhile, the groups of intermediate sphere sizes ($R={\rm 4.0, 4.5, 5.0~mm}$), which belong to $\Xi = 1.1 \sim 1.4$, follow intermediate power-law behaviors. These plots can be considered to belong to an intermediate scale region in which the two power exponents are competing.
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=8.0cm]{Figure3a_HM.pdf}
\label{fig:F3a}}
\subfigure{
\includegraphics[width=8.0cm]{Figure3b_HM.pdf}
\label{fig:F3b}}
\caption{(Color online) (a) Power-law relation between $\Pi$ and $\eta$, (b) plots of $\Psi$ vs $\Xi$ in different size of sphere, $R = {\rm 3.0~mm}$ ($\textcolor{blue}{\bullet}$), ${\rm 4.0~mm}$ ($\textcolor{green}{\blacktriangle}$), {\rm 4.5~mm} ($\times$), {\rm 5.0~mm} ($\textcolor{orange}{\blacklozenge}$), {\rm 7.0~mm} ($\textcolor{red}{\blacksquare}$) where $\Pi=\delta/R$, $\eta=\rho v^2/E$, $\Psi=\Pi\phi^{1/3}\kappa^{-1/3}\eta^{-1/3}$ and $\Xi= \xi \phi^{1/6} \kappa^{-1/6} \eta^{-1/6}$. The two dashed lines indicate the slope of 1/6 and 1/3.}
\end{center}
\end{figure}
The power-law behavior depending on the size of the sphere can be seen in the plots of $\tau_c$ versus $v$ as well (Fig.~\ref{fig:F4a}). The dimensional analysis predicted two power-law behaviors: $\tau_c \sim v^{-1/3}$ at small $\Xi$ and $\tau_c \sim v^{-2/3}$ at large $\Xi$. The plots for the largest sphere ($R={\rm 7~mm}$), which has small $\Xi$ as shown in Fig.~\ref{fig:F4b}, follow the -1/3 power law, corresponding to Eq.~\ref{eq:E1e}. The plots for the smallest sphere ($R={\rm 3~mm}$) reveal mixed behavior, although the points at smaller velocity, belonging to large $\Xi$ in Fig.~\ref{fig:F4b}, follow the -2/3 power law. The plots for the intermediate sphere sizes follow intermediate behavior.
\begin{figure}[h]
\begin{center}
\subfigure{
\includegraphics[width=8.0cm]{Figure4a_HM.pdf}
\label{fig:F4a}}
\subfigure{
\includegraphics[width=8.0cm]{Figure4b_HM.pdf}
\label{fig:F4b}}
\caption{(Color online) (a) Power-law relation between $\tau_c$ and $v$, (b) plots of $\Omega$ vs $\Xi$ in different size of sphere, $R = {\rm 3.0~mm}$ ($\textcolor{blue}{\bullet}$), ${\rm 4.0~mm}$ ($\textcolor{green}{\blacktriangle}$), {\rm 4.5~mm} ($\times$), {\rm 5.0~mm} ($\textcolor{orange}{\blacklozenge}$), {\rm 7.0~mm} ($\textcolor{red}{\blacksquare}$) where $\Omega=\omega\phi^{1/3}\kappa^{-1/3}\eta^{-1/3}$ and $\Xi= \xi \phi^{1/6} \kappa^{-1/6} \eta^{-1/6}$. The two dashed lines indicate the slope of -2/3 and -1/3.}
\end{center}
\end{figure}
Examining $\Xi$ in detail, we find that it consists of $\phi$, $\kappa$, $\xi$ and $\eta$. Since $\phi$ and $\kappa$ are dimensionless parameters belonging to the elastic surface, we focus here on the others. $\eta$, which corresponds to the Cauchy number, defined in fluid mechanics as the ratio of inertial force to elastic force, plays a dominant role in the impact. This parameter reflects the relative contributions of elasticity and inertia. I would therefore call the impact following the 1/6 power law the elasticity-dominant impact, and the one following the 1/3 power law the inertia-dominant impact \cite{FN2}.
Not only $\eta$ but also $\xi$ is a key parameter. The second term of Eq.~\ref{eq:E8}, which corresponds to the intermediate asymptotic of the elasticity-dominant impact, is multiplied by $\xi$, indicating that its contribution is critically weakened by small $\xi$. $\xi$ measures the relative depth of subsidence into the surface: a smaller sphere subsides relatively deeper than a larger one (Fig.~\ref{fig:F4}), which is the reason why the small sphere ($R={\rm 3.0~mm}$) follows the 1/6 power law. In the end, this physical interpretation of $\Xi$ corresponds to the analytical interpretation of Eq.~\ref{eq:E8}, proving the validity of the application of dimensional analysis.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.6cm]{Figure5_HM.pdf}
\caption{(Color online) Sphere/wall contact defining the geometrical dimensionless parameter $\xi=a/R$, for a smaller (left) and a larger (right) value of $\xi$.}
\label{fig:F4}
\end{center}
\end{figure}
In other reports on contact mechanics, the character of the deformation changes from elastic to plastic contact, yielding different power laws depending on the scale of the interference \cite{Kogut}; high-speed impact generates plastic deformation depending on the dimensional parameters \cite{Johnson1972}. The present work uncovers another scale-dependent phenomenon in contact mechanics. Scale dependence of power-law behavior is occasionally observed in self-similarity of the second kind \cite{Berry,Barenblatt2002}, although the dependence is sometimes only semiempirical \cite{Barenblatt1981}. This work clearly identifies the dependence on the dimensionless parameter as a competition between two power exponents.
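The competition between two intermediate asymptotics can be illustrated with a toy scaling function (the functional form and the coefficient below are illustrative assumptions, not values fitted to the present experiments): the sum of two power-law terms whose local log-log slope crosses over from one exponent to the other as the scale changes.

```python
import numpy as np

# Toy model of "competition between two intermediate asymptotics":
# F(Xi) = Xi**(-2/3) + c * Xi**(-1/3), with c = 1 an illustrative choice.
# The local log-log slope interpolates between the two exponents, so the
# observed power law depends on the scale at which it is measured.
def local_exponent(xi, c=1.0, h=1e-4):
    """Numerical logarithmic derivative d ln F / d ln Xi."""
    F = lambda x: x ** (-2.0 / 3.0) + c * x ** (-1.0 / 3.0)
    return (np.log(F(xi * (1 + h))) - np.log(F(xi * (1 - h)))) / \
           (np.log(1 + h) - np.log(1 - h))

# Small Xi: slope -> -2/3 (first term dominates);
# large Xi: slope -> -1/3 (second term dominates).
```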
\section{Conclusion}
In conclusion, the above discussion together with the experimental results confirms the validity of Eq.~\ref{eq:E4} and Eq.~\ref{eq:E10} as the fundamental dimensionless functions of this problem. Eq.~\ref{eq:E4} and Eq.~\ref{eq:E10} encode the $global$ scaling behavior, which yields two intermediate asymptotics $locally$, depending on $\Xi$. This {\it scale-locality} is essential for understanding the phenomenon, since even the power-law behavior depends on the scale.
The present work is distinctive in that it focuses on the intermediate scale range in which two physical properties are incorporated, and explains the crossover of power-law behaviors as the result of competition between two intermediate asymptotics, each representing a different physical property. Generally, studies concentrate on cases in which uniformity of the physical property can be assumed, while the intermediate region is avoided. This work, however, addresses this intermediate region, and the crossover of power-law behavior is confirmed by the experimental results. Furthermore, two different methods are combined complementarily: dimensional analysis, and the solution obtained from the balance of kinetic and elastic energy. The latter solution alone might seem sufficient, but the scale dependence would not have been recognized without the dimensional analysis. This suggests that dimensional analysis combined with the concept of intermediate asymptotics is highly effective for analyzing mesoscale phenomena that incorporate two or more physical properties and exhibit different behaviors depending on the scale.
In this work, self-similarity of the second kind is understood as the competition between two intermediate asymptotics. This is also an interesting insight into the concept of self-similarity in general.
\section{Acknowledgement}
The author wishes to thank J.-B. Besnard, P. Panizza and L. Courbin for technical assistance with the experiments, and K. Osaki for theoretical advice. This work was supported by the program Long-term Internship Dispatch for Innovation Leader Training, organized by the Building of Consortium for the Development of Human Resources in Science and Technology.
\section*{Acknowledgments}
We would like to thank S. Pratt for providing the CRAB program and
acknowledge support by the Frankfurt Center for Scientific Computing
(CSC). We thank H. Appelsh\"{a}user, M. Gyulassy and T.~J.~Humanic
for helpful discussions and valuable suggestions. Q. Li thanks the
Alexander von Humboldt-Stiftung for financial support. This work is
partly supported by GSI, BMBF, DFG, and Volkswagenstiftung.
\newpage
\section{Introduction}
\label{sec:Int}
Motivated by string/M theory, the AdS/CFT correspondence, and the hierarchy
problem of particle physics, braneworld models were studied actively in
recent years (for a review see \cite{Ruba01}). In these models, our world is
represented by a sub-manifold, a three-brane, embedded in a higher
dimensional spacetime. In particular, a well studied example is when the
bulk is an AdS space. The braneworld corresponds to a manifold with
boundaries and all fields which propagate in the bulk will give Casimir-type
contributions to the vacuum energy, and as a result to the vacuum forces
acting on the branes. In dependence of the type of a field and boundary
conditions imposed, these forces can either stabilize or destabilize the
braneworld. In addition, the Casimir energy gives a contribution to both the
brane and bulk cosmological constants and, hence, has to be taken into
account in the self-consistent formulation of the braneworld dynamics.
Motivated by these, the role of quantum effects in braneworld scenarios has
received a great deal of attention. For a conformally coupled scalar this
effect was initially studied in Ref. \cite{Fabi00} in the context of
M-theory, and subsequently in Refs. \cite{Noji00a} for a background
Randall-Sundrum geometry. The models with dS and AdS branes, and higher
dimensional brane models are considered as well \cite{Noji00b}.
In view of these recent developments, it seems interesting to generalize the
study of quantum effects to other types of bulk spacetimes. In particular,
it is of interest to consider non-Poincar\'{e} invariant braneworlds, both
to better understand the mechanism of localized gravity and for possible
cosmological applications. Bulk geometries generated by higher-dimensional
black holes are of special interest. In these models, the tension and the
position of the brane are tuned in terms of black hole mass and cosmological
constant and brane gravity trapping occurs in just the same way as in the
Randall-Sundrum model. Braneworlds in the background of the AdS black hole
were studied in \cite{AdSbhworld}. Like pure AdS space the AdS black hole
may be superstring vacuum. It is of interest to note that the phase
transitions which may be interpreted as confinement-deconfinement transition
in AdS/CFT setup may occur between pure AdS and AdS black hole \cite{Witt98}%
. Though, in the generic black hole background the investigation of
brane-induced quantum effects is technically complicated, the exact
analytical results can be obtained in the near horizon and large mass limit
when the brane is close to the black hole horizon. In this limit the black
hole geometry may be approximated by the Rindler-like manifold (for some
investigations of quantum effects on background of Rindler-like spacetimes
see \cite{Byts96} and references therein).
In the previous papers \cite{Saha05,Saha06} we have considered the vacuum
densities induced by a spherical brane in the bulk $Ri\times S^{D-1}$, where
$Ri$ is a two-dimensional Rindler spacetime. Continuing in this direction,
in the present paper we investigate the Wightman function, the vacuum
expectation values of the field square and the energy-momentum tensor for a
scalar field with an arbitrary curvature coupling parameter for two
spherical branes on the same bulk. Though the corresponding operators are
local, due to the global nature of the vacuum, these expectation values
describe the global properties of the bulk and carry an important
information about the physical structure of the quantum field at a given
point. The expectation value of the energy-momentum tensor acts as the
source of gravity in the Einstein equations and, hence, plays an important
role in modelling a self-consistent dynamics involving the gravitational
field. In addition to applications in braneworld models on the AdS black
hole bulk, the problem under consideration is also of separate interest as
an example with gravitational and boundary-induced polarizations of the
vacuum, where all calculations can be performed in a closed form. Note that
the vacuum densities induced by a single and two parallel flat branes in the
bulk geometry $Ri\times R^{D-1}$ for both scalar and electromagnetic fields
are investigated in \cite{Cand77,Saha06b}.
The paper is organized as follows. In the next section we consider the
positive frequency Wightman functions in the region between two branes. On
the basis of the generalized Abel-Plana formula, we present this function in
the form of the sum of single brane and second brane induced parts. By using
expression for the Wightman function, in section \ref{sec:VEVEMT} we
investigate the vacuum expectation values of the field square and the
energy-momentum tensor. Various limiting cases of the general formulae are
studied. In section \ref{sec:IntForce} the vacuum forces acting on the
branes due to the presence of the second brane are evaluated by making use
of the expression for the radial vacuum stress. The main results of the
paper are summarized in section \ref{sec:Conc}.
\section{Wightman function}
\label{sec:WF}
We consider a scalar field $\varphi (x)$ propagating on background of $(D+1)$%
-dimensional Rindler-like spacetime $Ri\times S^{D-1}$. The corresponding
metric is described by the line element%
\begin{equation}
ds^{2}=\xi ^{2}d\tau ^{2}-d\xi ^{2}-r_{H}^{2}d\Sigma _{D-1}^{2},
\label{ds22}
\end{equation}%
with the Rindler-like $(\tau ,\xi )$ part and $d\Sigma _{D-1}^{2}$ is the
line element for the space with positive constant curvature with the Ricci
scalar $R=(D-2)(D-1)/r_{H}^{2}$. Line element (\ref{ds22}) describes the
near horizon geometry of $(D+1)$-dimensional topological black hole with the
line element \cite{Mann97}%
\begin{equation}
ds^{2}=A_{H}(r)dt^{2}-\frac{dr^{2}}{A_{H}(r)}-r^{2}d\Sigma _{D-1}^{2},
\label{ds21}
\end{equation}%
where $A_{H}(r)=k+r^{2}/l^{2}-r_{0}^{D}/l^{2}r^{n}$, $n=D-2$, and
the parameter $k$ classifies the horizon topology, with $k=0,-1,1$
corresponding to flat, hyperbolic, and elliptic horizons,
respectively. The parameter $l$ is related to the bulk
cosmological constant and the parameter $r_{0}$ depends on the
mass of the black hole. In the non extremal case the function
$A_{H}(r)$ has a simple zero
at $r=r_{H}$, and in the near horizon limit, introducing new coordinates $%
\tau $ and $\rho $ in accordance with%
\begin{equation}
\tau =A_{H}^{\prime }(r_{H})t/2,\quad r-r_{H}=A_{H}^{\prime
}(r_{H})\xi ^{2}/4, \label{tau}
\end{equation}%
the line element is written in the form (\ref{ds22}). Note that for a $(D+1)$%
-dimensional Schwarzschild black hole \cite{Call88} one has $%
A_{H}(r)=1-(r_{H}/r)^{n}$ and, hence, $A_{H}^{\prime }(r_{H})=n/r_{H}$.
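As a quick consistency check, substituting (\ref{tau}) into (\ref{ds21}) and keeping the leading near-horizon behavior $A_{H}(r)\approx A_{H}^{\prime }(r_{H})(r-r_{H})=A_{H}^{\prime 2}(r_{H})\xi ^{2}/4$, one finds
\begin{equation*}
A_{H}(r)dt^{2}\approx \xi ^{2}d\tau ^{2},\quad \frac{dr^{2}}{A_{H}(r)}%
\approx d\xi ^{2},\quad r^{2}d\Sigma _{D-1}^{2}\approx r_{H}^{2}d\Sigma
_{D-1}^{2},
\end{equation*}%
where we have used $dt=2d\tau /A_{H}^{\prime }(r_{H})$ and $dr=A_{H}^{\prime
}(r_{H})\xi d\xi /2$, so that (\ref{ds21}) indeed reduces to the form (\ref%
{ds22}).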
The field equation is in the form%
\begin{equation}
\left( g^{ik}\nabla _{i}\nabla _{k}+m^{2}+\zeta R\right) \varphi (x)=0,
\label{fieldeq1}
\end{equation}%
where $\zeta $ is the curvature coupling parameter. Below we will assume
that the field satisfies the Robin boundary conditions on the hypersurfaces $%
\xi =a$ and $\xi =b$, $a<b$,
\begin{equation}
\left. \left( A_{j}+B_{j}\frac{\partial }{\partial \xi }\right) \varphi
\right\vert _{\xi =j}=0,\quad j=a,b, \label{bound1}
\end{equation}%
with constant coefficients $A_{j}$ and $B_{j}$. The Dirichlet and Neumann
boundary conditions are obtained as special cases. In accordance with (\ref%
{tau}), the hypersurface $\xi =j$ corresponds to the spherical shell near
the black hole horizon with the radius $r_{j}=r_{H}+A_{H}^{\prime
}(r_{H})j^{2}/4$.
The branes divide the bulk into three regions corresponding to $0<\xi <a$, $%
a<\xi <b$, and $b<\xi <\infty $. In general, the coefficients in the
boundary conditions (\ref{bound1}) can be different for separate regions. In
the corresponding braneworld scenario based on the orbifolded version of the
model the region between the branes is employed only and the ratio $%
A_{j}/B_{j}$ for untwisted bulk scalars is related to the brane mass
parameters $c_{j}$\ of the field by the formula \cite{Saha05}
\begin{equation}
\frac{A_{j}}{B_{j}}=\frac{1}{2}\left( c_{j}-\frac{\zeta }{j}\right) ,\;j=a,b.
\label{ABbraneworld}
\end{equation}%
For a twisted scalar the Dirichlet boundary conditions are obtained on both
branes.
To evaluate the vacuum expectation values (VEVs) of the field square and the
energy-momentum tensor we need a complete set of eigenfunctions satisfying
the boundary conditions (\ref{bound1}). In accordance with the problem
symmetry, below we shall use the hyperspherical angular coordinates $%
(\vartheta ,\phi )=(\theta _{1},\theta _{2},\ldots ,\theta _{n},\phi )$ on $%
S^{D-1}$\ with $0\leqslant \theta _{k}\leqslant \pi $, $k=1,\ldots ,n$, and $%
0\leqslant \phi \leqslant 2\pi $. In these coordinates the eigenfunctions in
the region between the branes can be written in the form%
\begin{equation}
\varphi _{\alpha }(x)=C_{\alpha }Z_{i\omega }^{(b)}(\lambda _{l}\xi ,\lambda
_{l}b)Y(m_{k};\vartheta ,\phi )e^{-i\omega \tau }, \label{eigfunc1}
\end{equation}%
where $m_{k}=(m_{0}\equiv l,m_{1},\ldots m_{n})$, and $m_{1},m_{2},\ldots
m_{n}$ are integers such that $0\leqslant m_{n-1}\leqslant \cdots \leqslant
m_{1}\leqslant l$, $-m_{n-1}\leqslant m_{n}\leqslant m_{n-1}$. In Eq. (\ref%
{eigfunc1}) $Y(m_{k};\vartheta ,\phi )$ is the spherical harmonic of degree $%
l$ \cite{Erdelyi}, and
\begin{equation}
Z_{i\omega }^{(j)}(u,v)=\bar{I}_{i\omega }^{(j)}(v)K_{i\omega }(u)-\bar{K}%
_{i\omega }^{(j)}(v)I_{i\omega }(u),\;j=a,b, \label{Deigfunc}
\end{equation}%
with $I_{i\omega }(x)$ and $K_{i\omega }(x)$ being the modified Bessel
functions with the imaginary order,
\begin{equation}
\quad \lambda _{l}=\frac{1}{r_{H}}\sqrt{l(l+n)+\zeta n(n+1)+m^{2}r_{H}^{2}}%
\,. \label{lambdal}
\end{equation}%
Here and below for a given function $f(z)$ we use the barred notations
\begin{equation}
\bar{f}^{(j)}(z)=A_{j}f(z)+\frac{B_{j}}{j}zf^{\prime }(z),\quad j=a,b.
\label{fbarnot}
\end{equation}
Functions (\ref{eigfunc1}) satisfy the boundary condition on the brane $\xi
=b$. From the boundary condition on the brane $\xi =a$ we find that the
possible values for $\omega $ are roots to the equation
\begin{equation}
Z_{i\omega }(\lambda _{l}a,\lambda _{l}b)=0, \label{Deigfreq}
\end{equation}%
with the notation
\begin{equation}
Z_{\omega }(u,v)=\bar{I}_{\omega }^{(b)}(v)\bar{K}_{\omega }^{(a)}(u)-\bar{K}%
_{\omega }^{(b)}(v)\bar{I}_{\omega }^{(a)}(u). \label{Zomega}
\end{equation}%
For a fixed $\lambda _{l}$, the equation (\ref{Deigfreq}) has an infinite
set of real solutions with respect to $\omega $. We will denote them by $%
\omega _{n}=\omega _{n}(\lambda _{l}a,\lambda _{l}b)$, $\omega _{n}>0$, $%
n=1,2,\ldots $, and will assume that they are arranged in the ascending
order $\omega _{n}<\omega _{n+1}$. In addition to the real zeros, in
dependence of the values of the ratios $jA_{j}/B_{j}$, equation (\ref%
{Deigfreq}) can have a finite set of purely imaginary solutions. The
presence of such solutions leads to the modes with an imaginary frequency
and, hence, to the unstable vacuum. In the consideration below we will
assume the values of the coefficients in boundary conditions (\ref{bound1})
for which the imaginary solutions are absent and the vacuum is stable.
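A minimal numerical sketch of the eigenfrequency condition (\ref{Deigfreq}), assuming Dirichlet boundary conditions on both branes (so the barred functions reduce to the unbarred ones) and the illustrative values $\lambda _{l}a=1$, $\lambda _{l}b=3$; mpmath is used here because it supports modified Bessel functions of imaginary order:

```python
from mpmath import mp, besseli, besselk

mp.dps = 20  # working precision

def Z(w, u, v):
    """Z_{i w}(u, v) of Eq. (Zomega) for Dirichlet branes (barred functions
    equal the unbarred ones). The combination is real for real arguments,
    so only the real part is kept."""
    nu = 1j * w
    val = besseli(nu, v) * besselk(nu, u) - besselk(nu, v) * besseli(nu, u)
    return val.real

def lowest_root(u, v, wmax=8.0, steps=160):
    """Scan for a sign change of Z in omega and refine it by bisection;
    returns the lowest positive eigenfrequency, or None if none is found."""
    dw = wmax / steps
    w_prev, z_prev = dw, Z(dw, u, v)
    for i in range(2, steps + 1):
        w = i * dw
        z = Z(w, u, v)
        if z_prev * z < 0:  # bracketed a sign change
            lo, hi = w_prev, w
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if Z(mid, u, v) * Z(lo, u, v) < 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        w_prev, z_prev = w, z
    return None

omega_1 = lowest_root(1.0, 3.0)  # lambda_l a = 1, lambda_l b = 3 (illustrative)
```

Higher modes follow from continuing the scan past the first sign change; their spacing is of order $\pi /\ln (b/a)$ in the small-argument regime.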
The coefficient $C_{\alpha }$ in (\ref{eigfunc1}) can be found from the
normalization condition%
\begin{equation}
r_{H}^{D-1}\int d\Omega \int_{a}^{b}\frac{d\xi }{\xi }\varphi _{\alpha }%
\overleftrightarrow{\partial }_{\tau }\varphi _{\alpha ^{\prime }}^{\ast
}=i\delta _{\alpha \alpha ^{\prime }}. \label{normcond}
\end{equation}%
where the integration goes over the region between two spheres. Substituting
eigenfunctions (\ref{eigfunc1}) and using the relation $\int \left\vert
Y(m_{k};\vartheta ,\phi )\right\vert ^{2}d\Omega =N(m_{k})$ for spherical
harmonics, one finds%
\begin{equation}
C_{\alpha }^{2}=\left. \frac{r_{H}^{1-D}\bar{I}_{i\omega }^{(a)}(\lambda
_{l}a)}{N(m_{k})\bar{I}_{i\omega }^{(b)}(\lambda _{l}b)\frac{\partial }{%
\partial \omega }Z_{i\omega }(\lambda _{l}a,\lambda _{l}b)}\right\vert
_{\omega =\omega _{n}}. \label{Calfa}
\end{equation}%
The explicit form for $N(m_{k})$ is given in \cite{Erdelyi} and will not be
necessary for the following considerations in this paper.
First of all we evaluate the positive frequency Wightman function%
\begin{equation}
G^{+}(x,x^{\prime })=\langle 0\vert \varphi (x)\varphi (x^{\prime })\vert
0\rangle , \label{W1}
\end{equation}%
where $|0\rangle $ is the amplitude for the corresponding vacuum state. The
VEVs for the field square and the energy-momentum tensor are obtained from
this function in the coincidence limit of the arguments. In addition, the
Wightman function determines the response of a particle detector in given
state of motion. By expanding the field operator over eigenfunctions and
using the commutation relations one can see that%
\begin{equation}
G^{+}(x,x^{\prime })=\sum_{\alpha }\varphi _{\alpha }(x)\varphi _{\alpha
}^{\ast }(x^{\prime }). \label{W2}
\end{equation}%
Substituting eigenfunctions (\ref{eigfunc1}) into this mode sum formula and
by making use of the addition theorem%
\begin{equation}
\sum_{m_{k}}\frac{Y(m_{k};\vartheta ,\phi )}{N(m_{k})}Y(m_{k};\vartheta
^{\prime },\phi ^{\prime })=\frac{2l+n}{nS_{D}}C_{l}^{n/2}(\cos \theta ),
\label{addtheor}
\end{equation}%
for the Wightman function in the region between the branes one finds
\begin{eqnarray}
G^{+}(x,x^{\prime }) &=&\frac{r_{H}^{1-D}}{nS_{D}}\sum_{l=0}^{\infty
}(2l+n)C_{l}^{n/2}(\cos \theta )\sum_{n=1}^{\infty }\frac{\bar{I}_{i\omega
}^{(a)}(\lambda _{l}a)e^{-i\omega (\tau -\tau ^{\prime })}}{\bar{I}_{i\omega
}^{(b)}(\lambda _{l}b)\frac{\partial }{\partial \omega }Z_{i\omega }(\lambda
_{l}a,\lambda _{l}b)} \notag \\
&&\times \left. Z_{i\omega }^{(b)}(\lambda _{l}\xi ,\lambda _{l}b)Z_{i\omega
}^{(b)}(\lambda _{l}\xi ^{\prime },\lambda _{l}b)\right\vert _{\omega
=\omega _{n}}. \label{Wigh1}
\end{eqnarray}%
In this formula, $S_{D}=2\pi ^{D/2}/\Gamma (D/2)$ is the volume of the unit $%
(D-1)$-sphere, $C_{l}^{n/2}(x)$ is the Gegenbauer polynomial of degree $l$
and order $n/2$, $\theta $ is the angle between directions $(\vartheta ,\phi
)$ and $(\vartheta ^{\prime },\phi ^{\prime })$.
As the normal modes $\omega _{n}$ are not explicitly known and the terms
with large $n$ are highly oscillatory, the Wightman function in the form (%
\ref{Wigh1}) is not convenient. For the further evaluation we apply to the
sum over $n$ the summation formula derived in Ref. \cite{Saha06b} on the
basis of the generalized Abel-Plana formula \cite{SahRev}:%
\begin{equation}
\sum_{n=1}^{\infty }\frac{\bar{I}_{-i\omega _{n}}^{(b)}(v)\bar{I}_{i\omega
_{n}}^{(a)}(u)}{\frac{\partial }{\partial z}Z_{iz}(u,v)|_{z=\omega _{n}}}%
F(\omega _{n})=\int_{0}^{\infty }dz\frac{\sinh \pi z}{\pi ^{2}}%
\,F(z)-\int_{0}^{\infty }dz\frac{F(iz)+F(-iz)}{2\pi Z_{z}(u,v)}\bar{I}%
_{z}^{(a)}(u)\bar{I}_{-z}^{(b)}(v). \label{Sumformula}
\end{equation}%
As a function $F(z)$ in this formula we choose
\begin{equation}
F(z)=\frac{Z_{iz}^{(b)}(\lambda _{l}\xi ,\lambda _{l}b)Z_{iz}^{(b)}(\lambda _{l}\xi ^{\prime },\lambda _{l}b)}{\bar{I}%
_{iz}^{(b)}(\lambda _{l}b)\bar{I}_{-iz}^{(b)}(\lambda _{l}b)}e^{-iz(\tau
-\tau ^{\prime })}. \label{FtoAPF}
\end{equation}%
The conditions for the formula (\ref{Sumformula}) to be valid are satisfied
if $a^{2}e^{|\tau -\tau ^{\prime }|}<\xi \xi ^{\prime }$. For the Wightman
function one obtains the expression%
\begin{eqnarray}
G^{+}(x,x^{\prime }) &=&G^{+}(x,x^{\prime };b)-\frac{r_{H}^{1-D}}{\pi nS_{D}}%
\sum_{l=0}^{\infty }(2l+n)C_{l}^{n/2}(\cos \theta )\int_{0}^{\infty }d\omega
\,\Omega _{b\omega }(\lambda _{l}a,\lambda _{l}b) \notag \\
&&\times Z_{\omega }^{(b)}(\lambda _{l}\xi ,\lambda _{l}b)Z_{\omega
}^{(b)}(\lambda _{l}\xi ^{\prime },\lambda _{l}b)\cosh [\omega (\tau -\tau
^{\prime })], \label{Wigh3}
\end{eqnarray}%
where
\begin{equation}
\Omega _{b\omega }(u,v)=\frac{\bar{I}_{\omega }^{(a)}(u)}{\bar{I}_{\omega
}^{(b)}(v)Z_{\omega }(u,v)}. \label{Omega2}
\end{equation}%
In Eq. (\ref{Wigh3})
\begin{eqnarray}
G^{+}(x,x^{\prime };b) &=&\frac{r_{H}^{1-D}}{\pi ^{2}nS_{D}}%
\sum_{l=0}^{\infty }(2l+n)C_{l}^{n/2}(\cos \theta )\int_{0}^{\infty }d\omega
\sinh (\pi \omega ) \notag \\
&&\times e^{-i\omega (\tau -\tau ^{\prime })}\frac{Z_{i\omega
}^{(b)}(\lambda _{l}\xi ,\lambda _{l}b)Z_{i\omega }^{(b)}(\lambda _{l}\xi
^{\prime },\lambda _{l}b)}{\bar{I}_{i\omega }^{(b)}(\lambda _{l}b)\bar{I}%
_{-i\omega }^{(b)}(\lambda _{l}b)}, \label{Wigh1pl}
\end{eqnarray}%
is the Wightman function in the region $\xi <b$ for a single brane at $\xi
=b $ and the second term on the right is induced by the presence of the
brane at $\xi =a$. The function (\ref{Wigh1pl}) is investigated in Ref. \cite%
{Saha05} and can be presented in the form
\begin{equation}
G^{+}(x,x^{\prime };b)=G_{0}^{+}(x,x^{\prime })+\langle \varphi (x)\varphi
(x^{\prime })\rangle ^{(b)}, \label{G+2}
\end{equation}%
where $G_{0}^{+}(x,x^{\prime })$ is the Wightman function for the geometry
without boundaries and the part%
\begin{eqnarray}
\langle \varphi (x)\varphi (x^{\prime })\rangle ^{(b)} &=&-\frac{r_{H}^{1-D}%
}{\pi nS_{D}}\sum_{l=0}^{\infty }(2l+n)C_{l}^{n/2}(\cos \theta
)\int_{0}^{\infty }d\omega \frac{\bar{K}_{\omega }^{(b)}(\lambda _{l}b)}{%
\bar{I}_{\omega }^{(b)}(\lambda _{l}b)} \notag \\
&&\times I_{\omega }(\lambda _{l}\xi )I_{\omega }(\lambda _{l}\xi ^{\prime
})\cosh [\omega (\tau -\tau ^{\prime })] \label{phi212}
\end{eqnarray}%
is induced in the region $\xi <b$ by the presence of the brane at $\xi =b$.
Note that the representation (\ref{G+2}) with (\ref{phi212}) is valid under
the assumption $\xi \xi ^{\prime }<b^{2}e^{|\tau -\tau ^{\prime }|}$. As it
has been shown in \cite{Saha05}, the Wightman function for the boundary-free
geometry may be written in the form%
\begin{eqnarray}
G_{0}^{+}(x,x^{\prime }) &=&\tilde{G}_{0}^{+}(x,x^{\prime })-\frac{%
r_{H}^{1-D}}{\pi ^{2}nS_{D}}\sum_{l=0}^{\infty }(2l+n)C_{l}^{n/2}(\cos
\theta ) \notag \\
&&\times \int_{0}^{\infty }d\omega e^{-\omega \pi }\cos [\omega (\tau -\tau
^{\prime })]K_{i\omega }(\lambda _{l}\xi )K_{i\omega }(\lambda _{l}\xi
^{\prime }), \label{GM1}
\end{eqnarray}%
where $\tilde{G}_{0}^{+}(x,x^{\prime })$ is the Wightman function for the
bulk geometry $R^{2}\times S^{D-1}$. Outside the horizon the divergences in
the coincidence limit of the expression on the right of (\ref{GM1}) are
contained in the first term.
It can be seen that the Wightman function in the region between
the branes can be also presented in the form
\begin{eqnarray}
G^{+}(x,x^{\prime }) &=&G^{+}(x,x^{\prime };a)-\frac{r_{H}^{1-D}}{\pi nS_{D}}%
\sum_{l=0}^{\infty }(2l+n)C_{l}^{n/2}(\cos \theta )\int_{0}^{\infty }d\omega
\,\Omega _{a\omega }(\lambda _{l}a,\lambda _{l}b) \notag \\
&&\times Z_{\omega }^{(a)}(\lambda _{l}\xi ,\lambda _{l}a)Z_{\omega
}^{(a)}(\lambda _{l}\xi ^{\prime },\lambda _{l}a)\cosh [\omega (\tau -\tau
^{\prime })], \label{Wigh31}
\end{eqnarray}%
with the notation
\begin{equation}
\Omega _{a\omega }(u,v)=\frac{\bar{K}_{\omega }^{(b)}(v)}{\bar{K}_{\omega
}^{(a)}(u)Z_{\omega }(u,v)}. \label{Oma}
\end{equation}
In this representation,%
\begin{equation}
G^{+}(x,x^{\prime };a)=G_{0}^{+}(x,x^{\prime })+\langle \varphi (x)\varphi
(x^{\prime })\rangle ^{(a)} \label{G+1}
\end{equation}%
is the Wightman function in the region $\xi >a$ for a single brane at $\xi
=a $, and
\begin{eqnarray}
\langle \varphi (x)\varphi (x^{\prime })\rangle ^{(a)} &=&-\frac{r_{H}^{1-D}%
}{\pi nS_{D}}\sum_{l=0}^{\infty }(2l+n)C_{l}^{n/2}(\cos \theta
)\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega }^{(a)}(\lambda _{l}a)}{%
\bar{K}_{\omega }^{(a)}(\lambda _{l}a)} \notag \\
&&\times K_{\omega }(\lambda _{l}\xi )K_{\omega }(\lambda _{l}\xi ^{\prime
})\cosh [\omega (\tau -\tau ^{\prime })]. \label{phi211}
\end{eqnarray}%
Two representations of the Wightman function, given by Eqs. (\ref{Wigh3})
and (\ref{Wigh31}), are obtained from each other by the replacements%
\begin{equation}
a\rightleftarrows b,\quad I_{\omega }\rightleftarrows K_{\omega }.
\label{replacement}
\end{equation}%
In the coincidence limit the second term on the right of formula (\ref{Wigh3}%
) is finite on the brane $\xi =b$ and diverges on the brane at $\xi =a$,
whereas the second term on the right of Eq. (\ref{Wigh31}) is finite on the
brane $\xi =a$ and is divergent for $\xi =b$. Consequently, the forms (\ref%
{Wigh3}) and (\ref{Wigh31}) are convenient for the investigations of the
VEVs near the branes $\xi =b$ and $\xi =a$, respectively.
We have investigated the Wightman function in the region between two branes
for an arbitrary ratio of boundary coefficients $A_{j}/B_{j}$. Note that in
the orbifolded version of the model the integration in the normalization
integral goes over two copies of the bulk manifold. This leads to the
additional coefficient $1/2$ in the expression (\ref{Calfa}) for the
normalization coefficient $C_{\alpha }$. Hence, the Wightman function in
the orbifolded braneworld case is given by formula (\ref{Wigh3}) with an
additional factor $1/2$ in the second term on the right and in formula (\ref%
{Wigh1pl}). As it has been mentioned above this function corresponds to the
braneworld in the AdS black hole bulk in the limit when the branes are close
to the black hole horizon.
\section{Casimir densities}
\label{sec:VEVEMT}
\subsection{VEV for the field square}
In this section we will consider the VEVs of the field square and the
energy-momentum tensor in the region between the branes. In the coincidence
limit, taking into account the relation $C_{l}^{n/2}(1)=\Gamma (l+n)/\Gamma
(n)l!$, from the formulae for the Wightman function one obtains two
equivalent forms for the VEV\ of the field square:%
\begin{eqnarray}
\langle 0\vert \varphi ^{2}\vert 0\rangle &=&\langle 0_{0}\vert \varphi
^{2}\vert 0_{0}\rangle +\langle \varphi ^{2}\rangle ^{(j)} \notag \\
&&-\frac{r_{H}^{1-D}}{\pi S_{D}}\sum_{l=0}^{\infty }D_{l}\int_{0}^{\infty
}d\omega \,\Omega _{j\omega }(\lambda _{l}a,\lambda _{l}b)Z_{\omega
}^{(j)2}(\lambda _{l}\xi ,\lambda _{l}j), \label{phi2sq1}
\end{eqnarray}%
corresponding to $j=a$ and $j=b$, and $|0_{0}\rangle $ is the amplitude for
the vacuum without boundaries,
\begin{equation}
D_{l}=(2l+D-2)\frac{\Gamma (l+D-2)}{\Gamma (D-1)l!} \label{Dl}
\end{equation}%
is the degeneracy of each angular mode with given $l$. The VEV $\langle
0_{0}\vert \varphi ^{2}\vert 0_{0}\rangle $ is obtained from the
corresponding Wightman function given by (\ref{GM1}). For the points outside
the horizon, the renormalization procedure is needed for the first term on
the right only, which corresponds to the VEV in the geometry $R^{2}\times
S^{D-1}$. This procedure is realized in \cite{Saha05} on the basis of the
zeta function technique.
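The degeneracy factor (\ref{Dl}) can be checked against familiar cases: for $D=3$ it reduces to the $2l+1$ degeneracy of ordinary spherical harmonics, and for $D=4$ to the $(l+1)^{2}$ degeneracy on $S^{3}$. A quick numerical sketch:

```python
from math import factorial, gamma

def degeneracy(l, D):
    """D_l of Eq. (Dl): the number of independent hyperspherical harmonics
    of degree l on the sphere S^{D-1}."""
    return (2 * l + D - 2) * gamma(l + D - 2) / (gamma(D - 1) * factorial(l))

# D = 3 reproduces the familiar 2l+1 degeneracy on S^2.
assert [round(degeneracy(l, 3)) for l in range(5)] == [1, 3, 5, 7, 9]
```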
In Eq. (\ref{phi2sq1}), the part $\langle \varphi ^{2}\rangle ^{(j)}$ is
induced by a single brane at $\xi =j$ when the second brane is absent. For
the geometry of a single brane at $\xi =a$, from (\ref{phi211}) one has
\begin{equation}
\langle \varphi ^{2}\rangle ^{(a)}=-\frac{r_{H}^{1-D}}{\pi S_{D}}%
\sum_{l=0}^{\infty }D_{l}\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega
}^{(a)}(\lambda _{l}a)}{\bar{K}_{\omega }^{(a)}(\lambda _{l}a)}K_{\omega
}^{2}(\lambda _{l}\xi ). \label{phi21plb}
\end{equation}%
The expression for $\langle \varphi ^{2}\rangle ^{(b)}$ is obtained from (%
\ref{phi21plb}) by replacements (\ref{replacement}) and is investigated in
\cite{Saha05}. The last term on the right of formula (\ref{phi2sq1}) is
induced by the presence of the second brane. It is finite on the brane at $%
\xi =j$ and diverges for the points on the other brane. By taking into
account the relation $Z_{\omega }^{(j)}(u,u)=B_{j}/j$, we see that for the
Dirichlet boundary condition this term vanishes on the brane $\xi =j$.
Let us consider the behavior of the single brane part (\ref{phi21plb}) in
asymptotic regions of the parameters. In the limit $\xi \rightarrow a$ this
part diverges and, hence, for points near the brane the main contribution
comes from large values $l$. By making use of the corresponding uniform
asymptotic expansions for the modified Bessel functions, to the leading
order we find
\begin{equation}
\langle \varphi ^{2}\rangle ^{(a)}\approx -\frac{\delta _{B_{a}}\Gamma
\left( \frac{D-1}{2}\right) }{(4\pi )^{\frac{D+1}{2}}(\xi -a)^{D-1}},
\label{phi2anear}
\end{equation}%
where $\delta _{B_{a}}=1$ for $B_{a}=0$ and $\delta _{B_{a}}=-1$ for $%
B_{a}\neq 0$. Hence, near the brane the brane-induced part is negative for
the Dirichlet boundary condition and is positive for non-Dirichlet boundary
condition. At large distances from the brane, $\xi \gg r_{H}$, the dominant
contribution into (\ref{phi21plb}) comes from the $l=0$ term and in the
leading order we have%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(a)}\approx -\frac{r_{H}^{1-D}e^{-2\lambda
_{0}\xi }}{2S_{D}\lambda _{0}\xi }\int_{0}^{\infty }d\omega \frac{\bar{I}%
_{\omega }^{(a)}(\lambda _{0}a)}{\bar{K}_{\omega }^{(a)}(\lambda _{0}a)}.
\label{phi2afar}
\end{equation}%
In the limit when the position of the brane tends to the horizon, $%
a\rightarrow 0$, with fixed $\xi $, we use the formulae for the modified
Bessel functions with small values of the argument. The main contribution
into the integral comes from the lower limit of the integration and we
obtain the formula%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(a)}\approx -\frac{r_{H}^{1-D}\delta _{B_{a}}}{%
2\pi S_{D}\ln ^{2}(r_{H}/a)}\sum_{l=0}^{\infty }D_{l}K_{0}^{2}(\lambda
_{l}\xi ). \label{phi2anearhoriz}
\end{equation}
In the limit $r_{H}\rightarrow 0$ the curvature of the background spacetime
is large. In this limit $\lambda _{l}$ is also large. The exception is the
term $l=0$ for a minimally coupled scalar field for which $\lambda _{0}=m$.
For large values $\lambda _{l}$ the main contribution into the integral in (%
\ref{phi21plb}) comes from large values $\omega $. Introducing a new
integration variable $x=\omega /\lambda _{l}a$, we estimate the integral by
the Laplace method. This leads to the following result%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(a)}\approx -\frac{\delta _{B_{a}}r_{H}^{1-D}a%
}{4\sqrt{\pi }S_{D}\xi }\frac{\exp [-2\sqrt{\zeta n(n+1)}(\xi -a)/r_{H}]}{%
\sqrt{\lambda _{0}(\xi -a)}}. \label{phi2alargecurv}
\end{equation}%
For a minimally coupled scalar field the contribution of the terms with $%
l\geqslant 1$ is suppressed by the factor $e^{-2\lambda _{l}(\xi -a)}$ and
the dominant contribution comes from the $l=0$\ term:%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(a)}=-\frac{r_{H}^{1-D}}{\pi S_{D}}%
\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega }^{(a)}(ma)}{\bar{K}_{\omega
}^{(a)}(ma)}K_{\omega }^{2}(m\xi ). \label{phi2asmallrh}
\end{equation}%
As we see, the behavior of the VEV in the high curvature regime is
essentially different for minimally and non-minimally coupled
fields.
In the near-horizon limit, $a,\xi \ll r_{H}$, the main contribution into the
sum over $l$ in (\ref{phi21plb}) comes from large values $l$ corresponding
to $l\lesssim r_{H}/(\xi -a)$. To the leading order we can replace the
summation over $l$ by the integration in accordance with the formula%
\begin{equation}
\sum_{l=0}^{\infty }D_{l}f(\lambda _{l})\rightarrow \frac{2r_{H}^{D-1}}{%
\Gamma (D-1)}\int_{0}^{\infty }dk\,k^{D-2}f(\sqrt{k^{2}+m^{2}}).
\label{sumtoint}
\end{equation}%
Now it is easily seen that from (\ref{phi21plb}) we obtain the corresponding
result for the plate uniformly accelerated through the Fulling-Rindler
vacuum.
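The replacement (\ref{sumtoint}) can be probed numerically. The sketch below uses illustrative parameters, none taken from the paper: $D=3$ (so $n=1$ and $\Gamma (D-1)=1$), $m=0$, minimal coupling so that $\lambda _{l}=\sqrt{l(l+1)}/r_{H}$, and a test function $f(x)=e^{-x}$:

```python
import math

# Probe of the sum -> integral replacement near the horizon.
# Illustrative parameters: D = 3, m = 0, minimal coupling, f(x) = exp(-x).
r_H = 200.0

# Left-hand side: sum_l D_l f(lambda_l), with D_l = 2l+1 for D = 3.
lhs = sum((2 * l + 1) * math.exp(-math.sqrt(l * (l + 1)) / r_H)
          for l in range(200_000))

# Right-hand side: 2 r_H^2 * Int_0^inf dk k e^{-k} = 2 r_H^2.
rhs = 2.0 * r_H ** 2

rel_err = abs(lhs - rhs) / rhs  # small when r_H is large
```

The agreement improves as $r_{H}$ grows, consistent with the replacement being the leading term of the near-horizon expansion.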
In the geometry of two branes, extracting the contribution from the second
brane, we can write the expression (\ref{phi2sq1}) for the VEV in the
symmetric form
\begin{equation}
\langle 0\left\vert \varphi ^{2}\right\vert 0\rangle =\langle
0_{0}\left\vert \varphi ^{2}\right\vert 0_{0}\rangle +\sum_{j=a,b}\langle
\varphi ^{2}\rangle ^{(j)}+\langle \varphi ^{2}\rangle ^{(ab)},
\label{phi2sq2n}
\end{equation}%
with the interference part%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(ab)}=-\frac{r_{H}^{1-D}}{\pi S_{D}}%
\sum_{l=0}^{\infty }D_{l}\int_{0}^{\infty }d\omega \bar{I}_{\omega
}^{(a)}(\lambda _{l}a)\left[ \frac{Z_{\omega }^{(b)2}(\lambda _{l}\xi
,\lambda _{l}b)}{\bar{I}_{\omega }^{(b)}(\lambda _{l}b)Z_{\omega }(\lambda
_{l}a,\lambda _{l}b)}-\frac{K_{\omega }^{2}(\lambda _{l}\xi )}{\bar{K}%
_{\omega }^{(a)}(\lambda _{l}a)}\right] . \label{phi2int}
\end{equation}%
An equivalent form for this part is obtained with the replacements (\ref%
{replacement}) in the integrand. The interference term
(\ref{phi2int}) is finite for all values of $\xi $ in the range
$a\leqslant \xi \leqslant b$, including the points on the branes.
The surface divergences are contained in the single brane parts
only.
Let us consider the behavior of the interference part in the VEV of the
field square in limiting regions of the parameter values. First of all,
it can be seen that in the limit $a\rightarrow b$, to the leading order the
result for the parallel plates in the Minkowski bulk is obtained. When the
left brane tends to the horizon, $a\rightarrow 0$, the dominant contribution
comes from the lower limit of the integration in (\ref{phi2int}), and we
have
\begin{equation}
\langle \varphi ^{2}\rangle ^{(ab)}\approx \frac{r_{H}^{1-D}\delta _{B_{a}}}{%
2\pi S_{D}\ln ^{2}(r_{H}/a)}\sum_{l=0}^{\infty }D_{l}\frac{\bar{K}%
_{0}^{(b)}(\lambda _{l}b)}{\bar{I}_{0}^{(b)}(\lambda _{l}b)}\left[
2K_{0}(\lambda _{l}\xi )-\frac{\bar{K}_{0}^{(b)}(\lambda _{l}b)}{\bar{I}%
_{0}^{(b)}(\lambda _{l}b)}I_{0}(\lambda _{l}\xi )\right] I_{0}(\lambda
_{l}\xi ). \label{phi2abnearhoriz}
\end{equation}%
In the limit $b\rightarrow \infty $ for fixed values of $a$ and $\xi $, the
main contribution comes from the lowest mode $l=0$, and to the leading order
one finds%
\begin{equation}
\langle \varphi ^{2}\rangle ^{(ab)}\approx \frac{e^{-2\lambda _{0}b}}{%
S_{D}r_{H}^{D-1}}\frac{A_{b}-B_{b}\lambda _{0}}{A_{b}+B_{b}\lambda _{0}}%
\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega }^{(a)}(\lambda _{0}a)}{\bar{%
K}_{\omega }^{(a)}(\lambda _{0}a)}\left[ 2I_{\omega }(\lambda _{0}\xi )-%
\frac{\bar{I}_{\omega }^{(a)}(\lambda _{0}a)}{\bar{K}_{\omega
}^{(a)}(\lambda _{0}a)}K_{\omega }(\lambda _{0}\xi )\right] K_{\omega
}(\lambda _{0}\xi ), \label{phi2abfar}
\end{equation}%
with the exponentially suppressed interference part. The behavior
of the interference part in the limit $r_{H}\rightarrow 0$ can be
investigated in a way similar to that for a single brane part.
For a non-minimally coupled scalar field the interference part is
dominated by the $l=0$ term and is suppressed by the factor $\exp
[-2\sqrt{\zeta n(n+1)}(b-a)/r_{H}]$. For a minimally coupled
scalar field the leading term is given by the $l=0$
summand with $\lambda _{0}=m$ and the interference part behaves as $%
r_{H}^{1-D}$. In the near-horizon limit, $a,b\ll r_{H}$, replacing the
summation by the integration in accordance with formula (\ref{sumtoint}), it
can be seen that from (\ref{phi2int}) the result for the geometry of two
parallel plates uniformly accelerated through the Fulling-Rindler vacuum is
obtained.
\subsection{Energy-momentum tensor}
The VEV of the energy-momentum tensor is expressed in terms of the Wightman
function as
\begin{equation}
\langle 0\vert T_{ik}\vert 0\rangle =\lim_{x^{\prime }\rightarrow x}\partial
_{i}\partial _{k}^{\prime }G^{+}(x,x^{\prime })+\left[ \left( \zeta -\frac{1%
}{4}\right) g_{ik}\nabla _{l}\nabla ^{l}-\zeta \nabla _{i}\nabla _{k}-\zeta
R_{ik}\right] \langle 0\vert \varphi ^{2}\vert 0\rangle , \label{vevemtW}
\end{equation}%
where $R_{ik}$ is the Ricci tensor for the bulk geometry. Making use of the
formulae for the Wightman function and the VEV of the field square, one
obtains two equivalent forms, corresponding to $j=a$ and $j=b$ (no summation
over $i$):
\begin{eqnarray}
\langle 0|T_{i}^{k}|0\rangle &=&\langle 0_{0}|T_{i}^{k}|0_{0}\rangle
+\langle T_{i}^{k}\rangle ^{(j)}-\delta _{i}^{k}\frac{r_{H}^{1-D}}{\pi S_{D}}%
\sum_{l=0}^{\infty }D_{l}\lambda _{l}^{2} \notag \\
&&\times \int_{0}^{\infty }d\omega \,\Omega _{j\omega }(\lambda
_{l}a,\lambda _{l}b)F^{(i)}\left[ Z_{\omega }^{(j)}(\lambda _{l}\xi ,\lambda
_{l}j)\right] . \label{Tik1}
\end{eqnarray}
In this formula, for a given function $g(z)$ we use the notations
\begin{eqnarray}
F^{(0)}[g(z)] &=&\left( \frac{1}{2}-2\zeta \right) \left[ \left( \frac{dg(z)%
}{dz}\right) ^{2}+\left( 1+\frac{\omega ^{2}}{z^{2}}\right) g^{2}(z)\right] +%
\frac{\zeta }{z}\frac{d}{dz}g^{2}(z)-\frac{\omega ^{2}}{z^{2}}g^{2}(z),
\label{f0} \\
F^{(1)}[g(z)] &=&-\frac{1}{2}\left( \frac{dg(z)}{dz}\right) ^{2}-\frac{\zeta
}{z}\frac{d}{dz}g^{2}(z)+\frac{1}{2}\left( 1+\frac{\omega ^{2}}{z^{2}}%
\right) g^{2}(z), \label{f1} \\
F^{(i)}[g(z)] &=&\left( \frac{1}{2}-2\zeta \right) \left[ \left( \frac{dg(z)%
}{dz}\right) ^{2}+\left( 1+\frac{\omega ^{2}}{z^{2}}\right) g^{2}(z)\right] -%
\frac{g^{2}(z)}{D-1}\frac{\lambda _{l}^{2}-m^{2}}{\lambda _{l}^{2}},
\label{f23}
\end{eqnarray}%
with $g(z)=Z_{\omega }^{(j)}(z,\lambda _{l}j)$, where $i=2,\ldots ,D$ and
the indices 0,1 correspond to the coordinates $\tau $, $\xi $, respectively.
In formula (\ref{Tik1}),
\begin{equation}
\langle 0_{0}|T_{i}^{k}|0_{0}\rangle =\delta _{i}^{k}\frac{r_{H}^{1-D}}{\pi
^{2}S_{D}}\sum_{l=0}^{\infty }D_{l}\lambda _{l}^{2}\int_{0}^{\infty }d\omega
\sinh \pi \omega \,f^{(i)}[K_{i\omega }(\lambda _{l}\xi )] \label{DFR}
\end{equation}%
is the corresponding VEV for the vacuum without boundaries, and the term $%
\langle T_{i}^{k}\rangle ^{(j)}$ is induced by the presence of a single
spherical brane located at $\xi =j$. For the brane at $\xi =a$ and in the
region $\xi >a$ one has (no summation over $i$)
\begin{equation}
\langle T_{i}^{k}\rangle ^{(a)}=-\delta _{i}^{k}\frac{r_{H}^{1-D}}{\pi S_{D}}%
\sum_{l=0}^{\infty }D_{l}\lambda _{l}^{2}\int_{0}^{\infty }d\omega \frac{%
\bar{I}_{\omega }^{(a)}(\lambda _{l}a)}{\bar{K}_{\omega }^{(a)}(\lambda
_{l}a)}F^{(i)}[K_{\omega }(\lambda _{l}\xi )]. \label{D1plateboundb}
\end{equation}%
For the geometry of a single brane at $\xi =b$, the corresponding expression
in the region $\xi <b$ is obtained from (\ref{D1plateboundb}) by the
replacements (\ref{replacement}). The expressions for the functions $%
f^{(i)}[g(z)]$ in (\ref{DFR}) are obtained from the corresponding
expressions for $F^{(i)}[g(z)]$ by the replacement $\omega \rightarrow
i\omega $. It can be easily seen that for a conformally coupled massless
scalar the boundary induced part in the energy-momentum tensor is traceless.
The boundary-free part (\ref{DFR}) and the single brane part $\langle
T_{i}^{k}\rangle ^{(j)}$ in the region $\xi <j$ are investigated in Ref.
\cite{Saha05}.
Now we turn to the investigation of the brane-induced VEVs in limiting
cases. First of all, let us consider the single brane part (\ref{D1plateboundb}).
At large distances from the brane, $\xi \gg r_{H}$, the main contribution
comes from the $l=0$ term and one has
\begin{equation}
\langle T_{i}^{k}\rangle ^{(a)}\approx \lambda _{0}^{2}\delta
_{i}^{k}F_{0}^{(i)}\langle \varphi ^{2}\rangle ^{(a)}, \label{Tikafar}
\end{equation}%
where $\langle \varphi ^{2}\rangle ^{(a)}$ is given by (\ref{phi2afar}) and%
\begin{equation}
F_{0}^{(0)}=1-4\zeta ,\;F_{0}^{(1)}=\frac{4\zeta -1}{2\lambda _{0}\xi }%
,\;F_{0}^{(2)}=1-4\zeta -\frac{1-m^{2}/\lambda _{0}^{2}}{D-1}.
\label{F0ifar}
\end{equation}%
In this limit the radial vacuum stress is suppressed by the factor $\lambda
_{0}\xi $ with respect to the corresponding energy density and azimuthal
stresses. In the limit $a\rightarrow 0$ with $\xi $ fixed, the main
contribution to the $\omega $-integral comes from the lower limit and to
the leading order we obtain%
\begin{equation}
\langle T_{i}^{k}\rangle ^{(a)}\approx -\frac{\delta
_{i}^{k}r_{H}^{1-D}\delta _{B_{a}}}{2\pi S_{D}\ln ^{2}(r_{H}/a)}%
\sum_{l=0}^{\infty }D_{l}\lambda _{l}^{2}F^{(i)}[K_{\omega }(\lambda _{l}\xi
)]_{\omega =0}. \label{Tikanearhor}
\end{equation}%
For $r_{H}\rightarrow 0$, in a way similar to that for the field square, it
can be seen that for a non-minimally coupled scalar field $\langle
T_{i}^{k}\rangle ^{(a)}$ is suppressed by the factor $\exp [-2\sqrt{\zeta
n(n+1)}(\xi -a)/r_{H}]$. For a minimally coupled scalar the main
contribution comes from the $l=0$ term and the brane induced VEV (\ref%
{D1plateboundb}) behaves like $r_{H}^{1-D}$.
Now let us present the VEV (\ref{Tik1}) in the form%
\begin{equation}
\langle 0|T_{i}^{k}|0\rangle =\langle 0_{0}|T_{i}^{k}|0_{0}\rangle
+\sum_{j=a,b}\langle T_{i}^{k}\rangle ^{(j)}+\langle T_{i}^{k}\rangle
^{(ab)}, \label{Tikdecomp}
\end{equation}%
where the interference part is given by the formula (no summation over $i$)
\begin{eqnarray}
\langle T_{i}^{k}\rangle ^{(ab)} &=&-\delta _{i}^{k}\frac{r_{H}^{1-D}}{\pi
S_{D}}\sum_{l=0}^{\infty }D_{l}\lambda _{l}^{2}\int_{0}^{\infty }d\omega
\bar{I}_{\omega }^{(a)}(\lambda _{l}a) \notag \\
&&\times \left[ \frac{F^{(i)}[Z_{\omega }^{(b)}(\lambda _{l}\xi ,\lambda
_{l}b)]}{\bar{I}_{\omega }^{(b)}(\lambda _{l}b)Z_{\omega }(\lambda
_{l}a,\lambda _{l}b)}-\frac{F^{(i)}[K_{\omega }(\lambda _{l}\xi )]}{\bar{K}%
_{\omega }^{(a)}(\lambda _{l}a)}\right] . \label{intterm1}
\end{eqnarray}%
The surface divergences are contained in the single brane parts and the term
(\ref{intterm1}) is finite for all values $a\leqslant \xi \leqslant b$. An
equivalent formula for $\langle T_{i}^{k}\rangle ^{(ab)}$ is obtained from
Eq. (\ref{intterm1}) by replacements (\ref{replacement}). The behavior of
the interference part (\ref{intterm1}) in the limits $a\rightarrow 0$ and $%
b\rightarrow \infty $ is similar to that for the field square. In the
near-horizon limit, $a,b\ll r_{H}$, for both single brane and interference
parts replacing the summation by the integration in accordance with formula (%
\ref{sumtoint}), the result for the parallel plates uniformly accelerated
through the Fulling-Rindler vacuum is obtained.
\section{Vacuum interaction forces between the branes}
\label{sec:IntForce}
In this section we will consider the vacuum forces acting on the branes. The
force acting per unit surface of the brane at $\xi =j$ is determined by the
radial component of the vacuum energy-momentum tensor evaluated at this
point. By using the decomposition of the VEV for the energy-momentum tensor
given by (\ref{Tik1}), the corresponding effective pressures, $%
p^{(j)}=-\langle T_{1}^{1}\rangle _{\xi =j}$, can be presented as the sum
\begin{equation}
p^{(j)}=p_{1}^{(j)}+p_{\mathrm{(int)}}^{(j)},\quad j=a,b, \label{FintD}
\end{equation}%
where the first term on the right is the pressure for a single brane at $\xi
=j$ when the second brane is absent. This term is divergent due to the
surface divergences in the subtracted vacuum expectation values and needs
additional renormalization. This can be done, for example, by applying the
generalized zeta function technique to the corresponding mode sum. This
procedure is similar to that used in Ref. \cite{Saha06} for the evaluation
of the surface energy for a single brane. The second term on the right of
Eq. (\ref{FintD}), $p_{\mathrm{(int)}}^{(j)}$, is the pressure induced by
the presence of the second brane, and can be termed an interaction force.
This term determines the force by which the scalar vacuum acts on the brane
due to the modification of the spectrum for the zero-point fluctuations by
the presence of the second brane. It is finite for all nonzero distances
between the branes and is not affected by the renormalization procedure.
For the brane at $\xi =j$ the interaction term is due to the third summand
on the right of Eq. (\ref{Tik1}). Substituting into this term $i=k=1$, $\xi
=j$ and using the Wronskian relation for the modified Bessel functions one
finds
\begin{equation}
p_{\mathrm{(int)}}^{(j)}=\frac{A_{j}^{2}}{2j^{2}}\frac{r_{H}^{1-D}}{\pi S_{D}%
}\sum_{l=0}^{\infty }D_{l}\int_{0}^{\infty }d\omega \left[ \left( \lambda
_{l}^{2}j^{2}+\omega ^{2}\right) \beta _{j}^{2}+4\zeta \beta _{j}-1\right]
\,\Omega _{j\omega }(\lambda _{l}a,\lambda _{l}b), \label{pint2}
\end{equation}%
with $\beta _{j}=B_{j}/jA_{j}$. The interaction force acts on the surface $%
\xi =a+0$ for the brane at $\xi =a$ and on the surface $\xi =b-0$ for the
brane at $\xi =b$. Depending on the values of the coefficients in the
boundary conditions, the effective pressures (\ref{pint2}) can be either
positive or negative, leading to repulsive or attractive forces,
respectively. For Dirichlet or Neumann boundary conditions on both branes
the interaction forces are always attractive. For Dirichlet boundary
condition on one brane and Neumann boundary condition on the other one has $%
p_{\mathrm{(int)}}^{(j)}>0$ and the interaction forces are repulsive for all
distances between the branes. Note that the interaction forces can also be
written in another equivalent form
\begin{eqnarray}
p_{\mathrm{(int)}}^{(j)} &=&\frac{n^{(j)}}{2j}\frac{r_{H}^{1-D}}{\pi S_{D}}%
\sum_{l=0}^{\infty }D_{l}\int_{0}^{\infty }d\omega \left[ 1+\frac{\left(
4\zeta -1\right) \beta _{j}}{\left( \lambda _{l}^{2}j^{2}+\omega ^{2}\right)
\beta _{j}^{2}+\beta _{j}-1}\right] \notag \\
&&\times \frac{\partial }{\partial j}\ln \left\vert 1-\frac{\bar{I}_{\omega
}^{(a)}(\lambda _{l}a)\bar{K}_{\omega }^{(b)}(\lambda _{l}b)}{\bar{I}%
_{\omega }^{(b)}(\lambda _{l}b)\bar{K}_{\omega }^{(a)}(\lambda _{l}a)}%
\right\vert . \label{pint3}
\end{eqnarray}
Now we turn to the investigation of the interaction forces in the asymptotic
regions of the parameters. In the limit $a\rightarrow b$ the dominant
contribution to the expression on the right of (\ref{pint2}) comes from
large values of $l$ and $\omega $. Replacing the summation over $l$ by the
integration in accordance with $\sum_{l=0}^{\infty }D_{l}f(l)\rightarrow
2\int_{0}^{\infty }dl\,l^{D-2}f(l)/\Gamma (D-1)$, and using the uniform
asymptotic expansions for the modified Bessel functions, to the leading
order we find%
\begin{equation}
p_{\mathrm{(int)}}^{(j)}\approx \sigma _{ab}\frac{\Gamma \left( \frac{D+1}{2}%
\right) \zeta _{\mathrm{R}}(D+1)}{(4\pi )^{(D+1)/2}(b-a)^{D+1}},
\label{pjintnear}
\end{equation}%
where $\zeta _{\mathrm{R}}(x)$ is the Riemann zeta function, $\sigma
_{ab}=-1 $ for $\delta _{B_{a}}\delta _{B_{b}}=1$ and $\sigma _{ab}=1-2^{-D}$
for $\delta _{B_{a}}\delta _{B_{b}}=-1$. Hence, for small distances between
the branes the interaction forces are repulsive for the Dirichlet boundary
condition on one brane and non-Dirichlet boundary condition on the other and
are attractive for all other cases. Note that in the limit $a\rightarrow b$
the interaction part of the total vacuum force acting on the brane diverges,
whereas the renormalized single brane parts remain finite. It follows
that at small distances between the branes the interaction part
dominates.
When the left brane tends to the horizon, $a\rightarrow 0$, the main
contribution to the vacuum interaction forces comes from the lower limit
of the $\omega $-integral and one has%
\begin{eqnarray}
p_{\mathrm{(int)}}^{(a)} &\approx &-\frac{\delta _{B_{a}}r_{H}^{1-D}}{2\pi
S_{D}a^{2}\ln ^{3}(r_{H}/a)}\sum_{l=0}^{\infty }D_{l}\frac{\bar{K}%
_{0}^{(b)}(\lambda _{l}b)}{\bar{I}_{0}^{(b)}(\lambda _{l}b)},
\label{panearhor} \\
p_{\mathrm{(int)}}^{(b)} &\approx &\frac{r_{H}^{1-D}\delta _{B_{a}}A_{b}^{2}%
}{4\pi S_{D}b^{2}\ln ^{2}(r_{H}/a)}\sum_{l=0}^{\infty }D_{l}\frac{\lambda
_{l}^{2}b^{2}\beta _{b}^{2}+4\zeta \beta _{b}-1}{\bar{I}_{0}^{(b)2}(\lambda
_{l}b)}. \label{pbnearhor}
\end{eqnarray}%
In this limit the interaction forces have different signs for the Dirichlet
and non-Dirichlet boundary conditions on the brane $\xi =a$. The combination
$\delta _{B_{a}}p_{\mathrm{(int)}}^{(j)}$ is positive for large values of $%
\beta _{b}$ and negative for small values of this parameter. In the limit
$b\rightarrow \infty $ for fixed $a$, the dominant contribution comes from
the lowest mode $l=0$ and assuming that $A_{b}\neq \pm \lambda _{0}B_{b}$,
we have the estimates%
\begin{eqnarray}
p_{\mathrm{(int)}}^{(a)} &\approx &\frac{A_{a}^{2}e^{-2\lambda _{0}b}}{%
2S_{D}a^{2}r_{H}^{D-1}}\frac{A_{b}-\lambda _{0}B_{b}}{A_{b}+\lambda _{0}B_{b}%
}\int_{0}^{\infty }d\omega \,\frac{\left( \lambda _{0}^{2}a^{2}+\omega
^{2}\right) \beta _{a}^{2}+4\zeta \beta _{a}-1}{\bar{K}_{0}^{(a)2}(\lambda
_{0}a)}, \label{pafar} \\
p_{\mathrm{(int)}}^{(b)} &\approx &-\frac{\lambda _{0}e^{-2\lambda _{0}b}}{%
S_{D}br_{H}^{D-1}}\frac{A_{b}-\lambda _{0}B_{b}}{A_{b}+\lambda _{0}B_{b}}%
\int_{0}^{\infty }d\omega \frac{\bar{I}_{\omega }^{(a)}(\lambda _{0}a)}{\bar{%
K}_{\omega }^{(a)}(\lambda _{0}a)}, \label{pbfar}
\end{eqnarray}%
with the exponentially small interaction forces for both branes. In
particular, the combination $(A_{b}^{2}-\lambda _{0}^{2}B_{b}^{2})p_{\mathrm{%
(int)}}^{(j)}$ is positive/negative for large/small values of $\beta
_{a}$. For small values of the curvature radius $r_{H}$, in a
way similar to that used for the VEV of the field square, it can
be seen that for a non-minimally coupled scalar field the main
contribution comes from the $l=0$ term and to the leading order we
have
\begin{equation}
p_{\mathrm{(int)}}^{(j)}\approx -\frac{\delta _{B_{a}}\delta _{B_{b}}[\zeta
n(n+1)]^{3/4}\sqrt{ab}}{2\sqrt{\pi }S_{D}r_{H}^{D+1/2}j\sqrt{b-a}}\exp [-2%
\sqrt{\zeta n(n+1)}(b-a)/r_{H}]. \label{pintlargecurv}
\end{equation}%
For a minimally coupled scalar field the dominant contribution comes from
the $l=0$ term with $\lambda _{0}=m$ and the interaction forces per unit
surface behave like $r_{H}^{1-D}$. In the near-horizon limit, $a,b\ll r_{H}$%
, in (\ref{pint2}) we replace the summation over $l$ by the integration in
accordance with (\ref{sumtoint}) and to the leading order the result for the
geometry of two parallel plates uniformly accelerated through the
Fulling-Rindler vacuum is obtained.
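To get a feeling for the strength of the suppression in (\ref{pintlargecurv}), one can evaluate the printed exponential factor for a conformally coupled field. In a $(D+1)$-dimensional bulk the conformal coupling is $\zeta =(D-1)/(4D)$; this value, and its use here, are assumptions about the convention adopted earlier in the paper:

```python
import math

D = 4
n = D - 2
# conformal coupling in a (D+1)-dimensional spacetime (assumed convention)
zeta = (D - 1) / (4.0 * D)

def suppression(separation_over_rH):
    """Interference suppression factor exp[-2 sqrt(zeta n(n+1)) (b-a)/r_H]
    from the high-curvature (small r_H) asymptotics."""
    return math.exp(-2.0 * math.sqrt(zeta * n * (n + 1)) * separation_over_rH)

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, suppression(x))
```

Already at a separation $b-a\sim r_{H}$ the interference part is reduced by roughly an order of magnitude, so in the high-curvature regime the two branes decouple rapidly.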
\section{Conclusion}
\label{sec:Conc}
In this paper, we investigate the polarization of the scalar vacuum induced
by two spherical branes in the $(D+1)$-dimensional bulk $Ri\times S^{D-1}$,
assuming that on the branes the field obeys the Robin boundary conditions.
In the corresponding braneworld scenario based on the orbifolded version of
the model the coefficients in the boundary conditions are expressed in terms
of the brane mass parameters by formula (\ref{ABbraneworld}). The most
important characteristics of the vacuum properties are the expectation
values of quantities bilinear in the field operator such as the field square
and the energy-momentum tensor. As the first step in the investigation of
these VEVs we evaluate the positive frequency Wightman function. The
corresponding mode sum contains the summation over the eigenfrequencies. In
the region between the branes the latter are the zeros of the bilinear
combination of the modified Bessel functions and their derivatives. For the
summation of the series over these zeros we employ a variant of the
generalized Abel-Plana formula. This allows us to present the Wightman
function as the sum of a single brane and second brane induced parts,
formulae (\ref{Wigh3}) and (\ref{Wigh31}).
The corresponding VEVs of the field square and the energy-momentum tensor
are obtained from the Wightman function in the coincidence limit and are
investigated in section \ref{sec:VEVEMT}. These VEVs are given by formulae (%
\ref{phi2sq1}) and (\ref{Tik1}), where $j=a,b$ provide two equivalent
representations. We have considered various limiting cases of the general
formulae. In particular, we have shown that when the left brane tends to the
horizon the interference parts in the VEVs of the field square and the
energy-momentum tensor vanish as $1/\ln ^{2}(r_{H}/a)$. In the limit when
the right brane tends to infinity, $b\rightarrow \infty $, the interference
parts are suppressed by the factor $\exp (-2\lambda _{0}b)$. In the high
curvature regime, corresponding to small values $r_{H}$, the behavior of the
VEVs is essentially different for minimally and non-minimally coupled scalar
fields. For a non-minimally coupled field the VEVs are suppressed by the
factor $\exp [-2\sqrt{\zeta n(n+1)}(\xi -a)/r_{H}]$ for single brane parts
and by $\exp [-2\sqrt{\zeta n(n+1)}(b-a)/r_{H}]$ for the interference parts.
For a minimally coupled field the main contribution comes from the $l=0$
term and the VEVs behave as $r_{H}^{1-D}$. In the limit when both branes
are near the horizon, to the leading order the VEVs are obtained for the
geometry of two parallel plates uniformly accelerated through the
Fulling-Rindler vacuum.
In section \ref{sec:IntForce} we have investigated the vacuum forces acting
on the branes. These forces are presented as the sum of self-action and
interaction parts. Due to the well-known surface divergences, the
self-action part needs additional subtractions. The interaction forces are
finite for all nonzero interbrane distances and are given by formula (\ref%
{pint2}) or equivalently by (\ref{pint3}). In general, these forces are
different for the left and right branes and, depending on the values of
the parameters, they can be either attractive or repulsive. In particular,
at small interbrane distances they are repulsive for the Dirichlet boundary
condition on one brane and non-Dirichlet boundary condition on the other,
and are attractive for all other cases. When the left brane tends to the
horizon the interaction forces acting on the left and right branes behave as
$1/[a^{2}\ln ^{3}(r_{H}/a)]$ and $1/\ln ^{2}(r_{H}/a)$, respectively. In the
limit when the $\xi $-coordinate of the right brane tends to infinity, the
interaction forces for both branes are suppressed by the factor $\exp
(-2\lambda _{0}b)$.
\section*{Acknowledgement}
AAS was supported in part by PVE/CAPES Program and by the Armenian Ministry
of Education and Science Grant No. 0124.
\section{Introduction}
Magnetic fields are present in about 10\,\% of white dwarfs (WDs). The observed field strength ranges from a few kilogauss to almost a gigagauss in different WDs, a span of more than five orders of magnitude. Generally, the observed fields have a globally organised structure, with large areas of emerging flux on one hemisphere, and entering flux on the other. Observations of variations of the mean line-of-sight magnetic field strength \ensuremath{\langle B_z \rangle}, and sometimes of photometric stellar magnitude, reveal rotation periods that are typically between minutes and weeks \citep[e.g. ][]{Brinetal13,Garyetal13,Ferretal15,Landetal17}.
The magnetic fields of WDs currently appear to be of fossil nature. They do not seem to be maintained by any present-day dynamo action, but are probably the remnants of fields generated during earlier stages of stellar evolution. They have survived because Ohmic decay is very slow, by virtue of the size and high electrical conductivity of the host star. The origin of these fields is still very uncertain. They may be generated as a result of dynamo action in the convective core during an earlier stage of stellar evolution, or they may be the result of the intense interaction during a binary system merger \citep{Toutetal08,Ferretal15}. There may be multiple evolutionary paths leading to the presence or absence of a global magnetic field in the final WD state.
It is worth noting that, on the whole, WDs are quite uninformative about their previous evolution. The majority show the spectral lines of only one element, hydrogen, in their optical spectra. A significant number show no spectral lines at all. From spectroscopy, it is often possible to derive accurate values of mass, radius, effective temperature, luminosity, and cooling age. However, very little further information can be extracted from spectra. In principle, magnetic fields can break this degeneracy because only a fraction of WDs carry detectable fields, and these fields vary widely in strength and probably in structure. Once we understand how to read the information contained in them, the fields should thus provide really valuable clues about physical processes that occurred in previous evolution. It is certainly worthwhile to try to understand how these fields develop, evolve, and reflect earlier evolution.
A solid observational description of which individual WDs have fields is fundamental to understanding the physical processes that these fields may reveal. A comprehensive description includes what fraction of WDs host these fields, how the observed fields are related to WD initial and present mass, atmospheric chemistry, and age since contraction to the WD state. Although WD magnetism has been known for almost 50 years, information about all of these characteristics is still quite fragmentary.
In order to characterise the global qualities of magnetic white dwarfs (MWDs), and to identify their relationships to other WDs and to the larger framework of stellar evolution, it is useful to study a volume-limited sample of WDs. Since the great majority of stars evolve to a WD final state, such a sample represents a kind of time capsule, which encodes in a direct way the results of more than $10^{10}$\,yr of star formation and stellar evolution in our region of the Milky Way galaxy.
The largest volume-limited sample of WDs that has been extensively studied is the sample currently lying within 20\,pc of the Sun \citep{Holbetal16,Holletal18}. This volume of space is large enough to contain well over 100 WDs, enough to provide useful statistical information about sub-samples. It is believed that WD membership in this sample is now very close to complete. Accordingly, we have been examining in detail the subset of this sample that are MWDs in order to identify systematic features of this sample.
Magnetic data about the WDs in the 20\,pc volume sample are still extremely incomplete. Some of the WDs in the sample have been closely examined for magnetic fields over the years. A wide variety of fields have been discovered. However, for many of the WDs, the available data a few years ago were only sufficient to identify fields above about one MG, or they did not constrain the possible presence or absence of a magnetic field at all. In response, we have been actively observing WDs in this sample by a variety of methods in order to obtain a more complete sample of the MWDs present in the 20\,pc WD sample, and to obtain the strongest practical upper limits on the remainder.
In the course of our survey we have already identified two new DA MWDs, WD\,2047+372, and WD\,2150+591, that are members of the 20\,pc sample \citep{Landetal16,LandBagn19}, and we have presented evidence that a DZ WD member, WD\,2138-332, may be magnetic \citep{BagnLand18}. In this paper we report the discovery of another DA MWD, WD\,0011-721, which is also resident in this volume, and we describe its characteristics. We then carry out a preliminary assessment of the frequency of occurrence of magnetic fields among the DA WDs, based on examination of the sample of DA MWDs in the 20\,pc volume.
\section{Observations, reduction, and measurements}
The observations presented in this paper were obtained with the FORS2 instrument
\citep{Appetal98} of the ESO VLT. WD\,0011-721\ was observed
with grism 1200R, covering the spectral range 5600--7300\,\AA,
with a 1\arcsec\ slit width, for a spectral resolving power of 2140 (spectral resolution $\Delta \lambda \approx 3.1$\,\AA).
Our target was observed in spectropolarimetric mode
to measure circular polarisation (Stokes $V/I$) as well as the unpolarised
spectrum (intensity, Stokes $I$). Observations were
carried out using the beam-swapping technique \citep[e.g.][]{Bagetal09}
to minimise instrumental effects, and reduced as explained by
\citet{BagnLand18}. Magnetic field values were measured as explained in
the section below.
The log of the observations is given in Table~\ref{Tab_Log}.
\begin{table}
\caption{Parameters of the newly discovered DA magnetic WD}
\label{Tab_obs-stars}
\centering
\begin{tabular}{l r }
\hline\hline
\multicolumn{2}{c}{ WD\,0011-721} \\
\hline
Alternate name & LP\,50-73 \\
$\alpha$ (J2000) & 00 13 49.91 \\
$\delta$ (J2000) & --71 49 05.03 \\
$\pi$ (mas) & 53.23 \\
Johnson $V$ & 15.17 \\
Gaia $G$ & 15.05 \\
Spectrum & DA\,7.8 \\
$T_{\rm eff}$ (K) & 6340 \\
$\log g$ (cgs) & 7.89 \\
age (Gyr) & 1.66 \\
mass ($M_\odot$) & 0.53 \\
\hline
\end{tabular}
\tablefoot{Data from \citet{Subaetal17} and Gaia Collaboration (\citeyear{Gaiaetal18})}
\end{table}
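The parallax listed in the table translates directly into distance ($d\,[{\rm pc}]=1/\pi \,[{\rm arcsec}]$), confirming that the star lies inside the 20\,pc volume discussed above; a one-line check:

```python
parallax_mas = 53.23                 # parallax from the table above (mas)
distance_pc = 1000.0 / parallax_mas  # d [pc] = 1 / parallax [arcsec]
print(round(distance_pc, 2))         # ~18.79 pc: inside the 20 pc volume
```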
\begin{table*}
\caption{Magnetic measurements of newly discovered magnetic WD}
\label{Tab_Log}
\centering
\begin{tabular}{l l l l c c r r@{$\pm$}l r@{$\pm $}l}
\hline\hline
Star & Instrument & Grism & MJD &\multicolumn{2}{c}{Date} & Exp.&
\multicolumn{2}{c}{\ensuremath{\langle B_z \rangle}}&\multicolumn{2}{c}{\ensuremath{\langle \vert B \vert \rangle}}\\
& & & & yyyy-mm-dd& hh:mm&
\multicolumn{1}{c}{(s)}&\multicolumn{2}{c}{(kG)}&\multicolumn{2}{c}{(kG)}\\
\hline
WD\,0011-721 & FORS2 & 1200R & 58439.696 & 2018-11-17 & 04:42 & 2700 &$ 74.9$&1.6& 343 & 15 \\[2mm]
\hline
\end{tabular}
\end{table*}
\section{WD\,0011-721}
\begin{figure}
\includegraphics*[width=9.3cm,trim=0.75cm 15cm 0cm 2cm,clip]{Fig_WD0011_2018.pdf}
\caption{\label{Fig_wda}
Polarised spectrum of WD\,0011-721\ obtained with the FORS2 instrument with grism 1200R. The flux spectrum $I$ is shown in arbitrary units and not corrected for the instrument+telescope
transmission function; the $V/I$ (red) and $N/I$ (blue) spectra are in percentage units, with $N/I$ shifted by -6\% for clarity. }
\end{figure}
\begin{figure}[ht]
\includegraphics*[width=9.3cm,trim=1.75cm 6cm 0cm 2.5cm,clip]{Figure_2a.pdf}
\caption{\label{Fig_wda_detail}
Detail of polarised spectrum with continuum normalisation of WD\,0011-721, obtained with the FORS2 instrument with grism 1200R. Here we show $I$ and $V$ (normalised to 1.0 in the $I$ continuum, with $V$ shifted upwards by +0.65 for clarity) rather than $I$ and $V/I$, in order to make clear the relative amplitude of the Stokes components $I$ and $V$. The null profile is also shown offset by $+0.55$\,\%.}
\end{figure}
WD\,0011-721 is a cool DA with $\ensuremath{T_{\rm eff}} \approx 6340$\,K \citep{Subaetal17} and is known to be a single star \citep[][and references therein]{Tooetal17}. In the optical spectrum, rather sharp lines of the H Balmer series are visible; no metal lines are detected. According to \citet{Subaetal17}, the star has a mass of $M = 0.53\,M_\odot$, slightly below the average of $0.6\,M_\odot$ characterising most WDs. The main characteristics of this star are summarised in Table~\ref{Tab_obs-stars}. A low-resolution optical spectrum from \citet{Giametal12} is available on the Montreal White Dwarf Database\footnote{http://www.montrealwhitedwarfdatabase.org} \citep[MWDD; ][]{Dufoetal17}. To our knowledge, there are no previous high-resolution observations available, nor any kind of polarimetric measurement; available low-resolution spectroscopy provides an upper limit to the field of the order of 1--2\,MG.
Our single FORS2 polarised spectrum, obtained around H$\alpha$\ using grism 1200R, shown in Fig.~\ref{Fig_wda}, reveals both a clear Zeeman split line triplet in the core of H$\alpha$, and very strong circular polarisation of the $\sigma$ components of the line, significant at about the $40 \sigma$ level, indicating a substantial line-of-sight component of the field. The region immediately around H$\alpha$, rectified and normalised with a quadratic fit to the nearby continuum of Fig.~\ref{Fig_wda}, and expressed in units of $I$ and $V$ (rather than $I$ and $V/I$, in order to show clearly the relative amplitude of the two Stokes parameters), is shown in Fig.~\ref{Fig_wda_detail}.
The presence of clear Zeeman splitting in the core of H$\alpha$\ allows us to measure the magnitude of the surface magnetic field {\bf B} averaged over the visible hemisphere, known as the mean field modulus \ensuremath{\langle \vert B \vert \rangle}. When the field structure is such that there is a clear Zeeman triplet displaying three components of comparable strength and shape, a good estimate of this quantity is obtained by measuring the separation between the central, undisplaced $\pi$ component of the Zeeman triplet and the position of one of the shifted $\sigma$ components. As discussed for example by \citet{Landetal15} and \citet[see Eq.~(2)]{Landetal17}, \ensuremath{\langle \vert B \vert \rangle}\ is related to the measured $\pi - \sigma$ separation $\Delta \lambda_{\rm Z}$ by
\begin{equation}
\Delta \lambda_{\rm Z} = 4.67 \times 10^{-13}\, g\, \lambda_0^2\, \ensuremath{\langle \vert B \vert \rangle},
\end{equation}
where $\lambda_0$ is the undisplaced wavelength of the spectral line, $g$ is the Land{\'e} factor, equal to 1.0 for Balmer lines, and wavelengths are measured in \AA. Although the three Zeeman components are slightly asymmetric, the positions of the components of the line core, measured below the level where the three components merge, can be measured by eye or by fitting a Gaussian model to each component separately. The two methods agree well, and the mean $\pi - \sigma$ separation can be measured fairly accurately as $\Delta \lambda_{\rm Z} \approx 6.9 \pm 0.3$\,\AA, leading to $\ensuremath{\langle \vert B \vert \rangle} \approx 343 \pm 15$\,kG. This field is small enough that no significant correction is required for higher-order terms in the line splitting physics. Specifically, the quadratic Zeeman effect \citep[e.g.][]{Pres70} is negligible.
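The conversion from the measured splitting to \ensuremath{\langle \vert B \vert \rangle}\ is a one-line calculation. The following sketch (assuming the standard H$\alpha$ rest wavelength of 6562.8\,\AA) simply inverts Eq.~(1) and reproduces the numbers quoted above.

```python
# Recompute <|B|> from the measured pi-sigma separation, using Eq. (1):
# Delta(lambda) = 4.67e-13 * g * lambda0^2 * <|B|>, with lambda in Angstrom, B in G.
LAMBDA_HALPHA = 6562.8   # undisplaced H-alpha wavelength in Angstrom
G_LANDE = 1.0            # Lande factor for Balmer lines

def field_modulus_kG(delta_lambda, lam0=LAMBDA_HALPHA, g=G_LANDE):
    """Invert Eq. (1); returns the mean field modulus in kG."""
    return delta_lambda / (4.67e-13 * g * lam0**2) / 1e3

b_kG = field_modulus_kG(6.9)    # measured separation, 6.9 A
db_kG = field_modulus_kG(0.3)   # the 0.3 A uncertainty scales linearly
print(f"<|B|> = {b_kG:.0f} +/- {db_kG:.0f} kG")   # -> <|B|> = 343 +/- 15 kG
```

Because the relation is linear, the quoted uncertainty of 15\,kG follows directly from the 0.3\,\AA\ uncertainty on the separation.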
Notice that for this star the value of \ensuremath{\langle \vert B \vert \rangle}\ is quite well defined. The two $\sigma$ components of the Zeeman triplet have nearly the same FWHM as the central $\pi$ component, and in fact all three components have FWHM of slightly more than 3\,\AA. This width is considerably wider than the normal FWHM of the non-magnetic non-LTE core of H$\alpha$\ (about 1\,\AA), which is also the expected intrinsic width of the magnetically split central $\pi$ component. The observed widths of these three components appear to be defined largely by the 3.1\,\AA\ resolution of the spectra. The compactness of the $\sigma$ components indicates that the local value of {\bf B} does not vary greatly over most of the visible hemisphere of the star. This compactness is confirmed by the Stokes $V$ spectrum; the regions of clearly non-zero $V$ are only a little wider than the apparent width of the $\sigma$ components in $I$. We estimate from the width of the $\sigma$ components that the intrinsic broadening is probably no more than about 2\,\AA, corresponding to a range of {\bf B} of only about $\pm 50$\,kG over most of the visible hemisphere.
The mean line-of-sight component of the magnetic field, averaged over the visible hemisphere at the time of observation, \ensuremath{\langle B_z \rangle}, is obtained by computing the mean position of the spectral line as seen in right and left circular polarisation. The expression used for this measurement is
\begin{equation}
\ensuremath{\langle B_z \rangle} \approx -2.14 \times 10^{12} \frac{\int v V(v) {\rm d}v}
{\lambda_0 c \int [I_{\rm cont} - I(v)]{\rm d}v},
\end{equation}
where $v$ is wavelength measured in velocity units (cm\,s$^{-1}$) from $\lambda_0$, $I(v)$ and $V(v)$ are the measured intensity and circular polarisation Stokes components as functions of $v$, and $I_{\rm cont}$ is the continuum level relative to which the shifted right and left polarised line component positions are measured \citep{Math89,Donaetal97}. In most (hotter) weak-field magnetic DA WDs, Zeeman polarisation is detected far more strongly in the deep NLTE line core than in the very broad line wings, and the integral of Eq.\,(2) is measured using only the line core. In the case of WD\,0011-721, however, the H$\alpha$\ line resembles a strong metal line, almost without important extended wings, and the entire line profile can be used to measure accurately the shift of the line centroid between its position as seen in left and in right circular polarisation. In this case we take the value of the rectified continuum, $I_{\rm cont}$, to be 1.0 (as in Fig.~\ref{Fig_wda_detail}).
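In practice Eq.\,(2) is evaluated numerically from the sampled $I$ and $V$ profiles. The sketch below is a direct trapezoidal discretisation; the constant is transcribed from Eq.\,(2), and the input arrays stand for any sampled, rectified line profile (they are not real data).

```python
import numpy as np

C_CGS = 2.99792458e10   # speed of light in cm/s

def _trap(y, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mean_longitudinal_field(v, stokes_i, stokes_v, lam0, i_cont=1.0):
    """Discretised form of Eq. (2): first moment of Stokes V about the
    line centre, normalised by the equivalent width.  v is measured from
    lam0 in velocity units (cm/s); lam0 in Angstrom; returns <Bz> in gauss."""
    numerator = _trap(v * stokes_v, v)
    denominator = lam0 * C_CGS * _trap(i_cont - stokes_i, v)
    return -2.14e12 * numerator / denominator
```

For WD\,0011-721 the integrals run over the full H$\alpha$ profile, with the rectified continuum fixed at $I_{\rm cont} = 1.0$.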
Using Eq.\,(2) and following the discussion of \citet{Landetal15} to estimate its uncertainty, we estimate $\ensuremath{\langle B_z \rangle} \approx 74.9 \pm 1.6$\,kG. This value is a significant fraction of \ensuremath{\langle \vert B \vert \rangle}. For a globally dipolar field structure, the ratio of the maximum absolute value of \ensuremath{\langle B_z \rangle}\ to \ensuremath{\langle \vert B \vert \rangle}\ does not exceed about 0.3 \citep{Hensetal77,Land88}, so we expect that the maximum \ensuremath{\langle B_z \rangle}\ value observed from a line of sight parallel to the magnetic field axis would be roughly 100\,kG. The actual value of \ensuremath{\langle B_z \rangle}\ measured suggests that at the time of observation we were observing WD\,0011-721\ along a line of sight inclined by roughly 40$^\circ$ from the magnetic axis.
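The geometry quoted above can be checked with a back-of-the-envelope estimate. The sketch assumes, purely for illustration, that \ensuremath{\langle B_z \rangle}\ scales as the cosine of the angle between the line of sight and the magnetic axis, which is only a crude approximation for a centred dipole.

```python
import math

b_mod = 343.0        # <|B|> in kG, from the Zeeman splitting
bz = 74.9            # measured <Bz> in kG
dipole_ratio = 0.3   # max |<Bz>| / <|B|> for a dipolar field (see text)

bz_max = dipole_ratio * b_mod                  # ~100 kG, pole-on value
angle = math.degrees(math.acos(bz / bz_max))   # crude cosine-law estimate
print(f"max <Bz> ~ {bz_max:.0f} kG; inclination ~ {angle:.0f} deg")
```

The result, an inclination of roughly 40$^\circ$, matches the estimate quoted in the text.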
With a single observation, we have no information yet about possible variability in the measured \ensuremath{\langle B_z \rangle}\ and \ensuremath{\langle \vert B \vert \rangle}\ magnetic field strengths. If detected, such variations should enable us to determine the WD rotation period, and to obtain a simple model of the surface magnetic field, as was done for WD2047+372 and WD2359--434 \citep{Landetal17}.
\section{The frequency of occurrence of magnetic fields in DA WDs}
\begin{table*}
\caption{Known DA MWDs within the 20\,pc volume around the Sun}
\label{Tab_da_mwds}
\centering
\tabcolsep=0.14cm
\begin{tabular}{l l l c r r r r r r c l}
\hline\hline
WD name & Other name & Class & Distance & \ensuremath{T_{\rm eff}} & $\log g$ & Mass & Radius & Age & \ensuremath{\langle \vert B \vert \rangle} &\multicolumn{1}{c}{Mag.\ flux} & References \\
& & & (pc) & (K) & (cgs) & ($M_\odot$) & ($10^8$\,cm) & (Gyr) & (MG) &
\multicolumn{1}{c}{$\times 10^{-18}$\,MG\,cm$^2$}& \\
\hline
0009+501 & GJ 1004 & DAH & 10.87 & 6502 & 8.23 & 0.73 & 7.58 & 3.0 & 0.25 & 0.45 & 1, 2, a \\
0011--721 & LP 50-73 & DAH & 18.79 & 6340 & 7.89 & 0.53 & 9.47 & 1.7 & 0.34 & 0.96 & 3, b \\
0011--134 & GJ 3016 & DAH & 18.58 & 5992 & 8.21 & 0.72 & 7.68 & 3.7 & 9.7 & 18.0 & 4, 5, a \\
0121--429 & LHS 1243 & DAH & 18.48 & 6299 & 7.66 & 0.41 & 10.9 & 1.3 & 6.3 & 23.5 & 6, a \\
0233--242 & LHS 1421 & DAH & 18.50 & 5270 & 7.77 & 0.45 & 10.2 & 2.4 & 3.8 & 12.4 & 7, b \\
0322--019 & GJ 3223 & DAZH & 16.91 & 5300 & 8.12 & 0.66 & 8.17 & 5.5 & 0.12 & 0.25 & 8, b \\
0503--174 & LHS 1734 & DAH & 19.35 & 5316 & 7.62 & 0.38 & 11.0 & 1.9 & 4.0 & 15.2 & 4, a \\
0553+053 & LHS 212 & DAH & 7.99 & 5785 & 8.22 & 0.72 & 7.69 & 4.3 & 20 & 37.2 & 9, 10, a \\
1309+853 & GJ 3768 & DAH & 16.47 & 5440 & 8.20 & 0.71 & 7.75 & 5.5 & 4.9 & 9.24 & 11, 12, a \\
1350--090 & PG 1350-090& DAH & 19.71 & 9580 & 8.13 & 0.68 & 8.18 & 0.8 & 0.45 & 0.95 & 1, a \\
1900+705 & LAWD 73 & DAP & 12.88 & 11835 & 8.53 & 0.93 & 6.02 & 0.9 & 320 & 364 & 13, 14, 15, 16, a \\
1953--011 & LAWD 79 & DAH & 11.56 & 7868 & 8.23 & 0.73 & 7.66 & 1.6 & 0.50 & 0.92 & 17, 18, 19, a \\
2047+372 & GJ 4165 & DAH & 17.57 & 14600 & 8.33 & 0.82 & 7.11 & 0.4 & 0.06 & 0.095 & 20, 21, b \\
2105--820 & LAWD 83 & DAZP & 16.17 & 9820 & 8.29 & 0.78 & 7.27 & 1.0 & 0.04 & 0.066 & 22, 23, b \\
2150+591 & & DAH & 8.47 & 5095 & 7.98 & & 9.1 & 5.0 & 0.80 & 2.08 & 24, c \\
2359--434 & LAWD 96 & DAP & 8.34 & 8390 & 8.37 & 0.83 & 6.89 & 1.8 & 0.10 & 0.15 & 25, 21, b \\
\hline
\end{tabular}
\tablefoot{ Sources for magnetic data: (1) \citet{SchmSmit94}, (2) \citet{Valyetal05}, (3) this work,
(4) \citet{Bergetal92}, (5) \citet{Putn97}, (6) \citet{Subaetal07},
(7) \citet{Vennetal18}, (8) \citet{Farietal11}, (9) \citet{Liebetal75},
(10) \citet{PutnJord95}, (11) \citet{Putn95}, (12) \citet{Putn97},
(13) \citet{Kempetal70}, (14) \citet{LandAnge75}, (15) \citet{Angeetal85},
(16) \citet{BagnLand19}, (17) \citet{Koesetal98}, (18) \citet{Maxtetal00},
(19) \citet{Valyetal08}, (20) \citet{Landetal16}, (21) \citet{Landetal17},
(22) \citet{Koesetal98}, (23) \citet{Landetal12}, (24) \citet{LandBagn19},
(25) \citet{Aznaetal04}. \\
Sources for \ensuremath{T_{\rm eff}}, $\log g$, mass, age of WDs: (a) \citet{Giametal12}, (b) \citet{Subaetal17}, (c) \citet{Holletal18} }
\end{table*}
This star lies within the 20\,pc volume around the Sun. This is an extremely important set of WDs; it is a sample containing almost all the stellar remnants from completed stellar evolution in this volume, and effectively records the results of some 10\,Gyr of completed stellar evolution by all but the most massive stars that have existed in the solar neighbourhood. Because of the importance of this sample, it is of interest to explore in a preliminary way the information that the 20\,pc sample contains about its DA stars, in spite of the fact that, at present, the WD sample is still not completely catalogued for spectral class, and the magnetic surveys of the sample are still significantly incomplete \citep{Holletal18}. The statistical properties of magnetic stars in the 20\,pc volume will be examined in greater detail in a forthcoming paper; here we simply sketch some early results.
We take as the fundamental 20\,pc sample of WDs the list provided by \citet{Holletal18}, which is based on careful examination of the recent Gaia Data Release 2 (DR2, Gaia Collaboration \citeyear{Gaiaetal18}). This list (including several WDs without complete Gaia DR2 records) is expected to be at least 95\,\% complete. The list of WDs within the 20\,pc volume presently includes 145 WDs. The new list excludes several stars previously regarded as members of this sample \citep[e.g.][]{Holbetal16}, and now includes a number of WDs not previously thought to be within the sample, as well as (currently) ten stars not previously classified by spectroscopy as WDs (some of which may turn out to be DAs). Consequently, although membership in the latest 20\,pc sample is more firmly established than in earlier versions, the sample is not yet completely definitive, nor fully characterised.
Recent papers have suggested that the incidence of magnetic fields among WDs may depend strongly on chemical subtype. In particular, it has been suggested that relative to samples including all types of WDs, fields are more frequently found in hot DQ stars \citep{Dufoetal13}, in DAZ stars \citep{KawkVenn14,Kawketal19}, and in DZ stars \citep{Holletal15}. Thus it is important to look at statistics of occurrences of fields in sub-samples of volume-limited samples when this is practical.
The sub-sample of the 20\,pc WD sample known to have hydrogen-rich outer layers consists of 80 WDs that have been spectroscopically classified as DA, DAZ, DAH or DAP stars. This sub-sample of the full 20\,pc WD sample is particularly interesting. It still contains more than half of the total 20\,pc sample, but because all these stars have evolved into WDs with H-rich outer layers, this sub-sample is probably significantly more homogeneous in origin and evolution than the full 20\,pc sample. Furthermore, all the identified members of the 20\,pc DA sample have reliable upper limits to any magnetic fields that might be present, because all have been observed spectroscopically for classification. Such classification spectra usually have sufficient resolution and S/N to detect fields with field modulus \ensuremath{\langle \vert B \vert \rangle}\ in excess of 1--2\,MG, and in fact fields were discovered by spectroscopy alone, without spectropolarimetry, in four of the 20\,pc DA stars.
As will be discussed in more detail in forthcoming publications, most of the stars in the 20\,pc DA sample have been observed spectropolarimetrically. Many of the spectropolarimetric measurements have been published \citep[e.g.][]{Liebetal75,SchmSmit94,SchmSmit95,Putn95,Putn97,Aznaetal04,Jordetal07,Landetal12,Landetal15,Landetal16,BagnLand18,LandBagn19}, but a substantial number of our measurements are still being prepared for publication. Only about a dozen 20\,pc DA WDs remain unobserved; these are generally faint DA stars with \ensuremath{T_{\rm eff}}\ around 5000\,K and very weak H$\alpha$, or stars in close binary systems. The spectropolarimetric observations generally have sufficient sensitivity to detect longitudinal fields \ensuremath{\langle B_z \rangle}\ of a few kG, or at worst tens of kG. This sample is thus not completely observed at the presently attainable level of precision, but is sufficiently thoroughly observed to deserve a preliminary assessment.
The sample of known DA stars within 20\,pc includes 16 well-established DA magnetic stars, with fields (as characterised by measured or estimated values of \ensuremath{\langle \vert B \vert \rangle}\ rather than by inferred dipole field strength) ranging from about 40\,kG to over 300\,MG. These 16 stars are listed in Table~\ref{Tab_da_mwds}. The spectral classes listed in this Table have been given the final letter H if the field is clearly detected in intensity spectra of suitable resolving power, or P if the field is detected only through polarimetry. Distances are taken from the Gaia DR2 (Gaia Collaboration \citeyear{Gaiaetal18}). Values of \ensuremath{T_{\rm eff}}, $\log g$, and mass are adopted from \citet{Giametal12}, \citet{Subaetal17}, or \citet{Holletal18}, as indicated in the footnotes to the Table. The radius of each MWD is computed from the luminosities and \ensuremath{T_{\rm eff}}\ values tabulated by the two main sources of other astrophysical data, except for WD\,2150+591, for which $R$ is deduced from the values of mass and $\log g$ in \citet{Holletal18}.
With 16 MWDs out of 80 WDs, the magnetic fraction of DAs is about $(20.0 \pm 5.0)$\,\%. This is well above the overall frequencies generally reported \citep{Liebetal03,Kawketal07,Holbetal16}, but is similar to the frequency suggested by \citet{Jordetal07} for fields in DA stars, and to the global frequency reported (with large uncertainty) for the 13\,pc volume by \citet{KawkVenn14}.
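The quoted uncertainty on this fraction is consistent with simple $\sqrt{N}$ counting statistics on the number of detections:

```python
import math

n_da, n_magnetic = 80, 16
fraction = 100.0 * n_magnetic / n_da          # per cent
error = 100.0 * math.sqrt(n_magnetic) / n_da  # Poisson counting error on 16 detections
print(f"magnetic fraction = ({fraction:.1f} +/- {error:.1f})%")  # (20.0 +/- 5.0)%
```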
Of the 16 MWDs in the 20\,pc DA sample, nine have $\ensuremath{\langle \vert B \vert \rangle} < 1$\,MG, six lie in the range of $1 < \ensuremath{\langle \vert B \vert \rangle} < 100$\,MG, and one has a still larger field. It is thus clear that the large fraction of MWDs found in this sample is a result of the large number of weak fields found with sensitive searches in recent years. A sufficiently large number of weak-field MWDs occur to {\em substantially} increase the statistics of occurrence compared to what would be found by a survey sensitive only to MG fields.
Within the full DA sample, half have atmospheres with $\ensuremath{T_{\rm eff}} \leq 7000$\,K and half are hotter than this. This median temperature corresponds to a cooling age of about 1.6\,Gyr \citep{Bergetal95}. The oldest DA WDs in the 20\,pc volume are roughly 5\,Gyr old. (We note that older WDs with H-rich atmospheres are almost certainly present in the 20\,pc sample; they have simply cooled to the point of no longer showing atmospheric Balmer lines, and are consequently classified as DC stars. These stars are currently not included in our DA sample.) Of the DA MWDs, ten have $\ensuremath{T_{\rm eff}} \leq 7000$\,K, while six are hotter than this. The median temperature of the DA MWD sample is about 6300\,K. The sample of DA MWDs is thus mildly cooler and older than the general DA sample, but not dramatically so.
This high fraction of MWDs among the DA WDs provides a simple test of the hypothesis \citep{Angeetal81} that the MWDs are descendants, through magnetic flux conservation, of the magnetic Ap and Bp stars of the main sequence. Because of their relatively short lives, a substantial fraction of the current DA population is surely descended from A and B main sequence stars of more than about $1.5 M_\odot$ \citep{Holletal15}. These stars currently show a roughly constant magnetic field detection fraction throughout the middle and upper main sequence of about $7 \pm 1$\,\% \citep[e.g.][]{GrunNein15}. If all DA WDs descended from these middle and upper main sequence stars, we would expect to find about 7\,\% of DA WDs to be magnetic. In fact, some of the present DA WDs are certainly descended from less massive stars, so this would be an upper limit to the frequency of occurrence of MWDs. From this argument, and from the observed DA MWD fraction of about 20\,\%, it is clear that even if magnetic main sequence stars do generally become MWDs, there must be other major channels, or else the frequency of magnetic main sequence stars must have been at least three or four times higher in the past than it is today.
\begin{figure}
\includegraphics*[width=9.3cm,trim=0cm 0cm 0cm 2cm,clip]{logB-vs-age.pdf}
\caption{\label{Fig_bs_vs_age}
Mean field modulus of individual DA MWDs within the 20\,pc volume-limited sample, as a function of WD cooling age. }
\end{figure}
\begin{figure}
\includegraphics*[width=9.3cm,trim=0cm 0cm 0cm 2cm,clip]{logPhi-vs-age.pdf}
\caption{\label{Fig_phi_vs_age}
Estimated magnetic flux of individual DA MWDs within the 20\,pc volume-limited sample, as a function of cooling age. The magnetic flux is in units of $10^{-18}$\,MG\,cm$^2$ (as tabulated in Table~\ref{Tab_da_mwds}), and the age is in Gyr. }
\end{figure}
The volume-limited 20\,pc DA sample is large enough to be examined for evolutionary trends. In particular, we may look at this sample to see if clear evidence exists of field decay, which would show up as systematically lower field strengths and magnetic fluxes in older MWDs compared to younger stars. This kind of study, looking at the statistical variations of field strength and magnetic flux as a function of stellar main sequence age, has clearly revealed that both field strength \ensuremath{\langle \vert B \vert \rangle}\ and total magnetic flux in upper main sequence Bp stars decline with time spent on the main sequence \citep{Landetal08,Sikoetal19}.
In order to study how magnetic field strength and flux depend on cooling age, we need reasonably accurate cooling times. The cooling ages reported in Table~\ref{Tab_da_mwds} are taken from the same sources as the basic stellar parameters. These cooling ages have been computed for non-magnetic WD models. However, for most of the WDs in our sample, the field is able to suppress or at least greatly reduce the heat flux carried by convection. The inhibition of convection certainly alters the structure of the outer layers of the WDs, and may well affect the spectral diagnostics used to determine basic stellar parameters in ways that have not yet been adequately explored.
It has been suggested that such major structural change could have very important effects on computed cooling times \citep{Valyetal14}, making the values computed for non-magnetic WDs seriously incorrect for our magnetic DA sample. However, this issue has been explored in depth by \citet{Tremetal15}, who have shown convincingly that magnetic suppression of convection has no effect whatever on cooling rates for values of \ensuremath{T_{\rm eff}}\ larger than about 6000\,K. Furthermore, the changes to the cooling times for lower values of \ensuremath{T_{\rm eff}}\ \citep[see Figure~6 in][]{Tremetal15}, particularly for values of the WD mass around $0.6\,M_\odot$ which apply to the coolest MWDs in our sample, are completely inconsequential for \ensuremath{T_{\rm eff}}\ values above 5000\,K. Thus we may safely use the cooling times as computed by \citet{Giametal12} and \citet{Subaetal17}.
Plots showing \ensuremath{\langle \vert B \vert \rangle}\ and the magnetic fluxes in this sample (estimated as $\Phi \sim \pi R^2 \ensuremath{\langle \vert B \vert \rangle}$ and tabulated as `Mag.\ flux' in Table~\ref{Tab_da_mwds} in units of $10^{-18}$\,MG\,cm$^2$) as functions of the cooling age of each MWD are shown in Figs.~\ref{Fig_bs_vs_age} and \ref{Fig_phi_vs_age}. There are no clear trends with age; there is no obvious evidence in this sample either for Ohmic field decay with age or for new field generation during cooling. This result is in remarkable contrast to the situation of upper main sequence magnetic stars, in which the flux declines considerably on a time scale of the order of 0.1\,Gyr, while for MWDs there is no evidence of decay on a time scale 50 times longer.
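The tabulated fluxes follow directly from the radii and field moduli; the sketch below spot-checks three rows of the table using $\Phi = \pi R^2 \ensuremath{\langle \vert B \vert \rangle}$ (the values are transcribed from the table).

```python
import math

def flux_tab(radius_1e8_cm, b_mod_MG):
    """Phi = pi R^2 <|B|>, expressed as Phi x 1e-18, the tabulated quantity."""
    r_cm = radius_1e8_cm * 1e8
    return math.pi * r_cm**2 * b_mod_MG / 1e18

# (R/1e8 cm, <|B|>/MG, tabulated value) for three stars from the table
rows = [(7.58, 0.25, 0.45),     # 0009+501
        (9.47, 0.34, 0.96),     # 0011-721
        (6.02, 320.0, 364.0)]   # 1900+705
for r, b, tab in rows:
    print(f"{flux_tab(r, b):7.2f}   (table: {tab})")
```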
Another interesting feature of these two figures is that the creation rate of new MWDs per Gyr does not appear to have increased markedly over the 6\,Gyr that they cover.
One may notice that the only MWD with a very large field ($\ensuremath{\langle \vert B \vert \rangle} > 10^2$\,MG) occurs at a relatively young age, while there are no corresponding very high field old objects. This is a small-numbers effect, as there are a number of very high-field objects with low values of \ensuremath{T_{\rm eff}}\ and corresponding advanced cooling ages. For example, both G240-72 and G227-35 have fields above 100\,MG, \ensuremath{T_{\rm eff}}\ below 6500\,K, ages of at least about 2\,Gyr, and are within the 20\,pc volume; they are not included in the DA sample because they are not thought to have H-rich atmospheres.
\section{Conclusions}
In the course of our ongoing spectropolarimetric survey of mostly faint, cool but nearby WDs for weak magnetic fields, we have discovered a magnetic field in WD\,0011-721. This star is of spectral type DA, meaning that the outer layers are H-rich, and the visible spectrum shows only spectral lines from the hydrogen Balmer series. Because of the relatively low effective temperature of this WD, about 6340\,K, these lines are rather sharp. Our single FORS2 polarised spectrum, taken with a resolving power of a little more than 2000, shows that H$\alpha$\ exhibits clear Zeeman splitting, indicating a mean surface field of about 343\,kG at the time of observation. The H$\alpha$\ line also shows strong circular polarisation in the two $\sigma$ components, revealing a longitudinal field \ensuremath{\langle B_z \rangle}\ of about 75\,kG. This field is just slightly too small to have been noticed in classification spectra.
This star is a member of the very important volume-limited sample of WD stars within 20\,pc of the Sun. As noted above, this sample is effectively a time capsule recording the results of about 10\,Gyr of stellar evolution in the solar neighbourhood. It is by far the most intensely studied volume-limited sample of WDs in searches for magnetic WDs, and contains a significant fraction of all known MWDs with sub-MG magnetic fields.
The 20\,pc volume contains almost 150 WDs, of which more than half (about 80) are known to have H-rich outer layers. We suspect that the DAs in this volume have probably followed a more homogeneous evolutionary path than the full sample of 20\,pc WDs. In addition, because all the DAs show at least weak Balmer line spectra, we can detect MG fields in them by conventional low-resolution classification spectroscopy, so that all the classified DAs in the 20\,pc volume are either known to have MG fields, or have field upper limits of the order of 1--2\,MG. For most of the DAs, spectropolarimetric measurements are also available, which either detect longitudinal fields \ensuremath{\langle B_z \rangle}\ weaker than 1\,MG, or set upper limits of some kG on such fields. Consequently, although the 20\,pc survey is still incomplete, it is worthwhile to carry out a preliminary examination of the statistics of field occurrence among the DA WDs of the 20\,pc sample.
It is found that 16 WDs, representing about 20\,\% of the DA WDs in the 20\,pc volume, possess detectable magnetic fields of a few kG or more. More than half of these stars have fields below 1\,MG, and were only found to be magnetic as a result of spectropolarimetric observations. It is the addition of these weak-field MWDs to the sample that raises the occurrence frequency of magnetic fields in DA WDs from around 10\,\% to around 20\,\%. The MWDs in the DA sub-sample of the 20\,pc sample are found to be slightly cooler and older than the typical temperatures and ages of the full DA sample.
Unlike the magnetic Bp stars of the upper main sequence, in which magnetic flux declines markedly during the $\sim 10^8$\,yr of main sequence evolution, these DA MWDs show no clear evidence either of field or flux decay during cooling, or of new field generation, even over a time interval 50 times longer.
\begin{acknowledgements}
We thank the referee, S. O. Kepler, for several helpful suggestions.
Based on observations made with ESO Telescopes at the
La Silla Paranal Observatory, under programme ID 0102.D-0045.
JDL acknowledges the financial support of the Natural Sciences and
Engineering Research Council of Canada (NSERC), funding reference
number 6377-2016.
\end{acknowledgements}
\section*{Acknowledgments}
We thank David Liu for many interesting discussions, and for collaborating with us on some of the open questions posed in this paper. We thank Eric Allender and Andy Drucker for asking whether ``Extended Frege-provable PIT'' implied that $\I$ was equivalent to Extended Frege, which led to the results of Section~\ref{sec:EF}. We thank Pascal Koiran for providing the second half of the proof of Proposition~\ref{prop:coMAGRH}. We thank Iddo Tzameret for useful discussions that led to Proposition~\ref{prop:pitassi}. Finally, in addition to several useful discussions, we also thank Eric Allender for suggesting the name ``Ideal Proof System''---all of our other potential names didn't even hold a candle to this one. We gratefully acknowledge financial support from NSERC; in particular, J. A. G. was supported by A. Borodin's NSERC Grant \# 482671.
\section{Additional Background} \label{app:background}
\subsection{Algebraic Complexity} \label{app:background:complexity}
A polynomial $f(\vec{x})$ is a \definedWord{projection} of a polynomial $g(\vec{y})$ if $f(\vec{x}) = g(L(\vec{x}))$ identically as polynomials in $\vec{x}$, for some map $L$ that assigns to each $y_i$ either a variable or a constant. A family of polynomials $(f_n)$ is a polynomial projection or \definedWord{p-projection} of another family $(g_n)$, denoted $(f_n) \leq_{p} (g_n)$, if there is a function $t(n) = n^{\Theta(1)}$ such that $f_n$ is a projection of $g_{t(n)}$ for all (sufficiently large) $n$. The primary value of projections is that they are very simple, and thus preserve bounds on nearly all natural complexity measures. Valiant \cite{valiant, valiantProjections} was the first to point out not only their value but their ubiquity in computational complexity---nearly all problems that are known to be complete for some natural class, even in the Boolean setting, are complete under p-projections. We say that two families $f=(f_n)$ and $g=(g_n)$ are of the same p-degree if each is a p-projection of the other, which we denote $f \equiv_{p} g$.
By analogy with Turing reductions or circuit reductions, \Burgisser \cite{burgisserbook} introduced the more general, but somewhat messier, notion of c-reduction (``c'' for ``computation''). An oracle computation of $f$ from $g$ is an algebraic circuit $C$ with ``oracle gates'' such that when $g$ is plugged in for each oracle gate, the resulting circuit computes $f$. We say that a family $(f_n)$ is a c-reduction of $(g_n)$ if there is a function $t(n) = n^{\Theta(1)}$ such that there is a polynomial-size oracle reduction from $f_n$ to $g_{t(n)}$ for all sufficiently large $n$. We define c-degrees by analogy with p-degrees, and denote them by $\equiv_{c}$.
Despite its central role in computation, and the fact that $\cc{VP} = \cc{VNC}^2$ \cite{VSBR}, the determinant is not known to be $\cc{VP}$-complete. The determinant is $\cc{VQP}$-complete ($\cc{VQP}$ is defined just like $\cc{VP}$ but with a quasi-polynomial bound on the size and degree of the circuits) under qp-projections (like p-projections, but with a quasi-polynomial bound). Weakly skew circuits help clarify the complexity of the determinant (see Malod and Portier \cite{malodPortier} for some history of weakly skew circuits and for highlights of their utility). A circuit of fan-in at most $2$ is \definedWord{weakly skew} if for every multiplication gate $g$ receiving inputs from gates $g_1$ and $g_2$, at least one of the subcircuits $C_i$ rooted at $g_i$ is only connected to the rest of the circuit through $g$. In other words, for every multiplication gate, one of its two incoming factors was computed entirely and solely for the purpose of being used in that multiplication gate. Toda \cite{todaDet2} (see also Malod and Portier \cite{malodPortier}) showed that a polynomial family $f=(f_n)$ is a p-projection of the determinant family $(\det_n)$ if and only if $f$ is computed by polynomial-size weakly skew circuits.
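The weakly-skew condition is a purely structural property of the circuit DAG and is easy to test mechanically. The sketch below uses an ad-hoc circuit representation of our own (a dict mapping gate ids to a pair of operation and children); it illustrates the definition only, and is not code from any cited source.

```python
def _subcircuit(gates, root):
    """All gates reachable from (and including) root."""
    seen, stack = set(), [root]
    while stack:
        g = stack.pop()
        if g not in seen:
            seen.add(g)
            stack.extend(gates[g][1])
    return seen

def is_weakly_skew(gates):
    """gates: dict id -> (op, [children]), op in {'in', '+', '*'}.
    For every product gate g, at least one factor's subcircuit must be
    connected to the rest of the circuit only through g itself."""
    parents = {g: set() for g in gates}
    for g, (_, children) in gates.items():
        for c in children:
            parents[c].add(g)
    for g, (op, children) in gates.items():
        if op != '*':
            continue
        if not any(
                all(parents[h] <= _subcircuit(gates, gi) | {g}
                    for h in _subcircuit(gates, gi))
                for gi in children):
            return False
    return True

# x * (x + y) with a single shared leaf x is weakly skew: the subcircuit
# for x + y touches the rest of the circuit only through the product gate.
skew = {'x': ('in', []), 'y': ('in', []),
        's': ('+', ['x', 'y']), 'm': ('*', ['x', 's'])}
print(is_weakly_skew(skew))   # True
```

Note that sharing an input node between the two factors, as in $(x+y)\cdot(x+z)$ built over a single $x$ gate, violates the property at the DAG level; duplicating the leaf restores it.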
\subsection{Proof Complexity} \label{app:background:proof}
Here we give formal definitions of proof systems and probabilistic proof systems
for $\cc{coNP}$ languages, and discuss several important and standard proof systems
for TAUT.
\begin{definition}
Let $L \subseteq \{0,1\}^*$ be a $\cc{coNP}$ language. A \definedWord{proof system $P$ for $L$} is a polynomial-time
function of two inputs $x,y \in \{0,1\}^*$ with the following properties:
\begin{enumerate}
\item (Perfect Soundness) If $x$ is not in $L$, then
for every $y$, $P(x,y)=0$.
\item (Completeness) If $x$ is in $L$, then there exists a $y$
such that $P(x,y)=1$.
\end{enumerate}
$P$ is \definedWord{polynomially bounded} if for every $x \in L$, there exists a $y$
such that $|y|\leq \poly(|x|)$ and $P(x,y)=1$.
\end{definition}
As this is just the definition of an $\cc{NP}$ procedure for $L$,
it follows that for any $\cc{coNP}$-complete language $L$, $L$ has
a polynomially bounded proof system if and only if $\cc{coNP} \subseteq \cc{NP}$.
Cook and Reckhow \cite{cookReckhow} formalized proof systems for the language TAUT (all
Boolean tautologies) in a slightly different way, although their definition is
essentially equivalent to the one above. We prefer the above definition as it
is consistent with definitions of interactive proofs.
\begin{definition}
A \definedWord{Cook--Reckhow proof system} is a polynomial-time function
$P'$ of just one input $y$, and whose range is the set of all yes instances of $L$.
If $x \in L$, then any $y$ such that
$P'(y)=x$ is called a $P'$ proof of $x$. $P'$ must satisfy the
following properties:
\begin{enumerate}
\item (Soundness) For every $x,y \in \{0,1\}^*$, if $P'(y)=x$, then $x \in L$.
\item (Completeness) For every $x \in L$, there exists an $y$ such that $P'(y)=x$.
\end{enumerate}
$P'$ is \definedWord{polynomially bounded} if for every $x \in L$, there exists a $y$
such that $|y| \leq \poly(|x|)$ and $P'(y)=x$.
\end{definition}
Intuitively, we think of $P'$ as a procedure for verifying that $y$ is a proof that some $x$ is in $L$,
and if so, it outputs $x$.
(For all strings $x$ that do not encode valid proofs, $P'(x)$ may just output some canonical $x_0 \in L$.)
It is a simple exercise to see that for every language $L$, any propositional proof system $P$
according to our definition can be converted to a Cook-R-eckow proof system $P'$, and vice versa,
and furthermore the runtime properties of $P$ and $P'$ will be the same.
In the forward direction, say $P$ is a proof system for $L$ according to our definition.
Define Merlin's string $y$ as encoding a pair $(x,y')$, and on input $y=(x,y')$, $P'$
runs $P$ on the pair $(x,y')$. If $P$ accepts, then $P'(y)$ outputs $x$,
and if $P$ rejects, then $P'(y)$ outputs (the encoding of) a canonical $x^0$ in $L$.
Conversely, say that $P'$ is a Cook--Reckhow proof system for $L$.
$P(x,y)$ runs $P'$ on $y$ and accepts if and only if $P'(y)=x$.
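Both conversions are entirely mechanical. The following toy sketch mirrors the construction just described; the names, the canonical instance, and the trivial example language (even-length strings, with proofs ignored) are our own illustrative choices.

```python
X0 = "00"   # a fixed canonical yes-instance of L (placeholder)

def cook_reckhow_from(P):
    """Given a verifier P(x, y) -> bool, build P'(y') for y' encoding (x, y)."""
    def P_prime(y_prime):
        x, y = y_prime
        return x if P(x, y) else X0   # valid proof: output x; else canonical X0
    return P_prime

def proof_system_from(P_prime):
    """Given a Cook--Reckhow verifier P'(y) -> x, build P(x, y)."""
    return lambda x, y: P_prime(y) == x

# toy example: L = strings of even length; the 'proof' y is ignored
P = lambda x, y: len(x) % 2 == 0
P_prime = cook_reckhow_from(P)
print(P_prime(("ab", "proof")))    # ab  -- a valid proof of 'ab'
print(P_prime(("abc", "proof")))   # 00  -- invalid, canonical output
```

The round trip preserves the runtime properties of the original verifier, as claimed in the text.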
\begin{definition}
Let $P_1$ and $P_2$ be two proof systems for a language $L$ in $\cc{coNP}$.
$P_1$ p-simulates $P_2$ if for every $x \in L$ and for every $y$ such that
$P_2(x,y)=1$, there exists $y'$ such that $|y'|\leq \poly(|y|)$, and $P_1(x,y')=1$.
\end{definition}
Informally, $P_1$ p-simulates $P_2$ if proofs in $P_1$ are no longer than proofs in $P_2$ (up to
polynomial factors).
\begin{definition}
Let $P_1$ and $P_2$ be two proof systems for a language $L$ in $\cc{coNP}$.
$P_1$ and $P_2$ are \definedWord{p-equivalent} if $P_1$ p-simulates $P_2$ and $P_2$ p-simulates $P_1$.
\end{definition}
\noindent{\bf Standard Propositional Proof Systems}
For TAUT (or UNSAT), there are a variety of standard and
well-studied proof systems, the most important ones including Extended Frege (EF), Frege, Bounded-depth Frege,
and Resolution. A Frege rule is an inference rule of the form:
$B_1, \ldots, B_n \implies B$, where $B_1,\ldots, B_n,B$ are propositional formulas.
If $n=0$ then the rule is an axiom.
For example, $A \lor \neg A$ is a typical Frege axiom, and
$A, \neg A \lor B \implies B$ is a typical Frege rule.
A Frege system is specified by a finite set $R$ of rules.
Given a collection $R$ of rules, a derivation of a formula $f$ is a sequence
of formulas $f_1,\ldots,f_m$ such that each $f_i$ is either an instance
of an axiom scheme, or follows from previous formulas by one of the rules in $R$,
and such that the final formula $f_m$ is $f$.
In order for a Frege system to be a proof system in the Cook-Reckhow sense, its
corresponding set of rules must be sound and complete.
Work by Cook and Reckhow in the 1970s (REF) showed that Frege systems are
very robust in the sense that all Frege systems are
polynomially-equivalent.
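As an aside, soundness of any fixed finite rule set is mechanically checkable: a rule $B_1, \ldots, B_n \implies B$ is sound iff every truth assignment satisfying all of the $B_i$ also satisfies $B$. A minimal Python sketch of this brute-force check (our own illustration, with formulas encoded as Boolean functions):

```python
from itertools import product

def sound(premises, conclusion, num_vars):
    """A rule is sound iff every assignment satisfying all premises
    also satisfies the conclusion (checked by brute force)."""
    for assignment in product([False, True], repeat=num_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False
    return True

# Modus ponens: from A and (not A or B), infer B.
premises = [lambda a, b: a, lambda a, b: (not a) or b]
conclusion = lambda a, b: b
print(sound(premises, conclusion, 2))  # True
```

An unsound rule, such as inferring $B$ from $A$ alone, fails the same check.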
Bounded-depth Frege proofs ($\cc{AC}^0$-Frege) are proofs that are Frege
proofs but with the additional restriction that each formula in the
proof has bounded depth. (Because our connectives are binary AND, OR, and negation,
we define depth by assuming all negations have been pushed to the leaves and
counting the maximum number of alternations of AND/OR connectives in the formula.)
Polynomial-size $\cc{AC}^0$-Frege proofs correspond to the complexity class
$\cc{AC}^0$ because such proofs allow a polynomial number of lines, each of which
must be in $\cc{AC}^0$.
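To make the depth measure concrete, the following Python sketch (our own illustration; the nested-tuple encoding of formulas is an assumption) counts maximal blocks of like connectives along root-to-leaf paths of a negation-normal-form formula, one standard way to formalize the alternation count:

```python
def alternation_depth(f, parent=None):
    """Alternation depth of a negation-normal-form formula encoded as
    nested tuples, e.g. ('or', ('and', 'x', 'y'), ('not', 'x')).
    Literals (variables and negated variables) contribute depth 0."""
    if isinstance(f, str) or f[0] == 'not':
        return 0
    op, *args = f
    deepest = max(alternation_depth(a, op) for a in args)
    return deepest + (1 if op != parent else 0)

f = ('or', ('and', ('or', 'x', 'y'), 'z'), 'w')
print(alternation_depth(f))  # 3: an OR-AND-OR alternation
```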
Extended Frege systems generalize Frege systems by allowing, in addition to
all of the Frege rules, a new axiom of the form $y \leftrightarrow A$,
where $A$ is a formula, and $y$ is a new variable not occurring in $A$.
Whereas polynomial-size Frege proofs allow a polynomial number of
lines, each of which must be a polynomial-sized formula,
using the new axiom, polynomial-size EF proofs allow a polynomial number
of lines, each of which can be a polynomial-sized circuit.
See \cite{krajicekBook} for precise definitions of Frege, $\cc{AC}^0$-Frege, and EF proof systems.
\medskip
\noindent {\bf Probabilistic Proof Systems}
The concept of a proof system for a language in $\cc{coNP}$ can be generalized
in the natural way, to obtain Merlin--Arthur style proof systems.
\begin{definition}
Let $L$ be a language in $\cc{coNP}$, and let $V$ be a probabilistic polynomial-time algorithm
with two inputs $x,y \in \{0,1\}^*$.
(We think of $V$ as the verifier.)
$V$ is a \definedWord{probabilistic proof system} for $L$ if:
\begin{enumerate}
\item (Perfect Soundness) For every $x$ that is not in $L$,
and for every $y$,
$$Pr_r[V(x,y) =1] =0,$$
where the probability is over the random coin tosses $r$ of $V$.
\item (Completeness) For every $x$ in $L$,
there exists a $y$ such that
$$Pr_r[V(x,y) =1] \geq 3/4.$$
\end{enumerate}
$V$ is \definedWord{polynomially bounded} if for every $x \in L$, there exists a $y$ such that
$|y| \leq \poly(|x|)$ and $Pr_r[V(x,y)=1] \geq 3/4$.
\end{definition}
It is clear that for any $\cc{coNP}$-complete language $L$, there is a polynomially
bounded probabilistic proof system for $L$ if and only if $\cc{coNP} \subseteq \cc{MA}$.
Again we have chosen to define our probabilistic proof systems to
match the definition of $\cc{MA}$. The probabilistic proof system that
would be analogous to the standard Cook--Reckhow proof system would
be somewhat different, as defined below.
Again, a simple argument like the one above shows that our probabilistic proof systems
are essentially equivalent to probabilistic Cook--Reckhow proof systems.
\begin{definition}
A \definedWord{probabilistic Cook--Reckhow proof system} is a probabilistic polynomial-time algorithm
$A$ (whose run time is independent of its random choices) such that
\begin{enumerate}
\item There is a surjective function $f\colon \Sigma^{*} \to TAUT$ such that $A(x)=f(x)$ with probability at least $2/3$ (over $A$'s random choices), and
\item Regardless of $A$'s random choices, its output is always a tautology.
\end{enumerate}
Such a proof system is \definedWord{polynomially bounded} or \definedWord{p-bounded} if for every tautology $\varphi$, there is some $\pi$ (for ``proof'') such that $f(\pi)=\varphi$ and $|\pi| \leq \poly(|\varphi|)$.
\end{definition}
We note that both Pitassi's algebraic proof system \cite{pitassi96} and the Ideal Proof System are
probabilistic Cook--Reckhow systems. For the Ideal Proof System, the algorithm $A$
takes as input a description of a (constant-free) algebraic circuit $C$ together with a tautology $\varphi$,
and then verifies that the circuit is indeed an \I-certificate for $\varphi$ by using the standard $\cc{coRP}$
algorithm for polynomial identity testing.
The proof that Pitassi's algebraic proof system is a probabilistic Cook--Reckhow system is essentially the same.
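The $\cc{coRP}$ identity test referred to here is, at bottom, the Schwartz--Zippel test: evaluate the polynomial (given by a circuit) at a random point and accept iff it vanishes. A toy Python sketch, treating the polynomial as a black box (our own simplification; a real implementation would evaluate the circuit over a suitable finite field):

```python
import random

def prob_zero_test(poly, num_vars, degree_bound, trials=20):
    """Schwartz--Zippel: if poly is identically zero, always accept;
    otherwise each trial catches a nonzero poly with probability
    at least 1 - degree_bound/|S| over the sample set S."""
    sample_size = 100 * degree_bound + 1
    for _ in range(trials):
        point = [random.randrange(sample_size) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False   # definitely not identically zero
    return True            # identically zero with high confidence

zero = lambda x, y: (x + y)**2 - (x*x + 2*x*y + y*y)   # identically 0
nonzero = lambda x, y: x*y - 1
print(prob_zero_test(zero, 2, 2), prob_zero_test(nonzero, 2, 2))
```

Perfect soundness in the sense above corresponds to the fact that a nonzero answer at any point definitively refutes the identity.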
\subsection{Commutative algebra} \label{app:background:algebra}
The following preliminaries from commutative algebra are needed only in Section~\ref{sec:syzygy} and Appendix~\ref{app:RIPS}.
A \definedWord{module} over a ring $R$ is defined just like a vector space, except over a ring instead of a field. That is, a module $M$ over $R$ is a set with two operations: addition (making $M$ an abelian group), and multiplication by elements of $R$ (``scalars''), satisfying the expected axioms (see any textbook on commutative algebra, \eg, \cite{atiyahMacdonald,eisenbud}). A module over a field $R = \F$ is exactly a vector space over $\F$. Every ring $R$ is naturally an $R$-module (using the ring multiplication for the scalar multiplication), as is $R^{n}$, the set of $n$-tuples of elements of $R$. Every ideal $I \subseteq R$ is an $R$-module---indeed, an ideal could be defined, if one desired, as an $R$-submodule of $R$---and every quotient ring $R/I$ is also an $R$-module, by $r \cdot (r_0 + I) = rr_0 + I$.
Unlike vector spaces, however, there is not so nice a notion of ``dimension'' for modules over arbitrary rings. Two differences will be particularly relevant in our setting. First, although every vector subspace of $\F^{n}$ is finite-dimensional, hence finitely generated, this need not be true of every submodule of $R^n$ for an arbitrary ring $R$. Second, every (finite-dimensional) vector space $V$ has a basis, and every element of $V$ can be written as a \emph{unique} $\F$-linear combination of basis elements, but this need not be true of every $R$-module, even if the $R$-module is finitely generated, as in the following example.
\begin{example}
Let $R=\C[x,y]$ and consider the ideal $I = \langle x, y \rangle$ as an $R$-module. For clarity, let us call the generators of this $R$-module $g_1 = x$ and $g_2 = y$. First, $I$ cannot be generated as an $R$-module by fewer than two elements: if $I$ were generated by a single element, say, $f$, then we would necessarily have $x=r_1 f$ and $y=r_2 f$ for some $r_1,r_2 \in R$, and thus $f$ would be a common divisor of $x$ and $y$ in $R$ (here we are using the fact that $I$ is both a module and a subset of $R$). But the GCD of $x$ and $y$ is $1$, and the only submodule of $R$ containing $1$ is $R \neq I$. So $\{g_1, g_2\}$ is a minimum generating set of $I$. But not every element of $I$ has a unique representation in terms of this (or, indeed, any) generating set: for example, $xy \in I$ can be written either as $r_1 g_1$ with $r_1=y$ or $r_2 g_2$ with $r_2 = x$.
\end{example}
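The two representations above are trivial to confirm; a sympy sanity check (illustration only):

```python
from sympy import symbols, gcd, simplify

x, y = symbols('x y')
g1, g2 = x, y            # module generators of I = <x, y>

rep1 = y * g1            # xy written with coefficients (r1, r2) = (y, 0)
rep2 = x * g2            # xy written with coefficients (r1, r2) = (0, x)
assert simplify(rep1 - rep2) == 0   # both represent the same element xy

# No single generator suffices: a lone generator would divide both x and y,
# but gcd(x, y) = 1 and 1 is not in I.
assert gcd(x, y) == 1
```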
A ring $R$ is \definedWord{Noetherian} if there is no strictly increasing, infinite chain of ideals $I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq \dotsb$. Fields are Noetherian (every field has only two ideals: the zero ideal and the whole field), as are the integers $\Z$. Equivalently, a ring is Noetherian if and only if every ideal is finitely generated. Hilbert's Basis Theorem says that if $R$ is Noetherian, then so is the polynomial ring $R[x]$ (and hence any polynomial ring $R[\vec{x}]$). Quotient rings of Noetherian rings are Noetherian, so every ring that is finitely generated over a field (or more generally, over a Noetherian ring $R$) is Noetherian.
Similarly, an $R$-module $M$ is Noetherian if there is no strictly increasing, infinite chain of submodules $M_1 \subsetneq M_2 \subsetneq M_3 \subsetneq \dotsb$. If $R$ is Noetherian as a ring, then it is Noetherian as an $R$-module. It is easily verified that direct sums of Noetherian modules are Noetherian, so if $R$ is a Noetherian ring, then it is a Noetherian $R$-module, and consequently $R^{n}$ is a Noetherian $R$-module for any finite $n$. Just as for ideals, every submodule of a Noetherian module is finitely generated.
The remaining preliminaries from commutative algebra are only needed in Appendix~\ref{app:RIPS}.
The \definedWord{radical} of an ideal $I \subseteq R$ is the ideal $\sqrt{I}$ consisting of all $r \in R$ such that $r^k \in I$ for some $k > 0$. An ideal $P$ is \definedWord{prime} if whenever $rs \in P$, at least one of $r$ or $s$ is in $P$. For any ideal $I$, its radical is equal to the intersection of the prime ideals containing $I$: $\sqrt{I} = \bigcap_{\text{prime } P \supseteq I} P$. We refer to prime ideals that are minimal under inclusion, subject to containing $I$, as ``minimal over $I$;'' there are only finitely many such prime ideals. The radical $\sqrt{I}$ is thus also equal to the intersection of the primes minimal over $I$.
An \definedWord{algebraic set} in $\F^n$ is any set of the form $\{\vec{x} \in \F^{n} : F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0\}$, which we denote $V(F_1, \dotsc, F_m)$ (``$V$'' for ``variety''). The algebraic set $V(F_1, \dotsc, F_m)$ depends only on the ideal $\langle F_1, \dotsc, F_m \rangle$, and even its radical, in the sense that $V(F_1, \dotsc, F_m) = V(\sqrt{\langle F_1, \dotsc, F_m \rangle})$. Conversely, the set of all polynomials vanishing on a given algebraic set $V$ is a radical ideal, denoted $I(V)$. An algebraic set is \definedWord{irreducible} if it cannot be written as a union of two proper algebraic subsets. $V$ is irreducible if and only if $I(V)$ is prime. The \definedWord{irreducible components} of an algebraic set $V = V(I)$ are the maximal irreducible algebraic subsets of $V$, which are exactly the algebraic sets corresponding to the prime ideals minimal over $I$.
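Radical membership is also computationally accessible via the Rabinowitsch trick: $G \in \sqrt{\langle F_1, \dotsc, F_m \rangle}$ iff $1 \in \langle F_1, \dotsc, F_m, 1 - tG \rangle$ for a fresh variable $t$. A sympy sketch (our own illustration; Gr\"obner computations can of course be expensive in the worst case):

```python
from sympy import symbols, groebner

x1, x2, t = symbols('x1 x2 t')

def in_radical(G, Fs, gens):
    """Rabinowitsch trick: over an algebraically closed field, G is in
    the radical of <Fs> iff 1 lies in <Fs, 1 - t*G>."""
    gb = groebner(list(Fs) + [1 - t*G], *gens, t, order='lex')
    return 1 in gb.exprs   # reduced Groebner basis is [1] iff the ideal is (1)

# x1 is in the radical of <x1**2, x1*x2> (its square is a generator),
# while x2 is not:
print(in_radical(x1, [x1**2, x1*x2], (x1, x2)))
print(in_radical(x2, [x1**2, x1*x2], (x1, x2)))
```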
If $U$ is any subset of a ring $R$ that is closed under multiplication---$a,b \in U$ implies $ab \in U$---we may define the localization of $R$ at $U$ to be the ring in which we formally adjoin multiplicative inverses to the elements of $U$. Equivalently, we may think of the localization of $R$ at $U$ as the ring of fractions over $R$ where the denominators are all in $U$. If $P$ is a prime ideal, its complement is a multiplicatively closed subset (this is an easy and instructive exercise in the definition of prime ideal). In this case, rather than speak of the localization of $R$ at $R \backslash P$, it is common usage to refer to the localization of $R$ at $P$, denoted $R_P$. Similar statements hold for the union of finitely many prime ideals. We will use the fact that the localization of a Noetherian ring is again Noetherian (however, even if $R$ is finitely generated its localizations need not be, \eg the localization of $\Z$ at $P = \langle 2 \rangle$ consists of all rationals with odd denominators; this is one of the ways in which the condition of being Noetherian is nicer than that of being merely finitely generated).
\subsection{Summary and open questions} \label{sec:conclusion}
We introduced the Ideal Proof System \I (Definition~\ref{def:IPS}) and showed that it is a very close algebraic analog of Extended Frege---the most powerful, natural system currently studied for proving propositional tautologies. We showed that lower bounds on \I imply (algebraic) circuit lower bounds, which to our knowledge is the first time that lower bounds on a proof system have been shown to imply any sort of computational lower bounds. Using the same techniques, we were also able to show that lower bounds on the number of \emph{lines} (rather than the usual measure of number of monomials) in Polynomial Calculus proofs also imply strong algebraic circuit lower bounds. Because proofs in \I are just algebraic circuits satisfying certain polynomial identity tests, many results from algebraic circuit complexity apply immediately to \I. In particular, the chasms at depth 3 and 4 in algebraic circuit complexity imply that lower bounds on even depth 3 or 4 \I proofs would be very interesting. We introduced natural propositional axioms for polynomial identity testing (PIT), and showed that these axioms play a key role in understanding the thirty-year open question of $\cc{AC}^0[p]$-Frege lower bounds: either there are $\cc{AC}^0[p]$-Frege lower bounds on the PIT axioms, or any $\cc{AC}^0[p]$-Frege lower bounds are as hard as showing $\cc{VP} \neq \cc{VNP}$ over a field of characteristic $p$. In appendices, we discuss a variant of the Ideal Proof System that allows divisions, and its utility and limitations, as well as a geometric variant of the Ideal Proof System which suggests further geometric properties that might be of interest for computational and proof complexity. And finally, through an analysis of the set of all \I proofs of a given unsatisfiable system of equations, we suggest how one might transfer techniques from algebraic circuit complexity to prove lower bounds on \I (and thus on Extended Frege).
The Ideal Proof System raises many new questions, not only about itself, but also about PIT, new examples of $\cc{VNP}$ functions coming from propositional tautologies, and the complexity of ideals or modules of polynomials.
In Proposition~\ref{prop:gen2Hilb} we show that if a general \I-certificate $C$ has only polynomially many $\vec{\f}$-monomials (with coefficients in $\F[\vec{x}]$), and the maximum degree of each $\f_i$ is polynomially bounded, then $C$ can be converted to a polynomial-size Hilbert-like certificate. However, without this sparsity assumption general \I appears to be stronger than Hilbert-like \I.
\begin{open}
What, if any, is the difference in size between the smallest Hilbert-like and general \I certificates for a given unsatisfiable system of equations? What about for systems of equations coming from propositional tautologies?
\end{open}
\begin{open}[Degree versus size]
Is there a super-polynomial size separation---or indeed any nontrivial size separation---between \I certificates of degree $\leq d_{small}(n)$ and \I certificates of degree $\geq d_{large}(n)$ for some bounds $d_{small} < d_{large}$?
\end{open}
This question is particularly interesting in the following cases: a) certificates for systems of equations coming from propositional tautologies, where $d_{small}(n) = n$ and $d_{large}(n) \geq \omega(n)$, since we know that every such system of equations has \emph{some} (not necessarily small) certificate of degree $\leq n$, and b) certificates for unsatisfiable systems of equations taking $d_{small}$ to be the bound given by the best-known effective Nullstellens\"{a}tze, which are all exponential \cite{brownawell,kollar,sombraSparse}.
\begin{open} \label{open:min}
Are there tautologies for which the certificate family constructed in Theorem~\ref{thm:VNP} is the one of minimum complexity (under p-projections or c-reductions, see Appendix~\ref{app:background:complexity})?
\end{open}
If there is any family $\varphi = (\varphi_n)$ of tautologies for which Question~\ref{open:min} has a positive answer and for which the certificates constructed in Theorem~\ref{thm:VNP} are $\cc{VNP}$-complete (Question~\ref{open:complete} below), then super-polynomial size lower bounds on \I-proofs of $\varphi$ would be equivalent to $\cc{VP} \neq \cc{VNP}$. This highlights the potential importance of understanding the structure of the set of certificates under computational reducibilities.
Since the set of all [Hilbert-like] \I-certificates is a coset of a finitely generated ideal [respectively, module], the preceding question is a special case of considering, for a given family of cosets of ideals or modules $(f^{(0)}_n + I_n)$ ($I_n \subseteq R[x_1, \dotsc, x_{\poly(n)}]$), the relationships under various reductions between all families of functions $(f_n)$ with $f_n \in f^{(0)}_n + I_n$ for each $n$. This next question is of a more general nature than the others we ask; we think it deserves further study.
\begin{question} \label{open:ideal}
Given a family of cosets of ideals $f^{(0)}_n + I_n$ (or more generally modules) of polynomials, with $I_n \subseteq R[x_1, \dotsc, x_{\poly(n)}]$, consider the function families $(f_n) \in (f^{(0)}_n + I_n)$ (meaning that $f_n \in f^{(0)}_n + I_n$ for all $n$) under any computational reducibility $\leq$ such as p-projections. What can the $\leq$ structure look like? When, if ever, is there such a unique $\leq$-minimum (even a single nontrivial example would be interesting, as in Question~\ref{open:min})? Can there be infinitely many incomparable $\leq$-minima?
Say a $\leq$-degree $\mathbf{d}$ is ``saturated'' in $(f^{(0)}_n + I_n)$ if every $\leq$-degree $\mathbf{d'} \geq \mathbf{d}$ has some representative in $f^{(0)} + I$. Must saturated degrees always exist? We suspect yes, given that one may multiply any element of $I$ by arbitrarily complex polynomials. What can the set of saturated degrees look like for a given $(f^{(0)}_n + I_n)$? Must every $\leq$-degree in $f^{(0)} + I$ be \emph{below} some saturated degree? What can the $\leq$-structure of $f^{(0)} + I$ look like below a saturated degree?
\end{question}
Question~\ref{open:ideal} is of interest even when $f^{(0)} = 0$, that is, for ideals and modules of functions rather than their nontrivial cosets.
\begin{open}
Can we leverage the fact that the set of \I certificates is not only a finitely generated coset intersection, but also closed under multiplication?
\end{open}
We note that it is not difficult to show that a coset $c + I$ of an ideal is closed under multiplication if and only if $c^2 - c \in I$. Equivalently, this means that $c$ is idempotent ($c^2 = c$) in the quotient ring $R/I$. For example, if $I$ is a prime ideal, then $R/I$ has no zero-divisors, and thus the only choices for $c+I$ are $I$ and $1+I$. We note that the ideal generated by the $n^2$ equations $XY-I=0$ in the setting of the Hard Matrix Identities is prime (see Appendix~\ref{app:RIPS}). It seems unlikely that all ideals coming from propositional tautologies are prime, however.
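As a toy instance of this criterion, take $R = \F[x]$ and $I = \langle x^2 - x \rangle$; then $c = x$ satisfies $c^2 - c \in I$, so the coset $x + I$ is closed under multiplication. A sympy check (our own example):

```python
from sympy import symbols, rem, expand

x = symbols('x')
p = x**2 - x                 # I = <x^2 - x>, so c = x is idempotent mod I

assert rem(expand(x**2 - x), p, x) == 0   # c^2 - c lies in I

# Products of coset elements land back in the coset x + I:
a = x + 3*p                  # an element of x + I
b = x - 7*p                  # another element of x + I
assert rem(expand(a*b) - x, p, x) == 0
```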
The complexity of \Grobner basis computations obviously depends on the degrees and the number of polynomials that one starts with. From this point of view, Mayr and Meyer \cite{mayrMeyer} showed that the doubly-exponential upper bound on the degree of a \Grobner basis \cite{hermann} (see also \cite{seidenberg,masserWustholz}) could not be improved in general. However, in practice many \Grobner basis computations seem to work much more efficiently, and even theoretically many classes of instances---such as proving that $1$ is in a given ideal---can be shown to have only a singly-exponential degree upper bound \cite{brownawell, kollar, sombraSparse}. These points of view are reconciled by the more refined measure of the (Castelnuovo--Mumford) \emph{regularity} of an ideal or module. For the definition of regularity and a discussion of its close connection with the complexity of \Grobner basis and syzygy computations, we refer the reader to the original papers \cite{bayerStillman1, bayerStillman2, bayerStillman3} or the survey \cite{bayerMumfordSurvey}.
Given that the syzygy module or ideal of zero-certificates are so crucial to the complexity of \I-certificates, and the tight connection between these modules/ideals and the computation of the \Grobner basis of the ideal one started with, we ask:
\begin{question}
Is there a formal connection between the proof complexity of individual instances of TAUT (in, say, the Ideal Proof System), and the Castelnuovo--Mumford regularity of the corresponding syzygy module or ideal of zero-certificates?
\end{question}
The certificates constructed in the proof of Theorem~\ref{thm:VNP} provide many new examples of polynomial families in $\cc{VNP}$. There are many natural questions one can ask about these polynomials. For example, the construction itself depends on the order of the clauses; does the complexity of the resulting polynomial family depend on this order? As another example, we suspect that, for any $\equiv_{p}$ or $\equiv_{c}$-degree within $\cc{VNP}$ (see Appendix~\ref{app:background:complexity}), there is some family of tautologies for which the above polynomials are of that degree. However, we have not yet proved this for even a single degree.
\begin{open} \label{open:complete}
Are there tautologies for which the certificates constructed in Theorem~\ref{thm:VNP} are $\cc{VNP}$-complete? More generally, for any given $\equiv_{p}$ or $\equiv_{c}$-degree within $\cc{VNP}$, are there tautologies for which this certificate is of that degree?
\end{open}
Prior to our work, much work was done on bounds for the Ideal Membership Problem---$\cc{EXPSPACE}$-complete \cite{mayrMeyer, mayrEXPSPACE}---the so-called Effective Nullstellensatz---where exponential degree bounds are known, and known to be tight \cite{brownawell,kollar,sombraSparse,einLazarsfeld}---and the arithmetic Nullstellensatz over $\Z$, where one wishes to bound not only the degree of the polynomials but the sizes of the integer coefficients appearing \cite{krickPardoSombra}. The viewpoint afforded by the Ideal Proof Systems raises new questions about potential strengthening of these results.
In particular, the following is a natural extension of Definition~\ref{def:IPS}.
\begin{definition} \label{def:IPSideal}
An \definedWord{\I certificate} that a polynomial $G(\vec{x}) \in \F[\vec{x}]$ is in the ideal [respectively, radical of the ideal] generated by $F_1(\vec{x}), \dotsc, F_m(\vec{x})$ is a polynomial $C(\vec{x}, \vec{\f})$ such that
\begin{enumerate}
\item $C(\vec{x}, \vec{0}) = 0$, and
\item $C(\vec{x}, F_1(\vec{x}), \dotsc, F_m(\vec{x})) = G(\vec{x})$ [respectively, $G(\vec{x})^{k}$ for any $k > 0$].
\end{enumerate}
An \definedWord{\I derivation} of $G$ from $F_1, \dotsc, F_m$ is a circuit computing some \I certificate that $G \in \langle F_1, \dotsc, F_m \rangle$.
\end{definition}
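As a toy instance of this definition (our own example): since $x^3 = x \cdot (x^2 + y^2) - y \cdot (xy)$, the polynomial $C = x\f_1 - y\f_2$ is an \I certificate that $x^3 \in \langle x^2 + y^2, xy \rangle$. Both conditions are mechanical to check, \eg in sympy:

```python
from sympy import symbols, expand

x, y, e1, e2 = symbols('x y e1 e2')   # e1, e2 play the role of the placeholders

F1, F2 = x**2 + y**2, x*y
G = x**3
C = x*e1 - y*e2                       # candidate certificate

assert C.subs({e1: 0, e2: 0}) == 0                   # condition 1
assert expand(C.subs({e1: F1, e2: F2}) - G) == 0     # condition 2
```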
For the Ideal Membership Problem, the $\cc{EXPSPACE}$ lower bound \cite{mayrMeyer, mayrEXPSPACE} implies a subexponential-size lower bound on constant-free circuits computing \I-certificates of ideal membership (or non-constant-free circuits in characteristic zero, assuming GRH, see Proposition~\ref{prop:coMAGRH}). Here by ``sub-exponential'' we mean a function $f(n) \in \bigcap_{\varepsilon > 0} O(2^{n^{\varepsilon}})$. Indeed, if for every $G(\vec{x}) \in \langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$ there were a constant-free circuit of subexponential size computing some \I certificate for the membership of $G$ in $\langle F_1, \dotsc, F_m \rangle$, then guessing that circuit and verifying its correctness using PIT gives an $\cc{MA}_{\text{subexp}} \subseteq \cc{SUBEXPSPACE}$ algorithm for the Ideal Membership Problem. The $\cc{EXPSPACE}$-completeness of Ideal Membership would then imply that $\cc{EXPSPACE} \subseteq \cc{SUBEXPSPACE}$, contradicting the Space Hierarchy Theorem \cite{hartmanisStearns}. Under special circumstances, of course, one may be able to achieve better upper bounds.
However, for the effective Nullstellensatz and its arithmetic variant, we leave the following open:
\begin{open}
For any $G, F_1, \dotsc, F_m$ on $x_1, \dotsc, x_n$, as in Definition~\ref{def:IPSideal}, is there always an \I-certificate of subexponential size that $G$ is in the \emph{radical} of $\langle F_1, \dotsc, F_m \rangle$? Similarly, if $G, F_1, \dotsc, F_m \in \Z[x_1, \dotsc, x_n]$ is there a constant-free $\I_{\Z}$-certificate of subexponential size that $aG(\vec{x})$ is in the \emph{radical} of the ideal $\langle F_1, \dotsc, F_m \rangle$ for some integer $a$?
\end{open}
\section{Divisions: the Rational Ideal Proof System} \label{app:RIPS}
We begin with an example where it is advantageous to include divisions in an \I-certificate. Note that this is different from merely computing a polynomial \I-certificate using divisions; in the latter case, divisions can be eliminated \cite{strassenDivision}. In the case we discuss here, the certificate itself is no longer a polynomial but a rational function.
\begin{example} \label{ex:inversion}
The inversion principle, one of the so-called ``Hard Matrix Identities'' \cite{soltysCook}, states that
\[
XY = I \Rightarrow YX = I.
\]
They are called ``Hard'' because they were proposed as possible examples---over $\F_2$ or $\Z$---of propositional tautologies separating Extended Frege from Frege. Indeed, it was only in the last 10 years that they were shown to have efficient Extended Frege proofs \cite{soltysCook}, and it was quite nontrivial to show that they have efficient $\cc{NC}^2$-Frege proofs \cite{hrubesTzameretDet}, despite the fact that the determinant can be computed in $\cc{NC}^2$. It is still open whether the Hard Matrix Identities have ($\cc{NC}^1$)-Frege proofs, and this is believed not to be the case.
In terms of ideals, the inversion principle says that the $n^2$ polynomials $(YX - I)_{i,j}$ (the entries of the matrix $YX -I$) are in the ideal generated by the $n^2$ polynomials $(XY-I)_{i,j}$. The simplest rational proof of the inversion principle that we are aware of is as follows:
\[
X^{-1} (XY-I) X = YX-I
\]
Note that $X^{-1}$ here involves dividing by the determinant. When converted into a certificate, if we write $Q$ for a matrix of placeholder variables $q_{i,j}$ corresponding to the entries of the matrix $XY-I$, we get $n^2$ certificates from the entries of $X^{-1} Q X$. Note that each of these certificates is a rational function that has $\det(X)$ in its denominator. Turning this into a proof that does not use divisions is the main focus of the paper \cite{hrubesTzameretDet}; thus, if we had a proof system that allowed divisions in this manner, it would potentially allow for significantly simpler proofs. In this particular case, we assure ourselves that this is a valid proof because if $XY-I=0$, then $X$ is invertible, so $X^{-1}$ exists (or equivalently, $\det(X) \neq 0$).
\end{example}
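The rational identity at the heart of this example is easy to confirm symbolically for small $n$; a sympy sketch for $n = 2$ (note how $\det(X)$ enters through $X^{-1}$ and then cancels):

```python
from sympy import Matrix, symbols, simplify, zeros, eye

X = Matrix(2, 2, symbols('x:4'))   # generic 2x2 matrix of variables
Y = Matrix(2, 2, symbols('y:4'))
I2 = eye(2)

# X^{-1} (XY - I) X = YX - I; every entry of X^{-1} has det(X) as denominator
lhs = X.inv() * (X*Y - I2) * X
assert simplify(lhs - (Y*X - I2)) == zeros(2, 2)
```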
In order to introduce an \I-like proof system that allows rational certificates, we generalize the preceding reasoning. We must be careful what we allow ourselves to divide by. If we were allowed to divide by arbitrary polynomials, this would yield an unsound proof system, because then from any polynomials $F_1(\vec{x}), \dotsc, F_m(\vec{x})$ we could derive \emph{any} other polynomial $G(\vec{x})$ via the false ``certificate'' $\frac{G(\vec{x})}{F_1(\vec{x})}\f_1$. The following definition is justified by Proposition~\ref{prop:RIPS}.
Unfortunately, although we try to eschew as many definitions as possible, the results here are made much cleaner by using some additional (standard) terminology from commutative algebra which is covered in Appendix~\ref{app:background:algebra} such as prime ideals, irreducible components of algebraic sets, and localization of rings.
\begin{definition}[Rational Ideal Proof System] \label{def:RIPS}
A \definedWord{rational \I certificate} or \definedWord{R\I-certificate} that a polynomial $G(\vec{x}) \in \F[\vec{x}]$ is in the radical of the $\overline{\F}[\vec{x}]$-ideal generated by $F_1(\vec{x}), \dotsc, F_m(\vec{x})$ is a rational function $C(\vec{x}, \vec{\f})$ such that
\begin{enumerate}
\setcounter{enumi}{-1}
\item \label{condition:local} Write $C = C'/D$ with $C',D$ relatively prime polynomials. Then $1/D(\vec{x}, \vec{F}(\vec{x}))$ must be in the localization of $\F[\vec{x}]$ at the union of the prime ideals that are minimal subject to containing the ideal $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$ (we give a more elementary explanation of this condition below),
\item \label{condition:RIPSideal}$C(x_1,\dotsc,x_n,\vec{0}) = 0$, and
\item \label{condition:RIPSnss} $C(x_1,\dotsc,x_n,F_1(\vec{x}),\dotsc,F_m(\vec{x})) = G(\vec{x})$.
\end{enumerate}
A \definedWord{R\I proof} that $G(\vec{x})$ is in the radical of the ideal $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$ is an $\F$-algebraic circuit with divisions on inputs $x_1,\ldots,x_n,\f_1,\ldots,\f_m$ computing some R\I certificate.
\end{definition}
Condition~(\ref{condition:local}) is equivalent to: if $G(\vec{x})$ is an invertible constant, then $D(\vec{x}, \vec{\f})$ is also an invertible constant and thus $C$ is a polynomial; otherwise, after substituting the $F_i(\vec{x})$ for the $\f_i$, the denominator $D(\vec{x}, \vec{F}(\vec{x}))$ does not vanish identically on any of the irreducible components (over the algebraic closure $\overline{\F}$) of the algebraic set $V(\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle) \subseteq \overline{\F}^{n}$. In particular, for proofs of unsatisfiability of systems of equations, the Rational Ideal Proof System reduces by definition to the Ideal Proof System. For derivations of one polynomial from a set of polynomials, this need not be the case, however; indeed, there are examples for which \emph{every} R\I-certificate has a nonconstant denominator, that is, there is a R\I-certificate but there are no \I-certificates (see Example~\ref{ex:divNeeded}).
The following proposition establishes that Definition~\ref{def:RIPS} indeed defines a sound proof system.
\begin{proposition} \label{prop:RIPS}
If there is a R\I-certificate that $G(\vec{x})$ is in the radical of $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$, then $G(\vec{x})$ is in fact in the radical of $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$.
\end{proposition}
\begin{proof}
Let $C(\vec{x}, \vec{\f}) = \frac{1}{D(\vec{x}, \vec{\f})} C'(\vec{x}, \vec{\f})$ be a R\I certificate that $G$ is in $\sqrt{\langle F_1, \dotsc, F_m \rangle}$, where $D$ and $C'$ are relatively prime polynomials. Then $C'(\vec{x}, \vec{\f})$ is an \I-certificate that $G(\vec{x})D(\vec{x}, \vec{F}(\vec{x}))$ is in the ideal $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$ (recall Definition~\ref{def:IPSideal}). Let $D_{F}(\vec{x}) = D(\vec{x}, \vec{F}(\vec{x}))$.
Geometric proof: since $G(\vec{x}) D_{F}(\vec{x}) \in \langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$, $GD_{F}$ must vanish identically on every irreducible component of the algebraic set $V(F_1, \dotsc, F_m)$. On each irreducible component $V_i$, since $D_{F}(\vec{x})$ does not vanish identically on $V_i$, $G(\vec{x})$ must vanish everywhere except for the proper subset $V(D_{F}(\vec{x})) \cap V_i$. Since $D_{F}$ does not vanish identically on $V_i$, we have $\dim V(D_{F}) \cap V_i \leq \dim V_i - 1$ (in fact this is an equality). In particular, this means that $G$ must vanish on a dense subset of $V_i$. Since $G$ is a polynomial, by (Zariski-)continuity, $G$ must vanish on all of $V_i$. Finally, since $G$ vanishes on every irreducible component of $V(F_1, \dotsc, F_m)$, it vanishes on $V(F_1, \dotsc, F_m)$ itself, and by the Nullstellensatz, $G \in \sqrt{\langle F_1, \dotsc, F_m\rangle}$.
Algebraic proof: for each prime ideal $P_i \subseteq \overline{\F}[\vec{x}]$ that is minimal subject to containing $\langle F_1, \dotsc, F_m \rangle$, $D_{F}$ is not in $P_i$, by the definition of $R\I$-certificate. Since $GD_{F} \in \langle F_1, \dotsc, F_m \rangle \subseteq P_i$, by the definition of prime ideal $G$ must be in $P_i$. Hence $G$ is in the intersection $\bigcap_i P_i$ over all minimal prime ideals $P_i \supseteq \langle F_1, \dotsc, F_m \rangle$. This intersection is exactly the radical $\sqrt{\langle F_1, \dotsc, F_m \rangle}$.
\end{proof}
Any derivation of a polynomial $G$ that is in the radical of an ideal $I$ but not in $I$ itself will require divisions. Although it is not \emph{a priori} clear that R\I could derive even one such $G$, the next example shows that this is the case. In other words, the next example shows that certain derivations \emph{require} rational functions.
\begin{example} \label{ex:divNeeded}
Let $G(x_1, x_2) = x_1$, $F_1(\vec{x}) = x_1^2$, $F_2(\vec{x}) = x_1 x_2$. Then $C(\vec{x}, \vec{\f}) = \frac{1}{x_1-x_2}(\f_1 - \f_2)$ is a R\I-certificate that $G \in \sqrt{\langle F_1, F_2 \rangle}$: by plugging in one can verify that $C(\vec{x}, \vec{F}(\vec{x})) = G(\vec{x})$. For Condition~(\ref{condition:local}), we see that $V(F_1, F_2)$ is the entire $x_2$-axis, on which $x_1 - x_2$ only vanishes at the origin. However, there is no \I-certificate that $G \in \langle F_1, F_2 \rangle$, since $G$ is \emph{not} in $\langle F_1, F_2 \rangle$: $\langle F_1, F_2 \rangle = \{ x(H_1(\vec{x}) x_1 + H_2(\vec{x}) x_2)\}$ where $H_1, H_2$ may be arbitrary polynomials. Since the only constant of the form $H_1(\vec{x}) x_1 + H_2(\vec{x}) x_2$ is zero, $G(x) = x \notin \langle F_1, F_2 \rangle$.
\end{example}
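The certificate in Example~\ref{ex:divNeeded} can be checked mechanically; the following sympy sketch (our own illustration, not part of the text) verifies that plugging in the $F_i$ recovers $G$, and that $G$ itself is not in the ideal.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
G = x1
F1, F2 = x1**2, x1*x2

# The RIPS certificate is (f1 - f2)/(x1 - x2); plugging in F_i for f_i
# should recover G.
plugged = sp.cancel((F1 - F2) / (x1 - x2))
assert plugged == G

# G itself is not in <F1, F2>: reducing G by a Groebner basis of the
# ideal leaves a nonzero remainder.
gb = sp.groebner([F1, F2], x1, x2, order='grevlex')
_, remainder = gb.reduce(G)
assert remainder != 0
```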
In the following circumstances a R\I-certificate can be converted into an \I-certificate.
\paragraph{Notational convention.} Throughout, we continue to use the notation that if $D$ is a function of the placeholder variables $\f_i$ (and possibly other variables), then $D_{F}$ denotes $D$ after substituting in $F_i(\vec{x})$ for the placeholder variable $\f_i$.
\begin{proposition} \label{prop:RIPS2IPS}
If $C = C'/D$ is a R\I proof that $G(\vec{x}) \in \sqrt{\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle}$, such that $D_{F}(\vec{x})$ does not vanish \emph{anywhere} on the algebraic set $V(F_1(\vec{x}), \dotsc, F_m(\vec{x}))$, then $G(\vec{x})$ is in fact in the ideal $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$. Furthermore, there is an \I proof that $G(\vec{x}) \in \langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$ of size $\poly(|C|,|E|)$ where $E$ is an $\I$ proof of the unsolvability of $D_{F}(\vec{x}) = F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$.
\end{proposition}
\begin{proof}
Since $D_{F}(\vec{x})$ does not vanish anywhere on $V(F_1, \dotsc, F_m)$, the system of equations $D_F(\vec{x}) = F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$ is unsolvable.
Geometric proof idea: The preceding means that when restricted to the algebraic set $V(F_1, \dotsc, F_m)$, $D_{F}$ has a multiplicative inverse $\Delta$. Rather than dividing by $D$, we then multiply by $\Delta$, which, for points on $V(F_1, \dotsc, F_m)$, amounts to the same thing.
Algebraic proof: Let $E(\vec{x}, \vec{\f}, d)$ be an \I-certificate for the unsolvability of this system, where $d$ is a new placeholder variable corresponding to the polynomial $D_{F}(\vec{x}) = D(\vec{x}, \vec{F}(\vec{x}))$. By separating out all of the terms involving $d$, we may write $E(\vec{x}, \vec{\f}, d)$ as $d\Delta(\vec{x}, \vec{\f}, d) + E'(\vec{x}, \vec{\f})$. As $E(\vec{x}, \vec{F}(\vec{x}), D_{F}(\vec{x})) = 1$ (by the definition of \I), we get:
\[
D_{F}(\vec{x})\Delta(\vec{x}, \vec{F}(\vec{x}), D_{F}(\vec{x})) = 1 - E'(\vec{x}, \vec{F}(\vec{x})).
\]
Since $E'(\vec{x}, \vec{\f}) \in \langle \f_1, \dotsc, \f_m \rangle$, this tells us that $\Delta(\vec{x}, \vec{F}(\vec{x}), D_{F}(\vec{x}))$ is a multiplicative inverse of $D_{F}(\vec{x})$ modulo the ideal $\langle F_1, \dotsc, F_m \rangle$. The idea is then to multiply by $\Delta$ instead of dividing by $D$. More precisely, the following is an \I-proof that $G \in \langle F_1, \dotsc, F_m \rangle$:
\begin{equation} \label{eqn:RIPScert}
C_{\Delta}(\vec{x}, \vec{\f}) \defeq C'(\vec{x}, \vec{\f})\Delta(\vec{x}, \vec{\f}, D(\vec{x}, \vec{\f})) + G(\vec{x})E'(\vec{x}, \vec{\f}).
\end{equation}
Since $C'$ and $E'$ must individually be in $\langle \f_1, \dotsc, \f_m \rangle$, the entirety of $C_{\Delta}$ is as well. To see that we get $G(\vec{x})$ after plugging in the $F_i(\vec{x})$ for the $\f_i$, we compute:
\begin{eqnarray*}
C_{\Delta}(\vec{x}, \vec{F}(\vec{x})) & = & C'(\vec{x}, \vec{F}(\vec{x}))\Delta(\vec{x}, \vec{F}(\vec{x}), D(\vec{x}, \vec{F}(\vec{x}))) + G(\vec{x})E'(\vec{x}, \vec{F}(\vec{x})) \\
& = & C'(\vec{x}, \vec{F}(\vec{x}))\left(\frac{1-E'(\vec{x}, \vec{F}(\vec{x}))}{D_{F}(\vec{x})} \right) + G(\vec{x})E'(\vec{x}, \vec{F}(\vec{x})) \\
& = & G(\vec{x})\left(1-E'(\vec{x}, \vec{F}(\vec{x}))\right) + G(\vec{x})E'(\vec{x}, \vec{F}(\vec{x})) \\
& = & G(\vec{x}).
\end{eqnarray*}
Finally, we give an upper bound on the size of a circuit for $C_{\Delta}$. The numerator and denominator of a rational function computed by a circuit of size $s$ can be computed individually by circuits of size $O(s)$. The basic idea, going back to Strassen \cite{strassenDivision}, is to replace each wire by a pair of wires explicitly encoding the numerator and denominator, to replace a multiplication gate by a pair of multiplication gates---since $(A/B) \times (C/D) = (A \times C)/(B \times D)$---and to replace an addition gate by the appropriate gadget encoding the expression $(A/B) + (C/D) = (AD + BC)/BD$. In particular, we may assume that a circuit computing $C'/D$ has the following form: it first computes $C'$ and $D$ separately, and then has a single division gate computing $C'/D$.
Thus from a circuit for $C$ we can get circuits of essentially the same size for both $C'$ and $D$. Given a circuit for $E = d \Delta + E'$, we get a circuit for $E'$ by setting $d=0$. We can then get a circuit for $d\Delta$ as $E - E'$. From a circuit for $d\Delta$ we can get a circuit for $\Delta$ alone by first dividing $d\Delta$ by $d$, and then eliminating that division using Strassen \cite{strassenDivision}. Combining these, we then easily construct a circuit for the \I-certificate $C_{\Delta}$ of size $\poly(|C|, |E|)$.
\end{proof}
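The construction in the proof can be traced on a toy instance of our own devising: take $F_1 = x^2 - 1$ (so $V(F_1) = \{\pm 1\}$), $G = x^3 - x$, and the R\I certificate with numerator $C' = \f_1 x^2$ and denominator $D = x$, which vanishes nowhere on $V(F_1)$. Here $E = d\,x - \f_1$ certifies the unsolvability of $x = x^2 - 1 = 0$, giving $\Delta = x$ and $E' = -\f_1$.

```python
import sympy as sp

x, f1 = sp.symbols('x f1')
F1 = x**2 - 1
G = x**3 - x

# RIPS certificate C = C'/D with numerator C' = f1*x**2, denominator D = x.
Cnum, D = f1 * x**2, x
assert sp.cancel(Cnum.subs(f1, F1) / D) == G

# IPS certificate E = d*x - f1 for unsolvability of D = F1 = 0:
# plugging in gives x*x - (x**2 - 1) = 1, so Delta = x and E' = -f1.
Delta, Eprime = x, -f1
assert sp.expand(D * Delta + Eprime.subs(f1, F1)) == 1

# The division-free certificate C_Delta = C'*Delta + G*E' from the proof.
C_Delta = sp.expand(Cnum * Delta + G * Eprime)  # simplifies to f1*x
assert C_Delta == f1 * x
assert sp.expand(C_Delta.subs(f1, F1)) == G
```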
\begin{example}
Returning to the inversion principle, we find that the certificate from Example~\ref{ex:inversion} only divided by $\det(X)$, which we've already remarked does not vanish \emph{anywhere} that $XY - I$ vanishes. By the preceding proposition, there is thus an \I-certificate for the inversion principle of polynomial size, \emph{if} there is an \I-certificate for the unsatisfiability of $\det(X) = 0 \land XY-I=0$ of polynomial size. In this case we can guess at the multiplicative inverse of $\det(X)$ modulo $XY-I$, namely $\det(Y)$, since we know that $\det(X)\det(Y) = 1$ if $XY=I$. Hence, we can try to find a certificate for the unsatisfiability of $\det(X) = 0 \land XY-I=0$ of the form
\[
\det(X) \det(Y) + (\text{something in the ideal } \langle (XY-I)_{i,j \in [n]} \rangle) = 1.
\]
In other words, we want a refutation-style \I-proof of the implication $XY = I \Rightarrow \det(X)\det(Y)=1$, which is another one of the Hard Matrix Identities. Such a refutation is exactly what Hrube\v{s} and Tzameret provide \cite{hrubesTzameretDet}.
\end{example}
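For small $n$ the required ideal membership can be confirmed computationally; the sympy sketch below (ours) checks for $n = 2$ that $\det(X)\det(Y) - 1$ reduces to zero modulo a Gr\"obner basis of the ideal generated by the entries of $XY - I$.

```python
import sympy as sp

a, b, c, d, e, f, g, h = sp.symbols('a b c d e f g h')
X = sp.Matrix([[a, b], [c, d]])
Y = sp.Matrix([[e, f], [g, h]])

# Generators of the ideal: the four entries of XY - I.
gens = list((X * Y - sp.eye(2)).expand())
gb = sp.groebner(gens, a, b, c, d, e, f, g, h, order='grevlex')

# det(X)*det(Y) - 1 lies in the ideal, so its normal form is zero.
_, rem = gb.reduce(sp.expand(X.det() * Y.det() - 1))
assert rem == 0
```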
In fact, for this particular example we could have anticipated that a rational certificate was unnecessary, because the ideal generated by $XY-I$ is prime and hence radical. (Indeed, the ring $\F[X, Y]/\langle XY - I \rangle$ is the coordinate ring of the algebraic group $\text{GL}_n(\F)$, which is an irreducible variety.)
Unfortunately, the Rational Ideal Proof System is not complete, as the next example shows.
\begin{example}
Let $F_1(x) = x^2$ and $G(x) = x$. Then $G(x) \in \sqrt{\langle F_1(\vec{x}) \rangle}$, but any R\I certificate would show $G(x) D(x) = F_1(x) H(x)$ for some $D, H$. Plugging in, we get $x D(x) = x^2 H(x)$, and by unique factorization we must have that $D(x) = x D'(x)$ for some $D'$. But then $D$ vanishes identically on $V(F_1)$, contrary to the definition of R\I-certificate.
\end{example}
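This failure is easy to confirm by direct computation (our own check): polynomial division of $x$ by $x^2$ leaves remainder $x$, and any denominator $D$ satisfying $xD = x^2 H$ is divisible by $x$, hence vanishes on all of $V(x^2)$.

```python
import sympy as sp

x = sp.Symbol('x')

# x is in the radical of <x**2>: its square is in the ideal ...
assert sp.rem(x**2, x**2, x) == 0

# ... but x is not in <x**2> itself: division leaves remainder x.
assert sp.rem(x, x**2, x) == x

# Any D with x*D = x**2*H equals x*H, so D vanishes on V(x**2) = {0}.
H = sp.Symbol('H')  # stand-in for an arbitrary polynomial value
D = sp.cancel(x**2 * H / x)  # = x*H
assert D.subs(x, 0) == 0
```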
To get a more complete proof system, we could generalize the definition of R\I to allow dividing by any polynomial that does not vanish to appropriate \emph{multiplicity} on each irreducible component (see, \eg, \cite[Section~3.6]{eisenbud} for the definition of multiplicity). For example, this would allow dividing by $x$ to show that $x \in \sqrt{\langle x^2 \rangle}$, but would disallow dividing by $x^2$ or any higher power of $x$. However, the proof of soundness of this generalized system is more involved, and the results of the next section seem not to hold for such a proof system. As of this writing we do not know of any better characterization of when R\I certificates exist other than the definition itself.
\begin{definition}
A R\I certificate is \definedWord{Hilbert-like} if the denominator doesn't involve the placeholder variables $\f_i$ and the numerator is $\vec{\f}$-linear. In other words, a Hilbert-like R\I certificate has the form $\frac{1}{D(\vec{x})}\sum_{i} \f_i G_i(\vec{x})$.
\end{definition}
\begin{lemma} \label{lem:RIPSgen2Hilb}
If there is a R\I certificate that $G \in \sqrt{\langle F_1, \dotsc, F_m \rangle}$, then there is a Hilbert-like R\I certificate proving the same.
\end{lemma}
\begin{proof}
Let $C = C'(\vec{x}, \vec{\f})/D(\vec{x}, \vec{\f})$ be a R\I certificate. First, we replace the denominator by $D_{F}(\vec{x}) = D(\vec{x}, \vec{F}(\vec{x}))$. Next, for each monomial appearing in $C'$, we replace all but one of the $\f_i$ in that monomial with the corresponding $F_i(\vec{x})$, reducing the monomial to one that is $\vec{\f}$-linear.
\end{proof}
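The flattening step can be illustrated on a single monomial (toy data of our own): a non-linear numerator monomial $\f_1 \f_2$ is replaced by the $\vec{\f}$-linear monomial $\f_1 F_2(\vec{x})$, without changing the value obtained after substituting the $F_i$.

```python
import sympy as sp

x1, x2, f1, f2 = sp.symbols('x1 x2 f1 f2')
F1, F2 = x1**2, x1 * x2  # toy generators

# A non-linear numerator monomial ...
mono = f1 * f2
# ... flattened by substituting F2 for f2, leaving it linear in the f's.
flat = mono.subs(f2, F2)
assert flat == f1 * x1 * x2

# Both agree once all placeholders are instantiated.
subst = {f1: F1, f2: F2}
assert sp.expand(mono.subs(subst)) == sp.expand(flat.subs(subst))
```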
As in the case of \I, we only know how to guarantee a size-efficient reduction under a sparsity condition. The following is the R\I-analogue of Proposition~\ref{prop:gen2Hilb}.
\begin{corollary} \label{cor:RIPSgen2Hilb}
If $C = C'/D$ is a R\I proof that $G \in \sqrt{\langle F_1, \dotsc, F_m \rangle}$, where the numerator $C'$ satisfies the same sparsity condition as in Proposition~\ref{prop:gen2Hilb}, then there is a Hilbert-like R\I proof that $G \in \sqrt{\langle F_1, \dotsc, F_m \rangle}$, of size $\poly(|C|)$.
\end{corollary}
\begin{proof}
We follow the proof of Lemma~\ref{lem:RIPSgen2Hilb}, making each step effective. As in the last paragraph of the proof of Proposition~\ref{prop:RIPS2IPS}, any circuit with divisions computing a rational function $C'/D$, where $C', D$ are relatively prime polynomials, can be converted into a circuit without divisions computing the pair $(C', D)$. By at most doubling the size of the circuit, we can assume that the subcircuits computing $C'$ and $D$ are disjoint. Now replace each $\f_i$ input to the subcircuit computing $D$ with a small circuit computing $F_i(\vec{x})$. Next, we apply sparse multivariate interpolation to the numerator $C'$ exactly as in Proposition~\ref{prop:gen2Hilb}. The resulting circuit now computes a Hilbert-like R\I certificate.
\end{proof}
\subsection{Towards lower bounds}
We begin by noting that, since the numerator and denominator can be computed separately (originally due to Strassen \cite{strassenDivision}, see the proof of Proposition~\ref{prop:RIPS2IPS} above for the idea), it suffices to prove a lower bound on, for each R\I-certificate, either the denominator or the numerator.
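Strassen's separation of numerator and denominator can be sketched as a tiny interpreter (our own illustration) that propagates (numerator, denominator) pairs through the gates of a circuit, so that no gate ever performs an actual division.

```python
import sympy as sp

class Frac:
    """A circuit value carried as a (numerator, denominator) pair of
    polynomials, so the simulated circuit is division-free."""
    def __init__(self, num, den=1):
        self.num, self.den = sp.sympify(num), sp.sympify(den)

    def __add__(self, other):      # A/B + C/D = (AD + CB) / (BD)
        return Frac(self.num * other.den + other.num * self.den,
                    self.den * other.den)

    def __mul__(self, other):      # (A/B) * (C/D) = (AC) / (BD)
        return Frac(self.num * other.num, self.den * other.den)

    def __truediv__(self, other):  # a division gate just swaps the pair
        return Frac(self.num * other.den, self.den * other.num)

x, y = sp.symbols('x y')
# A circuit with one division gate: x*y + 1/(x + y).
out = Frac(x) * Frac(y) + Frac(1) / (Frac(x) + Frac(y))
assert sp.cancel(out.num / out.den - (x * y + 1 / (x + y))) == 0
```

Each original gate becomes a constant number of division-free gates, which is why the numerator and denominator each have circuits of size $O(s)$.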
As in the case of Hilbert-like \I and general \I (recall Section~\ref{sec:syzygy}), the set of R\I certificates showing that $G \in \sqrt{\langle F_1, \dotsc, F_m \rangle}$ is a coset of a finitely generated ideal.
\begin{lemma}
The set of R\I-certificates showing that $G \in \sqrt{\langle F_1, \dotsc, F_m \rangle}$ is a coset of a finitely generated ideal in $R$, where $R$ is the localization of $\F[\vec{x}, \vec{\f}]$ at $\bigcup_i P_i$, where the union is over the prime ideals minimal over $\langle F_1, \dotsc, F_m \rangle$.
Similarly, the set of Hilbert-like R\I certificates is a coset of a finitely generated submodule of $R'^{m}$, where $R' = R \cap \F[\vec{x}]$ is the localization of $\F[\vec{x}]$ at $\bigcup_i (P_i \cap \F[\vec{x}])$.
\end{lemma}
\begin{proof}
The proof is essentially the same as that of Lemma~\ref{lem:fgsyz}, but with one more ingredient. Namely, we need to know that the rings $R$ and $R'$ are Noetherian. This follows from the fact that polynomial rings over fields are Noetherian, together with the general fact that any localization of a Noetherian ring is again Noetherian.
\end{proof}
Exactly as in the case of \I certificates, we define general and Hilbert-like R\I zero-certificates to be those for which, after plugging in the $F_i$ for $\f_i$, the resulting function is identically zero. In the case of Hilbert-like R\I, these are again syzygies of the $F_i$, but now syzygies with coefficients in the localization $R' = \F[\vec{x}]_{P_1 \cup \dotsb \cup P_k}$.
However, somewhat surprisingly, we seem to be able to go further in the case of R\I than \I, as follows. In general, the ring $\F[\vec{x}, \vec{\f}]_{P_1 \cup \dotsb \cup P_k}$ is a Noetherian \emph{semi-local} ring, that is, in addition to being Noetherian, it has finitely many maximal ideals, namely $P_1, \dotsc, P_k$. Ideals in and modules over semi-local rings enjoy properties not shared by ideals and modules over arbitrary rings.
In the special case when there is just a single prime ideal $P_1$, the localization is a \emph{local} ring (just one maximal ideal). We note that this is the case in the setting of the Inversion Principle, as the ideal generated by the $n^2$ polynomials $XY-I$ is prime. Local rings are in some ways very close to fields---if $R$ is a local ring with unique maximal ideal $P$, then $R/P$ is a field---and modules over local rings are much closer to vector spaces than are modules over more general rings. This follows from the fact that $M/PM$ is then in fact a vector space over the field $R/P$, together with Nakayama's Lemma (see, \eg, \cite[Corollary~4.8]{eisenbud} or \cite[Section~2.8]{reidCA}). One nice feature is that, if $M$ is a module over a local ring, then every minimal generating set has the same size, which is the dimension of $M/PM$ as an $R/P$-vector space. We also get that for every minimal generating set $b_1, \dotsc, b_k$ of $M$ (``$b$'' for ``basis'', even though the word basis is reserved for free modules), for each $m \in M$, any two representations $m = \sum_{i=1}^{k} r_i b_i$ with $r_i \in R$ differ by an element in $PM$. This near-uniqueness could be very helpful in proving lower bounds, as normal forms have proved useful in proving many circuit lower bounds.
\begin{open}
Does every R\I proof of the $n \times n$ Inversion Principle $XY = I \Rightarrow YX = I$ require computing a determinant? That is, is it the case that for every R\I certificate $C=C'/D$, some determinant of size $n^{\Omega(1)}$ reduces to one of $C, C', D$ by a $O(\log n)$-depth circuit reduction?
\end{open}
A positive answer to this question would imply that the Hard Matrix Identities do not have $O(\log n)$-depth R\I proofs unless the determinant can be computed by a polynomial-size algebraic formula. Since \I (and hence R\I) simulates Frege-style systems in a depth-preserving way (Theorem~\ref{thm:depth}), a positive answer would also imply that there are not ($\cc{NC}^1$-)Frege proofs of the Boolean Hard Matrix Identities unless the determinant has polynomial-size \emph{algebraic} formulas. Although answering this question may be difficult, the fact that we can even \emph{state} such a precise question on this matter should be contrasted with the preceding state of affairs regarding Frege proofs of the Boolean Hard Matrix Identities (which was essentially just a strong intuition that they should not exist unless the determinant is in $\cc{NC}^1$).
\section{Extended abstract} \label{sec:eabs}
\subsection{Introduction}
$\cc{NP}$ versus $\cc{coNP}$ is the very natural question of whether, for every graph that doesn't have a Hamiltonian path, there is a short proof of this fact. One of the arguments for the utility of proof complexity is that by proving lower bounds against stronger and stronger proof systems, we ``make progress'' towards proving $\cc{NP} \neq \cc{coNP}$. However, until now this argument has been more the expression of a philosophy or hope, as there is no known proof system for which lower bounds imply computational complexity lower bounds of any kind, let alone $\cc{NP} \neq \cc{coNP}$.
We remedy this situation by introducing a very natural algebraic proof system, which has tight connections to (algebraic) circuit complexity. We show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits ($\cc{VNP} \neq \cc{VP}$). Note that, prior to our work, essentially all implications went the opposite direction: a circuit complexity lower bound implying a proof complexity lower bound. We use this result to begin to explain why several long-open lower bound questions in proof complexity---lower bounds on Extended Frege, on $\cc{AC}^0[p]$-Frege, and on number-of-lines in Polynomial Calculus-style proofs---have been so apparently difficult.
\subsubsection{Background and Motivation}
\paragraph{Algebraic Circuit Complexity.} The most natural way to compute a polynomial function $f(x_1,\dotsc,x_n)$ is
with a sequence of instructions $g_1,\dotsc,g_m = f$, starting from the inputs $x_1, \dotsc, x_n$, and where each instruction $g_i$ is of the form $g_j \circ g_k$ for some $j,k < i$, where $\circ$ is either a linear combination or multiplication. Such computations are called algebraic circuits or straight-line programs. The goal of algebraic complexity is to understand the optimal asymptotic complexity of computing a given polynomial family $(f_n(x_1,\dotsc,x_{\poly(n)}))_{n=1}^{\infty}$, typically in terms of size and depth. In addition to the intrinsic interest in these questions, since Valiant's work \cite{valiant, valiantPerm, valiantProjections} algebraic complexity has become more and more important for Boolean computational complexity. Valiant argued that understanding algebraic complexity could give new intuitions that may lead to better understanding of other models of computation (see also \cite{Gat2}); several direct connections have been found between algebraic and Boolean complexity \cite{kabanetsImpagliazzo, burgisserCookValiant, jansenSanthanam, mulmuleyPRAM}; and the Geometric Complexity Theory Program (see, \eg, the survey \cite{gctCACM} and references therein) suggests how algebraic techniques might be used to resolve major Boolean complexity conjectures.
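As a concrete illustration (ours), here is a straight-line program in this model, computing $f(x_1, x_2) = (x_1 + x_2)^2 - x_1 x_2$ one instruction at a time.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# A straight-line program: each instruction combines earlier values,
# starting from the inputs, by a linear combination or a product.
g1 = x1 + x2   # linear-combination gate
g2 = g1 * g1   # multiplication gate: (x1 + x2)**2
g3 = x1 * x2   # multiplication gate
g4 = g2 - g3   # output gate: f = (x1 + x2)**2 - x1*x2

assert sp.expand(g4) == x1**2 + x1*x2 + x2**2
```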
Two central functions in this area are the determinant and permanent polynomials,
which are fundamental both because of their prominent role in many areas of mathematics and because they are complete for various natural complexity classes. In particular, the permanent of $\{0,1\}$-matrices is $\cc{\# P}$-complete, and the permanent of arbitrary matrices is $\cc{VNP}$-complete. Valiant's Permanent versus Determinant Conjecture \cite{valiant} states that the permanent of an $n \times n$ matrix, as a polynomial in $n^2$ variables, cannot be written as the determinant of any polynomially larger matrix all of whose entries are variables or constants. In some ways this is an algebraic analog of $\cc{P} \neq \cc{NP}$, although it is in fact much closer to $\cc{FNC}^2 \neq \cc{\# P}$. In addition to this analogy, the Permanent versus Determinant Conjecture is also known to be a formal consequence of the nonuniform lower bound $\cc{NP} \not\subseteq \cc{P/poly}$ \cite{burgisserCookValiant}, and is thus thought to be an important step towards showing $\cc{P} \neq \cc{NP}$.
Unlike in Boolean circuit complexity, (slightly) non-trivial lower bounds for the size of algebraic circuits are known \cite{strassenDegree,baurStrassen}. Their methods, however, only give lower bounds up to $\Omega (n\log n)$. Moreover, their methods are based on a degree analysis of certain algebraic varieties and do not give lower bounds for polynomials of constant degree. Recent exciting work \cite{agrawalVinay, koiranChasm, tavenas} has shown that polynomial-size algebraic circuits computing functions of polynomial degree can in fact be computed by subexponential-size depth 4 algebraic circuits. Thus, strong enough lower bounds for depth 4 algebraic circuits for the permanent would already prove $\cc{VP} \neq \cc{VNP}$.
\medskip
\paragraph{Proof Complexity.} Despite considerable progress obtaining super-polynomial lower bounds for many weak proof systems (resolution, cutting planes, bounded-depth Frege systems), there has been essentially no progress in the last 25 years for stronger proof systems such as Extended Frege systems or Frege systems. More surprisingly, no nontrivial lower bounds are known for the seemingly weak $\cc{AC}^0[p]$-Frege system. Note that in contrast, the analogous result in circuit complexity---proving super-polynomial $\cc{AC}^0[p]$ lower bounds for an explicit function---was resolved by Smolensky over 25 years ago \cite{smolensky}. To date, there has been no satisfactory explanation for this state of affairs.
In proof complexity, there are no known formal barriers such as relativization \cite{bakerGillSolovay}, Razborov--Rudich natural proofs \cite{razborovRudich}, or algebrization \cite{aaronsonWigderson} that exist in Boolean function complexity. Moreover, there has not even been progress by way of conditional lower bounds. That is, trivially $\cc{NP} \neq \cc{coNP}$ implies superpolynomial lower bounds for $\cc{AC}^0[p]$-Frege, but we know of no weaker complexity assumption that implies such lower bounds. The only formal implication in this direction shows that certain circuit lower bounds imply lower bounds for proof systems that admit feasible interpolation, but unfortunately only weak proof systems (not Frege nor even $\cc{AC}^0$-Frege) have this property \cite{Bonet,Bonet2}. In the converse direction, there are essentially no implications at all. For example, we do not know if $\cc{AC}^0[p]$-Frege lower bounds---nor even Frege nor Extended Frege lower bounds---imply any nontrivial circuit lower bounds.
\subsubsection{Our Results}
In this paper, we define a simple and natural proof system that we call the Ideal Proof System (IPS)
based on Hilbert's Nullstellensatz. Our system is similar in spirit to related
algebraic proof systems that have been previously studied, but is different in a crucial way that we explain below.
Given a set of polynomials $F_1,\ldots,F_m$ in $n$ variables $x_1,\ldots,x_n$ over a field $\F$ without a
common zero over the algebraic closure of $\F$, Hilbert's Nullstellensatz says that there exist polynomials
$G_1,\ldots,G_m \in \F[x_1,\ldots,x_n]$ such that $\sum F_i G_i =1$, \ie, that $1$ is in the ideal generated by the $F_i$. In the Ideal Proof System, we introduce new variables $\f_i$ which serve as placeholders into which the original polynomials $F_i$ will
eventually be substituted:
\begin{definition}[Ideal Proof System] \label{def:IPS}
An \definedWord{\I certificate} that a system of $\F$-polynomial equations
$F_1(\vec{x})=F_2(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$ is unsatisfiable over $\overline{\F}$ is
a polynomial $C(\vec{x}, \vec{\f})$ in the variables $x_1,\ldots,x_n$ and $\f_1,\ldots,\f_m$ such that
\begin{enumerate}
\item \label{condition:ideal} $C(x_1,\dotsc,x_n,\vec{0}) = 0$, and
\item \label{condition:nss} $C(x_1,\dotsc,x_n,F_1(\vec{x}),\dotsc,F_m(\vec{x})) = 1$.
\end{enumerate}
The first condition is equivalent to $C$ being in the ideal generated by $\f_1, \dotsc, \f_m$, and the two conditions together therefore imply that $1$ is in the ideal generated by the $F_i$, and hence that $F_1(\vec{x}) = \dotsb = F_m(\vec{x})=0$ is unsatisfiable.
An \definedWord{\I proof} of the unsatisfiability of the polynomials $F_i$ is an $\F$-algebraic circuit on inputs $x_1,\ldots,x_n,\f_1,\ldots,\f_m$ computing some \I certificate of unsatisfiability.
\end{definition}
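As a minimal example (ours, not from the text): the system $x = 0$, $x - 1 = 0$ is unsatisfiable over any field, and $C(x, \f_1, \f_2) = \f_1 - \f_2$ is an \I certificate, as the sympy check below confirms.

```python
import sympy as sp

x, p1, p2 = sp.symbols('x p1 p2')  # p1, p2 are the placeholder variables
F1, F2 = x, x - 1                  # an unsatisfiable system: x = x - 1 = 0

C = p1 - p2  # candidate IPS certificate

# Condition 1: C vanishes when all placeholders are set to 0.
assert C.subs({p1: 0, p2: 0}) == 0

# Condition 2: substituting F_i for the placeholders yields 1.
assert sp.expand(C.subs({p1: F1, p2: F2})) == 1
```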
For any class $\mathcal{C}$ of polynomial families, we may speak of $\mathcal{C}$-\I proofs of a family of systems of equations $(\mathcal{F}_n)$ where $\mathcal{F}_n$ is $F_{n,1}(\vec{x}) = \dotsb = F_{n,\poly(n)}(\vec{x}) = 0$. When we refer to \I without further qualification, we mean $\cc{VP}$-\I, that is, the family of \I proofs should be computed by circuits of polynomial size \emph{and polynomial degree}, unless specified otherwise.
The Ideal Proof System (without any size bounds) is easily shown to be sound, and its completeness follows from the Nullstellensatz.
We typically consider \I as a propositional proof system by translating a CNF tautology $\varphi$ into a system of equations. We translate a clause $\kappa$ of $\varphi$ into a single algebraic equation $F(\vec{x})$ as follows: $x \mapsto 1-x$, $\neg x \mapsto x$, $x \vee y \mapsto xy$. This translation has the property that a $\{0,1\}$ assignment satisfies $\kappa$ if and only if it satisfies the equation $F = 0$. Let $\kappa_1, \dotsc, \kappa_m$ denote all the clauses of $\varphi$, and let $F_i$ be the corresponding polynomials. Then the system of equations we consider is $F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = x_1^2 - x_1 = \dotsb = x_n^2 - x_n = 0$. The latter equations force any solution to this system of equations to be $\{0,1\}$-valued. Despite our indexing here, when we speak of the system of equations corresponding to a tautology, we always assume that the $x_i^2 - x_i$ are among the equations.
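The translation is easy to mechanize; in the sketch below (helper names are ours) a clause is encoded as a product of translated literals, and we check that a $\{0,1\}$ assignment satisfies the clause if and only if the corresponding polynomial vanishes.

```python
import itertools
import sympy as sp

def clause_poly(literals):
    """Translate a clause, given as (variable, is_positive) pairs:
    literal x -> 1 - x, literal NOT x -> x, disjunction -> product."""
    poly = sp.Integer(1)
    for v, positive in literals:
        poly *= (1 - v) if positive else v
    return poly

x, y = sp.symbols('x y')
# The clause (x OR NOT y) becomes (1 - x) * y.
F = clause_poly([(x, True), (y, False)])
assert sp.expand(F - (1 - x) * y) == 0

# Over {0,1}, F vanishes exactly on the satisfying assignments.
for vx, vy in itertools.product([0, 1], repeat=2):
    satisfied = bool(vx) or not vy
    assert (F.subs({x: vx, y: vy}) == 0) == satisfied
```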
Like previously defined algebraic systems \cite{BIKPP,CEI,pitassi96,pitassiICM}, proofs in our system can be
checked in randomized polynomial time.
The key difference between our system and previously studied
ones is that those systems are axiomatic in the sense that they require that \emph{every}
sub-computation (derived polynomial) be in the ideal generated by the original polynomial equations $F_i$, and thus be a sound consequence of the equations $F_1=\dotsb=F_m=0$.
In contrast our system has no such requirement; an \I proof can compute potentially
unsound sub-computations (whose vanishing does not follow from $F_1=\dotsb=F_m=0$), as long as the \emph{final polynomial} is in the ideal
generated by the equations. This key difference allows \I proofs to be
\emph{ordinary algebraic circuits}, and thus nearly all results in
algebraic circuit complexity apply directly to the Ideal Proof System. To quote the tagline of a common US food chain, the Ideal Proof System is a ``No rules, just right'' proof system.
Our first main theorem shows one of the advantages of this close connection with algebraic circuits. To the best of our knowledge, this is the first implication showing that a proof complexity lower bound implies any sort of computational complexity lower bound.
\begin{VNPthm}
Super-polynomial lower bounds for the Ideal Proof System imply that the permanent does not have polynomial-size
algebraic circuits, that is, $\cc{VNP} \neq \cc{VP}$.
\end{VNPthm}
From the proof of this result, together with one of our simulation results (Proposition~\ref{prop:pitassi}), we also get:
\begin{corollary} \label{cor:PC}
Super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs imply the Permanent versus Determinant Conjecture.\footnote{Although Corollary~\ref{cor:PC} may seem to be saying that lower bounds on PC imply a circuit lower bound, this is not precisely the case, because complexity in PC is emphatically not measured by the number of lines, but rather by the total number of monomials appearing in a PC proof. This is true both definitionally and in practice, in that all previous papers on PC use the number-of-monomials complexity measure.}
\end{corollary}
Under a reasonable assumption on polynomial identity testing (PIT), which we discuss further below, we are able to show that Extended Frege is equivalent to the Ideal Proof System. Extended Frege (EF) is the strongest natural deduction-style propositional proof system that has been proposed, and is the proof complexity analog of $\cc{P/poly}$ (that is, Extended Frege = $\cc{P/poly}$-Frege).
\begin{EFthm}
Let $K$ be a family of polynomial-size Boolean circuits for PIT such that the PIT axioms for $K$ (see Definition~\ref{def:PITaxioms}) have polynomial-size EF proofs. Then EF polynomially simulates \I, and hence EF and \I are polynomially equivalent.
\end{EFthm}
Under this assumption about PIT, Theorems~\ref{thm:VNP} and \ref{thm:EF} in combination suggest a precise reason that proving lower bounds on Extended Frege is so difficult, namely, that doing so implies $\cc{VP} \neq \cc{VNP}$. Theorem~\ref{thm:EF} also suggests that to make progress toward proving lower bounds in proof complexity, it may be necessary to prove lower bounds for the
Ideal Proof System, which we feel is more natural, and creates the possibility of harnessing tools from algebra, representation theory, and algebraic circuit complexity. We give a specific suggestion of how to apply these tools towards proof complexity lower bounds in Section~\ref{sec:syzygy}.
\begin{remark} \label{rmk:PIT}
Given that $PIT \in \cc{P}$ is known to imply lower bounds, one may wonder if the combination of the above two theorems really gives any explanation at all for the difficulty of proving lower bounds on Extended Frege. There are at least two reasons that it does.
First, the best lower bound known to follow from $PIT \in \cc{P}$ is an algebraic circuit-size lower bound on an integer polynomial that can be evaluated in $\cc{NEXP} \cap \cc{coNEXP}$ \cite{jansenSanthanam} (via personal communication we have learned that Impagliazzo and Williams have also proved similar results), whereas our conclusion is a lower bound on algebraic circuit-size for an integer polynomial computable in $\cc{\# P} \subseteq \cc{PSPACE}$.
Second, the hypothesis that our PIT axioms can be proven efficiently in Extended Frege seems to be somewhat orthogonal to, and may be no stronger than, the widely-believed hypothesis that PIT is in $\cc{P}$. As Extended Frege is a nonuniform proof system, efficient Extended Frege proofs of our PIT axioms are unlikely to have any implications about the uniform complexity of PIT (and given that we already know unconditionally that PIT is in $\cc{P/poly}$, uniformity is what the entire question of derandomizing PIT is about). In the opposite direction, it's a well-known observation in proof complexity that nearly all natural uniform polynomial-time algorithms have feasible (Extended Frege) correctness proofs. If this phenomenon doesn't apply to PIT, it would be interesting for both proof complexity and circuit complexity, as it indicates the difficulty of proving that PIT is in $\cc{P}$. \rmkqed
\end{remark}
Although PIT has long been a central problem of study in computational complexity---both because of its importance in many algorithms, as well as its strong connection to circuit lower bounds---our theorems highlight the importance of PIT in proof complexity. Next we prove that Theorem~\ref{thm:EF} can be scaled down to obtain similar results for weaker Frege systems, and discuss some of its more striking consequences.
\begin{AC0thm}
Let $\mathcal{C}$ be any of the standard circuit classes $\cc{AC}^k, \cc{AC}^k[p], \cc{ACC}^k, \cc{TC}^k, \cc{NC}^k$. Let $K$ be a family of polynomial-size Boolean circuits for PIT (not necessarily in $\mathcal{C}$) such that the PIT axioms for $K$ have polynomial-size $\mathcal{C}$-Frege proofs. Then $\mathcal{C}$-Frege is polynomially equivalent to \I, and consequently to Extended Frege as well.
\end{AC0thm}
Theorem~\ref{thm:AC0} also highlights the importance of our PIT axioms for getting $\cc{AC}^0[p]$-Frege lower bounds, which has been an open question for nearly thirty years. (For even weaker systems, Theorem~\ref{thm:AC0} in combination with known results yields an unconditional lower bound on $\cc{AC}^0$-Frege proofs of the PIT axioms.) In particular,
we are in the following win-win scenario:
\begin{AC0pcor}
For any $d$, either:
\begin{itemize}
\item There are polynomial-size $\cc{AC}^0[p]$-Frege proofs of the depth $d$ PIT axioms, in which case \emph{any superpolynomial lower bounds on $\cc{AC}^0[p]$-Frege imply $\cc{VNP}_{\F_p}$ does not have polynomial-size depth $d$ algebraic circuits}, thus explaining the difficulty of obtaining such lower bounds, or
\item There are no polynomial-size $\cc{AC}^0[p]$-Frege proofs of the depth $d$ PIT axioms, in which case we've gotten $\cc{AC}^0[p]$-Frege lower bounds.
\end{itemize}
\end{AC0pcor}
Finally, in Section~\ref{sec:syzygy} we suggest a new framework for proving lower bounds for
the Ideal Proof System which we feel has promise. Along the way, we make precise the difference in difficulty between proof complexity lower bounds (on \I, which may also apply to Extended Frege via Theorem~\ref{thm:EF}) and algebraic circuit lower bounds. In particular, the set of \emph{all $\I$-certificates} for a given unsatisfiable system of equations is, in a certain precise sense, ``finitely generated.'' We suggest how one might take advantage of this finite generation to transfer techniques from algebraic circuit complexity to prove lower bounds on $\I$, and consequently on Extended Frege (since $\I$ p-simulates Extended Frege unconditionally), giving hope for the long-sought length-of-proof lower bounds on an algebraic proof system. We hope to pursue this approach in future work.
\subsubsection{Related Work}
We will see in Section~\ref{sec:others} that many previously studied proof systems can be p-simulated by \I, and furthermore can be viewed simply as different complexity measures on $\I$ proofs, or as $\mathcal{C}$-\I for certain classes $\mathcal{C}$. In particular, the Nullstellensatz system \cite{BIKPP}, the Polynomial Calculus (or \Grobner) proof system \cite{CEI}, and Polynomial Calculus with Resolution \cite{PCR} are all particular measures on \I, and Pitassi's previous algebraic systems \cite{pitassi96, pitassiICM} are subsystems of \I.
Raz and Tzameret \cite{razTzameret} introduced various multilinear algebraic proof systems. Although their systems are not so easily defined in terms of \I, the Ideal Proof System nonetheless p-simulates all of their systems. Amongst other results, they show that a super-polynomial separation between two variants of their system---one representing lines by multilinear circuits, and one representing lines by general algebraic circuits---would imply a super-polynomial separation between general and multilinear circuits computing multilinear polynomials. However, they only get implications to lower bounds on multilinear circuits rather than general circuits, and they do not prove a statement analogous to our Theorem~\ref{thm:VNP}, that lower bounds on a single system imply algebraic circuit lower bounds.
\subsubsection{Outline}
The remainder of Section~\ref{sec:eabs} gives proofs of some foundational results, and summarizes the rest of the paper, giving detailed versions of all statements and discussing their proofs and significance. In Section~\ref{sec:eabs} many proofs are only sketched or are delayed until later in the paper, but all proofs of all results are present either in Section~\ref{sec:eabs} or in Sections~\ref{sec:simulations}--\ref{sec:PIT}.
We start in Section~\ref{sec:general}, by proving several basic facts about \I (some proofs are deferred to Section~\ref{sec:simulations}). We discuss the relationship between \I and previously studied proof systems.
We also highlight several consequences of results from algebraic complexity theory for the Ideal Proof System, such as division elimination \cite{strassenDivision} and the chasms at depth 3 \cite{GKKSchasm,tavenas} and 4 \cite{agrawalVinay,koiranChasm,tavenas}.
In Section~\ref{sec:VNPeabs}, we outline the proof that lower bounds on \I imply algebraic circuit lower bounds (Theorem~\ref{thm:VNP}; full proof in Section~\ref{sec:VNP}). We also show how this result gives as a corollary a new, simpler proof that $\cc{NP} \not\subseteq \cc{coMA} \Rightarrow \cc{VNP}^0 \neq \cc{VP}^0$. In Section~\ref{sec:PITeabs} we introduce our PIT axioms in detail and outline the proof of Theorems~\ref{thm:EF} and \ref{thm:AC0} (full proofs in Section~\ref{sec:EF}). We also discuss in detail many variants of Theorem~\ref{thm:AC0} and their consequences, as briefly mentioned above. In Section~\ref{sec:syzygy} we suggest a new framework for transferring techniques from algebraic circuit complexity to (algebraic) proof complexity lower bounds. Finally, in Section~\ref{sec:conclusion} we gather a long list of open questions raised by our work, many of which we believe may be quite approachable.
Appendix~\ref{app:background} contains more complete preliminaries. In Appendices~\ref{app:RIPS} and \ref{app:geom} we introduce two variants of the Ideal Proof System---one of which allows certificates to be rational functions and not only polynomials, and one of which has a more geometric flavor---and discuss their relationship to \I. These systems further suggest that tools from geometry and algebra could potentially be useful for understanding the complexity of various propositional tautologies and more generally the complexity of individual instances of $\cc{NP}$-complete problems.
\subsection{A few preliminaries}
In this section we cover the bare-bones preliminaries that we think may be less familiar to some of our readers. Remaining background material on algebraic complexity, proof complexity, and commutative algebra can be found in Appendix~\ref{app:background}. As general references, we refer the reader to \Burgisser--Clausen--Shokrollahi \cite{BCS} and the surveys \cite{shpilkaYehudayoff,chenKayalWigderson} for algebraic complexity, to \Krajicek \cite{krajicekBook} for proof complexity, and to any of the standard books \cite{eisenbud,atiyahMacdonald,matsumura,reidCA} for commutative algebra.
\subsubsection{Algebraic Complexity} \label{sec:prelim:comp}
Over a ring $R$, $\cc{VP}_{R}$ is the class of families $f=(f_n)_{n=1}^{\infty}$ of formal polynomials---that is, considered as symbolic polynomials, rather than as functions---$f_n$ such that $f_n$ has $\poly(n)$ input variables, is of $\poly(n)$ degree, and can be computed by algebraic circuits over $R$ of $\poly(n)$ size. $\cc{VNP}_{R}$ is the class of families $g$ of polynomials $g_n$ such that $g_n$ has $\poly(n)$ input variables and is of $\poly(n)$ degree, and can be written as
\[
g_n(x_1,\dotsc,x_{\poly(n)}) = \sum_{\vec{e} \in \{0,1\}^{\poly(n)}} f_n(\vec{e}, \vec{x})
\]
for some family $(f_n) \in \cc{VP}_{R}$.
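For concreteness, the permanent family---which is $\cc{VNP}$-complete under p-projections \cite{valiant}---illustrates the definition. One way to exhibit $\mathrm{perm}_n \in \cc{VNP}$ (our own choice of verifier polynomial $f_n$; any polynomial-size $f_n$ enforcing that $\vec{e}$ is a permutation matrix would do) is:

```latex
\[
  \mathrm{perm}_n(\vec{x})
  \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{i,\sigma(i)}
  \;=\; \sum_{\vec{e} \in \{0,1\}^{n^2}} f_n(\vec{e}, \vec{x}),
  \quad\text{where}\quad
  f_n(\vec{e}, \vec{x})
  \;=\; \prod_{i=1}^{n} \Bigl( \sum_{j=1}^{n} e_{ij} x_{ij} \Bigr)
  \cdot \prod_{i=1}^{n} \prod_{j < j'} (1 - e_{ij} e_{ij'})
  \cdot \prod_{j=1}^{n} \prod_{i < i'} (1 - e_{ij} e_{i'j}).
\]
```

On Boolean $\vec{e}$, the two rightmost products vanish unless no row or column of $\vec{e}$ contains two $1$s, and the first product vanishes unless every row contains at least one $1$; together these force $\vec{e}$ to be a permutation matrix, in which case $f_n(\vec{e}, \vec{x}) = \prod_i x_{i,\sigma(i)}$. Since $f_n$ has $\poly(n)$ size and degree, $(f_n) \in \cc{VP}^0$.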
A family of algebraic circuits is said to be \definedWord{constant-free} if the only constants used in the circuit are $\{0,1,-1\}$. Other constants can be used, but must be built up using algebraic operations, which then count towards the size of the circuit. We note that over a fixed finite field $\F_q$, $\cc{VP}^0_{\F_q} = \cc{VP}_{\F_q}$, since there are only finitely many possible constants. Consequently, $\cc{VNP}^0_{\F_q} = \cc{VNP}_{\F_q}$ as well. Over the integers, $\cc{VP}^0_{\Z}$ coincides with those families in $\cc{VP}_{\Z}$ that are computable by algebraic circuits of polynomial total \emph{bit-size}: note that any integer of polynomial bit-size can be constructed by a constant-free circuit by using its binary expansion $b_{n-1} \dotsb b_0 = \sum_{i=0}^{n-1} b_i 2^i$, and computing the powers of $2$ by linearly many successive multiplications. A similar trick shows that over the algebraic closure $\overline{\F}_p$ of a finite field, $\cc{VP}^0_{\overline{\F}_p}$ coincides with those families in $\cc{VP}_{\overline{\F}_p}$ that are computable by algebraic circuits of polynomial total bit-size, or equivalently where the constants they use lie in subfields of $\overline{\F}_p$ of total size bounded by $2^{n^{O(1)}}$. (Recall that $\F_{p^{a}}$ is a subfield of $\F_{p^b}$ whenever $a | b$, and that the algebraic closure $\overline{\F}_p$ is just the union of $\F_{p^{a}}$ over all positive integers $a$.)
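As a small worked instance of the binary-expansion trick just described (our own example), the constant $13 = (1101)_2$ can be built constant-free with a handful of gates:

```latex
\[
  c_2 = 1 + 1, \qquad
  c_4 = c_2 \cdot c_2, \qquad
  c_8 = c_4 \cdot c_2, \qquad
  13 = c_8 + c_4 + 1 .
\]
```

In general an integer $N$ requires $O(\log N)$ gates: build $2, 4, \dotsc, 2^{\lfloor \log N \rfloor}$ by repeated doubling, then add the powers selected by the binary digits of $N$.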
\subsubsection{Proof Complexity} \label{sec:prelim:proof}
In brief, a \definedWord{proof system} for a language $L \in \cc{coNP}$ is a nondeterministic algorithm for $L$, or equivalently a deterministic polynomial-time verifier $P$ such that $x \in L \Leftrightarrow (\exists y)[P(x,y)=1]$, and we refer to any such $y$ as a $P$-proof that $x \in L$.\footnote{This notion is essentially due to Cook and Reckhow \cite{cookReckhow}; although their definition was formalized slightly differently, it is essentially equivalent to the one we give here.} We say that $P$ is \definedWord{polynomially bounded} if for every $x \in L$ there is a $P$-proof of length polynomially bounded in $|x|$: $|y| \leq \poly(|x|)$. We will generally be considering proof systems for the $\cc{coNP}$-complete language TAUT consisting of all propositional tautologies; there is a polynomially bounded proof system for TAUT if and only if $\cc{NP} = \cc{coNP}$.
Given two proof systems $P_1$ and $P_2$ for the same language $L \in \cc{coNP}$, we say that $P_1$ \definedWord{polynomially simulates} or \definedWord{p-simulates} $P_2$ if there is a polynomial-time function $f$ that transforms $P_2$-proofs into $P_1$-proofs, that is, $P_2(x,y)=1 \Leftrightarrow P_1(x,f(y))=1$. We say that $P_1$ and $P_2$ are \definedWord{polynomially equivalent} or \definedWord{p-equivalent} if each p-simulates the other. (This is the proof complexity version of Levin reductions between $\cc{NP}$ problems.)
For TAUT (or UNSAT), there are a variety of standard and well-studied proof systems. In this paper we will be primarily concerned with Frege---a standard, school-style line-by-line deductive system---and its variants such as Extended Frege (EF) and $\cc{AC}^0$-Frege.
Bounded-depth Frege, or $\cc{AC}^0$-Frege, consists of Frege proofs with the additional restriction that each formula appearing in the proof has bounded depth \emph{syntactically} (the \emph{syntactic} nature of this condition is crucial: since every formula appearing in a proof is a tautology, semantically all such formulas are the constant-true function and can be computed by trivial circuits). As with $\cc{AC}^0$ circuits, $\cc{AC}^0$-Frege has rules for handling unbounded fan-in AND and OR connectives, in addition to negations.
For almost any syntactically defined class of circuits $\mathcal{C}$, one can similarly define $\mathcal{C}$-Frege. For example, $\cc{NC}^1$-Frege is p-equivalent to Frege. However, despite the seeming similarities, there are some differences between a circuit class and its corresponding Frege system. Exponential lower bounds are known for $\cc{AC}^0$-Frege \cite{BIKPPW}; these use the Switching Lemma, as in lower bounds on $\cc{AC}^0$ circuits, but in a more involved way. However, unlike the case of $\cc{AC}^0[p]$ circuits, for which we have exponential lower bounds \cite{razborov, smolensky}, essentially no nontrivial lower bounds are known for $\cc{AC}^0[p]$-Frege.
Extended Frege systems generalize Frege systems by allowing, in addition to all of the Frege rules, a new axiom schema of the form $y \leftrightarrow A$, where $A$ can be any formula, and $y$ is a new variable not occurring in $A$. Whereas a polynomial-size Frege proof consists of polynomially many lines, each of which must be a polynomial-size formula, the new axiom schema allows a polynomial-size EF proof to consist of polynomially many lines, each of which can essentially be a polynomial-size circuit (one can think of the new variables introduced by this axiom schema as names for the gates of a circuit: once a formula is named by a single variable, it can be reused without creating another copy of the whole formula). In particular, a natural definition of $\cc{P/poly}$-Frege is equivalent to Extended Frege. Extended Frege is the strongest natural system known for proving propositional tautologies. One may also consider seemingly much stronger systems such as Peano Arithmetic or ZFC, but it is not known whether these systems can prove Boolean tautologies (with no quantifiers) any more efficiently than Extended Frege.
We define all of the algebraic systems we consider in Section~\ref{sec:others} below.
\subsection{Foundational results} \label{sec:general}
\subsubsection{Relation with \texorpdfstring{$\cc{coMA}$}{coMA}}
\begin{proposition} \label{prop:coMA}
For any field $\F$, if every propositional tautology has a polynomial-size constant-free $\I_{\F}$-proof, then $\cc{NP} \subseteq \cc{coMA}$, and hence the polynomial hierarchy collapses to its second level.
\end{proposition}
If we wish to drop the restriction of ``constant-free'' (which, recall, is no restriction at all over a finite field), we may do so either by using the Blum--Shub--Smale analogs of $\cc{NP}$ and $\cc{coMA}$ using essentially the same proof, or over fields of characteristic zero using the Generalized Riemann Hypothesis (Proposition~\ref{prop:coMAGRH}).
\begin{proof}
Merlin nondeterministically guesses the polynomial-size constant-free \I proof, and then Arthur must check conditions (\ref{condition:ideal}) and (\ref{condition:nss}) of Definition~\ref{def:IPS}. (We need constant-free so that the algebraic proof has polynomial bit-size and thus can in fact be guessed by a Boolean Merlin.) Both conditions of Definition~\ref{def:IPS} are instances of Polynomial Identity Testing (PIT), which can thus be solved in randomized polynomial time by the standard Schwartz--Zippel--DeMillo--Lipton $\cc{coRP}$ algorithm for PIT.
\end{proof}
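To make Arthur's check concrete, here is a minimal sketch of the Schwartz--Zippel-style randomized zero test (our own illustration, not part of the formal proof: the polynomial is given as a black-box evaluation oracle over a large prime field, rather than as an encoded circuit, and the names below are our own):

```python
import random

def is_identically_zero(poly_eval, n_vars, p=2**61 - 1, trials=20):
    """Schwartz-Zippel randomized PIT (coRP): evaluate the polynomial,
    given as a black-box oracle over F_p, at uniformly random points.
    A nonzero polynomial of degree d vanishes at a random point of
    F_p^n with probability at most d/p, so for d << p a few trials
    detect nonzeroness with overwhelming probability."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(n_vars)]
        if poly_eval(point, p) != 0:
            return False  # a nonzero evaluation certifies f != 0
    return True  # all evaluations were 0: f == 0 with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is the zero polynomial ...
zero_poly = lambda v, p: ((v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2) % p
# ... while xy - 1 is not.
nonzero_poly = lambda v, p: (v[0] * v[1] - 1) % p
```

Note that the test is one-sided: a ``nonzero'' answer is always correct, while a ``zero'' answer errs only with negligible probability, matching the $\cc{coRP}$ nature of the check.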
\subsubsection{Chasms, depth reduction, and other circuit transformations}
Recently, many strong depth reduction theorems have been proved for circuit complexity \cite{agrawalVinay,koiranChasm,GKKSchasm,tavenas}, which have been called ``chasms'' since Agrawal and Vinay \cite{agrawalVinay}. In particular, they imply that sufficiently strong lower bounds against depth 3 or 4 circuits imply super-polynomial lower bounds against arbitrary circuits. Since an \I proof is just a circuit, these depth reduction chasms apply equally well to \I proof size. Note that it was not clear to us how to adapt the proofs of these chasms to the type of circuits used in the Polynomial Calculus or other previous algebraic systems \cite{pitassiICM}, and indeed this was part of the motivation to move to our more general notion of \I proof.
\begin{observation}[Chasms for \I proof size] \label{obs:chasm}
If a system of $n^{O(1)}$ polynomial equations in $n$ variables has an \I proof of unsatisfiability of size $s$ and (semantic) degree $d$, then it also has:
\begin{enumerate}
\item An $O(\log d\,(\log s + \log d))$-depth \I proof of size $\poly(ds)$ (follows from Valiant--Skyum--Berkowitz--Rackoff \cite{VSBR});
\item A depth 4 \I formula proof of size $n^{O(\sqrt{d})}$ (follows from Koiran \cite{koiranChasm}) or a depth 4 \I proof of size $2^{O(\sqrt{d \log(ds) \log n})}$ (follows from Tavenas \cite{tavenas});
\item (Over fields of characteristic zero) A depth 3 \I proof of size $2^{O(\sqrt{d \log d \log n \log s})}$ (follows from Gupta, Kayal, Kamath, and Saptharishi \cite{GKKSchasm}) or even $2^{O(\sqrt{d \log n \log s})}$ (follows from Tavenas \cite{tavenas}). \rmkqed
\end{enumerate}
\end{observation}
This observation helps explain why size lower bounds for algebraic proofs for the stronger notion of size---namely number of lines, used here and in Pitassi \cite{pitassi96}, rather than number of monomials---have been difficult to obtain. This also suggests that size lower bounds for \I proofs in restricted circuit classes would be interesting, even for restricted kinds of depth 3 circuits.
Similarly, since \I proofs are just circuits, any \I certificate family of polynomially bounded degree that is computed by a polynomial-size family of algebraic circuits with divisions can also be computed by a polynomial-size family of algebraic circuits without divisions (follows from Strassen \cite{strassenDivision}). We note, however, that one could in principle consider \I certificates that were not merely polynomials, but even rational functions, under suitable conditions; divisions for computing these cannot always be eliminated. We discuss this ``Rational Ideal Proof System,'' the exact conditions needed, and when such divisions can be effectively eliminated in Appendix~\ref{app:RIPS}.
\subsubsection{Simulations and definitions of other algebraic proof systems in terms of \texorpdfstring{$\I$}{\Itext}}
\label{sec:others}
Previously studied algebraic proof systems can be viewed as particular complexity measures on the Ideal Proof System, including the Polynomial Calculus (or \Grobner) proof system (PC) \cite{CEI}, Polynomial Calculus with Resolution (PCR) \cite{PCR}, the Nullstellensatz proof system \cite{BIKPP}, and Pitassi's algebraic systems \cite{pitassi96, pitassiICM}, as we explain below.
Before explaining these, we note that although the Nullstellensatz says that if $F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$ is unsatisfiable then there always exists a certificate that is linear in the $\f_i$---that is, of the form $\sum \f_i G_i(\vec{x})$---our definition of \I certificate does not enforce $\vec{\f}$-linearity. The definition of \I certificate allows certificates with $\vec{\f}$-monomials of higher degree, and it is conceivable that one could achieve a savings in size by considering such certificates rather than only considering $\vec{\f}$-linear ones. As the linear form is closer to the original way Hilbert expressed the Nullstellensatz (see, \eg, the translation \cite{hilbertPapers}), we refer to certificates of the form $\sum \f_i G_i(\vec{x})$ as \definedWord{Hilbert-like \I certificates}.
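As a toy illustration of a Hilbert-like certificate (our own example): the system $F_1(x) = x^2 = 0$, $F_2(x) = 1 - x = 0$ is unsatisfiable, and the $\vec{\f}$-linear polynomial $C(x, \f_1, \f_2) = \f_1 + (1+x)\,\f_2$ satisfies both conditions of Definition~\ref{def:IPS}:

```latex
\[
  C(x, 0, 0) \;=\; 0
  \qquad \text{and} \qquad
  C\bigl(x, F_1(x), F_2(x)\bigr)
  \;=\; x^2 + (1+x)(1-x)
  \;=\; 1 .
\]
```

Nothing in the definition forces certificates to be $\vec{\f}$-linear in general; a non-Hilbert-like certificate may also use higher-degree $\vec{\f}$-monomials such as $\f_1 \f_2$.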
All of the previous algebraic proof systems are rule-based systems, in that they syntactically enforce the condition that every line of the proof is a polynomial in the ideal of the original polynomials $F_1(\vec{x}), \dotsc, F_m(\vec{x})$. Typically they do this by allowing two derivation rules: 1) from $G$ and $H$, derive $\alpha G + \beta H$ for $\alpha,\beta$ constants, and 2) from $G$, derive $Gx_i$ for any variable $x_i$. By ``rule-based circuits'' we mean circuits with inputs $\f_1, \dotsc, \f_m$ having linear combination gates and, for each $i=1,\dotsc,n$, gates that multiply their input by $x_i$. (Alternatively, one may view the $x_i$ as inputs, require that the circuit be syntactically \emph{linear} in the $\f_i$, and that each $x_i$ is only an input to multiplication gates, each of which syntactically depends on at least one $\f_i$. Again alternatively, one may view the $x_i$ as inputs, but with the requirement that the polynomial computed \emph{at each gate} is a polynomial of $\f_i$-degree one in the ideal $\langle \f_1, \dotsc, \f_m \rangle \subseteq \F[\vec{x}, \vec{\f}]$.) In particular, rule-based circuits necessarily produce Hilbert-like certificates.
Now we come to the definitions of previous algebraic proof systems in terms of complexity measures on the Ideal Proof System:
\begin{itemize}
\item Complexity in the Nullstellensatz proof system, or ``Nullstellensatz degree,'' is simply the minimal degree of any Hilbert-like certificate (for systems of equations of constant degree, such as the algebraic translations of tautologies).
\item ``Polynomial Calculus size'' is the minimum, over rule-based circuits $C$, of the sum over all gates of $C(\vec{x}, \vec{F}(\vec{x}))$ of the (semantic) number of monomials at that gate.
\item ``PC degree'' is the minimum over rule-based circuits $C(\vec{x}, \vec{\f})$ of the maximum semantic degree at any gate in $C(\vec{x}, \vec{F}(\vec{x}))$.
\item Pitassi's 1997 algebraic proof system \cite{pitassiICM} is essentially PC, except where size is measured by number of lines of the proof (rather than total number of monomials appearing). This corresponds exactly to the smallest size of any rule-based circuit $C(\vec{x}, \vec{\f})$ computing any Hilbert-like \I certificate.
\item Polynomial Calculus with Resolution (PCR) \cite{PCR} also allows variables $\overline{x}_i$ and adds the equations $\overline{x}_i = 1 - x_i$ and $x_i \overline{x}_i = 0$. This is easily accommodated into the Ideal Proof System: add the $\overline{x}_i$ as new variables, with the same restrictions as are placed on the $x_i$'s in a rule-based circuit, and add the polynomials $\overline{x}_i - 1 + x_i$ and $x_i \overline{x}_i$ to the list of equations $F_i$. Note that while this may have an effect on the PC size as it can decrease the total number of monomials needed, it has essentially no effect on the number of lines of the proof.
\end{itemize}
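To see why the $\overline{x}_i$ can reduce the number of monomials (a standard observation, not specific to our system): under the usual translation of clauses into polynomial equations, a wide clause expands into exponentially many monomials, while its PCR translation is a single monomial:

```latex
\[
  x_1 \lor x_2 \lor \dotsb \lor x_k
  \quad\rightsquigarrow\quad
  (1 - x_1)(1 - x_2) \dotsb (1 - x_k) = 0
  \quad (2^k \text{ monomials when expanded}),
  \qquad\text{versus, in PCR,}\qquad
  \overline{x}_1\, \overline{x}_2 \dotsb \overline{x}_k = 0
  \quad (\text{one monomial}).
\]
```

This is exactly the sense in which the $\overline{x}_i$ can decrease PC size while leaving the number of lines essentially unchanged.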
\begin{pitassiProp}
Pitassi's 1996 algebraic proof system \cite{pitassi96} is p-equivalent to Hilbert-like \I.
Pitassi's 1997 algebraic proof system \cite{pitassiICM}---equivalent to the number-of-lines measure on PC proofs---is p-equivalent to Hilbert-like $\det$-\I or $\cc{VP}_{ws}$-\I.
\end{pitassiProp}
Combining Proposition~\ref{prop:pitassi} with the techniques used in Theorem~\ref{thm:VNP} shows that super-polynomial lower bounds on the number of lines in PC proofs would positively resolve the Permanent versus Determinant Conjecture, explaining the difficulty of such proof complexity lower bounds.
In light of this proposition (which we prove in Section~\ref{sec:pitassi}),
we henceforth refer to the systems from \cite{pitassi96} and \cite{pitassiICM} as Hilbert-like \I and Hilbert-like $\det$-\I, respectively. Pitassi \cite[Theorem~1]{pitassi96} showed that Hilbert-like \I p-simulates Polynomial Calculus and Frege. Essentially the same proof shows that Hilbert-like \I p-simulates Extended Frege as well.
Unfortunately, the proof of the simulation in \cite{pitassi96} does not seem to generalize to give a depth-preserving simulation. Nonetheless, our next proposition shows that there is indeed a depth-preserving simulation.
\begin{depthThm}
For any $d(n)$, depth-$(d+2)$ $\I_{\F_p}$ p-simulates depth-$d$ Frege proofs with unbounded fan-in $\lor, \land, \mathrm{MOD}_p$ connectives (for $d=O(1)$, this is $\cc{AC}^0_{d}[p]$-Frege).
\end{depthThm}
\subsection{Lower bounds on \texorpdfstring{\I}{\Itext} imply circuit lower bounds} \label{sec:VNPeabs}
\begin{VNPthm}
A super-polynomial lower bound on [constant-free] Hilbert-like $\I_{R}$ proofs of any family of tautologies implies $\cc{VNP}_{R} \neq \cc{VP}_{R}$ [respectively, $\cc{VNP}^0_{R} \neq \cc{VP}^0_{R}$], for any ring $R$.
A super-polynomial lower bound on the number of lines in Polynomial Calculus proofs implies the Permanent versus Determinant Conjecture ($\cc{VNP} \neq \cc{VP}_{ws}$).
\end{VNPthm}
Together with Proposition~\ref{prop:coMA}, this immediately gives an alternative, and we believe simpler, proof of the following result:
\begin{corollary}
If $\cc{NP} \not\subseteq \cc{coMA}$, then $\cc{VNP}^{0}_{R} \neq \cc{VP}^0_{R}$, for any ring $R$.
\end{corollary}
For comparison, here is a brief sketch of the only previous proof of this result that we are aware of, which only seems to work when $R$ is a finite field or, assuming the Generalized Riemann Hypothesis, a field of characteristic zero, and uses several other significant results. The previous proof combines: 1) \Burgisser's results \cite{burgisserCookValiant} relating $\cc{VP}$ and $\cc{VNP}$ over various fields to standard Boolean complexity classes such as $\cc{NC/poly}$, $\cc{\# P/poly}$ (uses GRH), and $\cc{Mod}_{p}\cc{P/poly}$, and 2) the implication $\cc{NP} \not\subseteq \cc{coMA} \Rightarrow \cc{NC/poly} \neq \cc{\# P/poly}$ (and similarly with $\cc{\# P/poly}$ replaced by $\cc{Mod}_{p}\cc{P/poly}$), which uses the downward self-reducibility of complete functions for $\cc{\# P/poly}$ (the permanent \cite{valiant}) and $\cc{Mod}_{p}\cc{P/poly}$ \cite{feigenbaumFortnow}, as well as Valiant--Vazirani \cite{VV}.
The following lemma is the key to Theorem~\ref{thm:VNP}.
\begin{VNPlem}
Every family of CNF tautologies $(\varphi_n)$ has a Hilbert-like family of \I certificates $(C_n)$ in $\cc{VNP}^{0}_{R}$.
\end{VNPlem}
Here we show how Theorem~\ref{thm:VNP} follows from Lemma~\ref{lem:VNP}. Lemma~\ref{lem:VNP} is proved in Section~\ref{sec:VNP}.
\begin{proof}[Proof of Theorem~\ref{thm:VNP}, assuming Lemma~\ref{lem:VNP}]
For a given set $\mathcal{F}$ of unsatisfiable polynomial equations $F_1=\dotsb=F_m=0$, a lower bound on \I refutations of $\mathcal{F}$ is equivalent to giving the same circuit lower bound on \emph{all} \I certificates for $\mathcal{F}$. A super-polynomial lower bound on Hilbert-like \I implies that some function in $\cc{VNP}$---namely, the $\cc{VNP}$-\I certificate guaranteed by Lemma~\ref{lem:VNP}---cannot be computed by polynomial-size algebraic circuits, and hence that $\cc{VNP} \neq \cc{VP}$. Since Lemma~\ref{lem:VNP} even guarantees a constant-free certificate, we get the analogous consequence for constant-free lower bounds.
The second part of Theorem~\ref{thm:VNP} follows from the fact that number of lines in a PC proof is p-equivalent to Hilbert-like $\det$-\I (Proposition~\ref{prop:pitassi}). As in the first part, a super-polynomial lower bound on Hilbert-like $\det$-\I implies that some function family in $\cc{VNP}$ is not a p-projection of the determinant. Since the permanent is $\cc{VNP}$-complete under p-projections, the result follows.
\end{proof}
\subsection{PIT as a bridge between circuit complexity and proof complexity} \label{sec:PITeabs}
In this section we state our PIT axioms and give an outline of the proof of Theorems~\ref{thm:EF} and \ref{thm:AC0}, which say that Extended Frege (EF) (respectively, $\cc{AC}^0$- or $\cc{AC}^0[p]$-Frege) is polynomially equivalent to the Ideal Proof System if there are polynomial-size circuits for PIT whose correctness---suitably formulated---can be efficiently proved in EF (respectively, $\cc{AC}^0$- or $\cc{AC}^0[p]$-Frege).
More precisely, we identify a small set of natural axioms for PIT and show that if these axioms can be proven efficiently in EF, then EF is p-equivalent to $\I$. Theorem~\ref{thm:AC0} begins to explain why $\cc{AC}^0[p]$-Frege lower bounds have been so difficult to obtain, and highlights the importance of our PIT axioms for $\cc{AC}^0[p]$-Frege lower bounds. We begin by describing and discussing these axioms.
Fix some standard Boolean encoding of constant-free algebraic circuits, so that the encoding of any size-$m$ constant-free algebraic circuit has size $\poly(m)$. We use ``$[C]$'' to denote the encoding of the algebraic circuit $C$. Let $K = \{K_{m,n}\}$ denote a family of Boolean circuits for solving polynomial identity testing. That is, $K_{m,n}$ is a Boolean function that takes as input the encoding of a size-$m$ constant-free algebraic circuit $C$ over variables $x_1, \ldots, x_n$, and if $C$ has polynomial degree, then $K_{m,n}$ outputs 1 if and only if the polynomial computed by $C$ is the 0 polynomial.
\paragraph{Notational convention:} We underline parts of a statement that involve propositional variables. For example, if in a propositional statement we write ``$[C]$'', this refers to a fixed Boolean string that is encoding the (fixed) algebraic circuit $C$. In contrast, if we write $\prop{[C]}$, this denotes a Boolean string \emph{of propositional variables}, which is to be interpreted as a description of an as-yet-unspecified algebraic circuit $C$; any setting of the propositional variables corresponds to a particular algebraic circuit $C$. Throughout, we use $\vec{p}$ and $\vec{q}$ to denote propositional variables (which we do not bother underlining except when needed for emphasis), and $\vec{x}, \vec{y}, \vec{z},\dotsc$ to denote the algebraic variables that are the inputs to algebraic circuits. Thus, $C(\vec{x})$ is an algebraic circuit with inputs $\vec{x}$, $[C(\vec{x})]$ is a fixed Boolean string encoding some particular algebraic circuit $C$, $\prop{[C(\vec{x})]}$ is a string of propositional variables encoding an unspecified algebraic circuit $C$, and $[C(\prop{\vec{p}})]$ denotes a Boolean string together with propositional variables $\vec{p}$ that describes a fixed algebraic circuit $C$ whose inputs have been set to the propositional variables $\vec{p}$.
\begin{definition} \label{def:PITaxioms}
Our PIT axioms for a Boolean circuit $K$ are as follows. (This definition makes sense even if $K$ does not correctly compute PIT, but that case isn't particularly interesting or useful.)
\begin{enumerate}
\item\label{axiom:Boolean} Intuitively, the first axiom states that if $C$ is a circuit computing
the identically 0 polynomial, then the polynomial evaluates to 0 on all
Boolean inputs.
\[
K(\prop{[C(\vec{x})]}) \rightarrow K(\prop{[C(\vec{p})]})
\]
Note that the only variables on the left-hand side of the implication
are Boolean propositional variables, $\vec{q}$, that encode an algebraic circuit of size $m$ over
$n$ algebraic variables $\vec{x}$ (these latter are \emph{not} propositional variables of the above formula). The variables on the right-hand side are $\vec{q}$ plus
Boolean variables ${\vec p}$, where some of the variables in $\vec{q}$---those encoding the $x_i$---have been replaced by constants or $\vec{p}$ in such a way that $[C(\vec{p})]$ encodes a circuit that plugs in the $\{0,1\}$-valued $p_i$ for its algebraic inputs $x_i$. In other words, when we say $\prop{[C({\vec p})]}$ we mean
the encoding of the circuit $C$ where Boolean constants are plugged in for
the original algebraic $\vec{x}$ variables, as specified by the variables $\vec{p}$.
\item\label{axiom:one} Intuitively, the second axiom states that if $C$ is a circuit computing
the zero polynomial, then the circuit $1-C$ does not compute the zero polynomial.
\[
K(\prop{[C({\vec x})]}) \rightarrow \neg K(\prop{[1-C({\vec x})]})
\]
Here, if $\vec{q}$ are the propositional variables describing $C$, these are the only variables that appear in the above statement. We abuse syntax slightly in writing $[1-C]$: it is meant to denote a Boolean formula $\varphi(\vec{q})$ such that if $\vec{q}=[C]$ describes a circuit $C$, then $\varphi(\vec{q})$ describes the circuit $1-C$ (with one subtraction gate more than $C$).
\item\label{axiom:subzero} Intuitively, the third axiom states that PIT circuits respect certain substitutions.
More specifically, if the polynomial computed by circuit
$G$ is 0, then $G$ can be substituted for the constant $0$.
\[
K(\prop{[G({\vec x})]}) \land K(\prop{[C({\vec x},0)]}) \rightarrow
K(\prop{[C({\vec x},G({\vec x}))]})
\]
Here the notations $[C(\vec{x},0)]$ and $[C(\vec{x}, G(\vec{x}))]$ are similar abuses of notation to above; we use these and similar shorthands without further mention.
\item\label{axiom:perm} Intuitively, the last axiom states that PIT is closed under
permutations of the (algebraic) variables. More specifically if $C(\vec{x})$ is identically 0,
then so is $C(\pi(\vec{x}))$ for all permutations $\pi$.
\[
K(\prop{[C({\vec x})]}) \rightarrow K(\prop{[C(\pi({\vec x}))]})
\]
\end{enumerate}
\end{definition}
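As a sanity check on the axioms, consider a toy instance of Axiom~\ref{axiom:one} (our own example). The circuit $C(x) = (x+1)(x-1) - x^2 + 1$ computes the identically zero polynomial, so a correct $K$ accepts $[C]$; the circuit $1 - C$ then computes the constant $1$, so $K$ must reject $[1-C]$, exactly as the axiom demands:

```latex
\[
  C(x) \;=\; (x+1)(x-1) - x^2 + 1 \;\equiv\; 0,
  \qquad
  1 - C(x) \;\equiv\; 1 \;\not\equiv\; 0 .
\]
```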
We can now state and discuss two of our main theorems precisely.
\begin{EFthm}
If there is a family $K$ of polynomial-size Boolean circuits that correctly compute PIT, such that the PIT axioms for $K$ have polynomial-size EF proofs, then EF is polynomially equivalent to $\I$.
\end{EFthm}
Note that the issue is not the existence of small circuits for PIT since we would be happy with nonuniform polynomial-size PIT circuits, which do exist. Unfortunately the known constructions are highly nonuniform---they involve picking uniformly random points---and we do not see how to prove the above axioms for these constructions. Nonetheless, it seems very plausible to us that there exists a polynomial-size family of PIT circuits where the above axioms are efficiently provable in EF, especially in light of Remark~\ref{rmk:PIT}.
To prove the theorem (which we do in Section~\ref{sec:EF}), we first show that EF is p-equivalent to $\I$ if a family of propositional formulas expressing the soundness of $\I$ is efficiently EF-provable. Then we show that efficient EF proofs of $Soundness_{\I}$ follow from efficient EF proofs of the PIT axioms.
Our next main result shows that the previous result can be scaled down to much weaker proof systems than EF.
\begin{AC0thm}
Let $\mathcal{C}$ be any class of circuits closed under $\cc{AC}^0$ circuit reductions. If there is a family $K$ of polynomial-size Boolean circuits computing PIT such that the PIT axioms for $K$ have polynomial-size $\mathcal{C}$-Frege proofs, then $\mathcal{C}$-Frege is polynomially equivalent to $\I$, and consequently polynomially equivalent to Extended Frege.
\end{AC0thm}
Note that here we \emph{do not} need to restrict the circuit family $K$ to be in the class $\mathcal{C}$. This requires one more (standard) technical device compared to the proof of Theorem~\ref{thm:EF}, namely the use of auxiliary variables for the gates of $K$. Here we prove and discuss some corollaries of Theorem~\ref{thm:AC0}; the proof of Theorem~\ref{thm:AC0} is given in Section~\ref{sec:AC0p}.
As $\cc{AC}^0$-Frege is known unconditionally to be strictly weaker than Extended Frege \cite{Ajtai}, we immediately get that $\cc{AC}^0$-Frege cannot efficiently prove the PIT axioms for any Boolean circuit family $K$ correctly computing PIT.
Using essentially the same proof as Theorem~\ref{thm:AC0}, we also get the following result. By ``depth $d$ PIT axioms'' we mean a variant where the algebraic circuits $C$ (encoded as $[C]$ in the statement of the axioms) have depth at most $d$. Note that, even over finite fields, for any $d \geq 4$ super-polynomial lower bounds on depth $d$ algebraic circuits are a notorious open problem. (The chasm at depth $4$ says that depth $4$ lower bounds of size $2^{\omega(\sqrt{n} \log n)}$ imply super-polynomial size lower bounds on general algebraic circuits, but this does not give any indication of why merely super-polynomial lower bounds on depth $4$ circuits should be difficult.)
\begin{corollary} \label{cor:AC0p}
For any $d$, if there is a family of tautologies with no polynomial-size $\cc{AC}^0[p]$-Frege proof, and $\cc{AC}^0[p]$-Frege has polynomial-size proofs of the [depth $d$] PIT axioms for some $K$, then $\cc{VNP}_{\F_p}$ does not have polynomial-size [depth $d$] algebraic circuits.
\end{corollary}
This corollary makes the following question of central importance in getting lower bounds on $\cc{AC}^0[p]$-Frege:
\begin{open}
For some $d \geq 4$, is there some $K$ computing depth $d$ PIT, for which the depth $d$ PIT axioms have $\cc{AC}^0[p]$-Frege proofs of polynomial size?
\end{open}
This question has the virtue that answering it either way is highly interesting:
\begin{itemize}
\item If $\cc{AC}^0[p]$-Frege does not have polynomial-size proofs of the [depth $d$] PIT axioms for any $K$, then we have super-polynomial size lower bounds on $\cc{AC}^0[p]$-Frege, answering a question that has been open for nearly thirty years.
\item Otherwise, super-polynomial size lower bounds on $\cc{AC}^0[p]$-Frege imply that the permanent does not have polynomial-size algebraic circuits [of depth $d$] over any finite field of characteristic $p$. This would then explain why getting superpolynomial lower bounds on $\cc{AC}^0[p]$-Frege has been so difficult.
\end{itemize}
This dichotomy is in some sense like a ``completeness result for $\cc{AC}^0[p]$-Frege, modulo proving strong algebraic circuit lower bounds on $\cc{VNP}$'': if one hopes to prove $\cc{AC}^0[p]$-Frege lower bounds \emph{without proving} strong lower bounds on $\cc{VNP}$, then one must prove $\cc{AC}^0[p]$-Frege lower bounds on the PIT axioms. For example, if you believe that proving $\cc{VP} \neq \cc{VNP}$ [or that proving $\cc{VNP}$ does not have bounded-depth polynomial-size circuits] is very difficult, and that proving $\cc{AC}^0[p]$-Frege lower bounds is comparatively easy, then to be consistent you must also believe that proving $\cc{AC}^0[p]$-Frege lower bounds \emph{on the [bounded-depth] PIT axioms} is easy.
Similarly, along with Theorem~\ref{thm:depth}, we get the following corollary.
\begin{corollary}
If for every constant $d$, there is a constant $d'$ such that the depth $d$ PIT axioms have polynomial-size $\cc{AC}^0_{d'}[p]$-Frege proofs, then $\cc{AC}^0[p]$-Frege is polynomially equivalent to constant-depth $\I_{\F_p}$.
\end{corollary}
Using the chasms at depth 3 and 4 for algebraic circuits \cite{agrawalVinay,koiranChasm,tavenas} (see Observation~\ref{obs:chasm} above), we can also help explain why sufficiently strong exponential lower bounds for $\cc{AC}^0$-Frege---that is, lower bounds that don't depend on the depth, or don't depend so badly on the depth, which have also been open for nearly thirty years---have been difficult to obtain:
\begin{corollary}
Let $\F$ be any field, and let $c$ be a sufficiently large constant. If there is a family of tautologies $(\varphi_n)$ such that any $\cc{AC}^0$-Frege proof of $\varphi_n$ has size at least $2^{c\sqrt{n} \log n}$, and $\cc{AC}^0$-Frege has polynomial-size proofs of the depth $4$ PIT$_{\F}$ axioms for some $K$, then $\cc{VP}^0_{\F} \neq \cc{VNP}^0_{\F}$.
If $\F$ has characteristic zero, we may replace ``depth $4$'' above with ``depth $3$.''
\end{corollary}
\begin{proof}
Suppose that $\cc{AC}^0$-Frege can efficiently prove the depth $4$ PIT$_\F$ axioms for some Boolean circuit $K$. Let $(\varphi_n)$ be a family of tautologies. If $\cc{VNP}^0_{\F} = \cc{VP}^0_{\F}$, then there is a polynomial-size \I certificate for $\varphi_n$. By Observation~\ref{obs:chasm}, the same certificate is computed by a depth $4$ $\F$-algebraic circuit of size $2^{O(\sqrt{n} \log n)}$. By assumption, $\cc{AC}^0$-Frege can efficiently prove the depth $4$ PIT$_\F$ axioms for $K$, and therefore $\cc{AC}^0$-Frege p-simulates depth $4$ \I. Thus there are $\cc{AC}^0$-Frege proofs of $\varphi_n$ of size $2^{O(\sqrt{n} \log n)}$.
If $\F$ has characteristic zero, we may instead use the best-known chasm at depth 3, for which we need only depth 3 PIT and depth 3 \I; this yields the same bounds.
\end{proof}
As with Corollary~\ref{cor:AC0p}, we conclude a similar dichotomy: either $\cc{AC}^0$-Frege can efficiently prove the depth 4 PIT axioms (depth 3 in characteristic zero), or proving $2^{\omega(\sqrt{n} \log n)}$ lower bounds on $\cc{AC}^0$-Frege implies $\cc{VP}^0 \neq \cc{VNP}^0$.
\subsection{Towards lower bounds} \label{sec:syzygy}
Theorem~\ref{thm:VNP} shows that proving lower bounds on (even Hilbert-like) $\I$, or on the number of lines in Polynomial Calculus proofs (equivalent to Hilbert-like $\det$-\I), is at least as hard as proving algebraic circuit lower bounds. In this section we begin to make the difference between proving proof complexity lower bounds and proving circuit lower bounds more precise, and use this precision to suggest a direction for proving new proof complexity lower bounds, aimed at proving the long-sought-for length-of-proof lower bounds on an algebraic proof system.
The key fact we use is embodied in Lemma~\ref{lem:fgsyz}, which says that the set of (Hilbert-like) certificates for a given unsatisfiable system of equations is, in a precise sense, ``finitely generated.'' The basic idea is then to leverage this finite generation to extend lower bound techniques from individual polynomials to entire ``finitely generated'' sets of polynomials.
Because Hilbert-like certificates are somewhat simpler to deal with, we begin with those and then proceed to general certificates. But keep in mind that all our key conclusions about Hilbert-like certificates will also apply to general certificates. For this section we will need the notion of a module over a ring (the ring-analogue of a vector space over a field) and a few basic results about such modules; these are reviewed in Appendix~\ref{app:background:algebra}.
Recall that a \definedWord{Hilbert-like} $\I$-certificate $C(\vec{x}, \vec{\f})$ is one that is linear in the $\f$-variables, that is, it has the form $\sum_{i=1}^{m}G_i(\vec{x}) \f_i$.
Each function of the form $\sum_i G_i(\vec{x})\f_i$ is completely determined by the tuple $(G_1(\vec{x}), \dotsb, G_m(\vec{x}))$, and the set of all such tuples is exactly the $R[\vec{x}]$-module $R[\vec{x}]^{m}$.
The algebraic circuit size of a Hilbert-like certificate $C=\sum_i G_i(\vec{x}) \f_i$ is equivalent (up to a small constant factor and an additive $O(n)$) to the algebraic circuit size of computing the entire tuple $(G_1(\vec{x}), \dotsc, G_m(\vec{x}))$. A circuit computing the tuple can easily be converted to a circuit computing $C$ by adding $m$ multiplication gates and a single plus gate.
Conversely, for each $i$ we can recover $G_i(\vec{x})$ from $C(\vec{x}, \vec{\f})$ by plugging in $0$ for all $\f_j$ with $j \neq i$ and $1$ for $\f_i$.
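This recovery procedure is easy to see in a toy instance; the particular coefficient polynomials $G_i$ below are a hypothetical choice of ours, used only to make the substitution concrete.

```python
# A Hilbert-like certificate C(x, f) = sum_i G_i(x) * f_i, here with the
# (hypothetical, illustrative) choices G_1(x) = x + 1 and G_2(x) = x**2.
def C(x, f1, f2):
    return (x + 1) * f1 + (x ** 2) * f2

# Recover each coefficient polynomial G_i from C by plugging in the i-th
# unit vector for the placeholder variables f, exactly as in the text.
def G1(x):
    return C(x, 1, 0)

def G2(x):
    return C(x, 0, 1)
```

The same substitution works for any number of placeholder variables, which is why the tuple and the certificate have essentially the same circuit complexity.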
So from the point of view of lower bounds, we may consider Hilbert-like certificates, and their representation as tuples, essentially without loss of generality. This holds even in the setting of Hilbert-like depth 3 \I-proofs.
Using the representation of Hilbert-like certificates as tuples, we find that Hilbert-like \I-certificates are in bijective correspondence with $R[\vec{x}]$ solutions (in the new variables $g_i$) to the following $R[\vec{x}]$-linear equation:
\[
\left(\begin{array}{ccc}
F_1(\vec{x}) & \dotsb & F_m(\vec{x})
\end{array}\right)
\left(\begin{array}{c}
g_1 \\
\vdots \\
g_m
\end{array}
\right) = 1
\]
Just as in linear algebra over a field, the set of such solutions can be described by taking one solution and adding to it all solutions to the associated homogeneous equation:
\begin{equation} \label{eqn:homog}
\left(\begin{array}{ccc}
F_1(\vec{x}) & \dotsb & F_m(\vec{x})
\end{array}\right)
\left(\begin{array}{c}
g_1 \\
\vdots \\
g_m
\end{array}
\right) = 0
\end{equation}
(To see why this is so, mimic the usual linear algebra proof: given two solutions of the inhomogeneous equation, consider their difference.) Solutions to the latter equation are commonly called ``syzygies'' amongst the $F_i$.
Syzygies and their properties are well-studied---though not always well-understood---in commutative algebra and algebraic geometry, so lower and upper bounds on Hilbert-like \I-proofs may benefit from known results in algebra and geometry.
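As a toy illustration of this correspondence (our own example, not one from the text): for the unsatisfiable system $F_1 = x$, $F_2 = 1 - x$, the tuple $(1,1)$ is a Hilbert-like certificate, $(F_2, -F_1)$ is a (Koszul) syzygy, and adding any polynomial multiple of the syzygy to the certificate tuple yields another certificate.

```python
# Toy unsatisfiable system over any field: F1(x) = x, F2(x) = 1 - x.
F = [lambda x: x, lambda x: 1 - x]

# One Hilbert-like certificate: G = (1, 1), since x*1 + (1 - x)*1 = 1.
G = [lambda x: 1, lambda x: 1]

# The Koszul syzygy (F2, -F1): F1*F2 + F2*(-F1) = 0 identically.
S = [lambda x: 1 - x, lambda x: -x]

def certifies(tup, xs=range(-5, 6)):
    """Check sum_i F_i * g_i == 1 at sample points; for the low-degree
    polynomial identities here, 11 points more than suffice."""
    return all(sum(f(x) * g(x) for f, g in zip(F, tup)) == 1 for x in xs)

def shift(g, h):
    """Add h(x) times the syzygy S to the tuple g, coordinatewise."""
    return [lambda x, gi=gi, si=si: gi(x) + h(x) * si(x)
            for gi, si in zip(g, S)]

# Every shift of G by a polynomial multiple of the syzygy is again a
# certificate, since the added term contributes h(x) * 0 to the sum.
shifted = shift(G, lambda x: 7 * x + 2)  # arbitrary polynomial multiplier
```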
We now come to the key lemma for Hilbert-like certificates.
\begin{lemma} \label{lem:fgsyz}
For a given set of unsatisfiable polynomial equations $F_1(\vec{x})=\dotsb=F_m(\vec{x})=0$ over a Noetherian ring $R$ (such as a field or $\Z$), the set of Hilbert-like \I-certificates is a coset of a finitely generated submodule of $R[\vec{x}]^{m}$.
\end{lemma}
\begin{proof}
The discussion above shows that the set of Hilbert-like certificates is a coset of an $R[\vec{x}]$-submodule of $R[\vec{x}]^{m}$, namely the solutions to (\ref{eqn:homog}). As $R$ is a Noetherian ring, so is $R[\vec{x}]$ (by Hilbert's Basis Theorem). Thus $R[\vec{x}]^{m}$ is a Noetherian $R[\vec{x}]$-module, and hence every submodule of it is finitely generated.
\end{proof}
Lemma~\ref{lem:fgsyz} seems so conceptually important that it is worth re-stating:
\begin{quotation}
\noindent \textbf{The set of all Hilbert-like $\I$-certificates for a given system of equations can be described by giving a single Hilbert-like \I-certificate, together with a finite generating set for the syzygies.}
\end{quotation}
Its importance may be underscored by contrasting the preceding statement with the structure (if any?) of the set of all proofs in other proof systems, particularly non-algebraic ones.
Note that a finite generating set for the syzygies (indeed, even a \Grobner basis) can be found in the process of computing a \Grobner basis for the $R[\vec{x}]$-ideal $\langle F_1(\vec{x}), \dotsc, F_m(\vec{x}) \rangle$. This process is to Buchberger's \Grobner basis algorithm as the extended Euclidean algorithm is to the usual Euclidean algorithm; an excellent exposition can be found in the book by Ene and Herzog \cite{eneHerzog} (see also \cite[Section~15.5]{eisenbud}).
Lemma~\ref{lem:fgsyz} suggests that one might be able to prove size lower bounds on Hilbert-like-\I along the following lines: 1) find a single family of Hilbert-like \I-certificates $(G_n)_{n=1}^{\infty}$, $G_n = \sum_{i=1}^{\poly(n)} \f_i G_i(\vec{x})$ (one for each input size $n$), 2) use your favorite algebraic circuit lower bound technique to prove a lower bound on the polynomial family $G$, 3) find a (hopefully nice) generating set for the syzygies, and 4) show that when adding to $G$ any $R[\vec{x}]$-linear combinations of the generators of the syzygies, whatever useful property was used in the lower bound on $G$ still holds. Although this indeed seems significantly more difficult than proving a single algebraic circuit complexity lower bound, it at least suggests a recipe for proving lower bounds on Hilbert-like \I (and its subsystems such as homogeneous depth $3$, depth $4$, multilinear, etc.), which should be contrasted with the difficulty of transferring lower bounds for a circuit class to lower bounds on previous related proof systems, \eg transferring $\cc{AC}^0[p]$ lower bounds \cite{razborov,smolensky} to $\cc{AC}^0[p]$-Frege.
This entire discussion also applies to general \I-certificates, with the following modifications. We leave a certificate $C(\vec{x}, \vec{\f})$ as is, and instead of a module of syzygies we get an ideal (still finitely generated) of what we call zero-certificates. The difference between any two \I-certificates is a zero-certificate; equivalently, a \definedWord{zero-certificate} is a polynomial $C(\vec{x}, \vec{\f})$ such that $C(\vec{x}, \vec{0}) = 0$ and $C(\vec{x}, \vec{F}(\vec{x})) = 0$ as well (contrast with the definition of \I certificate, which has $C(\vec{x}, \vec{F}(\vec{x})) = 1$). The set of \I-certificates is then the coset intersection
\[
\langle \f_1, \dotsc, \f_m \rangle \cap \left( 1 + \langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x})\rangle\right)
\]
which is either empty or a coset of the ideal of zero-certificates: $\langle \f_1, \dotsc, \f_m \rangle \cap \langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x})\rangle$. The intersection ideal $\langle \f_1, \dotsc, \f_m \rangle \cap \langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x}) \rangle$ plays the role here that the set of syzygies played for Hilbert-like \I-certificates.\footnote{Note that the ideal of zero-certificates is not merely the set of all functions in the ideal $\langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x}) \rangle$ that only involve the $\f_i$, since the ideal $\langle \f_1, \dotsc, \f_m \rangle \subseteq R[\vec{x}, \vec{\f}]$ consists of all polynomials in the $\f_i$ with coefficients in $R[\vec{x}]$. Certificates only involving the $\f_i$ do have a potentially useful geometric meaning, however, which we consider in Appendix~\ref{app:geom}.}
A finite generating set for the ideal of zero-certificates can be computed using \Grobner bases (see, \eg, \cite[Section~3.2.1]{eneHerzog}).
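A toy check of these definitions (our own example, using the unsatisfiable system $F_1 = x$, $F_2 = x - 1$): two \I certificates and their difference, which is a zero-certificate.

```python
# Toy unsatisfiable system: F1(x) = x, F2(x) = x - 1.
def F1(x): return x
def F2(x): return x - 1

# Two distinct IPS certificates (both happen to involve only the f_i's):
# each has zero constant term in the f-variables and evaluates to 1
# under the substitution f_i = F_i(x).
def C1(x, f1, f2):
    return f1 - f2

def C2(x, f1, f2):
    return f1 - f2 + (f1 - f2 - 1) * f1

# Their difference is a zero-certificate: it vanishes at f = 0 and also
# vanishes identically after substituting f_i = F_i(x).
def Z(x, f1, f2):
    return C2(x, f1, f2) - C1(x, f1, f2)
```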
Just as for Hilbert-like certificates, we get:
\begin{quotation}
\noindent \textbf{The set of all $\I$-certificates for a given system of equations can be described by giving a single \I-certificate, together with a finite generating set for the ideal of zero-certificates.}
\end{quotation}
Our suggestions above for lower bounds on Hilbert-like \I apply \emph{mutatis mutandis} to general \I-certificates, suggesting a route to proving true size lower bounds on \I using known techniques from algebraic complexity theory.
The discussion here raises many basic and interesting questions about the complexity of sets of (families of) functions in an ideal or module, which we propose in Section~\ref{sec:conclusion}.
\input{conclusion.tex}
\section{Geometric \texorpdfstring{\I}{\Itext}-certificates} \label{app:geom}
We may consider $F_1(x_1, \dotsc, x_n), \dotsc, F_m(x_1,\dotsc,x_n)$ as a polynomial map $F = (F_1,\dotsc,F_m)\colon\F^{n} \to \F^{m}$. Then this system of polynomials has a common zero if and only if $0$ is in the image of $F$. In fact, we show that for any Boolean system of equations---one that includes $x_1^2 - x_1 = \dotsb = x_n^2 - x_n = 0$---or multiplicative Boolean system of equations---one that includes $x_1^2 - 1 = \dotsb = x_n^2 - 1 = 0$---the system of polynomials has a common zero if and only if $0$ is in the \emph{closure} of the image of $F$.
The preceding is the geometric picture we pursue in this section; next we describe the corresponding algebra. The set of \I certificates is the intersection of the ideal $\langle \f_1, \dotsc, \f_m \rangle$ with the coset $1 + \langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x}) \rangle$. The map $a \mapsto 1 - a$ is a bijection between this coset intersection and the coset intersection $\left(1 + \langle \f_1, \dotsc, \f_m \rangle \right) \cap \langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x}) \rangle$. In particular, the system of equations $F_1 = \dotsb = F_m = 0$ is unsatisfiable if and only if the latter coset intersection is nonempty.
We show below that if the latter coset intersection contains a polynomial involving only the $\f_i$'s---that is, its intersection with the subring $\F[\vec{\f}]$ (rather than the much larger ideal $\langle \vec{\f} \rangle \subseteq \F[\vec{x}, \vec{\f}]$) is nonempty---then $0$ is not even in the closure of the image of $F$. Hence we call such polynomials ``geometric certificates:''
\begin{definition}[The Geometric Ideal Proof System] \label{def:geompf}
A \definedWord{geometric \I certificate} that a system of $\F$-polynomial equations $F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$ is unsatisfiable over $\overline{\F}$ is a polynomial $C \in \F[\f_1, \dotsc, \f_m]$ such that
\begin{enumerate}
\item \label{condition:geom_nonzero} $C(0,0,\dotsc,0) = 1$, and
\item \label{condition:geom_ideal} $C(F_1(\vec{x}), \dotsc, F_{m}(\vec{x})) = 0$. In other words, $C$ is a polynomial relation amongst the $F_i$.
\end{enumerate}
A \definedWord{geometric \I proof} of the unsatisfiability of $F_1 = \dotsb = F_m = 0$, or a \definedWord{geometric \I refutation} of $F_1 = \dotsb = F_m = 0$, is an $\F$-algebraic circuit on inputs $\f_1, \dotsc,\f_m$ computing some geometric certificate of unsatisfiability.
\end{definition}
If $C$ is a geometric certificate, then $1-C$ is an \I certificate that involves only the $\f_i$'s, somewhat the ``opposite'' of a Hilbert-like certificate. Hence the smallest circuit size of any \I certificate is at most essentially the smallest circuit size of any geometric certificate. We do not know, however, if these complexity measures are polynomially related:
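A toy instance of the definition (our own example, not from the text): for the unsatisfiable system $F_1 = x$, $F_2 = x - 1$, the polynomial $C(\f_1,\f_2) = \f_2 - \f_1 + 1$ is a geometric certificate, and $1 - C$ is an \I certificate involving only the $\f_i$'s.

```python
# Toy unsatisfiable system: F1(x) = x, F2(x) = x - 1.
def F1(x): return x
def F2(x): return x - 1

# A geometric certificate: a polynomial in the f-variables alone with
# C(0, 0) = 1 and C(F1(x), F2(x)) identically zero, i.e. a polynomial
# relation among the F_i that does not vanish at the origin.
def C(f1, f2):
    return f2 - f1 + 1

# 1 - C is then an IPS certificate involving only the f_i's: it lies in
# <f1, f2> and evaluates identically to 1 under f_i = F_i(x).
def ips(f1, f2):
    return 1 - C(f1, f2)
```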
\begin{open} \label{question:geometric}
For Boolean systems of equations, is Geometric \I polynomially equivalent to \I? That is, is there always a geometric certificate whose circuit size is at most a polynomial in the circuit size of the smallest algebraic certificate?
\end{open}
Although the Nullstellensatz doesn't guarantee the existence of geometric certificates for arbitrary unsatisfiable systems of equations---and indeed, geometric certificates need not always exist---for \emph{Boolean} systems of equations (usual or multiplicative) geometric certificates always exist. In fact, this holds for any system of equations which contains at least one polynomial containing only the variable $x_i$, for each variable $x_i$:
\begin{proposition} \label{prop:geometric}
Let $\F$ be either a (topologically) dense subfield of $\C$ or any algebraically closed field. A Boolean system of equations over $\F$---or more generally any system of equations containing, for each variable $x_i$, at least one non-constant equation involving only $x_i$\footnote{We believe that the ``correct'' generalization here is to systems of equations $F_1 = \dotsb = F_m = 0$ such that
the corresponding map $F \colon \F^{n} \to \image(F)$ is \emph{flat} (see, \eg, \cite[Chapter~6]{eisenbud}) and has zero-dimensional fibers, that is, the inverse image of any point is a finite set. Systems satisfying the hypothesis of Proposition~\ref{prop:geometric} satisfy these hypotheses as well, but we have not checked carefully if the result extends in this generality.}
---has a common root if and only if it does not have a geometric certificate.
\end{proposition}
The condition of this proposition is almost surely more stringent than necessary, but the next example shows that at least some condition is necessary.
\begin{example}
Let $F_1(x,y) = xy - 1$ and $F_2(x,y) = x^2 y$. There is no solution to $F_1 = F_2 = 0$, as $F_1 = 0$ implies that both $x$ and $y$ are nonzero, but if this is the case then $x^2 y = F_2(x,y)$ is also nonzero. Yet $0$ is in the closure of the image of the map $F = (F_1, F_2)\colon \F^{2} \to \F^2$. There are (at least) two ways to see this. First, we exhibit $0$ as an explicit limit of points in the image. Let $\chi_1(\varepsilon) = \varepsilon$ and $\chi_2(\varepsilon) = 1/\varepsilon$. Then $F_1(\chi_1(\varepsilon), \chi_2(\varepsilon)) = 0$ identically in $\varepsilon$, and $F_2(\chi_1(\varepsilon), \chi_2(\varepsilon)) = \varepsilon$. Thus, if we take the limit as $\varepsilon \to 0$, we find that $0$ is in the closure of the image of $F$.\footnote{If $\F$ is a dense subfield of $\C$, this limit may be taken in the usual sense of the Euclidean topology. For arbitrary algebraically closed fields $\F$, the same construction works, but must now be interpreted in the context of Lemma~\ref{lem:eps}.}
Alternatively, in this case we can determine the entire image exactly (usually a very daunting task): it is $\{(a,b) \in \F^2 : a \neq -1 \text{ and } b \neq 0\} \cup \{(-1,0)\}$. This can be determined by solving the equations by the elementary method of substitution, and careful but not complicated case analysis. It is then clear (geometrically in the case of subfields of $\C$, and by a dimension argument over an arbitrary algebraically closed field) that the closure of the image is the entirety of $\F^2$, and in particular contains $0$.
\end{example}
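The explicit limit above can be sanity-checked in exact rational arithmetic (a quick check of the two claimed identities, not part of the argument):

```python
from fractions import Fraction

# The unsatisfiable system of the example: F1 = xy - 1, F2 = x^2 y.
def F(x, y):
    return (x * y - 1, x * x * y)

# Along the curve (eps, 1/eps), with eps = 1/n, the first coordinate
# vanishes identically and the second equals eps, so
# F(eps, 1/eps) -> (0, 0) as eps -> 0.
samples = [F(Fraction(1, n), Fraction(n, 1)) for n in (2, 10, 1000)]
```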
The next example rules out another natural attempt at generalizing Proposition~\ref{prop:geometric}, and also shows that the existence of geometric certificates for a given set of equations can depend on the equations themselves, and not just on the ideal they generate.
\begin{example}
Let $F_1(x,y) = xy-1$ and $F_2(x,y)=x^2y$ as before, and now also add $F_3(x,y) = x^2(1-y)$. We already saw that $F_1 = F_2 = 0$ is unsatisfiable, so $F_1 = F_2 = F_3 = 0$ is unsatisfiable as well. However, $F_1 = F_3 = 0$ has one, and only one, solution, namely $x=y=1$. Let $F = (F_1,F_2,F_3)\colon \F^2 \to \F^3$. To see that $\vec{0}$ is in the closure of the image of $F$, we again consider $\lim_{\varepsilon \to 0} F(\varepsilon, 1/\varepsilon)$. As before $F_1(\varepsilon,1/\varepsilon)=0$ and $F_2(\varepsilon,1/\varepsilon) = \varepsilon$, whose limit is zero as $\varepsilon \to 0$. Similarly, we get $F_3(\varepsilon, 1/\varepsilon) = \varepsilon^2 (1 - 1/\varepsilon) = \varepsilon (\varepsilon - 1)$, which again goes to $0$ as $\varepsilon \to 0$.
Note that if we replace equations $F_1$ and $F_3$ by another set of equations with the same set of solutions (in this case, a singleton set), but satisfying the conditions of Proposition~\ref{prop:geometric}, such as $F_1' = (x-1)^k$ and $F_3' = (y-1)^\ell$ for some $k,\ell > 0$, then $\vec{0}$ is no longer in the closure of the image. For if $(F_1',F_2,F_3')$ approaches $(0,0,0)$, then $x$ and $y$ must both approach $1$, but then $F_2 = x^2 y$ also approaches $1$. Furthermore, by the Nullstellensatz, for some $k,\ell > 0$, the polynomials $(x-1)^k$ and $(y-1)^\ell$ are both in the ideal $\langle F_1, F_3 \rangle$. Thus, although the solvability of a system of equations is determined entirely by (the radical of) the ideal they generate, the geometry of the corresponding map---and even the existence of geometric certificates---can change depending on which elements of the ideal are used in defining the map.
\end{example}
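Again the limit computation along $(\varepsilon, 1/\varepsilon)$ can be checked exactly:

```python
from fractions import Fraction

# The added equation of the example: F3 = x^2 (1 - y).  Along the curve
# (eps, 1/eps) it evaluates to eps^2 (1 - 1/eps) = eps (eps - 1), which
# tends to 0 as eps -> 0, so the origin remains in the closure of the
# image of (F1, F2, F3).
def F3(x, y):
    return x * x * (1 - y)

# With eps = 1/n this is (1 - n) / n^2.
vals = [F3(Fraction(1, n), Fraction(n, 1)) for n in (2, 10, 1000)]
```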
The following lemma is the key to Proposition~\ref{prop:geometric}.
\begin{lemma} \label{lem:geometric}
Let $\F$ be (1) a dense subfield of $\C$ (in the Euclidean topology), or (2) any algebraically closed field. Let $F_1(\vec{x}), \dotsc, F_{m}(\vec{x})$ be a system of equations over $\F$, and let $F=(F_1,\dotsc,F_m)\colon \F^{n} \to \F^{m}$ be the associated polynomial map, as above. If, for $i=1,\dotsc,n$, $F_i(\vec{x})$ is a nonzero function of $x_i$ alone, then the set of equations $F_1 = \dotsb = F_m = 0$ has a solution if and only if $0$ is in the closure $\overline{\image(F)}$.
\end{lemma}
\begin{proof}
If the system $F$ has a common solution, then $0$ is in the image of $F$ and hence in its closure.
Conversely, suppose $0$ is in the closure of the image of $F$. We first prove case (1) (the characteristic zero case) as it is somewhat simpler and gives the main idea, and then we prove case (2), the case of an arbitrary algebraically closed field.
(1) Dense subfields of $\C$. First, we note that the closure of the image of $F$ in the Zariski topology agrees with its closure in the standard Euclidean topology on $\F^{n}$, induced by the Euclidean topology on $\C^{n}$. For $\F = \C$, see, \eg, \cite[Theorem~2.33]{mumford}. For other dense $\F \subsetneq \C$, suppose $\vec{y}$ is in the $\F$-Zariski-closure of $F(\F^{n})$, that is, every $\F$-polynomial that vanishes everywhere on $F(\F^{n})$ also vanishes at $\vec{y}$. By the aforementioned result for $\C$, there is a sequence of points $\vec{x}_1, \vec{x}_2, \dotsc \in \C^{n}$ such that $\vec{y} = \lim_{k \to \infty} F(\vec{x}_k)$. As $\F$ is dense in $\C$ in the Euclidean topology, there is similarly a sequence of points $\vec{x}'_1, \vec{x}'_2, \dotsc \in \F^{n}$ such that $|\vec{x}_k - \vec{x}'_k| \leq 1/k$ for all $k$. Hence $\lim_{k \to \infty} \vec{x}_k = \lim_{k \to \infty} \vec{x}'_k$. Each $F(\vec{x}'_k) \in \F^{m}$, so we get a sequence of points $F(\vec{x}'_1), F(\vec{x}'_2), \dotsc \in \F^{m}$ whose limit is $\vec{y}$.
In particular, $0$ is in the (Zariski-)closure of the image of $F$ if and only if there is a sequence of points $v^{(1)}, v^{(2)}, v^{(3)}, \dotsc \in \image(F)$ such that $\lim_{k \to \infty} v^{(k)} = 0$. As each $v^{(k)}$ is in the image of $F$, there is some point $\nu^{(k)} \in \F^{n}$ such that $v^{(k)} = F(\nu^{(k)})$. As the $v^{(k)}$ approach the origin, each $F_i(\nu^{(k)})$ approaches $0$, since it is the $i$-th coordinate of $v^{(k)}$: $v^{(k)}_i = F_i(\nu^{(k)})$.
In particular, since $F_1(\vec{x})$ depends only on $x_1$ and is nonzero (by assumption), the first coordinates $\nu^{(k)}_{1}$ must accumulate around the finitely many zeroes of $F_1(x_1)$. Similarly for each coordinate $i=1,\dotsc,n$ of $\nu^{(k)}$.
Thus there is an infinite subsequence of the $\nu^{(k)}$ that approaches one single solution $\vec{z}$ to $F=0$. By choosing such a subsequence and re-indexing, we may assume that $\lim_{k \to \infty} \nu^{(k)} = \vec{z}$.
Finally, by assumption and continuity, we have
\[
0 = \lim_{k \to \infty} v^{(k)} = \lim_{k \to \infty} F(\nu^{(k)}) = F(\lim_{k \to \infty} \nu^{(k)}) = F(\vec{z}),
\]
so $\vec{z}$ is a common root of the original system $F_1 = \dotsb = F_m = 0$. Hence, if $0$ is in the closure of the image of $F$, then $0$ is in the image.
(2) $\F$ any algebraically closed field. Here we cannot use an argument based on the Euclidean topology, but there is a suitable, purely algebraic analogue, encapsulated in the following lemma:
\begin{lemma}[{See, \eg, \cite[Lemma~20.28]{BCS}}] \label{lem:eps}
If $p$ is a point in the closure of the image of a polynomial map $F\colon \F^{n} \to \F^{m}$, then there are formal Laurent series\footnote{A formal Laurent series is a formal sum of the form $\sum_{k=-k_0}^{\infty} a_k \varepsilon^{k}$. By ``formal'' we mean that we are paying no attention to issues of convergence (which need not even make sense over various fields), but are just using the degree of $\varepsilon$ as an indexing scheme.} $\chi_1(\varepsilon), \dotsc, \chi_n(\varepsilon)$ in a new variable $\varepsilon$ such that $F_i(\chi_1(\varepsilon), \dotsc, \chi_n(\varepsilon))$ is in fact a \emph{power series}---that is, involves no negative powers of $\varepsilon$---for each $i=1,\dotsc,m$, and such that evaluating the power series $(F_1(\vec{\chi}(\varepsilon)), \dotsc, F_m(\vec{\chi}(\varepsilon)))$ at $\varepsilon=0$ yields the point $p$.
\end{lemma}
Note that the evaluation at $\varepsilon=0$ must occur \emph{after} applying $F_i$, since each individual $\chi_i$ may involve negative powers of $\varepsilon$.
As $F_1$ involves only $x_1$, in order for $F_1(\vec{\chi}(\varepsilon)) = F_1(\chi_1(\varepsilon))$ to be a power series in $\varepsilon$, it must be the case that $\chi_1(\varepsilon)$ itself is a power series (contains no negative powers of $\varepsilon$). For if the highest degree term of $F_1$ is some constant times $x_1^{d}$, and the lowest degree term of $\chi_1(\varepsilon)$ is of degree $-D$, then $F_1(\chi_1(\varepsilon))$ contains the monomial $\varepsilon^{-dD}$ with nonzero coefficient. A similar argument applies to $\chi_i$ for $i=1,\dotsc,n$. Thus each $\chi_i$ is in fact a power series, involving no negative powers of $\varepsilon$, and hence can be evaluated at $0$. Since evaluating at $\varepsilon=0$ now makes sense even before applying the $F_i$, and is a ring homomorphism (we might say, ``is continuous with respect to the ring operations''), we get that
\[
0 = F_i(\vec{\chi}(\varepsilon))|_{\varepsilon=0} = F_i(\vec{\chi}(\varepsilon)|_{\varepsilon=0}) = F_i(\vec{\chi}(0))
\]
for each $i=1,\dotsc,m$, and hence $\vec{\chi}(0)$ is a solution to $F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:geometric}]
Let $F_1, \dotsc, F_m$ be an unsatisfiable system of equations over $\F$ satisfying the conditions of Lemma~\ref{lem:geometric}, and let $F = (F_1, \dotsc, F_m) \colon \F^{n} \to \F^{m}$ be the corresponding polynomial map.
First, suppose that $F_1 = \dotsb = F_m = 0$ has a solution. Then $0 \in \image(F)$, so any $C(\f_1, \dotsc, \f_m)$ that vanishes everywhere on $\image(F)$, as required by condition (\ref{condition:geom_ideal}) of Definition~\ref{def:geompf}, must vanish at $\vec{0}$. In other words, $C(0,\dotsc,0) = 0$, contradicting condition (\ref{condition:geom_nonzero}). So there are no geometric certificates.
Conversely, suppose $C(\f_1, \dotsc, \f_m)$ is a geometric certificate. Then $C$ vanishes at every point of the image $\image(F)$ and hence at every point of its closure $\overline{\image(F)}$, by (Zariski-)continuity. By condition (\ref{condition:geom_nonzero}) of Definition~\ref{def:geompf}, $C(0,\dotsc,0) = 1$. Since $C$ does not vanish at the origin, $\vec{0} \notin \overline{\image(F)}$. Then by Lemma~\ref{lem:geometric}, $\vec{0}$ is not in the image of $F$ and hence $F_1 = \dotsb = F_m = 0$ has no solution.
\end{proof}
Finally, as with \I certificates and Hilbert-like \I certificates (see Section~\ref{sec:syzygy}), a \definedWord{geometric zero-certificate} for a system of equations $F_1(\vec{x}), \dotsc, F_m(\vec{x})$ is a polynomial $C(\f_1, \dotsc, \f_m) \in \langle \f_1, \dotsc, \f_m \rangle$---that is, such that $C(0,\dotsc,0) = 0$---and such that $C(F_1(\vec{x}), \dotsc, F_{m}(\vec{x})) = 0$ identically as a polynomial in $\vec{x}$. The same arguments as in the case of algebraic certificates show that any two geometric certificates differ by a geometric zero-certificate, and that the geometric certificates are closed under multiplication. Furthermore, the set of geometric zero-certificates is the intersection of the ideal of (algebraic) zero-certificates $\langle \f_1, \dotsc, \f_m \rangle \cap \langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x}) \rangle$ with the subring $\F[\vec{\f}] \subset \F[\vec{x}, \vec{\f}]$. As such, it is an ideal of $\F[\vec{\f}]$ and so is finitely generated. Thus, as in the case of \I certificates, the set of all geometric certificates can be specified by giving a single geometric certificate and a finite generating set for the ideal of geometric zero-certificates, suggesting an approach to lower bounds on the Geometric Ideal Proof System.
We note that geometric zero-certificates are also called syzygies amongst the $F_i$---sometimes ``geometric syzygies'' or ``polynomial syzygies'' to distinguish them from the ``module-type syzygies'' we discussed above in relation to Hilbert-like \I. As in all the other cases we've discussed, a generating set of the geometric syzygies can be computed using \Grobner bases, this time using elimination theory: compute a \Grobner basis for the ideal $\langle \f_1 - F_1(\vec{x}), \dotsc, \f_m - F_m(\vec{x}) \rangle$ using an order that eliminates the $x$-variables, and then take the subset of the \Grobner basis that consists of polynomials only involving the $\f$-variables. The ideal of geometric syzygies is exactly the ideal of the closure of the image of the map $F$, and for this reason this kind of syzygy is also well-studied. This suggests that geometric properties of the image of the map $F$ (or its closure) may be useful in understanding the complexity of individual instances of $\cc{coNP}$-complete problems.
\section{Introduction}
$\cc{NP}$ versus $\cc{coNP}$ is the very natural question of whether, for every graph that doesn't have a Hamiltonian path, there is a short proof of this fact. One of the arguments for the utility of proof complexity is that by proving lower bounds against stronger and stronger proof systems, we ``make progress'' towards proving $\cc{NP} \neq \cc{coNP}$. However, until now this argument has been more the expression of a philosophy or hope, as there is no known proof system for which lower bounds imply computational complexity lower bounds of any kind, let alone $\cc{NP} \neq \cc{coNP}$.
We remedy this situation by introducing a very natural algebraic proof system, which has tight connections to (algebraic) circuit complexity. We show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits ($\cc{VNP} \neq \cc{VP}$). Note that, prior to our work, essentially all implications went the opposite direction: a circuit complexity lower bound implying a proof complexity lower bound.
We show that under a reasonable assumption on PIT, our proof system is p-equivalent to Extended Frege, the strongest-known natural propositional proof system.
In combination with the above result, this helps explain why obtaining lower bounds on Extended Frege has been so difficult. More specifically, we hypothesize that Extended Frege can efficiently prove certain propositional axioms satisfied by any Boolean circuit computing PIT (this hypothesis appears to be somewhere in between the major hypothesis that PIT$\in \cc{P}$ and the known result that PIT$\in \cc{P/poly}$). Note that, although PIT has played an increasingly prominent role in circuit complexity and algorithms, until now it has been almost entirely absent from the setting of proof complexity.
We also highlight the importance of our PIT axioms for understanding lower bounds on $\cc{AC}^0[p]$-Frege, which have been open for nearly 30 years, despite having the corresponding $\cc{AC}^0[p]$ circuit lower bounds. We show:
\begin{enumerate}
\item Either proving superpolynomial lower bounds on $\cc{AC}^0[p]$-Frege implies $\cc{VP}_{\F_p} \neq \cc{VNP}_{\F_p}$, thus explaining the difficulty of lower bounds on $\cc{AC}^0[p]$-Frege, or
\item $\cc{AC}^0[p]$-Frege cannot efficiently prove the PIT axioms, and hence we have a lower bound on $\cc{AC}^0[p]$-Frege.
\end{enumerate}
We get similar statements regarding: a) depth 4 PIT axioms and truly exponential ($2^{\omega(\sqrt{n} \log n)}$) lower bounds on depth $d$ $\cc{AC}^0$-Frege---another question that has been open for nearly 30 years with seemingly no explanation for its difficulty---and b) depth $d+2$ PIT axioms, super-polynomial lower bounds on depth $d$ $\cc{AC}^0$- or $\cc{AC}^0[p]$-Frege, and super-polynomial lower bounds on depth $d+2$ algebraic circuits for the permanent.
Finally, using the algebraic structure of our proof system, we propose a novel way to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity. Although we have not yet succeeded in proving such lower bounds, this proposal should be contrasted with the difficulty of extending $\cc{AC}^0[p]$ circuit lower bounds to $\cc{AC}^0[p]$-Frege lower bounds.
\subsection{Background and Motivation}
\paragraph{Algebraic Circuit Complexity.} The most natural way to compute a polynomial function $f(x_1,\ldots,x_n)$ is
with an algebraic circuit, where the input consists of the variables $x_1,\ldots,x_n$,
and the computation is performed
using the arithmetic operations: addition, multiplication and subtraction.
The ultimate goal is to understand the optimal complexity, in terms of size and depth,
of computing a given polynomial family.
The importance of the arithmetic circuit model was cemented by several
results of Valiant \cite{valiant,valiantPerm,valiantProjections}, who
argued that understanding arithmetic circuits is important not only
in its own right, but also because the intuitions gained may carry
over to other models of computation. (See also \cite{Gat2}.)
Two central functions in this area are the determinant and permanent polynomials:
they are fundamental because of their prominent role in many areas of mathematics,
and in addition they are known to be complete
for the classes $\cc{VP}$ and $\cc{VNP}$, respectively.
$\cc{VP}$ and $\cc{VNP}$ can be viewed as algebraic analogs of the complexity classes
$\cc{P}$ and $\cc{NP}$, respectively,\footnote{It is somewhat more accurate to say that $\cc{VP}$ is the algebraic analog of $\cc{FNC}^2$ and $\cc{VNP}$ is the algebraic analog of $\cc{\# P}$.} and thus a major open problem is to
show that $\cc{VP} \neq \cc{VNP}$, or in other words, that the permanent cannot
be computed by arithmetic circuits of polynomial size, thus resolving
an algebraic analog of the $\cc{P}$ versus $\cc{NP}$ question.
Unlike in Boolean circuit complexity,
non-trivial lower bounds
for the size of arithmetic circuits
are known (proven in the seminal papers of Strassen
and Baur--Strassen \cite{strassenDegree,baurStrassen}).
Their methods, however, only give lower bounds of
up to
$\Omega (n\log n)$.
Moreover, their methods are based on a degree analysis
of certain algebraic varieties and do not give lower bounds for
polynomials of constant degree.
Recent exciting work [REF] has shown that polynomial-size algebraic
circuits can in fact be computed by subexponential-size depth 4 algebraic
circuits.
Thus, strong enough lower bounds for depth 4 arithmetic circuits for
the permanent would already prove $\cc{VP} \neq \cc{VNP}$.
Recent work shows that this problem is essentially equivalent
to another major open problem in complexity theory:
obtaining a deterministic, polynomial-time algorithm for
polynomial identity testing.
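Polynomial identity testing asks whether a given algebraic circuit computes the identically zero polynomial. For intuition (this sketch is ours, not part of the paper), the standard randomized algorithm via the Schwartz--Zippel lemma treats the circuit as a black box and evaluates it at random points of a large prime field:

```python
import random

def is_identically_zero(poly, n_vars, degree_bound, trials=20,
                        field_size=2**61 - 1):
    """Randomized PIT via Schwartz--Zippel: if poly is nonzero of degree
    <= degree_bound, a random evaluation over a field of size q vanishes
    with probability <= degree_bound / q, so each trial errs rarely."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n_vars)]
        if poly(*point) % field_size != 0:
            return False  # witnessed a nonzero value: certainly not zero
    return True  # identically zero, with high probability

# (x - y)(x + y) - x^2 + y^2 is identically zero; x*y is not.
assert is_identically_zero(lambda x, y: (x - y)*(x + y) - x**2 + y**2, 2, 2)
assert not is_identically_zero(lambda x, y: x*y, 2, 2)
```

The open problem referred to above is to achieve this \emph{deterministically} in polynomial time.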
\medskip
\paragraph{Proof Complexity.} One of the most basic questions of logic is the following: given a tautology, what is the
length of the shortest proof of the statement in some standard axiomatic proof system? This question inspired Cook's
seminal paper on $\cc{NP}$-completeness \cite{cook:np} and was contemplated even earlier by \Godel in his now
well-known letter to von Neumann (see \cite{buss:kprov}). The propositional version of this question---whether there exists a propositional proof system giving rise to short proofs of all tautologies---is equivalent to the central question in complexity theory of whether $\cc{NP}$ equals $\cc{coNP}$ \cite{cookReckhow}. In the same line of work, Cook and Reckhow proposed a research program
of proving super-polynomial lower bounds for increasingly powerful standard proof systems. However, despite considerable progress obtaining superpolynomial lower bounds for many weak proof systems (resolution, cutting planes,
bounded-depth Frege systems), there has been essentially no progress in the last 25 years for stronger proof systems such as Extended Frege systems or Frege systems.
More surprisingly, proving nontrivial lower bounds for $\cc{AC}^0[p]$-Frege systems is still unresolved.
Note that in contrast, the analogous result in circuit complexity (proving superpolynomial $\cc{AC}^0[p]$ lower bounds for an explicit function) was resolved by Smolensky over 25 years ago \cite{smolensky}.
To date, there has been no satisfactory explanation for this state of affairs.
In proof complexity, there are no known formal barriers such as relativization \cite{bakerGillSolovay}, Razborov--Rudich natural proofs \cite{razborovRudich}, or algebrization \cite{aaronsonWigderson} that exist in Boolean function complexity.
Moreover, there has not even been progress by way of conditional
lower bounds. That is, trivially $\cc{NP} \neq \cc{coNP}$ implies superpolynomial
lower bounds for $\cc{AC}^0[p]$-Frege, but we know of no weaker complexity assumption
that implies such lower bounds.
The only formal implication in this direction shows that certain
circuit lower bounds imply lower bounds for proof systems that admit
feasible interpolation, but unfortunately only weak proof systems
(not Frege nor even $\cc{AC}^0$-Frege) have this property \cite{Bonet,Bonet2}.
In the converse direction, there are essentially no implications
at all. For example, we do not know if $\cc{AC}^0[p]$-Frege
lower bounds (nor even Frege nor Extended Frege lower bounds) imply any
nontrivial circuit lower bounds.
\subsection{Our Results}
In this paper, we define a simple and natural proof system that we call the Ideal Proof System (IPS)
based on Hilbert's Nullstellensatz. Our system is similar in spirit to related
algebraic proof systems that have been previously studied, but is different in a crucial way (explained below).
Given a set of polynomials $F_1,\ldots,F_m$ in $n$ variables $x_1,\ldots,x_n$ over a field $\F$ without a
common zero over the algebraic closure of $\F$, Hilbert's Nullstellensatz says that there exist polynomials
$G_1,\ldots,G_m \in \F[x_1,\ldots,x_n]$ such that $\sum F_i G_i =1$, \ie, that $1$ is in the ideal generated by the $F_i$. In the Ideal Proof System, we introduce new variables $\f_i$ which serve as placeholders into which the original polynomials $F_i$ will
eventually be substituted:
\begin{definition}[Ideal Proof System] \label{def:IPS}
An \definedWord{\I certificate} that a system of $\F$-polynomial equations
$F_1(\vec{x})=F_2(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$ is unsatisfiable over $\overline{\F}$ is
a polynomial $C(\vec{x}, \vec{\f})$ in the variables $x_1,\ldots,x_n$ and $\f_1,\ldots,\f_m$ such that
\begin{enumerate}
\item \label{condition:ideal} $C(x_1,\dotsc,x_n,\vec{0}) = 0$, and
\item \label{condition:nss} $C(x_1,\dotsc,x_n,F_1(\vec{x}),\dotsc,F_m(\vec{x})) = 1$.
\end{enumerate}
The first condition is equivalent to $C$ being in the ideal generated by $\f_1, \dotsc, \f_m$, and the two conditions together therefore imply that $1$ is in the ideal generated by the $F_i$, and hence that $F_1(\vec{x}) = \dotsb = F_m(\vec{x})=0$ is unsatisfiable.
An \definedWord{\I proof} of the unsatisfiability of the polynomials $F_i$ is
an $\F$-algebraic circuit on inputs $x_1,\ldots,x_n,\f_1,\ldots,\f_m$ computing some algebraic
certificate of unsatisfiability.
\end{definition}
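As a minimal worked example (ours, not from the text): for the unsatisfiable system $F_1 = x$, $F_2 = 1-x$, the polynomial $C(x, \f_1, \f_2) = \f_1 + \f_2$ is an \I certificate. Both conditions of Definition~\ref{def:IPS} can be checked symbolically:

```python
from sympy import symbols, expand

x, f1, f2 = symbols('x f1 f2')
F1, F2 = x, 1 - x              # unsatisfiable: no x has x = 0 and 1 - x = 0
C = f1 + f2                    # candidate IPS certificate

# Condition 1: C(x, 0, 0) = 0, i.e. C lies in the ideal <f1, f2>.
assert C.subs({f1: 0, f2: 0}) == 0
# Condition 2: C(x, F1(x), F2(x)) = 1, so 1 is in the ideal <F1, F2>.
assert expand(C.subs({f1: F1, f2: F2})) == 1
```

An \I proof is then any algebraic circuit computing $\f_1 + \f_2$; here a single addition gate suffices.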
For any class of algebraic circuits $\mathcal{C}$, we may speak of $\mathcal{C}$-\I proofs of a family of systems of equations $(\mathcal{F}_n)$ where $\mathcal{F}_n$ is $F_{n,1}(\vec{x}) = \dotsb = F_{n,\poly(n)}(\vec{x}) = 0$. When we refer to \I without further qualification, we mean $\cc{VP}$-\I, that is, the family of \I proofs should be computed by circuits of polynomial size \emph{and polynomial degree}, unless specified otherwise.
As in previously defined algebraic systems \cite{BIKPP,CEI,pitassi96,pitassiICM}, proofs in our system can be
checked in randomized polynomial time.
The key difference between our system and previously studied
ones is that those systems are axiomatic in the sense that they require that \emph{every}
subcomputation (derived polynomial) be in the ideal generated by the original polynomial equations $F_i$, and thus be a sound consequence of the equations $F_1=\dotsb=F_m=0$.
In contrast our system has no such requirement; an \I proof can compute potentially
unsound subcomputations (whose vanishing does not follow from $F_1=\dotsb=F_m=0$), as long as the \emph{final polynomial} is in the ideal
generated by the equations. This key difference allows \I proofs to be
\emph{ordinary algebraic circuits}, and thus nearly all results in
algebraic circuit complexity apply directly to the Ideal Proof System. To quote the tagline of a common US food chain, the Ideal Proof System is a ``No rules, just right'' proof system.
Our first main theorem shows one of the advantages of this close connection with algebraic circuits. To the best of our knowledge, this is the first implication showing that a proof complexity lower bound implies any sort of computational complexity lower bound.
\begin{VNPthm}
Superpolynomial lower bounds for the Ideal Proof System imply that the permanent does not have polynomial-size
algebraic circuits, that is, $\cc{VNP} \neq \cc{VP}$.
\end{VNPthm}
\begin{comment:parityP}
As a simple corollary, we obtain a very different and arguably simpler proof
of the following result:
\begin{corollary}
If $\cc{NP}$ is not contained in $\cc{coMA}$, then $\cc{\parity P/poly}$ is not equal to $\cc{NC^1/poly}$.
\end{corollary}
\end{comment:parityP}
Under a reasonable assumption on polynomial identity testing (PIT), which we discuss further below, we are able to show that Extended Frege is equivalent to the Ideal Proof System. Extended Frege (EF) is the strongest natural deduction-style propositional proof system that has been proposed, and the proof complexity analog of $\cc{P/poly}$ (that is, Extended Frege = $\cc{P/poly}$-Frege).
\begin{EFthm}
Let $K$ be a family of polynomial-size Boolean circuits for PIT such that the PIT axioms for $K$ (see Definition~\ref{def:PITaxioms}) have polynomial-size EF proofs. Then EF polynomially simulates \I, and hence EF and \I are polynomially equivalent.
\end{EFthm}
Under this assumption about PIT, Theorems~\ref{thm:VNP} and \ref{thm:EF} in combination suggest a precise reason that proving lower bounds on Extended Frege is so difficult, namely, that doing so implies $\cc{VP} \neq \cc{VNP}$. Theorem~\ref{thm:EF} also suggests that to make progress toward proving lower bounds in proof complexity, it may be necessary to prove lower bounds for the
Ideal Proof System, which we feel is more natural, and creates the possibility of harnessing tools from algebra, representation theory, and algebraic circuit complexity. We give a specific suggestion of how to apply these tools towards proof complexity lower bounds in Section~\ref{sec:syzygy}.
\begin{remark}
Given that PIT$\in \cc{P}$ is known to imply lower bounds, one may wonder if the combination of the above two theorems really gives any explanation at all for the difficulty of proving lower bounds on Extended Frege. There are at least two reasons that it does.
First, the best known lower bound that follows from PIT$\in \cc{P}$ is an algebraic circuit-size lower bound on an integer polynomial that is computable in $\cc{NE} \cap \cc{coNE}$, whereas our conclusion is a lower bound on algebraic circuit-size for a function in $\cc{\# P} \subseteq \cc{PSPACE}$.
Second, the hypothesis that our PIT axioms can be proven efficiently in Extended Frege seems to be somewhat orthogonal to, but likely no stronger than, the widely-believed hypothesis that PIT is in $\cc{P}$. As Extended Frege is a nonuniform proof system, efficient Extended Frege proofs of our PIT axioms are unlikely to have any implications about the uniform complexity of PIT (and given that we already know unconditionally that PIT is in $\cc{P/poly}$, uniformity is what the entire question of derandomizing PIT is about). In the opposite direction, we note that under a standard working hypothesis in proof complexity, if PIT is in $\cc{P}$, then our PIT axioms can indeed be efficiently proven in Extended Frege (see Remark~\ref{rmk:PITproof} for details). \rmkqed
\end{remark}
Although PIT has long been a central problem of study in computational complexity---both because of its importance in many algorithms and because of its strong connection to circuit lower bounds---our theorems highlight the importance of PIT in proof complexity. Next we prove that Theorem~\ref{thm:EF} can be scaled down to obtain similar results for weaker Frege systems, and discuss some of its more striking consequences.
\begin{AC0thm}
Let $\mathcal{C}$ be any of the standard circuit classes $\cc{AC}^k, \cc{AC}^k[p], \cc{ACC}^k, \cc{TC}^k, \cc{NC}^k$. Let $K$ be a family of polynomial-size Boolean circuits for PIT (not necessarily in $\mathcal{C}$) such that the PIT axioms for $K$ have polynomial-size $\mathcal{C}$-Frege proofs. Then $\mathcal{C}$-Frege is polynomially equivalent to \I, and consequently to Extended Frege as well.
\end{AC0thm}
Theorem~\ref{thm:AC0} also highlights the importance of our PIT axioms for getting $\cc{AC}^0[p]$-Frege lower bounds, which has been an open question for more than thirty years. (For even weaker systems, Theorem~\ref{thm:AC0} in combination with known results yields an unconditional lower bound on $\cc{AC}^0$-Frege proofs of the PIT axioms.) In particular, we are in the following win-win situation:
\begin{AC0pcor}
Either
\begin{itemize}
\item There are polynomial-size $\cc{AC}^0[p]$-Frege proofs of the PIT axioms, in which case \emph{any superpolynomial lower bounds on $\cc{AC}^0[p]$-Frege imply $\cc{VNP}_{\F_p} \neq \cc{VP}_{\F_p}$}, thus explaining the difficulty of obtaining such lower bounds, or
\item There are no polynomial-size $\cc{AC}^0[p]$-Frege proofs of the PIT axioms, in which case we've gotten $\cc{AC}^0[p]$-Frege lower bounds.
\end{itemize}
\end{AC0pcor}
Finally, in Section~\ref{sec:syzygy} we suggest a new framework for proving lower bounds for
the Ideal Proof System which we feel has promise. Along the way, we make precise the difference in difficulty between proof complexity lower bounds (on \I, which may also apply to Extended Frege via Theorem~\ref{thm:EF}) and algebraic circuit lower bounds. In particular, the set of \emph{all $\I$-certificates} for a given unsatisfiable system of equations is, in a certain precise sense, ``finitely generated.'' We suggest how one might take advantage of this finite generation to transfer techniques from algebraic circuit complexity to prove lower bounds on $\I$, and consequently on Extended Frege (since $\I$ p-simulates Extended Frege unconditionally), giving hope for the first time for length-of-proof lower bounds on an algebraic proof system. We hope to pursue this approach in future work.
\subsection{Related Work}
Our system is related to previously studied algebraic proof systems such as the Nullstellensatz system \cite{BIKPP}, the Polynomial Calculus \cite{CEI}, and the systems of \cite{pitassi96,pitassiICM}; we discuss the precise relationships in Section~\ref{sec:general}.
Closer in spirit, but still quite different, is the equational proof system of Hrube\v{s} and Tzameret for polynomial identities.
They basically start with the ring axioms for the ring of multivariate polynomials over
$x_1,\ldots,x_n$, and study what equations they can derive from these axioms.
They study families of circuits that have various PIT algorithms, and ask which
families of circuits computing identically zero polynomials have proofs of this fact
in their equational calculus. For example, they show that for all circuits computing
polynomials of a restricted form, any such circuit in this class
computing the identically zero polynomial has a proof of this fact in their system.
In contrast, we want to reason directly about Boolean circuits $K$ for \emph{solving} PIT; a similar contrast applies to the related systems of Raz and Tzameret.
\subsection{Outline}
We start in Section~\ref{sec:general}, by proving several basic facts about \I. We discuss the relationship between \I and previously studied proof systems; in particular, we are able to very quickly and naturally define most of the previously studied algebraic proof systems in terms of familiar circuit-complexity measures on \I. We also highlight several consequences of results from algebraic complexity theory for the Ideal Proof System, such as division elimination \cite{strassenDivision} and the chasms at depth 3 \cite{GKKSchasm,tavenas} and 4 \cite{agrawalVinay,koiranChasm,tavenas}.
In Section~\ref{sec:VNP}, we prove that lower bounds on \I imply algebraic circuit lower bounds (Theorem~\ref{thm:VNP}).
In Section~\ref{sec:EF} we introduce our PIT axioms in detail and prove that, if the PIT axioms are efficiently provable in Extended Frege, then EF is p-equivalent to \I. In Section~\ref{sec:AC0p} we show how this proof can be modified to extend to much weaker systems such as $\cc{AC}^0$-Frege and $\cc{AC}^0[p]$-Frege, and discuss in more detail the consequences of this result, as already mentioned briefly above.
In Section~\ref{sec:syzygy} we suggest a new framework for transfering techniques from algebraic circuit complexity to (algebraic) proof complexity lower bounds.
Finally, in Section~\ref{sec:conclusion} we gather a long list of open questions raised by our work, many of which we believe may be quite approachable.
Appendices~\ref{app:encoding} and \ref{app:background} contain additional details, and Appendix~\ref{app:examples} contains two examples worked out in detail (Tseitin tautologies and the Pigeonhole Principle PHP).
The remaining appendices contain further fundamental developments; these are relegated to appendices only to focus attention on our main contributions as outlined above. In Appendix~\ref{app:found} we give further basic results on \I, including more algebraic structure on the set of all certificates, and a geometric variant of the Ideal Proof System that may be interesting in its own right. In Appendix~\ref{app:GI} we put forward a suggestive relationship between proof complexity (a la Ideal Proof System) and Graph Isomorphism. In Appendix~\ref{app:RIPS} we discuss a variant of the Ideal Proof System that allows rational functions and division in its certificates.
\section{PIT as a bridge between circuit complexity and proof complexity} \label{sec:PIT}
Having already introduced and discussed our PIT axioms in Section~\ref{sec:PITeabs}, here we complete the proofs of Theorems~\ref{thm:EF} and \ref{thm:AC0}. We maintain the notations and conventions of Section~\ref{sec:PITeabs}.
\subsection{Extended Frege is p-equivalent to \texorpdfstring{\I}{\Itext} if PIT is EF-provably easy} \label{sec:EF}
\begin{theorem} \label{thm:EF}
If there is a family $K$ of polynomial-size Boolean circuits computing PIT, such that the PIT axioms for $K$ have polynomial-size EF proofs, then EF is polynomially equivalent to $\I$.
\end{theorem}
To prove the theorem, we will first show that EF is p-equivalent to $\I$ if a family of propositional formulas expressing soundness of $\I$ are efficiently EF provable. Then we will show that efficient EF proofs of $Soundness_{\I}$ follow from efficient EF proofs for our PIT axioms.
\myparagraph{Soundness of \I}
It is well-known that for standard Cook--Reckhow proof systems, a proof system $P$ can p-simulate another proof system $P'$
if and only if $P$ can prove soundness of $P'$. Our proof system is not standard because verifying a proof requires probabilistic, rather than deterministic, polynomial-time. Still we will show how to formalize soundness of $\I$ propositionally, and we will show that if EF can efficiently prove soundness of $\I$ then EF is p-equivalent to $\I$.
Let $\varphi = \kappa_1 \land \ldots \land \kappa_m$ be an unsatisfiable propositional 3CNF formula over variables $p_1,\ldots,p_n$, and let $Q^\varphi_1, \ldots, Q^\varphi_m$ be the corresponding polynomial equations (each of degree at most 3) such that $\kappa_i(\alpha)=1$ if and only if $Q^\varphi_i(\alpha)=0$ for $\alpha \in \{0,1\}^{n}$. An $\I$-refutation of $\varphi$ is an algebraic circuit, $C$, which demonstrates that $1$ is in the ideal generated by the polynomial equations $\vec{Q}^\varphi$. (This demonstrates that the polynomial equations $\vec{Q}^\varphi=0$ are unsolvable, which is equivalent to proving that $\varphi$ is unsatisfiable.) In particular, recall that $C$ has two types of inputs: $x_1,\dotsc,x_n$ (corresponding to the propositional variables $p_1,\dotsc,p_n$) and the placeholder variables $\f_1, \ldots, \f_m$ (corresponding to the equations $Q^{\varphi}_1,\dotsc,Q^\varphi_m$), and satisfies the following two properties:
\begin{enumerate}
\item $C(\vec{x},\vec{0})=0$. This property essentially states that the polynomial computed by $C(\vec{x},\vec{Q}(\vec{x}))$ is in the ideal generated by $Q^\varphi_1,\ldots,Q^\varphi_m$.
\item $C(\vec{x},\vec{Q}^\varphi(\vec{x}))=1$. This property states that the polynomial computed by $C$, when we substitute the $Q^\varphi_i$'s for the $\f_i$'s, is the identically 1 polynomial.
\end{enumerate}
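To make the clause translation concrete (our illustration, using one standard encoding): a clause $\kappa$ maps to the product, over its literals, of $(1-x_i)$ for a positive literal and $x_i$ for a negated one, so that $\kappa(\alpha)=1$ if and only if $Q(\alpha)=0$ on Boolean inputs:

```python
from itertools import product
from sympy import symbols, Mul

def clause_poly(clause, xs):
    """clause: list of (var_index, polarity) pairs, polarity True for a
    positive literal.  Returns Q with kappa(a) = 1 iff Q(a) = 0 on {0,1}^n."""
    # A positive literal p_i contributes the factor (1 - x_i), which
    # vanishes exactly when p_i = 1; a negated literal contributes x_i.
    return Mul(*[(1 - xs[i]) if pos else xs[i] for i, pos in clause])

xs = symbols('x0 x1 x2')
# kappa = (p0 v ~p1 v p2)  -->  Q = (1 - x0) * x1 * (1 - x2), degree 3
Q = clause_poly([(0, True), (1, False), (2, True)], xs)

# Exhaustive check over Boolean assignments: clause true iff Q = 0.
for a in product([0, 1], repeat=3):
    kappa = a[0] == 1 or a[1] == 0 or a[2] == 1
    assert kappa == (Q.subs(dict(zip(xs, a))) == 0)
```

Since each clause of a 3CNF has at most three literals, each $Q^\varphi_i$ has degree at most 3, as stated above.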
\myparagraph{Encoding \I Proofs}
Let $K$ be a family of polynomial-size circuits for PIT.
Using $K_{m,n}$, we can create a polynomial-size Boolean circuit, $Proof_\I([C], [\varphi])$ that
is true if and only if $C$ is an $\I$-proof of the unsatisfiability of $\vec{Q}^\varphi=0$.
The polynomial-sized Boolean circuit $Proof_\I([C],[\varphi])$ first takes the encoding of the
algebraic circuit $C$ (which has
$x$-variables and placeholder variables), and creates the encoding of a new algebraic circuit, $[C']$, where
$C'$ is like $C$ but with each $\f_i$ variable replaced by 0.
Secondly, it takes the encoding of $C$ and $[\varphi]$ and creates the encoding of a new
circuit $C''$, where $C''$ is like $C$ but now with each $\f_i$ variable replaced by $Q^\varphi_i$.
(Note that whereas $C$ has $n+m$ underlying algebraic variables, both $C'$ and $C''$ have only $n$ underlying variables.)
$Proof_\I([C], [\varphi])$ is true if and only if
$K([C'])$ accepts---that is, $C'(\vec{x})=C(\vec{x},\vec{0})$ computes the identically $0$ polynomial---and
$K([1-C''])$ accepts---that is, $C''(\vec{x})=C(\vec{x},\vec{Q}^{\varphi}(\vec{x}))$ computes the identically $1$ polynomial.
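The checker just described can be sketched as follows, with symbolic expansion standing in for the Boolean PIT circuit $K$ (in reality $K$ operates on circuit \emph{encodings} $[C']$ and $[1-C'']$; the functions and names below are our own illustration):

```python
from sympy import symbols, expand

def K(poly):
    """Stand-in for the PIT circuit: accepts iff poly is identically zero."""
    return expand(poly) == 0

def proof_IPS(C, Qs, xs, fs):
    """Check the two conditions of Proof_IPS for a certificate C(xs, fs)
    against the translated clause polynomials Qs."""
    C_prime = C.subs({f: 0 for f in fs})       # C'  = C(x, 0)
    C_dblprime = C.subs(dict(zip(fs, Qs)))     # C'' = C(x, Q(x))
    return K(C_prime) and K(1 - C_dblprime)

# phi = p AND ~p translates to Q1 = 1 - x, Q2 = x; certificate C = f1 + f2.
x, f1, f2 = symbols('x f1 f2')
assert proof_IPS(f1 + f2, [1 - x, x], [x], [f1, f2])
```

Replacing the symbolic check by an actual PIT circuit on encodings is exactly what makes verification randomized polynomial time rather than deterministic.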
\begin{definition}
Let formula $Truth_{bool}(\vec{p},\vec{q})$ state that the truth assignment $\vec{q}$
satisfies the Boolean formula coded by $\vec{p}$.
The soundness of $\I$ says that if
$\varphi$ has a refutation in $\I$, then $\varphi$ is unsatisfiable.
That is, $Soundness_{\I,m,n}([C], [\varphi], \vec{p})$ has variables that encode
a size $m$ \I-proof $C$, variables that encode a 3CNF formula $\varphi$ over $n$ variables, and $n$
additional Boolean variables, $\vec{p}$.
$Soundness_{\I,m,n}([C], [\varphi], \vec{p})$ states:
\[
Proof_\I(\prop{[C]},\prop{[\varphi]}) \rightarrow \neg Truth_{bool}(\prop{[\varphi]}, \vec{p}).
\]
\end{definition}
\begin{lemma} \label{lem:soundness}
If EF can efficiently prove $Soundness_\I$ for some polynomial-size Boolean circuit family $K$ computing PIT, then EF is p-equivalent to $\I$.
\end{lemma}
\begin{proof}
Because $\I$ can p-simulate EF, it suffices to show that
if EF can prove Soundness of $\I$, then EF can p-simulate $\I$.
Assume that we have a polynomial-size EF proof
of $Soundness_\I$.
Now suppose that $C$ is an \I-refutation of an unsatisfiable 3CNF formula $\varphi$
on variables $\vec{p}$.
We will show that EF can also prove $\neg \varphi$ with a proof of size polynomial in $|C|$.
First,
we claim that it follows from a natural encoding (see Section~\ref{sec:encoding}) that EF can efficiently prove:
\[
\varphi \rightarrow Truth_{bool}([\varphi],\vec{p}).
\]
(The only variables of this statement are the $\vec{p}$ variables, because $\varphi$ is a fixed 3CNF formula, so the encoding $[\varphi]$ is a variable-free Boolean string.)
Second,
if $C$ is an $\I$-refutation of $\varphi$, then EF can prove $Proof_\I([C],[\varphi])$.\footnote{The fact that $Proof_\I([C],[\varphi])$ is even true, given that $C$ is an \I-refutation of $\varphi$, follows from the completeness of the circuit $K$ computing PIT---that is, if $C \equiv 0$, then $K([C])$ accepts. This is one of only two places in the proof of Theorem~\ref{thm:EF} that we actually need the assumption that $K$ correctly computes PIT, rather than merely assuming that $K$ satisfies our PIT axioms. However, it is clear that this usage of this assumption is crucial. The other usage is in Step~\ref{step:axioms:1} of Lemma~\ref{lem:axioms}.} This holds because both $C$ and $\varphi$ are fixed, so this formula is variable-free. Thus, EF can just verify that it is true.
Third,
by soundness of $\I$, which we are assuming is EF-provable, and the
fact that EF can prove $Proof_\I([C],[\varphi])$ (step 2),
it follows by modus ponens that EF can prove
$\neg Truth_{bool}([\varphi], \vec{p})$.
(The statement $Soundness_\I([C],[\varphi],\vec{p})$ for this instance will only involve variables $\vec{p}$: the other two sets of inputs to the $Soundness_\I$ statement, $[C]$ and $[\varphi]$, are constants here since both $C$ and $\varphi$ are fixed.)
Finally, by modus ponens and the contrapositive of $\varphi \rightarrow Truth_{bool}([\varphi], \vec{p})$, we conclude in EF $\neg \varphi$, as desired.
\end{proof}
Theorem~\ref{thm:EF} follows from the preceding lemma and the next one.
\begin{lemma} \label{lem:axioms}
If EF can efficiently prove the PIT axioms for some polynomial-size Boolean circuit family $K$ computing PIT,
then EF can efficiently prove $Soundness_{\I}$ (for that same $K$).
\end{lemma}
\begin{proof}
Starting with $Truth_{bool}(\prop{[\varphi]},\vec{p})$, $K(\prop{[C(\vec{x},\vec{0})]})$,
$K(\prop{[1-C(\vec{x},\vec{Q}(\vec{x}))]})$, we will derive a contradiction.
\begin{enumerate}
\item\label{step:axioms:1} First show for every $i \in [m]$,
$Truth_{bool}([\varphi],\vec{p}) \rightarrow K(\prop{[Q_i^\varphi(\vec{p})]})$, where
$Q_i^\varphi$ is the low degree polynomial corresponding to the clause, $\kappa_i$, of $\varphi$. Note that, as $\varphi$ is not a fixed formula but is determined by the propositional variables encoding $\prop{[\varphi]}$, the encoding $\prop{[Q_i^\varphi]}$ depends on a subset of these variables.
$Truth_{bool}(\prop{[\varphi]},\vec{p})$ states that each clause $\kappa_i$ in $\varphi$ evaluates
to true under $\vec{p}$. It is a tautology that if $\kappa_i$
evaluates to true under $\vec{p}$, then $Q_i^{\varphi}$ evaluates to $0$ at $\vec{p}$. Since $K$ correctly computes PIT, \begin{equation} \label{eqn:truth}
Truth_{bool}(\prop{[\kappa_i]},\vec{p}) \rightarrow K(\prop{[Q_i^\varphi(\vec{p})]})\tag{*}
\end{equation}
is a tautology. Furthermore, although both the encoding $\prop{[\kappa_i]}$ and $\prop{[Q_i^{\varphi}]}$ depend on the propositional variables encoding $\prop{[\varphi]}$, since we assume that $\varphi$ is a 3CNF, these only depend on \emph{constantly many} of the variables encoding $\prop{[\varphi]}$. Thus the tautology (\ref{eqn:truth}) can be proven in EF by brute force.
Putting these together we can derive $Truth_{bool}(\prop{[\varphi]},\vec{p}) \rightarrow K(\prop{[Q_i^\varphi(\vec{p})]})$,
as desired.
\item\label{step:axioms:2} Using the assumption $Truth_{bool}(\prop{[\varphi]},\vec{p})$ together with (\ref{step:axioms:1}) we
derive $K(\prop{[Q_i^\varphi(\vec{p})]})$ for all $i \in [m]$.
\item\label{step:axioms:3} Using Axiom~\ref{axiom:Boolean} we can prove
$K(\prop{[C(\vec{x},\vec{0})]}) \rightarrow K(\prop{[C(\vec{p},\vec{0})]})$. Using modus ponens with the assumption $K(\prop{[C(\vec{x},\vec{0})]})$, we derive $K(\prop{[C(\vec{p},\vec{0})]})$.
\item\label{step:axioms:4} Repeatedly using Axiom~\ref{axiom:subzero} and Axiom~\ref{axiom:perm} we can prove
\[
K(\prop{[Q_1^\varphi(\vec{p})]}), K(\prop{[Q_2^\varphi(\vec{p})]}), \ldots , K(\prop{[Q_m^\varphi(\vec{p})]}), K(\prop{[C(\vec{p},\vec{0})]}) \rightarrow
K(\prop{[C(\vec{p},\vec{Q}(\vec{p}))]}).
\]
\item\label{step:axioms:5} Applying modus ponens repeatedly with (\ref{step:axioms:4}), (\ref{step:axioms:2}) and (\ref{step:axioms:3}) we can prove
$K(\prop{[C(\vec{p},\vec{Q}(\vec{p}))]})$.
\item\label{step:axioms:6} Applying Axiom~\ref{axiom:one} to (\ref{step:axioms:5}) we get $\neg K(\prop{[ 1 - C(\vec{p},\vec{Q}(\vec{p}))]})$.
\item\label{step:axioms:7} Using Axiom~\ref{axiom:Boolean} we can prove
$K(\prop{[1 - C(\vec{x},\vec{Q}(\vec{x}))]}) \rightarrow K(\prop{[1-C(\vec{p},\vec{Q}(\vec{p}))]})$.
Using our assumption $K(\prop{[1-C(\vec{x},\vec{Q}(\vec{x}))]})$ and modus ponens, we conclude
$K(\prop{[1-C(\vec{p},\vec{Q}(\vec{p}))]})$.
\end{enumerate}
Finally, (\ref{step:axioms:6}) and (\ref{step:axioms:7}) give a contradiction.
\end{proof}
\subsection{Proofs relating \texorpdfstring{$\cc{AC}^0[p]$-Frege}{AC0[p]-Frege} lower bounds, PIT, and circuit lower bounds}
\label{sec:AC0p}
Having already discussed the corollaries and consequences of Theorem~\ref{thm:AC0}, here we merely complete its proof.
\begin{theorem} \label{thm:AC0}
Let $\mathcal{C}$ be any class of circuits closed under $\cc{AC}^0$ circuit reductions. If there is a family $K$ of polynomial-size Boolean circuits for PIT such that the PIT axioms for $K$ have polynomial-size $\mathcal{C}$-Frege proofs,
then $\mathcal{C}$-Frege is polynomially equivalent to $\I$, and consequently polynomially equivalent to Extended Frege.
\end{theorem}
\newcommand{\op}[1]{\, op_{#1} \,}
Note that here we \emph{do not} need to restrict the circuit $K$ to be in the class $\mathcal{C}$. This requires one more technical device compared to the proofs in the previous section. The proof of Theorem~\ref{thm:AC0} follows the proof of Theorem~\ref{thm:EF} very closely. The main new ingredient is a folklore technical device that allows even very weak systems such as $\cc{AC}^0$-Frege to make statements about arbitrary circuits $K$, together with a careful analysis of what was needed in the proof of Theorem~\ref{thm:EF}.
\myparagraph{Encoding $K$ into weak proof systems}
Extended Frege can easily reason about arbitrary circuits $K$: for each gate $g$ of $K$ (or even each gate of each instance of $K$ in a statement, if so desired), with children $g_{\ell}, g_{r}$, EF can introduce a new variable $k_g$ together with the requirement that $k_g \leftrightarrow k_{g_{\ell}} \op{g} k_{g_{r}}$, where $\op{g}$ is the corresponding operation $g = g_{\ell} \op{g} g_{r}$ (\eg, $\land$, $\lor$, etc.). But weaker proof systems such as Frege (=$\cc{NC}^1$-Frege), $\cc{AC}^0[p]$-Frege, or $\cc{AC}^0$-Frege do not have this capability. We thus need to help them out by introducing these new variables and formulae ahead of time.
For each gate $g$, the statement $k_g \leftrightarrow k_{g_{\ell}} \op{g} k_{g_{r}}$ only involves 3 variables, and thus can be converted into a 3CNF of constant size. We refer to these clauses as the ``$K$-clauses.'' Note that the $K$-clauses do not set the inputs of $K$ to any particular values nor require its output to be any particular value. We denote the variables corresponding to $K$'s inputs as $k_{in,i}$ and the variable corresponding to $K$'s output as $k_{out}$.
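For illustration (our own sketch, not part of the formal development), the $K$-clauses for a single gate are exactly the Tseitin-style CNF of $k_g \leftrightarrow k_{g_{\ell}} \op{g} k_{g_{r}}$: one clause ruling out each falsifying assignment of the three variables.

```python
from itertools import product

def k_clauses(g, l, r, op):
    """Tseitin-style constant-size CNF for k_g <-> (k_l op k_r):
    one clause per assignment violating the gate relation."""
    clauses = []
    for vg, vl, vr in product([0, 1], repeat=3):
        if vg != op(vl, vr):
            # clause asserting "this assignment does not occur";
            # literal (v, s) is true when v is assigned s
            clauses.append([(g, 1 - vg), (l, 1 - vl), (r, 1 - vr)])
    return clauses

def satisfies(assign, clauses):
    return all(any(assign[v] == s for v, s in cl) for cl in clauses)

and_clauses = k_clauses("g", "l", "r", lambda a, b: a & b)
```

An assignment satisfies all of these clauses exactly when the gate relation $k_g = k_{g_{\ell}} \land k_{g_{r}}$ holds, which is all that the $K$-clauses are meant to enforce.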
The modified statement $Proof_\I(\prop{[C]},\prop{[\varphi]})$ now takes the following form. Recall that $Proof_\I$ involves two uses of $K$: $K(\prop{[C(\vec{x},\vec{0})]})$ and $K(\prop{[1-C(\vec{x}, \vec{Q}^\varphi(\vec{x}))]})$. Each of these instances of $K$ needs to get its own set of variables, which we denote $k^{(1)}_{g}$ for gate $g$ in the first instance, and $k^{(2)}_{g}$ for gate $g$ in the second instance, together with their own copies of the $K$-clauses. For an encoding $[C]$ or $[\varphi]$, let $[C]_{i}$ denote its $i$-th bit, which may be a constant, a propositional variable, or even a propositional formula. Then $Proof_\I(\prop{[C]}, \prop{[\varphi]})$ is
\[
\begin{split}
& \bigwedge_{g} \left(k^{(1)}_{g} \leftrightarrow k^{(1)}_{g_{\ell}} \op{g} k^{(1)}_{g_{r}}\right)
\land \bigwedge_{i} \left(k^{(1)}_{in,i} \leftrightarrow \prop{[C(\vec{x},\vec{0})]}_{i}\right) \\
\land & \bigwedge_{g} \left(k^{(2)}_{g} \leftrightarrow k^{(2)}_{g_{\ell}} \op{g} k^{(2)}_{g_{r}}\right)
\land \bigwedge_{i} \left(k^{(2)}_{in,i} \leftrightarrow \prop{[1-C(\vec{x}, \vec{Q}^{\varphi}(\vec{x}))]}_{i}\right) \\
\rightarrow & k^{(1)}_{out} \land k^{(2)}_{out} \\
\end{split}
\]
Throughout, we use the same notation $Proof_\I(\prop{[C]}, \prop{[\varphi]})$ as before to mean this modified statement (we will no longer be referring to the original, EF-style statement). The modified statement $Soundness_\I(\prop{[C]}, \prop{[\varphi]}, \prop{\vec{p}})$ will now take the form
\[
\left( (\text{dummy statements}) \land Proof_\I(\prop{[C]}, \prop{[\varphi]}) \right)\rightarrow \neg Truth_{bool}(\prop{[\varphi]}, \vec{p}),
\]
using the new version of $Proof_\I$. Here ``dummy statements'' refers to certain statements that we will explain in Lemma~\ref{lem:AC0soundness}. These dummy statements will only involve variables that do not appear in the rest of $Soundness_\I$, and therefore will be immediately seen not to affect its truth or provability.
\myparagraph{The proofs}
Lemmata~\ref{lem:AC0soundness} and \ref{lem:AC0axioms} are the $\cc{AC}^0$-analogs of Lemmata~\ref{lem:soundness} and \ref{lem:axioms}, respectively. The proof of Lemma~\ref{lem:AC0soundness} will cause no trouble, and the proof of Lemma~\ref{lem:AC0axioms} will need one additional technical device (the ``dummy statements'' above).
Before getting to their proofs, we state the main additional lemma that we use to handle the new $K$ variables. We say that a variable $k^{(i)}_{in,j}$ corresponding to an input gate of $K$ is \definedWord{set to $\psi$} by a propositional statement if $k^{(i)}_{in,j} \leftrightarrow \psi$ occurs in the statement.
\begin{lemma} \label{lem:K}
Let $(\varphi_n)$ be a sequence of tautologies on $\poly(n)$ variables, including any number of copies of the $K$ variables, of the form $\varphi = \left(\left(\bigwedge_{i} \alpha_i\right) \rightarrow \omega\right)$. Let $\vec{p}$ denote the other (non-$K$) variables. Suppose that 1) there are at most $O(\log n)$ non-$K$ variables in $\varphi$, 2) for each copy of $K$, the corresponding $K$-clauses appear amongst the $\alpha_i$, 3) the only $K$ variables that appear in $\omega$ are output variables $k^{(i)}_{out}$, and 4) if $k^{(i)}_{out}$ appears in $\omega$, then all the inputs to $K^{(i)}$ are set to formulas that syntactically depend on at most $\vec{p}$.
Then there is a $\poly(n)$-size $\cc{AC}^0$-Frege proof of $\varphi$.
\end{lemma}
\begin{proof}[Proof sketch]
The basic idea is that $\cc{AC}^0$-Frege can brute force over all $\poly(n)$-many assignments to the $O(\log n)$ non-$K$ variables, and for each such assignment can then just evaluate each copy of $K$ gate by gate to verify the tautology. Any copy $K^{(i)}$ of $K$ all of whose input variables are unset must not affect the truth of $\varphi$, since none of the $k^{(i)}$ variables can appear in the consequent $\omega$ of $\varphi$. In fact, for such copies of $K$, the $K$-clauses merely appear as disjuncts of $\varphi$, since it then takes the form $\varphi = \bigvee_{i} (\neg \alpha_i) \vee \omega = \left(\bigvee_{g} \neg (k^{(i)}_{g} \leftrightarrow k^{(i)}_{g_{\ell}} \op{g} k^{(i)}_{g_r}) \right) \vee \left(\bigvee_{\text{remaining clauses $i$}} \neg \alpha_i \right) \vee \omega$. Thus, if $\cc{AC}^0$-Frege can prove that the rest of $\varphi$, namely $\left(\bigvee_{\text{remaining clauses $i$}} \neg \alpha_i \right) \vee \omega$ is a tautology, then it can prove that $\varphi$ is a tautology.
\end{proof}
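The gate-by-gate evaluation underlying this brute-force argument is ordinary topological circuit evaluation: once the inputs of a copy of $K$ are fixed, every internal $k_g$ variable is forced, clause by clause. A minimal sketch, where the list-of-gates format and gate names are our own assumptions:

```python
def eval_gates(inputs, gates):
    """Evaluate a circuit gate by gate, in topological order.

    `inputs`: dict name -> 0/1.
    `gates`: list of (name, op, left, right), ordered so that
    children precede parents (our own illustrative format).
    """
    val = dict(inputs)
    ops = {"and": lambda a, b: a & b,
           "or":  lambda a, b: a | b,
           "xor": lambda a, b: a ^ b}
    for name, op, l, r in gates:
        val[name] = ops[op](val[l], val[r])
    return val

# One fixed assignment to the non-K variables forces all gate values:
v = eval_gates({"x": 1, "y": 0},
               [("g1", "or", "x", "y"), ("out", "and", "g1", "x")])
```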
Now we state the analogs of Lemmata~\ref{lem:soundness} and \ref{lem:axioms} for $\mathcal{C}$-Frege. Because of the similarity of the proofs to the previous case, we merely indicate how their proofs differ from the Extended Frege case.
\begin{lemma}[$\cc{AC}^0$ analog of Lemma~\ref{lem:soundness}] \label{lem:AC0soundness}
Let $\mathcal{C}$ be a class of circuits closed under $\cc{AC}^0$ circuit reductions. If there is a family $K$ of polynomial-size Boolean circuits computing PIT, such that the PIT axioms for $K$ have polynomial-size $\mathcal{C}$-Frege proofs,
then $\mathcal{C}$-Frege is polynomially equivalent to $\I$.
\end{lemma}
\begin{proof}
Mimic the proof of Lemma~\ref{lem:soundness}. The third and fourth steps of that proof are just modus ponens, so we need only check the first two steps.
The first step is to show that $\mathcal{C}$-Frege can prove $\varphi \rightarrow Truth_{bool}([\varphi], \prop{\vec{p}})$. This follows directly from the details of the encoding of $[\varphi]$ and the full definition of $Truth_{bool}$; see Lemma~\ref{lem:truth}.
The second step is to show that $\mathcal{C}$-Frege can prove $Proof_\I([C],[\varphi])$ for a fixed $C,\varphi$. In Lemma~\ref{lem:soundness}, this followed because this statement was variable-free. Now this statement is no longer variable-free, since it involves two copies of $K$ and the corresponding variables and $K$-clauses. However, $Proof_\I([C],[\varphi])$ satisfies the requirements of Lemma~\ref{lem:K}, and applying that lemma we are done.
\end{proof}
\begin{lemma}[$\cc{AC}^0$ analog of Lemma~\ref{lem:axioms}] \label{lem:AC0axioms}
Let $\mathcal{C}$ be a class of circuits closed under $\cc{AC}^0$ circuit reductions. If $\mathcal{C}$-Frege can efficiently prove the PIT axioms for some polynomial-sized family of circuits $K$ computing PIT, then $\mathcal{C}$-Frege can efficiently prove $Soundness_{\I}$ (for that same $K$).
\end{lemma}
\begin{proof}
We mimic the proof of Lemma~\ref{lem:axioms}. In steps (\ref{step:axioms:1}), (\ref{step:axioms:2}), and (\ref{step:axioms:4}) of that proof we used $m$ additional copies of $K$, where $m$ is the number of clauses in the CNF $\varphi$ encoded by $\prop{[\varphi]}$, and thus $m \leq \poly(n)$. In order to talk about these copies of $K$ in $\mathcal{C}$-Frege, however, the $K$ variables must already be present in the statement we wish to prove in $\mathcal{C}$-Frege. The ``dummy statements'' in the new version of soundness are the $K$-clauses---with inputs and outputs not set to anything---for each of the $m$ new copies of $K$, which we denote $K^{(3)}, \dotsc, K^{(m+2)}$ (recall that the first two copies $K^{(1)}$ and $K^{(2)}$ are already used in the statement of $Proof_\I$). We won't actually need these clauses anywhere in the proof; we just need their variables to be present from the beginning.
Starting with $Truth_{bool}(\prop{[\varphi]},\vec{p})$, $K^{(1)}(\prop{[C(\vec{x},\vec{0})]})$, $K^{(2)}(\prop{[1-C(\vec{x},\vec{Q}(\vec{x}))]})$ we'll derive a contradiction.
The only step of the proof of Lemma~\ref{lem:axioms} that was neither the use of an axiom nor an application of modus ponens was step (\ref{step:axioms:1}), so it suffices to verify that this step can be carried out in $\cc{AC}^0$-Frege with the $K$-clauses.
Step (\ref{step:axioms:1}) was to show for every $i \in [m]$, $Truth_{bool}([\varphi],\vec{p}) \rightarrow K(\prop{[Q_i^\varphi(\vec{p})]})$, where $Q_i^\varphi$ is the low degree polynomial corresponding to the clause, $\kappa_i$, of $\varphi$. Note that, as $\varphi$ is not a fixed formula but is determined by the propositional variables encoding $\prop{[\varphi]}$, the encoding $\prop{[Q_i^\varphi]}$ depends on a subset of these variables.
$Truth_{bool}(\prop{[\varphi]},\vec{p})$ states that each clause $\kappa_i$ in $\varphi$ evaluates
to true under $\vec{p}$. It is a tautology that if $\kappa_i$
evaluates to true under $\vec{p}$, then $Q_i^{\varphi}$ evaluates to $0$ at $\vec{p}$. Since $K$ correctly computes PIT, \begin{equation} \label{eqn:truthAC0}
Truth_{bool}(\prop{[\kappa_i]},\vec{p}) \rightarrow K^{(i+2)}(\prop{[Q_i^\varphi(\vec{p})]})\tag{**}
\end{equation}
is a tautology. Furthermore, although both the encoding $\prop{[\kappa_i]}$ and $\prop{[Q_i^{\varphi}]}$ depend on the propositional variables encoding $\prop{[\varphi]}$, since we assume that $\varphi$ is a 3CNF, these only depend on \emph{constantly many} of the variables encoding $\prop{[\varphi]}$. Writing out (\ref{eqn:truthAC0}) it has the form
\[
Truth_{bool} \rightarrow \left(\text{$K^{(i+2)}$-clauses } \land (\text{ setting inputs of $K^{(i+2)}$ to $\prop{[Q_i^\varphi(\vec{p})]}$}) \rightarrow k^{(i+2)}_{out} \right),
\]
which is equivalent to
\[
Truth_{bool} \land (K^{(i+2)}\text{-clauses}) \land (\text{ setting inputs of $K^{(i+2)}$ to $\prop{[Q_i^\varphi(\vec{p})]}$}) \rightarrow k^{(i+2)}_{out}.
\]
Thus (\ref{eqn:truthAC0}) satisfies the conditions of Lemma~\ref{lem:K} and has a short $\cc{AC}^0$-Frege proof. Since $Truth_{bool}(\prop{[\varphi]}, \vec{p})$ is defined as $\bigwedge_i Truth_{bool}(\prop{[\kappa_i]}, \vec{p})$ (see Section~\ref{sec:encoding}), we can then derive
\[
Truth_{bool}(\prop{[\varphi]},\vec{p}) \rightarrow K^{(i+2)}(\prop{[Q_i^\varphi(\vec{p})]}),
\]
as desired.
\end{proof}
\subsection{Some details of the encodings} \label{sec:encoding}
For an $\leq m$-clause, $\leq n$-variable 3CNF $\varphi = \kappa_1 \land \dotsb \land \kappa_m$, its encoding is a Boolean string of length $3m( \lceil \log_2(n) \rceil+1)$. Each literal $x_i$ or $\neg x_i$ is encoded as the binary encoding of $i-1$ ($\lceil \log_2(n) \rceil$ bits, matching the convention for $[i]$ below) plus a single other bit indicating whether the literal is positive (1) or negative (0). The encoding of a single clause is just the concatenation of the encodings of the three literals, and the encoding of $\varphi$ is the concatenation of these encodings.
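This encoding can be sketched concretely as follows; the list-of-signed-indices input format (\eg, $-2$ for $\neg x_2$) is our own choice.

```python
from math import ceil, log2

def encode_3cnf(clauses, n):
    """Encode a 3CNF as a flat bit list: per literal, ceil(log2 n)
    index bits (binary encoding of i-1) followed by one sign bit."""
    k = ceil(log2(n))
    bits = []
    for clause in clauses:          # each clause: three signed indices
        for lit in clause:
            i = abs(lit) - 1        # variables are 1-based in the text
            bits += [(i >> b) & 1 for b in reversed(range(k))]
            bits.append(1 if lit > 0 else 0)
    return bits

# (x1 or ~x2 or x3) over n = 4 variables: 3 * (2 + 1) = 9 bits
enc = encode_3cnf([[1, -2, 3]], 4)
```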
We define
\[
Truth_{bool,n,m}(\prop{[\varphi]}, \vec{p}) \defeq \bigwedge_{i=1}^{m} Truth_{bool,n}(\prop{[\kappa_i]}, \vec{p}).
\]
For a single 3-literal clause $\kappa$, we define $Truth_{bool,n}(\prop{[\kappa]}, \vec{p})$ as follows. For an integer $i$, let $[i]$ denote the standard binary encoding of $i-1$ (so that the numbers $1,\dotsc,2^k$ are put into bijective correspondence with $\{0,1\}^{k}$). Let $\prop{[\kappa]} = \vec{q_1} s_1 \vec{q_2} s_2 \vec{q_3} s_3$ where each $s_i$ is the sign bit (positive/negative) and each $\vec{q_i}$ is a length-$\lceil \log_2 n \rceil$ string of variables corresponding to the encoding of the index of a variable. We write $\vec{q} = [k]$ as shorthand for $\bigwedge_{i=1}^{\lceil \log_2 n \rceil} (q_i \leftrightarrow [k]_i)$, where $x \leftrightarrow y$ is shorthand for $(x \land y) \lor (\neg x \land \neg y)$. Finally, we define:
\[
Truth_{bool,n}(\prop{[\kappa]}, \vec{p}) \defeq \bigvee_{j=1}^{3} \bigvee_{i=1}^{n} (\vec{q}_j = [i] \land (p_i \leftrightarrow s_j)).
\]
(Hereafter we drop the subscripts $n,m$; they should be clear from context.)
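A direct way to sanity-check this definition is to evaluate it on an encoded clause; in this sketch (our own), $[i]$ is the binary encoding of $i-1$ as above, and each literal contributes its index bits followed by its sign bit.

```python
from math import ceil, log2

def truth_bool_clause(enc, p, n):
    """Evaluate Truth_bool on one encoded 3-literal clause.

    `enc`: flat 0/1 list, per literal ceil(log2 n) index bits
    (binary of i-1) then a sign bit; `p`: assignment p[1..n]."""
    k = ceil(log2(n))
    ok = False
    for j in range(3):                       # the three literals
        chunk = enc[j * (k + 1):(j + 1) * (k + 1)]
        q, s = chunk[:k], chunk[k]
        i = int("".join(map(str, q)), 2) + 1  # decode q_j = [i]
        ok = ok or (p[i] == s)                # disjunct (p_i <-> s_j)
    return ok

# clause (x1 or ~x2 or x1) over n = 2, padding the third slot with x1
enc = [0, 1, 1, 0, 0, 1]
```

For a fixed clause this computes exactly the value of the clause itself, which is the content of Lemma~\ref{lem:truth} below.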
\begin{lemma} \label{lem:truth}
For any 3CNF $\varphi$ on $n$ variables, there are $\poly(n)$-size $\cc{AC}^0$-Frege proofs of $\varphi(\vec{p}) \rightarrow Truth_{bool}([\varphi], \vec{p})$.
\end{lemma}
\begin{proof}
In fact, we will see that for a fixed clause $\kappa$, after simplifying constants---that is, $\varphi \land 1$ and $\varphi \lor 0$ both simplify to $\varphi$, $\varphi \land 0$ simplifies to $0$, and $\varphi \lor 1$ simplifies to $1$---$Truth_{bool}([\kappa], \vec{p})$ in fact becomes \emph{syntactically identical} to $\kappa(\vec{p})$. By the definition of $Truth_{bool}([\varphi], \vec{p})$, we get the same conclusion for any fixed CNF $\varphi$. Simplifying constants can easily be carried out in $\cc{AC}^0$-Frege.
For a fixed $\kappa$, $\vec{q}_j$ and $s_j$ become fixed to constants for $j=1,2,3$. Denote the indices of the three variables in $\kappa$ by $i_1, i_2, i_3$. The only variables left in the statement $Truth_{bool}([\kappa], \vec{p})$ are $\vec{p}$. Since the $\vec{q}_{j}$ and $[i]$ are all fixed, every term in $\bigvee_{i}( \vec{q}_j = [i] \land (p_i \leftrightarrow s_j))$ except for the $i_j$ term simplifies to $0$, so this entire disjunction simplifies to $(p_{i_j} \leftrightarrow s_j)$. Since the $s_j$ are also fixed, if $s_j=1$ then $(p_{i_j} \leftrightarrow s_j)$ simplifies to $p_{i_j}$, and if $s_j=0$ then it simplifies to $\neg p_{i_j}$. With this understanding, we write $\pm p_{i_j}$ for the corresponding literal. Then $Truth_{bool}([\kappa], \vec{p})$ simplifies to $(\pm p_{i_1} \lor \pm p_{i_2} \lor \pm p_{i_3})$ (with signs as described previously). This is exactly $\kappa(\vec{p})$.
\end{proof}
\section{Simulations} \label{sec:simulations}
In this section we start with a result we haven't yet mentioned relating general \I to Hilbert-like \I, and then complete the proofs of any remaining simulation results that we've stated previously. Namely, we relate Pitassi's previous algebraic systems \cite{pitassi96, pitassiICM} and number-of-lines in Polynomial Calculus proofs with subsystems of \I; we show that $\I_{\F_p}$ p-simulates $\cc{AC}^0[p]$-Frege in a depth-preserving way; and we show that over fields of characteristic zero, $\I$-proofs of polynomial size \emph{with arbitrary constants} can be simulated in $\cc{coAM}$, assuming the Generalized Riemann Hypothesis.
\subsection{General versus Hilbert-like \texorpdfstring{\I}{\Itext}}
\begin{proposition} \label{prop:gen2Hilb}
Let $F_1 = \dotsb = F_m = 0$ be a polynomial system of equations in $n$ variables $x_1, \dotsc, x_n$ and let $C(\vec{x}, \vec{\f})$ be an \I-certificate of the unsatisfiability of this system. Let $D = \max_{i} \deg_{\f_i} C$ and let $t$ be the number of terms of $C$, when viewed as a polynomial in the $\f_i$ with coefficients in $\F[\vec{x}]$. Suppose $C$ and each $F_i$ can be computed by a circuit of size $\leq s$.
Then a Hilbert-like \I-certificate for this system can be computed by a circuit of size $\poly(D,t,n,s)$.\footnote{If the base field $\F$ has size less than $T = Dt\binom{n}{2}$, and the original circuit had multiplication gates of fan-in bounded by $k$, then the size of the resulting Hilbert-like certificate should be multiplied by $(\log T)^{k}$.}
\end{proposition}
The proof uses known sparse multivariate polynomial interpolation algorithms. The threshold $T$ is essentially the number of points at which the polynomial must be evaluated in the course of the interpolation algorithm. Here we use one of the early, elegant interpolation algorithms due to Zippel \cite{zippel}. Although Zippel's algorithm chooses random points at which to evaluate polynomials for the interpolation, in our nonuniform setting it suffices merely for points with the required properties to exist (which they do as long as $|\F| \geq T$). Better bounds may be achievable using more recent interpolation algorithms such as those of Ben-Or and Tiwari \cite{benOrTiwari} or Kaltofen and Yagati \cite{kaltofenYagati}. We note that all of these interpolation algorithms only give us limited control on the \emph{depth} of the resulting Hilbert-like \I-certificate (as a function of the depth of the original \I-certificate $f$), because they all involve solving linear systems of equations, which is not known to be computable efficiently in constant depth.
\begin{proof}
Using a sparse multivariate interpolation algorithm such as Zippel's \cite{zippel}, for each monomial in the placeholder variables $\vec{\f}$ that appears in $C$, there is a polynomial-size algebraic circuit for its coefficient, which is an element of $\F[\vec{x}]$. For each such monomial $\vec{\f}^{\vec{e}} = \f_1^{e_1} \dotsb \f_m^{e_m}$, with coefficient $c_{\vec{e}}(\vec{x})$, there is a small circuit $C'$ computing $c_{\vec{e}}(\vec{x}) \vec{\f}^{\vec{e}}$. Since every $\vec{\f}$-monomial appearing in $C$ is non-constant, at least one of the exponents $e_i > 0$. Let $i_0$ be the least index of such an exponent. Then we get a small circuit computing $c_{\vec{e}}(\vec{x}) \f_{i_0} F_{i_0}(\vec{x})^{e_{i_0}-1} F_{i_0 + 1}(\vec{x})^{e_{i_0 + 1}} \dotsb F_{m}(\vec{x})^{e_m}$ as follows. Divide $C'$ by $\f_{i_0}$, and then eliminate this division using Strassen \cite{strassenDivision} (or alternatively consider $\frac{1}{e_{i_0}} \frac{\partial C'}{\partial \f_{i_0}}$ using Baur--Strassen \cite{baurStrassen}). In the resulting circuit, replace each input $\f_i$ by a small circuit computing $F_i(\vec{x})$. Then multiply the resulting circuit by $\f_{i_0}$. Repeat this procedure for each monomial appearing (the list of monomials appearing in $C$ is one of the outputs of the sparse multivariate interpolation algorithm), and then add them all together.
\end{proof}
\subsection{Number of lines in Polynomial Calculus is equivalent to determinantal \texorpdfstring{\I}{\Itext}}
\label{sec:pitassi}
We begin by recalling Pitassi's 1996 and 1997 algebraic proof systems \cite{pitassi96, pitassiICM}. In the 1996 system, a proof of the unsatisfiability of $F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$ is a circuit computing a vector $(G_1(\vec{x}), \dotsc, G_m(\vec{x}))$ such that $\sum_i F_i(\vec{x}) G_i(\vec{x}) = 1$. Size is measured by the size of the corresponding circuit.
In the 1997 system, a proof is a rule-based derivation of $1$ starting from the $F_i$. Recall that rule-based algebraic derivations have the following two rules: 1) from $G$ and $H$, derive $\alpha G + \beta H$ for any field elements $\alpha, \beta \in \F$, and 2) from $G$, derive $Gx_i$ for any variable $x_i$. This is essentially the same as the Polynomial Calculus, but with size measured by the number of lines, rather than by the total number of monomials appearing.
\begin{proposition} \label{prop:pitassi}
Pitassi's 1996 algebraic proof system \cite{pitassi96} is p-equivalent to Hilbert-like \I.
Pitassi's 1997 algebraic proof system \cite{pitassiICM}---equivalent to the number-of-lines measure on PC proofs---is p-equivalent to Hilbert-like $\det$-\I or $\cc{VP}_{ws}$-\I.
\end{proposition}
\begin{proof}
Let $C$ be a proof in the 1996 system \cite{pitassi96}, namely a circuit computing $(G_1(\vec{x}), \dotsc, G_m(\vec{x}))$. Then with $m$ product gates and a single fan-in-$m$ addition gate, we get a circuit $C'$ computing the Hilbert-like \I certificate $\sum_{i=1}^{m} \f_i G_i(\vec{x})$.
Conversely, if $C'$ is a Hilbert-like \I-proof computing the certificate $\sum_i \f_i G_i'(\vec{x})$, then by Baur--Strassen \cite{baurStrassen} there is a circuit $C$ of size at most $O(|C'|)$ computing the vector $(\frac{\partial C'}{\partial \f_1}, \dotsc, \frac{\partial C'}{\partial \f_m}) = (G_1'(\vec{x}), \dotsc, G_m'(\vec{x}))$, which is exactly a proof in the 1996 system. (Alternatively, more simply, but at slightly more cost, we may create $m$ copies of $C'$, and in the $i$-th copy of $C'$ plug in $1$ for one of the $\f_i$ and $0$ for all of the others.)
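The substitution trick in the parenthetical remark can be sketched as follows; treating the certificate as a black-box function that is linear in the placeholder variables is our own framing.

```python
def extract_coeffs(C, m, x):
    """Recover (G_1(x), ..., G_m(x)) from a Hilbert-like certificate
    C(x, f) = sum_i f_i * G_i(x), by substituting unit vectors for f.
    This works because C has degree 1 in the placeholder variables."""
    def unit(i):
        return [1 if j == i else 0 for j in range(m)]
    return [C(x, unit(i)) for i in range(m)]

# toy certificate: C = f1*(x1 + x2) + f2*(x1*x2)
C = lambda x, f: f[0] * (x[0] + x[1]) + f[1] * (x[0] * x[1])
g = extract_coeffs(C, 2, [2, 3])   # values G1(2,3) and G2(2,3)
```

As a circuit transformation this multiplies the size by $m$, matching the ``slightly more cost'' noted above.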
The proof of the second statement takes a bit more work. At this point the reader may wish to recall the definition of weakly skew circuit from Appendix~\ref{app:background:complexity}.
Suppose we have a derivation of $1$ from $F_1(\vec{x}), \dotsc, F_m(\vec{x})$ in the 1997 system \cite{pitassiICM}. First, replace each $F_i(\vec{x})$ at the beginning of the derivation with the corresponding placeholder variable $\f_i$. Since size in the 1997 system is measured by number of lines in the proof, this has not changed the size. Furthermore, the final step no longer derives $1$, but rather derives an \I certificate. By structural induction on the two possible rules, one easily sees that this is in fact a Hilbert-like \I-certificate. Convert each linear combination step into a linear combination gate, and each ``multiply by $x_i$'' step into a product gate one of whose inputs is a new leaf with the variable $x_i$. As we create a new leaf for every application of the product rule, these new leaves are clearly cut off from the rest of the circuit by removing their connection to their product gate. As these are the only product gates introduced, we have a weakly-skew circuit computing a Hilbert-like \I certificate.
The converse takes a bit more work, so we first show that a Hilbert-like \emph{formula}-\I proof can be converted at polynomial cost into a proof in the 1997 system \cite{pitassiICM}, and then explain why the same proof works for $\cc{VP}_{ws}$-\I. This proof is based on a folklore result (see the remark after Definition~2.6 in Raz--Tzameret \cite{razTzameret}); we thank Iddo Tzameret for a conversation clarifying it, which led us to realize that the result also applies to weakly skew circuits.
Let $C$ be a formula computing a Hilbert-like \I-certificate $\sum_{i=1}^{m} \f_i G_i(\vec{x})$. Using the trick above of substituting in $\{0,1\}$-values for the $\f_i$ (one $1$ at a time), we find that each $G_i(\vec{x})$ can be computed by a formula $\Gamma_i$ no larger than $|C|$. For each $i$ we show how to derive $F_i(\vec{x}) G_i(\vec{x})$ in the 1997 system. These can then be combined using the linear combination rule. Thus for simplicity we drop the subscript $i$ and refer to $\f$, $F(\vec{x})$, $G(\vec{x})$, and the formula $\Gamma$ computing $G$. Without loss of generality (with a polynomial blow-up if needed) we can assume that all of $\Gamma$'s gates have fan-in at most $2$.
We proceed by induction on the size of the formula $\Gamma$. Our inductive hypothesis is: for all formulas $\Gamma'$ of size $|\Gamma'| < |\Gamma|$, for all polynomials $P(\vec{x})$, in the 1997 system one can derive $P(\vec{x}) \Gamma'(\vec{x})$ starting from $P(\vec{x})$, using at most $|\Gamma'|$ lines. The base case is $|\Gamma|=1$, in which case $G(\vec{x})$ is a single variable $x_i$, and from $P(\vec{x})$ we can compute $P(\vec{x}) x_i$ in a single step using the variable-product rule.
Suppose $\Gamma$ has a linear combination gate at the top, say $\Gamma = \alpha \Gamma_1 + \beta \Gamma_2$. By induction, from $P(\vec{x})$ we can derive $P(\vec{x}) \Gamma_i(\vec{x})$ in $|\Gamma_i|$ steps for $i=1,2$. Do those two derivations, then apply the linear combination rule to derive $\alpha P(\vec{x}) \Gamma_1(\vec{x}) + \beta P(\vec{x}) \Gamma_2(\vec{x}) = P(\vec{x}) \Gamma(\vec{x})$ in one additional step. The total length of this derivation is then $|\Gamma_1| + |\Gamma_2| + 1 = |\Gamma|$.
Suppose $\Gamma$ has a product gate at the top, say $\Gamma = \Gamma_1 \times \Gamma_2$. Unlike the case of linear combinations where we proceeded in parallel, here we proceed sequentially and use more of the strength of our inductive assumption. Starting from $P(\vec{x})$, we derive $P(\vec{x}) \Gamma_1(\vec{x})$ in $|\Gamma_1|$ steps. Now, starting from $P'(\vec{x}) = P(\vec{x}) \Gamma_1(\vec{x})$, we derive $P'(\vec{x}) \Gamma_2(\vec{x})$ in $|\Gamma_2|$ steps. But $P' \Gamma_2 = P \Gamma_1 \Gamma_2 = P \Gamma$, which we derived in $|\Gamma_1| + |\Gamma_2| \leq |\Gamma|$ steps. This completes the proof of this direction for Hilbert-like \emph{formula}-\I.
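The line-count bookkeeping in this induction can be made executable. The nested-tuple representation of formulas, and the trick of tracking each derived polynomial only by its value at a fixed evaluation point, are our own illustrative choices, not part of the text.

```python
def derive(P, gamma, point, lines):
    """Derive P * Gamma in the line-based (1997) system.

    Formulas: ('var', i) | ('lin', a, b, g1, g2) | ('mul', g1, g2).
    Each derived polynomial is represented by its value at `point`
    (a proxy for the polynomial); `lines` collects the proof lines.
    Returns the value of P * Gamma at `point`."""
    kind = gamma[0]
    if kind == 'var':                      # rule 2: multiply by x_i
        v = P * point[gamma[1]]
        lines.append(v)
        return v
    if kind == 'lin':                      # rule 1: linear combination
        _, a, b, g1, g2 = gamma
        v = a * derive(P, g1, point, lines) + b * derive(P, g2, point, lines)
        lines.append(v)
        return v
    _, g1, g2 = gamma                      # product gate: go sequentially
    return derive(derive(P, g1, point, lines), g2, point, lines)

def size(g):                               # number of gates in the formula
    if g[0] == 'var':
        return 1
    if g[0] == 'lin':
        return size(g[3]) + size(g[4]) + 1
    return size(g[1]) + size(g[2]) + 1

# Gamma = (2*x0 + 3*x1) * x2, evaluated at (x0, x1, x2) = (2, 3, 5)
gamma = ('mul', ('lin', 2, 3, ('var', 0), ('var', 1)), ('var', 2))
proof = []
val = derive(1, gamma, (2, 3, 5), proof)
```

On this toy formula the derivation uses four lines against a formula of five gates, illustrating the $\leq |\Gamma|$ bound: product gates contribute no line of their own, which is exactly where the inequality in the product case comes from.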
For Hilbert-like weakly-skew \I the proof is similar. However, because gates can now be reused, we must also allow lines in our constructed proof to be reused (otherwise we'd be effectively unrolling our weakly skew circuit into a formula, for which the best known upper bound is only quasi-polynomial). We still induct on the size of the weakly-skew circuit, but now we allow circuits with multiple outputs. We change the induction hypothesis to: for all weakly skew circuits $\Gamma'$ of size $|\Gamma'| < |\Gamma|$, possibly with multiple outputs that we denote $\Gamma'_{out,1}, \dotsc, \Gamma'_{out,s}$, from any $P(\vec{x})$ one can derive the tuple $P \Gamma'_{out,1}, \dotsc, P \Gamma'_{out,s}$ in the 1997 system using at most $|\Gamma'|$ lines.
To simplify matters, we assume that every multiplication gate in a weakly skew circuit has a label indicating which one of its children is separated from the rest of the circuit by this gate.
The base case is the same as before, since a circuit of size one can only have one output, a single variable.
Linear combinations are similar to before, except now we have a multi-output weakly skew circuit of some size, say $s$, that outputs $\Gamma_1$ and $\Gamma_2$. By the induction hypothesis, there is a derivation of size $\leq s$ that derives both $P \Gamma_1$ and $P \Gamma_2$. Then we apply one additional linear combination rule, as before.
For a product gate $\Gamma = \Gamma_1 \times \Gamma_2$, suppose without loss of generality that $\Gamma_2$ is the child that is isolated from the larger circuit by this product gate (recall that we've assumed $\Gamma$ comes with an indicator of which child this is). Then we proceed as before, first computing $P \Gamma_1$ from $P$, and then $(P \Gamma_1) \Gamma_2$ from $(P \Gamma_1)$. Because we apply ``multiplication by $\Gamma_1$'' and ``multiplication by $\Gamma_2$'' in sequence, it is crucial that the gates computing $\Gamma_2$ don't depend on those computing $\Gamma_1$, for the gates $g$ in $\Gamma_1$ get translated into lines computing $P g$, and if we reused \emph{that} in computing $\Gamma_2$, rather than getting $g$ as needed, we would be getting $Pg$.
\end{proof}
It is interesting to note that the condition of being weakly skew is precisely the condition we needed to make this proof go through.
\subsection{Depth-preserving simulation of Frege systems by the Ideal Proof System}
\newcommand{\text{depth}}{\text{depth}}
\begin{theorem} \label{thm:depth}
For any $d(n)$, depth-$(d+2)$ $\I_{\F_p}$ p-simulates depth-$d$ Frege proofs with unbounded fan-in $\lor, \neg, MOD_p$ connectives.
\end{theorem}
\begin{proof}
For simplicity we will present the proof for $p=2$.
The generalization to other values of $p$ is straightforward.
We will use a small modification of the formalization of $\cc{AC}^0[2]$-Frege as given by Maciel and Pitassi \cite{macielPitassi}.
The underlying connectives are: negation, unbounded fan-in OR, and unbounded fan-in XOR gates.
We will work in a sequent calculus style proof system, where lines are cedents of the form
$\Gamma \rightarrow \Delta$, where both $\Gamma$ and $\Delta$ are $\{\lor, \neg, MOD_p\}$-formulas, where each of $\neg \Gamma_i$ (for $\Gamma_i \in \Gamma$) and $\Delta_i \in \Delta$ has depth at most $d(n)$; the intended meaning is that the conjunction of the formulas in $\Gamma$ implies the disjunction of the formulas in $\Delta$. For notational convenience, we state the rest of the proof only for $\cc{AC}^0[2]$-Frege, but it will be clear that nothing in the proof depends on the depth being constant.
The axioms are as follows.
\begin{enumerate}
\item $A \rightarrow A$
\item (false implies nothing) $\lor () \rightarrow $
\item $\rightarrow \neg \parity ()$
\end{enumerate}
The rules of inference are as follows:
\medskip
\begin{tabular}{rccp{0cm}rcc}
Weakening & $\displaystyle \frac{\Gamma \rightarrow \Delta}{\Gamma \rightarrow \Delta,A}$ & $\displaystyle \frac{\Gamma \rightarrow \Delta}{A, \Gamma \rightarrow \Delta}$ & &
\\[0.25in]
Cut & \multicolumn{2}{c}{$\displaystyle \frac{\rightarrow A, \Gamma \qquad \rightarrow \neg A, \Gamma}{\rightarrow \Gamma}$} & &
Negation & $\displaystyle \frac{\Gamma, A \rightarrow \Delta}{\Gamma \rightarrow \neg A, \Delta}$ & $\displaystyle \frac{\Gamma \rightarrow A, \Delta}{\Gamma, \neg A \rightarrow \Delta}$ \\[0.25in]
Or-Left & \multicolumn{5}{c}{$\displaystyle \frac{A_1,\Gamma \rightarrow \Delta \qquad \lor(A_2,\dotsc,A_n),\Gamma \rightarrow \Delta}{\lor(A_1,\dotsc,A_n),\Gamma \rightarrow \Delta}$} \\[0.25in]
Or-Right & \multicolumn{5}{c}{$\displaystyle \frac{\Gamma \rightarrow A_1,\lor(A_2,\dotsc,A_n),\Delta}{\Gamma \rightarrow \lor(A_1,\dotsc,A_n),\Delta}$} \\[0.25in]
Parity-Left & \multicolumn{5}{c}{$\displaystyle \frac{A_1, \neg \parity(A_2,\dotsc,A_n),\Gamma \rightarrow \Delta \qquad \parity(A_2,\dotsc,A_n),\Gamma \rightarrow A_1, \Delta}{\parity(A_1,\dotsc,A_n),\Gamma \rightarrow \Delta}$} \\[0.25in]
Parity-Right & \multicolumn{5}{c}{$\displaystyle \frac{A_1,\Gamma \rightarrow \neg \parity(A_2,\dotsc,A_n),\Delta \qquad \Gamma \rightarrow A_1, \parity(A_2,\dotsc,A_n),\Delta}{\Gamma \rightarrow \parity(A_1,\dotsc,A_n),\Delta}$} \\[0.25in]
\end{tabular}
A refutation of a 3CNF formula $\varphi=\kappa_1 \land \kappa_2 \land \dotsb \land \kappa_m$ in $\cc{AC}^0[2]$-Frege
is a sequence of cedents, where each cedent is either one of the $\kappa_i$'s, or
an instance of an axiom scheme, or follows from two earlier cedents by
one of the above inference rules, and the final cedent is the empty cedent.
It is well known that any Frege refutation can be efficiently converted into a
tree-like proof.\footnote{By tree-like, we mean that the underlying directed acyclic graph structure
of the proof is a tree, and therefore every cedent, other than the final empty cedent, in the refutation is used
exactly once.}
We define a translation $t(A)$ from Boolean formulas to algebraic
circuits (over $\F_2$) such that
for any assignment $\alpha$, $A(\alpha)=1$ if and only if
$t(A)(\alpha)=0$.
The translation is defined inductively as follows.
\begin{enumerate}
\item $t(x)=1-x$ for $x$ atomic (a Boolean variable).
\item $t(\neg A) = 1-t(A)$
\item $t(\lor(A_1,\dotsc,A_n))=t(A_1)t(A_2)\dotsb t(A_n)$
\item $t(\parity(A_1,\dotsc,A_n)) = 1 - n + t(A_1) + t(A_2) + \dotsb + t(A_n)$ (recall this will be interpreted mod $2$; the constant $1-n$ ensures that, \eg, $t(\parity(A_1)) = t(A_1)$ and $t(\parity()) = 1$).
\end{enumerate}
Note that the depth of $t(A)$ as an algebraic formula is at most the depth of $A$ as a Boolean formula.
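The translation and its invariant can be checked mechanically. A minimal sketch over $\F_2$, with formulas as nested tuples (our own representation); the parity case is normalized so that $t(\parity(A_1)) = t(A_1)$, as the invariant requires.

```python
def t(A, alpha):
    """Evaluate the translation t(A) over F_2; the invariant is:
    A(alpha) = 1  iff  t(A)(alpha) = 0."""
    kind = A[0]
    if kind == 'var':
        return (1 - alpha[A[1]]) % 2
    if kind == 'not':
        return (1 - t(A[1], alpha)) % 2
    if kind == 'or':                    # OR translates to a product
        v = 1
        for B in A[1:]:
            v = v * t(B, alpha) % 2
        return v
    # parity: t = 1 - n + sum_i t(A_i)  (mod 2)
    n = len(A) - 1
    return (1 - n + sum(t(B, alpha) for B in A[1:])) % 2

def ev(A, alpha):                       # Boolean semantics, for checking
    kind = A[0]
    if kind == 'var':
        return alpha[A[1]]
    if kind == 'not':
        return 1 - ev(A[1], alpha)
    if kind == 'or':
        return max(ev(B, alpha) for B in A[1:])
    return sum(ev(B, alpha) for B in A[1:]) % 2   # parity = XOR

formula = ('parity', ('var', 0), ('or', ('var', 1), ('not', ('var', 0))))
```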
For a cedent $\Gamma \rightarrow \Delta$, we will translate the cedent by moving everything to the right of the arrow. That is, the cedent $L = A_1, \dotsc,A_n \rightarrow B_1,\dotsc,B_m$ will be translated to $t(L) = t(\neg A_1 \lor \dotsb \lor \neg A_n \lor B_1 \lor \dotsb \lor B_m) = (1-t(A_1))(1-t(A_2))\dotsb(1-t(A_n))t(B_1)\dotsb t(B_m)$.
Let $R$ be a tree-like $\cc{AC}^0[2]$-Frege refutation of $\varphi$. We will prove by induction on the number of cedents of $R$ that for each cedent $L$ in the refutation, we can derive $t(L)$ via a Hilbert-like IPS proof (see Definition~\ref{def:IPSideal}) of the form $\sum_i G_i \f_i$, where the $\f_i$'s are the placeholder variables for the initial polynomials (the sum may contain each $\f_i$ more than once), each $G_i$ is a depth $d$ formula, and the overall size is polynomial in the size of the original $\cc{AC}^0[2]$-Frege refutation. (NB: as will become clear below, in order to preserve the depth, we wait to gather like terms in the sum until the end of the proof.) The placeholder variables $\f_1, \dotsc, \f_m$ correspond to $t(\kappa_1), \dotsc, t(\kappa_m)$, and $\f_{m+1}, \dotsc, \f_{m+n}$ correspond to the Boolean axioms $x_1^2 - x_1, \dotsc, x_n^2 - x_n$.
For the base case, each initial cedent of the form $\rightarrow \kappa_i$ translates to $\f_i$, and thus has the right form.
The axiom $A \rightarrow A$ translates to $t(A)(1-t(A))$. A simple induction on the structure of $A$ shows that $t(A)(1-t(A))$ can be derived from the $x_i^2 - x_i$ by an \I derivation of depth at most the depth of $A$.
The other axioms translate to the identically zero polynomial, so again have the right form.
For the inductive step, it is a matter of going through all of the rules. We assume inductively that we have a list $L$ of circuits each of the form $G_i \f_i$, such that each $G_i$ has a product gate at its output, and $\sum_{L} G_i \f_i$ is a derivation of the antecedents of the rule (note that, as $L$ is a list, each $\f_i$ may appear more than once in this sum).
\begin{enumerate}
\item (Weakening) Assume $\sum G_i \f_i$ is a derivation of $t(\Gamma \rightarrow \Delta)$.
We want to obtain a derivation of $t(\Gamma \rightarrow \Delta, A)$.
Since we move everything to the right when we translate, this is
equivalent to showing that if $\sum G_i \f_i$ is a derivation of
$t(\rightarrow A_1,\dotsc,A_n) = t(A_1)t(A_2)\dotsb t(A_n)$,
that we can obtain a derivation of
$t(\rightarrow A_1,\dotsc,A_n,B) = t(A_1)t(A_2)\dotsb t(A_n)t(B)$.
Multiplying each $G_i \f_i$ by $t(B)$ achieves this. The resulting derivation
is equivalent to $\sum G_i' \f_i$, where the depth of $G_i'$ is $\max\{\text{depth}(G_i), \text{depth}(B)\}$ (we do not need to add $1$ to the depth because we've assumed that $G_i$ has a product gate at the top).
\item (Cut) We want to show that if $\sum G_i \f_i$ is a derivation
of $t(\rightarrow \neg A, B_1,\dotsc, B_n) = (1-t(A))t(B_1)\dotsb t(B_n)$ and
$\sum G_i' \f_i$ is a derivation of
$t(\rightarrow A, B_1,\dotsc, B_n) = t(A)t(B_1)\dotsb t(B_n)$, that
we can derive $t(\rightarrow B_1, \dotsc, B_n) = t(B_1)\dotsb t(B_n)$. Semantically, adding these two derivations gives what we want. In order to preserve the inductive assumption, we do \emph{not} gather terms, but rather concatenate the two lists $(G_i \f_i)$ and $(G_i'\f_i)$, so that each term still has a product gate at the top without increasing the depth.
\item (Negation) Because our translation moves everything to the right, the translated versions become syntactically identical, and there is nothing to do for the negation rules.
\item (Or-Left) We want to show that if $\sum G_i \f_i$ is a derivation of
$t( \rightarrow \neg A_1, \Delta)$, and $\sum G_i' \f_i$ is a derivation
of $t( \rightarrow \neg \lor(A_2,\dotsc,A_n), \Delta)$, then we
can derive $t(\rightarrow \neg \lor(A_1,\dotsc,A_n), \Delta)$.
We have $$\sum G_i \f_i = t(\rightarrow \neg A_1, \Delta) = (1-t(A_1))t(\Delta),$$
$$\sum G_i' \f_i = t(\rightarrow \neg \lor(A_2,\dotsc,A_n), \Delta) = (1-t(A_2)t(A_3)\dotsb t(A_n))t(\Delta).$$
Multiplying the second by $t(A_1)$ and ``adding'' to the first gives the desired derivation. Again, when we ``add'' we do not gather terms, but rather just concatenate lists, so that each $G_i$ has a product gate at the top.
\item (Or-Right) The translation of the derived formula is syntactically identical to the original formula, so there is nothing to do.
\item (Parity-Left) We want to show that if $\sum G_i \f_i$ is a derivation
of $t(\rightarrow \neg A_1, \parity(A_2,\dotsc,A_n), \Delta)$ and $\sum G_i' \f_i$ is
a derivation of
$t( \rightarrow A_1, \neg \parity(A_2,\dotsc,A_n), \Delta)$, then we can derive
$t(\rightarrow \neg \parity(A_1,\dotsc,A_n), \Delta)$.
We have
$$t(\rightarrow \neg A_1, \parity(A_2,\dotsc,A_n), \Delta) = (1-t(A_1))(n-1-t(A_2)-t(A_3)- \dotsb - t(A_n))t(\Delta),$$
$$t(\rightarrow A_1, \neg \parity(A_2, \dotsc, A_n), \Delta)=t(A_1)(1 - (n-1-t(A_2)-t(A_3)- \dotsb - t(A_n)))t(\Delta).$$
It is easily verified that subtracting the latter from the former yields $t(\rightarrow \neg \parity(A_1,\dotsc,A_n),\Delta)$. To perform ``subtraction'' while maintaining a product gate at the top, we multiply the latter by $-1$ and then concatenate the two lists.
\item (Parity-Right) This case is similar to Parity-Left.
\end{enumerate}
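Two of the algebraic identities used above can be checked mechanically. The Or-Left step rests on the ring identity $(1-a)q + a(1-p)q = (1-ap)q$, with $a = t(A_1)$, $p = t(A_2)\dotsb t(A_n)$ and $q = t(\Delta)$; for Parity-Left, since $n$ is interpreted mod $2$, the sign of the subtraction is immaterial, so it suffices that the subtraction agrees with the target modulo $2$. A quick Python check at random integer points (notation ours):

```python
# Check the Or-Left ring identity exactly, and the Parity-Left
# subtraction modulo 2 (signs are immaterial in characteristic 2).
import random

random.seed(1)
n = 5
for _ in range(200):
    a = random.randrange(-3, 4)        # t(A_1)
    d = random.randrange(-3, 4)        # t(Delta)
    # Or-Left: (1-a)d + a(1-p)d == (1 - a*p)d, with p = t(A_2)...t(A_n)
    p = random.randrange(-3, 4)
    assert (1 - a) * d + a * (1 - p) * d == (1 - a * p) * d
    # Parity-Left: former - latter == t(-> not parity(A_1..A_n), Delta) mod 2
    t = [a] + [random.randrange(-3, 4) for _ in range(n - 1)]
    P = (n - 1) - sum(t[1:])           # t(parity(A_2,...,A_n))
    former = (1 - a) * P * d
    latter = a * (1 - P) * d
    target = (1 - (n - sum(t))) * d
    assert (former - latter - target) % 2 == 0
```

Over the integers the difference between the subtraction and the target is $2(P - a)t(\Delta)$, which is why the check reduces to divisibility by $2$.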
In all cases, we can derive the bottom cedent as $\sum_i G_i \f_i$, where
each $G_i$ has constant depth (in fact, depth at most one greater than the depth
of the original proof), and the overall size is polynomial in the original
proof size. Since we've actually just been maintaining a list of terms $G_i \f_i$ in which the $\f_i$ may appear multiple times, the final step is to add these all together and gather terms, leading to our final derivation
of polynomial size, and depth at most $d+2$, where $d$ was the original depth.
\end{proof}
\subsection{Simulating \texorpdfstring{\I}{\Itext}-proofs with arbitrary constants in \texorpdfstring{$\cc{coAM}$}{coAM}}
The following proposition shows how we may conclude that $\cc{NP} \subseteq \cc{coAM}$ from the assumption of polynomial-size \I proofs for all tautologies, \emph{without} assuming the \I proofs are constant-free (but using the Generalized Riemann Hypothesis). We thank Pascal Koiran for the second half of the proof.
\begin{proposition} \label{prop:coMAGRH}
Assuming the Generalized Riemann Hypothesis, over any field $\F$ of characteristic zero, if every propositional tautology has a polynomial-size $\I_{\F}$-proof of polynomial degree, then $\cc{NP} \subseteq \cc{coAM}$.
\end{proposition}
We do not know how to improve this result from $\cc{coAM}$ to $\cc{coMA}$ (as in Proposition~\ref{prop:coMA}).
\begin{proof}[Proof (with P. Koiran)]
We reduce to the fact that deciding Hilbert's Nullstellensatz---that is, given a system of integer polynomials over $\Z$, deciding if they have a solution over $\C$---is in $\cc{AM}$ \cite{koiranNS}. Rather than looking at solvability of the original set of equations $F_1(\vec{x}) = \dotsb = F_m(\vec{x}) = 0$, we consider solvability of a set of equations whose solutions describe all of the polynomial-size \I-certificates for $F$. Namely, consider a \emph{generic} polynomial-size circuit, meaning a layered circuit of $\poly(n)$ depth and $\poly(n)$ width, with inputs $x_1, \dotsc, x_n,\f_1,\dotsc,\f_m$, and alternating layers of linear combination and product gates, where every edge $e$ terminating at any linear combination gate gets its own independent variable $z_{e}$. The output gate of this generic circuit computes a polynomial $C(\vec{x}, \vec{\f}, \vec{z})$, and for any setting of the $z_{e}$ variables to constants $\zeta_{e}$, we get a particular polynomial-size circuit computing a polynomial $C_{\vec{\zeta}}(\vec{x}, \vec{\f}) := C(\vec{x}, \vec{\f}, \vec{\zeta})$. Furthermore, any function computed by a polynomial-size circuit is equal to $C_{\vec{\zeta}}(\vec{x},\vec{\f})$ for some setting of $\vec{\zeta}$. In particular, if there is a polynomial-size \I proof $C'$ for $F$, then there is some tuple $\vec{\zeta}$ of field constants such that $C' = C_{\vec{\zeta}}(\vec{x}, \vec{\f})$.
We will translate the conditions that a circuit be an \I certificate into \emph{equations} on the new $z$ variables. Pick sufficiently many random values $\vec{\xi}^{(1)}, \vec{\xi}^{(2)}, \dotsc, \vec{\xi}^{(h)}$ to be substituted into $\vec{x}$; think of the $\vec{\xi}^{(i)}$ as a hitting set for the $x$-variables. Then we consider the solvability of the following set of $2h$ equations in $\vec{z}$:
\begin{eqnarray*}
\text{(For $i=1,\dotsc,h$)} & & C(\vec{\xi}^{(i)}, \vec{0}, \vec{z}) = 0 \\
\text{(For $i=1,\dotsc,h$)} & & C(\vec{\xi}^{(i)}, \vec{F}(\vec{\xi}^{(i)}), \vec{z}) = 1
\end{eqnarray*}
Determining whether a system of polynomial equations, given by circuits over a field $\F$ of characteristic zero, has a solution in the algebraic closure $\overline{\F}$ can be done in $\cc{AM}$ \cite{koiranNS}. If $\vec{\zeta}$ is such that $C_{\vec{\zeta}}(\vec{x}, \vec{\f})=C(\vec{x}, \vec{\f}, \vec{\zeta})$ is in fact an \I certificate, then the preceding equalities will be satisfied regardless of the choices of the $\vec{\xi}^{(i)}$. Otherwise, at least one monomial in $C(\vec{x}, \vec{0}, \vec{\zeta})$ or $C(\vec{x}, \vec{F}(\vec{x}),\vec{\zeta})-1$ will have a nonzero coefficient. Since all the monomials have polynomial degree, the usual DeMillo--Lipton--Schwartz--Zippel lemma implies that with high probability, a random point $\vec{\xi}$ will make any such nonzero monomial evaluate to a nonzero value. Choosing polynomially many points thus suffices. Composing Koiran's $\cc{AM}$ algorithm for the Nullstellensatz with the random guesses for the $\vec{\xi}^{(i)}$, and assuming that every family of propositional tautologies has $\cc{VP}$-\I certificates, we get an $\cc{AM}$ algorithm for TAUT.
\end{proof}
\section{Lower bounds on \texorpdfstring{\I}{\Itext} imply circuit lower bounds} \label{sec:VNP}
Here we complete the proof of the following theorem:
\begin{theorem} \label{thm:VNP}
A super-polynomial lower bound on [constant-free] Hilbert-like $\I_{R}$ proofs of any family of tautologies implies $\cc{VNP}_{R} \neq \cc{VP}_{R}$ [respectively, $\cc{VNP}^0_{R} \neq \cc{VP}^0_{R}$], for any ring $R$.
A super-polynomial lower bound on the number of lines in Polynomial Calculus proofs implies the Permanent versus Determinant Conjecture ($\cc{VNP} \neq \cc{VP}_{ws}$).
\end{theorem}
In Section~\ref{sec:VNPeabs} we proved this theorem assuming the following key lemma, which we now prove in full.
\begin{lemma} \label{lem:VNP}
Every family of CNF tautologies $(\varphi_n)$ has a Hilbert-like family of \I certificates $(C_n)$ in $\cc{VNP}^{0}_{R}$.
\end{lemma}
\begin{proof}
We mimic one of the proofs of completeness for Hilbert-like \I \cite[Theorem~1]{pitassi96} (recall Proposition~\ref{prop:pitassi}), and then show that this proof can in fact be carried out in $\cc{VNP}^{0}$. We omit any mention of the ground ring, as it will not be relevant.
Let $\varphi_n(\vec{x}) = \kappa_1(\vec{x}) \wedge \dotsb \wedge \kappa_m(\vec{x})$ be an unsatisfiable CNF, where each $\kappa_i$ is a disjunction of literals. Let $C_i(\vec{x})$ denote the (negated) polynomial translation of $\kappa_i$ via $\neg x \mapsto x$, $x \mapsto 1-x$ and $f \vee g \mapsto fg$; in particular, $C_i(\vec{x}) = 0$ if and only if $\kappa_i(\vec{x}) = 1$, and thus $\varphi_n$ is unsatisfiable if and only if the system of equations $C_1(\vec{x})=\dotsb=C_m(\vec{x})=x_1^2 - x_1 = \dotsb = x_n^2 - x_n = 0$ is unsatisfiable. In fact, as we'll see in the course of the proof, we won't need the equations $x_i^2 - x_i = 0$. It will be convenient to introduce the function $b(e,x)=ex + (1-e)(1-x)$, \ie, $b(1,x) = x$ and $b(0,x)=1-x$. For example, the clause $\kappa_i(\vec{x}) = (x_1 \vee \neg x_{17} \vee x_{42})$ gets translated into $C_i(\vec{x}) = (1-x_1)x_{17}(1-x_{42}) = b(0,x_1)b(1,x_{17})b(0,x_{42})$, and therefore an assignment falsifies $\kappa_i$ if and only if $(x_1,x_{17},x_{42}) \mapsto (0,1,0)$.
Just as $1 = x_1 x_2 + x_1(1-x_2) + (1-x_1)x_2 + (1-x_1)(1-x_2)$, an easy induction shows that
\begin{equation} \label{eqn:1}
1 = \sum_{\vec{e} \in \{0,1\}^{n}} \prod_{i=1}^{n}b(e_i,x_i).
\end{equation}
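Equation~(\ref{eqn:1}) says that the $2^n$ products $\prod_i b(e_i,x_i)$ form a partition of unity as polynomials, not merely as Boolean functions. A quick numerical Python check at random (non-Boolean) points:

```python
# Check equation (1): sum over all e in {0,1}^n of prod_i b(e_i, x_i) = 1,
# as a polynomial identity, by evaluating at random real points.
import random
from itertools import product
from math import prod

def b(e, x):
    return e * x + (1 - e) * (1 - x)

random.seed(0)
n = 4
for _ in range(20):
    x = [random.uniform(-3, 3) for _ in range(n)]
    total = sum(prod(b(e[i], x[i]) for i in range(n))
                for e in product([0, 1], repeat=n))
    assert abs(total - 1) < 1e-6
```

The identity follows by distributing the product $\prod_i (x_i + (1-x_i)) = 1$, which is what the inductive proof formalises.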
We will show how to turn this expression---which is already syntactically in $\cc{VNP}^{0}$ form---into a $\cc{VNP}$ certificate refuting $\varphi_n$. Let $c_i$ be the placeholder variable corresponding to $C_i(\vec{x})$.
The idea is to partition the assignments $\{0,1\}^{n}$ into $m$ parts $A_1,\dotsc,A_m$, where all assignments in the $i$-th part $A_i$ falsify clause $i$. This will then allow us to rewrite equation (\ref{eqn:1}) as
\begin{equation} \label{eqn:rewrite}
1 = \sum_{i=1}^{m} C_i(\vec{x})\left(\sum_{\vec{e} \in A_i} \prod_{j : x_j \notin \kappa_i} b(e_j,x_j)\right),
\end{equation}
where ``$x_j \notin \kappa_i$'' means that neither $x_j$ nor its negation appears in $\kappa_i$. Equation (\ref{eqn:rewrite}) then becomes the \I-certificate $\sum_{i=1}^{m} c_i \cdot \left(\sum_{\vec{e} \in A_i} \prod_{j : x_j \notin \kappa_i} b(e_j,x_j)\right)$. What remains is to show that the sum can indeed be rewritten this way, and that there is some partition $(A_1,\dotsc, A_m)$ as above such that the resulting certificate is in fact in $\cc{VNP}$.
First, let us see why such a partition allows us to rewrite (\ref{eqn:1}) as (\ref{eqn:rewrite}). The key fact here is that the clause polynomial $C_i(\vec{x})$ divides the term $t_{\vec{e}}(\vec{x}) := \prod_{k=1}^{n} b(e_k, x_k)$ if and only if $C_i(\vec{e}) = 1$, if and only if $\vec{e}$ \emph{falsifies} $\kappa_i$. Let $C_i(\vec{x})=\prod_{k \in I} b(f_k,x_k)$, where $I \subseteq [n]$ is the set of indices of the variables appearing in clause $i$. By the properties of $b$ discussed above, $1=C_i(\vec{e})=\prod_{k \in I} b(f_k, e_k)$ if and only if $b(f_k,e_k)=1$ for all $k \in I$, if and only if $f_k=e_k$ for all $k \in I$. In other words, if $1=C_i(\vec{e})$ then $C_i = \prod_{k \in I} b(e_k, x_k)$, which clearly divides $t_{\vec{e}}$. Conversely, suppose $C_i(\vec{x})$ divides $t_{\vec{e}}(\vec{x})$. Since $t_{\vec{e}}(\vec{e})=1$ and every factor of $t_{\vec{e}}$ only takes on Boolean values on Boolean inputs, it follows that every factor of $t_{\vec{e}}$ evaluates to $1$ at $\vec{e}$, in particular $C_i(\vec{e})=1$.
Let $A_1, \dotsc, A_m$ be a partition of $\{0,1\}^n$ such that every assignment in $A_i$ falsifies $\kappa_i$. Since $C_i$ divides every term $t_{\vec{e}}$ such that $\vec{e}$ falsifies clause $i$, $C_i$ divides every term $t_{\vec{e}}$ with $\vec{e} \in A_i$, and thus we can indeed rewrite (\ref{eqn:1}) as (\ref{eqn:rewrite}).
Next, we show how to construct a partition $A_1, \dotsc, A_m$ as above so that the resulting certificate is in $\cc{VNP}$. The partition we'll use is a greedy one. $A_1$ will consist of \emph{all} assignments that falsify $\kappa_1$. $A_2$ will consist of all \emph{remaining} assignments that falsify $\kappa_2$. And so on. In particular, $A_i$ consists of all assignments that falsify $\kappa_i$ and \emph{satisfy} all $\kappa_j$ with $j < i$. (If at some clause $\kappa_i$ before we reach the end, we have used up all the assignments---which happens if and only if the first $i$ clauses on their own are unsatisfiable---that's okay: nothing we've done so far nor anything we do below assumes that all $A_i$ are nonempty.)
Equivalently, $A_i = \{\vec{e} \in \{0,1\}^n | C_i(\vec{e})=1 \text{ and } C_j(\vec{e})=0 \text{ for all } j < i\}$. For any property $\Pi$, we write $\llbracket \Pi(\vec{e}) \rrbracket$ for the indicator function of $\Pi$: $\llbracket \Pi(\vec{e}) \rrbracket=1$ if and only if $\Pi(\vec{e})$ holds, and $0$ otherwise. We thus get the certificate:
\begin{eqnarray*}
& & \sum_{i=1}^{m} c_i \cdot \left(\sum_{\vec{e} \in \{0,1\}^n} \llbracket \vec{e} \text{ falsifies } \kappa_i \text{ and satisfies $\kappa_j$ for all } j < i \rrbracket \prod_{j : x_j \notin \kappa_i} b(e_j, x_j) \right) \\
& = & \sum_{i=1}^{m} c_i \cdot \left(\sum_{\vec{e} \in \{0,1\}^n} \llbracket C_i(\vec{e})=1 \text{ and } C_j(\vec{e})=0 \text{ for all } j < i \rrbracket \prod_{j : x_j \notin \kappa_i} b(e_j, x_j) \right) \\
& = & \sum_{i=1}^{m} c_i \cdot \left(\sum_{\vec{e} \in \{0,1\}^n} \left(C_i(\vec{e}) \prod_{j < i}(1-C_j(\vec{e})) \right) \prod_{j : x_j \notin \kappa_i} b(e_j, x_j) \right) \\
& = & \sum_{\vec{e} \in \{0,1\}^{n}} \sum_{i=1}^{m} c_i C_i(\vec{e})\left(\prod_{j < i}(1-C_j(\vec{e}))\right)\left(\prod_{j : x_j \notin \kappa_i} b(e_j, x_j)\right)
\end{eqnarray*}
Finally, it is readily visible that the polynomial function of $\vec{c}$, $\vec{e}$, and $\vec{x}$ that is the summand of the outermost sum $\sum_{\vec{e} \in \{0,1\}^{n}}$ is computed by a polynomial-size circuit of polynomial degree, and thus the entire certificate is in $\cc{VNP}$. Indeed, the expression as written exhibits it as a small \emph{formula} of constant depth with unbounded fan-in gates. By inspection, this circuit only uses the constants $0,1,-1$, hence the certificate is in $\cc{VNP}^{0}$.
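As an illustration (not part of the formal proof), the certificate can be assembled and tested for a tiny unsatisfiable CNF. The Python sketch below uses our own clause encoding; it substitutes $c_i := C_i(\vec{x})$ into the certificate built from the greedy partition and checks that the result is the constant polynomial $1$ by evaluating at random points:

```python
# Build the VNP certificate for the unsatisfiable CNF
# (x0 OR x1) AND (NOT x0) AND (NOT x1), and check it sums to 1.
import random
from itertools import product
from math import prod

def b(e, x):
    return e * x + (1 - e) * (1 - x)

clauses = [{0: 1, 1: 1}, {0: 0}, {1: 0}]   # var index -> sign (1 = positive)
n = 2

def C(i, x):   # clause polynomial: 1 on Boolean points iff x falsifies clause i
    return prod(b(1 - s, x[j]) for j, s in clauses[i].items())

def certificate(x):
    total = 0.0
    for e in product([0, 1], repeat=n):
        for i in range(len(clauses)):
            sel = C(i, e) * prod(1 - C(j, e) for j in range(i))
            if sel:   # e lands in the greedy part A_i
                total += sel * C(i, x) * prod(
                    b(e[j], x[j]) for j in range(n) if j not in clauses[i])
    return total

random.seed(0)
for _ in range(20):
    x = [random.uniform(-2, 2) for _ in range(n)]
    assert abs(certificate(x) - 1) < 1e-9
```

Here the greedy partition is $A_1 = \{(0,0)\}$, $A_2 = \{(1,0),(1,1)\}$, $A_3 = \{(0,1)\}$, and the four surviving terms sum to $(1-x_0)(1-x_1) + x_0(1-x_1) + x_0 x_1 + (1-x_0)x_1 = 1$.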
\end{proof}
\section{Introduction}
The gate-based model and the measurement-based model are two fundamentally different approaches to implementing quantum computations.
In the gate-based model \cite{NielsenChuang}, the bulk of the computation is performed via unitary (i.e.\ reversible) one- and two-qubit gates.
Measurements serve mainly to read out data and may be postponed to the end of the computation.
Conversely, in the measurement-based model, the bulk of the computation is performed via measurements on some general resource state, which is independent of the specific computation.
We focus here on the one-way model~\cite{MBQC1}, where the resource states are \emph{graph states} (see Section~\ref{sec:MBQC}).
In this paper, we study ways of converting between these two different approaches to quantum computation with a view towards optimizing the implementation of both.
While computations in the gate-based model are represented as quantum circuits,
computations in the one-way model are usually represented by \emph{measurement patterns}, which describe both the graph state and the measurements performed on it~\cite{Patterns,danos_kashefi_panangaden_perdrix_2009}.
Measurement patterns in the one-way model generally do not allow arbitrary single-qubit measurements.
Instead, measurements are often restricted to the `planes' of the Bloch sphere that are orthogonal to the principal axes, labelled the \normalfont XY\xspace-, \normalfont XZ\xspace-, and \normalfont YZ\xspace-planes.
In fact, most research has focused on measurements in just the \normalfont XY\xspace-plane, which alone are sufficient for universal quantum computation~\cite{Patterns}.
Similarly, measurements in the \normalfont XZ\xspace-plane are also universal~\cite{mhalla2012graph}, although this model has been explored less in the literature.
In this paper, we will consider measurements in all three of the planes, since this usually leads to patterns involving fewer qubits, and allows for more non-trivial transformations of the graph state.
Due to the non-deterministic nature of quantum measurements, a one-way computation needs to be adaptive, with later measurement angles depending on the outcomes of earlier measurements~\cite{MBQC1}.
While the ability to correct undesired measurement outcomes is necessary for obtaining a deterministic computation, not all sequences of measurements support such corrections.
The ability to perform a deterministic computation depends on the underlying graph state and the choice of measurement planes, which together form an object called a labelled open graph.
If all measurements are in the \normalfont XY\xspace-plane, \emph{causal flow} (sometimes simply called `flow') is a sufficient condition for the labelled open graph\ to support deterministically implementable patterns~\cite{Danos2006Determinism-in-}.
Yet causal flow is not necessary for determinism.
Instead, the condition of \emph{generalized flow} (or \emph{gflow}) \cite{GFlow} is both sufficient and necessary for deterministic implementability.
Gflow can be defined for labelled open graph{}s containing measurements in all three planes, in which case it is sometimes called \emph{extended gflow}~\cite[Theorems~2 \&~3]{GFlow}.
A given representation of some computation can be transformed into a different representation of the same computation using local rewriting.
The new representation may be chosen to have more desirable properties.
For quantum circuits, such desirable properties include low depth~\cite{amy2013meet}, small total gate count~\cite{CliffOpt}, or small counts of some particular type of gate, such as the T-gate~\cite{amy2014polynomial}.
For measurement patterns, desirable properties include a small number of qubits~\cite{eslamy2018optimization,houshmand2018minimal} or a particularly simple underlying graph state~\cite{mhalla2012graph}.
Local processes can also be used to translate a pattern into a circuit.
This is used, for example, to verify that the pattern represents the desired operation~\cite{Danos2006Determinism-in-,beaudrap2010unitary,duncan2010rewriting,daSilva2013compact,miyazaki2015analysis}.
Conversely, a translation of a circuit into a pattern can be used to implement known algorithms in the one-way model, or it can be combined with a translation back to a circuit to trade depth against width, to parallelise Clifford operations, or to reduce the number of T gates~\cite{broadbent_2009_parallelizing,daSilva2013global,houshmand2017quantum}.
No complete set of rewrite rules is known for quantum circuits or for measurement patterns, although a completeness result does exist for 2-qubit circuits over the Clifford+T gate set \cite{Bian2Qubit}.
Rewriting of patterns or circuits, as well as translations between the two models, can be performed using the \zxcalculus, a graphical language for quantum computation \cite{CD2}.
This language is more flexible than quantum circuit notation and also has multiple complete sets of graphical rewrite rules \cite{SimonCompleteness,HarnyAmarCompleteness,JPV-universal,euler-zx}.
While translating a measurement pattern to a quantum circuit can be difficult, the translation between patterns and \zxdiagrams is straightforward \cite{duncan2010rewriting,cliff-simp,kissinger2017MBQC}.
\subsection{Our contributions}
In this paper, we give an algorithm that extracts a quantum circuit from any measurement pattern whose underlying labelled open graph\ has extended gflow.
Our algorithm does not use ancillae.
This is the first circuit extraction algorithm for extended gflow, i.e.\ where patterns may contain measurements in more than one plane.
The algorithm works by translating the pattern into the \zxcalculus and transforming the resulting \zxdiagram into a circuit-like form.
It generalises a similar algorithm, which works only for patterns where all measurements are in the \normalfont XY\xspace-plane \cite{cliff-simp}.
The circuit extraction algorithm employs the \zxcalculus, so it can be used not only on diagrams arising from measurement patterns but on any \zxdiagram satisfying certain properties.
Thus, this procedure is not only the most general circuit extraction algorithm for measurement patterns but also the most general known circuit extraction algorithm for \zxcalculus diagrams.
In developing the circuit extraction algorithm, we derive a number of explicit rewrite rules for \zxdiagrams representing measurement patterns, in particular rewrites that involve graph transformations on the underlying resource state.
We show how the gflow changes for each of these rewrite rules, i.e.\ how the rewrites affect the instructions for correcting undesired measurement outcomes.
These rewrite rules unify and formalise several rules that were previously employed in the literature in a more ad-hoc manner, e.g.\ the pivot-minor transformation in Ref.~\cite{mhalla2012graph} or the elimination of Clifford measurements first derived in a different context in Ref.~\cite{hein2004multiparty}.
The rewrite rules serve not only to prove the correctness of the algorithm, but also to simplify the measurement patterns by reducing the number of qubits involved.
Combining the different rules allows us to remove any qubit measured in a Pauli basis, while maintaining deterministic implementability.
This shows that the number of qubits needed to perform a measurement-based computation is directly related to the number of non-Clifford operations required for the computation.
We also generalise several concepts originally developed for patterns containing only \normalfont XY\xspace-plane measurements to patterns with measurements in multiple planes.
In particular, we adapt the definitions of \emph{focused gflow}~\cite{mhalla2011graph} and \emph{maximally delayed gflow}~\cite{MP08-icalp} to the extended gflow case.
Our generalisation of focused gflow differs from the three generalisations suggested by Hamrit and Perdrix~\cite{hamrit2015reversibility}; indeed, the desired applications naturally lead us to one unique generalisation (see Section~\ref{sec:focusing-extended-gflow}).
Combined with the known procedure for transforming a quantum circuit into a measurement pattern using the \zxcalculus \cite{cliff-simp}, our pattern simplification and circuit extraction procedure can be used to reduce the T-gate count of quantum circuits.
This involves translating the circuit into a \zxcalculus diagram, transforming to a diagram which corresponds to a measurement pattern, simplifying the pattern, and then re-extracting a circuit.
A rough overview of the different translation and optimisation procedures is given in Figure~\ref{figRoughTranslationsAndOptimisations}.
\begin{figure}
\ctikzfig{translationsOverview}
\caption{A rough overview over the translation procedures between the three paradigms, indicating where the optimisation steps are carried out.
\label{figRoughTranslationsAndOptimisations}}
\end{figure}
The remainder of this paper is structured as follows.
Known definitions and results relating to \zxcalculus, measurement patterns and gflow are presented in Section~\ref{sec:preliminaries}.
We derive the rewrite rules for extended measurement patterns and the corresponding gflow transformations in Section~\ref{sec:rewriting}. These results are used in Section~\ref{sec:simplifying} to simplify measurement patterns in various ways. Then in Section~\ref{sec:circuitextract}, we demonstrate the algorithm for extracting a circuit from a measurement pattern whose underlying labelled open graph\ has extended gflow.
The conclusions are given in Section~\ref{sec:conclusion}.
\section{Preliminaries} \label{sec:preliminaries}
We give a quick overview over the \zxcalculus in Section~\ref{sec:zx}
and introduce the one-way model of measurement-based quantum computing in Section~\ref{sec:MBQC}.
The graph-theoretic operations of local complementation and pivoting
(on which the rewrite rules for measurement patterns are based)
and their representation in the \zxcalculus are presented in Section~\ref{sec:lc}.
Section~\ref{sec:gflow} contains the definitions and properties of different notions of flow.
\subsection{The ZX-calculus}
\label{sec:zx}
The \zxcalculus is a diagrammatic language similar to the widely-used
quantum circuit notation. We will provide only a brief overview here,
for an in-depth reference see~\cite{CKbook}.
A \emph{\zxdiagram} consists of \emph{wires} and \emph{spiders}.
Wires entering the diagram from the left are called \emph{input wires}; wires exiting to the right are called \emph{output wires}.
Given two diagrams we can compose them horizontally (denoted $\circ$)
by joining the output wires of the first to the input wires of the second, or
form their tensor product (denoted $\otimes$) by simply stacking the two diagrams vertically.
Spiders are linear maps which can have any number of input or output
wires. There are two varieties: $Z$ spiders, depicted as green dots, and $X$ spiders, depicted as red dots.\footnote{If you are reading this
document in monochrome or otherwise have difficulty distinguishing green and red, $Z$ spiders will appear lightly-shaded and $X$ spiders will appear darkly-shaded.}
Written explicitly in Dirac `bra-ket' notation, these linear maps are:
\[
\small
\hfill \tikzfig{Zsp-a} \ := \ \ketbra{0...0}{0...0} +
e^{i \alpha} \ketbra{1...1}{1...1} \hfill
\qquad
\hfill \tikzfig{Xsp-a} \ := \ \ketbra{+...+}{+...+} +
e^{i \alpha} \ketbra{-...-}{-...-} \hfill
\]
A \zxdiagram with $m$ input wires and $n$ output wires then represents a linear map $(\mathbb C^2)^{\otimes m} \to (\mathbb C^2)^{\otimes n}$ built from
spiders (and permutations of qubits) by composition and tensor product
of linear maps. As a special case, diagrams with no inputs and $n$ outputs represent vectors in $(\mathbb C^2)^{\otimes n}$, i.e.
(unnormalised) $n$-qubit states.
\begin{example}\label{ex:basic-maps-and-states}
We can immediately write down some simple state preparations and
unitaries in the \zxcalculus:
\[
\begin{array}{rclcrcl}
\tikzfig{ket-+} & = & \ket{0} + \ket{1} \ \propto \ket{+} &
\qquad &
\tikzfig{ket-0} & = & \ket{+} + \ket{-} \ \propto \ket{0} \\[1em]
\tikzfig{Z-a} & = & \ketbra{0}{0} + e^{i \alpha} \ketbra{1}{1} =
Z_\alpha &
&
\tikzfig{X-a} & = & \ketbra{+}{+} + e^{i \alpha} \ketbra{-}{-} = X_\alpha
\end{array}
\]
In particular we have the Pauli matrices:
\[
\hfill
\tikzfig{Z} = Z \qquad\qquad \tikzfig{X} = X \qquad\qquad
\hfill
\]
\end{example}
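The bra-ket definitions of the spiders can be checked directly. A quick Python computation (plain-list matrices and helper functions of our own) confirms, for instance, that the phase-$\pi$ one-input, one-output $Z$ and $X$ spiders are exactly the Pauli $Z$ and $X$ matrices:

```python
# Verify the 1-in, 1-out Z and X spider definitions at phase pi.
import cmath

def outer(u, v):  # |u><v| for real 2-vectors
    return [[a * c for c in v] for a in u]

def madd(A, B, c=1):  # A + c*B entrywise, 2x2
    return [[A[i][j] + c * B[i][j] for j in range(2)] for i in range(2)]

s = 2 ** -0.5
ket0, ket1 = [1, 0], [0, 1]
ketp, ketm = [s, s], [s, -s]

def z_spider(alpha):  # |0><0| + e^{i alpha} |1><1|
    return madd(outer(ket0, ket0), outer(ket1, ket1), cmath.exp(1j * alpha))

def x_spider(alpha):  # |+><+| + e^{i alpha} |-><-|
    return madd(outer(ketp, ketp), outer(ketm, ketm), cmath.exp(1j * alpha))

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-9 for i in range(2) for j in range(2))

assert close(z_spider(cmath.pi), [[1, 0], [0, -1]])   # Pauli Z
assert close(x_spider(cmath.pi), [[0, 1], [1, 0]])    # Pauli X
```

At phase $0$ both spiders reduce to the identity on one qubit, matching the rule that a phaseless degree-2 spider is just a wire.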
It will be convenient to introduce a symbol -- a yellow square -- for
the Hadamard gate. This is defined by the equation:
\begin{equation}\label{eq:Hdef}
\hfill
\tikzfig{had-alt}
\hfill
\end{equation}
We will often use an alternative notation to simplify the diagrams,
and replace a Hadamard between two spiders by a blue dashed edge, as
illustrated below.
\ctikzfig{blue-edge-def}
Both the blue edge notation and the Hadamard box can always be
translated back into spiders when necessary. We will refer to the blue
edge as a \emph{Hadamard edge}.
\begin{definition}\label{def:interpretation}
The \emph{interpretation} of a \zxdiagram $D$ is the linear map that such a diagram represents
and is written $\intf{D}$.
For a full treatment of the interpretation of a ZX diagram see, e.g.\ Ref.~\cite{SimonCompleteness}.
We say two \zxdiagrams $D_1$ and $D_2$ are \emph{equivalent} when $\intf{D_1}=z\intf{D_2}$ for some non-zero complex number $z$.
\end{definition}
We define equivalence up to a global scalar, as those scalars will not be important for the class of diagrams we study in this paper.
\begin{remark}
There are many different sets of rules for the \zxcalculus. The version we present only preserves equality up to a global scalar. Versions of the \zxcalculus where equality is `on the nose' can be found in Ref.~\cite{Backens:2015aa} for the stabiliser fragment, in Ref.~\cite{SimonCompleteness} for the Clifford+T fragment, and in Ref.~\cite{JPV-universal,ng2017universal} for the full language.
\end{remark}
The interpretation of a \zxdiagram is invariant under
moving the vertices around in the plane, bending,
unbending, crossing, and uncrossing wires, so long as the connectivity
and the order of the inputs and outputs is maintained.
Furthermore, there is an additional set of equations that we call the \emph{rules} of the \zxcalculus; these are shown in
Figure~\ref{fig:zx-rules}. Two diagrams are equivalent if one can be transformed into the other using the rules of the \zxcalculus.
\begin{figure}[h]
\ctikzfig{ZX-rules}
\vspace{-3mm}
\caption{\label{fig:zx-rules} A convenient presentation for the ZX-calculus. These rules hold
for all $\alpha, \beta \in [0, 2 \pi)$, and due to $(\bm{h})$ and
$(\bm{i2})$ all rules also hold with the colours interchanged. Note the ellipsis should be read as `0 or more', hence the spiders on the LHS of \SpiderRule are connected by one or more wires.}
\end{figure}
\begin{remark}\label{rem:completeness}
The \zxcalculus is \emph{universal} in the sense that any linear map can be expressed as a \zxdiagram. Furthermore, when restricted to \textit{Clifford \zxdiagrams}, i.e. diagrams whose phases are all multiples of $\pi/2$, the version we present in Figure~\ref{fig:zx-rules} is \emph{complete}. That is, for any two Clifford \zxdiagrams that describe the same linear map, there exists a series of rewrites transforming one into the other. Recent extensions to the calculus have been introduced which are complete for the larger \textit{Clifford+T} family of \zxdiagrams \cite{SimonCompleteness}, where phases are multiples of $\pi/4$, and for \textit{all} \zxdiagrams~\cite{HarnyAmarCompleteness,JPV-universal,euler-zx}.
\end{remark}
Quantum circuits can be translated into \zxdiagrams in a straightforward manner.
We will take as our starting point circuits constructed
from the following universal set of gates:
\[
\CX
\qquad\qquad
Z_\alpha
\qquad\qquad
H
\]
We choose this gate
set because it admits a convenient representation in terms of
spiders:
\begin{align}\label{eq:zx-gates}
\CX & = \tikzfig{cnot} &
Z_\alpha & = \tikzfig{Z-a} &
H & = \tikzfig{h-alone}
\end{align}
These gates have the following interpretations:
\begin{align*}
\intf{\tikzfig{cnot}} &=
\left(\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{matrix}\right)
\qquad
\intf{\tikzfig{Z-a}} &=
\left(\begin{matrix}
1 & 0 \\
0 & e^{i \alpha}
\end{matrix}\right)
\qquad
\intf{\tikzfig{h-alone}} &= \frac{1}{\sqrt{2}}
\left(\begin{matrix}
1 & 1 \\
1 & -1
\end{matrix}\right)
\end{align*}
For the \CX gate, the green spider is the first (i.e. control) qubit and the red spider is the second (i.e. target) qubit. Other common gates can easily be expressed in terms of these gates. In particular, $S := Z_{\frac\pi2}$, $T := Z_{\frac\pi4}$ and:
\begin{align}\label{eq:zx-derived-gates}
X_\alpha & = \tikzfig{X-a-expanded} &
\ensuremath{\textrm{CZ}}\xspace & = \tikzfig{cz-small}
\end{align}
\begin{remark}
Note that the directions of the wires in the depictions of the \CX and \ensuremath{\textrm{CZ}}\xspace gates are irrelevant, hence we can draw them vertically without ambiguity. Undirected wires are a general feature of \zxdiagrams, and henceforth we will ignore wire directions without further comment. We will also freely draw inputs/outputs entering or exiting the diagram from arbitrary directions if the meaning is clear from context (as for example in Example~\ref{ex:gflow-in-action}).
\end{remark}
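For readers who wish to check these interpretations concretely, the following Python sketch (illustrative only, not part of the formal development) verifies the matrices above together with the decompositions in equation~\eqref{eq:zx-derived-gates}, assuming the standard expansions $X_\alpha = H Z_\alpha H$ and $\textrm{CZ} = (\mathbb{1} \otimes H)\,\CX\,(\mathbb{1} \otimes H)$.

```python
import numpy as np

# Interpretations of the generating gates (CX, Z_alpha, H) as matrices.
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

def Z(alpha):
    """Phase gate Z_alpha = diag(1, e^{i alpha})."""
    return np.array([[1, 0], [0, np.exp(1j * alpha)]], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

def X(alpha):
    """Derived gate X_alpha = H Z_alpha H (red spider conjugated by Hadamards)."""
    return H @ Z(alpha) @ H

# CZ as CX conjugated by a Hadamard on the target qubit.
CZ = np.kron(I2, H) @ CX @ np.kron(I2, H)

# Sanity checks: S^2 = Z_pi, T^2 = S, CZ is the usual diagonal gate,
# and X_pi is the Pauli-X (NOT) gate.
S, T = Z(np.pi / 2), Z(np.pi / 4)
assert np.allclose(S @ S, Z(np.pi))
assert np.allclose(T @ T, S)
assert np.allclose(CZ, np.diag([1, 1, 1, -1]))
assert np.allclose(X(np.pi), np.array([[0, 1], [1, 0]]))
```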
\noindent For our purposes, we will define quantum circuits as a special case of \zxdiagrams.
\begin{definition}\label{def:circuit}
A \emph{circuit} is a \zxdiagram generated by composition and tensor products of the \zxdiagrams in equations~\eqref{eq:zx-gates} and~\eqref{eq:zx-derived-gates}.
The interpretation of such a circuit is given by the composition and tensor products of the interpretations of the generating diagrams given
in equation~\eqref{eq:zx-gates}, in accordance with:
\begin{align*}
\intf{D \otimes D'} = \intf{D} \otimes \intf{D'} \qquad \intf{D \circ D'} = \intf{D} \circ \intf{D'}
\end{align*}
\end{definition}
An important subclass of circuits is that of \textit{Clifford circuits}, sometimes called stabilizer circuits, which are obtained from compositions of only \CX, $H$, and $S$ gates. They are efficiently classically simulable, thanks to the \textit{Gottesman-Knill theorem}~\cite{aaronsongottesman2004}. A unitary is \textit{local Clifford} if it arises from a single-qubit Clifford circuit, i.e.\ a composition of $H$ and $S$ gates.
The addition of $T$ gates to Clifford circuits yields \textit{Clifford+T circuits}, which are capable of approximating any $n$-qubit unitary to arbitrary precision, whereas the inclusion of $Z_\alpha$ gates for all $\alpha$ enables one to construct any unitary exactly~\cite{NielsenChuang}.
\subsection{Measurement-based quantum computation}
\label{sec:MBQC}
Measurement-based quantum computing (MBQC) is a universal model for quantum computation, underlying the \emph{one-way quantum computing} scheme \cite{MBQC2}.
The basic resource for MBQC is a \emph{graph state}, built from a register of qubits by applying $CZ$-gates pairwise to obtain an entangled quantum state.\footnote{There are models of MBQC where the basic resource is not a graph state, but we do not consider those models in this paper.}
The graph state is then gradually consumed by measuring single qubits.
By choosing an appropriate resource state and appropriate measurements, any quantum computation can be performed.
The difficulty is the non-determinism inherent in quantum measurements, which means computations need to be adaptive to implement deterministic operations.
Measurement-based quantum computations are usually formalised in terms of \emph{measurement patterns}, a syntax describing both the construction of graph states and their processing via successive measurements.
In the following, we first present measurement patterns, and then introduce other
-- and, in our opinion, simpler --
formalisms for reasoning about these computations.
Instead of allowing arbitrary single-qubit measurements, measurements are usually restricted to three planes of the Bloch sphere, labelled XY, XZ, and YZ.
For each measurement, the state denoted `$+$' is taken to be the desired result of the measurement and the state denoted `$-$' is the undesired result, which will need to be adaptively corrected.
The allowed measurements are (\cite[p.~292]{danos_kashefi_panangaden_perdrix_2009}):
\begin{align*}
\ket{+_{\ensuremath\normalfont\textrm{XY}\xspace,\alpha}} &= \frac{1}{\sqrt{2}}\left(\ket{0} + e^{i\alpha} \ket{1} \right) &
\ket{-_{\ensuremath\normalfont\textrm{XY}\xspace,\alpha}} &= \frac{1}{\sqrt{2}}\left(\ket{0} - e^{i\alpha} \ket{1} \right) \\
\ket{+_{\normalfont\normalfont\textrm{XZ}\xspace,\alpha}} &= \cos\left(\frac{\alpha}{2}\right)\ket{0} + \sin\left(\frac{\alpha}{2}\right) \ket{1} &
\ket{-_{\normalfont\normalfont\textrm{XZ}\xspace,\alpha}} &= \sin\left(\frac{\alpha}{2}\right)\ket{0} - \cos\left(\frac{\alpha}{2}\right) \ket{1} \\
\ket{+_{\normalfont\normalfont\textrm{YZ}\xspace,\alpha}} &= \cos\left(\frac{\alpha}{2}\right)\ket{0} + i \sin\left(\frac{\alpha}{2}\right) \ket{1} &
\ket{-_{\normalfont\normalfont\textrm{YZ}\xspace,\alpha}} &= \sin\left(\frac{\alpha}{2}\right)\ket{0} - i \cos\left(\frac{\alpha}{2}\right) \ket{1}
\end{align*}
\noindent where $0 \leq \alpha \leq 2\pi$.
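The orthonormality of each of these measurement bases can be verified mechanically. The following Python sketch (our own illustration, not part of the formal development) checks all three planes for a range of angles.

```python
import numpy as np

def basis(plane, alpha):
    """Return (|+_{plane,alpha}>, |-_{plane,alpha}>) as vectors in C^2."""
    c, s = np.cos(alpha / 2), np.sin(alpha / 2)
    if plane == 'XY':
        return (np.array([1, np.exp(1j * alpha)]) / np.sqrt(2),
                np.array([1, -np.exp(1j * alpha)]) / np.sqrt(2))
    if plane == 'XZ':
        return (np.array([c, s], dtype=complex),
                np.array([s, -c], dtype=complex))
    if plane == 'YZ':
        return (np.array([c, 1j * s]),
                np.array([s, -1j * c]))
    raise ValueError(plane)

# Each pair forms an orthonormal basis of C^2, for any angle alpha.
for plane in ('XY', 'XZ', 'YZ'):
    for alpha in np.linspace(0, 2 * np.pi, 7):
        plus, minus = basis(plane, alpha)
        assert np.isclose(np.vdot(plus, plus), 1)    # normalised
        assert np.isclose(np.vdot(minus, minus), 1)
        assert np.isclose(np.vdot(plus, minus), 0)   # orthogonal
```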
Usually, the desired measurement outcome is labelled 0 and the undesired measurement outcome is labelled 1.
\begin{definition}\label{def:meas_pattern}
A \emph{measurement pattern} consists of an $n$-qubit register $V$ with distinguished sets $I, O \subseteq V$ of input and output qubits and a sequence of commands consisting of the following operations:
\begin{itemize}
\item Preparations $N_i$, which initialise a qubit $i \notin I$ in the state $\ket{+}$.
\item Entangling operators $E_{ij}$, which apply a $CZ$-gate to two distinct qubits $i$ and $j$.
\item Destructive measurements $M_i^{\lambda, \alpha}$, which project a qubit $i\notin O$ onto the orthonormal basis $\{\ket{+_{\lambda,\alpha}},\ket{-_{\lambda,\alpha}}\}$, where $\lambda \in \{ \ensuremath\normalfont\textrm{XY}\xspace, \normalfont\normalfont\textrm{XZ}\xspace, \normalfont\normalfont\textrm{YZ}\xspace \}$ is the measurement plane and $\alpha$ is the measurement angle.
The projector $\ket{+_{\lambda,\alpha}}\bra{+_{\lambda,\alpha}}$ corresponds to outcome $0$ and $\ket{-_{\lambda,\alpha}}\bra{-_{\lambda,\alpha}}$
corresponds to outcome $1$.
\item Corrections $[X_i]^s$, which depend on a measurement outcome (or a linear combination of measurement outcomes) $s\in\{0,1\}$ and act as the Pauli-$X$ operator on qubit $i$ if $s$ is $1$ and as the identity otherwise.
\item Corrections $[Z_j]^t$, which depend on a measurement outcome (or a linear combination of measurement outcomes) $t\in\{0,1\}$ and act as the Pauli-$Z$ operator on qubit $j$ if $t$ is $1$ and as the identity otherwise.
\end{itemize}
\end{definition}
We will only consider \emph{runnable patterns}, which satisfy certain conditions ensuring they are implementable in practice.
\begin{definition}[{\cite[p.~4]{GFlow}}]\label{def:runnable_pattern}
A measurement pattern is \emph{runnable} if the following conditions hold.
\begin{itemize}
\item No correction depends on an outcome not yet measured.
\item All non-input qubits are prepared.
\item All non-output qubits are measured.
\item A command $C$ acts on a qubit $i$ only if $i$ has not already been measured, and one of (1)-(3) holds:
\begin{enumerate}[label=({\arabic*})]
\item $i$ is an input and $C$ is not a preparation,
\item $i$ has been prepared and $C$ is not a preparation,
\item $i$ has not been prepared, $i$ is not an input, and $C$ is a preparation.
\end{enumerate}
\end{itemize}
\end{definition}
Runnable measurement patterns can be \emph{standardized} \cite[p.~4]{GFlow}, so that all preparations $N_i$ appear first, then all the entangling operators $E_{ij}$, then the measurements $M_i^{\lambda, \alpha}$ and finally the corrections.
The entangling operators $E_{ij}$ in a pattern determine the resource graph state.
In fact, they correspond to the edges of the underlying \emph{labelled open graph}. This is formalised in the following definitions and remark, which will be used throughout the paper.
\begin{definition}
An \emph{open graph} is a tuple $(G,I,O)$ where $G=(V,E)$ is an undirected graph, and $I,O \subseteq V$ are distinguished (possibly overlapping) subsets representing \emph{inputs} and \emph{outputs}. We will write $\comp{O} := V\setminus O$ for the \emph{non-outputs} and $\comp{I}:= V\setminus I$ for the \emph{non-inputs}. If $v\not\in I$ and $v\not\in O$, we call $v$ an \emph{internal} vertex, and if $v\in I\cup O$, we call $v$ a \emph{boundary vertex}.
For vertices $u,v \in V$ we write $u\sim v$ when $u$ and $v$ are adjacent in $G$, and denote by $N_G(u)\coloneqq\{w\in V \mid w\sim u\}$ the set of neighbours of $u$.
\end{definition}
\begin{definition}\label{def:LOG}
A \emph{labelled open graph} is a tuple $\Gamma = (G,I,O, \lambda)$ where $(G,I,O)$ is an open graph, and $\lambda : \comp{O} \rightarrow \{ \ensuremath\normalfont\textrm{XY}\xspace, \normalfont\normalfont\textrm{YZ}\xspace, \normalfont\normalfont\textrm{XZ}\xspace\}$ assigns a measurement plane to each non-output vertex.
\end{definition}
\begin{remark}
A labelled open graph\ and an assignment of measurement angles $\alpha:\comp{O}\to [0,2\pi)$ carry the same information as a (standardized) runnable measurement pattern with no corrections.
Given the measurement pattern, the corresponding labelled open graph\ $(G,I,O,\lambda)$ may be determined as follows: the vertices of the graph $G$ are given by the set of qubits $V$.
The edges of $G$ are given by the set
\[
E = \{i\sim j \mid i,j\in V \text{ and $E_{ij}$ appears in the pattern}\}.
\]
The sets $I$ and $O$ are the ones given in Definition~\ref{def:meas_pattern}.
The functions $\lambda$ and $\alpha$ are determined by the measurement operators $M_i^{\lambda,\alpha}$; Definition~\ref{def:runnable_pattern} guarantees that both are well-defined.
Given a labelled open graph\ we can apply this process in reverse to construct a standardised runnable measurement pattern without corrections.
(In the absence of corrections, the order of the individual preparation commands, entangling commands, and measurement commands in the pattern does not matter since all commands of the same type commute.)
A labelled open graph\ and an assignment of measurement angles can also be determined from a measurement pattern including corrections by simply ignoring the latter (i.e.\ by assuming that all measurements yield the desired result).
In Section~\ref{sec:gflow}, we discuss necessary and sufficient conditions under which appropriate corrections commands can be determined when constructing a measurement pattern from a labelled open graph; these corrections then put constraints on the order of the measurements.
\end{remark}
In general, a single measurement pattern can result in a variety of measurement instructions, with each instruction depending on earlier measurement outcomes and the resulting corrections that must be applied.
We are, however, interested in the subset of measurement patterns that always implement the same linear map, regardless of the measurement outcomes.
For these patterns, we can identify the unique linear map implemented by the pattern with the set of measurement instructions obtained when all the measurement outcomes are $0$ (and thus no corrections are necessary).
This leads us to the following definition:
\begin{definition}\label{def:ogs-to-linear-map}
Suppose $\Gamma=(G,I,O,\lambda)$ is a labelled open graph, and let $\alpha:\comp{O}\to [0,2\pi)$ be an assignment of measurement angles.
The \emph{linear map associated with $\Gamma$ and $\alpha$} is given by
\[
M_{\Gamma,\alpha} := \left( \prod_{i\in\comp{O}} \bra{+_{\lambda(i),\alpha(i)}}_i \right) E_G N_{\comp{I}},
\]
where $E_G := \prod_{i\sim j} E_{ij}$ and $N_{\comp{I}} := \prod_{i\in\comp{I}} N_i$.
\end{definition}
\begin{remark}
Note that the projections $\bra{+_{\lambda(i),\alpha(i)}}_i$ on different qubits $i$ commute with each other.
Similarly, the controlled-Z operations $E_{ij}$ commute even if they involve some of the same qubits.
Finally, all the state preparations $N_i$ on different qubits commute.
Thus, $M_{\Gamma,\alpha}$ is fully determined by $\Gamma$ and $\alpha$.
\end{remark}
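To make Definition~\ref{def:ogs-to-linear-map} concrete, the following Python sketch (an illustration with an example of our own choosing, not part of the formal development) computes $M_{\Gamma,\alpha}$ for the smallest non-trivial case: two vertices $1\sim 2$ with $I=\{1\}$, $O=\{2\}$ and qubit $1$ measured in the XY plane. The resulting map is $\frac{1}{\sqrt 2}\,H Z_{-\alpha}$, which one may recognise as the one-qubit teleportation step.

```python
import numpy as np

# M_{Gamma,alpha} for the open graph 1~2, I={1}, O={2}, lambda(1)=XY:
#   M = <+_{XY,alpha}|_1  E_{12}  N_2
# Qubit order in kron products: qubit 1 first factor, qubit 2 second.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)        # N_2 prepares |+>
I2 = np.eye(2, dtype=complex)

def M(alpha):
    bra = np.array([1, np.exp(-1j * alpha)]) / np.sqrt(2)   # <+_{XY,alpha}|
    prep = np.kron(I2, plus.reshape(2, 1))                  # C^2 -> C^4
    meas = np.kron(bra.reshape(1, 2), I2)                   # C^4 -> C^2
    return meas @ CZ @ prep

# The pattern implements H Z_{-alpha}, up to the scalar 1/sqrt(2).
for alpha in (0.0, 0.7, np.pi / 2, 2.0):
    Z_neg = np.diag([1, np.exp(-1j * alpha)])
    assert np.allclose(M(alpha), (H @ Z_neg) / np.sqrt(2))
```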
\begin{definition}\label{def:ogs-to-ZX}
Suppose $\Gamma=(G,I,O,\lambda)$ is a labelled open graph\ and $\alpha:\comp{O}\to [0,2\pi)$ is an assignment of measurement angles.
Then its \emph{associated \zxdiagram} $D_{\Gamma,\alpha}$ is defined by translating the expression for $M_{\Gamma,\alpha}$ from Definition~\ref{def:ogs-to-linear-map} according to Table~\ref{tab:MBQC-to-ZX}. The elements are composed in the obvious way and any sets of adjacent phase-free green spiders other than measurement effects are merged. In other words, merging affects green spiders which come from the translation of a preparation or entangling command.
\begin{table}
\centering
\renewcommand{\arraystretch}{2}
\begin{tabular}{c||c|c|c|c|c}
operator & $N_i$ & $E_{ij}$ & $\bra{+_{\ensuremath\normalfont\textrm{XY}\xspace,\alpha(i)}}_i$ & $\bra{+_{\normalfont\normalfont\textrm{XZ}\xspace,\alpha(i)}}_i$ & $\bra{+_{\normalfont\normalfont\textrm{YZ}\xspace,\alpha(i)}}_i$ \\ \hline
diagram & \tikzfig{plus-state} & \tikzfig{cz} & \tikzfig{XY-effect} & \tikzfig{XZ-effect} & \tikzfig{YZ-effect}
\end{tabular}
\renewcommand{\arraystretch}{1}
\caption{Translation from an associated linear map to a \zxdiagram.}
\label{tab:MBQC-to-ZX}
\end{table}
\end{definition}
\begin{example}
The measurement pattern with the qubit register $V=\{ 1,2,3,4\}$, input and output sets $I=\{ 1,2 \}$ and $O = \{ 1,4 \}$ and the sequence of commands
$$ M_2^{\ensuremath\normalfont\textrm{XY}\xspace,\frac{\pi} 2}M_3^{\normalfont\normalfont\textrm{YZ}\xspace,\pi}E_{14}E_{23}E_{24} E_{34} N_3 N_4$$
is represented by the following \zxdiagram:
\ctikzfig{example-MBQC-translation}
\end{example}
\begin{lemma}\label{lem:zx-equals-linear-map}
Suppose $\Gamma=(G,I,O,\lambda)$ is a labelled open graph\ and $\alpha:\comp{O}\to [0,2\pi)$ is an assignment of measurement angles.
Let $M_{\Gamma,\alpha}$ be the linear map specified in Definition~\ref{def:ogs-to-linear-map} and let $D_{\Gamma,\alpha}$ be the \zxdiagram constructed according to Definition~\ref{def:ogs-to-ZX}.
Then $\intf{D_{\Gamma,\alpha}}=M_{\Gamma,\alpha}$.
\end{lemma}
\begin{proof}
For each operator $M$ in Table~\ref{tab:MBQC-to-ZX} and its corresponding diagram $D_M$, it is straightforward to check that $\intf{D_M}=M$.
The result thus follows by the compositional properties of the interpretation $\intf{\cdot}$ and the fact that rewriting ZX-diagrams preserves semantics.
\end{proof}
In order to specify a converse direction to this result, we will define a special class of ZX-diagrams. Before we do that, we recall which ZX-diagrams correspond to graph states.
\begin{definition}\label{def:graph-state}
A \emph{graph state diagram} is a \zxdiagram where all vertices are green, all the connections between vertices are Hadamard edges and an output wire is incident on each vertex in the diagram.
The \emph{graph corresponding to a graph state diagram} is the graph whose vertices are the green spiders of the \zxdiagram and whose edges are given by the Hadamard edges of the \zxdiagram.
\end{definition}
\begin{definition}\label{def:MBQC-form}
A \zxdiagram is in \emph{MBQC form} if it consists of a graph state diagram in which each vertex of the graph may also be connected to:
\begin{itemize}
\item an input (in addition to its output), and
\item a measurement effect (in one of the three measurement planes) instead of the output.
\end{itemize}
\end{definition}
\begin{definition}\label{def:graph-of-diagram}
Given a \zxdiagram $D$ in MBQC form, its \emph{underlying graph} $G(D)$ is the graph corresponding to the graph state part of $D$.
\end{definition}
See Figure~\ref{fig:graph-state} for an example of a graph state diagram and a diagram in MBQC form.
\begin{figure}
\ctikzfig{graph-state-ex}
\caption{On the left, a graph state diagram. In the middle, a diagram in MBQC form with the same underlying graph. On the right, an MBQC+LC form diagram with the same underlying labelled open graph.\label{fig:graph-state}}
\end{figure}
\noindent Given these definitions we can now show the following:
\begin{lemma}\label{lem:ogs-to-ZX-is-MBQC-Form}
Let $\Gamma=(G,I,O,\lambda)$ be a labelled open graph\ and let $\alpha:\comp{O}\to [0,2\pi)$ be an assignment of measurement angles.
Then the \zxdiagram $D_{\Gamma,\alpha}$ constructed according to Definition~\ref{def:ogs-to-ZX} is in MBQC form.
\end{lemma}
\begin{proof}
Consider performing the translation described in Definition~\ref{def:ogs-to-ZX} in two steps.
The first step involves translating the preparation and entangling commands of the linear map $M_{\Gamma,\alpha}$ according to Table~\ref{tab:MBQC-to-ZX} and then merging any sets of adjacent green spiders.
This yields a graph state diagram with some additional inputs.
(The underlying graph is $G$.)
The second step is the translation of the measurement projections of $M_{\Gamma,\alpha}$.
This yields measurement effects on some of the outputs of the graph state diagram.
Thus, the resulting \zxdiagram is in MBQC form by Definition~\ref{def:MBQC-form}.
\end{proof}
The converse of Lemma~\ref{lem:ogs-to-ZX-is-MBQC-Form} also holds.
\begin{lemma}\label{lem:zx-to-pattern}
Suppose $D$ is a \zxdiagram in MBQC form.
Then there exists a labelled open graph\ $\Gamma=(G,I,O,\lambda)$ and an assignment of measurement angles $\alpha:\comp{O}\to [0,2\pi)$ such that $\intf{D} = M_{\Gamma,\alpha}$.
\end{lemma}
\begin{proof}
Let $G:=G(D)$ be the underlying graph of the \zxdiagram $D$, cf.\ Definition~\ref{def:graph-of-diagram}.
Define $I\subseteq V$ to be the set of vertices of $D$ on which an input wire is incident.
Analogously, define $O\subseteq V$ to be the set of vertices of $D$ on which an output wire is incident.
Fix $\lambda:\comp{O}\to\{\ensuremath\normalfont\textrm{XY}\xspace,\normalfont\normalfont\textrm{XZ}\xspace,\normalfont\normalfont\textrm{YZ}\xspace\}$ by using Table~\ref{tab:MBQC-to-ZX} in reverse to determine the measurement planes from the measurement effects in the \zxdiagram.
Let $\Gamma := (G,I,O,\lambda)$.
Finally, define $\alpha:\comp{O}\to [0,2\pi)$ to be the phase of the measurement effect connected to each non-output vertex in the \zxdiagram.
Then $D = D_{\Gamma,\alpha}$ and thus the desired result follows from Lemma~\ref{lem:zx-equals-linear-map}.
\end{proof}
\begin{remark}
Given a fixed labelling of the graph vertices,
Lemmas~\ref{lem:ogs-to-ZX-is-MBQC-Form} and \ref{lem:zx-to-pattern}
show that the correspondence between MBQC form \zxdiagrams and
the pairs $(\Gamma,\alpha)$, where $\Gamma$ is a labelled open graph\ and $\alpha$ is an assignment of measurement angles, is one-to-one.
\end{remark}
It will turn out to be useful to consider a `relaxed' version of the MBQC form for \zxdiagrams.
\begin{definition}
We say a \zxdiagram is in \emph{MBQC+LC} form when it is in MBQC form (see Definition~\ref{def:MBQC-form}) up to arbitrary single-qubit Clifford unitaries on the input and output wires (LC stands for `local Clifford').
When considering the underlying graph of a \zxdiagram in MBQC+LC form, we ignore these single qubit Clifford unitaries.
\end{definition}
Note that an MBQC form diagram is an MBQC+LC form diagram with trivial single-qubit unitaries on its inputs and outputs. An example diagram in MBQC+LC form is given in Figure~\ref{fig:graph-state}.
\subsection{Graph-theoretic rewriting}
\label{sec:lc}
The rewrites we will use are based on the graph-theoretic notions of \emph{local complementation} and \emph{pivoting}.
We present these operations (and their effects) in Definitions~\ref{def:loc-comp} and~\ref{def:pivot} as they appear in Ref.~\cite{DP3}.
Our interest is in the effect these operations have on a measurement pattern.
In particular, we consider whether a \zxdiagram in MBQC form will remain in MBQC form after applying a local complementation or pivot
(or remain close enough to MBQC form to still be useful).
\begin{definition}\label{def:loc-comp}
Let $G=(V,E)$ be a graph and $u\in V$ a vertex. The {\em local complementation of $G$ about the vertex $u$} is the operation resulting in the graph
$$G\star u\coloneqq \left( V, E\mathbin{\Delta}\xspace\{(b,c) : (b,u), (c,u)\in E\ \textrm{and}\ b\neq c\}\right) ,$$
where $\mathbin{\Delta}\xspace$ is the symmetric set difference, i.e.~$A\mathbin{\Delta}\xspace B\coloneqq (A\cup B)\setminus (A\cap B)$.
\end{definition}
In other words, $G\star u$ is a graph that has the same vertices as $G$. Two neighbours $b$ and $c$ of $u$ are connected in $G\star u$ if and only if they are not connected in $G$. All other edges are the same as in $G$.
\begin{definition}\label{def:pivot}
Let $G=(V,E)$ be a graph and $u,v\in V$ two vertices connected by an edge. The \emph{pivot of $G$ about the edge $u\sim v$} is the operation resulting in the graph $G\land uv\coloneqq G\star u\star v\star u$.
\end{definition}
If we denote the set of vertices connected to both $u$ and $v$ by $A$,
the set of vertices connected to $u$ but not to $v$ by $B$
and the set of vertices connected to $v$ but not to $u$ by $C$,
then pivoting consists of interchanging $u$ and $v$ and complementing the edges between each pair of sets $A$, $B$ and $C$.
That is, a vertex in $A$ is connected to a vertex in $B$ after pivoting if and only if the two vertices are not connected before pivoting; and similarly for the two other pairs.
All the remaining edges are unchanged, including the edges internal to $A$, $B$ and $C$.
We illustrate this by the following picture, where crossing lines between two sets indicate complementing the edges.
\[G \quad\tikzfig{pivot-L}\qquad\qquad \quad G\wedge uv \quad\tikzfig{pivot-R}
\]
\begin{remark}\label{rem:pivot_sym}
From the above characterisation it follows that pivoting is symmetric in the (neighbouring) vertices, that is, $G\land uv = G\land vu$.
\end{remark}
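Both operations are simple to implement directly on edge sets, which makes properties such as Remark~\ref{rem:pivot_sym} easy to test. The Python sketch below (our own illustration, with an arbitrarily chosen small example graph) implements Definitions~\ref{def:loc-comp} and~\ref{def:pivot} verbatim.

```python
import itertools

def local_complement(edges, u):
    """G * u: complement the edges between pairs of distinct neighbours of u."""
    nbrs = {a for e in edges for a in e if u in e} - {u}
    toggled = {frozenset(p) for p in itertools.combinations(sorted(nbrs), 2)}
    return edges ^ toggled  # symmetric set difference on the edge set

def pivot(edges, u, v):
    """G ^ uv := G * u * v * u, defined for adjacent u and v."""
    assert frozenset((u, v)) in edges
    return local_complement(local_complement(local_complement(edges, u), v), u)

E = lambda a, b: frozenset((a, b))
# A small example graph on vertices {1, ..., 5}.
G = {E(1, 2), E(2, 3), E(2, 4), E(3, 4), E(4, 5)}

# Local complementation is an involution: (G * u) * u = G.
assert local_complement(local_complement(G, 2), 2) == G
# Pivoting is symmetric in the endpoints (cf. the remark above): G^uv = G^vu.
assert pivot(G, 2, 3) == pivot(G, 3, 2)
```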
In the \zxcalculus, a spider with a zero phase and exactly two incident wires is equivalent to a plain wire (representing the identity operation) by rule $(\textit{\textbf {i1}})$ in Figure~\ref{fig:zx-rules}. The following definition represents the corresponding graph operation, which will be used to remove such vertices.
\begin{definition}\label{def:identity-removal}
Let $G=(V,E)$ be a graph and let $u,v,w\in V$ be vertices such that $N_G(v)=\{u,w\}$ and $u\notin N_G(w)$, that is, the neighbours of $v$ are precisely $u$ and $w$, and $u$ is not connected to $w$.
We then define \emph{identity removal} as
$$G\idrem{v} w\coloneqq ((G\land uv)\setminus\{u\})\setminus\{v\}.$$
Since $v$ has exactly two neighbours, one of which is $w$,
the choice of $u$ is implicit in the notation.
We think of this as `dragging $u$ along $v$ to merge with $w$'.
\end{definition}
The effect of the identity removal is to remove the middle vertex $v$ and to fuse the vertices $u$ and $w$ into one, as illustrated in the picture below. Thus the operation is symmetric in $u$ and $w$, in the sense that $G\idrem{v} u$ and $G\idrem{v} w$ are equal up to a relabelling of one vertex. Note that $u$ and $w$ are allowed to have common neighbours (which will disconnect from the fused vertex $w$ as a result of identity removal).
\[G \quad\tikzfig{identity-removal-L}\qquad\qquad \quad G\idrem{v} w \quad\tikzfig{identity-removal-R}
\]
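Identity removal can likewise be computed on edge sets. The following Python sketch (our own illustration, with a hypothetical example graph) composes the pivot of Definition~\ref{def:pivot} with the two vertex deletions of Definition~\ref{def:identity-removal}, and checks that the fused vertex inherits the neighbours of $u$.

```python
import itertools

def local_complement(edges, u):
    nbrs = {a for e in edges for a in e if u in e} - {u}
    return edges ^ {frozenset(p) for p in itertools.combinations(sorted(nbrs), 2)}

def pivot(edges, u, v):
    return local_complement(local_complement(local_complement(edges, u), v), u)

def identity_removal(edges, u, v, w):
    """G |>_v w: pivot about u ~ v, then delete u and v (Definition above)."""
    nv = {a for e in edges for a in e if v in e} - {v}
    # Preconditions: N(v) = {u, w} and u is not connected to w.
    assert nv == {u, w} and frozenset((u, w)) not in edges
    return {e for e in pivot(edges, u, v) if u not in e and v not in e}

E = lambda a, b: frozenset((a, b))
# Path 1-2-3 with pendant vertices: 4 attached to u=1 and 5 attached to w=3.
G = {E(1, 2), E(2, 3), E(1, 4), E(3, 5)}
# Removing the identity vertex v=2 drags u=1 into w=3, so the fused
# vertex 3 inherits u's neighbour 4 alongside its own neighbour 5.
assert identity_removal(G, 1, 2, 3) == {E(3, 4), E(3, 5)}
```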
\begin{example}\label{ex:identity-removal}
Consider the following graph:
\[G \coloneqq \tikzfig{identity-removal-example1}.
\]
Note that the vertices $u,v$ and $w$ satisfy the condition for identity removal: $u$ and $w$ are not connected and are precisely the neighbours of $v$. Hence identity removal results in the graph
\[G\idrem{v} w = \tikzfig{identity-removal-example3}.
\]
\end{example}
\begin{remark}\label{rem:identity-removal-connected-vertices}
If we have vertices $u,v$ and $w$ with $N_G(v)=\{u,w\}$ but, unlike in the definition above, $u\in N_G(w)$, we can first perform a local complementation on $v$, so that $u$ and $w$ become disconnected, and then remove the identity vertex $v$. In symbols:
$$(G\star v)\idrem{v} w .$$
\end{remark}
The abstract application of a local complementation to a graph corresponds to the application of a specific set of local Clifford gates on the corresponding graph state:
\begin{theorem}[\cite{NestMBQC}, in the manner of {\cite[Theorem~2]{DP1}}]\label{thm:lc-in-zx}
Let $G=(V,E)$ be a graph with adjacency matrix $\theta$ and let $u\in V$, then
\[
\ket{G\star u} = X_{\pi/2,u}\otimes\bigotimes_{v\in V} Z_{-\pi/2,v}^{\theta_{uv}}\ket{G}.
\]
\end{theorem}
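Theorem~\ref{thm:lc-in-zx} can be checked numerically on small graphs. The Python sketch below (illustrative only; we assume the conventions $X_{\pi/2} = H Z_{\pi/2} H$ and $Z_{-\pi/2} = \operatorname{diag}(1,-i)$, matching the gate definitions earlier in this section) verifies the statement for a path $0 - 1 - 2$ complemented about the middle vertex, which adds the edge $0\sim 2$.

```python
import numpy as np

def graph_state(n, edges):
    """|G> = prod_{(a,b) in E} CZ_{ab} |+>^n; qubit 0 is the most significant."""
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    for (a, b) in edges:
        for idx in range(2 ** n):
            bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[a] and bits[b]:
                state[idx] *= -1
    return state

def apply_1q(op, q, n, state):
    """Apply a single-qubit operator to qubit q of an n-qubit state."""
    full = np.eye(1, dtype=complex)
    for k in range(n):
        full = np.kron(full, op if k == q else np.eye(2))
    return full @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X_half = H @ np.diag([1, 1j]) @ H          # X_{pi/2} = H Z_{pi/2} H
Z_minus_half = np.diag([1, -1j])           # Z_{-pi/2}

# G: path 0-1-2.  Local complementation about u=1 adds the edge 0-2.
lhs = graph_state(3, [(0, 1), (1, 2), (0, 2)])      # |G * u>
rhs = apply_1q(X_half, 1, 3,
      apply_1q(Z_minus_half, 0, 3,
      apply_1q(Z_minus_half, 2, 3, graph_state(3, [(0, 1), (1, 2)]))))
assert np.allclose(lhs, rhs)
```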
This result can be represented graphically in the \zxcalculus:
\begin{lemma}[{\cite[Theorem~3]{DP1}}]\label{lem:ZX-lcomp}
The following equality involving graph state diagrams and Clifford phase shifts follows from the graphical rewrite rules:
{
\ctikzfig{local-comp-ex}
}
Here, the underlying graph on the LHS is $G\star u$ and the underlying graph on the RHS is $G$.
Any vertices not adjacent to $u$ are unaffected and are not shown in the above diagram.
\end{lemma}
Combining this result with the definition of pivoting in terms of local complementations (cf.\ Definition~\ref{def:pivot}) we also get:
\begin{lemma}[{\cite[Theorem~3.3]{DP3}}]\label{lem:ZX-pivot}
The following diagram equality follows from the graphical rewrite rules:
\ctikzfig{pivot-desc}
Here $u$ and $v$ are a connected pair of vertices, and the underlying graph on the LHS is $G\wedge uv$ while the RHS is $G$.
Any vertices not adjacent to $u$ or $v$ are unaffected and are not shown in the above diagram.
\end{lemma}
\subsection{Generalised flow}
\label{sec:gflow}
The notion of \emph{flow} or \emph{causal flow} on open graphs was introduced by Danos and Kashefi~\cite{Danos2006Determinism-in-} as a sufficient condition to distinguish those open graphs capable of supporting a deterministic MBQC pattern with measurements in the \normalfont XY\xspace-plane.
Causal flow, however, is not a necessary condition for determinism.
That is, there are graphs that implement a deterministic pattern even though they do not have causal flow~\cite{GFlow,duncan2010rewriting}.
Browne et al.~\cite{GFlow} adapted the notion of flow to what they called
\emph{generalised flow} (gflow), which is both a necessary and sufficient condition for
`uniformly and strongly stepwise deterministic' measurement patterns (defined below).
Unlike causal flow, gflow can also be applied to arbitrary labelled open graph{}s, i.e.\ it supports measurement patterns with measurements in all three planes.
This even more general case is sometimes called \emph{extended gflow}.
\begin{definition}\label{def:determinism}
The linear map implemented by a measurement pattern for a specific set of measurement outcomes is called a \emph{branch} of the pattern.
A pattern is \emph{deterministic} if all branches are equal up to a scalar.
A pattern is \emph{strongly deterministic} if all branches are equal up to a global phase.
It is \emph{uniformly deterministic} if it is deterministic for any choice of measurement angles.
Finally, the pattern is \emph{stepwise deterministic} if any intermediate pattern -- resulting from performing some subset of the measurements and their corresponding corrections -- is again deterministic.
\end{definition}
The existence of gflow implies the uniform, strong and stepwise determinism of any pattern on the open graph (cf.~Theorem~\ref{t-flow} below).
Hence, by applying transformations to an open graph that preserve the existence of a gflow, we ensure that the modified open graph still supports a uniformly, strongly and stepwise deterministic pattern.
Note that the condition of interest is preservation of the \textit{existence} of gflow, not preservation of the specific gflow itself.
We will now give the formal definitions of (causal) flow and (extended) gflow.
\begin{definition}[{\cite[Definition~2]{Danos2006Determinism-in-}}]\label{def:causal-flow}
Let $(G,I,O)$ be an open graph. We say $G$ has \textit{(causal) flow} if there exists a map $f:\comp{O}\longrightarrow \comp{I}$ (from measured qubits to prepared qubits) and a strict partial order $\prec$ over $V$ such that for all $u\in \comp{O}$:
\begin{itemize}
\item $u \sim f(u)$
\item $u \prec f(u)$
\item $u \prec v$ for all neighbours $v\neq u$ of $f(u)$.
\end{itemize}
\end{definition}
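A causal flow witness is straightforward to verify mechanically. The following Python sketch (our own illustration; the strict partial order is encoded as a level map, which loses no generality here since any flow order admits a linear extension) checks $f(u) = u+1$ on the line graph $1 - 2 - 3$ with $I=\{1\}$ and $O=\{3\}$.

```python
def has_causal_flow_witness(V, edges, I, O, f, order):
    """Check the three causal-flow conditions for a candidate (f, order);
    `order` maps vertices to levels, with u < v read as order[u] < order[v]."""
    adj = {x: set() for x in V}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    for u in V - O:
        if f(u) in I:                       # f maps into the non-inputs
            return False
        if f(u) not in adj[u]:              # u ~ f(u)
            return False
        if not order[u] < order[f(u)]:      # u < f(u)
            return False
        if any(not order[u] < order[v] for v in adj[f(u)] - {u}):
            return False                    # u < all other neighbours of f(u)
    return True

# Line graph 1 - 2 - 3 with I = {1}, O = {3}: f(u) = u + 1 is a causal flow.
V, edges = {1, 2, 3}, {(1, 2), (2, 3)}
assert has_causal_flow_witness(V, edges, {1}, {3}, lambda u: u + 1,
                               {1: 0, 2: 1, 3: 2})
```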
When vertices are smaller than a vertex $v$ in the order~$\prec$, they are referred to as being `behind' or `in the past of' $v$.
The notion of gflow differs from the above definition of causal flow in two ways.
The value of $f(u)$ is allowed to be a set of vertices instead of a single vertex, so that corrections can be applied to more than one vertex at a time.
As a result of this change, the third condition of causal flow is now too strong: requiring that no element of $f(u)$ is adjacent to any vertex `in the past' of $u$ would be too restrictive.
Since corrections are applied to sets of vertices at a time, it is possible to make use of the following idea: if corrections are simultaneously applied to an even number of neighbours of $v$, then there is no net effect on $v$.
Thus, the second change takes the form of a parity condition: all vertices in the neighbourhood of $f(u)$ that lie `in the past' of $u$ are required to be in the even neighbourhood of $f(u)$.
As a result, net effects of corrections do not propagate into `the past'.
Allowing measurements in more than one measurement plane requires further careful adjustment of the parity conditions depending on the measurement plane of the vertex being measured.
\begin{definition}
Given a graph $G=(V,E)$, for any $K\subseteq V$, let $\odd{G}{K}= \{u\in V: \abs{N(u)\cap K}\equiv 1 \mod 2\}$ be the \emph{odd neighbourhood} of $K$ in $G$, i.e.\ the set of vertices having an odd number of neighbours in $K$.
If the graph $G$ is clear from context, we simply write $\odd{}{K}$.
The \emph{even neighbourhood} of $K$ in $G$, $\eve{G}{K}$, is defined in a similar way; $\eve{G}{K}= \{u\in V: \abs{N(u)\cap K}\equiv 0 \mod 2\}$.
\end{definition}
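The odd neighbourhood is easy to compute directly from the definition, as the following Python sketch (with an example graph of our own choosing) illustrates; the even neighbourhood is its complement in $V$.

```python
def odd_neighbourhood(adj, K):
    """Odd_G(K): vertices with an odd number of neighbours in K."""
    return {u for u in adj if len(adj[u] & K) % 2 == 1}

# 4-cycle 1-2-3-4-1, given as an adjacency dictionary.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
assert odd_neighbourhood(adj, {1}) == {2, 4}
# Both neighbours of 2 and of 4 lie in {1, 3}, so everything is evenly
# connected to that set: Odd({1, 3}) is empty and Even({1, 3}) = V.
assert odd_neighbourhood(adj, {1, 3}) == set()
```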
\begin{definition}[{\cite[Definition~3]{GFlow}}]
\label{defGFlow}
A labelled open graph{} $(G,I,O,\lambda)$ has generalised flow (or \emph{gflow}) if there exists a map $g:\comp{O}\to\pow{\comp{I}}$ and a partial order $\prec$ over $V$ such that for all $v\in \comp{O}$:
\begin{enumerate}[label=({g}\theenumi), ref=(g\theenumi)]
\item\label{it:g} If $w\in g(v)$ and $v\neq w$, then $v\prec w$.
\item\label{it:odd} If $w\in\odd{}{g(v)}$ and $v\neq w$, then $v\prec w$.
\item\label{it:XY} If $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$, then $v\notin g(v)$ and $v\in\odd{}{g(v)}$.
\item\label{it:XZ} If $\lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace$, then $v\in g(v)$ and $v\in\odd{}{g(v)}$.
\item\label{it:YZ} If $\lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace$, then $v\in g(v)$ and $v\notin\odd{}{g(v)}$.
\end{enumerate}
The set $g(v)$ is called the \emph{correction set} of $v$.
\end{definition}
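The five gflow conditions can be checked mechanically for a candidate pair $(g, \prec)$. The following Python sketch (our own illustration; the partial order is again encoded as a level map) verifies a gflow on the line graph $1 - 2 - 3$ with $O=\{3\}$ and all measurements in the XY plane.

```python
def is_gflow(V, adj, O, lam, g, level):
    """Check gflow conditions (g1)-(g5) for a candidate correction function g
    and a level map encoding the order (u < v read as level[u] < level[v])."""
    def odd(K):  # odd neighbourhood Odd(K)
        return {u for u in V if len(adj[u] & K) % 2 == 1}
    for v in V - O:
        gv = g[v]
        ov = odd(gv)
        if any(level[v] >= level[w] for w in gv - {v}):        # (g1)
            return False
        if any(level[v] >= level[w] for w in ov - {v}):        # (g2)
            return False
        if lam[v] == 'XY' and not (v not in gv and v in ov):   # (g3)
            return False
        if lam[v] == 'XZ' and not (v in gv and v in ov):       # (g4)
            return False
        if lam[v] == 'YZ' and not (v in gv and v not in ov):   # (g5)
            return False
    return True

# Line graph 1 - 2 - 3, O = {3}, all measurements in the XY plane:
# g(1) = {2}, g(2) = {3} with order 1 < 2 < 3 is a gflow.
V = {1, 2, 3}
adj = {1: {2}, 2: {1, 3}, 3: {2}}
g = {1: {2}, 2: {3}}
assert is_gflow(V, adj, {3}, {1: 'XY', 2: 'XY'}, g, {1: 0, 2: 1, 3: 2})
```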
\begin{remark}
Every causal flow is indeed a gflow, where $g(v):=\{f(v)\}$ and the partial order remains the same.
To see this, first note that causal flow can only be defined on labelled open graph{}s where $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$, so conditions \ref{it:XZ} and \ref{it:YZ} are vacuously satisfied.
Now, \ref{it:g} follows from the second bullet point of Definition~\ref{def:causal-flow}, \ref{it:odd} follows from the third bullet point, and \ref{it:XY} follows from the first bullet point.
\end{remark}
\begin{remark}
In the original definition of gflow in Ref.~\cite{GFlow}, condition \ref{it:odd} is given as:
\begin{align}\label{eq:wrong-g2}
\text{if }j \preccurlyeq i \text{ and } j \neq i \text{ then } j \notin \odd{}{g(i)}
\end{align}
In other publications, such as Ref.~\cite{danos_kashefi_panangaden_perdrix_2009}, the definition is changed to the version we give as \ref{it:odd}, yet this is usually done without comment.
For completeness, we provide an example which demonstrates that the condition \eqref{eq:wrong-g2} is insufficient for determinism.
Consider the following open graph:
\ctikzfig{bialgebra}
Here, the set of inputs is $\{i_1,i_2\}$, the set of outputs is $\{o_1,o_2\}$, and the non-outputs are measured in the planes $\lambda(i_1) = \lambda(i_2) = \ensuremath\normalfont\textrm{XY}\xspace$.
If we choose both measurement angles to be~$0$, it is straightforward to check that this diagram implements the linear map:
\[
\begin{pmatrix} 1&1&1&1\\ 1&-1&-1&1\\1&1&1&1\\ 1&-1&-1&1
\end{pmatrix}
\]
This has rank 2 and thus is not invertible, and in particular not unitary.
It therefore cannot be deterministically implementable and hence it should not have a gflow.
However, it is considered to have a gflow under condition \eqref{eq:wrong-g2} instead of \ref{it:odd}:
pick the partial order $i_1 \prec o_1, o_2$ and $i_2 \prec o_1, o_2$ with all other vertices incomparable.
Set $g(i_1) = \{o_1\}$ and $g(i_2) = \{o_2\}$. It is then easily checked that $(g,\prec)$ satisfies conditions \ref{it:g}, \eqref{eq:wrong-g2}, and \ref{it:XY}--\ref{it:YZ}.
Yet $(g,\prec)$ does not satisfy condition \ref{it:odd} and hence is not a gflow under the revised definition.
\end{remark}
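To see the failure concretely, the relevant odd neighbourhoods can be computed mechanically. The following Python sketch (our own illustration, using a hypothetical encoding of the open graph above as adjacency sets) confirms that $(g,\prec)$ satisfies the weaker condition \eqref{eq:wrong-g2} while violating condition \ref{it:odd}:

```python
def odd_nbhd(adj, K):
    return {v for v in adj if len(adj[v] & K) % 2 == 1}

# Complete bipartite graph on inputs {i1, i2} and outputs {o1, o2}
adj = {'i1': {'o1', 'o2'}, 'i2': {'o1', 'o2'},
       'o1': {'i1', 'i2'}, 'o2': {'i1', 'i2'}}
g = {'i1': {'o1'}, 'i2': {'o2'}}
order = {('i1', 'o1'), ('i1', 'o2'), ('i2', 'o1'), ('i2', 'o2')}
prec = lambda a, b: (a, b) in order

def weak_ok(v):
    # condition (wrong-g2): no j with j < v lies in Odd(g(v))
    return all(not prec(j, v) or j not in odd_nbhd(adj, g[v]) for j in adj)

def strong_ok(v):
    # condition (odd): every w != v in Odd(g(v)) satisfies v < w
    return all(prec(v, w) for w in odd_nbhd(adj, g[v]) if w != v)

print([(v, weak_ok(v), strong_ok(v)) for v in ('i1', 'i2')])
```

Here $\odd{}{g(i_1)}=\{i_1,i_2\}$: the weak condition places no constraint on $i_2$ since $i_1$ and $i_2$ are incomparable, but condition \ref{it:odd} would require $i_1\prec i_2$, which fails.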
To demonstrate that, with condition~\ref{it:odd}, the presence of a gflow indeed guarantees determinism of a pattern, we give a detailed proof of the following sufficiency theorem, which was first stated in Ref.~\cite{GFlow} as Theorem 2 with a sketch proof. The precise statement of the theorem requires some additional notation and the proof is quite lengthy and technical, so we state a coarse version of the theorem here and refer the reader to Appendix~\ref{sec:gflow-determinism} (and more specifically to Theorem~\ref{t-flow-app}) for the details.
\begin{theorem}\label{t-flow}
Let $\Gamma = (G,I,O,\lambda)$ be a labelled open graph~with a gflow and let $\alpha:\comp{O}\rightarrow [0,2\pi)$ be an assignment of measurement angles. Then there exists a runnable measurement pattern which is uniformly, strongly and stepwise deterministic, and which realizes the associated linear map $M_{\Gamma,\alpha}$ (cf.~Definition~\ref{def:ogs-to-linear-map}).
\end{theorem}
The converse also holds:
\begin{theorem}
If a pattern is stepwise, uniformly and strongly deterministic, then its underlying labelled open graph{} $(G, I, O, \lambda)$ has a gflow.
\end{theorem}
\begin{proof}
We present a proof sketch here; a complete treatment can be found in Ref.~\cite[Theorem~7.9.7]{danos_kashefi_panangaden_perdrix_2009}. The proof is by induction on the number of measurements. If the number of measurements is $n=0$, then the pattern trivially has a gflow. Suppose the pattern has $n+1$ qubits to be measured. Since the pattern is assumed to be stepwise deterministic, after performing the first measurement, the remaining pattern is still stepwise, uniformly and strongly deterministic. Hence it has a gflow by the induction hypothesis, where the partial order is given by the order in which the measurements are performed.
It remains to extend this gflow to include the first qubit to be measured. The essential part of this is to find a subset $S \subseteq \comp{I}$ that can act as the correction set of the first measurement (cf.\ Definition~\ref{defGFlow}). Given such a subset $S$, we define the full gflow as:
\begin{align*}
g^\prime(i) :=
\begin{cases}
g(i) & i\neq n \\ S& i=n,
\end{cases}
\end{align*}
where $g$ is the gflow of the smaller pattern.
\end{proof}
It is not straightforward to actually find a concrete gflow from the procedure in this proof. A constructive algorithm has since been given in Ref.~\cite{mhalla2011graph}, which finds gflow on labelled open graph{}s where all measurements are in the \normalfont XY\xspace plane. In Section~\ref{sec:MaximallyDelayedGflow}, we extend this algorithm to find extended gflow on labelled open graph{}s with measurements in all three planes.
Gflow is a property that applies to labelled open graph{}s. For convenience we also define it for ZX-diagrams.
\begin{definition}\label{dfn:zx-gflow}
We say a \zxdiagram in MBQC(+LC) form has \emph{gflow} $(g,\prec)$ if the corresponding labelled open graph\ $\Gamma$ has gflow $(g,\prec)$.
\end{definition}
The following result from Ref.~\cite{cliff-simp} shows that any unitary circuit can be converted into a deterministic measurement pattern.
\begin{proposition}[{\cite[Lemma 3.7]{cliff-simp}}]\label{prop:circuit-to-pattern}
Given a circuit, there is a procedure for converting it into an equivalent measurement pattern.
Furthermore, this measurement pattern only contains \normalfont XY\xspace-plane measurements and has causal flow, so it also has gflow.
\end{proposition}
Note that Ref.~\cite{cliff-simp} does not explicitly talk about measurement patterns.
However, what they call graph-like diagrams corresponds in a straightforward manner to diagrams in MBQC form in which every measured vertex is measured in the \normalfont XY\xspace-plane.
(We also note that the procedure of Proposition~\ref{prop:circuit-to-pattern} takes $O(n^2)$ operations,
where $n$ is the number of vertices.)
Below, we present a concrete example illustrating how the presence of gflow allows measurement errors to be corrected as the computation progresses.
\begin{example}\label{ex:gflow-in-action}
Consider the following labelled open graph{} $\Gamma$:
\[
\tikzfig{gflow-example-geometry}
\]
where $a$ is an input, $e$ and $f$ are outputs, and the measurement planes are given by $\lambda(a)=\lambda(b)=\ensuremath\normalfont\textrm{XY}\xspace$, $\lambda(c)=\normalfont\normalfont\textrm{XZ}\xspace$ and $\lambda(d)=\normalfont\normalfont\textrm{YZ}\xspace$. As usual, we denote the measurement angles by $\alpha : V \rightarrow [0,2\pi)$, where $V$ is the vertex set of $\Gamma$. Using Definition~\ref{def:ogs-to-ZX} (that is, we translate according to Table~\ref{tab:MBQC-to-ZX}), we obtain the corresponding \zxdiagram:
\[\tikzfig{gflow-example-zx}
\]
Note that the labelled open graph\ we started with has a gflow $(g,\prec)$ given by the following partial order
\[a\prec b\prec c\prec d\prec e,f,\]
with the function $g$ taking the values
\begin{align*}
g(a) &= \{b\} \\
g(b) &= \{c\} \\
g(c) &= \{c,d\} \\
g(d) &= \{d,e,f\}.
\end{align*}
It follows that we have
\begin{align*}
\odd{}{g(a)} &= \{a,c,d,e\} \\
\odd{}{g(b)} &= \{b,d,f\} \\
\odd{}{g(c)} &= \{c,d,f\} \\
\odd{}{g(d)} &= \varnothing,
\end{align*}
from which it is easy to verify that conditions \ref{it:g}--\ref{it:YZ} hold.
By Theorem~\ref{t-flow}, the presence of gflow guarantees that we can correct the measurement errors provided that we measure the qubits according to the partial order. We demonstrate this for $\Gamma$ in Figure~\ref{fig:error-propagation}.
Thus suppose a measurement error of $\pi$ occurs when performing the measurement corresponding to the vertex $c$, as indicated in the top left part of Figure~\ref{fig:error-propagation}. The labels in this figure refer to the rules in Figure~\ref{fig:zx-rules}. In order to get the left diagram on the second row, we move each red $\pi$-phase past the corresponding Hadamard gate, which changes the colour of the phase to green. For the right diagram, the left green $\pi$ travels past the green node on the left and flips the sign of $\alpha(d)$. Next, to obtain the left diagram on the third row, the middle green $\pi$ travels along the middle triangle and past another Hadamard gate to become a red $\pi$. Finally, in the bottom left diagram, the red $\pi$ on the left has been fused with $-\alpha(d)$; and the red $\pi$ on the right has passed through the Hadamard gate switching its colour to green, and the adjacent green nodes have fused into a node with phase $\pi-\frac{\pi}{2}=\frac{\pi}{2}$. We then rearrange the diagram so that it looks like a measurement pattern again.
Note that all the vertices that are affected by the error are above $c$ in the partial order and hence `not yet measured' at this stage of the computation. Thus the necessary corrections may be applied to these vertices when they are measured.
\end{example}
\begin{figure}
\begin{align*}
&\tikzfig{gflow-example-corr2-11}\quad\stackrel{(\boldsymbol{\pi})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-12} \\ \\
\stackrel{(\textit{\textbf h})\ \&\ (\textit{\textbf {i2}})}{\rightsquigarrow}\quad &\tikzfig{gflow-example-corr2-21}\quad\stackrel{(\textit{\textbf f})\ \&\ (\boldsymbol{\pi})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-22} \\ \\
\stackrel{(\textit{\textbf f}), (\textit{\textbf h})\ \&\ (\textit{\textbf {i2}})}{\rightsquigarrow}\quad &\tikzfig{gflow-example-corr2-31}\quad\stackrel{(\boldsymbol{\pi})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-32} \\ \\
\stackrel{(\textit{\textbf f}), (\textit{\textbf h})\ \&\ (\textit{\textbf {i2}})}{\rightsquigarrow}\quad &\tikzfig{gflow-example-corr2-41}\quad\stackrel{(\textit{\textbf f})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-42}
\end{align*}
\caption{\label{fig:error-propagation} Propagation of a measurement error of the pattern in Example~\ref{ex:gflow-in-action}.}
\end{figure}
\subsection{Focusing gflow for {\normalfont XY\xspace} plane measurements}\label{sec:focusing-gflow}
For a labelled open graph{} $(G,I,O,\lambda)$ in which all measurements are in the \normalfont XY\xspace-plane, there is a special type of gflow which is specified by the correction function alone.
This gflow is called \emph{focused} because of the property that, among non-output vertices, corrections only affect the vertex they are meant to correct.
A labelled open graph{} where all measurements are in the \normalfont XY\xspace plane has gflow if and only if it has focused gflow.
\begin{definition}[{adapted from \cite[Definition~5]{mhalla2011graph}}]\label{def:focused-gflow}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the property that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$.
Then $(g,\prec)$ is a \emph{focused gflow} on $(G,I,O,\lambda)$ if for all $v\in\comp{O}$, we have $\odd{G}{g(v)}\cap \comp{O}=\{v\}$, and furthermore $\prec$ is the transitive closure of the relation $\{(v,w) \mid v\in\comp{O} \wedge w\in g(v)\}$.
\end{definition}
\begin{theorem}[{reformulation of \cite[Theorem~2]{mhalla2011graph}}]\label{thm:mhalla2}
If $(G,I,O,\lambda)$ is a labelled open graph{} with the property that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$, then $(G,I,O,\lambda)$ has gflow if and only if it has a focused gflow.
\end{theorem}
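As a quick illustration (our own sketch, again using a hypothetical encoding of graphs as adjacency sets), the focusing condition $\odd{G}{g(v)}\cap \comp{O}=\{v\}$ is a one-line check:

```python
def odd_nbhd(adj, K):
    # Odd(K): vertices with an odd number of neighbours in K
    return {v for v in adj if len(adj[v] & K) % 2 == 1}

def is_focused(adj, outputs, g):
    # Among non-outputs, the correction for v may only touch v itself
    return all(odd_nbhd(adj, g[v]) - outputs == {v} for v in g)

adj = {'I': {'u'}, 'u': {'I', 'O'}, 'O': {'u'}}
print(is_focused(adj, {'O'}, {'I': {'u'}, 'u': {'O'}}))       # True
print(is_focused(adj, {'O'}, {'I': {'u', 'O'}, 'u': {'O'}}))  # False
```

The second correction function is also a valid gflow for the path graph, but it is not focused: $\odd{}{\{u,O\}}=\{I,u,O\}$ meets the non-outputs in $\{I,u\}\neq\{I\}$.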
A labelled open graph{} in which all measurements are in the \normalfont XY\xspace-plane can be \emph{reversed} by swapping the roles of inputs and outputs.
More formally:
\begin{definition}\label{def:reversed-LOG}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the property that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$.
The corresponding \emph{reversed labelled open graph{}} is the labelled open graph{} where the roles of inputs and outputs are swapped, i.e.\ it is $(G,O,I,\lambda')$, where $\lambda'(v):=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{I}$.
\end{definition}
Now if the number of inputs and outputs in the labelled open graph{} is the same, its focused gflow can also be reversed in the following sense.
\begin{corollary}\label{cor:reverse_unitary_gflow}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the properties that $\abs{I}=\abs{O}$ and $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$, and suppose it has a focused gflow $(g,\prec)$.
For all $v\in\comp{I}$, let $g'(v):=\{w\in\comp{O} \mid v\in g(w)\}$, and for all $u,w\in V$, let $u\prec' w$ if and only if $w\prec u$.
Then $(g',\prec')$ is a focused gflow for the reversed labelled open graph{} $(G,O,I,\lambda')$.
\end{corollary}
This follows immediately from the proofs of Ref.~\cite[Theorems~3--4]{mhalla2011graph} but it is not explicitly stated in that paper.
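Concretely, the reversed correction function is the transpose of $g$ viewed as a relation. A minimal Python sketch (our own illustration; \texttt{g} encodes the focused gflow of the path $I - u - O$ from the earlier examples):

```python
def reverse_gflow(g, measured_rev):
    # g'(v) = { w : v in g(w) }, for the vertices measured in the
    # reversed labelled open graph (inputs and outputs swapped)
    return {v: {w for w in g if v in g[w]} for v in measured_rev}

g = {'I': {'u'}, 'u': {'O'}}        # focused gflow of I - u - O
print(reverse_gflow(g, {'u', 'O'}))
```

For the path this yields $g'(u)=\{I\}$ and $g'(O)=\{u\}$, which is indeed a focused gflow of the reversed graph with input $O$ and output $I$.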
\section{Rewriting while preserving the existence of gflow}
\label{sec:rewriting}
In this section, we study several operations on labelled open graph{}s and how they interact with gflow.
In the first subsection, we show how certain graph operations, such as local complementation and pivoting, affect the gflow.
In Section~\ref{sec:MaximallyDelayedGflow} we give a polynomial time algorithm for finding extended gflow using the concept of \emph{maximally delayed} gflow.
We combine this notion with that of a \emph{focused} extended gflow in Section~\ref{sec:focusing-extended-gflow} to transform a given gflow to give it certain useful properties.
\subsection{Graph operations that preserve the existence of gflow}
In this section, we prove some of our main technical lemmas, establishing that local complementation and related graph rewrites interact well with the gflow of the graph.
First, we show that a labelled open graph{} resulting from the local complementation of a labelled open graph{} with gflow will also have a gflow.
\newcommand{\statelcgflow}{
Let $(g,\prec)$ be a gflow for $(G,I,O,\lambda)$ and let $u\in\comp{O}$. Then $(g',\prec)$ is a gflow for $(G\star u, I, O,\lambda')$, where
\[
\lambda'(u) := \begin{cases} \normalfont\normalfont\textrm{XZ}\xspace &\text{if } \lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace \\ \ensuremath\normalfont\textrm{XY}\xspace &\text{if } \lambda(u)=\normalfont\normalfont\textrm{XZ}\xspace \\ \normalfont\normalfont\textrm{YZ}\xspace &\text{if } \lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace \end{cases}
\]
and for all $v\in \comp{O}\setminus\{u\}$
\[
\lambda'(v) := \begin{cases} \normalfont\normalfont\textrm{YZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace \\ \normalfont\normalfont\textrm{XZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace \\ \lambda(v) &\text{otherwise.} \end{cases}
\]
Furthermore,
\[
g'(u) := \begin{cases} g(u)\mathbin{\Delta}\xspace \{u\} &\text{if } \lambda(u)\in\{\ensuremath\normalfont\textrm{XY}\xspace,\normalfont\normalfont\textrm{XZ}\xspace\} \\ g(u) &\text{if } \lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace \end{cases}
\]
and for all $v\in \comp{O}\setminus\{u\}$,
\[
g'(v) := \begin{cases} g(v) &\text{if } u\notin\odd{G}{g(v)} \\ g(v)\mathbin{\Delta}\xspace g'(u) \mathbin{\Delta}\xspace \{u\} &\text{if } u\in\odd{G}{g(v)}. \end{cases}
\]
}
\begin{lemma}\label{lem:lc_gflow}
\statelcgflow
\end{lemma}
\noindent The proof of this lemma can be found in Appendix~\ref{sec:proofs}. Note that the condition that the complemented vertex is not an output can in fact be dropped:
\begin{lemma}
Let $(g,\prec)$ be a gflow for $(G,I,O,\lambda)$ and let $u\in O$. Then $(g',\prec)$ is a gflow for $(G\star u, I, O,\lambda')$, where for all $v\in \comp{O}$
\[
\lambda'(v) := \begin{cases} \normalfont\normalfont\textrm{YZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace \\ \normalfont\normalfont\textrm{XZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace \\ \lambda(v) &\text{otherwise.} \end{cases}
\]
Furthermore, for all $v\in \comp{O}$,
\[
g'(v) := \begin{cases} g(v) &\text{if } u\notin\odd{G}{g(v)} \\ g(v) \mathbin{\Delta}\xspace \{u\} &\text{if } u\in\odd{G}{g(v)}. \end{cases}
\]
\end{lemma}
\begin{proof}
The proof is essentially the same as that of Lemma~\ref{lem:lc_gflow} if we take $g(u)$ and $g'(u)$ to be empty.
The output vertex has no label, so its label does not need to be updated.
\end{proof}
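For concreteness, the underlying graph operation $G\star u$ is easy to implement: it toggles every edge between two distinct neighbours of $u$. The following Python sketch is our own illustration on adjacency sets (it implements only the graph rewrite, not the accompanying label and correction-set updates of the lemmas above):

```python
from itertools import combinations

def local_complement(adj, u):
    # G * u: complement the induced subgraph on the neighbours of u
    new = {v: set(s) for v, s in adj.items()}
    for a, b in combinations(sorted(adj[u]), 2):
        if b in new[a]:
            new[a].discard(b); new[b].discard(a)
        else:
            new[a].add(b); new[b].add(a)
    return new

# Complementing the middle of the path a - b - c adds the edge a - c
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(local_complement(adj, 'b'))
```

Note that local complementation is involutive: applying it twice at the same vertex returns the original graph.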
Now by applying this lemma three times we see that a pivot also preserves the existence of a gflow.
\newcommand{\statecorpivotgflow}{
Let $(G,I,O,\lambda)$ be a labelled open graph\ which has a gflow, and let $u,v\in\comp{O}$ be connected by an edge. Then $(G\land uv, I, O,\hat\lambda)$, where
\[
\hat\lambda(a) = \begin{cases} \normalfont\normalfont\textrm{YZ}\xspace &\text{if } \lambda(a)=\ensuremath\normalfont\textrm{XY}\xspace \\
\normalfont\normalfont\textrm{XZ}\xspace &\text{if } \lambda(a)=\normalfont\normalfont\textrm{XZ}\xspace \\
\ensuremath\normalfont\textrm{XY}\xspace &\text{if } \lambda(a)=\normalfont\normalfont\textrm{YZ}\xspace \end{cases}
\]
for $a\in\{u,v\}$, and $\hat\lambda(w)=\lambda(w)$ for all $w\in \comp{O}\setminus\{u,v\}$ also has a gflow.
}
\begin{corollary}\label{cor:pivot_gflow}
\statecorpivotgflow
\end{corollary}
For more details regarding the correctness of this corollary, we refer to Appendix~\ref{sec:proofs}.
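Since $G\land uv$ can be expressed as $((G\star u)\star v)\star u$, a pivot implementation simply iterates local complementation. A Python sketch (our own illustration; self-contained, with the same adjacency-set encoding as before):

```python
from itertools import combinations

def local_complement(adj, u):
    # G * u: complement the induced subgraph on the neighbours of u
    new = {v: set(s) for v, s in adj.items()}
    for a, b in combinations(sorted(adj[u]), 2):
        if b in new[a]:
            new[a].discard(b); new[b].discard(a)
        else:
            new[a].add(b); new[b].add(a)
    return new

def pivot(adj, u, v):
    # G ^ uv = ((G * u) * v) * u, for adjacent u and v
    return local_complement(local_complement(local_complement(adj, u), v), u)

# Pivoting the path a - b - c about the edge b - c swaps b and c
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(pivot(adj, 'b', 'c'))
```

On the path $a - b - c$, the pivot about $b\sim c$ produces the path $a - c - b$, illustrating that pivoting exchanges the roles of the two pivoted vertices.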
Somewhat surprisingly, the deletion of some types of vertices preserves the existence of gflow:
\begin{lemma}\label{lem:deletepreservegflow}
Let $(g,\prec)$ be a gflow for $(G,I,O,\lambda)$ and let $u\in \comp{O}$ with $\lambda(u) \neq \ensuremath\normalfont\textrm{XY}\xspace$. Then $(g',\prec)$ is a gflow for $(G\setminus\{u\},I,O,\lambda)$ where $\forall v\in V, v\neq u$:
\[g'(v) := \begin{cases} g(v) &\text{if } u\not \in g(v)\\ g(v)\mathbin{\Delta}\xspace g(u) &\text{if } u \in g(v) \end{cases}\]
\end{lemma}
\begin{proof}
First, observe that $u\in g(u)$ since $\lambda(u)\neq \ensuremath\normalfont\textrm{XY}\xspace$. Thus $u\not \in g'(v)$ in either case of the definition. Hence, $g'$ is indeed a function on the graph $G\setminus\{u\}$.
To check that $g'$ is indeed a gflow, we check the necessary conditions for all vertices $v$ of $G\setminus\{u\}$. If $u\not \in g(v)$, then $g'(v) = g(v)$ and hence we are done. If $u \in g(v)$, then $v\prec u$ and hence also $v\prec w$ for all $w \in g(u)$ or $w\in \odd{G}{g(u)}$. Since $g'(v) = g(v)\mathbin{\Delta}\xspace g(u)$, conditions \ref{it:g} and \ref{it:odd} are satisfied. For conditions \ref{it:XY}--\ref{it:YZ}, note that we cannot have $v \in g(u)$ or $v\in \odd{G}{g(u)}$ because $v\prec u$. As a result, $v\in g'(v) \iff v\in g(v)$ and $v\in \odd{G\setminus\{u\}}{g'(v)} \iff v \in \odd{G}{g(v)}$. Since the labels of all the vertices stay the same, \ref{it:XY}--\ref{it:YZ} remain satisfied.
\end{proof}
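The correction-set update in this lemma is a symmetric difference, which Python exposes as the \texttt{\^{}} operator on sets. A minimal sketch (our own illustration; the sets below are a hypothetical gflow fragment in which $u\in g(u)$, as guaranteed by $\lambda(u)\neq\ensuremath\normalfont\textrm{XY}\xspace$):

```python
def delete_vertex_gflow(g, u):
    # After deleting u, correct via g(v) DELTA g(u) whenever u in g(v);
    # since u in g(u), u itself drops out of every updated correction set
    return {v: (g[v] ^ g[u] if u in g[v] else set(g[v]))
            for v in g if v != u}

g = {'a': {'u'}, 'u': {'u', 'o'}}  # hypothetical fragment with u in g(u)
print(delete_vertex_gflow(g, 'u'))  # {'a': {'o'}}
```

Here $g'(a)=\{u\}\mathbin{\Delta}\{u,o\}=\{o\}$, so the deleted vertex no longer appears in any correction set.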
\begin{remark}
The condition that $\lambda(u) \neq \ensuremath\normalfont\textrm{XY}\xspace$ is necessary in the previous lemma. Removing a vertex with label \normalfont XY\xspace will, in general, lead to a labelled open graph{} which no longer has a gflow. For instance consider the following labelled open graph:
\ctikzfig{line-graph}
where the first two vertices both have label \normalfont XY\xspace. This graph has a gflow specified by $I\prec u \prec O$ and $g(I) = \{u\}$, $g(u) = \{O\}$, but removing $u$ will disconnect the graph and hence the resulting graph does not have a gflow.
Note that if we were to consider the same labelled open graph, but with $u$ measured in a different plane, it would \emph{not} have a gflow to start with.
This is because, if it did, we would need $u\in g(I)$, so that $I\prec u$ but also $u\in g(u)$ so that $I\in \odd{}{g(u)}$ giving $u\prec I$.
Hence this does not contradict the lemma.
\end{remark}
The next corollary shows how the previous results can be combined to remove a vertex with arity 2 from a labelled open graph{} while preserving gflow. In the ZX-calculus, the idea behind this is that we use \IdRule to remove a vertex and then \SpiderRule to fuse the adjacent vertices (cf.~Definition~\ref{def:identity-removal}).
Recall from that definition that $G\idrem{v} w\coloneqq ((G\land uv)\setminus\{u\})\setminus\{v\}$.
\begin{corollary}\label{cor:id_removal}
Let $(g,\prec)$ be a gflow for the labelled open graph{} $(G,I,O,\lambda)$, and let $u,v\in\comp O$ and $w\in V$ be vertices such that $N_G(v)=\{u,w\}$ and $\lambda(u),\lambda(v)\neq \normalfont YZ\xspace$. Then $(\tilde g,\prec)$ as defined below is a gflow for $(G\idrem{v} w,I,O,\lambda)$.
For all $z\in \comp O\setminus\{u,v\}$ we have
\[
\tilde g(z) = \begin{cases} \hat g(z) &\text{if } u\notin\hat g(z), v\notin\hat g(z) \\
\hat g(z)\mathbin{\Delta}\xspace\hat g(v) &\text{if } u\notin\hat g(z)\mathbin{\Delta}\xspace\hat g(v), v\in\hat g(z) \\
\hat g(z)\mathbin{\Delta}\xspace\hat g(u)\mathbin{\Delta}\xspace\hat g(v) &\text{if } u\in\hat g(z)\mathbin{\Delta}\xspace\hat g(v), v\in\hat g(z)\mathbin{\Delta}\xspace\hat g(u) \\
\hat g(z)\mathbin{\Delta}\xspace\hat g(u) &\text{if } u\in\hat g(z), v\notin\hat g(z)\mathbin{\Delta}\xspace\hat g(u), \end{cases}
\]
where $\hat g$ is as defined in Corollary \ref{cor:pivot_gflow}.
\end{corollary}
\begin{proof}
This follows by a direct computation using Corollary \ref{cor:pivot_gflow} and Lemma \ref{lem:deletepreservegflow}.
\end{proof}
\begin{lemma}\label{lem:gflow-add-output}
Let $\Gamma=(G,I,O,\lambda)$ be a labelled open graph\ with $G=(V,E)$. Let $\Gamma'$ be the labelled open graph\ that results from converting an output $u\in O$ into a vertex measured in the \normalfont XY\xspace-plane and adding a new output vertex $u'$ in its stead:
\ctikzfig{gflow-add-output}
Formally, let $\Gamma'=(G',I,O',\lambda')$, where $G'=(V',E')$ with $V'=V\cup\{u'\}$ and $E'=E\cup\{u\sim u'\}$, $O'=(O\setminus\{u\})\cup\{u'\}$, and $\lambda'(v)$ is the extension of $\lambda$ to domain $V'\setminus O'$ with $\lambda'(u)=\ensuremath\normalfont\textrm{XY}\xspace$. Then if $\Gamma$ has gflow, $\Gamma'$ also has gflow.
\end{lemma}
\begin{proof}
Suppose $\Gamma$ has gflow $(g,\prec)$.
Let $g'$ be the extension of $g$ to domain $V'\setminus O'$ which satisfies $g'(u)=\{u'\}$, and let $\prec'$ be the transitive closure of $\prec\cup\{(u,u')\}$.
The tuple $(g',\prec')$ inherits \ref{it:g} and \ref{it:XY}--\ref{it:YZ} for all $v\in V\setminus O$ because the correction sets have not changed for any of the original vertices.
Furthermore, $u'\in\odd{G'}{g'(v)}$ for any $v$ implies $u\in g'(v)$, as $u$ is the only neighbour of $u'$.
Hence $u'\in\odd{G'}{g'(v)}$ implies $v\prec' u \prec' u'$.
Therefore \ref{it:odd} continues to be satisfied for all $v\in V\setminus O$.
Now, for $u$, \ref{it:g} holds because $u\prec' u'$ by definition, \ref{it:odd} holds because $\odd{G'}{g'(u)}=\{u\}$, and \ref{it:XY} can easily be seen to hold.
Thus, $(g',\prec')$ is a gflow for $\Gamma'$.
\end{proof}
\begin{lemma}\label{lem:gflow-add-input}
Let $\Gamma=(G,I,O,\lambda)$ be a labelled open graph\ with $G=(V,E)$.
Let $\Gamma'$ be the labelled open graph\ that results from adding an additional vertex measured in the \normalfont XY\xspace-plane `before' the input $u\in I$:
\ctikzfig{gflow-add-input}
Formally, let $\Gamma'=(G',I',O,\lambda')$, where $G'=(V',E')$ with $V'=V\cup\{u'\}$ and $E'=E\cup\{u\sim u'\}$, $I'=(I\setminus\{u\})\cup\{u'\}$, and $\lambda'(v)$ is the extension of $\lambda$ to domain $V'\setminus O$ which satisfies $\lambda'(u')=\ensuremath\normalfont\textrm{XY}\xspace$. Then if $\Gamma$ has gflow, $\Gamma'$ also has gflow.
\end{lemma}
\begin{proof}
Suppose $\Gamma$ has gflow $(g,\prec)$.
Let $g'$ be the extension of $g$ to domain $V'\setminus O$ which satisfies $g'(u')=\{u\}$, and let $\prec'$ be the transitive closure of $\prec\cup\{(u',w):w\in N_G(u)\cup\{u\}\}$.
The tuple $(g',\prec')$ inherits the gflow properties for all $v\in V\setminus O$ because the correction sets have not changed for any of the original vertices and because the additional inequalities in $\prec'$ do not affect the gflow properties for any $v\in V\setminus O$.
The latter is because
\begin{itemize}
\item $u'\notin g'(v)=g(v)$ for any $v\in V\setminus O$, and
\item $u'\notin\odd{G'}{g'(v)}=\odd{G'}{g(v)}$ for any $v\in V\setminus O$ since its only neighbour $u$ was an input in $\Gamma$ and thus satisfies $u\notin g(v)$ for any $v\in V\setminus O$.
\end{itemize}
Now, for $u'$, \ref{it:g} holds by the definition of $\prec'$.
Note that $\odd{G'}{g'(u')}=N_{G'}(u)$, so \ref{it:odd} also holds by the definition of $\prec'$.
Finally, \ref{it:XY} holds because $u'\notin g'(u')$ and $u'\in\odd{G'}{g'(u')}=N_{G'}(u)$.
Thus, $(g',\prec')$ is a gflow for $\Gamma'$.
\end{proof}
\subsection{Finding extended gflow} \label{sec:MaximallyDelayedGflow}
Ref.~\cite{MP08-icalp} gives a polynomial time algorithm for finding gflow for labelled open graph{}s with all measurements in the \normalfont XY\xspace-plane. In this section, we present an extension of this algorithm that works for measurements in all three measurement planes.
Before doing so, we note a few details from the algorithm. The intuition behind the procedure is to `maximally delay' any measurements,
thereby keeping potential correction options available for as long as possible. As a result, the algorithm finds a gflow of minimal `depth' (a notion we will make precise later).
The algorithm works backwards:
Starting from the output vertices, it iteratively constructs disjoint subsets of vertices
that can be corrected by vertices chosen in previous steps.
Viewed instead as information travelling forwards, from the inputs to the outputs,
this corresponds to only correcting a vertex at the last possible moment,
hence the name maximally delayed.
\begin{definition}[Generalisation of {\cite[Definition~4]{MP08-icalp}} to multiple measurement planes]
\label{defVk}
For a given labelled open graph $(G,I,O,\lambda)$ and a given gflow $(g,\prec)$ of $(G,I,O,\lambda)$, let
\[
V_k^\prec = \begin{cases} \max_\prec (V) &\text{if } k= 0 \\ \max_\prec (V\setminus(\bigcup_{i<k} V_i^\prec)) &\text{if } k > 0 \end{cases}
\]
where $\max_\prec(X) := \{u\in X \text{ s.t. } \forall v\in X, \neg(u\prec v)\}$ is the set of the maximal elements of $X$.
\end{definition}
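The partition $V_0^\prec, V_1^\prec,\ldots$ can be computed by repeatedly peeling off the $\prec$-maximal vertices. A short Python sketch (our own illustration; it assumes the relation given as a set of pairs is a strict partial order):

```python
def layers(vertices, order):
    # V_k: maximal elements of what remains after removing V_0 .. V_{k-1}
    prec = lambda a, b: (a, b) in order
    remaining, result = set(vertices), []
    while remaining:
        vk = {u for u in remaining if all(not prec(u, v) for v in remaining)}
        result.append(vk)
        remaining -= vk
    return result

order = {('I', 'u'), ('I', 'O'), ('u', 'O')}
print(layers({'I', 'u', 'O'}, order))  # [{'O'}, {'u'}, {'I'}]
```

For the total order $I\prec u\prec O$ this produces the singleton layers $V_0=\{O\}$, $V_1=\{u\}$, $V_2=\{I\}$.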
\begin{definition}[Generalisation of {\cite[Definition~5]{MP08-icalp}} to multiple measurement planes]
\label{defMoreDelayed}
For a given labelled open graph $(G,I,O,\lambda)$ and two given gflows $(g,\prec)$ and $(g',\prec')$ of $(G,I,O,\lambda)$,
$(g,\prec)$ is \emph{more delayed} than $(g',\prec')$ if for all $k$,
\[
\abs{\bigcup_{i=0}^k V_i^\prec} \geq \abs{\bigcup_{i=0}^k V_i^{\prec'}}
\]
and there exists a $k$ such that the inequality is strict.
A gflow $(g,\prec)$ is \emph{maximally delayed} if there exists no gflow of the same open graph that is more delayed.
\end{definition}
\begin{theorem}[Generalisation of {\cite[Theorem~2]{MP08-icalp}} to multiple measurement planes]
\label{thmGFlowAlgo}
There exists a polynomial time algorithm that decides whether a given
labelled open graph\ has an extended gflow, and that outputs such a gflow if it exists.
Moreover, the output gflow is maximally delayed.
\end{theorem}
The proof of this theorem can be found in Appendix~\ref{sec:FindingGflow}.
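To convey the backward structure of the algorithm without the technical machinery, here is a deliberately naive Python sketch for the all-\normalfont XY\xspace case: it repeatedly looks for an unmeasured vertex $u$ together with a set $K$ of already-processed non-inputs satisfying $\odd{G}{K}\setminus D=\{u\}$, where $D$ is the set of processed vertices. The subset search is brute force (exponential), so this is an illustration only; the actual algorithm achieves polynomial time via linear algebra over $\mathbb{F}_2$.

```python
from itertools import combinations

def odd_nbhd(adj, K):
    return {v for v in adj if len(adj[v] & K) % 2 == 1}

def correction_set(adj, u, done, inputs):
    # Brute-force search for K (subset of processed non-inputs)
    # with Odd(K) \ done == {u}
    cands = sorted(done - inputs)
    for r in range(len(cands) + 1):
        for K in combinations(cands, r):
            if odd_nbhd(adj, set(K)) - done == {u}:
                return set(K)
    return None

def find_gflow_xy(adj, inputs, outputs):
    # Work backwards from the outputs, marking vertices correctable
    # from already-processed vertices until none are left (or we get stuck)
    done, g = set(outputs), {}
    progress = True
    while progress:
        progress = False
        for u in sorted(set(adj) - done):
            K = correction_set(adj, u, done, inputs)
            if K is not None:
                g[u], progress = K, True
                done.add(u)
    return g if done == set(adj) else None

adj = {'I': {'u'}, 'u': {'I', 'O'}, 'O': {'u'}}
print(find_gflow_xy(adj, {'I'}, {'O'}))
```

On the path graph this recovers $g(u)=\{O\}$ and $g(I)=\{u\}$; on a graph whose non-output is disconnected from the outputs, it correctly returns \texttt{None}.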
\subsection{Focusing extended gflow}\label{sec:focusing-extended-gflow}
In Section~\ref{sec:focusing-gflow}, we introduced the notion of focused gflow for labelled open graph{}s in which all measurements are in the \normalfont XY\xspace plane.
There is no canonical generalisation of this notion to labelled open graph{}s with measurements in multiple planes.
Hamrit and Perdrix suggest three different extensions of focused gflow to the case of multiple measurement planes, which restrict correction operations on non-output qubits to only a single type of Pauli operator overall \cite[Definition~2]{hamrit2015reversibility}.
Here, we go a different route by requiring that non-output qubits only appear in correction sets, or odd neighbourhoods of correction sets, if they are measured in specific planes.
This means the correction operators which may be applied to some non-output qubit depend on the plane in which that qubit is measured.
The new notion of focused gflow will combine particularly nicely with the phase-gadget form of MBQC+LC diagrams of Section~\ref{sec:phasegadgetform}.
We begin by proving some lemmas that allow any gflow to be focused in our sense.
\begin{lemma}\label{lem:successor-gflow}
Let $(G,I,O,\lambda)$ be a labelled open graph which has gflow $(g,\prec)$.
Suppose there exist $v,w\in\comp{O}$ such that $v\prec w$.
Define $g'(v):=g(v)\mathbin{\Delta}\xspace g(w)$ and $g'(u):=g(u)$ for all $u\in\comp{O}\setminus\{v\}$, then $(g',\prec)$ is a gflow.
\end{lemma}
\begin{proof}
As the correction set only changes for $v$, the gflow properties remain satisfied for all other vertices.
Now, suppose $w'\in g'(v)$, then $w'\in g(v) \vee w'\in g(w)$.
In the former case, $v\prec w'$, and in the latter case, $v\prec w\prec w'$, since $(g,\prec)$ is a gflow, so \ref{it:g} holds.
Similarly, suppose $w'\in\odd{}{g'(v)}$, then by linearity of $\odd{}{\cdot}$ we have $w'\in\odd{}{g(v)} \vee w'\in\odd{}{g(w)}$.
Again, this implies $v\prec w'$ or $v\prec w\prec w'$ since $(g,\prec)$ is a gflow, so \ref{it:odd} holds.
Finally, $v\prec w$ implies $v\notin g(w)$ and $v\notin\odd{}{g(w)}$ since $(g,\prec)$ is a gflow.
Therefore $v\in g'(v) \Longleftrightarrow v\in g(v)$ and $v\in\odd{}{g'(v)}\Longleftrightarrow v\in\odd{}{g(v)}$.
Thus \ref{it:XY}--\ref{it:YZ} hold and $(g',\prec)$ is a gflow.
\end{proof}
\begin{lemma}\label{lem:focus-single-vertex}
Let $(G,I,O,\lambda)$ be a labelled open graph, let $(g,\prec)$ be a gflow for this open graph, and let $v\in\comp{O}$.
Then there exists $g':\comp{O}\to\pow{\comp{I}}$ such that
\begin{enumerate}
\item for all $w\in\comp{O}$, either $v=w$ or $g'(w)=g(w)$,
\item for all $w\in g'(v)\cap\comp{O}$, either $v=w$ or $\lambda(w) = \ensuremath\normalfont\textrm{XY}\xspace$,
\item for all $w\in \odd{}{g'(v)}\cap\comp{O}$, either $v=w$ or $\lambda(w)\neq \ensuremath\normalfont\textrm{XY}\xspace$, and
\item $(g',\prec)$ is a gflow for $(G,I,O,\lambda)$.
\end{enumerate}
We can construct this $g'$ in a number of steps that is polynomial in the number of vertices of $G$.
\end{lemma}
\begin{proof}
Let $g_0:=g$, we will modify the function in successive steps to $g_1,g_2$, and so on.
For each non-negative integer $k$ in turn, define
\begin{align*}
S_{k,\ensuremath\normalfont\textrm{XY}\xspace} &:= \{u\in (\odd{}{g_k(v)}\cap\comp{O}) \setminus\{v\} : \lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace\}, \\
S_{k,\normalfont\normalfont\textrm{XZ}\xspace} &:= \{u\in (g_k(v)\cap\comp{O}) \setminus\{v\} : \lambda(u)=\normalfont\normalfont\textrm{XZ}\xspace\}, \\
S_{k,\normalfont\normalfont\textrm{YZ}\xspace} &:= \{u\in (g_k(v)\cap\comp{O}) \setminus\{v\} : \lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace\},
\end{align*}
and set $S_k := S_{k,\ensuremath\normalfont\textrm{XY}\xspace} \cup S_{k,\normalfont\normalfont\textrm{XZ}\xspace} \cup S_{k,\normalfont\normalfont\textrm{YZ}\xspace}$.
Finding $S_k$ takes $O(\abs{V}^2)$ operations.
If $S_k=\emptyset$, let $g':=g_k$ and stop.
Otherwise, choose $w_k\in S_k$ among the elements minimal in $\prec$, and define
\[
g_{k+1}(u) := \begin{cases} g_k(v)\mathbin{\Delta}\xspace g_k(w_k) &\text{if } u=v \\ g_k(u) &\text{otherwise.} \end{cases}
\]
Note $w_k\in S_k$ implies $w_k\neq v$, as well as either $w_k\in g_k(v)$ or $w_k\in\odd{}{g_k(v)}$.
Thus if $(g_k,\prec)$ is a gflow, then $v\prec w_k$, and hence by Lemma~\ref{lem:successor-gflow}, $(g_{k+1},\prec)$ is also a gflow.
Since $(g_0,\prec)$ is a gflow, this means $(g_k,\prec)$ is a gflow for all $k$.
Now, if $w_k\in S_{k,\ensuremath\normalfont\textrm{XY}\xspace}$, then $w_k\in\odd{}{g_k(w_k)}$ by \ref{it:XY}.
This implies $w_k\notin\odd{}{g_{k+1}(v)}$, and thus $w_k\notin S_{k+1}$.
Similarly, if $w_k\in S_{k,\normalfont\normalfont\textrm{XZ}\xspace} \cup S_{k,\normalfont\normalfont\textrm{YZ}\xspace}$, then $w_k\in g_k(w_k)$ by \ref{it:XZ} or \ref{it:YZ}.
This implies $w_k\notin g_{k+1}(v)$, and thus $w_k\notin S_{k+1}$.
Hence, in each step we remove a minimal element from the set.
If there exists $w'\in S_{k+1}\setminus S_k$, then either $w'\in g_k(w_k)$ or $w'\in\odd{}{g_k(w_k)}$; in either case $w_k\prec w'$.
In other words, we always remove a minimal element from the set and add only elements that come strictly later in the partial order.
Therefore, the process terminates after $n\leq\abs{V}$ steps, at which point $S_n=\emptyset$.
Each step requires $O(\abs{V}^2)$ operations, so the total complexity is $O(\abs{V}^3)$.
The function $g'=g_n$ has the desired properties: (1) holds because we never modify the value of the function on arguments other than $v$, (2) and (3) hold because $S_n=\emptyset$, and (4) was shown to follow from Lemma~\ref{lem:successor-gflow}.
\end{proof}
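The proof above is constructive, and the loop is short enough to sketch directly. The following Python sketch is our own illustration (not the formal algorithm): \texttt{prec} encodes $\prec$, and the starting \texttt{g} is a valid but unfocused gflow of the path $I - u - O$.

```python
def odd_nbhd(adj, K):
    return {v for v in adj if len(adj[v] & K) % 2 == 1}

def focus_vertex(adj, outputs, plane, g, prec, v):
    # While S_k is nonempty, absorb g(w_k) into g(v)
    # for a prec-minimal offending vertex w_k
    g = {u: set(s) for u, s in g.items()}
    while True:
        odd = odd_nbhd(adj, g[v])
        S = ({u for u in odd - outputs - {v} if plane[u] == 'XY'}
             | {u for u in g[v] - outputs - {v} if plane[u] != 'XY'})
        if not S:
            return g
        w = next(u for u in sorted(S)
                 if all(not prec(x, u) for x in S))  # prec-minimal element
        g[v] = g[v] ^ g[w]

adj = {'I': {'u'}, 'u': {'I', 'O'}, 'O': {'u'}}
order = {('I', 'u'), ('I', 'O'), ('u', 'O')}
prec = lambda a, b: (a, b) in order
g = {'I': {'u', 'O'}, 'u': {'O'}}  # a valid gflow, but not focused at I
print(focus_vertex(adj, {'O'}, {'I': 'XY', 'u': 'XY'}, g, prec, 'I')['I'])
```

Starting from $g(I)=\{u,O\}$, the vertex $u$ lies in $\odd{}{g(I)}$ and is measured in the \normalfont XY\xspace plane, so one absorption step replaces $g(I)$ by $g(I)\mathbin{\Delta} g(u)=\{u\}$, after which $S$ is empty.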
Based on these lemmas, we can now show the focusing property: first for arbitrary labelled open graph{}s and then for labelled open graph{}s corresponding to an MBQC diagram in phase-gadget form.
These results state that correction sets can be simplified to only contain qubits measured in the \normalfont XY\xspace plane.
Moreover, side-effects of corrections (i.e.\ effects on qubits other than the one being corrected) never affect qubits measured in the \normalfont XY\xspace plane.
\begin{proposition}\label{prop:focused-gflow}
Let $(G,I,O,\lambda)$ be a labelled open graph which has gflow.
Then $(G,I,O,\lambda)$ has a maximally delayed gflow $(g,\prec)$ with the following properties for all $v\in V$:
\begin{itemize}
\item for all $w\in g(v)\cap\comp{O}$, either $v=w$ or $\lambda(w)= \ensuremath\normalfont\textrm{XY}\xspace$, and
\item for all $w\in \odd{}{g(v)}\cap\comp{O}$, either $v=w$ or $\lambda(w)\neq \ensuremath\normalfont\textrm{XY}\xspace$.
\end{itemize}
This maximally delayed gflow can be constructed in a number of steps that is polynomial in the number of vertices in $G$.
\end{proposition}
\begin{proof}
Let $(g_0,\prec)$ be a maximally delayed gflow of $(G,I,O,\lambda)$.
Set $n:=\abs{V}$ and consider the vertices in some order $v_1,\ldots,v_n$.
For each $k=1,\ldots,n$, let $g_k$ be the function that results from applying Lemma~\ref{lem:focus-single-vertex} to the gflow $(g_{k-1},\prec)$ and the vertex $v_k$.
Then $g_k$ satisfies the two properties for the vertex $v_k$.
The function $g_k$ also equals $g_{k-1}$ on all inputs other than $v_k$, so in fact $g_k$ satisfies the two properties for all vertices $v_1,\ldots,v_k$.
Thus, $g_n$ satisfies the two properties for all vertices.
Moreover, the partial order does not change, so $(g_n,\prec)$ is as delayed as $(g_0,\prec)$; i.e.\ it is maximally delayed.
Hence if $g:=g_n$, then $(g,\prec)$ has all the desired properties.
The construction of each successive $g_k$ via Lemma~\ref{lem:focus-single-vertex} takes $O(n^3)$ operations, and we perform it at most $n$ times, giving a total complexity of $O(n^4)$.
\end{proof}
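For intuition, the focusing loop used in the two proofs above can be sketched at the level of correction sets. The following Python sketch covers only the special case in which every non-output is measured in the \normalfont XY\xspace plane, so that focusing just has to clear the odd neighbourhood; the function names and data layout (adjacency sets, a dictionary of correction sets, a linear extension of the partial order) are illustrative assumptions, not notation from this paper.

```python
def odd_neighbourhood(graph, s):
    """Vertices adjacent to an odd number of elements of s."""
    odd = set()
    for w in s:
        odd ^= graph[w]  # symmetric difference accumulates parity
    return odd

def focus(graph, g, order, outputs):
    """Focus each correction set: afterwards Odd(g(v)) meets the
    non-outputs only in {v}.  `order[v]` is the position of v in some
    linear extension of the gflow's partial order."""
    for v in g:
        while True:
            s = {w for w in odd_neighbourhood(graph, g[v])
                 if w not in outputs and w != v}
            if not s:
                break
            w = min(s, key=order.__getitem__)  # minimal violating vertex
            g[v] = g[v] ^ g[w]                 # XOR in w's correction set
    return g
```

On the path graph $1-2-3$ with output $3$ and the unfocused choice $g(1)=\{2,3\}$, one focusing step replaces $g(1)$ by $\{2\}$, as in the proof: we always remove a minimal violating element and only add elements that come later in the order.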
The extended notion of focused gflow also allows us to prove another result which will be useful for the optimisation algorithm later.
First, note that if a labelled open graph{} has gflow, then the labelled open graph{} that results from deleting all vertices measured in the \normalfont XZ\xspace or \normalfont YZ\xspace planes still has gflow.
\begin{lemma}\label{lem:gflow_drop_gadgets}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} which has gflow.
Let $(G',I,O,\lambda')$ be the induced labelled open graph{} on the vertex set $V'=\{v\in V\mid v\in O \text{ or } \lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace\}$.
Then $(G',I,O,\lambda')$ has gflow.
\end{lemma}
\begin{proof}
Apply Lemma~\ref{lem:deletepreservegflow} to each vertex measured in the \normalfont XZ\xspace or \normalfont YZ\xspace plane one by one.
Recall from Definition~\ref{defGFlow} that input vertices are measured in the \normalfont XY\xspace plane and so
are not removed by this process.
\end{proof}
We can now show that in a labelled open graph{} which has gflow and which satisfies $\abs{I}=\abs{O}$, any internal \normalfont XY\xspace vertex must have more than one neighbour.\footnote{The condition $\abs{I}=\abs{O}$ is necessary: consider the labelled open graph{} $(G, \emptyset, \{o\}, \lambda)$, where $G$ is the connected graph on two vertices $\{v,o\}$, and $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$.
Then $v$ is internal and has only a single neighbour, yet the labelled open graph{} has gflow with $g(v)=\{o\}$ and $v\prec o$.}
\begin{proposition}\label{prop:XY-neighbours}
Let $(G,I,O,\lambda)$ be a labelled open graph{} which has gflow and for which $\abs{I}=\abs{O}$.
Suppose $v\in\comp{O}$ satisfies $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$.
Then either $v\in I$ and $\abs{N_{G}(v)}\geq 1$, or $v\notin I$ and $\abs{N_{G}(v)}\geq 2$.
\end{proposition}
\begin{proof}
Consider some $v\in\comp{O}$ such that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$.
Note that a vertex with no neighbours is in the even neighbourhood of any set of vertices.
Therefore we must have $\abs{N_{G}(v)}\geq 1$, since $(G,I,O,\lambda)$ has gflow and $v$ must be in the odd neighbourhood of its correction set by \ref{it:XY}.
Now suppose for a contradiction that $v\notin I$ and $\abs{N_{G}(v)}=1$.
Denote by $u$ the single neighbour of $v$.
If the labelled open graph{} contains any vertices measured in the \normalfont XZ\xspace or \normalfont YZ\xspace planes, by Lemma~\ref{lem:gflow_drop_gadgets}, we can remove those vertices while preserving the property of having gflow.
Since $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$, the removal process preserves~$v$.
The new labelled open graph{} has gflow and $v$ cannot have gained any new neighbours; since $v$ must still have at least one neighbour by the argument of the first paragraph above, its only possible neighbour $u$ is also preserved.
Thus, without loss of generality, we will assume that all non-outputs of $(G,I,O,\lambda)$ are measured in the \normalfont XY\xspace plane.
The labelled open graph{} $(G,I,O,\lambda)$ has gflow and satisfies $\lambda(w)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $w\in\comp{O}$, so by Theorem~\ref{thm:mhalla2} it has a focused gflow $(g,\prec)$.
To satisfy the gflow condition \ref{it:XY} for $v$, that is, to satisfy $v\in\odd{G}{g(v)}$, we must have $u\in g(v)$.
This then implies $v\prec u$ by \ref{it:g}.
Since $\abs{I}=\abs{O}$, the focused gflow $(g,\prec)$ can be reversed in the sense of Corollary~\ref{cor:reverse_unitary_gflow}.
Denote by $(G,O,I,\lambda')$ the reversed labelled open graph{} (cf.\ Definition~\ref{def:reversed-LOG}) and by $(g',\prec')$ the corresponding reversed focused gflow.
Since $v\notin I$, $v$ remains a non-output in the reversed graph, so it has a correction set.
But $g'(v)=\{w\in\comp{O}\mid v\in g(w)\}$ cannot contain $u$: $u\in g'(v)$ would mean $v\in g(u)$, which implies $u\prec v$ by \ref{it:g}, contradicting $v\prec u$.
Since $u$ is the only neighbour of $v$, it follows that $v\notin\odd{G}{g'(v)}$, contradicting \ref{it:XY}.
Therefore, the initial assumption must be wrong, i.e.\ if $v\notin I$ then $\abs{N_{G}(v)}\geq 2$.
\end{proof}
\begin{remark}
This implies that in any unitary MBQC-form \zxdiagram with gflow, any vertex measured in the \normalfont XY\xspace-plane has at least two incident wires (plus the wire leading to the measurement effect), since being an input vertex of the labelled open graph{} implies being connected to an input wire in the \zxdiagram.
\end{remark}
\section{Simplifying measurement patterns}\label{sec:simplifying}
In the previous section, we saw several ways in which labelled open graph{}s can be modified while preserving the existence of gflow. In this section, we will see how these modifications can be done on measurement patterns in a way that preserves the computation being performed.
The goal of the simplifications in this section is to reduce the number of qubits needed to implement the computation.
Since we are specifically interested in patterns with gflow, we will represent a pattern by a ZX-diagram in MBQC+LC form, which carries essentially the same information.
Before we find qubit-removing rewrite rules however, we establish how local Cliffords in an MBQC+LC form diagram can be changed into measurements in Section~\ref{sec:local-Cliffords} and how local complementations affect a pattern in Section~\ref{sec:pattern-local-complementation}. We use local complementations to remove Clifford vertices from a pattern in Section~\ref{sec:removecliffordqubits}, and to change a pattern so that only two measurement planes are necessary in Section~\ref{sec:phasegadgetform}. Finally, in Section~\ref{sec:further-opt} we find some further simplifications that allow the removal of additional qubits.
\subsection{Transforming local Cliffords into measurements}\label{sec:local-Cliffords}
We introduced MBQC+LC diagrams as an extension of MBQC form diagrams. In this section we will see that we can always convert the local Clifford gates into measurements to turn the diagram into MBQC form.
\begin{lemma}\label{lem:SQU-to-MBQC-form}
Any \zxdiagram $D$ which is in MBQC+LC form can be brought into MBQC form.
Moreover, if the MBQC-form part of $D$ involves $n$ qubits, of which $p$ are inputs and $q$ are outputs, then the resulting MBQC-form diagram contains at most $n+2p+4q$ qubits.
\end{lemma}
\begin{proof}
Any single-qubit Clifford unitary can be expressed as a composite of three phase shifts \cite[Lemma~3]{backens1}.
Note that this result holds with either choice of colours, i.e.\ any single-qubit Clifford unitary can be expressed as \tikzfig{SQC-red} or \tikzfig{SQC-green}.
Now, with the green-red-green version, for any Clifford operator on an input, we can `push' the final green phase shift through the graph state part onto the outgoing wire.
There, it will either merge with the measurement effect or with the output Clifford unitary:
\ctikzfig{SQC-in-replacement}
If $\gamma\in\{0,\pi\}$, merging the phase shift with a measurement effect may change the angle but not the phase label, e.g.\ if $\gamma=\pi$:
\begin{center}
\tikzfig{pivot-pi-phases-XY} \qquad \tikzfig{pivot-pi-phases-XZ} \qquad \tikzfig{pivot-pi-phases-YZ}
\end{center}
If $\gamma\in\{\frac\pi2,-\frac\pi2\}$, merging the phase shift with a measurement effect will flip the phase labels \normalfont XZ\xspace and \normalfont YZ\xspace, e.g.\ if $\gamma=-\frac\pi2$:
\begin{center}
\tikzfig{lc-N-XY} \qquad \tikzfig{lc-N-XZ} \qquad \tikzfig{lc-N-YZ}
\end{center}
Thus we need to add at most two new qubits to the MBQC-form part when removing a Clifford unitary on the input.
For a Clifford unitary on the output, we have
\ctikzfig{SQU-out-replacement}
Thus we add at most four new qubits.
Combining these properties, we find that rewriting to MBQC form adds at most $2p+4q$ new qubits to the pattern.
\end{proof}
\begin{proposition}
Suppose $D$ is a \zxdiagram in MBQC+LC form and that its MBQC part has gflow.
Let $D'$ be the \zxdiagram that results from bringing $D$ into MBQC form as in Lemma~\ref{lem:SQU-to-MBQC-form}.
Then $D'$ has gflow.
\end{proposition}
\begin{proof}
By applying Lemma~\ref{lem:SQU-to-MBQC-form} repeatedly, we can incorporate any local Clifford operators into the MBQC form part of the diagram.
Lemmas~\ref{lem:gflow-add-output} and~\ref{lem:gflow-add-input} ensure that each step preserves the property of having gflow.
\end{proof}
\subsection{Local complementation and pivoting on patterns}\label{sec:pattern-local-complementation}
Lemmas~\ref{lem:ZX-lcomp} and~\ref{lem:ZX-pivot} showed how to apply a local complementation and pivot on a ZX-diagram by introducing some local Clifford spiders. In this section we will show how these rewrite rules can be used on MBQC+LC diagrams.
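At the level of the underlying graphs, the two operations used throughout this section are easy to state. The following Python sketch (hypothetical helper names; graphs as adjacency-set dictionaries) implements local complementation $G\star u$ and realises a pivot $G\wedge uv$ as the composite $((G\star u)\star v)\star u$.

```python
def local_complement(graph, u):
    """Return G * u: complement the edges among the neighbours of u."""
    g2 = {v: set(nbrs) for v, nbrs in graph.items()}
    nbrs = sorted(graph[u])
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            if b in g2[a]:                 # edge present: remove it
                g2[a].discard(b); g2[b].discard(a)
            else:                          # edge absent: add it
                g2[a].add(b); g2[b].add(a)
    return g2

def pivot(graph, u, v):
    """Return G ^ uv, realised as three local complementations."""
    return local_complement(local_complement(local_complement(graph, u), v), u)
```

For example, on the path $1-2-3$, complementing about the middle vertex adds the edge $1-3$, while pivoting about the edge $1-2$ turns the path into a star centred on vertex $1$.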
\begin{lemma}\label{lem:lc-MBQC-form-non-input}
Let $D$ be an MBQC+LC diagram and suppose $u\in G(D)$ is not an input vertex.
Then the diagram resulting from applying Lemma~\ref{lem:ZX-lcomp} on $u$ (\ie locally complementing), can be turned back into an MBQC+LC diagram $D'$ with $G(D')=G(D)\star u$. If $D$ had gflow, then $D'$ will also have gflow.
\end{lemma}
\begin{proof}
Suppose $D$ is an MBQC+LC diagram, $\Gamma=(G,I,O,\lambda)$ the corresponding labelled open graph, and $\alpha:\comp{O}\to[0,2\pi)$ gives the associated measurement angles.
By assumption, $u\notin I$, so -- with the exception of the output wire or the edge to the measurement effect -- all edges incident on $u$ connect to neighbouring vertices in the graph.
The input wires on the other qubits can be safely ignored.
To get back an MBQC+LC diagram after Lemma~\ref{lem:ZX-lcomp} is applied to $u$, we only need to rewrite the measurement effects, and hence we need to construct new $\lambda'$ and $\alpha'$ for these measurement effects. We do that as follows.
First of all, there are no changes to the measurement effects on vertices $v\not\in N(u)\cup\{u\}$, and hence for those vertices we have $\lambda'(v)=\lambda(v)$ and $\alpha'(v)=\alpha(v)$.
The vertex $u$ gets a red $\frac\pi2$ phase from the application of Lemma~\ref{lem:ZX-lcomp}. If $u\in O$, then it has no associated measurement plane or angle. In this case, this red $\frac\pi2$ simply stays on the output wire, as allowed in an MBQC+LC diagram. When $u\notin O$, there are three possibilities, depending on $\lambda(u)$:
\begin{itemize}
\item If $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$, then the new measurement effect is
\ctikzfig{lc-u-XY}
i.e.\ $\lambda'(u)=\normalfont\normalfont\textrm{XZ}\xspace$ and $\alpha'(u)=\frac{\pi}{2}-\alpha(u)$.
\item If $\lambda(u)=\normalfont\normalfont\textrm{XZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-u-XZ}
i.e.\ $\lambda'(u)=\ensuremath\normalfont\textrm{XY}\xspace$ and $\alpha'(u)=\alpha(u)-\frac{\pi}{2}$.
\item If $\lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-u-YZ}
i.e.\ $\lambda'(u)=\normalfont\normalfont\textrm{YZ}\xspace$ and $\alpha'(u)=\alpha(u)+\frac{\pi}{2}$.
\end{itemize}
The vertices $v$ that are neighbours of $u$ get a green $-\frac\pi2$ phase. Again, if such a $v$ is an output, this phase can be put as a local Clifford on the output. If it is not an output, then there are also three possibilities depending on $\lambda(v)$:
\begin{itemize}
\item If $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$, then the new measurement effect is
\ctikzfig{lc-N-XY}
i.e.\ $\lambda'(v)=\ensuremath\normalfont\textrm{XY}\xspace$ and $\alpha'(v)=\alpha(v)-\frac{\pi}{2}$.
\item If $\lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-N-XZ}
i.e.\ $\lambda'(v)=\normalfont\normalfont\textrm{YZ}\xspace$ and $\alpha'(v)=\alpha(v)$.
\item If $\lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-N-YZ}
i.e.\ $\lambda'(v)=\normalfont\normalfont\textrm{XZ}\xspace$ and $\alpha'(v)=-\alpha(v)$.
\end{itemize}
With these changes, we see that the resulting diagram $D'$ is indeed in MBQC+LC form. The underlying graph $G(D')$ results from the local complementation about $u$ of the original graph $G(D)$. Furthermore, the measurement planes changed in the same way as in Lemma~\ref{lem:lc_gflow}, and hence if $D$ had gflow, then $D'$ will also have gflow.
\end{proof}
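The plane and angle updates listed in the proof above can be tabulated directly. The following is a small Python sketch of that table (illustrative names only; angles in radians, reduced modulo $2\pi$).

```python
import math

TAU = 2 * math.pi

def lc_update_vertex(plane, alpha):
    """Update for the complemented vertex u, which receives a red pi/2."""
    if plane == "XY":
        return "XZ", (math.pi / 2 - alpha) % TAU
    if plane == "XZ":
        return "XY", (alpha - math.pi / 2) % TAU
    return "YZ", (alpha + math.pi / 2) % TAU   # YZ stays YZ

def lc_update_neighbour(plane, alpha):
    """Update for a neighbour of u, which receives a green -pi/2."""
    if plane == "XY":
        return "XY", (alpha - math.pi / 2) % TAU
    if plane == "XZ":
        return "YZ", alpha % TAU
    return "XZ", (-alpha) % TAU                # YZ becomes XZ
```

Note that \normalfont XY\xspace-measured neighbours keep their plane; only the \normalfont XZ\xspace and \normalfont YZ\xspace labels are exchanged, which is what makes the gflow argument of Lemma~\ref{lem:lc_gflow} go through.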
\begin{proposition}\label{prop:MBQC-lc-MBQC}
Let $D$ be an MBQC+LC diagram and suppose $u$ is an arbitrary vertex.
Then after application of a local complementation to $u$, the resulting diagram can be turned into an MBQC+LC diagram.
\end{proposition}
\begin{proof}
If $u$ is not an input vertex, the result is immediate from Lemma~\ref{lem:lc-MBQC-form-non-input}.
If instead $u$ is an input vertex, we modify $D$ by replacing the input wire incident on $u$ by an additional graph vertex $u'$ measured in the \normalfont XY\xspace-plane at angle 0, and a Hadamard unitary on the input wire:
\ctikzfig{input-replacement}
Throughout this process, the measurement effect on $u$ (if any) does not change, so it is left out of the above equation.
In the resulting diagram $D'$, $u$ is no longer an input.
Furthermore, $D'$ is an MBQC+LC diagram.
Thus, the desired result follows by applying Lemma~\ref{lem:lc-MBQC-form-non-input} to $D'$.
\end{proof}
A pivot is just a sequence of three local complementations.
Thus, the previous lemma already implies that when Lemma~\ref{lem:ZX-pivot} is applied to an MBQC+LC diagram the resulting diagram can also be brought back into MBQC+LC form. Nevertheless, it will be useful to explicitly write out how the measurement planes of the vertices change.
\begin{lemma}\label{lem:pivot-MBQC-form-non-input}
Let $D$ be an MBQC+LC diagram and suppose $u$ and $v$ are neighbouring vertices in the graph state and are not input vertices of the underlying labelled open graph.
Then the diagram resulting from applying Lemma~\ref{lem:ZX-pivot} to $u$ and $v$ (\ie a pivot about $u\sim v$) can be brought back into MBQC+LC form.
The resulting \zxdiagram $D'$ satisfies $G(D') = G(D)\wedge uv$. If $D$ had gflow, then $D'$ will also have gflow.
\end{lemma}
\begin{proof}
Suppose $\Gamma=(G,I,O,\lambda)$ is the labelled open graph\ underlying $D$ and suppose $\alpha:\comp{O}\to[0,2\pi)$ gives the measurement angles.
We will denote the measurement planes after pivoting by $\lambda':\comp{O}\to\{\ensuremath\normalfont\textrm{XY}\xspace,\normalfont\normalfont\textrm{XZ}\xspace,\normalfont\normalfont\textrm{YZ}\xspace\}$ and the measurement angles after pivoting by $\alpha':\comp{O}\to[0,2\pi)$.
Let $a\in\{u,v\}$, then:
\begin{itemize}
\item If $a$ is an output, we consider the Hadamard resulting from the pivot operation as a Clifford operator on the output.
\item If $\lambda(a)=\ensuremath\normalfont\textrm{XY}\xspace$ then $\lambda'(a) = \normalfont\normalfont\textrm{YZ}\xspace$ and if $\lambda(a)=\normalfont\normalfont\textrm{YZ}\xspace$ then $\lambda'(a) = \ensuremath\normalfont\textrm{XY}\xspace$:
\ctikzfig{pivot-u-XY}
In both cases, the measurement angle stays the same: $\alpha'(a) = \alpha(a)$.
\item If $\lambda(a)=\normalfont\normalfont\textrm{XZ}\xspace$, then
\ctikzfig{pivot-u-XZ}
\ie $\lambda'(a) = \normalfont\normalfont\textrm{XZ}\xspace$ and $\alpha'(a) = \frac\pi2 - \alpha(a)$.
\end{itemize}
The only other changes are new green $\pi$ phases on vertices $w\in N(u)\cap N(v)$.
For measured (i.e.\ non-output) vertices, these preserve the measurement plane and are absorbed into the measurement angle in all three cases:
\begin{align*}
(\lambda'(w), \alpha'(w)) =
\begin{cases}
(\ensuremath\normalfont\textrm{XY}\xspace, \alpha(w) + \pi) & \text{if } \lambda(w) = \ensuremath\normalfont\textrm{XY}\xspace \mspace{-1.5mu} \quad \tikzfig{pivot-pi-phases-XY} \\
(\normalfont\normalfont\textrm{YZ}\xspace, -\alpha(w)) & \text{if } \lambda(w) = \normalfont\normalfont\textrm{YZ}\xspace \quad \tikzfig{pivot-pi-phases-YZ} \\
(\normalfont\normalfont\textrm{XZ}\xspace, -\alpha(w)) & \text{if } \lambda(w) = \normalfont\normalfont\textrm{XZ}\xspace \quad \tikzfig{pivot-pi-phases-XZ}
\end{cases}
\end{align*}
If instead $w$ is an output vertex, we consider the green $\pi$ phase shift as a Clifford operator on the output wire.
The measurement planes and the graph change exactly like in Corollary~\ref{cor:pivot_gflow} and hence $D'$ has gflow when $D$ had gflow.
\end{proof}
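As with local complementation, the measurement updates of this lemma can be written out as a small lookup. The Python sketch below (hypothetical names; angles in radians modulo $2\pi$) records the two cases: the pivoted vertices $u,v$, and the common neighbours, which receive a green $\pi$ phase.

```python
import math

TAU = 2 * math.pi

def pivot_update_endpoint(plane, alpha):
    """Update for the pivoted vertices u and v."""
    if plane == "XY":
        return "YZ", alpha % TAU               # XY and YZ are exchanged
    if plane == "YZ":
        return "XY", alpha % TAU
    return "XZ", (math.pi / 2 - alpha) % TAU   # XZ stays XZ

def pivot_update_common_neighbour(plane, alpha):
    """Update for w in N(u) and N(v), which receives a green pi phase."""
    if plane == "XY":
        return "XY", (alpha + math.pi) % TAU
    if plane == "YZ":
        return "YZ", (-alpha) % TAU
    return "XZ", (-alpha) % TAU
```

In particular, no vertex changes plane except the two endpoints, matching Corollary~\ref{cor:pivot_gflow}.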
\subsection{Removing Clifford vertices}\label{sec:removecliffordqubits}
In this section, we show that if a qubit is measured in one of the Pauli bases, i.e.\ at an angle which is an integer multiple of $\frac\pi2$, it can be removed from a pattern while preserving the semantics as well as the property of having gflow.
\begin{definition}\label{dfn:internal-boundary-Clifford}
Let $D$ be a \zxdiagram in MBQC+LC form, with underlying labelled open graph\ $(G,I,O,\lambda)$. Let $\alpha:\comp{O}\to [0,2\pi)$ be the corresponding set of measurement angles. We say a measured vertex $u\in G$ is \emph{Clifford} when $\alpha(u) = k\frac\pi2$ for some $k\in\mathbb{Z}$.
\end{definition}
Our goal will be to remove as many internal Clifford vertices as possible.
We make a key observation for our simplification scheme: a \normalfont YZ\xspace-plane measurement with a $0$ or $\pi$ phase can be removed from the pattern by modifying its neighbours in a straightforward manner.
\begin{lemma}\label{lem:ZX-remove-YZ-Pauli}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$.
Suppose $u\in V$ is a non-input vertex measured in the \normalfont YZ\xspace or \normalfont XZ\xspace plane with an angle of $a\pi$ where $a=0$ or $a=1$. Then there is an equivalent diagram $D'$ with vertices $V\setminus \{u\}$. If $D$ has gflow, then $D'$ does as well.
\end{lemma}
\begin{proof}
Using the axioms of the ZX-calculus, it is straightforward to show that:
\ctikzfig{remove-YZ-measurement}
These $a\pi$ phase shifts on the right-hand side can be absorbed into the measurement effects of the neighbouring vertices (or, for output vertices, considered as a local Clifford operator). Absorbing an $a\pi$ phase shift into a measurement effect does not change the measurement plane, only the angle. The resulting diagram $D'$ is then also in MBQC+LC form. Since $G(D')$ is simply $G(D)$ with a \normalfont YZ\xspace or \normalfont XZ\xspace plane vertex removed, $D'$ has gflow if $D$ had gflow by Lemma~\ref{lem:deletepreservegflow}.
\end{proof}
We can combine this observation with local complementation and pivoting to remove vertices measured in other planes or at other angles.
\begin{lemma}\label{lem:lc-simp}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$. Suppose $u\in V$ is a non-input vertex measured in the \normalfont YZ\xspace or \normalfont XY\xspace plane with an angle of $\pm\frac\pi2$. Then there is an equivalent diagram $D'$ with vertices $V\setminus \{u\}$. If $D$ has gflow, then $D'$ does as well.
\end{lemma}
\begin{proof}
We apply a local complementation about $u$ using Lemma~\ref{lem:ZX-lcomp} and reduce the diagram to MBQC+LC form with Lemma~\ref{lem:lc-MBQC-form-non-input}. By these lemmas, if the original diagram had gflow, this new diagram will also have gflow.
As can be seen from Lemma~\ref{lem:lc-MBQC-form-non-input}, if $u$ was in the \normalfont XY\xspace plane, then it will be transformed to the \normalfont XZ\xspace plane and will have a measurement angle of $\frac\pi2 \mp\frac\pi2$. As a result its measurement angle is of the form $a\pi$ for $a\in\{0,1\}$.
If instead it was in the \normalfont YZ\xspace plane, then it stays in the \normalfont YZ\xspace plane, but its angle is transformed to $\frac\pi2 \pm\frac\pi2$ in which case it will also be of the form $a\pi$ for $a\in\{0,1\}$.
In both cases we can remove the vertex $u$ using Lemma~\ref{lem:ZX-remove-YZ-Pauli} while preserving semantics and the property of having gflow.
\end{proof}
\begin{lemma}\label{lem:pivot-simp}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$, and let $u,v \in V$ be two non-input measured vertices which are neighbours.
Suppose that either $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$ with $\alpha(u) = a\pi$ for $a\in \{0,1\}$ or $\lambda(u) = \normalfont\normalfont\textrm{XZ}\xspace$ with $\alpha(u) = (-1)^a\frac\pi2$.
Then there is an equivalent diagram $D'$ with vertices $V\setminus \{u\}$. Moreover, if $D$ has gflow, then $D'$ also has gflow.
\end{lemma}
\begin{proof}
We apply a pivot to $uv$ using Lemma~\ref{lem:ZX-pivot} and reduce the diagram to MBQC+LC form with Lemma~\ref{lem:pivot-MBQC-form-non-input}. If the original diagram had gflow, this new diagram will also have gflow.
As can be seen from Lemma~\ref{lem:pivot-MBQC-form-non-input}, if $\lambda(u) = \ensuremath\normalfont\textrm{XY}\xspace$ then $\lambda'(u) = \normalfont\normalfont\textrm{YZ}\xspace$ with $\alpha'(u)=\alpha(u) = a\pi$. If instead we had $\lambda(u) = \normalfont\normalfont\textrm{XZ}\xspace$ (and thus $\alpha(u) = (-1)^a\frac\pi2$), then $\lambda'(u) = \normalfont\normalfont\textrm{XZ}\xspace$, but $\alpha'(u) = \frac\pi2 - \alpha(u) = \frac\pi2 - (-1)^a \frac\pi2 = a\pi$. In both cases, using Lemma~\ref{lem:ZX-remove-YZ-Pauli}, $u$ can be removed while preserving semantics and the existence of gflow.
\end{proof}
\begin{remark}
The `graph-like' \zxdiagrams studied in Ref.~\cite{cliff-simp} are MBQC+LC form diagrams where every vertex is measured in the \normalfont XY\xspace plane.
Our Lemmas~\ref{lem:lc-simp} and \ref{lem:pivot-simp} are generalisations of the work found in Ref.~\cite[Lemmas~5.2 and 5.3]{cliff-simp} and Ref.~\cite[(P2) and (P3)]{tcountpreprint}.
\end{remark}
Combining the three previous lemmas we can remove most non-input Clifford vertices. The exceptions are some non-input Clifford vertices which are only connected to input and output vertices. While it might not always be possible to remove such vertices, when the diagram has a gflow, we can find an equivalent smaller diagram:
\begin{lemma}\label{lem:removeboundaryPauli}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$ that has a gflow. Let $u$ be a non-input measured vertex that is only connected to input and output vertices. Suppose that either $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$ with $\alpha(u) = a\pi$ for $a\in \{0,1\}$ or $\lambda(u) = \normalfont\normalfont\textrm{XZ}\xspace$ with $\alpha(u) = (-1)^a\frac\pi2$. Then there exists an equivalent diagram $D'$ which has gflow and has vertices $V\backslash\{u\}$.
\end{lemma}
\begin{proof}
We prove the result for $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$ and $\alpha(u) = a\pi$. The other case is similar.
We claim that $u$ is connected to at least one output that is not also an input. To obtain a contradiction, suppose otherwise. The diagram then looks as follows:
\ctikzfig{ZX-Pauli-projector}
Here `LC' indicates that there are local Clifford operators on the inputs.
Since $D$ has gflow, the entire diagram must be (proportional to) an isometry, and hence it must still be an isometry if we remove the local Cliffords on the inputs. But we then have the map
\ctikzfig{ZX-Pauli-projector2}
as the first operation in the diagram. This map is a projector, and it is not invertible. This is a contradiction, as the entire diagram cannot then be an isometry.
Therefore, $u$ must be connected to some output vertex $v$, which is not an input. We can thus perform a pivot about $uv$. This adds a Hadamard operator after $v$, and changes the label of $u$ to \normalfont YZ\xspace. We can then remove $u$ using Lemma~\ref{lem:ZX-remove-YZ-Pauli}. As all these operations preserve gflow, the resulting diagram still has gflow.
\end{proof}
The following result about removing Pauli measurements (i.e.\ Clifford vertices) from patterns while preserving semantics is already contained in Ref.~\cite[Section III.A]{hein2004multiparty} (albeit outside the context of MBQC), and is also mentioned in Ref.~\cite{houshmand2018minimal}.
Nevertheless, we are the first to explicitly state the effects of this process on the measurement pattern and the gflow.
\begin{theorem}\label{thm:simplifiedZXdiagram}
Let $D$ be a ZX-diagram in MBQC+LC form that has gflow. Then we can find an equivalent ZX-diagram $D'$ in MBQC+LC form, which also has gflow and which contains no non-input Clifford vertices.
The algorithm uses a number of graph operations that is polynomial in the number of vertices of $D$.
\end{theorem}
\begin{proof}
Starting with $D$ we simplify the diagram step by step using the following algorithm:
\begin{enumerate}
\item Using Lemma~\ref{lem:lc-simp} repeatedly, remove any non-input \normalfont YZ\xspace or \normalfont XY\xspace measured vertex with a $\pm \frac\pi2$ phase.
\item Using Lemma~\ref{lem:ZX-remove-YZ-Pauli} repeatedly, remove any non-input vertex measured in the \normalfont YZ\xspace or \normalfont XZ\xspace plane with angle $a\pi$.
\item Using Lemma~\ref{lem:pivot-simp} repeatedly, remove any \normalfont XY\xspace vertex with an $a\pi$ phase and any \normalfont XZ\xspace vertex with a $\pm \frac\pi2$ phase that are connected to any other internal vertex. If any have been removed, go back to step 1.
\item If there are non-input measured Clifford vertices that are only connected to boundary vertices, use Lemma~\ref{lem:removeboundaryPauli} to remove them. Then go back to step 1. Otherwise we are done.
\end{enumerate}
By construction there are no internal Clifford vertices left at the end. Every step preserves gflow, so the resulting diagram still has gflow.
As every step removes a vertex, this process terminates in at most $n$ steps, where $n$ is the number of vertices in $D$. Each of the steps possibly requires doing a pivot or local complementation requiring $O(n^2)$ elementary graph operations. Hence, the algorithm takes at most $O(n^3)$ elementary graph operations.
\end{proof}
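The case analysis of the four steps above can be summarised as a dispatch table. The sketch below (our own names; angles in radians, assumed Clifford) classifies which lemma the algorithm would invoke for a given non-input Clifford vertex; it is an illustration of the control flow, not an implementation of the rewrites themselves.

```python
import math

def removal_rule(plane, alpha, only_boundary_neighbours=False):
    """Which lemma the algorithm above uses to delete a non-input
    Clifford vertex with measurement plane `plane` and angle `alpha`."""
    k = round(alpha / (math.pi / 2)) % 4   # Clifford angle as multiple of pi/2
    half = k in (1, 3)                     # angle is +-pi/2
    if plane in ("YZ", "XY") and half:
        return "local-complement"          # step 1 (Lemma lc-simp)
    if plane in ("YZ", "XZ") and not half:
        return "remove-directly"           # step 2 (Lemma ZX-remove-YZ-Pauli)
    # remaining cases: XY with a*pi, or XZ with +-pi/2
    if only_boundary_neighbours:
        return "pivot-boundary"            # step 4 (Lemma removeboundaryPauli)
    return "pivot"                         # step 3 (Lemma pivot-simp)
```

Every Clifford plane/angle combination falls into exactly one of the four branches, which is why the algorithm leaves no non-input Clifford vertices behind.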
\begin{theorem}\label{thm:simplifiedMBQCpattern}
Suppose $(G,I,O,\lambda,\alpha)$ is a uniformly deterministic MBQC pattern representing a unitary operation.
Assume the pattern involves $q$ inputs and $n$ qubits measured at non-Clifford angles, i.e.\ $q:=\abs{I}$ and $n := \abs{\{u\in\comp{O} \mid \alpha(u)\neq k\frac\pi2 \text{ for any } k \in \mathbb{Z}\}}$.
Then we can find a uniformly deterministic MBQC pattern that implements the same unitary and uses at most $(n+8q)$ qubits.
This process finishes in a number of steps that is polynomial in the number of vertices of $G$.
\end{theorem}
\begin{proof}
Let $D$ be the ZX-diagram in MBQC form from Lemma~\ref{lem:zx-equals-linear-map} that implements the same unitary as the MBQC pattern $\pat:=(G,I,O,\lambda,\alpha)$.
Since $\pat$ is uniformly deterministic, it has gflow, and hence $D$ also has gflow by Definition~\ref{dfn:zx-gflow}.
Let $D'$ be the ZX-diagram in MBQC+LC form produced by Theorem~\ref{thm:simplifiedZXdiagram}.
Since $D'$ has no internal Clifford vertices, its MBQC-form part can have at most $n$ internal vertices.
It may still have boundary Clifford vertices, and by unitarity $\abs{O}=\abs{I}=q$, so the MBQC-form part contains at most $(n+2q)$ vertices.
Denote by $D''$ the MBQC-form diagram produced by applying Lemma~\ref{lem:SQU-to-MBQC-form} to $D'$.
Then $D''$ has at most $((n+2q)+6q)$ vertices in its MBQC-form part.
We can construct a new pattern $\pat'$ from $D''$ using Lemma~\ref{lem:zx-to-pattern}.
As $D''$ has gflow, $\pat'$ also has gflow, and hence is uniformly deterministic.
The new pattern $\pat'$ involves at most $(n+8q)$ qubits.
For the complexity of these operations:
\begin{itemize}
\item Constructing $D$ from $\pat$ takes $O(\abs{G})$ operations
\item Constructing $D'$ from $D$ takes $O(\abs{G}^3)$ operations
\item Constructing $D''$ from $D'$ takes $O(\abs{G})$ operations
\item Constructing $\pat'$ from $D''$ takes $O(\abs{G})$ operations
\end{itemize}
So the entire process is dominated by the construction of $D'$ and takes $O(\abs{G}^3)$ operations.
\end{proof}
\subsection{Phase-gadget form}\label{sec:phasegadgetform}
Using the local complementation and pivoting rules of Section~\ref{sec:pattern-local-complementation}, we can transform the geometry of MBQC+LC form diagrams so that they no longer contain any vertices measured in the \normalfont XZ\xspace plane, and no two vertices measured in the \normalfont YZ\xspace plane are adjacent.
\begin{definition}\label{def:phasegadgetform}
An MBQC+LC diagram is in \emph{phase-gadget form} if
\begin{itemize}
\item there does not exist any $v\in\comp{O}$ such that $\lambda(v) = \normalfont\normalfont\textrm{XZ}\xspace$, and
\item there does not exist any pair of neighbours $v,w\in\comp{O}$ such that $\lambda(v)=\lambda(w)=\normalfont\normalfont\textrm{YZ}\xspace$.
\end{itemize}
\end{definition}
The name `phase gadget' refers to a particular configuration of spiders in the \zxcalculus, which, in our setting, corresponds to spiders measured in the \normalfont YZ\xspace plane. Phase gadgets have been used particularly in the study of circuit optimisation~\cite{phaseGadgetSynth,tcountpreprint,pi4parity}.
In Section~\ref{sec:further-opt} we introduce another form,
called \emph{reduced} (Definition~\ref{def:reduced-form}),
which requires the pattern to be in phase-gadget form.
\begin{example}
The following MBQC+LC diagram is in phase-gadget form.
\ctikzfig{example-phase-gadget-form}
\end{example}
\begin{proposition}\label{prop:ZXtophasegadgetform}
Let $D$ be a ZX-diagram in MBQC+LC form with gflow.
Then we can find an equivalent ZX-diagram $D'$ in MBQC+LC form that has gflow and is in phase-gadget form.
This process takes a number of steps that is polynomial in the number of vertices of $D$.
\end{proposition}
\begin{proof}
Set $D_0:=D$ and iteratively construct the diagram $D_{k+1}$ based on $D_k$.
\begin{itemize}
\item
If the diagram $D_k$ contains a pair of vertices $u \sim v$ that are both measured in the \normalfont YZ\xspace-plane:
First, note that any input vertex $w$ has $\lambda(w) = \ensuremath\normalfont\textrm{XY}\xspace$: otherwise $w \in g(w)$, contradicting the co-domain of $g$ as given in Definition~\ref{defGFlow}.
Therefore $u,v \notin I$.
Let $D_{k+1}$ be the diagram that results from pivoting about the edge $u \sim v$ (Lemma~\ref{lem:ZX-pivot})
and then transforming to MBQC+LC form (Lemma~\ref{lem:pivot-MBQC-form-non-input}).
This changes the measurement plane for $u$ and $v$ from \normalfont YZ\xspace to \normalfont XY\xspace
and it does not affect the measurement planes for any other vertices:
\ctikzfig{rm-adj-red}
\item
If there is no such connected pair but there is some vertex $u$ that is measured in the \normalfont XZ\xspace-plane:
Note that $u$ cannot be an input vertex by the same reasoning as in the first subcase.
Let $D_{k+1}$ be the diagram that results from applying a local complementation
on $u$ and transforming back to MBQC+LC form (Lemmas~\ref{lem:ZX-lcomp} and \ref{lem:lc-MBQC-form-non-input}).
\ctikzfig{rm-adj-red2}
As can be seen from Lemma~\ref{lem:lc-MBQC-form-non-input},
this process changes the measurement plane of $u$ from \normalfont XZ\xspace to \normalfont YZ\xspace
and it does not affect the labels of any vertices that are measured in the \normalfont XY\xspace-plane.
\item
If there is no such connected pair, nor any vertex that is measured in the \normalfont XZ\xspace-plane
then $D_k$ is already in the desired form, so halt.
\end{itemize}
The number of vertices not measured in the \normalfont XY\xspace-plane decreases with each step,
and no vertices are added, so this process terminates in at most $n$ steps, where $n$ is the number of vertices in $D$.
Each step requires checking every pair of vertices,
or performing a local complementation,
each of which has complexity $O(n^2)$, so the total complexity is $O(n^3)$.
Since a pivot is just a sequence of local complementations,
$D_{k+1}$ has gflow if $D_k$ had gflow
(Proposition~\ref{prop:MBQC-lc-MBQC}).
Finally every step preserves equivalence, so $D_{k+1}$ is equivalent to $D_k$.
\end{proof}
Proposition~\ref{prop:ZXtophasegadgetform} finds a phase-gadget form for an MBQC+LC diagram, but note that the phase-gadget form is not guaranteed to be unique.
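The rewrite loop in the proof of Proposition~\ref{prop:ZXtophasegadgetform} can be sketched in code. The sketch below is illustrative only: it tracks just the graph and the measurement planes, applying only the label changes stated in the proof (a pivot on a \normalfont YZ\xspace--\normalfont YZ\xspace edge relabels both vertices \normalfont XY\xspace; a local complementation on an \normalfont XZ\xspace vertex relabels it \normalfont YZ\xspace), and it ignores phases, inputs/outputs, local Cliffords, and any label changes of other non-\normalfont XY\xspace neighbours.

```python
def lcomp(edges, u):
    """Local complementation about u: toggle all edges among N(u)."""
    nbrs = sorted({w for e in edges if u in e for w in e} - {u})
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            edges ^= {frozenset((nbrs[i], nbrs[j]))}
    return edges

def to_phase_gadget_form(edges, plane):
    """Relabel planes until no XZ vertex and no YZ-YZ edge remains."""
    edges, plane = set(edges), dict(plane)
    while True:
        pair = next((tuple(e) for e in edges
                     if all(plane[w] == "YZ" for w in e)), None)
        if pair:
            u, v = pair
            for w in (u, v, u):         # a pivot is three local complementations
                edges = lcomp(edges, w)
            plane[u] = plane[v] = "XY"  # both YZ labels become XY
        elif any(p == "XZ" for p in plane.values()):
            u = next(w for w, p in plane.items() if p == "XZ")
            edges = lcomp(edges, u)
            plane[u] = "YZ"             # the XZ label becomes YZ
        else:
            return edges, plane
```

After the loop, no vertex carries an \normalfont XZ\xspace label and the \normalfont YZ\xspace vertices form an independent set, matching Definition~\ref{def:phasegadgetform}.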
\subsection{Further pattern optimisation}\label{sec:further-opt}
In Section~\ref{sec:removecliffordqubits} we saw that we can remove all non-input Clifford qubits from a pattern while preserving both determinism and the computation the pattern implements.
We will show in this section that it is also possible to remove certain qubits measured in non-Clifford angles.
These measurement-pattern rewrite rules, viewed as transformations of ZX-diagrams, were used in Ref.~\cite{tcountpreprint} to reduce the T-count of circuits. We will see how, in our context, they can be used to remove a qubit from a pattern, again while preserving determinism.
First of all, any internal \normalfont YZ\xspace vertex with just one neighbour can be fused with this neighbour, resulting in the removal of the \normalfont YZ\xspace vertex:
\begin{lemma}\label{lem:removeidvertex}
Let $D$ be an MBQC+LC diagram with an interior vertex $u$ measured in the \normalfont YZ\xspace plane, and suppose it has a single neighbour $v$, which is measured in the \normalfont XY\xspace plane. Then there is an equivalent MBQC+LC diagram $D'$ with $G(D') = G(D)\setminus \{u\}$. If $D$ has gflow, then $D'$ also has gflow.
\end{lemma}
\begin{proof}
We apply the following rewrite:
\ctikzfig{id-simp-1}
The resulting diagram is again an MBQC+LC diagram. The change to the labelled open graph\ comes down to deleting a YZ vertex. By Lemma~\ref{lem:deletepreservegflow} this preserves gflow.
\end{proof}
Note that, by Proposition~\ref{prop:XY-neighbours}, if the diagram has gflow and equal numbers of inputs and outputs, then it has no internal \normalfont XY\xspace vertices with just one neighbour.
Thus, if the diagram is in phase-gadget form (cf.\ Definition~\ref{def:phasegadgetform}), the above lemma allows us to remove all internal vertices which have a single neighbour.
Our second rewrite rule allows us to also `fuse' \normalfont YZ\xspace vertices that have the same set of neighbours:
\begin{lemma}\label{lem:removepairedgadgets}
Let $D$ be an MBQC+LC diagram with two distinct interior vertices $u$ and $v$, both measured in the YZ plane and with $N(u) = N(v)$. Then there is an equivalent diagram $D'$ with $G(D') = G(D)\setminus\{u\}$. If $D$ has gflow, then $D'$ also has gflow.
\end{lemma}
\begin{proof}
We apply the following rewrite:
\ctikzfig{gadget-simp}
A straightforward sequence of \zxcalculus transformations shows this rewrite preserves semantics:
\begin{equation*}
\scalebox{0.9}{\tikzfig{gf-proof}}
\end{equation*}
The new diagram is still an MBQC+LC diagram, and the only change in the underlying labelled open graph\ is the deletion of a \normalfont YZ\xspace vertex. Hence, by Lemma~\ref{lem:deletepreservegflow}, this rewrite preserves gflow.
\end{proof}
The analogous result is not true for a pair of \normalfont XY\xspace vertices. However, when the diagram has gflow, such pairs cannot exist to begin with:
\begin{lemma}\label{lem:nopairedXYvertices}
Let $G$ be a labelled open graph{} with gflow and distinct interior vertices $u$ and $v$ both measured in the \normalfont XY\xspace plane. Then $N(u) \neq N(v)$.
\end{lemma}
\begin{proof}
Assume for a contradiction that $N(u)=N(v)$ and that the diagram has gflow. Note that, for any subset of vertices $S$, we have $u\in \odd{}{S} \iff v\in \odd{}{S}$. In particular, as $u\in \odd{}{g(u)}$ by \ref{it:XY}, we have $v\in \odd{}{g(u)}$ and thus $u\prec v$ by \ref{it:odd}. Yet, swapping $u$ and $v$ in the above argument, we also find $v\prec u$, a contradiction.
Thus, if the diagram has gflow, distinct vertices $u$ and $v$ must have distinct neighbourhoods $N(u) \neq N(v)$.\end{proof}
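As an illustration, the two fusion rules of Lemmas~\ref{lem:removeidvertex} and~\ref{lem:removepairedgadgets} can be sketched as plain graph operations. The sketch below is a simplification: it represents the labelled open graph\ as adjacency sets, assumes the gadget phases simply add under fusion (as in the rewrites above), and does not track inputs, outputs, or the local Clifford layer.

```python
from math import pi

def delete_vertex(u, nbrs, plane, phase):
    """Remove u from the graph and from all adjacency sets."""
    for w in nbrs[u]:
        nbrs[w].discard(u)
    del nbrs[u], plane[u], phase[u]

def reduce_yz_gadgets(nbrs, plane, phase):
    """Apply the two YZ-fusion rules until neither matches."""
    changed = True
    while changed:
        changed = False
        for u in [v for v in nbrs if plane[v] == "YZ"]:
            if len(nbrs[u]) == 1:
                (v,) = nbrs[u]
                if plane[v] == "XY":          # Lemma removeidvertex
                    phase[v] = (phase[v] + phase[u]) % (2 * pi)
                    delete_vertex(u, nbrs, plane, phase)
                    changed = True
                    break
            twin = next((v for v in nbrs if v != u
                         and plane[v] == "YZ" and nbrs[v] == nbrs[u]), None)
            if twin is not None:              # Lemma removepairedgadgets
                phase[twin] = (phase[twin] + phase[u]) % (2 * pi)
                delete_vertex(u, nbrs, plane, phase)
                changed = True
                break
```

Each successful match deletes one \normalfont YZ\xspace vertex, mirroring the `deleting a \normalfont YZ\xspace vertex' step in the proofs above.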
We can now combine these rewrite rules with our previous results to get a more powerful rewrite strategy:
\begin{definition} \label{def:reduced-form}
Let $D$ be an MBQC+LC diagram. We say $D$ is \emph{reduced} when:
\begin{itemize}
\item It is in phase-gadget form (see Definition~\ref{def:phasegadgetform}).
\item It has no internal Clifford vertices.
\item Every internal vertex has more than one neighbour.
\item If two distinct vertices are measured in the same plane, they have different sets of neighbours.
\end{itemize}
\end{definition}
\begin{theorem}\label{thm:optimisation}
Let $D$ be an MBQC+LC diagram with gflow and equal numbers of inputs and outputs.
Then we can find an equivalent diagram $D'$ that is reduced and has gflow.
This process finishes in a number of steps that is polynomial in the number of vertices of $D$.
\end{theorem}
\begin{proof}
Starting with $D$, we simplify the diagram step by step with the following algorithm:
\begin{enumerate}
\item Apply Theorem~\ref{thm:simplifiedZXdiagram} to remove all interior Clifford vertices.
\item Apply Proposition~\ref{prop:ZXtophasegadgetform} to bring the diagram into phase-gadget form. Then every vertex is of type \normalfont YZ\xspace or \normalfont XY\xspace, and the \normalfont YZ\xspace vertices are only connected to \normalfont XY\xspace vertices.
\item Apply Lemma~\ref{lem:removeidvertex} to remove any \normalfont YZ\xspace vertex that has a single neighbour.
\item Apply Lemma~\ref{lem:removepairedgadgets} to merge any pair of \normalfont YZ\xspace vertices that have the same set of neighbours.
\item If the application of these lemmas resulted in any new internal Clifford vertices, go back to step 1. Otherwise we are done.
\end{enumerate}
Each of the steps preserves gflow, and hence at every stage of the algorithm the diagram has gflow.
By construction, once the algorithm terminates,
every vertex is of type \normalfont YZ\xspace or \normalfont XY\xspace, and \normalfont YZ\xspace vertices are only connected to \normalfont XY\xspace vertices.
Furthermore, every \normalfont YZ\xspace vertex must have more than one neighbour and have a different set of neighbours than any other \normalfont YZ\xspace vertex.
This is also true for the \normalfont XY\xspace vertices by the existence of gflow and the requirement that the number of inputs match the number of outputs (using Lemma~\ref{lem:nopairedXYvertices} and Proposition~\ref{prop:XY-neighbours}).
Hence, the resulting diagram has all the properties needed for it to be reduced.
To show this process terminates consider the lexicographic order:
\begin{itemize}
\item Number of vertices in the diagram
\item Number of vertices measured in the XZ or YZ planes
\item Number of vertices measured in the XY plane
\end{itemize}
The result of each step of the algorithm on this order is:
\begin{itemize}
\item Applying Theorem~\ref{thm:simplifiedZXdiagram} reduces the number of vertices in the diagram,
while possibly increasing the number of vertices in any given plane.
\item Applying Proposition~\ref{prop:ZXtophasegadgetform} reduces the number of vertices in the XZ or YZ planes,
while possibly increasing the number of vertices in the XY plane.
\item Applying Lemmas~\ref{lem:removeidvertex} and \ref{lem:removepairedgadgets}
reduces the number of vertices in the diagram,
and the number of vertices measured in the YZ plane.
\end{itemize}
Therefore each step in the algorithm reduces our order, so the process terminates.
Writing $n$ for the number of vertices in $D$
we see that the algorithmic loop can be called at most $n$ times (since we remove vertices each iteration),
and each of the steps in the loop take at most $O(n^3)$ operations,
giving a total complexity of $O(n^4)$.
\end{proof}
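The lexicographic termination measure from the proof of Theorem~\ref{thm:optimisation} can be made concrete. A minimal sketch, tracking only the measurement-plane labels (the vertex count is the size of the label map):

```python
def measure(plane):
    """Termination measure: (#vertices, #XZ-or-YZ vertices, #XY vertices)."""
    labels = list(plane.values())
    return (len(labels),
            sum(p in ("XZ", "YZ") for p in labels),
            sum(p == "XY" for p in labels))
```

Python tuples compare lexicographically, so both relabelling a vertex from \normalfont YZ\xspace to \normalfont XY\xspace and removing a vertex strictly decrease this measure.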
\begin{remark}
The algorithm described above uses the same idea as that described in Ref.~\cite{tcountpreprint}. But while they describe the procedure in terms of modifying a graph-like \zxdiagram, we describe it for MBQC+LC diagrams, a more general class of diagrams. Furthermore, we prove that the procedure preserves the existence of gflow. The existence of gflow is used in the next section to show how to recover a circuit from an MBQC+LC diagram.
\end{remark}
\section{Circuit extraction}\label{sec:circuitextract}
In this section we will see that we can extract a circuit from a measurement pattern whose corresponding labelled open graph\ has a gflow.
The extraction algorithm modifies that of Ref.~\cite{cliff-simp} so that it can handle measurements in multiple planes (not just the \normalfont XY\xspace plane).
Instead of describing the algorithm for measurement patterns, we describe it for the more convenient form of MBQC+LC diagrams.
The general idea is that we modify the diagram vertex by vertex to bring it closer and closer to resembling a circuit. We start at the outputs of the diagram and work our way to the inputs. The gflow informs the choice of which vertex is next in line to be `extracted' (specifically, this will always be a vertex maximal in the gflow partial order).
By applying various transformations to the diagram, we change it so that the targeted vertex can easily be pulled out of the MBQC-form part of the diagram and into the circuit-like part. The remaining MBQC-form diagram is then one vertex smaller. Since all the transformations preserve gflow, we can then repeat the procedure on this smaller diagram until we are finished.
Before we explain the extraction algorithm in detail in Section~\ref{sec:generalextractalgorithm}, we state some relevant lemmas.
\begin{lemma}\label{lem:cnotgflow}
The following equation holds:
\begin{equation}
\tikzfig{cnot-pivot}
\end{equation}
where $M$ is the biadjacency matrix of the output vertices to the neighbours of $D$, and $M^\prime$ is the matrix produced from $M$ by adding row~1 to row~2, modulo~2. If the full diagram on the LHS has gflow, then so does the RHS.
\end{lemma}
\begin{proof}
The equality is proved in Ref.~\cite[Proposition~6.2]{cliff-simp}. There it is also shown that this preserves gflow when all measurements are in the \normalfont XY\xspace plane, but the same proof works when measurements in all three planes are present.
\end{proof}
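The row operation of Lemma~\ref{lem:cnotgflow} is simple to state in code. A minimal sketch over $\mathbb{Z}_2$, using the convention spelled out in Section~\ref{sec:generalextractalgorithm} (the row of the target is added to the row of the control):

```python
def apply_cnot(M, control, target):
    """Return a copy of a Z_2 biadjacency matrix with row `target` added
    to row `control`, modulo 2 -- the effect of a CNOT on two output wires."""
    M = [row[:] for row in M]
    M[control] = [(a + b) % 2 for a, b in zip(M[control], M[target])]
    return M
```

Since row addition over $\mathbb{Z}_2$ is an involution, applying the same CNOT twice restores the matrix, as expected for a self-inverse gate.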
\begin{lemma}\label{lem:remove-output-edges-preserves-gflow}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with gflow.
Let $G'$ be the graph containing the same vertices as $G$ and the same edges except those for which both endpoints are output vertices.
Formally, if $G=(V,E)$, then $G'=(V,E')$, where
\[
E' = \{v\sim w \in E\mid v\in\comp{O} \text{ or } w\in\comp{O}\}.
\]
Then $(G',I,O,\lambda)$ also has gflow.
\end{lemma}
\begin{proof}
We claim that if $(g,\prec)$ is a gflow for $G$, then it is also a gflow for $G'$. Note that $\odd{G'}{g(v)}\cap \comp{O} = \odd{G}{g(v)}\cap \comp{O}$ as the only changes to neighbourhoods are among the output vertices. It is thus easily checked that all properties of Definition~\ref{defGFlow} remain satisfied.
\end{proof}
\begin{lemma}\label{lem:output-neighbours-are-XY}
Let $(G,I,O,\lambda)$ be a labelled open graph with a gflow and the same number of inputs as outputs: $\lvert I\rvert = \lvert O\rvert$.
Let $v\in O\cap \comp{I}$ be an output which is not an input.
Suppose $v$ has a unique neighbour $u\in\comp{O}$. Then $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$.
\end{lemma}
\begin{proof}
Suppose, working towards a contradiction, that $\lambda(u) \neq \ensuremath\normalfont\textrm{XY}\xspace$.
Form the labelled open graph\ $(G',I,O,\lambda')$ by removing from $G$ all vertices $w$
such that $w \in \comp{O}$ and $\lambda(w) \neq \ensuremath\normalfont\textrm{XY}\xspace$, restricting $\lambda$ accordingly.
By Lemma~\ref{lem:gflow_drop_gadgets} the labelled open graph\ $(G',I,O,\lambda')$ also has a gflow.
Note that $G'$ does contain $v$, which is still an output vertex in $G'$, but does not contain $u$,
and hence $v$ has no neighbours in $G'$.
By Theorem~\ref{thm:mhalla2}, $G'$ has a focused gflow, and
because $G'$ has the same number of inputs as outputs, its reversed graph also has a gflow $(g,\prec)$ by Corollary~\ref{cor:reverse_unitary_gflow}.
In this reversed graph $v$ is an input and, since it is not an output, it is measured in the \normalfont XY\xspace plane.
It therefore has a correction set $g(v)$ so that $v\in \odd{}{g(v)}$.
But because $v$ has no neighbours, this is a contradiction.
We conclude that indeed $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$.
\end{proof}
\noindent For any set $A\subseteq V$, let $N_G(A) = \bigcup_{v\in A} N_G(v)$.
Recall the partition of vertices according to the partial order of the gflow into sets $V_k^\prec$, which is introduced in Definition~\ref{defVk}.
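The odd neighbourhood $\odd{}{A}$, which appears throughout the gflow conditions used below, is straightforward to compute. A minimal sketch over adjacency sets:

```python
def odd_neighbourhood(nbrs, A):
    """Odd(A): the set of vertices with an odd number of neighbours in A."""
    return {v for v in nbrs if len(nbrs[v] & A) % 2 == 1}
```

Note that membership of $v$ itself in $A$ is irrelevant, since the graphs considered have no self-loops.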
\begin{lemma}\label{lem:maxdelayednotempty}
Let $(G,I,O,\lambda)$ be a labelled open graph in phase-gadget form, which furthermore satisfies $\comp{O}\neq\emptyset$.
Suppose $(G,I,O,\lambda)$ has a gflow.
Then the maximally delayed gflow, $(g,\prec)$, constructed in Proposition~\ref{prop:focused-gflow}
exists and moreover $N_G(V_1^\prec)\cap O \neq \emptyset$, \ie the gflow has the property that,
among the non-output vertices,
there is a vertex which is maximal with respect to the gflow order and also connected to an output vertex.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:focused-gflow}, there exists a maximally delayed gflow of $(G,I,O,\lambda)$ such that
no element of a correction set (other than possibly the vertex being corrected) is measured in the \normalfont YZ\xspace plane.
Since the open graph does not consist solely of outputs, the set $V_1^\prec$ (as defined in Definition~\ref{defVk}) is non-empty, so the following arguments are non-trivial.
For any $v\in V_1^\prec$ we must have $g(v)\subseteq O\cup \{v\}$.
Now if there is a $v\in V_1^\prec$ with $\lambda(v) = \ensuremath\normalfont\textrm{XY}\xspace$, then $v\in\odd{}{g(v)}$.
There are no self-loops, hence this $v$ must be connected to at least one output, and we are done.
As the graph is in phase-gadget form, there are no vertices labelled \normalfont XZ\xspace and hence from now on assume that $\lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace$ for all $v\in V_1^\prec$.
We distinguish two cases.
\begin{enumerate}
\item If $V_2^\prec = \emptyset$, then the only non-output vertices are in $V_1^\prec$.
Now, any connected component of the graph $G$ must contain an input or an output.
The vertices in $V_1^\prec$ are all labelled \normalfont YZ\xspace and thus appear in their own correction sets; this means they cannot be inputs because inputs do not appear in correction sets.
The vertices in $V_1^\prec$ are not outputs either, so each of them must have at least one neighbour.
Yet the labelled open graph{} is in phase-gadget form.
This implies that two vertices both labelled \normalfont YZ\xspace cannot be adjacent, and all vertices in $V_1^\prec$ are labelled \normalfont YZ\xspace.
Thus any vertex $v\in V_1^\prec$ must have a neighbour in $O$, and we are done.
\item So now assume there is some vertex $w\in V_2^\prec$.
Then, regardless of $\lambda(w)$, we have $g(w) \subseteq V_1^\prec\cup O\cup\{w\}$ and $\odd{}{g(w)} \subseteq V_1^\prec\cup O\cup\{w\}$.
We distinguish three subcases according to whether one of $g(w)$ or $\odd{}{g(w)}$ intersects $V_1^\prec$.
\begin{itemize}
\item Suppose $g(w)\cap V_1^\prec = \odd{}{g(w)}\cap V_1^\prec = \emptyset$, i.e.\ $g(w) \subseteq O\cup\{w\}$ and $\odd{}{g(w)} \subseteq O\cup\{w\}$.
Let $\prec' = \prec \setminus \{(w,u): u\in V_1^\prec\}$.
Then $(g,\prec')$ is a gflow: dropping the given inequalities from the partial order does not affect the gflow properties since $u\in V_1^\prec$ implies $w\notin g(u)$ and $w\notin \odd{}{g(u)}$.
Furthermore, $(g,\prec')$ is more delayed than $(g,\prec)$ because $w$ (and potentially some of its predecessors) moves to an earlier layer, contradicting the assumption that $(g,\prec)$ is maximally delayed.
Hence this case cannot happen.
\item Suppose $g(w)\cap V_1^\prec \neq \emptyset$, then there exists a \normalfont YZ\xspace vertex in the correction set of $w$ since all elements of $V_1^\prec$ are measured in the \normalfont YZ\xspace plane.
But our gflow satisfies the properties of Proposition~\ref{prop:focused-gflow}, and hence this cannot happen.
\item Suppose $\odd{}{g(w)}\cap V_1^\prec \neq \emptyset$ and $g(w)\cap V_1^\prec = \emptyset$, then there is a $v\in V_1^\prec$ such that $v\in \odd{}{g(w)}$.
There are two further subcases.
\begin{itemize}
\item If $\lambda(w)=\ensuremath\normalfont\textrm{XY}\xspace$, we have $w\not\in g(w)$ and hence $g(w)\subseteq O$ so that there must be some $o\in O$ that is connected to $v$ and we are done.
\item Otherwise, if $\lambda(w)=\normalfont\normalfont\textrm{YZ}\xspace$, then $w\in g(w)$.
Yet both $v$ and $w$ are measured in the \normalfont YZ\xspace plane, so they are not neighbours, and hence there still must be an $o\in O$ that is connected to $v$ to have $v\in \odd{}{g(w)}$.
\end{itemize}
\end{itemize}
\end{enumerate}
Thus, the gflow $(g,\prec)$ has the desired property in all possible cases.
\end{proof}
\subsection{General description of the algorithm}\label{sec:generalextractalgorithm}
We first walk through a high-level description of how to extract a circuit from a diagram in MBQC+LC form with gflow, explaining why every step works. After that, we present a more practical algorithm in Section~\ref{s:more-practical}. As we wish the output to be a unitary circuit, we will assume that the diagram has an equal number of inputs and outputs.
The process will be to make sequential changes to the \zxdiagram that make the diagram look progressively more like a circuit. During the process, there will be a `frontier': a set of green spiders such that everything to their right looks like a circuit, while everything to their left (and including the frontier vertices themselves) is an MBQC+LC form diagram equipped with a gflow.
We will refer to the MBQC-form diagram on the left as the \emph{unextracted} part of the diagram, and to the circuit on the right as the \emph{extracted} part of the diagram.
For example:
\begin{equation}\label{ex:frontier-example}
\scalebox{1.2}{\tikzfig{frontier-example}}
\end{equation}
In this diagram, we have merged the \normalfont XY\xspace measurement effects with their respective vertices, in order to present a tidier picture.
The matrix $M$ is the biadjacency matrix between the vertices on the frontier and all their neighbours to the left of the frontier.
For the purposes of the algorithm below, we consider the extracted circuit as no longer being part of the diagram, and hence the frontier vertices are the outputs of the labelled open graph\ of the unextracted diagram.
\textbf{Step 0}: First, we transform the pattern into phase-gadget form using Proposition~\ref{prop:ZXtophasegadgetform}, ensuring that all vertices are measured in the \normalfont XY\xspace or \normalfont YZ\xspace planes, and that vertices measured in the \normalfont YZ\xspace plane are only connected to vertices measured in the \normalfont XY\xspace plane.
This can be done in polynomial time, and preserves the interpretation of the diagram. Furthermore, the resulting diagram still has gflow.
\textbf{Step 1}: We unfuse any connection between the frontier vertices as a CZ gate into the extracted circuit, and we consider any local Clifford operator on the frontier vertices as part of the extracted circuit. For example:
\[\scalebox{1.2}{\tikzfig{example-unfuse-gates}}\]
This process changes the unextracted diagram in two ways: by removing local Clifford operators and by removing connections among the frontier vertices.
The former does not affect the underlying labelled open graph{} and the latter preserves gflow by Lemma~\ref{lem:remove-output-edges-preserves-gflow}.
Thus, the unextracted diagram continues to be in MBQC form and it continues to have gflow.
If the only unextracted vertices are on the frontier, go to step~5, otherwise continue to step~2.
\textbf{Step 2}: The unextracted diagram is in phase-gadget form and has gflow.
Thus, by Lemma~\ref{lem:maxdelayednotempty}, it has a maximally delayed gflow $(g,\prec)$ such that $N_G(V_1^\prec)\cap O \neq \emptyset$, where $V_1^\prec$ is the `most delayed' layer before the frontier vertices, which are the outputs of the labelled open graph\ (see Definition~\ref{defVk}).
Such a gflow can be efficiently determined by first finding any maximally delayed gflow using the algorithm of Theorem~\ref{thmGFlowAlgo} and then following the procedure outlined in Proposition~\ref{prop:focused-gflow}.
Now, if any of the vertices in $V_1^\prec$ are labelled \normalfont XY\xspace, pick one of these vertices and go to step~3. Otherwise, all the maximal non-output vertices (with respect to $\prec$) must have label \normalfont YZ\xspace; go to step~4.
\textbf{Step 3}: We have a maximal non-output vertex $v$ labelled \normalfont XY\xspace, which we want to extract. Since it is maximal in $\prec$, we know that $g(v)\subseteq O$ by Definition~\ref{defVk}.
As the gflow is maximally delayed, we have $\odd{}{g(v)}\cap \comp O = \{v\}$.
We now follow the strategy used in Ref.~\cite{cliff-simp} for the `\normalfont XY\xspace-plane only' case, illustrating it with an example.
Consider the following diagram, in which the vertex $v$ and its correction set $g(v)$ are indicated:
\begin{equation}\label{eq:example-extracted-vertex}
\scalebox{1.2}{\tikzfig{example-extracted-vertex}}
\end{equation}
For clarity, we are ignoring the measurement effects on the left-hand-side spiders, and we are not showing any frontier vertices that are inputs (although note that by definition of a gflow, the vertices of $g(v)$ cannot be inputs).
In the above example, the biadjacency matrix of the bipartite graph between the vertices of $g(v)$ on the one hand, and their neighbours in the unextracted part on the other hand, is
\begin{equation}\label{eq:biadjacency-example}
\tikzfig{example-matrix}
\end{equation}
where the rows correspond to vertices of $g(v)$, and vertices are ordered top-to-bottom. We do not include the bottom-most frontier vertex in the biadjacency matrix, as it is not part of $g(v)$, and we do not include the bottom left spider, as it is not connected to any vertex in $g(v)$.
The property that $\odd{}{g(v)}\cap \comp O = \{v\}$ now corresponds precisely to the following property of the matrix: if we sum up all the rows of this biadjacency matrix modulo 2, the resulting row vector contains a single 1 corresponding to the vertex $v$ and zeroes everywhere else.
It is straightforward to see that this is indeed the case for the matrix of Eq.~\eqref{eq:biadjacency-example}.
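This matrix condition can be checked mechanically. A minimal sketch; the matrix used in the test below is illustrative and not read off from the figure:

```python
def rows_sum_to_unit(M, col_v):
    """Check the extraction condition in matrix form: the mod-2 sum of all
    rows of the biadjacency matrix of g(v) must be a unit vector whose
    single 1 sits in the column of v."""
    s = [sum(col) % 2 for col in zip(*M)]
    return sum(s) == 1 and s[col_v] == 1
```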
Now pick any frontier vertex $w\in g(v)$.
Lemma~\ref{lem:cnotgflow} shows that the application of a CNOT to two outputs corresponds to a row operation on the biadjacency matrix, which adds the row corresponding to the target to the row corresponding to the control. Hence if, for each $w'\in g(v)\setminus\{w\}$, we apply a CNOT with control and target on the output wires of $w$ and $w'$, the effect on the biadjacency matrix is to add all the other rows of the vertices of $g(v)$ to that of $w$:
\[\scalebox{1.15}{\tikzfig{example-extracted-vertex-cnots}}\]
As a result, $w$ is now only connected to $v$, but $v$ may still be connected to other vertices in $O\setminus g(v)$. For each such vertex $u$, applying a CNOT with control $u$ and target $w$ removes the connection between $u$ and $v$:
\[\scalebox{1.15}{\tikzfig{example-extracted-vertex-cnots2}}\]
Now we can move $v$ to the frontier by removing $w$ from the diagram, adding a Hadamard to the circuit (this comes from the Hadamard edge between $v$ and $w$), adding the measurement angle of $v$ to the circuit as a Z-phase gate, and adding $v$ to the set of outputs of the graph (i.e.\ the frontier):
\begin{equation}\label{eq:extract-vertex}
\scalebox{1.15}{\tikzfig{extract-vertex}}
\end{equation}
On the underlying labelled open graph\ this corresponds to removing $w$ and adding $v$ to the list of outputs. We need to check that this preserves the existence of a gflow. The only change we need to make is that for all $v'\neq v$ with $w\in g(v')$ we set $g'(v') = g(v')\setminus\{w\}$. As $w$'s only neighbour is $v$, removing $w$ from $g(v')$ only toggles whether $v\in\odd{}{g'(v')}$. Since $v$ is part of the outputs in the new labelled open graph, this preserves all the properties of being a gflow.
Now that the vertex $w$ has been removed, the number of vertices in the unextracted part of the diagram is reduced by 1. We now go back to step 1.
\textbf{Step 4}: All the maximal vertices are labelled \normalfont YZ\xspace. Since we chose our gflow according to Lemma~\ref{lem:maxdelayednotempty}, we know that at least one of these vertices is connected to an output, and hence a frontier vertex. Pick such a vertex $v$, and pick a $w\in O\cap N_G(v)$ (this set is non-empty). Pivot about $vw$ using Lemma~\ref{lem:ZX-pivot} and reduce the resulting diagram to MBQC form with Lemma~\ref{lem:pivot-MBQC-form-non-input}. Afterwards, $v$ has label \normalfont XY\xspace and $w$ has a new Hadamard gate on its output wire (which will be dealt with in the next iteration of step 1).
We have changed one vertex label in the unextracted part of the diagram from \normalfont YZ\xspace to \normalfont XY\xspace. Since no step introduces new \normalfont YZ\xspace vertices, step~4 can only happen as many times as there are \normalfont YZ\xspace vertices at the beginning. Go back to step 1.
\textbf{Step 5:} At this point, there are no unextracted vertices other than the frontier vertices, all of which have arity 2 and can be removed using rule $(\bm{i1})$ of Figure~\ref{fig:zx-rules}.
Yet the remaining frontier vertices might be connected to the inputs in some permuted manner and the inputs might carry some local Cliffords:
\ctikzfig{example-permutation}
This is easily taken care of by decomposing the permutation into a series of SWAP gates, at which point the entire diagram is in circuit form.
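The decomposition of the final permutation into SWAP gates can be done greedily. A minimal sketch, where \texttt{perm[i]} names the wire that should end up at position \texttt{i} (this indexing convention is illustrative):

```python
def permutation_to_swaps(perm):
    """Greedily decompose a permutation into a list of SWAP gates."""
    perm = list(perm)
    swaps = []
    for i in range(len(perm)):
        if perm[i] != i:
            j = perm.index(i)        # locate the wire that belongs at i
            swaps.append((i, j))
            perm[i], perm[j] = perm[j], perm[i]
    return swaps
```

At most $n-1$ SWAPs are emitted for $n$ wires, since each swap fixes at least one position.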
\textbf{Correctness and efficiency:} Since step 3 removes a vertex from the unextracted diagram, and step 4 changes a measurement plane from \normalfont YZ\xspace to \normalfont XY\xspace (and no step changes measurement planes in the other direction), each iteration reduces the lexicographic order ($\#$ unextracted vertices, $\#$ unextracted vertices with $\lambda = \normalfont\normalfont\textrm{YZ}\xspace$), so the algorithm terminates. Each step described above takes a number of graph-operations polynomial in the number of unextracted vertices,
and therefore this entire algorithm takes a number of steps polynomial in the number of vertices.
All steps correspond to ZX-diagram rewrites, so the resulting diagram is a circuit that implements the same unitary as the pattern we started with.
\subsection{A more practical algorithm}\label{s:more-practical}
Now that we know the algorithm above is correct and always terminates, we can take a couple of short-cuts that will make it more efficient.
In step 2, instead of using the gflow to find a maximal vertex, we do the following: Write down the biadjacency matrix of the bipartite graph consisting of frontier vertices on one side and all their neighbours on the other side. For example, the Diagram~\eqref{eq:example-extracted-vertex} would give the matrix:
\begin{equation}\label{eq:matrix2}
\begin{pmatrix}
1&1&0&0&0\\
0&0&1&1&0\\
0&1&1&1&0\\
1&1&0&1&1
\end{pmatrix}
\end{equation}
Now perform a full Gaussian elimination on this $\mathbb{Z}_2$ matrix. In the above case, this results in the matrix:
\begin{equation}\label{eq:matrix_after_elim}
\begin{pmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&1&0&1\\
0&0&0&1&1
\end{pmatrix}
\end{equation}
Any row in this matrix containing a single 1 corresponds to an output vertex with a single neighbour. By Lemma~\ref{lem:output-neighbours-are-XY}, this neighbour is of type \normalfont XY\xspace. As an example, in the matrix in Eq.~\eqref{eq:matrix_after_elim}, the first row has a single 1 in the first column, and hence the top-left spider of Diagram~\eqref{eq:example-extracted-vertex} is the unique \normalfont XY\xspace neighbour to the first output. Similarly, the second row has a single 1, appearing in column 2, and hence the second spider from the top on the left in Diagram~\eqref{eq:example-extracted-vertex} is the unique neighbour to the second output.
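The full Gaussian elimination over $\mathbb{Z}_2$ used here can be sketched as follows; the test below reproduces exactly the reduction of the matrix in Eq.~\eqref{eq:matrix2} to that in Eq.~\eqref{eq:matrix_after_elim}.

```python
def gauss_gf2(M):
    """Bring a Z_2 matrix into reduced row echelon form by full Gaussian
    elimination (row swaps plus mod-2 row additions)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [(a + b) % 2 for a, b in zip(M[i], M[r])]
        r += 1
    return M
```

In an actual implementation the row additions performed here would be recorded and emitted as CNOT gates via Lemma~\ref{lem:cnotgflow}; rows of the result containing a single 1 then identify the extractable vertices.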
If we found at least one row with a single 1 with this method, we implement the row operations corresponding to the Gaussian elimination procedure as a set of CNOT gates using Lemma~\ref{lem:cnotgflow}. Doing this with Diagram~\eqref{eq:example-extracted-vertex} gives:
\begin{equation}
\scalebox{1.15}{\tikzfig{example-extracted-gauss}}
\end{equation}
We see that every row which had a single 1 now corresponds to a frontier spider with a single neighbour, and hence we can extract vertices using the technique of Eq.~\eqref{eq:extract-vertex}:
\begin{equation}\label{eq:example-extracted-3}
\scalebox{1.15}{\tikzfig{example-extracted-3}}
\end{equation}
As we now extract multiple vertices at a time, there could be connections between the new frontier vertices (for instance between the top two frontier spiders in Eq.~\eqref{eq:example-extracted-3}). These are taken care of in the next iteration of step 1, turning those into CZ gates.
If the Gaussian elimination does not reveal a row with a single 1, then we are in the situation of step 4. We perform pivots involving a vertex with label \normalfont YZ\xspace and an adjacent frontier vertex until there is no vertex with a label \normalfont YZ\xspace which is connected to a frontier vertex. We then go back to step 1.
With these short-cuts, it becomes clear that we do not need an explicitly calculated gflow in order to extract a circuit. The fact that there is a gflow is only used to argue that the algorithm is indeed correct and will always terminate. Pseudocode for this algorithm can be found in Appendix~\ref{sec:pseudocode}.
With the results of this section we have then established the following theorem.
\begin{theorem}\label{thm:extraction-algorithm}
Let $\pat$ be a measurement pattern with $n$ inputs and outputs containing a total of $k$ qubits, and whose corresponding labelled open graph\ has a gflow. Then there is an algorithm running in time $O(n^2k^2 + k^3)$ that converts $\pat$ into an equivalent $n$-qubit circuit that contains no ancillae.
\end{theorem}
\begin{proof}
The runtime for the extraction algorithm is dominated by Gaussian elimination of the biadjacency matrices, which has complexity $O(n^2m)$, where $n$ is the number of rows, corresponding to the number of outputs, and $m$ is the number of neighbours these output vertices are connected to. In principle $m$ could be as large as the number of vertices in the graph, \ie as large as $k$ (although in practice it will be much smaller than that).
In the worst case, performing a pivot operation also requires toggling the connectivity of almost the entire graph, which requires $k^2$ elementary graph operations.
Since we might have to apply a pivot and a Gaussian elimination process for every vertex in the graph, the complexity for the entire algorithm is bounded above by $O(k(n^2k + k^2)) = O(n^2k^2 + k^3)$.
\end{proof}
Note that if $k\geq O(n^2)$, which will be the case for most useful computation, the bound becomes $O(k^3)$. In practice however we would not expect to see this worst-case complexity as it would only be attained if everything is almost entirely fully connected all the time. This does not seem possible because the pivots and Gaussian elimination always toggle connectivity, and hence a highly connected graph in one step will become less connected in the following step.
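The $\mathbb{F}_2$ Gaussian elimination that dominates these bounds can be sketched in a few lines. The following illustrative Python (not the PyZX implementation) records each row addition, since in the extraction algorithm every such addition corresponds to a CNOT gate:

```python
def gauss_f2(m):
    """Row-reduce a binary biadjacency matrix over GF(2), in place.

    Returns the list of (source_row, target_row) additions performed;
    in circuit extraction each addition corresponds to a CNOT gate.
    """
    rows, cols = len(m), len(m[0])
    ops = []
    pivot = 0
    for col in range(cols):
        # find a row with a 1 in this column, at or below the pivot row
        hit = next((r for r in range(pivot, rows) if m[r][col]), None)
        if hit is None:
            continue
        m[pivot], m[hit] = m[hit], m[pivot]
        # clear this column in every other row by adding the pivot row mod 2
        for r in range(rows):
            if r != pivot and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[pivot])]
                ops.append((pivot, r))
        pivot += 1
        if pivot == rows:
            break
    return ops

m = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
cnots = gauss_f2(m)
# a frontier vertex can be extracted iff some reduced row has a single 1
```

After reduction, scanning for a row of Hamming weight one is what decides whether a vertex can be extracted directly or whether the algorithm must fall back to pivoting.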
\section{Conclusions and Future Work}\label{sec:conclusion}
We have given an algorithm which extracts a circuit from any measurement pattern whose underlying labelled open graph\ has extended gflow.
This is the first algorithm which works for patterns with measurements in multiple planes, and does not use ancillae.
Simultaneously, it is the most general known algorithm for extracting quantum circuits from \zxcalculus diagrams.
We have also developed a set of rewrite rules for measurement patterns containing measurements in all three planes.
For each of these rewrite rules, we have established the corresponding transformations of the extended gflow.
The rewrite rules can be used to reduce the number of qubits in a measurement pattern, in particular eliminating all qubits measured in a Pauli basis.
Additionally, we have generalised the notions of focused gflow and maximally delayed gflow to labelled open graph{}s with measurements in multiple planes, and we have described algorithms for finding such gflows.
The pattern optimization algorithm of Theorem~\ref{thm:optimisation} and the circuit extraction algorithm of Section~\ref{sec:circuitextract} have been implemented in the \zxcalculus rewriting system \emph{PyZX}\footnote{PyZX is available at \url{https://github.com/Quantomatic/pyzx}. A Jupyter notebook demonstrating the T-count optimisation is available at \url{https://github.com/Quantomatic/pyzx/blob/5d409a246857b7600cc9bb0fbc13043d54fb9449/demos/T-count\%20Benchmark.ipynb}.}~\cite{pyzx}.
The reduction in non-Clifford gates using this method matches the state-of-the-art for ancillae-free circuits~\cite{tcountpreprint} at the time of development.
Our circuit extraction procedure resynthesizes the CNOT gates in the circuit. Depending on the input circuit this can lead to drastic decreases in the 2-qubit gate count of the circuit~\cite{cliff-simp,pyzx}, but in many cases it can also lead to an \emph{increase} of the CNOT count.
Such increases are avoided in the procedure of Ref.~\cite{tcountpreprint}, where two-qubit gates are not resynthesized.
Yet re-synthesis of two-qubit gates may be necessary anyway in many applications: current and near-term quantum devices do not allow two-qubit gates to be applied to arbitrary pairs of qubits.
Thus, general circuits need to be adapted to the permitted connections; this is called routing.
Our extraction algorithm currently uses a standard process of Gaussian elimination to produce the two-qubit gates required to implement the circuit, which implicitly assumes that two-qubit gates can be applied between any pair of qubits in the circuit.
It may be useful to replace this procedure with one incorporating routing, such as the \zxcalculus-based routing approach by Kissinger and Meijer-van~de~Griend~\cite{kissinger2019cnot}.
This would allow circuits to be adapted to the desired architecture as they are being extracted.
It would also be interesting to consider whether these routing algorithms can be used more abstractly to transform general measurement patterns to more regular patterns with restricted connectivity.
\medskip
\noindent {\small \textbf{Acknowledgements.}
Many thanks to Fatimah Ahmadi for her contributions in the earlier stages of this project.
The majority of this work was developed at the Applied Category Theory summer school during the week 22--26 July 2019; we thank the organisers of this summer school for bringing us together and making this work possible.
JvdW is supported in part by AFOSR grant FA2386-18-1-4028.
HJM-B is supported by the EPSRC.
}
\bibliographystyle{plainnat}
\section{Introduction}\label{sec:intro}
In experimental quantum optics, the bosonic annihilation operator $\hat{a}$ can be realized through the process of ``single-photon subtraction'' (SPS)~\cite{wen_04,our_06}. When the input state contains a definite number of photons $\ket{n}$, this operation transforms the state as $\hat{a}\ket{n}\rightarrow\sqrt{n}\ket{n-1}$ in the usual way, corresponding to the simple removal of one photon from the state. However, when the input is a superposition of different number states, the SPS process can lead to counterintuitive results~\cite{ueda_90,miz_02}. For example, consider the input state $\ket{\psi}_{in}=\frac{1}{\sqrt{2}}\left(\ket{1}+\ket{5}\right)$, which has a mean number of photons $\langle \hat{n} \rangle = 3$ (where the number operator $\hat{n} \equiv \hat{a}^{\dagger} \hat{a}$). Applying $\hat{a}$ to this state leads to $\ket{\psi}_{out} = \frac{1}{\sqrt{6}}\left(\ket{0}+\sqrt{5}\ket{4}\right)$, which has $ \langle \hat{n} \rangle= 3.\bar{3}$. In this sense, subtracting a single photon from the state has actually {\em increased} the mean number of photons~\cite{par_07,zav_08}.
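The arithmetic behind this photon excess is easy to check numerically. A minimal sketch (plain Python, purely illustrative, representing a state by its Fock-amplitude dictionary):

```python
from math import sqrt

def mean_n(amps):
    """Mean photon number of a state given as {n: amplitude}."""
    norm = sum(abs(a) ** 2 for a in amps.values())
    return sum(n * abs(a) ** 2 for n, a in amps.items()) / norm

def annihilate(amps):
    """Apply the annihilation operator: a|n> = sqrt(n) |n-1>."""
    return {n - 1: sqrt(n) * a for n, a in amps.items() if n > 0}

psi_in = {1: 1 / sqrt(2), 5: 1 / sqrt(2)}   # (|1> + |5>)/sqrt(2)
psi_out = annihilate(psi_in)                # proportional to |0> + sqrt(5)|4>
# mean_n(psi_in) ≈ 3, while mean_n(psi_out) ≈ 10/3: SPS *raised* <n>
```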
\begin{figure}[t]
\centering
\includegraphics[scale=0.50,trim={300 160 300 140},clip]{fig1.pdf}
\caption{An implementation of ``zero-photon subtraction'' (ZPS) via conditional measurements on a beamsplitter. A superposition state $\ket{\psi}_{in}$ with expected photon number $\langle \hat{n}\rangle_{in}$ is prepared in the input mode of a beamsplitter with reflectance $R$. In contrast to single-photon subtraction (SPS), ZPS requires heralding on the detection of \textit{zero} photons in the reflected mode~\cite{nun_21}. Heralding on zero photons yields the attenuated state $\ket{\psi}_{out}$ with reduced mean photon number $\langle \hat{n} \rangle_{out}<\langle \hat{n}\rangle_{in}$. The degree of attenuation depends on both $R$ and the photon number statistics of the input state.}
\label{fig:bsinout}
\end{figure}
In a similarly counterintuitive way, subtracting zero photons from a state can actually \textit{decrease} the mean number of photons. Figure~\ref{fig:bsinout} shows an implementation of this ``zero-photon subtraction'' (ZPS) process using a beamsplitter with reflectance $R$, and conditional measurements. A pulsed input state $\ket{\psi}_{in}$ passes through the beamsplitter, and the transmitted output $\ket{\psi}_{out}$ is heralded by the successful detection of zero photons in the reflected mode. Despite no photons being physically removed from the system, ZPS results in $\langle \hat{n} \rangle_{out}<\langle \hat{n} \rangle_{in}$ for all but pure Fock states~\cite{gag_14}. Importantly, the ZPS process in Fig.~\ref{fig:bsinout} can be used to implement a probabilistic noiseless attenuation protocol that is useful for quantum communications~\cite{mic_12,ricky_17,guo_20}.
As highlighted by the structure of Fig.~\ref{fig:bsinout}, the key difference between SPS and ZPS is a heralding signal based on the detection of one vs. zero photons, respectively. While SPS has been experimentally studied extensively~\cite{bellini_10}, ZPS has only been briefly observed for super-Poissonian (i.e., thermal) states, and with fixed values of beamsplitter reflectance $R$~\cite{allevi_10,zhai_13,vid_16,bog_17,hlou_17,magl_19,kat_20}. In this paper, we systematically study ZPS for examples of super-Poissonian, sub-Poissonian, and coherent state inputs, all as a function of beamsplitter reflectance ranging from $R=0\rightarrow1$. The observed trends in attenuation demonstrate some complementary aspects of ZPS and SPS that depend on the photon number distributions, and highlight the role of losses and detector efficiency when heralding on zero photons in ZPS.
The remainder of the paper is structured as follows: in Section~\ref{sec:zps} we provide a detailed theoretical background for ZPS, and introduce an experimentally accessible parameter $K$ that can be used to quantify the degree of attenuation. In Section~\ref{sec:exp} we describe our experimental system, which uses (1) a conventional pulsed parametric down-conversion (PDC) source to produce the desired input states~\cite{gk_04}, (2) a variable evanescent-mode fiber coupler to continuously vary $R$~\cite{digshaw_82}, and (3) the ability to actively herald on zero photons using commercial single-photon detectors~\cite{nun_21}. In Section~\ref{sec:res} we analyze and discuss the experimental results, and briefly describe classical analogues that provide some additional insight into the observed attenuation effects. Finally, we summarize our study and conclude in Section~\ref{sec:con}.
\section{Zero-Photon Subtraction}\label{sec:zps}
The process of zero-photon subtraction (ZPS) illustrated in Figure~\ref{fig:bsinout} was first proposed as a method of noiseless attenuation by Mi\v{c}uda \textit{et al}.~\cite{mic_12}. This transformation can be defined by its action on Fock states $\ket{n}\rightarrow t^n\ket{n}$ with beamsplitter transmittance $T=|t|^2$. For an arbitrary input state $\hat\rho=\sum_{m,n=0}^\infty\rho_{mn}\ket{m}\bra{n}$, noiseless attenuation yields the following expected photon number in the output~\cite{gag_14}:
\begin{equation}\label{eqn:nout}
\langle \hat{n} \rangle_{out} = \frac{\sum_n n \rho_{nn}T^n}{\sum_n \rho_{nn}T^n}
\end{equation}
When $T=1$, we regain the expected photon number of the original state with no attenuation, $\langle \hat{n} \rangle_{in}=\sum_n n \rho_{nn}$. Remarkably, when $T<1$ it can be seen that $\langle \hat{n} \rangle_{out}<\langle \hat{n} \rangle_{in}$ for all but pure Fock states by differentiating Eq.~\ref{eqn:nout} with respect to $T$:
\begin{equation}\label{eqn:dndt}
\frac{d\langle \hat{n} \rangle_{out}}{dT} = \frac{1}{T}\left[\frac{\sum_n (n - \langle \hat{n} \rangle_{out})^2 \rho_{nn}T^n}{\sum_n \rho_{nn}T^n}\right] \equiv \frac{\langle (\Delta n)^2\rangle_{out}}{T} \geq 0,
\end{equation}
and seeing that $\langle \hat{n} \rangle_{out}$ increases monotonically on the interval $T\in(0,1]$. Here $\langle (\Delta n)^2\rangle_{out}$ is the photon number variance of the transformed state.
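The monotonic decrease implied by Eq.~\ref{eqn:dndt} can be verified numerically for any diagonal photon-number distribution. A minimal sketch, using a truncated thermal state as the super-Poissonian example (the closed form quoted in the comment is the standard result for thermal light and is stated here only as a cross-check):

```python
def n_out(p, T):
    """Mean photon number after ZPS, Eq. (nout), for diagonal probabilities p[n]."""
    num = sum(n * pn * T ** n for n, pn in enumerate(p))
    den = sum(pn * T ** n for n, pn in enumerate(p))
    return num / den

# thermal state with <n> = 2, truncated at n = 60 (truncation error ~1e-11)
nbar = 2.0
p = [nbar ** n / (1 + nbar) ** (n + 1) for n in range(61)]

means = [n_out(p, T) for T in (1.0, 0.5, 0.1)]
# decreases monotonically with T, from <n>_in = 2 toward 0, consistent with
# the closed form nbar*T / (1 + nbar*(1 - T)) for an attenuated thermal state
```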
Equation~\ref{eqn:dndt} is analogous to the result derived by Ueda \textit{et al}. for stationary fields~\cite{ueda_90}, and it suggests that the degree of attenuation is closely related to the photon number statistics of the input state. Experimentally, it is convenient to quantify the degree of attenuation as a function of reflectance $R$ with the following ratio:
\begin{equation}\label{eqn:kdeff}
K(R)\equiv \frac{\langle \hat{n} \rangle_{out}}{(1-R)\langle \hat{n} \rangle_{in}}
\end{equation}
The denominator $(1-R)\langle \hat{n} \rangle_{in}$ simply corresponds to ordinary attenuation by a beamsplitter, in which a fraction $T=1-R$ of the photons are transmitted on average. Thus, $K(R)$ compares the mean photon number of the heralded ZPS state $\langle \hat{n} \rangle_{out}$ to that of the ``ordinary'' output state with no conditional measurements.
The relative attenuation function $K(R)$ contains information about the photon number distribution and higher-order correlations. Most importantly, one can derive from Eqs.~\ref{eqn:nout}--\ref{eqn:kdeff} that:
\begin{equation}\label{eqn:qin}
\frac{dK}{dR}\Bigr|_{R=0}=1-\frac{\langle (\Delta n)^2\rangle_{in}}{\langle \hat{n} \rangle_{in}}\equiv -Q_{in}
\end{equation}
where $Q_{in}$ is Mandel's $Q$-parameter for the input state~\cite{man_79}.
The above result highlights an important connection between ZPS and typical SPS. In the limit of low beamsplitter reflectance $R$, SPS is equivalent to the annihilation operator $\hat{a}$~\footnote{Historically, ``photon subtraction'' refers to application of the annihilation operator, predating its realization by conditional measurements in the output of a beamsplitter (BS)~\cite{bellini_10}. However, the term often encompasses the more general BS transformations which relax the weak reflectance requirement (e.g.,~\cite{oli_03,kim_05}), which we follow here in our discussion of ZPS.}, and increases the mean photon number of some states as demonstrated in Sec.~\ref{sec:intro}. More precisely, this so-called ``photon excess'' is given exactly by the $Q$-parameter, such that $Q_{in}=\langle \hat{n}\rangle_{out}-\langle \hat{n} \rangle_{in}$~\cite{miz_02}. Thus, the mean photon number of super-Poissonian states $(Q>0)$ counterintuitively increases after performing SPS with a weakly reflecting beamsplitter. Equation~\ref{eqn:qin} links this property of SPS to the behavior of ZPS in the same regime of $R\ll 1$. For ZPS, the $Q$-parameter determines the initial slope $dK/dR$, and thus deviations from $K=1$ as $R$ increases from zero. We can therefore say super-Poissonian states exhibit a complementary ``photon deficit'' $(K<1)$ after ZPS in this regime, such that the mean photon number is reduced below that of ordinary attenuation.
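The relation of Eq.~\ref{eqn:qin} between the initial slope of $K(R)$ and Mandel's $Q$-parameter can be checked with a small finite difference. An illustrative sketch, using a thermal state with $\bar{n}=1$ (for which $Q=+1$):

```python
def K(p, R):
    """Relative attenuation of Eq. (kdeff) for diagonal probabilities p[n]."""
    T = 1 - R
    n_in = sum(n * pn for n, pn in enumerate(p))
    num = sum(n * pn * T ** n for n, pn in enumerate(p))
    den = sum(pn * T ** n for n, pn in enumerate(p))
    return (num / den) / (T * n_in)

def mandel_q(p):
    """Mandel Q-parameter: variance/mean - 1 of the photon number."""
    n1 = sum(n * pn for n, pn in enumerate(p))
    n2 = sum(n * n * pn for n, pn in enumerate(p))
    return (n2 - n1 ** 2) / n1 - 1

p = [0.5 ** (n + 1) for n in range(100)]   # thermal, <n> = 1, so Q = +1
eps = 1e-6
slope = (K(p, eps) - K(p, 0.0)) / eps
# slope ≈ -1 ≈ -mandel_q(p): a positive Q gives an immediate photon deficit
```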
In the same way that SPS has unique consequences for sub-, super- and Poissonian states~\cite{zav_08}, it is also natural to investigate these three classes of states for ZPS. Our experiment will examine the following cases: (1) coherent states $\ket{\alpha}$, which possess Poissonian statistics; (2) the single-mode squeezed vacuum state (SMSV) $\ket{\xi}$, which is super-Poissonian; and (3) a single-photon Fock state $\ket{1}$, which is sub-Poissonian. As detailed in Section~\ref{sec:exp}, the experimentally prepared single-photon state is actually a mixture that includes the vacuum term, $\hat{\rho}_1=(1-\beta)\ket{0}\bra{0}+\beta\ket{1}\bra{1}$. We can calculate the expected relative attenuation for each of the three input states:
\begin{align}
K^{(\alpha)}(R)&=1 \label{eqn:ka} \\
K^{(\xi)}(R)&\approx 1-R \label{eqn:kx} \\
K^{(\hat{\rho}_1)}(R)&=\frac{1}{1-\beta R} \label{eqn:kh}
\end{align}
where the approximation for the SMSV $\ket{\xi}$ in Eq.~\ref{eqn:kx} holds for weak squeezing.
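Equation~\ref{eqn:kh} follows in one line from Eqs.~\ref{eqn:nout} and \ref{eqn:kdeff}: for the diagonal mixture $\hat{\rho}_1$, with $T=1-R$,
\begin{align*}
\langle \hat{n} \rangle_{out} &= \frac{(1-\beta)\cdot 0 + \beta\,T}{(1-\beta) + \beta T} = \frac{\beta T}{1-\beta R},
&
K^{(\hat{\rho}_1)}(R) &= \frac{\beta T/(1-\beta R)}{T\,\beta} = \frac{1}{1-\beta R}.
\end{align*}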
\section{Experiment}\label{sec:exp}
\begin{figure*}[t]
\includegraphics[scale=0.55,trim={10 120 10 180},clip]{fig2.pdf}
\caption{Zero-photon subtraction (ZPS) experiment, in four stages. (1) \textit{Input State Preparation}-- One of three input states is generated, $\ket{\alpha}$, $\ket{\xi}$ or $\hat{\rho}_1$. Coherent states $\ket{\alpha}$ are produced directly by an ultrafast pulsed laser (100 MHz, 780 nm). These pulses also undergo second harmonic generation (SHG) to serve as 390 nm pump pulses for Type-I parametric down-conversion (PDC) using a $\beta$-barium borate (BBO) crystal. The resulting photon pairs are coupled into a Hong-Ou-Mandel (HOM) interferometer to produce either $\ket{\xi}$ or $\hat{\rho}_1$ as described in the text. (2) \textit{Attenuation}-- The input state enters a fiber-based variable beamsplitter (VBS) with reflectance $R$. (3) \textit{Measurement}-- Each VBS output is measured by single-photon detectors $D_1$ and $D_2$, with overall channel efficiencies $\eta_1$ and $\eta_2$. The heralding detector $D_1$ can be translated across the mode of interest by displacement $\Delta x$. Detection events and the $D_{ref}$ reference signal are recorded by time-to-digital converters (TDCs). (4) \textit{Post-selection}-- Time tags from all detections are used to measure $D_2$ counting rates with and without post-selecting on ``no-click'' events at $D_1$. Abbreviations: DM-- dichroic mirror used to isolate UV pump pulses; L-- various lenses; $\Delta t$-- glass wedge time delay; IF-- narrowband interference filters centered near 780 nm.}
\label{fig:exp}
\end{figure*}
The full ZPS experiment is shown in Figure~\ref{fig:exp}. As summarized in the first panel, one of each type of input state (sub-, super- and Poissonian) is prepared with a combination of standard techniques in quantum optics.
In the case of Poissonian statistics, coherent states $\ket{\alpha}$ are prepared with a mode-locked fiber laser (Menlo Systems C-Fiber 780), which generates a train of ultrashort pulses with a repetition rate of 100 MHz and a center wavelength of 780 nm. These pulses are coupled into a single-mode fiber and attenuated for use as ZPS input states.
Super-Poissonian SMSV states $\ket{\xi}$ are prepared with parametric down-conversion (PDC) and a Hong-Ou-Mandel (HOM) interferometer~\cite{hom_87}. The 780 nm pulse train is first frequency doubled and used as a pump for Type-I PDC using a $\beta$-barium borate (BBO) crystal. The resulting photon pairs are coupled into single-mode fibers, and then combined in a 50-50 fiber coupler serving as the HOM interferometer. The relative time delay $\Delta t$ between photons is controlled with a pair of translating glass wedges before one of the fibers. When $\Delta t =0$, interference ideally produces two disentangled SMSV states in the HOM outputs~\cite{kim_02}. With our low pump power, the PDC photon pair production rate of $\sim10^{-4}$ per pulse ensures we are in the weak squeezing limit where Eq.~\ref{eqn:kx} holds.
Using the same setup with noncollinear PDC and the HOM interferometer, we can also generate heralded single-photon states with sub-Poissonian statistics. First, a large time delay is introduced in one input of the interferometer, eliminating HOM interference. Next, a single-photon detector $D_0$ without photon-number resolution (non-PNR) is coupled to one output. When $D_0$ detects exactly one photon with a ``click,'' the twin photon is heralded in the other mode (offset by the delay $\pm\Delta t$). Alternatively, a ``click'' could result from two photons hitting $D_0$, heralding zero photons in the output. The ideal result is a mixture $\hat{\rho}_1=(1-\beta)\ket{0}\bra{0}+\beta\ket{1}\bra{1}$, with $\beta=2/3$.
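The ideal value $\beta=2/3$ follows from simple counting: with a large delay the two photons behave as distinguishable particles, each exiting toward $D_0$ with probability $1/2$. A quick enumeration (illustrative Python):

```python
from itertools import product

# each of the two distinguishable photons independently goes to D0 (1) or not (0)
outcomes = list(product([0, 1], repeat=2))
p_click = sum(1 for o in outcomes if sum(o) >= 1) / len(outcomes)   # D0 fires: 3/4
p_herald = sum(1 for o in outcomes if sum(o) == 1) / len(outcomes)  # one photon left: 1/2
beta_ideal = p_herald / p_click
# beta_ideal = (1/2)/(3/4) = 2/3; losses and dark counts reduce this in practice
```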
ZPS is performed at a variable beamsplitter (VBS), implemented with a tunable fiber coupler~\cite{digshaw_82}. The input state $\ket{\psi}_{in}$ (i.e., $\ket{\alpha}$, $\ket{\xi}$ or $\hat{\rho}_1$) enters one input of the VBS, and the two outputs are routed to detection channels. Each channel includes a free-space U-bench with 25-nm-bandwidth rectangular bandpass filters centered near 780 nm; the light is then coupled into multimode fibers and directed to single-photon counting modules (SPCMs) $D_1$ and $D_2$ (silicon avalanche photodiodes, Excelitas SPCM-AQ4C). The auxiliary heralding detector $D_0$ has a similar channel not shown in Figure~\ref{fig:exp}, with a narrower 10-nm-bandwidth filter to increase heralding efficiency~\cite{pitt_05,bovino_03}. To serve as a universal clock for all ``click'' and ``no-click'' events, all detection signals are recorded alongside a 100 MHz mode-locking reference signal from an additional detector $D_{ref}$, using time-to-digital converters (TDCs) with 81 ps timebin resolution (IDQuantique, model ID801). All detection events are stored as time tags and processed using the techniques described in Ref.~\cite{nun_21}.
The counting statistics of ZPS states are observed by post-selecting on ``no-click'' events, in which $D_{ref}$ registers a pulse but the heralding detector $D_1$ measures zero photons. After a 20-second exposure, the mean counting rate at $D_2$ is calculated with and without this post-selection. Then the $D_2$ dark count rate ($\sim$80 Hz, after filtering~\cite{wahl_20}) is subtracted from each of these values, and their ratio is taken to obtain $K$. This is repeated for multiple values of VBS reflectance $R$, revealing the behavior of $K(R)$ for each state.
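This estimate of $K$ from the recorded time tags can be sketched as below. The helper is hypothetical (not the actual analysis code) and assumes one boolean record per clock pulse, with click rates low enough to be proportional to the mean photon number; dark-count subtraction is omitted for brevity:

```python
def estimate_K(n_pulses, d1_clicks, d2_clicks):
    """Ratio of the post-selected D2 rate to the unconditional D2 rate.

    n_pulses: total number of D_ref clock pulses in the exposure
    d1_clicks, d2_clicks: sets of pulse indices with a click at D1 / D2
    """
    rate_all = len(d2_clicks) / n_pulses              # ordinary attenuation
    no_click = n_pulses - len(d1_clicks)              # ZPS heralding events
    rate_zps = len(d2_clicks - d1_clicks) / no_click  # D2 rate given D1 silent
    return rate_zps / rate_all
```

For statistically independent D1 and D2 clicks (as for a coherent-state input), the two rates coincide in expectation and the estimator returns 1, matching Eq.~\ref{eqn:ka2}.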
Each stage of the experiment introduces losses which must be taken into account for our analysis. Returning to Figure~\ref{fig:bsinout}, we can group all losses into three distinct channels: the input mode of the main beamsplitter (VBS); the reflected auxiliary mode, containing heralding detector $D_1$; and the transmitted output mode, containing the photon-counting detector $D_2$. Input losses are primarily due to coupling free-space photon pairs from the PDC source into single-mode fibers, as well as fiber connector losses at the VBS. The fiber-coupling efficiency and connector transmission are denoted $\kappa_{\text{PDC}}$ and $\kappa_f$, respectively. Additional losses \textit{after} the VBS are contained in the effective detector efficiencies $\eta_1$ and $\eta_2$, illustrated in the third panel of Fig.~\ref{fig:exp}.
As defined in Eq.~\ref{eqn:kdeff}, $K(R)$ is unaffected by losses in the output mode with detector $D_2$. This can differ for non-PNR detectors as shown in the Appendix, but these effects are negligible in our experiment. However, losses in the heralding mode introduce unwanted noise that alters our counting statistics~\cite{nun_21}. Similarly, input losses introduce noise that alter the photon statistics of the initial states. Even so, the resulting mixed states can be analyzed with the same measurement of $K$, which only depends on diagonal terms $\rho_{nn}$ in a full description of the state. Consequently, we can modify our equations of $K(R)$ to include all losses (see Appendix for details):
\begin{align}
K_{exp}^{(\alpha)}(R)&=1 \label{eqn:ka2} \\
K_{exp}^{(\xi)}(R)&\approx 1-\kappa_{\text{PDC}}\kappa_f R \eta_1 \label{eqn:kx2} \\
K_{exp}^{(\hat{\rho}_1)}(R)&=\frac{1}{1-\beta \kappa_f R \eta_1} \label{eqn:kh2}
\end{align}
Results for the coherent state $\ket{\alpha}$ remain unchanged from Eq.~\ref{eqn:ka} to Eq.~\ref{eqn:ka2}. In the case of the SMSV (Eq.~\ref{eqn:kx2}), the probability of multipair emission is negligible, and so the HOM interferometer output is dominated by zero- and two-photon terms. Losses before ($\kappa_{\text{PDC}}$) and after ($\kappa_f$) the interferometer introduce a significant single-photon component, but the altered statistics remain super-Poissonian. For the state $\hat{\rho}_1$, the existing single-photon term is similarly reduced by $\kappa_f$ but remains sub-Poissonian. The initial single-photon probability $\beta$ is also degraded by dark counts at $D_0$ and interferometer losses, lowering it from the ideal value of $2/3$. In all cases, finite heralding efficiency $\eta_1$ has the same effect on the measured value of $K$ as the overall input losses.
To experimentally determine coupling values and detector efficiencies in our system, and to align the apparatus for input state preparation, we first perform a series of standard HOM tests~\cite{hom_87} and channel loss measurements. We bypass $D_0$ and the VBS in Figure~\ref{fig:exp} and perform a coincidence measurement with the $D_1$ and $D_2$ channels connected directly to the HOM interferometer outputs. We observe a HOM dip with 98\% visibility with this arrangement. Approximate Klyshko efficiencies~\cite{klyshko_80}, apart from the interferometer coupling efficiency of $\kappa_{\text{PDC}}\approx0.50$, are found to be $\eta_1\approx0.32$ and $\eta_2\approx0.28$. This is consistent with nominal SPCM detector efficiencies of $\sim50\%$ at 780 nm and U-bench transmission of $\sim65\%$ and $\sim60\%$. The values of $\kappa_f\approx0.86$ and $\beta\approx0.38$ are determined in the analysis of the main experiment.
\section{Results and Discussion}\label{sec:res}
Our main results are shown in Figure~\ref{fig:res}. The relative attenuation $K$ induced by ZPS is shown as a function of reflectance $R$ for all three input states. Each state exhibits very distinct behavior in agreement with equations \ref{eqn:ka2}-\ref{eqn:kh2}. For the coherent state shown in Fig.~\ref{fig:res}(a), $K=1$ for all $R$, indicating that ZPS counting rates are identical to those of ordinary attenuation. This case provides a benchmark for our experiment, and can be understood by the well-known fact that a coherent state entering a beamsplitter produces two uncorrelated coherent states in the outputs~\cite{zav_08,gk_04,kim_02}. Consequently, conditional measurements like those in ZPS have no effect on the output state in this case.
\begin{figure}
\includegraphics[scale=0.45,trim={190 25 225 50},clip]{fig3.pdf}
\caption{Experimental measurements of relative attenuation $K(R)$ for the three cases of (a) a coherent state $\ket{\alpha}$, (b) a SMSV state $\ket{\xi}$, and (c) a heralded single photon state $\hat{\rho}_1$. The measured data points (red circles) in plots (a)-(c) show distinct trends that depend on the photon number statistics of the given state: the benchmark case $\ket{\alpha}$ shows $K=1$ for all reflectance $R$, while $\ket{\xi}$ and $\hat{\rho}_1$ have a negative and positive slope, respectively. In panel (b), a fit to the data using Eq.~\ref{eqn:kx2} (black dashed curve) gives the value $(\kappa_{\text{PDC}})(\kappa_f)( \eta_1)= 0.14$, corresponding to an overall loss of 86\%. In panel (c), a similar fit to the data using Eq.~\ref{eqn:kh2} corresponds to an overall loss of 73\% with single-photon probability $\beta\approx0.38$. In both panels (b) and (c), the blue (dot-dashed) and green (dotted) theoretical curves show more pronounced effects of ZPS that would be observed for 50\% and 0\% overall loss, respectively. For comparison, panel (d) shows theoretical attenuation of an ideal Fock state $\ket{n}$ calculated for the same overall loss values as in panel (c).}
\label{fig:res}
\end{figure}
In Fig.~\ref{fig:res}(b), measurements of $K$ for the SMSV state $\ket{\xi}$ are shown. The data trend exhibits a negative initial slope in accordance with Eq.~\ref{eqn:qin} $(Q>0)$. This ``photon deficit'' $K<1$ increases linearly with reflectance $R$. This attenuation, however, is limited by heralding efficiency and losses. A fit to the data using Eq.~\ref{eqn:kx2} shows that as $R\rightarrow1$, $K^{(\xi)}_{min}=0.861\pm0.003$. This is consistent with the product of efficiencies $(\kappa_{\text{PDC}})(\kappa_f)( \eta_1)\approx0.14$ (i.e., overall loss of 86\%). For comparison, the two theoretical curves in Fig.~\ref{fig:res}(b) show the stronger attenuation that would be achieved with overall losses of only 50\% and 0\%.
The heralded single photon case $\hat{\rho}_1$ in Fig.~\ref{fig:res}(c) displays essentially opposite behavior. The sub-Poissonian statistics $(Q<0)$ determine a positive initial slope such that $K>1$, and this trend continues for all values of $R$. Note that by our definition of \textit{relative} attenuation, this does not indicate $\langle \hat{n} \rangle_{out}>\langle \hat{n} \rangle_{in}$, and is much different from the ``photon excess'' observed for super-Poissonian states after SPS. The state is still attenuated relative to the input, and this can be seen by comparing the observed $K$ to the theoretically predicted values for ideal Fock states in Fig.~\ref{fig:res}(d), for which $\langle \hat{n} \rangle_{out}=\langle \hat{n} \rangle_{in}$. In contrast to the states $\ket{\alpha}$ and $\ket{\xi}$, however, $K>1$ indicates that the degree of heralded attenuation by ZPS is weaker than that of ordinary attenuation. With previously determined values of $\kappa_f\approx0.86$ and $\eta_1\approx0.32$, which combine to give an overall loss of 73\%, a fit of the data in panel (c) to Eq.~\ref{eqn:kh2} indicates a single photon probability of $\beta\approx0.38$ for our initial state. Two theoretical curves with $\beta=0.38$ and improved overall losses of 50\% and 0\% show more extreme deviations from $K=1$.
Interestingly, the theoretical curves in Figure~\ref{fig:res}(b)-(d) show that as detector inefficiency and losses increase, the ZPS statistics for both sub- and super-Poissonian states converge toward the Poissonian case $K=1$. Losses before or after the VBS play an identical role in reshaping $K(R)$ in Eqs.~\ref{eqn:ka2}-\ref{eqn:kh2}, and so we can explain this in two ways. First, the photon number distributions of $\ket{\xi}$ and $\hat{\rho}_1$ tend toward Poissonian statistics after ordinary attenuation, i.e., loss~\cite{hu_07}. As losses before the beamsplitter increase, the observed $K(R)$ values should therefore tend to unity and resemble those of the coherent state. Alternatively, losses before the heralding detector $D_1$ reduce our ability to distinguish ``true'' vacuum in the reflected mode from one or more photons~\cite{nun_21}. As effective efficiency decreases, ``no-click'' events herald a mixture of the desired ZPS state with unwanted noise, becoming identical to ordinary attenuation in the zero-efficiency limit.
\begin{figure}
\includegraphics[scale=0.45,trim={205 95 205 80},clip]{fig4.pdf}
\caption{Relative attenuation $K$ for the states $\ket{\xi}$ and $\hat{\rho}_1$ at $R\approx0.5$, measured as the heralding detection channel $D_1$ is moved out of the mode of interest by a distance $\Delta x$, which modulates heralding efficiency $\eta_1$. As $\eta_1$ decreases down to zero (right axis), the relative attenuation values (left axis) converge to $K=1$ as ZPS becomes increasingly ineffective. Here, heralding efficiency is normalized to its maximum value of $\eta_1\approx0.32$.}
\label{fig:dx}
\end{figure}
Figure~\ref{fig:dx} demonstrates this effect by studying $K(R)$ as detector efficiency $\eta_1$ decreases. Here, the multimode fiber launcher coupled to heralding detector $D_1$ is scanned a distance $\Delta x$ out of the mode of interest within the U-bench (see inset and Fig.~\ref{fig:exp}). The blue curve shows the degree of spatial mode overlap as the fiber is moved, measured separately as the fraction of power coupled from an auxiliary source. As detector mode overlap decreases, it can be seen that the relative attenuations $K^{(\xi)}$ and $K^{(\hat{\rho}_1)}$ (measured at $R\approx0.5$) converge to the Poissonian value of $K=1$. Although we continue to herald on zero photons in $D_1$, these measurements carry less information about the system and ZPS becomes increasingly ineffective.
The results presented in Figs.~\ref{fig:res} and \ref{fig:dx} show a strong connection between the effects of ZPS and the photon number distributions of initial states. Importantly, however, measuring $K(R)$ yields no information about phase or coherence between number states. Consequently, as has been argued for the ``photon excess'' after SPS~\cite{par_07,zav_08}, the effects of ZPS observed here can also be replicated using counting statistics of classical particles and probabilistic subtraction. For ZPS, the heralding probability $T^n$ falls exponentially with the photon number $n$, shifting the mean of the number distribution downward. Additionally, the ``photon excess'' or ``photon deficit'' exhibited by thermal light undergoing SPS or ZPS can also be understood as intensity fluctuations of the classical electromagnetic field (i.e., correlated intensities at detectors $D_1$ and $D_2$).
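This classical analogue is easy to simulate: draw a particle number $n$ from the input distribution, transmit each particle independently with probability $T$, and keep only trials with zero reflected particles. An illustrative Monte-Carlo sketch with a thermal input and arbitrary parameters:

```python
import random

rng = random.Random(7)
nbar, T = 2.0, 0.5
p = [nbar ** k / (1 + nbar) ** (k + 1) for k in range(60)]  # thermal, <n> = 2

def trial():
    """Return (transmitted, reflected) particle counts for one pulse."""
    r, acc, n = rng.random(), 0.0, 0
    for k, pk in enumerate(p):            # inverse-CDF sample of n
        acc += pk
        if r < acc:
            n = k
            break
    t = sum(rng.random() < T for _ in range(n))
    return t, n - t

results = [trial() for _ in range(200_000)]
mean_plain = sum(t for t, r in results) / len(results)   # ordinary attenuation
kept = [t for t, r in results if r == 0]                 # 'zero subtracted' trials
K_est = (sum(kept) / len(kept)) / mean_plain
# K_est comes out well below 1: classical particles reproduce the photon deficit
```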
In this sense, the ``photon deficit'' observed in ZPS, much like the ``photon excess'' of SPS, is not a purely quantum mechanical effect. Nonetheless, it is important to note that the degree of attenuation measured after ZPS can reveal existing nonclassicality of the input states. For example, our observations confirmed the nonclassical~\cite{gk_04} sub-Poissonian statistics of the heralded single photon $\hat{\rho}_1$. Furthermore, quantum state tomography or some other phase-dependent measurement would reveal that ZPS not only attenuates quantum states, but does so noiselessly (i.e., preserves their coherence)~\cite{saur_21}. This property is exactly what makes ZPS promising for applications in quantum communication~\cite{mic_12,ricky_17,guo_20}.
\section{Conclusions}\label{sec:con}
In summary, we have experimentally demonstrated that the zero-photon subtraction (ZPS) process of Figure~\ref{fig:bsinout} can reduce the mean photon number of quantum optical states, despite no photons being removed from the system. Our experiment tested the effects of ZPS on three unique classes of input states (sub-, super- and Poissonian) using a beamsplitter with variable reflectance $R$. By studying the relative attenuation ratio $K$ as a function of $R$, the observed trends reveal a connection to Mandel's $Q$-parameter in the regime of $R\ll 1$. More precisely, the initial slope of $K(R)$ near $R=0$ is equal to $-Q_{in}$, resulting in distinct behavior for each input state. Consequently, super-Poissonian states are attenuated more, and sub-Poissonian states less, by ZPS than by ordinary attenuation with a weakly reflecting beamsplitter. These ZPS effects are complementary to the effects of typical single-photon subtraction (SPS) in the same regime $R\ll 1$. Most notably, super-Poissonian states that exhibit a ``photon excess'' after SPS will also exhibit a unique ``photon deficit'' after ZPS.
These observations were made possible by actively heralding on the detection of zero photons with a single-photon detector~\cite{nun_21}. We further confirmed the need for high efficiency in the heralding mode by measuring the convergence of non-Poissonian attenuation $K$ to the benchmark Poissonian case as losses increased.
Although not revealed by the photon counting measurements reported here, ZPS can preserve the coherence of quantum states~\cite{saur_21}, making it useful for quantum communications as a noiseless attenuator~\cite{mic_12,ricky_17,guo_20}. Our results provide further insight into the nature of this transformation and its relationship to other techniques in quantum state engineering by conditional measurements.
\begin{acknowledgments}
We would like to thank S. U. Shringarpure for many valuable discussions on this topic. This work was supported by the National Science Foundation under Grant No. PHY-2013464.
\end{acknowledgments}
\section{Introduction}
Newly born neutron stars are hot and lepton-rich objects, quite
different from ordinary low-temperature, lepton-poor neutron
stars. In view of these differences, newly born neutron stars are
called {\it protoneutron} stars; they transform into standard
neutron stars on a timescale of the order of ten seconds, needed
for the loss of a significant lepton number excess via emission
of neutrinos trapped in the dense, hot interior.
In view of the fact that the typical evolution timescale of
a protoneutron star (seconds) is some three orders of magnitude
longer than the dynamical timescale for these objects
(milliseconds), one can study its evolution in the quasistatic
approximation
(Burrows \& Lattimer 1986). Static properties of
protoneutron stars, under various assumptions concerning
composition and equation of state (EOS) of hot, dense stellar interior
were studied by numerous authors (Burrows \& Lattimer 1986,
Takatsuka 1995, Bombaci et al. 1995, Bombaci 1996,
Prakash et al. 1997).
The scenario of transformation of a protoneutron star into a
neutron star could be strongly influenced by a phase transition
in the central region of the star. Brown and Bethe (1994)
suggested a phase transition implied by the $K^-$ condensation
at supranuclear densities. Such a $K^-$ condensation could
dramatically soften the equation of state of dense matter,
leading to a low maximum allowable mass of neutron stars.
In such a case, the massive protoneutron stars could be
stabilized by the effects of high temperature and of the
presence of trapped neutrinos, and this would lead to maximum
baryon mass of protoneutron stars larger by some $0.2~M_\odot$ than
that of cold neutron stars. The deleptonization and cooling of
protoneutron stars of baryon mass exceeding the maximum
allowable baryon mass for neutron stars, would then inevitably
lead to their collapse into black holes. The dynamics of
such a process was
recently studied by Baumgarte et al. (1996). It should be
mentioned, however, that the very possibility of existence of
the kaon condensate (or other exotic phases of matter, such as
the pion condensate, or the quark matter) at neutron star
densities is far from being established. Recently, for
instance,
Pandharipande et al. (1995) pointed out, that kaon-nucleon and
nucleon-nucleon correlations in dense matter raise
significantly the threshold density for kaon condensation.
In view of these uncertainties, we
restrict ourselves in the present paper to a standard model of
dense matter, composed of nucleons and leptons.
The calculations of the static models of protoneutron stars should be
considered as a first step in the studies of these objects. It
is clear, in view of the dynamical scenario of their formation,
that protoneutron stars are far from being static.
The formation scenario
involves compression (with overshoot of central density) and a
hydrodynamical bounce, so that a newborn protoneutron star
begins its life in a highly excited state, pulsating around its
quasistatic equilibrium. If the rotation of the protoneutron
star is not too rapid, the coupling of radial and non-radial
pulsations is weak. Such a scenario of formation leads to a
preferential excitation of the radial modes. Having constructed
the static model of a protoneutron star, one should thus answer the
question about the stability of the static configuration with
respect to radial perturbations (standard stability criteria are
valid only for cold neutron stars). Clearly, both high temperature
and large trapped lepton number will influence the spectrum of
radial eigenfrequencies of protoneutron stars, implying
differences with respect to the case of cold neutron stars.
In the present paper we study the radial pulsations of
protoneutron stars and their stability. Our models of
protoneutron stars are composed of a hot, neutrino-opaque
interior (hereafter referred to as ``hot interior''), separated
from much colder, neutrino-transparent
envelope by a neutrinosphere. We will consider two limiting cases of the
thermal state of the hot interior: isentropic, with entropy
per baryon $s=const.$, and isothermal, with
$T_\infty=(g_{00})^{1/2}T=const.$
($T_\infty$ is the value of the temperature, measured by an
observer at infinity, while $T$ is the value of the temperature
measured by a local observer).
The first case, characteristic of the earliest stage of a
protoneutron star, will simultaneously correspond to a
significant trapped lepton number. The second case corresponds to
the situation after the deleptonization of a protoneutron star.
The position of the neutrinosphere will
be located using a simple prescription
based on specific properties of the neutrino opacity of hot
dense matter.
In all cases, the equation of state of hot dense
matter will be determined using one of the realistic models of
Lattimer and Swesty (1991).
The plan of the paper is as follows.
In Section 2 we describe
the physical state of the interior of protoneutron star, with
particular emphasis on the EOS of the hot interior at various
stages of evolution of a protoneutron star. We explain also our
prescription for locating the
neutrinosphere of a protoneutron star, and we
give some details concerning the assumed temperature profile within
protoneutron star.
Equations of state, corresponding to different physical
situations, are described in Section 3. We also present there
static models of protoneutron stars, calculated for various
assumptions concerning the hot stellar interior. In Section 4 we
compare various timescales, characteristic of evolution and
dynamics of a protoneutron star, which are essential for
approximations used in the treatment of pulsations.
Section 5 is devoted to the formulation of the problem of
linear, adiabatic, radial pulsations of protoneutron stars.
Both pulsations and stability involve the adiabatic index of
pulsating stellar interior; our results for this important
quantity are presented in Section 6. Numerical results for the
eigenfrequencies of the lowest modes of radial pulsations and
the problem of stability of protoneutron stars are presented in
Section 7. Finally, Section 8 contains a discussion of our
results and conclusions.
\section{Physical model of the interior of protoneutron stars}
We will consider a protoneutron star (PNS)
just after its formation. We will assume it has a
well defined neutrinosphere, which separates a hot,
neutrino-opaque interior from colder, neutrino-transparent outer
envelope. The parameters, which determine the local state
of the matter in the hot interior are: baryon
(nucleon) number
density $n$, net electron fraction $Y_e
= (n_{e^-}-n_{e^+})/n$, and the net electron-neutrino
fraction $Y_{\nu}=Y_{\nu_e}-Y_{\bar\nu_e}$. The calculation of
the composition of hot matter and of its EOS is described below.
\subsection{Neutrino opaque core with nonzero trapped lepton number}
The situation described in this subsection
is characteristic of
the very initial stage of existence
of a PNS. Matter is assumed to be
composed of nucleons (both free and bound in
nuclei) and leptons (electrons and neutrinos; for the sake of
simplicity, we
do not include muons). All constituents
of the matter (plus photons) are in thermodynamic equilibrium at
given values of $n$, $T$ and $Y_l=Y_e+Y_\nu$. The
composition of the matter is calculated from the condition of
beta equilibrium, combined with the condition of a fixed
$Y_l$,
\begin{eqnarray}
\mu_p + \mu_e &=& \mu_n + \mu_{\nu_e}~,\nonumber \\
Y_l&=&Y_e+Y_\nu~,
\label{mu.trapL}
\end{eqnarray}
where $\mu_{\rm j}$ are the chemical potentials of matter
constituents. At the very initial stage we expect $Y_l\simeq
0.4$. Electron neutrinos are degenerate, with $\mu_{\nu_e}\gg T$
(in what follows we measure $T$ in energy units). The
deleptonization, implying the decrease of $Y_l$,
occurs due to diffusion of neutrinos outward
(driven by the $\mu_{\nu_e}$ gradient), on a timescale of
seconds (Sawyer \& Soni 1979, Bombaci et al. 1996).
The diffusion of highly degenerate neutrinos from the central
core is a dissipative
process, resulting in a significant {\it heating} of the
neutrino-opaque core (Burrows \& Lattimer 1986).
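To make Eq. (\ref{mu.trapL}) concrete, the toy solver below (our illustration; the paper uses the full LS-220 model instead) assumes massless, degenerate electrons and neutrinos and approximates $\mu_n-\mu_p$ by the standard parabolic form $4S_v(1-2Y_e)$ with an assumed constant $S_v=30$~MeV; all numerical values are illustrative only.

```python
import math

HBARC = 197.327   # MeV fm
SV = 30.0         # assumed constant symmetry energy, MeV (illustrative)

def f(Ye, n, Yl):
    """Beta-equilibrium mismatch mu_e - mu_nu - (mu_n - mu_p) for massless,
    degenerate e- and nu_e, with mu_n - mu_p ~ 4*SV*(1 - 2*Ye)."""
    mu_e = HBARC * (3.0 * math.pi ** 2 * n * Ye) ** (1.0 / 3.0)
    mu_nu = HBARC * (6.0 * math.pi ** 2 * n * (Yl - Ye)) ** (1.0 / 3.0)
    return mu_e - mu_nu - 4.0 * SV * (1.0 - 2.0 * Ye)

def solve_Ye(n, Yl, tol=1e-10):
    """Bisection on Eq. (1) at fixed lepton fraction Y_l = Y_e + Y_nu."""
    lo, hi = 1e-6, Yl - 1e-6   # f < 0 at lo (mu_nu dominates), f > 0 at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, n, Yl) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Ye = solve_Ye(n=0.16, Yl=0.4)   # at nuclear saturation density
print("Y_e ~ %.3f, Y_nu ~ %.3f" % (Ye, 0.4 - Ye))
```

Even this crude model yields $Y_e\simeq 0.3$ and a strongly degenerate $\nu_e$ gas at $Y_l=0.4$, in line with the regime $\mu_{\nu_e}\gg T$ described above.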
\subsection{Neutrino opaque core with $Y_\nu = 0$}
This is the limiting case, reached after complete deleptonization.
There is no trapped lepton number, so that $Y_l=Y_e$ and
$Y_{\nu_e}=Y_{\bar\nu_e}$, and therefore
$\mu_{\nu_e}=\mu_{\bar\nu_e}=0$. Neutrinos trapped within the
hot interior do not therefore influence the beta equilibrium of nucleons,
electrons and positrons, and for given $n$ and $T$ the
equilibrium value of $Y_e$ is determined from
\begin{eqnarray}
\mu_p + \mu_e &=& \mu_n~,
\label{mu.free}
\end{eqnarray}
while $\mu_{e^+}=-\mu_e$. In practice, this approximation can
be used as soon as electron neutrinos become non-degenerate
within the opaque core,
$\mu_{\nu_e}< T$, which occurs after some $\sim 10$ seconds
(Sawyer \& Soni 1979, Prakash et al. 1997).
\subsection{Neutrinosphere and the temperature profile}
In principle, the temperature (or entropy per nucleon) profile
within a PNS has to be determined via evolutionary calculation,
starting from some initial state, and taking into account
relevant transport processes in the PNS interior, as well as
neutrino emission from PNS. Transport
processes within neutrino-opaque interior occur on timescales of
seconds, some three orders of magnitude longer than dynamical
timescales. Convection can shorten neutrino transport timescale,
but still the time needed for the deleptonization of the
neutrino opaque core is then much longer than the dynamical
timescale. The very outer layer of a PNS becomes rapidly
transparent to neutrinos, deleptonizes, and cools on a very
short timescale
via $e^-e^+$ pair annihilation and plasmon decay to $T<1$ MeV.
It seemed thus natural to model the thermal structure of the PNS
interior by a hot core limited by a neutrinosphere, and a
much cooler, neutrino transparent outer envelope.
The transition through the neutrinosphere
is accompanied by a temperature drop, which takes place over
some interval of density just above the ``edge'' of the hot
neutrino-opaque core, situated at some $n_\nu$.
In view of the uncertainties in the actual temperature profiles
within the hot interior of PNS, we considered two limiting
situations for $n>n_\nu$,
corresponding to an isentropic and an isothermal hot interior. In
the first case, the hot interior was characterized by a constant
entropy per baryon, $s=const.$ In the case of a trapped lepton
number,
this leads to the EOS of the type: pressure
$P=P(n,~[s,Y_l])$, energy density
divided by $c^2$ (energy-mass density),
$\rho=\rho(n,~[s,Y_l])$, and temperature
$T=T(n,~[s,Y_l])$,
with fixed $s$ and $Y_l$.
This EOS will be denoted by EOS[$s,Y_l$].
The condition of isothermality, which in the static case
corresponds to a vanishing heat flux, is more
complicated. Due to the curvature of the space-time within PNS,
the condition of isothermality (thermal equilibration)
corresponds to
the constancy of $T_\infty=(g_{00})^{1/2}T$
(see, e.g., Zeldovich \& Novikov 1971, chapter 10.6).
Actually, the isothermal state
within the hot interior will be reached on a timescale
corresponding to thermal equilibration, which is much longer
than the lifetime of a PNS.
Nevertheless, we considered the
$T_\infty=const.$ models because, being a limiting case
so different from the $s=const.$ one, they enable us to check the
dependence of our results for PNS on the
thermal state of the hot interior.
To determine the isothermal temperature profile in the hot interior
after deleptonization,
we use the condition of thermal equilibrium,
given
by the constancy of the function $T_\infty \equiv T(r)
e^{{1\over2}\nu (r)}$ (where $\nu(r)$ is the metric function,
see Sect. 5). The relativistic condition of the
isothermality can be rewritten as:
\begin{equation}
{{\rm d} \ln T \over {\rm d}r} =
{1\over \rho c^2 + P}{{\rm d}P\over {\rm d}r}~.
\label{eq:isot}
\end{equation}
This formula enables us to determine the $T(n)$ profile,
for given EOS,
{\it independently} of the specific structure of the stellar
model under consideration.
Treating $n$, $T$ as thermodynamic variables for our equation of
state, we can rewrite Eq. (\ref{eq:isot}) in the form:
\begin{eqnarray}
{{\rm d} \ln T\over {\rm d} \ln n}
&=&{P\over \rho c^2 + P}\nonumber\\
&\times&\left({\partial \ln P \over \partial \ln n}
\right)
\left\{1- {P\over \rho c^2 + P}
\left({\partial \ln P \over \partial \ln T}\right)
\right\}^{-1}~.
\label{eq:isot1}
\end{eqnarray}
Using the above formula we can construct a specific isothermal
EOS, describing hot isothermal interiors,
parametrized by the boundary condition at the edge of
the hot isothermal core: the value of $T$ just below the
neutrinosphere will be denoted by $T_{\rm b}$.
Thus in our relativistic calculations the set of the
``isothermal'' stellar
configurations corresponds to stars with given $T_{\rm b}$.
Starting from $T=T_{\rm b}$, the temperature increases inward,
reaching its maximum value in the center of the star where
$T=T_{\rm centr}$.
This central temperature is a function of the central
density $\rho_{\rm centr}$
and is larger for a star with larger $\rho_{\rm centr}$.
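Equation (\ref{eq:isot1}) can be integrated once $P(n,T)$ is specified. The sketch below (ours) uses a deliberately crude EOS, a cold polytrope plus ideal-gas thermal terms rather than the LS-220 model, only to illustrate that ${\rm d}\ln T/{\rm d}\ln n>0$, i.e. that $T$ grows inward from $T_{\rm b}$ as stated above; none of the parameters are the values used in the paper.

```python
import math

MN = 939.0              # nucleon rest energy, MeV
KP, GAM = 300.0, 2.0    # toy cold polytrope P_cold = KP*n**GAM, MeV/fm^3

def eos(n, T):
    """Toy EOS: cold polytrope plus ideal-gas thermal terms (units MeV, fm)."""
    P = KP * n ** GAM + n * T
    rho_c2 = MN * n + 1.5 * n * T + KP * n ** GAM / (GAM - 1.0)
    return P, rho_c2

def dlnT_dlnn(n, T):
    """Right-hand side of Eq. (4) evaluated for the toy EOS above."""
    P, rho_c2 = eos(n, T)
    x = P / (rho_c2 + P)
    dlnP_dlnn = (GAM * KP * n ** GAM + n * T) / P
    dlnP_dlnT = n * T / P
    return x * dlnP_dlnn / (1.0 - x * dlnP_dlnT)

def isothermal_profile(Tb=15.0, n_b=2e-3, n_c=0.5, steps=2000):
    """Euler-integrate T(n) inward from (n_b, Tb) up to central density n_c."""
    lnn = math.log(n_b)
    dlnn = (math.log(n_c) - lnn) / steps
    T = Tb
    for _ in range(steps):
        T *= math.exp(dlnT_dlnn(math.exp(lnn), T) * dlnn)
        lnn += dlnn
    return T

print("T_centr ~ %.1f MeV for T_b = 15 MeV" % isothermal_profile())
```

The derivative is positive because $P>0$ and $P\ll\rho c^2$, so the ``isothermal'' star is indeed hotter in its center than at the neutrinosphere.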
Our calculation of the neutrinosphere within the hot PNS
interior is explained below.
For a given static PNS model, the neutrinosphere
radius, $R_\nu$, has been
located through the condition
\begin{equation}
\int_{R_\nu}^R
{1\over \lambda_\nu(E_\nu)}
{\rm d}r_{\rm prop}= 1~,
\label{R_nu}
\end{equation}
where $\lambda_\nu$ is calculated at the matter temperature,
$E_\nu$ is the mean energy of non-degenerate neutrinos at and
above the
neutrinosphere, $E_\nu=3.15T_\nu$, and $r_{\rm prop}$ is the
proper distance from the star center.
We assumed that neutrino opacity above $R_\nu$
is dominated by the elastic scattering off nuclei and nucleons,
so that $\lambda_\nu = \lambda^0_\nu(n,T)/E_\nu^2$. Then,
we determined the value of the density at the neutrinosphere,
$n_\nu$, for a given static PNS model, combining Eq.
(\ref{R_nu})
with that of hydrostatic equilibrium, and readjusting
accordingly the temperature profile within the outer layers of
PNS.
Neutrino opacity in the outer layers of PNS can be well
approximated by $1/\lambda_\nu\simeq \kappa_0
{E_\nu}^2$.
Within a reasonable approximation
$\kappa_0\simeq \gamma\rho $,
where $\rho$ is the matter
density, and $\gamma=6\cdot 10^{-20}~{\rm
cm^{-1}}$ ($\rho$ in ${\rm g~cm^{-3}}$ and $E_\nu$ in MeV).
The proper distance
near the neutrinosphere is given by ${\rm d}r_{\rm
prop}= (g_{rr})^{1/2}{\rm d}r
\simeq (1-2GM/Rc^2)^{-1/2}{\rm d}r$.
Using the definition of the neutrinosphere radius, $R_\nu$, we
can express the mass of the envelope above $R_\nu$ and the
pressure at $R_\nu$, denoted by $P_\nu$, in terms of
${E_\nu}=3.15{T_\nu}$. This leads to
\begin{equation}
P_\nu =
2.23~10^{32}
~\widetilde{M}\widetilde{R}^{-2}
\left
(1-0.295{\widetilde{M}\over\widetilde{R}}\right)^{-{1\over 2}}
~{T_\nu}^{-2}~{\rm erg\over cm^3}~,
\label{P_nu}
\end{equation}
where $\widetilde{M}=M/M_\odot$ and $\widetilde{R}=R/10$~km.
For a given stellar model, this approximate relation, combined
with Eq.(3), enables one to determine, in a
self-consistent way, the values of $T,~n$, and $\rho$ at the
neutrinosphere. In practice, this was done assuming a specific
functional form of the temperature drop within the
neutrinosphere (a combination of Fermi functions), which yielded
a smooth transition between hot interior and cool envelope. In all cases,
the temperature profile was adjusted in such a way, that
neutrinosphere was found within the region of the temperature
drop. For a $1.4~M_\odot$ PNS, and an isothermal hot core with
$T_{\rm b}=
15~$MeV, we found
$T_\nu=4.3~$MeV,
$n_\nu=2.0~10^{-3}~{\rm fm^{-3}}$,
and $\rho_\nu=3.5~10^{12}~{\rm g~cm^{-3}}$.
In the case of a very massive PNS with $M=2~M_\odot$ we
obtained (for the same value of $T_{\rm b}$)
$T_\nu=5.2~$MeV,
$n_\nu=2.4~10^{-3}~{\rm fm^{-3}}$,
and $\rho_\nu=4.1~10^{12}~{\rm g~cm^{-3}}$. These results are
in a reasonable agreement with those obtained in detailed numerical
simulations (see, e.g., Burrows \& Lattimer 1986).
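The numerical coefficient in Eq. (\ref{P_nu}) can be recovered from the ingredients quoted above: an optical depth of unity gives a column density $\Sigma\simeq 1/(\gamma E_\nu^2)$ with $E_\nu=3.15T_\nu$, while hydrostatic equilibrium gives $P_\nu\simeq g\Sigma$ with $g=GM/R^2$. A quick check in CGS units (our sketch):

```python
G = 6.674e-8        # gravitational constant, CGS
MSUN = 1.989e33     # solar mass, g
C = 2.998e10        # speed of light, cm/s
GAMMA = 6e-20       # opacity constant of Sect. 2.3 (rho in g/cm^3, E_nu in MeV)

# Surface gravity for M = 1 Msun, R = 10 km; tau = 1 gives the column
# density Sigma = 1/(GAMMA * E_nu^2), and P_nu ~ g * Sigma with E_nu = 3.15*T_nu.
g_unit = G * MSUN / 1.0e6 ** 2
coef = g_unit / (GAMMA * 3.15 ** 2)      # multiplies Mt * Rt**-2 * T_nu**-2
rel = 2.0 * G * MSUN / (1.0e6 * C ** 2)  # 2GM/(R c^2) for Mt = Rt = 1
print("coefficient ~ %.3g erg/cm^3  (Eq. 6: 2.23e32)" % coef)
print("2GM/Rc^2    ~ %.3f           (Eq. 6: 0.295)" % rel)
```

Both the prefactor $2.23\times 10^{32}$ and the relativistic correction $0.295\,\widetilde{M}/\widetilde{R}$ of Eq. (\ref{P_nu}) are reproduced.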
\section{Equation of state and static models of protoneutron stars}
The starting point for the construction of our EOS for the PNS
models was the model of hot dense matter of Lattimer and Swesty
(1991), hereafter referred to as LS. Actually, we used one
specific LS model, corresponding to the incompressibility
modulus at the saturation density of symmetric nuclear matter
$K=220~$MeV (this model will be hereafter referred to
as LS-220). For $n>n_\nu$ we supplemented the LS-220 model
with contributions
resulting from the presence of trapped neutrinos of three
flavours (electronic, muonic and tauonic) and of the
corresponding antineutrinos.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm
\epsfbox{5967F1.eps}
\end{center}
\caption[]{
Pressure versus baryon density for our model of dense hot matter
(based on the LS-220 model of the nucleon component of the
matter),
under various physical conditions,
for the subnuclear densities $n<n_0=0.16~{\rm fm^{-3}}$.
The curve $T=0$ corresponds to cold catalyzed matter.
The curve corresponding to $s=0.5, Y_l=0.4$ is unphysical, but
has been added in order to visualize the importance of trapped lepton
number at subnuclear densities. The low density edge of the
hot, neutrino opaque core corresponds to
$n_\nu=2\times 10^{-3}~{\rm fm^{-3}}$.
}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm
\epsfbox{5967F2.eps}
\end{center}
\caption[]{
Pressure versus baryon density for our model of dense hot matter,
under various physical conditions, for the
supranuclear densities.
The curve $T=0$ corresponds to cold catalyzed matter.
The curve corresponding to $s=0.5, Y_l=0.4$ is unphysical, but
has been added in order to visualize the importance of trapped lepton
number at supranuclear densities. The low density edge of the
hot, neutrino opaque core corresponds to
$n_\nu=2\times 10^{-3}~{\rm fm^{-3}}$.
}
\end{figure}
In Figs. 1-3 we show our EOS in several cases, corresponding to
various physical conditions in the hot, neutrino-opaque interior
of PNS. For the sake of comparison, we have also shown the EOS
for cold catalyzed matter, used for the calculation of the
(cold) NS models.
In Fig. 1 we show our EOS at subnuclear
densities.
At such densities, both the temperature and the
presence of trapped neutrinos
lead to a significant increase of pressure,
as compared to the cold catalyzed matter.
The constant $T_\infty$
EOS stiffens considerably at lower
densities, which is due to the weak dependence of
the thermal contribution (photons, neutrinos) on the baryon
density of the matter.
It is quite obvious that the $T_\infty=const.$ EOS
becomes dominated by thermal effects for $n<10^{-2}~{\rm
fm^{-3}}$. On the contrary, for the isentropic EOS, the effect of
the trapped
lepton number ($Y_l=0.4$) turns out to be more important
than the thermal effects. This can be seen in Fig. 1
by comparing the long-dashed curve,
$[s=2,~Y_l=0.4]$, with the short-dashed line, which
corresponds to an artificial (unphysical) case with small
thermal effects, $[s=0.5,~Y_l=0.4]$.
It is clear that the correct location of the
neutrinosphere, which separates the hot interior from the colder
outer envelope, should be important for the determination of the
radius of a PNS.
Our EOS above nuclear density is plotted in Fig. 2.
The presence of a trapped lepton number softens the EOS, while
thermal effects always
lead to pressure increase
with respect to that for cold catalyzed matter.
The softening of the supranuclear EOS at fixed
$Y_l$ is due to
the fact that a significant trapped lepton number increases the
proton fraction, which implies a softening of the nucleon
contribution to the EOS.
This is the reason why the pressure for the $[s=0.5,~Y_l=0.4]$
model is smaller than in the case of cold, catalyzed matter.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F3.eps}
\end{center}
\caption[]{
Pressure versus mass--energy density
for our model of dense hot matter,
under various physical conditions.
The curve $T=0$ corresponds to cold catalyzed matter.
}
\label{fig3}
\end{figure}
In the calculations of the stellar structure the
relevant EOS is of the form
$P=P(\rho)$, because only pressure and mass--energy density
appear in the general relativistic equations of
hydrostatic equilibrium.
It is obvious that the thermal contribution to $\rho$ is always
positive.
Also, the contribution of trapped neutrinos (which do not
contribute to baryon number density $n$) to $\rho$ is
positive. However, large trapped lepton number implies an
increased proton fraction, which in turn implies a {\it softening}
of the nucleon contribution to the pressure. The interplay of
these effects leads to a characteristic {\it softening} of the
$P(\rho)$ EOS for large trapped lepton number, visualized in
Fig. 3.
It should be stressed that, in contrast to
Bombaci et al. (1995) and Prakash et al. (1997), we used a
unified dense matter model, valid at both
supranuclear and subnuclear densities. Moreover, the use of
various assumptions about the $T$ and $s$ profiles within a
PNS enables us to study the relative importance, for the PNS
models, of the temperature profile and of a trapped lepton
number.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F4.eps}
\end{center}
\caption[]{
The gravitational mass versus stellar radius for
static models
of the protoneutron stars and neutron stars, under various assumptions
concerning the physical conditions within the stellar interior.
The curve corresponding to $s=0.5, Y_l=0.4$ is unphysical, but has been
added in order to visualize the relative importance of the trapped lepton
number and thermal effects. The curve $T=0$ corresponds
to cold neutron stars.
}
\label{fig4}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F5.eps}
\end{center}
\caption[]{
The gravitational mass versus central density for
static models
of the protoneutron stars and neutron stars, under various assumptions
concerning the physical conditions within the stellar interior.
The curve corresponding to $s=0.5, Y_l=0.4$ is unphysical, but has been
added in order to visualize the relative importance of the trapped lepton
number and thermal effects. The curve $T=0$ corresponds
to cold neutron stars.
}
\label{fig5}
\end{figure}
The mass-radius relation for the PNS models calculated using
various versions of our EOS for the hot interior is shown in
Fig. 4. We assumed $n_\nu = 2\times 10^{-3}~{\rm fm^{-3}}$,
which was consistent with our definition of the neutrinosphere.
For the sake
of comparison, we also show the mass-radius relation for the
$T=0$ (cold catalyzed matter) EOS, which corresponds to cold
neutron star models. In the case of an isothermal hot interior
with $T_{\rm b}=15$~MeV we note a very small increase of
the maximum mass, as compared to the $T=0$ case (this result
is consistent with those obtained by Bombaci
et al. 1995, Prakash et al. 1997, for models composed of nucleons
and leptons only). However, the
effect on the mass-radius relation is quite strong, and
increases rapidly with decreasing stellar mass. In the case of
the isentropic EOS with a trapped lepton number, [$s=2,Y_l=0.4$],
the softening of the high-density EOS due to the trapped $Y_l$
leads
to the decrease of $M_{\rm max}$ compared to the $T=0$ case; as far
as the value of $M_{\rm max}$ is concerned, the softening
effect of $Y_l$ prevails over that of finite $s$ (this is
consistent with results of Takatsuka 1995, Bombaci et al.
1995, and Prakash et al. 1997 for purely nucleonic models). However, the
thermal effect on the stellar radius is
important even in the case of $Y_l=0.4$.
This can be seen by comparing $[s=2,~Y_l=0.4]$ curve with that
corresponding to the unphysical, fictitious case of
$[s=0.5,~Y_l=0.4]$.
Let us notice that,
for our EOS,
configurations with maximum (minimum) gravitational mass
are simultaneously those with the maximum (minimum) baryon mass.
The values of the minimum and maximum gravitational and baryon
masses for various EOS of the PNS interiors, based on
the LS-220 models of the nucleon component of dense matter,
are given in Table 1. The baryon (rest) mass of a PNS model
is defined by $M_{\rm bar}=Am_0$, where $A$ is the total baryon
number of a PNS, and $m_0$ is the mass of the hydrogen atom.
\begin{table}
\caption[]{
Minimum and maximum gravitational and baryon masses of cold neutron star
($T=0$) and isothermal ($T_{\rm b}=15~{\rm MeV}, Y_\nu=0$) and isentropic
($s=2, Y_l=0.4$) protoneutron stars. Equations of state are based
on the LS-220
model of the nucleon component of the matter.
}
\centering
\begin{tabular}{lllll}
\hline
& & & &\\
EOS & $M_{\rm min}$ & $M_{\rm bar,min}$
& $M_{\rm max}$ & $M_{\rm bar,max}$ \\
& $[M_{\odot}]$ & $[M_{\odot}]$ & $[M_{\odot}]$ & $[M_{\odot}]$ \\
& & & &\\
\hline
& & &\\
$T = 0$, $Y_\nu = 0$ & 0.054 & 0.055 & 2.044 & 2.406 \\
$T = 15$, $Y_\nu = 0$ & 0.864 & 0.892 & 2.050 & 2.391 \\
$s = 2$, $Y_l = 0.4$ & 0.675 & 0.676 & 1.972 & 2.183 \\
& & &\\
\hline
\end{tabular}
\end{table}
\vspace{0.5cm}
We find that the value of
$M_{\rm bar, max}$ is the largest one for the $T=0$ (cold catalyzed matter)
EOS. For an isothermal hot star with $T_{\rm b}=15$~MeV the total baryon mass
is slightly lower than $M_{\rm bar,max}^{[T=0]}$,
but for isentropic stars we see a significant
decrease of $M_{\rm bar, max}$ as compared to the $T=0$ case.
A newborn hot protoneutron star with
the maximum gravitational mass $M_{\rm max}^{[s=2,~Y_l=0.4]}=1.97\,M_\odot$
transforms, due to deleptonization and cooling, into a cold NS of
gravitational mass of $1.9~M_\odot$, some $0.15~M_\odot$ less than the maximum
gravitational mass of cold neutron stars (c.f. the analysis of Bombaci 1996).
For a given model of the neutrino opaque PNS core, the value of $M_{\rm min}$
depends on the location of the neutrinosphere.
However, this dependence is relatively weak.
For example in the case of EOS $[s=2,~Y_l=0.4]$ a rather drastic variation of
$n_\nu$ from $2\times 10^{-4}~{\rm fm^{-3}}$ to
$6\times 10^{-3}~{\rm fm^{-3}}$ implies change in
$M_{\rm min}^{[s=2,~Y_l=0.4]}$ from $0.686~M_\odot$ to $0.649~M_\odot$,
respectively (the radius of these minimum--mass stars varies then
from $44~{\rm km}$ to $32.8~{\rm km}$, respectively).
At a given mass, the radius of a PNS is significantly larger
than that of a cold NS.
It should be stressed, however, that the value of the radius
turns out to be
quite sensitive to the location of the edge of
the hot neutrino-opaque interior (i.e., to the value of
$n_\nu$),
especially for PNS which are not close to the $M_{\rm max}$
configuration.
The choice of $n_\nu$ made by Prakash et al. (1997) would lead to
a much smaller effect on $R$.
The roles of the thermal and the lepton number effects are
particularly pronounced in the mass--central density plots for the
PNS models, Fig. 5. The low-$\rho_{\rm centr}$ segments of the
$M - \rho_{\rm centr}$ plots for the PNS are dramatically
different from those for the cold NS models. The relevance of these
features for the stability of lower-mass PNS will be discussed in
Sect. 7.
\section{Characteristic timescales}
The EOS of PNS is evolving with time, due mainly to the
deleptonization process, which changes the composition of the
matter, and also due to changes of the internal temperature of the
star. However, these changes occur on the timescales $\tau_{\rm
evol}\sim $1-10~s (see below), which are
three or more orders of magnitude longer than the dynamical
timescale, governing the readjustment of pressure and gravity
forces. This dynamical timescale $\tau_{\rm dyn}\sim 1~$ms
corresponds also to the characteristic periods of the PNS
pulsations. In view of this, we are
able to decouple the PNS evolution from its dynamics, with a well defined
EOS of the PNS matter.
The evolution of the PNS interior results from cooling,
deleptonization, and dissipative transport processes, leading
to heating. Deleptonization of PNS is due to diffusion of
$\nu_e$, driven by the gradient of their chemical potential, and
occurs on a timescale $\tau_{\rm delept}\sim$ few seconds
(Prakash et al. 1997).
(Both $\tau_{\rm delept}$ and $\tau_{\rm cool}$ can actually
be shorter, due to the presence of convection in some layers
of a PNS).
The PNS cooling is due to neutrino losses from the
neutrinosphere; for the neutrino-opaque interior,
the characteristic timescale of cooling
$\tau_{\rm cool}$ is of the order of tens
of seconds (Sawyer \& Soni 1979). Notice, that deleptonization
is accompanied by a significant heating of neutrino-opaque core.
Highly degenerate $\nu_e$ diffuse out from the neutrino-opaque
core, and due to $E_{\nu_e}\gg T$ they deposit most of
their energy in the matter, which in view of
$\tau_{\rm cool}\gg \tau_{\rm delept}$ corresponds to the net
heating.
Radial pulsations of PNS are damped due to dissipative
processes, resulting from the weak interactions involving
nucleons and leptons. The dissipative effects can be represented
in the form of a bulk viscosity of hot dense matter (Sawyer 1980).
Let us consider first the case of hot interior with a significant
trapped lepton number. For $Y_l=0.3$ Sawyer (1980) gets
$\tau_{\rm damp}(Y_l=0.3)\sim 2000~(T/10~{\rm MeV})^2~$s. For
deleptonized hot interior the characteristic timescale is
somewhat shorter,
$\tau_{\rm damp}(Y_\nu=0)\sim 30~(T/10~{\rm MeV})^2$~s (Sawyer
1980). Still, in both cases we get $\tau_{\rm damp}\gtrsim
10^4\tau_{\rm dyn}$, so that one can safely neglect damping when
calculating the eigenfrequencies of radial pulsations of PNS.
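The hierarchy of timescales can be made explicit by tabulating the scalings quoted above; the sketch below simply restates the Sawyer (1980) damping times together with $\tau_{\rm dyn}\sim 1$~ms, with no new physics input.

```python
TAU_DYN = 1e-3   # dynamical / pulsation timescale, s

def tau_damp(T_MeV, trapped=True):
    """Bulk-viscosity damping time (s), from the Sawyer (1980) scalings
    quoted in the text: ~2000 (T/10 MeV)^2 s with a trapped lepton number,
    ~30 (T/10 MeV)^2 s after deleptonization."""
    prefac = 2000.0 if trapped else 30.0
    return prefac * (T_MeV / 10.0) ** 2

for T in (10.0, 15.0):
    for trapped in (True, False):
        ratio = tau_damp(T, trapped) / TAU_DYN
        print("T = %4.1f MeV, trapped: %-5s -> tau_damp/tau_dyn ~ %.0e"
              % (T, trapped, ratio))
```

In every case the ratio exceeds $10^4$, which is the basis of the adiabatic treatment of the pulsations.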
Summarizing, the estimates of the evolutionary and dissipative
timescales indicate that radial pulsations of the hot interior
of PNS can be treated as adiabatic, and can be studied using a
well defined EOS of dense hot matter, corresponding to a given
stage of evolution of a PNS.
Of course, all these remarks are valid only for the hot interior of
PNS (i.e., the region below the neutrinosphere). However, the
outer envelope contains less than $10^{-3}$ of the mass of PNS,
and its influence on the eigenfrequencies of radial pulsations
is negligible.
\section{Linear adiabatic radial pulsations of protoneutron stars}
Consider an idealized static configuration of a PNS and assume
it is spherically symmetric. Using
the notation of Landau \& Lifshitz (1975), we write the
metric for such a configuration as
\begin{equation}
{\rm d}s^2=
{\rm e}^\nu c^2{\rm d}t^2
-{\rm e}^\lambda {\rm d}r^2
-r^2({\rm d}\theta^2+\sin^2\theta {\rm d}\phi ^2)~,
\label{eq:metric}
\end{equation}
where $\lambda$ and $\nu$ are functions of $r$.
The hydrostatic equilibrium of the static PNS is described
by the
Tolman-Oppenheimer-Volkoff (TOV) equations (Tolman 1939,
Oppenheimer \&
Volkoff 1939)
\begin{equation}
\label{a}
{{\rm d}P\over {\rm d}r}=
-{Gm\rho \over r^2 (1-{2Gm\over rc^2})}
\left(1+{P\over \rho c^2 }\right)
\left(1+{4\pi Pr^3\over m c^2}\right)~,
\end{equation}
\begin{equation}
\label{b}
{{\rm d}m\over {\rm d}r}=4\pi r^2\rho~,
\end{equation}
\begin{equation}
\label{c}
{{\rm d}\nu \over {\rm d}r}=-{2\over (P+\rho c^2 )}
{{\rm d}P\over {\rm d}r}~,
\end{equation}
where $m$ is the mass contained within radius $r$.
Since our EOS of the PNS can always be written in the
one-parameter form
$P=P(\rho)$, the TOV equations
can be numerically integrated for a given
central density $\rho_{\rm centr}$, yielding the
stellar radius, $R,$
and the total gravitational mass,
$M=m(R)$, of the star.
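The numerical integration of the TOV equations is standard, and can be sketched as follows. The Python code below is not part of the original calculation: it uses a toy $\Gamma=2$ polytrope, $P=K\rho_0^\Gamma$ in geometrized units $G=c=1$, in place of the tabulated hot-matter EOS; the constants $K=100$ and central rest-mass density $\rho_{0,\rm c}=1.28\times10^{-3}$ are a common test case which yields $M\simeq 1.4$ and $R\simeq 9.6$ in these units.

```python
import math

# Toy stand-in for the one-parameter EOS P = P(rho): a Gamma = 2 polytrope,
# P = K*rho0^Gamma, in geometrized units G = c = 1.
K, GAMMA = 100.0, 2.0

def energy_density(P):
    """Total energy density e = rho0 + P/(Gamma - 1) for the polytrope."""
    rho0 = (max(P, 0.0) / K) ** (1.0 / GAMMA)    # rest-mass density
    return rho0 + max(P, 0.0) / (GAMMA - 1.0)

def tov_rhs(r, P, m):
    """Right-hand sides dP/dr, dm/dr of the TOV equations."""
    e = energy_density(P)
    dPdr = -(e + P) * (m + 4.0 * math.pi * r ** 3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * math.pi * r ** 2 * e
    return dPdr, dmdr

def integrate_tov(P_c, dr=1e-3):
    """RK4 integration outwards from the center until P ~ 0; returns (R, M)."""
    r = 1e-6                                     # avoid the r = 0 singularity
    P = P_c
    m = 4.0 / 3.0 * math.pi * r ** 3 * energy_density(P_c)
    while P > 1e-12 * P_c:
        k1P, k1m = tov_rhs(r, P, m)
        k2P, k2m = tov_rhs(r + dr / 2, P + dr / 2 * k1P, m + dr / 2 * k1m)
        k3P, k3m = tov_rhs(r + dr / 2, P + dr / 2 * k2P, m + dr / 2 * k2m)
        k4P, k4m = tov_rhs(r + dr, P + dr * k3P, m + dr * k3m)
        P += dr / 6 * (k1P + 2 * k2P + 2 * k3P + k4P)
        m += dr / 6 * (k1m + 2 * k2m + 2 * k3m + k4m)
        r += dr
    return r, m

rho0_c = 1.28e-3                                 # central rest-mass density
R_star, M_star = integrate_tov(K * rho0_c ** GAMMA)
```

A realistic calculation would replace the polytropic relations by interpolation in the tabulated EOS of dense hot matter.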
The equations governing infinitesimal radial adiabatic
stellar pulsations in general
relativity were derived by Chandrasekhar (1964),
and were rewritten by Chanmugam (1977) in a form which
turns out to be particularly suitable for numerical
applications.
Two important quantities describing the pulsations are
the relative radial displacement, $\xi = \Delta r/r$,
where $\Delta r$ is the radial displacement of a matter element,
and $\Delta P$, the corresponding
Lagrangian perturbation of the pressure. These two quantities
are determined from a system of two ordinary differential
equations, which we rewrite as
\begin{eqnarray}
{{\rm d}\xi \over {\rm d}r}&=&
-{1\over r}\left(3\xi+{\Delta P\over \Gamma P}\right)-
{{\rm d}P\over {\rm d}r}{\xi\over (P+\rho c^2)}~, \label{d}\\
{{\rm d}\Delta P \over {\rm d}r}&=&
\xi\left\{
{\omega^2 \over c^2}
{\rm e}^{\lambda-\nu}\left( P+\rho c^2\right)r
-4{{\rm d}P \over {\rm d}r}\right\}\nonumber\\
&+&\xi\left\{
\left({{\rm d}P \over {\rm d}r}\right)^2
{r \over(P+\rho c^2)}
-{8\pi G \over c^4}{\rm e}^\lambda (P+\rho c^2)Pr\right\}\nonumber\\
&+&{\Delta P} \left\{{{\rm d}P \over {\rm d}r}{1 \over
(P+\rho c^2)}-{4\pi G \over c^4}
(P+\rho c^2)r{\rm e}^\lambda
\right\}~,
\label{e}
\end{eqnarray}
where $\Gamma$ is a relativistic adiabatic index
(see Section 6),
$\omega$ is the eigenfrequency and the
quantities $\xi$ and $\Delta P$ are
assumed to have a harmonic time dependence
$\propto e^{i\omega t}$.
Our Eq. (\ref{e}) has been obtained from the second-order
pulsation equation of Chandrasekhar (1964) [his Eq. (59)],
using his Eqs. (35) and (36). While Eq. (\ref{d}) coincides with
Eq. (18) of Chanmugam (1977),
our Eq. (\ref{e}), in contrast to his
second equation [Eq. (19) of Chanmugam 1977],
does not exhibit a singularity at the
stellar surface.
Another important advantage of our system of pulsation equations,
Eqs. (\ref{d}) and (\ref{e}),
stems from the fact that they do not involve
any derivatives of the adiabatic index
$\Gamma$. Since we used tabulated
forms of the EOS and of $\Gamma$, this enabled us to
reach a very high
precision in solving the eigenvalue problem.
To solve Eqs.
(\ref{d}) and (\ref{e}) one needs two boundary conditions.
The condition of regularity at $r=0$ requires that for
$r\rightarrow 0$ the
coefficient of the $1/r$ term in Eq. (\ref{d}) must vanish,
\begin{equation}
\label{f}
\left(\Delta P\right)_{\rm center}=
-3\left(\xi \Gamma P\right)_{\rm center}~.
\end{equation}
Our normalization of eigenfunctions corresponds to $\xi(0)=1$.
The surface of the star is determined by
the condition that for $r\rightarrow R$,
one has $P\rightarrow 0$. This implies
\begin{equation}
\label{g}
\left(\Delta P\right)_{\rm surface} =0~.
\end{equation}
The problem of solving Eqs. (\ref{d}) and (\ref{e}) can be reduced
to a second-order (in $\xi$),
linear radial wave
equation of the Sturm-Liouville type. The quantity $\omega^2$ is
the eigenvalue of the Sturm-Liouville problem with boundary conditions
given by Eqs. (\ref{f}), (\ref{g}).
For a given static model of a PNS, we get a set of the
eigenvalues $\omega_0^2 < \omega_1^2
<...<\omega_{\rm n}^2<...,$ with corresponding
eigenfunctions $\xi_0,\xi_1,...,
\xi_{\rm n},...,$ where the eigenfunction $\xi_{\rm n}$ has
$n$ nodes within the star, $0\le r \le R$ (see, e.g., Cox 1980).
A given static model is stable with respect to small, radial, adiabatic
perturbations (pulsations), if $\omega_{\rm n}^2 > 0$ for all $n$.
The
configuration is marginally stable, if
the lowest eigenfrequency $\omega_0 = 0$.
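The shooting method is a simple way to solve such a Sturm-Liouville eigenproblem numerically: one integrates outwards from the center with the regularity condition and adjusts $\omega$ until the surface condition is satisfied. The Python sketch below is not the stellar problem itself: it applies the scheme to the toy eigenproblem $u''=-\omega^2 u$, $u'(0)=0$, $u(1)=0$, whose exact eigenvalues are $\omega_n=(n+{1\over 2})\pi$ and whose $n$-th eigenfunction has $n$ interior nodes, illustrating the ordered spectrum $\omega_0^2<\omega_1^2<\ldots$

```python
import math

def shoot(omega, n_steps=500):
    """Integrate u'' = -omega^2 u from x = 0 with u(0) = 1, u'(0) = 0
    (the regularity condition) and return the surface mismatch u(1);
    omega is an eigenvalue exactly when the mismatch vanishes."""
    h = 1.0 / n_steps
    u, v = 1.0, 0.0                      # v = u'
    for _ in range(n_steps):             # classical RK4 for the system (u, v)
        k1u, k1v = v, -omega ** 2 * u
        k2u, k2v = v + h / 2 * k1v, -omega ** 2 * (u + h / 2 * k1u)
        k3u, k3v = v + h / 2 * k2v, -omega ** 2 * (u + h / 2 * k2u)
        k4u, k4v = v + h * k3v, -omega ** 2 * (u + h * k3u)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return u

def eigenvalues(n_modes, omega_max=10.0, scan=0.05):
    """Bracket eigenvalues by scanning for sign changes of the mismatch,
    then refine each bracket by bisection."""
    found, omega, prev = [], scan, shoot(scan)
    while len(found) < n_modes and omega < omega_max:
        cur = shoot(omega + scan)
        if prev * cur < 0.0:             # an eigenvalue is bracketed
            lo, hi = omega, omega + scan
            for _ in range(40):
                mid = 0.5 * (lo + hi)
                if shoot(lo) * shoot(mid) < 0.0:
                    hi = mid
                else:
                    lo = mid
            found.append(0.5 * (lo + hi))
        omega, prev = omega + scan, cur
    return found

w0, w1, w2 = eigenvalues(3)              # expect (n + 1/2)*pi, n = 0, 1, 2
```

For the stellar problem one would instead integrate Eqs. (\ref{d}) and (\ref{e}) from the center with the condition (\ref{f}) and adjust $\omega^2$ until (\ref{g}) holds.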
\section{Adiabatic indices}
The adiabatic index within the star, $\Gamma$, plays a central
role both for the linear radial oscillations and for the
stability of PNS. In what follows, we will restrict ourselves to
the case of the neutrino-opaque interior; the layer above the
neutrinosphere contains only about $10^{-3}$ of the total mass,
and influences neither the spectrum of radial
oscillations nor the stability of PNS.
Under physical conditions prevailing within the hot interior of
a PNS, the perturbation of a local nucleon density, $\delta n$,
of a matter element during radial oscillations, takes place at
constant entropy per baryon, $s$, and constant electron lepton
number per baryon, $Y_l$. Due to the high density and temperature,
all constituents of matter can be considered as being
in thermodynamic equilibrium (the timescale of the reactions
leading to thermodynamic equilibrium, $\tau_{\rm react}$, is
much shorter than the pulsation timescale,
$\tau_{\rm puls}\sim \tau_{\rm dyn}$).
The
adiabatic index, governing linear perturbation of the pressure
within the star under these conditions, will be denoted by
$\Gamma_{\rm a}$. It will be given by
\begin{equation}
\Gamma_{\rm a} \equiv
{n\over P}\left({{\rm d}P\over {\rm d}n}\right)_{s,Y_l}~,
\label{Gamma_a}
\end{equation}
where the derivative is to be calculated at fixed $s$ and $Y_l$.
It should be stressed that the calculation of $\Gamma_{\rm a}$
requires more than just the knowledge of the EOS of the PNS
interior. Namely, one needs to know the {\it perturbed} EOS,
under the constraint of constant $s$ and $Y_l$, and assuming
thermodynamic equilibrium of the matter constituents.
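With a tabulated EOS, the derivative in Eq. (\ref{Gamma_a}) has to be evaluated numerically along the perturbed EOS. The Python sketch below uses a toy analytic stand-in (a nonrelativistic ideal gas, $P\propto n^{5/3}{\rm e}^{2s/3}$, for which $\Gamma_{\rm a}=5/3$ exactly) in place of the tabulated hot-matter EOS; the function names are illustrative only, and a realistic calculation would replace the toy pressure function by interpolation in the table at fixed $(s,Y_l)$.

```python
import math

def pressure(n, s, Y_l):
    """Toy perturbed EOS: nonrelativistic ideal gas, P ~ n^(5/3) exp(2s/3).
    A realistic calculation interpolates the tabulated hot-matter EOS at
    constant entropy per baryon s and lepton fraction Y_l instead."""
    return n ** (5.0 / 3.0) * math.exp(2.0 * s / 3.0)

def gamma_a(n, s, Y_l, eps=1e-5):
    """Gamma_a = (n/P)(dP/dn) at fixed (s, Y_l), evaluated as a centered
    logarithmic derivative d ln P / d ln n."""
    lnP_plus = math.log(pressure(n * (1.0 + eps), s, Y_l))
    lnP_minus = math.log(pressure(n * (1.0 - eps), s, Y_l))
    return (lnP_plus - lnP_minus) / (math.log1p(eps) - math.log1p(-eps))

g = gamma_a(n=0.16, s=2.0, Y_l=0.4)      # equals 5/3 for the toy EOS
```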
In the case of dense matter one can define several
$\Gamma$'s, which are physically (and numerically)
different from $\Gamma_{\rm a}$. The EOS of dense hot matter
within a static PNS
yields the quantity
\begin{equation}
\Gamma_{\rm EOS} \equiv
{n\over P}\left({{\rm d}P\over {\rm d}n}\right)_{\rm star}~.
\label{Gamma_EOS}
\end{equation}
Strictly speaking, $\Gamma_{\rm EOS}$ is not an ``adiabatic'' index
unless the entropy $s$ is constant throughout the star.
In the case of sufficiently cold matter, the reactions between
the matter
constituents are so slow that the matter composition remains
fixed (frozen) during perturbations, because
$\tau_{\rm react}\gg \tau_{\rm dyn}$. Neutrinos then do not
contribute to the thermodynamic quantities of the matter.
In such a case, the
appropriate adiabatic index would read
\begin{equation}
\Gamma_{\rm frozen} \equiv
{n\over P}\left({{\rm d}P\over {\rm d}n}\right)_{s,Y_e}~,
\label{Gamma_frozen}
\end{equation}
where $s,Y_e$ correspond to the equilibrium model.
The quantity $\Gamma_{\rm frozen}$ is relevant for radial
pulsations of standard, cold neutron stars, where the difference
between $\Gamma_{\rm frozen}$ and $\Gamma_{\rm EOS}$ has
interesting consequences for the stability of the cold NS models in
the vicinity of $M_{\rm max}$ (Gourgoulhon et al. 1995).
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F6.eps}
\end{center}
\caption[]{
Parameters $\Gamma_{\rm a}$, $\Gamma_{\rm EOS}$, and
$\Gamma_{\rm frozen}$, versus matter density, for the isothermal
hot interior with $T_{\rm b}=15~{\rm MeV}$ and with no trapped lepton
number. The outer edge of the hot
core has been located at
$\rho_\nu=3.5\times 10^{12}~{\rm g\,cm^{-3}}$,
and the rapid changes in the $\Gamma$'s below $n_\nu$ result
from the temperature drop in the neutrinosphere of the PNS.
}
\label{fig6}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F7.eps}
\end{center}
\caption[]{
Parameters $\Gamma_{\rm a}$, $\Gamma_{\rm EOS}$, and
$\Gamma_{\rm frozen}$, versus matter density, for the isentropic
hot interior with $s=2$ and with trapped lepton fraction
$Y_l=0.4$. The outer edge of the hot
core has been located at
$\rho_\nu=3.5\times 10^{12}~{\rm g\,cm^{-3}}$,
and the rapid changes in the $\Gamma$'s below $n_\nu$ result
from the temperature drop in the neutrinosphere of the PNS. The
very steep drop in $\Gamma$'s above $10^{14}~{\rm g\,cm^{-3}}$ results
from the phase transition from matter with nuclei to bulk
(homogeneous) dense matter.
}
\label{fig7}
\end{figure}
The dependence of the various $\Gamma$'s on the matter density in
the PNS interior, for two models of the hot neutrino-opaque core
of PNS, is displayed in Figs. 6 and 7. Above nuclear density (i.e.,
for bulk, homogeneous matter) $\Gamma_{\rm frozen}$ is the
highest of all $\Gamma$'s; freezing of the lepton composition
stiffens the matter (cf. $\S$81 of Landau \& Lifshitz 1987).
In the case of the isothermal core, Fig. 6, $\Gamma_{\rm EOS}$
is significantly lower than $\Gamma_{\rm a}$ and $\Gamma_{\rm
frozen}$ for subnuclear densities above $n_\nu$
(this results from the significant density dependence
of the entropy per baryon in this density interval).
For isothermal models $\Gamma_{\rm EOS}\equiv \Gamma_{\rm star}
< \Gamma_{\rm a}$, except for the region close to and below
the neutrinosphere: for $\rho>10^{13}~{\rm g~cm^{-3}}$ our
isothermal PNS models are convectively stable.
The differences between the various $\Gamma$'s
are much smaller in the case of the isentropic core,
where fixing $s$ throughout the star and within the EOS makes
$\Gamma_{\rm EOS}$ and $\Gamma_{\rm a}$ indistinguishable,
except near the neutrinosphere, where a dramatic drop in $s$
and in $T$ produces a strong deviation of
$\Gamma_{\rm EOS}$
from both $\Gamma_{\rm a}$ and $\Gamma_{\rm frozen}$.
The differences between $\Gamma_{\rm frozen}$ and $\Gamma_{\rm a}$
are mainly due to the changes of $Y_e$ throughout the star.
The quantity $\Gamma_{\rm frozen}$ corresponds to adiabatic
pulsations keeping $Y_e$
fixed, while $\Gamma_{\rm a}$ was calculated assuming beta equilibrium
during adiabatic oscillations at fixed $Y_l$.
It should be
stressed that the pulsational properties of PNS are determined
essentially by the values of the relevant $\Gamma$ well above
the density $n_\nu$. The outer layer with $n<n_\nu$ contains a
very small fraction of the stellar mass,
and therefore the rapid variations of $\Gamma$
close to $n_\nu$ have no effect on the global dynamics of PNS.
\section{Eigenfrequencies and instabilities}
The effects of high temperature and of trapped
neutrinos influence both the EOS and the adiabatic index of
the PNS interior. Our calculations show a rather strong
influence of these effects on the eigenfunctions and the
eigenfrequencies of radial pulsations of PNS. These
effects also imply significant differences, especially within
some (observationally interesting) interval of stellar masses,
between the pulsational properties of PNS and cold NS.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F8.eps}
\end{center}
\caption[]{
The eigenfrequencies of the fundamental mode, $n=0$,
versus stellar mass for considered models of dense matter.
The nearly horizontal part of the curve for the
$T_{\rm b}=15$~MeV model corresponds
to the configurations with central density below the value
corresponding to the minimum mass of PNS.
For the sake of comparison we also show the corresponding curve
for cold neutron stars.
}
\label{fig8}
\end{figure}
In Fig. 8 we plotted the eigenfrequency of the
fundamental mode of the PNS pulsations, versus
stellar mass, for several models of the EOS of the hot
interior.
For the sake of comparison, we have presented also the
corresponding plot for cold neutron stars.
The effect of finite entropy and of trapped
lepton number on $\omega_0$ depends rather strongly
on the stellar mass. The
relativistic instability takes place very close to $M_{\rm
max}$, so that the ``classical stability criterion''
(protoneutron star models to the left of the maximum in the $M -
R$ plot, Fig. 4, are secularly unstable with respect to the
$n=0$ oscillations) is valid to a very good approximation.
Generally, entropy and trapped lepton number soften the $n=0$
mode with respect to the $T=0$ models. This softening is
particularly strong for lower stellar mass. In the case of
$s=2$, $Y_l=0.4$ PNS, the fundamental mode becomes secularly
unstable at $M=0.676~M_\odot$, very close to the
minimum of the $M - R$ curve, Fig. 4. The classical stability
criterion gives therefore a rather precise location of the
instability point for the {\it isentropic} PNS: the models to
the right of the minimum in the $M - R$ curve are
unstable with respect to the radial pulsations in the
fundamental mode. However, the value of $M_{\rm min}$ is
dramatically larger than that for the {\it cold} ($T=0$) neutron
stars. We have $M_{\rm min}[s=2,Y_l=0.4]=0.675~M_\odot$,
while $M_{\rm min}[T=0]=0.054~M_\odot$.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F9.eps}
\end{center}
\caption[]{
The eigenfrequencies of the first and second overtone, $n=1,2$,
versus stellar mass for considered models of dense matter.
For the sake of comparison we also show the corresponding curves
for cold neutron stars.
}
\label{fig9}
\end{figure}
In Fig. 9 we plotted the eigenfrequencies of the
$n=1,~2$ radial modes versus stellar mass, for the isentropic
models of the hot PNS cores. For the sake of comparison,
we have shown also the corresponding plots for cold NS.
The differences
between the NS and the PNS $n=1,~2$ eigenfrequencies are very large,
except for a narrow interval of masses in the vicinity of the
maximum mass. These differences reflect the drastic differences
in the structure of the outer layers of the PNS and the NS
models. Generally, PNS are much softer with respect to the
$n=1,~2$ radial modes.
The different
structure of the outer layers of stellar models with isentropic
and isothermal cores implies strong differences in the
spectra of the lowest three modes of radial pulsations.
Clearly, the $M<1~M_\odot$ isothermal models are much ``softer''
with respect to all considered modes of radial pulsations than
the isentropic ones; this is due to the fact that they are
significantly less compact (inflated by the thermal effects)
than their isentropic counterparts.
The characteristic ``turning points'' and the intermodal
intersections
for the PNS models on the low-mass side of Fig. 9
are due to the existence of minimum masses.
However, it should be stressed that the frequency intersection
points, seen in the case of the $n=1$ and the $n=2$
eigenfrequencies, do not correspond to the same configuration
of the PNS (while having the same mass, these configurations
have different central densities, see Figures 10, 11).
The situation is less complicated
in the case of the plots
of the eigenfrequencies versus the central density
of the stellar models, $\rho_{\rm centr}$,
presented in Figures 10 and 11.
At lower central densities the values of
$\omega_{\rm n}$ decrease monotonically, but very slowly, with
decreasing central density.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F10.eps}
\end{center}
\caption[]{
The eigenfrequencies of three lowest radial modes,
$n=0,~1,~2$, for hot isentropic protoneutron stars with
trapped lepton fraction $Y_l=0.4$, versus central density.
For the sake of comparison we also show the corresponding curves
for cold neutron stars (thin solid lines).
}
\label{fig10}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=8.5cm \epsfbox{5967F11.eps}
\end{center}
\caption[]{
The eigenfrequencies of three lowest radial modes,
$n=0,~1,~2$, for protoneutron stars with a hot isothermal
core with $T_{\rm b}=15~{\rm MeV}$, versus central density.
For the sake of comparison we also show the corresponding curves
for cold neutron stars (thin solid lines).
}
\label{fig11}
\end{figure}
The effects of temperature and
of the differences in adiabatic indices $\Gamma$
are particularly strong
in the case of the isothermal hot interior, with
$T=T_\infty (g_{00})^{-1/2}$. Our results for the first three radial
modes ($n=0,~1,~2$), for $T_{\rm b}=15~$MeV, are given in Fig. 11,
where we show also for comparison
results obtained for cold NS.
The three curves presented in Fig. 11 correspond to three different
choices of $\Gamma$, governing the changes of pressure vs. density during
pulsations. Although only the results for $\Gamma=\Gamma_{\rm a}$ are physical,
the differences between the curves visualize the role of the proper treatment
of chemical equilibrium and adiabaticity when the PNS is oscillating.
$\Gamma_{\rm EOS}$ is determined by the equation of state of the matter
within a {\it static} model of PNS.
As one can see from Fig. 11,
the use of $\Gamma_{\rm EOS}$ usually leads to a significant
underestimate of the
eigenfrequencies of the oscillation modes. A notable
exception to this behavior is the fundamental mode near
the maximum mass of PNS, for which the results obtained using
$\Gamma_{\rm EOS}$ are very close to those obtained using
$\Gamma_{\rm a}$.
Consequently, the classical static stability
criterion at $M_{\rm max}$ works rather well also for the {\it
isothermal} PNS.
However,
the difference between $\omega_{0}^{\rm (EOS)}$ and
actual $\omega_0$
increases with decreasing $\rho_{\rm centr}$ and
the static stability criterion at
$M_{\rm min}$ does not hold. We have
$M_{\rm min}[T_{\rm b}=15~{\rm MeV}]=0.86~M_\odot$,
but configurations in the neighbourhood of the
mass minimum turn out to be stable with respect to the fundamental
mode of the radial pulsations, calculated using the {\it actual}
(physical) values of $\Gamma=\Gamma_{\rm a}$.
In Fig. 11 we can see an interesting phenomenon ---
abrupt, nearly stepwise
changes in frequencies of the consecutive
(neighbouring) oscillation modes,
especially pronounced
for cold neutron stars,
but clearly visible also in the case of
$T_{\rm b}=15$~MeV, $Y_\nu=0$ models of PNS.
This is the ``avoided crossing'' phenomenon
of the {\it radial} modes of neutron stars.
This effect is known from the analysis of
{\it nonradial} oscillations in ``ordinary'' stars (Aizenman, Smeyers
and Weigert 1977, Christensen--Dalsgaard 1980) and
has recently been
considered by Lee and Strohmayer (1996) for {\it nonradial} oscillations of
rotating neutron stars.
Our preliminary studies of this effect indicate
that the stepwise changes of $\omega_n$ are due to a change
of the character of the standing-wave solution of the eigenproblem.
Namely, at the ``avoided crossing'' point the solution changes from a
standing wave localized mainly
in the outer layer of the star to one localized predominantly in the
central core.
This topic will be discussed in a separate paper (Gondek et al. 1996).
\section{Discussion and conclusions}
The birth of a protoneutron star is a dynamical process, and a
newborn protoneutron star is expected to pulsate,
these pulsations being excited during the formation of the star. In the
present paper we studied linear, radial pulsations of protoneutron
stars, under various assumptions concerning the hot stellar core,
and for a large interval of stellar masses.
The spectrum of the lowest modes of radial pulsations of
protoneutron stars is quite different from that of cold neutron
stars. Generally, protoneutron stars are significantly softer
with respect to radial pulsations than cold neutron stars,
and this difference increases for higher modes and for lower
stellar masses. These differences stem from the different structure
of protoneutron stars, which, in contrast to cold neutron stars,
have extended envelopes inflated
by thermal and trapped-neutrino effects.
The standard static criteria of stability of neutron star
models were derived under the assumption of cold, catalyzed
matter (Harrison et al. 1965). We have shown that, to a rather
good approximation, the configuration with maximum mass
separates the stable configurations from the secularly
unstable ones (with respect to the fundamental mode of small radial
pulsations). Therefore, despite thermal and neutrino-trapping
effects, one can apply the standard ``maximum mass'' criterion to locate
the relativistic instability of
protoneutron stars, both for the isentropic and the isothermal
hot, neutrino opaque cores.
The role of thermal effects increases with decreasing stellar
mass. The static ``minimum mass'' criterion applies with rather
good precision to protoneutron stars with hot isentropic,
neutrino-opaque cores. This is due to the fact that the
perturbation itself conserves entropy (pulsations are
adiabatic), and the equilibration of matter is sufficiently fast.
However, the ``minimum mass'' criterion does not apply to
protoneutron stars with hot isothermal cores. In both cases, the
minimum masses of protoneutron stars are rather large (for our
models of dense hot matter we obtained the values of
$0.675~M_\odot$ and $0.86~M_\odot$ for the isentropic
($s=2$, $Y_l=0.4$) and the isothermal ($T_{\rm b}=15~{\rm MeV}$)
models, respectively).
Our treatment of the thermal state of the protoneutron star
interior should be considered as very crude. The temperature
profile might be affected by convection. Also, our method of
locating the neutrinosphere was very approximate. Clearly, the
treatment of thermal effects can be refined, but we do not think
this will significantly change our main results.
Our calculations were performed for only one model of the
nucleon component of dense hot matter. The model was realistic,
and enabled us to treat in a unified way the whole interior
(core as well as the envelope) of
the protoneutron star. However, in view of the uncertainties in
the EOS of dense matter at supranuclear densities, one should of
course study the whole range of theoretical possibilities, for a
broad set, from soft to stiff, of supranuclear, high-temperature
EOS. An example of such an investigation,
in the case of {\it static} protoneutron stars,
is the study of Prakash et al. (1997).
In a hypothetical scenario proposed by Brown and Bethe (1994),
a protoneutron star is born as a hot, lepton-rich object,
composed of nucleons and leptons. If the final state of the cold
neutron star is characterized by a large electron fraction, due
to the appearance of a large-amplitude $K^-$ condensate
(or
because a large fraction of baryons are negatively charged
hyperons), then the cold equation of state of neutron-star
matter is softer than that of the hot, lepton-rich protoneutron
star. In such a case,
$M_{\rm bar,max}({\rm NS})<M_{\rm bar,max}({\rm PNS})$,
and all protoneutron stars with baryon mass
exceeding $M_{\rm bar,max}({\rm NS})$ will eventually collapse into black
holes. Clearly, in such a case the problem of stability
of the protoneutron star models in the vicinity of
$M_{\rm bar,max}({\rm PNS})$ with respect to radial
pulsations loses its significance.
\begin{acknowledgements}
We are very grateful to W. Dziembowski for introducing us into
the topic of the ``avoided crossing'' phenomena, for his
numerous helpful remarks, and for careful reading of the
manuscript.
This research was partially supported by the KBN grant No. P304
014 07 and by the KBN grant No. 2P03D01211 for D. Gondek.
D. Gondek and P. Haensel were also supported by the program
R{\'e}seau Formation Recherche of the French Minist{\`e}re
de l'Enseignement Sup{\'e}rieure et de la Recherche.
\end{acknowledgements}
\par
\section{Introduction}
In this article we consider the following problem, the \(p\)-adic instance of the forward Galois problem: given a \(p\)-adic field \(K\) and a polynomial \(F(x) \in K[x]\) over that field, what is its Galois group \(G := \operatorname{Gal}(F/K)\)?
Over any field for which polynomial factorization algorithms are known, the forward Galois problem can always be solved with the \define{naive algorithm}: explicitly compute the splitting field of \(F\) by repeatedly adjoining a root of it to the base field, and then explicitly compute the automorphisms of the splitting field. To date, there is no general solution to the \(p\)-adic forward Galois problem other than the naive algorithm.
This article presents a general algorithm. In practice, it can for example quickly determine the Galois group of most irreducible polynomials of degree 16 over \(\mathbb{Q}_2\) and has been used to compute some non-trivial Galois groups at degree 32. It has been tested on polynomials defining all extensions of \(\mathbb{Q}_2\), \(\mathbb{Q}_3\) and \(\mathbb{Q}_5\) of degree up to 12, all extensions of \(\mathbb{Q}_2\) of degree 14, and all totally ramified extensions of \(\mathbb{Q}_2\) of degrees 18, 20 and 22, the latter three being new. See \cref{gg-sec-implementation}.
Our implementation is publicly available \cite{galoiscode} and pre-computed tables of Galois groups are available from here also.
\subsection{Overview of algorithm}
Our algorithm uses the ``resolvent method''. We now describe a concrete instance.
Suppose \(F(x) \in \mathbb{Q}_p[x]\) is irreducible of degree \(d\), and therefore defines an extension \(L/\mathbb{Q}_p\) of degree \(d\).
The ramification filtration of this extension is a tower \(L_t=L/\ldots/L_0=\mathbb{Q}_p\). Let \(F_1(x)\in \mathbb{Q}_p[x]\) be a defining polynomial for \(L_1/\mathbb{Q}_p\). By Krasner's lemma, any polynomial in \(\mathbb{Q}[x]\) sufficiently close to \(F_1\) is also a defining polynomial, so we may take \(F_1 \in \mathbb{Q}[x]\). It is irreducible and so defines the number field \(\mathcal{L}_1/\mathcal{L}_0=\mathbb{Q}\) which has a unique completion embedding into \(L_1\). Repeating this procedure up the tower, we obtain the tower of number fields \(\mathcal{L}=\mathcal{L}_t/\ldots/\mathcal{L}_0=\mathbb{Q}\) such that \(\mathcal{L}\) embeds uniquely into \(L\). We call \(\mathcal{L}/\mathbb{Q}\) a \define{global model} of \(L/\mathbb{Q}_p\).
Let \(d_i:=(\mathcal{L}_i:\mathcal{L}_{i-1})=(L_i:L_{i-1})\), then \(\operatorname{Gal}(\mathcal{L}_i/\mathcal{L}_{i-1}) \leq S_{d_i}\) and therefore \(\operatorname{Gal}(\mathcal{L}/\mathbb{Q}) \leq W := S_{d_t} \wr \cdots \wr S_{d_1}\). Observe also that naturally \(\operatorname{Gal}(L/\mathbb{Q}_p) \leq \operatorname{Gal}(\mathcal{L}/\mathbb{Q})\) since the left hand side is a decomposition group of the right hand side.
Suppose \(\alpha_1 \in \mathcal{L}\) generates \(\mathcal{L}/\mathbb{Q}\), and let \(\alpha_2,\ldots,\alpha_d\in\bar\mathbb{Q}\) be its \(\mathbb{Q}\)-conjugates. Suppose we choose some subgroup \(U \leq W\), find an \define{invariant} \(I \in \mathbb{Z}[x_1,\ldots,x_d]\) such that \(\operatorname{Stab}_W(I) = U\) and compute the \define{resolvent}
\[R(x) = \prod_{wU \in W/U}(t - wU(I)(\alpha_1,\ldots,\alpha_d)) \in \mathbb{Z}[t]\]
by finding sufficiently precise complex approximations to \(\alpha_1,\ldots,\alpha_d\), giving a complex approximation to \(R\), whose coefficients we can then round to \(\mathbb{Z}\).
One can show that \(\operatorname{Gal}(R/\mathbb{Q}) = q(\operatorname{Gal}(\mathcal{L}/\mathbb{Q}))\) provided \(R\) is squarefree, and hence \(\operatorname{Gal}(R/\mathbb{Q}_p) = q(\operatorname{Gal}(L/\mathbb{Q}_p)) = q(\operatorname{Gal}(F/\mathbb{Q}_p))\), where \(q:W\to S_{W/U}\) is the action of \(W\) on the cosets of \(U\).
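As a minimal numeric illustration of this construction (an independent Python toy, not our implementation): take \(F = x^3 - 2\), \(W = S_3\), \(U = A_3\), and the invariant \(I = x_1^2x_2 + x_2^2x_3 + x_3^2x_1\), whose stabilizer in \(W\) is \(A_3\). Evaluating \(I\) at the complex roots over the two cosets of \(W/U\) and rounding gives the classical quadratic resolvent, here \(R(t) = t^2 + 6t + 36\).

```python
import cmath

# Toy instance: F = x^3 - 2 over Q, W = S_3, U = A_3, and the invariant
# I(x1,x2,x3) = x1^2*x2 + x2^2*x3 + x3^2*x1 with Stab_W(I) = A_3 = U.
r1 = 2.0 ** (1.0 / 3.0)
w = cmath.exp(2j * cmath.pi / 3)
roots = [r1, r1 * w, r1 * w ** 2]        # complex approximations to the roots

def I(x):
    return x[0] ** 2 * x[1] + x[1] ** 2 * x[2] + x[2] ** 2 * x[0]

# Coset representatives of W/U: the identity and one transposition.
vals = [I(roots), I([roots[1], roots[0], roots[2]])]

# R(t) = (t - vals[0])(t - vals[1]) has coefficients symmetric in the roots,
# hence (approximately) integers: round the real parts.
s1, s2 = vals[0] + vals[1], vals[0] * vals[1]
R = [1, round((-s1).real), round(s2.real)]   # -> [1, 6, 36]
```

Here \(R(t)=t^2+6t+36\) is irreducible over \(\mathbb{Q}\), consistent with \(\operatorname{Gal}(F/\mathbb{Q})=S_3\) surjecting onto \(S_3/A_3\cong C_2\).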
In particular, if we define \(s(G)\) to be the multiset of the sizes of orbits of the permutation group \(G\), and we let \(S\) be the multiset of the degrees of the factors of \(R\) over \(K\), then \(s(q(\operatorname{Gal}(F/\mathbb{Q}_p))) = S\).
We compute the set \(\mathcal{G}\) of all transitive subgroups of \(W\), so that \(\operatorname{Gal}(F/\mathbb{Q}_p) \in \mathcal{G}\). If \(\abs{\mathcal{G}}>1\), we search through the subgroups \(U \leq W\) in index order until we find one such that \(\{s(q(G)) \,:\, G \in \mathcal{G}\}\) contains at least two elements. We then compute the corresponding resolvent \(R(t) \in \mathbb{Z}[t]\), factorize it over \(\mathbb{Q}_p\) and let \(S\) be the multiset of degrees of factors, and replace \(\mathcal{G}\) by \(\{G\in\mathcal{G} \,:\, s(q(G)) = S\}\). Observe that \(\mathcal{G}\) is now strictly smaller than it was before, and we still have \(\operatorname{Gal}(F/\mathbb{Q}_p) \in \mathcal{G}\).
We repeat this process until \(\abs{\mathcal{G}}=1\), at which point this single group is the Galois group and we are done.
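The group-theoretic side of one elimination step can be made concrete on a small self-contained example (degree 4; plain Python, independent of our implementation). Take \(W = S_4\) and \(U = D_4\), the stabilizer of the invariant \(x_1x_3 + x_2x_4\); the corresponding resolvent is the classical cubic resolvent of a quartic, and the statistic \(s(q_U(G))\) of the transitive candidates already separates \(\{S_4, A_4\}\), \(\{C_4, D_4\}\) and \(\{V_4\}\).

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations are tuples with p[i] = p(i)."""
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens):
    """Subgroup generated by gens (naive closure; fine for tiny groups)."""
    G = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        G |= frontier
        frontier = {compose(g, h) for g in G for h in G} - G
    return G

W = set(permutations(range(4)))                  # W = S_4
# U = Stab_W(I) for I = x0*x2 + x1*x3, i.e. the dihedral group D_4.
def pair(s):
    return {frozenset({s[0], s[2]}), frozenset({s[1], s[3]})}
U = {s for s in W if pair(s) == pair((0, 1, 2, 3))}
assert len(U) == 8

# Left cosets W/U, each stored as the frozenset of its elements.
cosets = {frozenset(compose(g, u) for u in U) for g in W}

def orbit_sizes(G):
    """The statistic s(q_U(G)): orbit sizes of G on W/U via g.(hU) = (gh)U."""
    remaining, sizes = set(cosets), []
    while remaining:
        C = next(iter(remaining))
        orbit = {frozenset(compose(g, h) for h in C) for g in G}
        sizes.append(len(orbit))
        remaining -= orbit
    return tuple(sorted(sizes))

candidates = {
    "C4": closure([(1, 2, 3, 0)]),
    "V4": closure([(1, 0, 3, 2), (2, 3, 0, 1)]),
    "D4": closure([(1, 2, 3, 0), (0, 3, 2, 1)]),
    "A4": closure([(1, 2, 0, 3), (0, 2, 3, 1)]),
    "S4": closure([(1, 0, 2, 3), (1, 2, 3, 0)]),
}
stats = {name: orbit_sizes(G) for name, G in candidates.items()}
# stats: S4, A4 -> (3,); C4, D4 -> (1, 2); V4 -> (1, 1, 1)
```

A factorization of the cubic resolvent over the base field with degree multiset \((1,1,1)\) would thus immediately pin down \(V_4\), while \((1,2)\) leaves \(\{C_4, D_4\}\) for a further resolvent to separate.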
In \cref{gg-sec-arm} we describe our precise formulation of this algorithm.
We have described one method of producing a global model, which results in the group \(W\) (relative to which we compute resolvents) being a wreath product of symmetric groups. It is better for \(W\) to be as small as possible, since this will reduce the index \((W:U)\) required, and hence also reduce \(\deg R\). In \cref{gg-sec-glomod} we discuss some other constructions. The best constructions take advantage of the simple structure of the Galois group of a ``singly ramified'' extension, something like \(C_d\) for unramified extensions, \(C_d \rtimes (\mathbb{Z}/d\mathbb{Z})^\times\) for tame extensions and \(C_p^k \rtimes H\) for wild extensions. We can also produce global models for reducible \(F\) using global models for its factors.
In this example, we deduced the Galois group by enumerating the set \(\mathcal{G}\) of all possibilities and then eliminating candidates. This is the ``group theory'' part of the algorithm. We have other methods which avoid enumerating all subgroups of \(W\), and instead work down the graph of subgroups of \(W\). These are discussed in \cref{gg-sec-groups}.
The function \(s\) taking a group and returning the multiset of sizes of its orbits is a ``statistic'', and there are other choices. These are discussed in \cref{gg-sec-statistic}. Some statistics provide more information than others, and therefore can result in smaller indices \((W:U)\) being required, but this comes at the expense of taking longer to compute.
We search for \(U\) by enumerating all the subgroups of \(W\) of each index in turn until we find one which is useful. There are other methods which try to avoid computing all of these subgroups, of which there may be many. One method restricts to a special class of subgroups. These are given in \cref{gg-sec-choice}.
\subsection{Previous work}
Over \(p\)-adic fields, there are some special cases where Galois groups can be computed.
\begin{itemize}
\item It is well known that the unramified extensions of \(K\) of degree \(d\) are all isomorphic, Galois and have cyclic Galois group \(C_d\). Hence if the irreducible factors of \(F(x)\) all define unramified extensions, then the splitting field of \(F(x)\) is unramified, Galois and cyclic with degree \(\operatorname{lcm} \{\deg g \,:\, g \in \operatorname{Factors}(F)\}\).
\item Suppose \(L/K\) is tamely ramified. Then it has a maximal unramified subfield \(U\), and \(L/U\) is totally (tamely) ramified. It is well known that \(L = U(\sqrt[e]{\zeta^r \pi})\) where \(e = (L:U)\) for some uniformizer \(\pi \in K\), \(\zeta\) a root of unity generating \(U\) and \(r \in \mathbb{Z}\). In this special form, it is straightforward to write down the splitting field and Galois group of \(L/K\). Furthermore, it is easy to compute the compositum of tame extensions, and hence if each irreducible factor of \(F(x)\) defines a tamely ramified extension, we can compute its Galois group. See \cite[Ch. II, \S2.2]{DPhD} for an exposition.
\item Greve and Pauli have studied \define{singly ramified} extensions, that is extensions whose ramification polygon has a single face, giving an explicit description of their splitting field and Galois group \cite[Alg. 6.1]{GP}. So in particular if \(F(x)\) is an Eisenstein polynomial whose ramification polygon has a single face, then we can compute its Galois group. An explicit description of this algorithm appears in Milstead's thesis \cite[Alg. 3.23]{Mil}.
\item In his thesis, Greve extends this to an algorithm for \define{doubly ramified} extensions \cite[\S6.3]{GreveTh}, that is whose ramification polygon has two faces. Essentially this uses the singly ramified algorithm for the bottom part, and class field theory and group cohomology to deal with the elementary abelian top part.
\item Jones and Roberts \cite{LFDB} have computed all extensions of \(\mathbb{Q}_p\) of degree up to 12, including their Galois group and some other invariants. These are available online in the Local Fields Database (LFDB). Some of the methods they use to compute Galois groups will feature in our general algorithm.
\item Awtrey et al. have also considered degree 12 extensions of \(\mathbb{Q}_2\) and \(\mathbb{Q}_3\) \cite{AwtreyTh}; degree 14 extensions of \(\mathbb{Q}_2\) \cite{AwtreyD14}; degree 15 extensions of \(\mathbb{Q}_5\) \cite{AwtreyD15}; and degree 16 \emph{Galois} extensions of \(\mathbb{Q}_2\) \cite{AwtreyD16}. The main new idea in these articles is the \define{subfield Galois group content} of an extension \(L/K\): the set of Galois groups of all proper subfields of \(L/K\). This invariant of \(\operatorname{Gal}(L/K)\) is useful in distinguishing between possible Galois groups, and can be computed given a database of all smaller extensions.
\end{itemize}
The difficult case appears to be when the factors of \(F\) define wildly ramified extensions whose ramification polygons have many faces.
Recently Rudzinski has developed techniques for evaluating linear resolvents \cite{Rudz} and Milstead has used a combination of these techniques with the ones mentioned above to compute some Galois groups in this difficult class \cite{Mil}.
\subsection{Mathematical notation}
Roman capital letters \(K,L,\ldots\) denote \(p\)-adic fields. The ring of integers of \(K\) is denoted \(\mathcal{O}_K\), a uniformizer is denoted \(\pi_K\) and the residue class field is denoted \(\mathbb{F}_K = \mathcal{O}_K/(\pi_K)\). If \(u \in \mathcal{O}_K\) then \(\bar u = u+(\pi_K) \in \mathbb{F}_K\) is its residue class. We denote by \(v_K\) the valuation of \(\bar{\mathbb{Q}}_p\) such that \(v_K(\pi_K)=1\).
Calligraphic capital letters \(\mathcal{K},\mathcal{L},\ldots\) denote number fields. The ring of integers of \(\mathcal{K}\) is \(\mathcal{O}_\mathcal{K}\).
If \(U \leq W\) is a subgroup then \(q_U:W \to S_{W/U}\) denotes the action of \(W\) on the left cosets of \(U\).
As introduced in \cref{gg-sec-statistic}, \(s\) denotes a function whose input is a permutation group or a polynomial and whose output is anything. There is an equivalence relation \(\sim\) on outputs such that if \(F(x) \in K[x]\) then \(s(\operatorname{Gal}(F))\sim s(F)\). There may also be a partial ordering \(\preceq\) on outputs such that if \(H \leq G\) are groups then \(s(H) \preceq s(G)\).
We may omit subscripts from the notation if they are clear from context.
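As a toy illustration of the coset action \(q_U\), the following Python sketch represents permutations as tuples and computes the induced permutation of the left cosets of \(U\) in \(W\). This is a hypothetical illustration for small groups, not part of the implementation (which works in Magma).

```python
from itertools import permutations

# Permutations of {0,...,n-1} as tuples; composition (p.q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def coset_action(W, U):
    """The map q_U : W -> S_{W/U} induced by g . wU = gwU on left cosets."""
    cosets = []
    for w in W:
        c = frozenset(compose(w, u) for u in U)
        if c not in cosets:
            cosets.append(c)
    def q(g):
        # The coset wU is sent to gwU; record the induced index permutation.
        return tuple(cosets.index(frozenset(compose(g, w) for w in c))
                     for c in cosets)
    return q, cosets

# Example: W = S_3, U = A_3 gives two cosets, and q_U is the sign map S_3 -> S_2.
def parity(p):
    n = len(p)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n)) % 2

S3 = list(permutations(range(3)))
A3 = [p for p in S3 if parity(p) == 0]
q, cosets = coset_action(S3, A3)
```

For instance, every transposition swaps the two cosets of \(A_3\), while the elements of \(A_3\) fix both.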
\subsection{A note on conjugacy}
Recall that the Galois group \(G = \operatorname{Gal}(F)\) of a polynomial \(F\) is defined to be the group of automorphisms of the splitting field of \(F\). Usually, we represent this as a permutation group \(G \leq S_d\) where \(d = \deg(F)\), such that, writing the roots of \(F\) as \(\alpha_1,\ldots,\alpha_d\) in some order, \(G\) acts as \(g(\alpha_i) = \alpha_{g(i)}\).
Since the order of the roots was arbitrary, \(G\) is only really defined up to conjugacy in \(S_d\).
Sometimes, we may know more about the roots of \(F\). For instance, if \(F\) is reducible, then \(G\) has multiple orbits. If we explicitly factorize \(F = \prod_i F_i\), and let \(d_i = \deg(F_i)\), then we can specify that the first \(d_1\) roots \(\alpha_1,\ldots,\alpha_{d_1}\) are the roots of \(F_1\), the next \(d_2\) are the roots of \(F_2\) and so on. Letting \(W = S_{d_1} \times S_{d_2} \times \ldots\) then \(G \leq W \leq S_d\) is defined up to conjugacy in \(W\). We shall see more examples in \cref{gg-sec-glomod}.
Almost everywhere in our exposition, when we talk of a group, we actually mean the conjugacy class of the group inside some understood larger group. When we talk of the collection of all groups with some property, we mean all the conjugacy classes whose groups have that property. This is to simplify the exposition.
In the implementation, a conjugacy class is usually represented by a representative group. An algorithm which returns all conjugacy classes with some property may actually return several representatives for the same class. Finding which groups generate the same class in order to remove duplicates can be computationally difficult, and so whether or not to do this, and how, is usually parameterised. The default is not to remove duplicates. See \cite[Ch. II, \S 11]{DPhD} for details.
Henceforth, we shall typically only mention conjugacy when we have specific strategies to deal with conjugate groups.
\subsection{Compendium}
\label{gg-sec-tldr}
Most of the rest of this article describes in full detail the possible parameters to our algorithm, of which there are many. We now list the sections with the most important or novel contributions.
\begin{itemize}
\item \Cref{gg-sec-arm}: Describes the resolvent method, the main focus of this article.
\item \Cref{gg-sec-reseval,gg-sec-glomod}: Methods for producing ``global models'' for \(p\)-adic fields, which are used to evaluate resolvents. Our constructions are more general than previous similar efforts and so can produce more efficient models.
\item \Cref{gg-sec-groups-all,gg-sec-groups-maximal2}: The main two ways we perform the group theory part of deducing the Galois group. The former is to write down all possibilities and then eliminate until one remains; the latter works down the graph of possible groups using the notion of ``maximal preimages of statistics'' to efficiently move down the graph without blowing up the number of possibilities.
\item \Cref{gg-sec-stat-facdegs}: The main ``statistic'' of a resolvent we compute is the multiset of degrees of its factors. This is compared to the multiset of sizes of orbits of potential Galois groups to deduce which are possible.
\item \Cref{gg-sec-tranche-oidx}: Methods to produce groups from which to compute resolvents which empirically are both fast to compute and give low-degree resolvents.
\item \Cref{gg-sec-implementation}: The implementation, timings, performance notes, etc.
\end{itemize}
\section{Galois group algorithms}
\label{gg-sec-algorithms}
This article is mainly concerned with the resolvent method, introduced in \cref{gg-sec-arm}. However, the algorithm is recursive, in that it may compute other Galois groups along the way, and it may suffice to use other algorithms for this purpose. Therefore, we briefly describe the other algorithms available in our implementation.
\subsection{\texttt{Naive}}
\label{gg-sec-naive}
This explicitly computes a splitting field for \(F(x)\) and then its automorphisms.
This is the algorithm currently implemented in Magma for \(p\)-adic polynomials, called \texttt{GaloisGroup}. Since the splitting field is computed explicitly, this is only suitable when the Galois group is known in advance to be small, such as because the degree is small.
\input{gg-sec-tame-v2.tex}
\subsection{\texttt{SinglyRamified}}
\label{gg-sec-singlyramified}
This computes the Galois group of \(F(x)\) provided it is irreducible and defines an extension whose ramification filtration contains a single segment. Such an extension is called \define{singly ramified}.
When the extension is tamely ramified, we can use the \texttt{Tame} algorithm. Otherwise the extension is totally wildly ramified and we use an algorithm due to Greve and Pauli \cite[Alg. 6.1]{GP}. An explicit description is given by Milstead \cite[Alg. 3.23]{Mil}.
\subsection{\texttt{ResolventMethod}}
\label{gg-sec-arm}
The resolvent method is the focus of the remainder of this article and is based on the following simple lemma.
\begin{lemma}
Suppose \(G := \operatorname{Gal}(F) \leq W \leq S_d\) where \(d = \deg F\), and take any \(U \leq W\). Now \(S_d\) acts on \(\mathbb{Z}[x_1,\ldots,x_d]\) by permuting the variables, so suppose \(I \in \mathbb{Z}[x_1,\ldots,x_d]\) is such that \(\operatorname{Stab}_W(I) = U\) (we say \(I\) is a \define{primitive \(W\)-relative \(U\)-invariant}). Letting \(\alpha_1,\ldots,\alpha_d\) be the roots of \(F\), define \(\beta_{wU} = wU(I)(\alpha_1,\ldots,\alpha_d)\) (this is well-defined since \(I\) is fixed by \(U\)) and define the \define{resolvent} \(R(t) := \prod_{wU \in W/U} (t - \beta_{wU})\). Then \(R(t) \in K[t]\). If \(R\) is squarefree, then its Galois group corresponds to the coset action of \(G\) on \(W/U\). That is, letting \(q : W \to S_{W/U}\) be the coset action, then identifying \(wU \leftrightarrow \beta_{wU}\) we have \(\operatorname{Gal}(R) = q(G)\).
\end{lemma}
\begin{proof}
Writing \(R(t) := \tilde R(\alpha_1,\ldots,\alpha_d; t)\) where \[\tilde R(x_1,\ldots,x_d; t) := \prod_{wU \in W/U} (t - wU(I)(x_1,\ldots,x_d))\] then the \(t\)-coefficients of \(\tilde R\) are fixed by \(W\) (the action of \(W\) re-orders the product) and hence by \(G\). We conclude that the \(t\)-coefficients of \(R\) are fixed by \(G\) too, and hence by Galois theory \(R(t) \in K[t]\).
If \(R\) is squarefree, then there is a 1-1 correspondence between the cosets \(\braces{wU}\) of \(W/U\) and the roots \(\braces{\beta_{wU}}\) of \(R\). Take \(g \in G\), then
\begin{align*}
g(\beta_{wU}) &= g(wU(I)(\alpha_1,\ldots,\alpha_d)) \\
&= wU(I)(g(\alpha_1),\ldots,g(\alpha_d)) \\
&= wU(I)(\alpha_{g(1)},\ldots,\alpha_{g(d)}) \\
&= gwU(I)(\alpha_1,\ldots,\alpha_d) \\
&= \beta_{gwU}
\end{align*}
so the action of \(G\) on the roots of \(R\) corresponds to the coset action, as claimed.
\end{proof}
Therefore, if we have some \(W\) containing \(G\) and a means to compute resolvents \(R\) for \(U \leq W\), then since \(\operatorname{Gal}(R) = q(G)\) is a function of \(G\), we can deduce information about \(G\) by finding some information about \(\operatorname{Gal}(R)\). Specifically how we compute resolvents and deduce information about \(G\) is controlled by two parameters.
Firstly, a resolvent evaluation algorithm (\cref{gg-sec-reseval}) selects a fixed group \(W \leq S_d\) such that \(G \leq W\), and thereafter is responsible for evaluating the resolvents \(R(t)\) from selected \(U \leq W\) and invariants \(I \in \mathbb{Z}[x_1,\ldots,x_d]\).
Secondly, a group theory algorithm (\cref{gg-sec-groups}) is responsible for deducing the Galois group \(G\) by choosing a suitable \(U\), and then using the resolvent \(R\) returned by the resolvent evaluation algorithm to gather information about \(G\).
\begin{algorithm}[Galois group: resolvent method] Given a polynomial \(F(x) \in K[x]\), returns its Galois group.\hfill
\label{gg-alg-arm}
\begin{algorithmic}[1]
\State Initialize the resolvent evaluation algorithm.\label{gg-alg-arm-rinit}
\State Initialize the group theory algorithm.\label{gg-alg-arm-ginit}
\State If we have determined the Galois group, then return it.\label{gg-alg-arm-done}
\State Let \(U\) be a subgroup of \(W\).\label{gg-alg-arm-U}
\State Let \(I\) be a primitive \(W\)-relative \(U\)-invariant.\label{gg-alg-arm-I}
\State Let \(R\) be the resolvent corresponding to \(I\).\label{gg-alg-arm-R}
\State Use \(R\) to deduce information about the Galois group.\label{gg-alg-arm-deduce}
\State Go to step \ref{gg-alg-arm-done}.
\end{algorithmic}
\end{algorithm}
The resolvent algorithm controls steps \ref{gg-alg-arm-rinit} and \ref{gg-alg-arm-R}. The group theory algorithm controls steps \ref{gg-alg-arm-ginit}, \ref{gg-alg-arm-done}, \ref{gg-alg-arm-U} and \ref{gg-alg-arm-deduce}. Step \ref{gg-alg-arm-I} could also be parameterised, but we find it is sufficient to use the algorithm due to Fieker and Kl\"uners \cite[\S5]{FK}, implemented as the intrinsic \texttt{RelativeInvariant} in Magma.
\begin{remark}
Using resolvents to compute Galois groups is not new. Stauduhar's method \cite{Stauduhar73} for polynomials over \(\mathbb{Q}\) computes resolvents relative to \(S_d\) by computing complex approximations to the roots. This was improved by Fieker and Kl\"uners \cite{FK} to a ``relative resolvent method'' which allows the overgroup \(W\) to be made smaller at each iteration until it equals \(G\). Over \(\mathbb{Q}_p\), a resolvent method has been used by Jones and Roberts \cite{LFDB} to compute the Galois group of fields of degree up to 12, computing resolvents in \(W = S_{d_2} \wr S_{d_1}\) corresponding to a subfield of degree \(d_1\).
\end{remark}
\subsection{\texttt{Sequence}}
This algorithm takes as parameters a sequence of other algorithms to compute Galois groups. It tries each algorithm in turn until one succeeds. This is mainly useful to deal with special cases first (e.g. \texttt{Tame} or \texttt{SinglyRamified}) before applying a general method (e.g. \texttt{ResolventMethod}).
\section{Resolvent evaluation algorithms}
\label{gg-sec-reseval}
These are used as part of the \texttt{ResolventMethod} algorithm for computing Galois groups. They are responsible for selecting an overgroup \(W\) such that \(G \leq W\) and thereafter evaluating resolvents relative to \(W\).
Currently there is one option, \code{Global}, described here.
\begin{definition}
A \define{global model} for a \(p\)-adic field \(K\) is an embedding \(i : \mathcal{K} \to K\) where \(\mathcal{K}\) is a global number field such that \(K\) is a completion of \(\mathcal{K}\) and \(i\) is the corresponding embedding.
If \(L/K\) is an extension of \(p\)-adic fields, and \(i : \mathcal{K} \to K\) is a global model for \(K\), then a \define{global model for \(L/K\) extending \(i\)} is a global model \(j : \mathcal{L} \to L\) of \(L\) such that \(j|_{\mathcal{K}} = i\).
Similarly a \define{global model for \(F(x) \in K[x]\) extending \(i\)} is \(\prod_k \mathcal{F}_k\) where \(F = \prod_k F_k\) is the factorization over \(K\) of \(F\) into irreducible factors, \(L_k/K\) are the corresponding extensions, \(i_k : \mathcal{L}_k \to L_k\) are global models for \(L_k/K\) extending \(i\), and \(\mathcal{L}_k \cong \mathcal{K}[x]/(\mathcal{F}_k(x))\).
We shall often refer to \(\mathcal{K}\) itself as the global model, instead of the embedding \(i\).
\end{definition}
The \texttt{Global} algorithm computes a global model \(\mathcal{K}\) for \(K\) and a global model \(\mathcal{F}(x) \in \mathcal{K}[x]\) for the input \(F(x) \in K[x]\) extending \(\mathcal{K}\). At the same time, it computes the required overgroup \(W\) such that \(G \leq \operatorname{Gal}(\mathcal{F} / \mathcal{K}) \leq W\). A parameter (a global model algorithm, \cref{gg-sec-glomod}) specifies how to produce a global model for \(F(x)\).
\begin{remark}
\label{gg-rmk-glomodidx}
Note that this implies that \(\deg\mathcal{F}=\deg F=d\). In fact, our algorithm more generally computes an \define{overgroup embedding} \(e:W\to\mathcal{W}\) such that \(G\leq W\), \(\operatorname{Gal}(\mathcal{F}/\mathcal{K})\leq\mathcal{W}\) and \(e(G)\) is the corresponding decomposition group. Hence \(\deg\mathcal{F}>d\) is allowed. This usually arises as a global model \(\mathcal{L}/\mathcal{K}'/\mathcal{K}\) for \(L/K\) where \(\mathcal{K}'\) is also a global model for \(K\) and \((\mathcal{L}:\mathcal{K}')=d\), in which case we refer to \((\mathcal{K}':\mathcal{K})\) as the \define{index} of the global model. In our exposition we shall assume \(W=\mathcal{W}\) for simplicity and leave the details to \cite[Ch. II]{DPhD}.
\end{remark}
The algorithm then can evaluate resolvents as follows. For each complex embedding \(c : \mathcal{K} \to \mathbb{C}\), we compute the roots of \(c(\mathcal{F})\) to high precision. Letting \(\tilde\alpha_1,\ldots,\tilde\alpha_{d'}\) be these roots, we compute \[\tilde R_c(t) := \prod_{wU \in W/U}(t - wU(I)(\tilde\alpha_1,\ldots,\tilde\alpha_{d'}))\] which is an approximation to \(c(R(t)) \in \mathbb{C}[t]\).
We can always arrange for \(\mathcal{F}(x)\) to be monic and integral, so that its roots are integral, and therefore \(R(t) \in \mathcal{O}_{\mathcal{K}}[t]\). Firstly, suppose that \(\mathcal{K} = \mathbb{Q}\) (so \(K = \mathbb{Q}_p\)), then we know \(R(t) \in \mathbb{Z}[t]\) and therefore assuming we have computed \(\tilde R(t)\) sufficiently precisely, then we can compute \(R(t)\) by rounding its coefficients to the nearest integer.
More generally, for each coefficient \(R_i\) of \(R(t)\) we take the vector \((\tilde R_{c,i})_c\), which should be a close approximation to \((c(R_i))_c\). Since the \(R_i\) are integral, \((c(R_i))_c\) is an element of the \define{Minkowski lattice} \(\prod_c c(\mathcal{O}_\mathcal{K})\), which is discrete, and therefore we can deduce \(R_i\) by rounding \((\tilde R_{c,i})_c\) to the nearest point in the lattice. This can be done using lattice basis reduction techniques such as LLL.
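To make the rounding step concrete in the simplest case \(\mathcal{K} = \mathbb{Q}\), the following hypothetical toy computation (not the implementation) evaluates the classical discriminant resolvent for \(F(x) = x^3 - 2\) with \(W = S_3\), \(U = A_3\) and primitive invariant \(I = \prod_{i<j}(x_i - x_j)\) from complex approximations to the roots, then rounds to the nearest integer.

```python
import cmath

# The two cosets of U = A_3 in W = S_3 give beta_U = I(alpha) and
# beta_sU = -I(alpha), so R(t) = t^2 - I(alpha)^2 = t^2 - disc(F).
roots = [2 ** (1 / 3) * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
delta = 1.0
for i in range(3):
    for j in range(i + 1, 3):
        delta *= roots[i] - roots[j]
c0 = -delta ** 2      # constant coefficient of R(t), up to floating error
R0 = round(c0.real)   # round to the nearest integer: disc(x^3 - 2) = -108
# R(t) = t^2 + 108 has no rational root, consistent with Gal(F) = S_3,
# which is not contained in A_3.
```

The imaginary parts of the computed coefficients are tiny floating-point noise, which the rounding step absorbs.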
\begin{algorithm}[Resolvent: \texttt{Global}] Given a global model \(\mathcal{F}(x) \in \mathcal{K}[x]\) and subgroup \(U \leq W\), returns the corresponding resolvent \(R(t)\).
\label{gg-alg-resolvent}
\begin{algorithmic}[1]
\State Choose a Tschirnhaus transformation \(T \in \mathbb{Z}[x]\) (see Rmk. \ref{gg-rmk-resolvent-tschirnhaus}).
\State Choose a complex floating point precision, \(k\) decimal digits (see Rmk. \ref{gg-rmk-resolvent-complex-precision}).
\State Compute complex approximations to the roots of \(c(\mathcal{F})\) for each complex embedding \(c : \mathcal{K} \to \mathbb{C}\).
\State Compute \(\tilde R_c(t) = \prod_{wU \in W/U} (t - wU(I)(T(\tilde \alpha_1),\ldots,T(\tilde \alpha_{d'})))\).
\State Round \((\tilde R_{c,i})_c\) to the nearest point of the Minkowski lattice of \(\mathcal{O}_{\mathcal{K}}\), and let \(R_i\) be the corresponding element of \(\mathcal{O}_{\mathcal{K}}\).
\State If \(R(t) \in \mathcal{K}[t]\) is not squarefree, go to Step 1.
\State Return \(R(t)\).
\end{algorithmic}
\end{algorithm}
\begin{remark}
\label{gg-rmk-resolvent-tschirnhaus}
In Step 1, a Tschirnhaus transformation is any randomly selected polynomial in \(\mathbb{Z}[x]\). Its purpose is to ensure that \(R(t)\) is squarefree. Indeed, if \(R(t)\) is not squarefree, then there is some coincidence between its roots, and therefore some unintended structure between the roots of \(F\). By transforming the roots, we should destroy this structure.
Such a transformation always exists \cite{Girstmair83}. In practice, it suffices to use \(T(x) = x\) initially, and thereafter to choose a random polynomial of small degree and coefficients, increasing the degree and coefficient bound at each iteration.
\end{remark}
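The escalation strategy described in the remark above can be sketched as follows. This is a hypothetical helper in Python; the degree and coefficient schedule is illustrative, not the one used in the implementation.

```python
import random

# T(x) = x on the first attempt; on each retry, a random polynomial whose
# degree and coefficient bound grow. Coefficients are listed constant term
# first, so [0, 1] represents T(x) = x.
def tschirnhaus(attempt, rng=random.Random(0)):
    if attempt == 0:
        return [0, 1]
    deg = attempt + 1
    bound = attempt + 1
    coeffs = [rng.randint(-bound, bound) for _ in range(deg + 1)]
    if all(c == 0 for c in coeffs[1:]):
        coeffs[1] = 1  # ensure T is non-constant
    return coeffs
```

Each retry draws from a strictly larger family of transformations, so by the existence result cited above the loop eventually produces a squarefree resolvent.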
\begin{remark}
\label{gg-rmk-resolvent-complex-precision}
It is important in Step 2 that we choose a complex floating point precision \(k\) such that the rounding step produces the correct answer. We do this as follows.
First, we find an upper bound on the absolute values of the roots of \(c(\mathcal{F})\) for each complex embedding \(c\). In principle this could be done by analyzing the polynomials which define the global model and bounding their roots in terms of the coefficients, but in our current implementation we instead compute the complex roots to some default precision (30 decimal digits) and take the size of the largest root as our bound. It is possible although unlikely that the latter approach introduces enough precision error that this bound is incorrect, and hence this part of the implementation does not yield proven results.
Using this upper bound, we can follow through the computation of \(\tilde R_c\) to get upper bounds on its coefficients. By increasing the bounds by a small fraction at each computation, we can absorb the effect of any complex precision error. We then select a precision so that the absolute errors on the coefficients \(\tilde R_{c,i}\) are less than half the shortest distance between two elements of the Minkowski lattice. We then add a generous margin to the precision (say 20 decimal digits) so that we can check in the code that we are in fact very close (say within 10 decimal digits) to an integer point.
\end{remark}
\begin{remark}
The choice to approximate the roots of \(\mathcal{F}\) in the complex field \(\mathbb{C}\) is somewhat arbitrary. We could instead pick a prime \(\ell\) such that \(\mathcal{F}\) has a small splitting field over \(\mathbb{Q}_\ell\) and approximate the roots \(\ell\)-adically. Making such a change usually improves reliability and reduces the precision required. The theory of the Minkowski lattice carries over into this setting.
\end{remark}
\section{Global model algorithms}
\label{gg-sec-glomod}
Given a polynomial \(F(x) \in K[x]\) and a global model \(i:\mathcal{K} \to K\), a global model algorithm computes a global model \(\mathcal{F}(x)\) for \(F(x)\) extending \(\mathcal{K}\). It also computes an overgroup \(W\) such that \(G \leq \operatorname{Gal}(\mathcal{F} / \mathcal{K}) \leq W\).
\begin{remark}
As presented, these constructions assume the global model index (\cref{gg-rmk-glomodidx}) is 1, but do generalize. See \cite[Ch. II, \S4]{DPhD} for details.
\end{remark}
\subsection{\texttt{Symmetric}}
Given irreducible \(F(x) \in K[x]\), this finds a polynomial \(\mathcal{F}(x) \in \mathcal{K}[x]\) sufficiently close to \(F(x)\) that they have the same splitting field over \(K\). Generically we expect that \(\operatorname{Gal}(\mathcal{F} / \mathcal{K}) = S_d\), since we are not imposing any further restriction on \(\mathcal{F}\), and therefore the corresponding overgroup is taken to be \(W=S_d\).
To find such a polynomial, we pick some precision parameter \(k \in \mathbb{N}\). We take some polynomial \(\mathcal{F}(x) \in \mathcal{K}[x]\) such that \(i(\mathcal{F}(x)) - F(x)\) has coefficients of valuation at least \(k\), and then we check that \(\mathcal{F}\) is a global model. If not, we increase \(k\). By keeping \(k\) small, we limit the size of the coefficients of \(\mathcal{F}\), which in turn limits the precision required in the complex arithmetic later.
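For \(\mathcal{K} = \mathbb{Q}\), the approximation step amounts to replacing each coefficient of \(F\), known modulo \(p^k\), by a small integer representative. A hypothetical sketch (not the implementation, which handles general \(\mathcal{K}\)):

```python
# Balanced representatives modulo p^k: each coefficient c is replaced by the
# representative of c mod p^k lying in (-p^k/2, p^k/2], keeping the lifted
# polynomial's coefficients small.
def lift_poly(coeffs_mod, p, k):
    m = p ** k
    lifted = []
    for c in coeffs_mod:
        r = c % m
        if r > m // 2:
            r -= m
        lifted.append(r)
    return lifted

# e.g. x^2 + 7x + 1 over Q_2, working mod 2^3 = 8: the coefficient 7 lifts
# to -1, giving the candidate global model x^2 - x + 1.
candidate = lift_poly([1, 7, 1], 2, 3)
```

One then checks that the candidate really is a global model, increasing \(k\) if not; keeping \(k\) small keeps the coefficients, and hence the later complex precision, small.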
\subsection{\texttt{Factors}}
This factorizes \(F(x) = \prod_k F_k(x)\) into irreducible factors over \(K\), produces a global model \(\mathcal{F}_k(x)\) for each factor, and then the global model is \(\mathcal{F}(x) = \prod_k \mathcal{F}_k(x)\). The overgroup is the direct product \(W=\prod_k W_k\) of overgroups for each factor.
A parameter determines how to compute a global model for each factor.
\subsection{\texttt{RamTower}}
\label{gg-sec-ramtower}
Assuming \(F(x)\) is irreducible and defines an extension \(L/K\), this finds the ramification filtration \(L=L_t/\ldots/L_0=K\) of \(L/K\). For each segment \(L_k/L_{k-1}\), it produces a global model extending the global model of the segment below it. The global model for \(L/K\) is then the final model in this tower. The overgroup is the wreath product \(W=W_t\wr\cdots\wr W_1\) of overgroups of each segment.
A parameter determines how to compute a global model for each segment.
\subsection{\texttt{RootOfUnity}}
\label{gg-sec-rootofunity}
Assuming the splitting field \(L\) of \(F\) over \(K\) is unramified, and therefore generated by a primitive \(n\)th root of unity \(\zeta\), we define the global model to be \(\mathcal{L} = \mathcal{K}(\zeta)\).
We naturally identify \(\mathcal{W} = \operatorname{Gal}(\mathcal{L} / \mathcal{K})\) with a subgroup of \((\mathbb{Z} / n \mathbb{Z})^\times\), identifying \(i \bmod n\) with \(\zeta \mapsto \zeta^i\). The subgroup \(W = \angles{q} \leq \mathcal{W}\) is the decomposition group, i.e. \(\operatorname{Gal}(L/K)\). If \(W=\mathcal{W}\) then this is our overgroup (otherwise \(\mathcal{W}\) is an overgroup for a model of higher index \cite[Ch. II, \S4.5]{DPhD}).
By default, we use \(n=q^d-1\). A parameter can change this to use the smallest divisor of \(q^d-1\) not dividing \(q^c-1\) for any \(c<d\).
Another parameter controls whether to search for a \define{complement} to \(W\) --- i.e. a subgroup \(H \leq \mathcal{W}\) such that \(H \cap W = 1\) --- of smallest index possible, and then replace \(\mathcal{L}\) by the fixed field of \(H\). By design, this still has a completion to \(L\), but is of smaller degree. If \(\angles{H,W}=\mathcal{W}\) then \(H\) is a \define{perfect complement} and \(W=\mathcal{W}/H\) is our overgroup (otherwise \(\mathcal{W}/H\) is an overgroup for a model of higher index).
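The complement search can be illustrated in a toy abelian case. The following hypothetical Python sketch (not the implementation) works inside \((\mathbb{Z}/n\mathbb{Z})^\times\) and, for simplicity, only searches cyclic complements; since the ambient group is abelian, \(H \cap W = 1\) together with \(|H|\,|W| = |(\mathbb{Z}/n\mathbb{Z})^\times|\) already forces \(\angles{H,W}\) to be the whole group, i.e. \(H\) is a perfect complement.

```python
from math import gcd

def unit_group(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

def cyclic_subgroup(g, n):
    """The subgroup of (Z/nZ)^x generated by g."""
    H, x = {1}, g % n
    while x not in H:
        H.add(x)
        x = x * g % n
    return H

# Toy case: residue field size q = 3, degree d = 2, so n = q^d - 1 = 8,
# and the decomposition group is W = <q mod n>.
n, q = 8, 3
G = unit_group(n)
W = cyclic_subgroup(q, n)
perfect = []
for g in G:
    H = cyclic_subgroup(g, n)
    if H & W == {1} and len(H) * len(W) == len(G):
        perfect.append(H)
```

Here \((\mathbb{Z}/8\mathbb{Z})^\times \cong C_2 \times C_2\), \(W = \{1,3\}\), and both \(\{1,5\}\) and \(\{1,7\}\) are perfect complements, so the global model can be cut down to a quadratic field.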
\begin{remark}
The complement option usually finds a perfect complement. For example, if \(\mathcal{K} = \mathbb{Q}\) and \(K = \mathbb{Q}_p\) with \(p \leq 7\) and \(d \leq 50\), then there is a perfect complement unless: \(p=2\) and \(8 \mid d\); or \(p=3\) and \(d=9\); or \(p=7\) and \(d\in\braces{5,8}\).
\end{remark}
\begin{remark}
\label{gg-rmk-grunwaldwang}
The Grunwald--Wang theorem of class field theory \cite[Ch. X, \S2]{ATCFT} implies that if \(K\) is a completion \(\mathcal{K}_\mathfrak{p}\), and \(L/K\) is cyclic, degree \(d\), then there is \(\mathcal{L} / \mathcal{K}\) cyclic of degree \(d\) which completes to \(L\). There is an exception at primes \(\mathfrak{p} \mid 2\) and degrees \(8 \mid d\), for which \((\mathcal{L} : \mathcal{K}) = 2d\) is sometimes necessary.
\end{remark}
\subsection{\texttt{RootOfUniformizer}}
\label{gg-sec-rootofuniformizer}
Assuming \(F\) is irreducible of degree \(d\) over \(K\) and defines a totally tamely ramified extension \(L/K\), then \(L=K(\sqrt[d]{\pi})\) for some uniformizer \(\pi \in K\). Taking a sufficiently precise approximation to \(\pi\), we may assume that \(\pi \in \mathcal{K}\), and we define the global model to be \(\mathcal{L} = \mathcal{K}(\sqrt[d]{\pi})\). The embedding \(\mathcal{K} \to K\) extends uniquely to \(\mathcal{L} \to L\).
Letting \(\zeta\) be a primitive \(d\)th root of unity, \(\mathcal{K}(\sqrt[d]{\pi},\zeta)\) is clearly the normal closure, and its Galois group \(W\) (which is determined by \(\operatorname{Gal}(\mathcal{K}(\zeta)/\mathcal{K})\) and may be computed explicitly) acts faithfully on the \(d\) elements \(\sqrt[d]{\pi}\), \(\zeta \sqrt[d]{\pi}\), \(\ldots\), \(\zeta^{d-1} \sqrt[d]{\pi}\).
\subsection{\texttt{SinglyWild}}
\label{gg-sec-singlywild}
Suppose \(F(x) \in K[x]\) defines a singly wildly ramified extension \(L/K\) of degree \(d=p^k\). That is, a totally wildly ramified extension whose ramification polygon has a single face.
Suppose also \(p=2\) and \(L/K\) is Galois, then \(\operatorname{Gal}(L/K)\cong C_2^k\) and so \(L = K(\sqrt{a_1},\ldots,\sqrt{a_k})\) for some \(a_i\in K\). By taking sufficiently precise approximations, we may further assume \(a_i\in\mathcal{K}\). Then \(\mathcal{L}=\mathcal{K}(\sqrt{a_1},\ldots,\sqrt{a_k})\) is our global model with overgroup \(W=C_2^k\).
\begin{remark}
Using Kummer theory, an averaging argument, and a result of Greve \cite[Thm. 7.3]{GP}, this method generalizes to \(p\ne2\) and non-Galois \(L/K\) \cite[Ch. II, \S4.7]{DPhD}. This has not yet been implemented.
\end{remark}
\subsection{\texttt{Select}}
This selects between several different global model algorithms, depending on \(F\). For example, we can select between \texttt{RootOfUnity}, \texttt{RootOfUniformizer} or \texttt{SinglyWild} depending on whether \(F\) defines an unramified, tame, or wild extension.
\section{Group theory algorithms}
\label{gg-sec-groups}
The job of a group theory algorithm is to decide, given the overgroup \(W\), which subgroups \(U \leq W\) to form resolvents from, and to use those resolvents to deduce the Galois group \(G \leq W\).
We recommend now reading the definition of statistic at the start of \cref{gg-sec-statistic}. A statistic is our means of comparing groups with resolvents.
\subsection{\texttt{All}}
\label{gg-sec-groups-all}
This algorithm proceeds by writing down all possible Galois groups \(G\) (up to \(W\)-conjugacy), and then eliminating possibilities until only one remains.
There are two parameters, a statistic algorithm \(s\) (\cref{gg-sec-statistic}) which determines which properties of the Galois groups \(G\) and resolvents \(R\) to compare, and a subgroup choice algorithm (\cref{gg-sec-choice}) which determines how we choose a subgroup \(U\).
The subgroup choice algorithm is used to choose a subgroup \(U\). Then, given a resolvent \(R\), we compute the statistic \(s(R)\) and determine for which \(G\) in the list of possible Galois groups \(s(R) \sim s(q(G))\), where \(q\) is the coset action of \(W\) on \(W/U\). We eliminate the \(G\) for which the statistics differ. We are done when only one \(G\) remains.
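A toy sketch of the elimination step follows, with hypothetical data rather than the implementation: each candidate group is paired with its statistic \(s(q(G))\), here the multiset of orbit sizes of \(q(G)\) on the two cosets of \(U = A_3 \leq W = S_3\), and the observed \(s(R)\) is the multiset of degrees of the irreducible factors of the resolvent.

```python
# Subgroups of A_3 fix both cosets of A_3 in S_3 (orbit sizes [1, 1]);
# the remaining subgroups swap them (orbit sizes [2]).
def eliminate(candidates, observed):
    return [name for name, stat in candidates if stat == observed]

candidates = [("S3", [2]), ("A3", [1, 1]), ("C2", [2]), ("1", [1, 1])]
observed = [2]   # e.g. the resolvent is a single irreducible quadratic
remaining = eliminate(candidates, observed)
# remaining == ["S3", "C2"]
```

One resolvent need not pin the group down: here two candidates survive, so the subgroup choice algorithm would be asked for another \(U\) to distinguish them.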
\begin{remark}
The parameters must be chosen correctly to ensure that the algorithm terminates, otherwise it is possible that the subgroup choice algorithm cannot find a useful subgroup for the given statistic. \Cref{gg-lem-rootsmax} below implies the algorithm terminates for the \texttt{HasRoot} statistic (or any more precise statistic such as \texttt{FactorDegrees}) and any subgroup choice algorithm which considers all groups.
\end{remark}
\begin{lemma}
\label{gg-lem-rootsmax}
\(G\) is conjugate in \(W\) to a subgroup of \(U\) if and only if the corresponding resolvent \(R\) has a root.
\end{lemma}
\begin{proof}
\(G \leq U\) if and only if \(q(G)\) has a fixed point, where \(q:W \to S_{W/U}\) is the coset action. Since \(\operatorname{Gal}(R)=q(G)\), this occurs if and only if \(R\) has a root.
\end{proof}
\subsection{\texttt{Maximal}}
\label{gg-sec-groups-maximal}
This algorithm avoids the need to enumerate all possible Galois groups. We start at the top of the directed acyclic graph of subgroups of \(W\) and work our way down, at each stage either proving that a current group under consideration is not the Galois group, and so moving on to its maximal subgroups, or proving that the Galois group is not a subgroup of some of the maximal subgroups of a group under consideration.
Specifically, at all times we have a set \(\mathcal{P}\) of subgroups of \(W\) such that we know that the Galois group is contained in at least one of them. We call this the \define{pool}. Initially we have \(\mathcal{P} = \{W\}\). If for some resolvent \(R\) and \(P \in \mathcal{P}\) we find that their statistics do not agree, i.e. \(s(R) \not\sim s(q(P))\), then we record that \(G \ne P\). We also test if the statistic is consistent with the Galois group being a subgroup of \(P\). If this latter test fails, i.e. \(s(R) \not\preceq s(q(P))\), then we remove \(P\) from the pool. We also perform the same tests on all maximal subgroups \(Q < P \in \mathcal{P}\).
Having processed a resolvent in this way, we may decide to modify \(\mathcal{P}\) further. For example, as soon as there is some \(P \in \mathcal{P}\) such that the Galois group is not \(P\), replace \(P\) by its maximal subgroups. Or instead, when all \(P \in \mathcal{P}\) are known not to be the Galois group, replace the whole pool by the set of maximal subgroups of its elements. This behaviour is parameterised.
We have determined the Galois group when \(\mathcal{P}\) contains one group, and we have deduced that the Galois group is not contained in any of its maximal subgroups.
The question remains of which subgroups \(U\leq W\) are \define{useful} in the sense that a resolvent formed from \(U\) will provide information. Unlike the \code{All} algorithm, it is not possible to determine for certain if a given group \(U\) will allow us to make progress or not. There is a necessary condition, but this does not guarantee progress, and there is a sufficient condition, but it is not guaranteed that there exists a group satisfying it. We parameterise this choice, but in the next section give an improved method without this issue.
\subsection{\texttt{Maximal2}}
\label{gg-sec-groups-maximal2}
Note that a shortcoming of the \texttt{Maximal} algorithm is that it is not always possible to tell if a subgroup \(U \leq W\) will provide any information, and so its behaviour is more heuristic than principled. Another problem is that it only ever rules out of consideration groups which cannot contain the Galois group, and therefore all groups \(P\) with \(G \leq P \leq W\) will be considered in the pool \(\mathcal{P}\) at some point; if there are many such groups, this can become inefficient. The \texttt{Maximal2} algorithm avoids both of these problems by positively identifying groups which do contain the Galois group.
As before, we have a pool \(\mathcal{P}\) of subgroups, at least one of which contains the Galois group. Suppose there is a group \(U \leq W\) such that \(s(q(P)) \not\sim s(q(Q))\) for some \(P \in \mathcal{P}\) and maximal \(Q < P\) (such a group is \define{useful}) and we form the corresponding resolvent \(R\). There are two possibilities.
If \(s(R) \sim s(q(P))\) then \(s(q(Q)) \prec s(R)\), so \(s(R) \not\preceq s(q(Q))\), so \(G \not\leq Q\), and so we can rule \(Q\) out of consideration.
Otherwise \(s(R) \not\sim s(q(P))\) and so \(G \ne P\). In the \texttt{Maximal} algorithm at this point we would do something like replace \(P\) in the pool by its maximal subgroups. Instead, we find the set \(X''\) of subgroups \(Q'' < q(P)\) which are maximal among those such that \(s(Q'') \sim s(R)\); we refer to these as the \define{maximal preimages in \(q(P)\) of \(s(R)\)}. Then we let \(X = \{P \cap q^{-1}(Q'') \,:\, Q'' \in X''\}\). By construction, if \(G \leq P\) then \(G \leq Q'\) for some \(Q' \in X\) and so we can replace \(P\) in the pool by \(X\). Typically \(X\) is much smaller than the number of maximal subgroups of \(P\).
Suppose now that we have eliminated all maximal subgroups of all \(P \in \mathcal{P}\) from consideration. Then we know that \(G=P\) for some \(P \in \mathcal{P}\). We are now in the scenario of the \texttt{All} algorithm, and so can now eliminate groups from the pool by finding \(U \leq W\) such that \(s(q(P_1)) \not\sim s(q(P_2))\) for some \(P_1,P_2 \in \mathcal{P}\). Such a \(U\) is also said to be \define{useful}.
We have deduced the Galois group when the pool contains a single group, and we have ruled all of its maximal subgroups out of consideration.
We can use any statistic which has an equivalence relation (as required for \texttt{All}) and a partial ordering (as required for \texttt{Maximal}) and an algorithm for computing maximal preimages. For the latter, in general we have a ``naive'' algorithm, which simply works down the subgroups of \(P\) until ones with the correct statistic are found.
\begin{algorithm}[Maximal preimages: Naive]
\label{gg-alg-maxpre-naive}
Given a group \(P\), a statistic \(s\) and a value \(v\) of \(s\), returns the maximal preimages of \(v\) in \(P\).
\begin{algorithmic}[1]
\If{\(v \sim s(P)\)}
\State\Return \(\{P\}\)
\ElsIf{\(v \prec s(P)\)}
\State\Return \(\bigcup_{\text{maximal \(Q < P\)}} \text{maximal preimages of \(v\) in \(Q\)}\)
\Else
\State\Return \(\emptyset\)
\EndIf
\end{algorithmic}
\end{algorithm}
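For concreteness, the recursion of \cref{gg-alg-maxpre-naive} can be sketched in Python over an abstract subgroup lattice. Here \texttt{stat}, \texttt{maximal\_subgroups} and \texttt{preceq} are supplied as callables (hypothetical stand-ins for the group-theoretic machinery, not part of the implementation described here), and \(\sim\) is modelled as equality.

```python
def maximal_preimages(P, stat, v, maximal_subgroups, preceq):
    """Subgroups of P maximal among those whose statistic value equals v.

    stat(P) returns the statistic of a group, maximal_subgroups(P) its
    maximal subgroups, and preceq(v1, v2) tests the partial order v1 <= v2.
    As in the naive algorithm, the union over branches may contain
    redundant (e.g. conjugate) subgroups.
    """
    sP = stat(P)
    if v == sP:
        return [P]
    if preceq(v, sP):  # v lies strictly below stat(P): recurse
        found = []
        for Q in maximal_subgroups(P):
            for H in maximal_preimages(Q, stat, v, maximal_subgroups, preceq):
                if H not in found:
                    found.append(H)
        return found
    return []
```

For example, on a toy lattice \(G > A, B > C\) with statistic values \(4, 2, 2, 1\) and divisibility as the partial order, asking for the value \(2\) returns \(\{A, B\}\), and asking for \(1\) returns \(\{C\}\).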
However, only using the naive algorithm would not provide an improvement over \code{Maximal}. The real efficiency gain comes from the existence of more efficient algorithms for particular statistics, in particular \texttt{HasRoot} (\cref{gg-sec-stat-hasroot}) and \texttt{FactorDegrees} (\cref{gg-sec-stat-facdegs}).
\subsection{\texttt{Sequence}}
This takes as parameters a sequence of group theory algorithms. Each one is used in turn until either the Galois group is deduced or the subgroup choice algorithm runs out of subgroups to try.
If the same algorithm appears consecutively with different parameters, then the state of the algorithm (such as the pool of possible Galois groups) is maintained so that information is not lost.
This allows us, for example, to first use a cheap statistic on a limited number of subgroups --- aiming to deduce easy Galois groups quickly --- before trying a more expensive statistic.
\section{Statistic algorithms}
\label{gg-sec-statistic}
A statistic algorithm is a means of comparing the Galois group of a polynomial with a permutation group. Specifically it is a function which takes as input a permutation group or a polynomial and outputs some value. There must be an equivalence relation on these values, which we denote \(\sim\). A statistic function \(s\) must satisfy the following property: \(s(R) \sim s(\operatorname{Gal}(R))\) for all polynomials \(R\). For most statistics, \(\sim\) is equality.
Using this, if we are given a polynomial \(R(x)\) (such as a resolvent) and a permutation group \(G\) and we find that \(s(R) \not\sim s(G)\), then we know that \(\operatorname{Gal}(R) \neq G\). This is the basis of the \texttt{All} (\cref{gg-sec-groups-all}) group theory algorithm.
Optionally, statistics can also support a partial ordering, denoted \(\preceq\), which must respect the partial ordering due to subgroups. Specifically, the following must hold: for all groups \(G,H\), if \(H \leq G\) then \(s(H) \preceq s(G)\). Statistics supporting this operation may be used in the \texttt{Maximal} (\cref{gg-sec-groups-maximal}) and \texttt{Maximal2} (\cref{gg-sec-groups-maximal2}) group theory algorithms.
Optionally, ordered statistics can also provide a specialised algorithm to compute maximal preimages, as defined in \cref{gg-sec-groups-maximal2}.
\subsection{\texttt{HasRoot}}
\label{gg-sec-stat-hasroot}
\(s(G)\) is true if \(G\) has a fixed point, and false otherwise. Correspondingly, \(s(R)\) is true if \(R\) has a root (in its base field \(K\)).
If \(H \leq G\) and \(G\) has a fixed point, then so does \(H\), so we define \(v_1 \preceq v_2\) to be \(v_2 \Rightarrow v_1\).
The maximal subgroups with a fixed point are point stabilizers. Two point stabilizers are conjugate if they stabilize a point in the same orbit, and so we deduce the following algorithm to compute maximal preimages.
\begin{algorithm}[Maximal preimages: \texttt{HasRoot}]
\label{gg-alg-maxpre-hasroot}
Given a group \(P\) and a value \(v \in \{\text{true},\text{false}\}\), returns the maximal preimages of \(v\) in \(P\).
\begin{algorithmic}[1]
\If{\(v = \text{true}\)}
\State\Return \(\{\operatorname{Stab}_P(x) \text{ for some \(x \in o\)} \,:\, o \in \operatorname{Orbits}(P)\}\)
\Else
\State\Return \(\{P\}\)
\EndIf
\end{algorithmic}
\end{algorithm}
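The following naive Python sketch illustrates this on explicit permutations, represented as tuples of images of \(0,\ldots,n-1\); the helpers are toy stand-ins for a proper permutation-group implementation.

```python
def compose(p, q):
    """(p*q)(x) = p(q(x)) for permutations given as tuples of images."""
    return tuple(p[q[x]] for x in range(len(p)))

def closure(gens):
    """All elements of the group generated by gens (brute force)."""
    n = len(gens[0])
    G = {tuple(range(n))}
    frontier = list(G)
    while frontier:
        g = frontier.pop()
        for h in gens:
            e = compose(h, g)
            if e not in G:
                G.add(e)
                frontier.append(e)
    return G

def orbits(G):
    """Orbits of G on {0,...,n-1}, as sorted lists."""
    n = len(next(iter(G)))
    seen, orbs = set(), []
    for x in range(n):
        if x not in seen:
            orb = {g[x] for g in G}
            seen |= orb
            orbs.append(sorted(orb))
    return orbs

def hasroot_preimages(G):
    """One point stabilizer per orbit: the maximal subgroups with a fixed point."""
    return [frozenset(g for g in G if g[o[0]] == o[0]) for o in orbits(G)]
```

For instance, for the group generated by \((1\,2)\) and \((3\,4)\) acting on four points, this returns one order-\(2\) stabilizer for each of the two orbits.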
\subsection{\texttt{NumRoots}}
\label{gg-sec-stat-numroots}
\(s(G)\) is the number of fixed points of \(G\). Correspondingly, \(s(R)\) is the number of roots of \(R\).
If \(H \leq G\) then \(H\) has at least as many fixed points as \(G\), so \(\preceq\) is the reverse of the usual ordering on integers: \(v_1 \preceq v_2\) iff \(v_1 \geq v_2\).
\subsection{\texttt{Factors}}
\label{gg-sec-stat-factors}
This takes a parameter, which is another statistic \(s'\). Then \(s(G)\) is the multiset \(\{s'(G')\}\) where \(G'\) runs over the images of \(G\) acting on each of its orbits (so the degree of \(G'\) is the size of the corresponding orbit). Correspondingly, \(s(R)\) is the multiset \(\{s'(R')\}\) where \(R'\) runs over the irreducible factors of \(R\).
\subsection{\texttt{Degree}}
\label{gg-sec-stat-degree}
\(s(G)\) is the degree of the permutation group \(G\) and \(s(R)\) is the degree of \(R\).
If \(H \leq G\), then they are permutation groups of equal degree, so \(v_1 \preceq v_2\) is \(v_1 = v_2\).
\subsection{\texttt{FactorDegrees}}
\label{gg-sec-stat-facdegs}
\(s(G)\) is the multiset of sizes of orbits of \(G\). Correspondingly, \(s(R)\) is the multiset of degrees of irreducible factors of \(R\).
This is equivalent to \code{Factors} with the \code{Degree} parameter, but is more efficient because it does not require the explicit computation of the orbit images of \(G\) on its orbits.
Additionally, it supports ordering as follows: we know that if \(H \leq G\) then the orbits of \(H\) form a refinement of the orbits of \(G\); that is, the orbits of \(G\) are unions of orbits of \(H\). Hence, given two multisets \(v_1\) and \(v_2\) of orbit sizes, we check combinatorially whether one is a refinement of the other.
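This refinement test amounts to deciding whether the finer multiset can be grouped into parts whose sums form the coarser one. A Python sketch (the function names are ours), where \(v_1 \preceq v_2\) corresponds to \texttt{refines(v1, v2)}:

```python
def _subsets(items, target, start=0, chosen=()):
    """Index tuples of sub-multisets of the descending list `items` summing to target."""
    if target == 0:
        yield chosen
        return
    for i in range(start, len(items)):
        if i > start and items[i] == items[i - 1]:
            continue  # skip duplicate values at the same recursion level
        if items[i] > target:
            continue
        yield from _subsets(items, target - items[i], i + 1, chosen + (i,))

def refines(fine, coarse):
    """True iff the multiset `fine` of orbit sizes refines `coarse`, i.e.
    `fine` can be grouped into parts whose sums form exactly `coarse`."""
    if sum(fine) != sum(coarse):
        return False
    if not coarse:
        return True
    fine = sorted(fine, reverse=True)
    coarse = sorted(coarse, reverse=True)
    for chosen in _subsets(fine, coarse[0]):
        rest = [x for i, x in enumerate(fine) if i not in chosen]
        if refines(rest, coarse[1:]):
            return True
    return False
```

For example, \(\{2,1,1\}\) refines \(\{2,2\}\) (group the two \(1\)s), but \(\{3,1\}\) does not.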
We provide an algorithm to compute maximal preimages of this statistic. First, in case the group \(G\) is intransitive, we embed \(G\) into a direct product \(D\) and find maximal preimages there. For each preimage \(H\) and each \(d \in D\), we check whether \(H^d \cap G\) is a preimage. Observing that if \(n \in N_D(H)\) and \(g \in G\) then \(H^{ndg} \cap G = (H^d \cap G)^g\), it suffices to consider only double coset representatives of \(N_D(H) \backslash D / G\).
\begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}]
\label{gg-alg-maxpre-facdegs}
Given a group \(G\) of degree \(d\) and a multiset \(v\) of integers such that \(\sum v = d\), returns all maximal preimages of \(v\) in \(G\) up to conjugacy.
\begin{algorithmic}[1]
\State \(S \leftarrow \emptyset\)
\State Embed \(G \subset D = G_1 \times \ldots \times G_r\)
\For{maximal preimages \(H\) of \(v\) in \(D\) (\cref{gg-alg-maxpre-facdegs-dp})}
\For{double coset representatives \(d\) of \(N_D(H) \backslash D / G\)}
\State \(H' \leftarrow H^d \cap G\)
\If{\(H'\) has orbits of sizes \(v\)}
\State \(S \leftarrow S \cup \{H'\}\)
\EndIf
\EndFor
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
To find maximal preimages in direct products, we first find all the ways in which \(v\) may be written as a union, with each component corresponding to a direct factor. Then by \cref{gg-lem-subpar-dp}, the maximal preimages in \(D\) are direct products of the maximal preimages in each (transitive) factor.
\begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}: Direct products]
\label{gg-alg-maxpre-facdegs-dp}
Given a direct product \(G = G_1 \times \ldots \times G_r\) and \(v\) as above, returns all maximal preimages of \(v\) in \(G\) up to conjugacy.
\begin{algorithmic}[1]
\State \(S \leftarrow \emptyset\)
\For{multisets \((v_1,\ldots,v_r)\) of integers such that \(\sum v_i = \deg G_i\) and \(\bigcup_i v_i = v\)}
\For{\(i = 1, \ldots, r\)}
\State \(S_i \leftarrow\) maximal preimages of \(v_i\) in \(G_i\) (\cref{gg-alg-maxpre-facdegs-trans})
\EndFor
\For{\((H_1,\ldots,H_r) \in \prod_i S_i\)}
\State \(S \leftarrow S \cup \{H_1 \times \ldots \times H_r\}\)
\EndFor
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
To find maximal preimages in transitive groups, we embed \(G\) into a wreath product \(W\), and solve the problem there. As with \cref{gg-alg-maxpre-facdegs}, a loop over coset representatives lifts these to all preimages in \(G\).
\begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}: Transitive]
\label{gg-alg-maxpre-facdegs-trans}
Given a transitive group \(G\) and \(v\) as above, returns all maximal preimages of \(v\) in \(G\) up to conjugacy.
\begin{algorithmic}[1]
\State \(S \leftarrow \emptyset\)
\State Embed \(G \subset W = G_r \wr \ldots \wr G_1\)
\For{maximal preimages \(H\) of \(v\) in \(W\) (\cref{gg-alg-maxpre-facdegs-wr})}
\For{double coset representatives \(w\) of \(N_W(H) \backslash W / G\)}
\State \(H' \leftarrow H^w \cap G\)
\If{\(H'\) has orbits of sizes \(v\)}
\State \(S \leftarrow S \cup \{H'\}\)
\EndIf
\EndFor
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
\begin{remark}
Sometimes, if the wreath product \(W\) is very large compared to \(G\), the number of double cosets to check makes \cref{gg-alg-maxpre-facdegs-trans} infeasible. In this case, we use the naive algorithm instead.
\end{remark}
For wreath products, we work recursively so that we only need to consider a single wreath product \(A \wr B\). By \cref{gg-lem-subpar-wr}, the maximal preimages correspond to choosing a partition \(\mathcal{X}\) for \(B\), and for each \(X \in \mathcal{X}\) a partition \(\mathcal{Y}_X\) for \(A\), with \(v = \{\abs X \abs Y : Y \in \mathcal{Y}_X, X \in \mathcal{X}\}\). We can think of \(v\) as the areas of a \(d \times e\) rectangle which has a series of vertical cuts (corresponding to the sizes of \(\mathcal{X}\)), and each piece (\(X\)) having a further series of horizontal cuts (corresponding to the sizes of \(\mathcal{Y}_X\)). We call this a ``rectangle division'' (see \cref{gg-fig-recdiv}). For each such division, we find all possible corresponding partitions of \(A\) and \(B\), and take all combinations to construct the partitions for \(A \wr B\).
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0,0) -- (5,0) -- (5,4) -- (0,4) -- (0,0);
\draw (3,0) -- (3,4);
\draw (0,2) -- (3,2);
\draw (0,3) -- (3,3);
\end{tikzpicture}
\caption[A rectangular division]{A rectangular division of a \(5 \times 4\) rectangle, represented as \(\{(3, \{2,1,1\}), (2, \{4\})\}\), with areas \(\{8,6,3,3\}\).}
\label{gg-fig-recdiv}
\end{figure}
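Enumerating rectangle divisions is pure combinatorics, independent of any group theory; the following Python sketch (names are ours) reproduces the division of \cref{gg-fig-recdiv}.

```python
from collections import Counter

def partitions(n, maxpart=None):
    """Integer partitions of n as descending tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def remove_multiset(have, need):
    """have minus need (as a descending list), or None if need is not a sub-multiset."""
    c, n = Counter(have), Counter(need)
    if any(c[k] < n[k] for k in n):
        return None
    c.subtract(n)
    return sorted(c.elements(), reverse=True)

def rectangle_divisions(width, height, areas):
    """Divisions of a width x height rectangle into vertical strips of widths
    w_i, each cut into horizontal pieces of heights h_{i,j}, such that the
    multiset of areas w_i * h_{i,j} equals `areas`. Each division is returned
    once, canonically sorted."""
    results = set()
    def strips(widths, remaining, acc):
        if not widths:
            if not remaining:
                results.add(tuple(sorted(acc, reverse=True)))
            return
        w = widths[0]
        for heights in partitions(height):
            rest = remove_multiset(remaining, [w * h for h in heights])
            if rest is not None:
                strips(widths[1:], rest, acc + [(w, heights)])
    for widths in partitions(width):
        strips(list(widths), sorted(areas, reverse=True), [])
    return sorted(results)
```

For the \(5 \times 4\) example with areas \(\{8,6,3,3\}\), the unique division is \(\{(3,\{2,1,1\}),(2,\{4\})\}\), as in the figure.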
\begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}: Wreath products]
\label{gg-alg-maxpre-facdegs-wr}
Given a wreath product \(G = W_r \wr \ldots \wr W_1\) and \(v\) as above, returns all maximal preimages of \(v\) in \(G\) up to conjugacy.
\begin{algorithmic}[1]
\If{\(r = 0\)}
\State \Return \(\{G\}\)
\EndIf
\State \(A \leftarrow W_r \wr \ldots \wr W_2\)
\State \(B \leftarrow W_1\)
\State \(S \leftarrow \emptyset\)
\For{rectangle divisions \(\{(w_i,\{h_{i,j} \,:\, j\}) \,:\, i\}\) of \(\deg A \times \deg B\) into areas \(v\)}
\State \(S_B \leftarrow\) maximal preimages of \(\{w_i \,:\, i\}\) in \(B\) (naive \cref{gg-alg-maxpre-naive})
\For{each \(i\)}
\State \(S_{A,i} \leftarrow\) maximal preimages of \(\{h_{i,j} \,:\, j\}\) in \(A\) (recursively)
\EndFor
\For{\(H_B \in S_B\)}
\State \(\mathcal{X} \leftarrow \operatorname{Orbits}(H_B)\)
\For{bijections \(m : \mathcal{X} \to \{i\}\) so that \(\abs X = w_{m(X)}\)}
\For{\((H_{A,1},\ldots) \in \prod_i S_{A,i}\)}
\State \(H \leftarrow \parens{\prod_x H_{A,m(\mathcal{X}(x))}} \rtimes H_B\)
\State \(S \leftarrow S \cup \{H\}\)
\EndFor
\EndFor
\EndFor
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
We use the naive algorithm to find the maximal preimages of transitive and primitive groups. Since we are mainly dealing with groups close to \(p\)-groups, we expect that they have plenty of block structure and therefore the factors in any such wreath product are small enough to use the naive algorithm.
\subsection{\texttt{NumAuts}}
\label{gg-sec-stat-numauts}
\(s(G)\) is the index \((N_G(S):S)\) where \(S := \operatorname{Stab}_G(1)\), assuming \(G\) is transitive. \(s(R)\) is the number of automorphisms \(\abs{\operatorname{Aut}(L/K)}\) where \(R\) is irreducible and defines the extension \(L/K\).
Observe that if \(G=\operatorname{Gal}(R/K)\), then \(S=\operatorname{Gal}(R/L)\), \(N_G(S)\) is (by definition) the largest subgroup of \(G\) in which \(S\) is normal, and hence its fixed field is the smallest subfield \(M\) of \(L/K\) such that \(L/M\) is normal. Hence \(\operatorname{Gal}(L/M)\) is \(\operatorname{Aut}(L/K)\), and so \(\operatorname{Aut}(L/K) \cong N_G(S)/S\).
As we shall see in \cref{gg-lem-autgrp-order}, if \(H \leq G\) then \(s(G) \mid s(H)\). Hence \(v_1 \preceq v_2\) is \(v_2 \mid v_1\).
\subsection{\texttt{AutGroup}}
\label{gg-sec-stat-autgroup}
\(s(G)\) is the group \(N_G(S)/S\) where \(S := \operatorname{Stab}_G(1)\) as a regular permutation group of degree \((N_G(S):S)\); it requires \(G\) to be transitive. Correspondingly, \(s(R)\) requires \(R\) to be irreducible, and is \(\operatorname{Aut}(L/K)\) where \(L\) is the field defined by \(R\).
\(v_1 \sim v_2\) iff \(v_1\) and \(v_2\) are groups of the same degree and are conjugate in the symmetric group of this degree.
The test for ordering uses the following lemma, which says that as the Galois group gets smaller, the automorphism group gets larger. Hence \(v_1 \preceq v_2\) is defined as follows: \(v_1\) must have degree at least the degree of \(v_2\), and \(v_2\) must be conjugate to a subgroup of \(v_1\).
\begin{lemma}
\label{gg-lem-autgrp-order}
Suppose \(G' \leq G\) both act transitively on a set \(X\). Fix \(x \in X\) and define \(S := \operatorname{Stab}_G(x)\), \(N := N_G(S)\), \(A := N/S\) and define \(S'\), \(N'\), \(A'\) similarly with respect to \(G'\). Then \(A\) is naturally isomorphic to a subgroup of \(A'\).
\end{lemma}
\begin{proof}
By definition
\begin{align*}
N &= \{ n \in G : s \in S \Rightarrow s^n \in S \} \\
&= \{ n \in G : s \in S \Rightarrow (s^n)(x) = x \} \\
&= \{ n \in G : s \in S \Rightarrow s(n(x)) = n(x) \} \\
&= \{ n \in G : s \in S \Rightarrow s \in \operatorname{Stab}_G(n(x)) \} \\
&= \{ n \in G : S \subseteq \operatorname{Stab}_G(n(x)) \} \\
&= \{ n \in G : S = \operatorname{Stab}_G(n(x)) \} \text{ by orbit-stabilizer theorem} \\
&= \{ n \in G : n(x) \in \operatorname{Fix}(S) \} \\
&= \{ n \in G : n(y) \in \operatorname{Fix}(S) \} \text{ for any \(y \in \operatorname{Fix}(S)\) by symmetry} \\
&= \{ n \in G : y \in \operatorname{Fix}(S) \Rightarrow n(y) \in \operatorname{Fix}(S) \} \\
&= \operatorname{Stab}_G \operatorname{Fix}(S)
\end{align*}
is the group of elements of \(G\) which permute the fixed points of \(S := \operatorname{Stab}_G(x)\).
Since \(G\) is transitive, for each \(y \in \operatorname{Fix}(S)\) there exists \(n \in G\) such that \(n(x) = y\), and hence \(n \in N\). We deduce that \(N\) acts transitively on \(\operatorname{Fix}(S)\), and in particular the orbit-stabilizer theorem implies that \[\abs A = (N:S) = \abs{\operatorname{Fix}(S)}.\]
Similarly, since \(G'\) is also transitive, \(N \cap G' = \operatorname{Stab}_{G'} \operatorname{Fix}(S)\) acts transitively on \(\operatorname{Fix}(S)\), and so the orbit-stabilizer theorem implies \[\abs{N \cap G'} = \abs{\operatorname{Stab}_{N \cap G'}(x)} \abs{\operatorname{Fix}(S)};\] noting that this stabilizer is actually \(S'\), we deduce \[(N \cap G' : S') = (N : S).\]
The isomorphism theorems imply \[(N \cap G')/(S \cap G') \cong (N \cap G')S/S \leq N/S,\] but noting that \(S' = S \cap G'\) then the previous paragraph implies that we have equality, and hence naturally \[(N \cap G')/(S \cap G') \cong N/S =: A.\]
Finally, note that \[N \cap G' = \operatorname{Stab}_{G'} \operatorname{Fix}(S) \leq \operatorname{Stab}_{G'} \operatorname{Fix}(S') =: N'\] so that \[(N \cap G')/(S \cap G') \leq N'/S' =: A'.\]
\end{proof}
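The lemma, and the divisibility \(s(G) \mid s(H)\) it gives for \texttt{NumAuts}, can be sanity-checked by brute force. The following Python helpers are naive stand-ins for a real permutation-group implementation, with permutations represented as tuples of images of \(0,\ldots,n-1\).

```python
def compose(p, q):
    """(p*q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def closure(gens):
    """All elements of the group generated by gens (brute force)."""
    n = len(gens[0])
    G = {tuple(range(n))}
    frontier = list(G)
    while frontier:
        g = frontier.pop()
        for h in gens:
            e = compose(h, g)
            if e not in G:
                G.add(e)
                frontier.append(e)
    return G

def stabilizer(G, x):
    return {g for g in G if g[x] == x}

def normalizer(G, S):
    """N_G(S) by direct conjugation testing."""
    return {g for g in G
            if all(compose(inverse(g), compose(s, g)) in S for s in S)}

def aut_order(G, x=0):
    """(N_G(S) : S) for S = Stab_G(x): the NumAuts statistic s(G)."""
    S = stabilizer(G, x)
    return len(normalizer(G, S)) // len(S)
```

For \(G = S_3\) (transitive on three points) and \(G' = A_3 \leq G\), this gives \(s(G) = 1\) and \(s(G') = 3\), with \(s(G) \mid s(G')\) and \(s(G) = \abs{\operatorname{Fix}(S)}\) as in the proof.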
\subsection{\texttt{Tup}}
This statistic takes as a parameter a tuple \((s_1,\ldots,s_k)\) of statistic algorithms. Then \(s(G) = (s_1(G),\ldots,s_k(G))\) and similarly for \(s(R)\). Also \(v_1 \sim v_2\) iff \(v_{1,i} \sim v_{2,i}\) for all \(i\), and similarly for \(\preceq\).
\section{Subgroup choice algorithms}
\label{gg-sec-choice}
A subgroup choice algorithm decides, given the current state of a group theory algorithm (\cref{gg-sec-groups}) for the resolvent method, which subgroup \(U \leq W\) to form a resolvent from next.
Currently we use one method \code{Tranche} which generates a sequence \(\mathscr{U}_1,\mathscr{U}_2,\ldots\) of sets of subgroups of \(W\) one at a time, which we call \define{tranches}. Given the current tranche, \(\mathscr{U}\), we inspect each element \(U\) in turn to test if it is useful by some measure (see \cref{gg-rmk-useful}). If so, we use one such \(U\). If there is no such \(U\), we declare the tranche useless and move on to the next one.
The idea is that we avoid enumerating all possible subgroups \(U\leq W\), and only generate them until we find a useful one.
\begin{remark}[On usefulness]
\label{gg-rmk-useful}
In the \texttt{All} group theory algorithm, we have a pool \(\mathcal{P}\) of all possible Galois groups, and therefore we know all of the possible outcomes of using the group \(U\) to form a resolvent: i.e. the resolvent has one of the Galois groups \(\braces{q(P) \,:\, P \in \mathcal{P}}\) and so we measure the statistic values \(\mathcal{S} = \braces{s(q(P)) \,:\, P \in \mathcal{P}}\). If \(\mathcal{S}\) contains multiple elements, then \(U\) is useful because we will certainly cut down the list \(\mathcal{P}\). Usefulness for \texttt{Maximal} and \texttt{Maximal2} is defined in \cref{gg-sec-groups-maximal,gg-sec-groups-maximal2}.
\end{remark}
The rest of this section describes some possible methods for producing tranches.
\subsection{\texttt{All}}
\label{gg-sec-tranche-all}
Produces a single tranche containing all subgroups of \(W\).
\subsection{\texttt{Index}}
\label{gg-sec-tranche-idx}
For each divisor \(n \mid \abs{W}\), produces a tranche containing all the subgroups of \(W\) of index \(n\).
There are algorithms to produce the subgroups of a group of a given index. For example, the \texttt{Subgroups} intrinsic in Magma has an \texttt{IndexEqual} parameter for this purpose.
\subsection{\texttt{OrbitIndex}}
\label{gg-sec-tranche-oidx}
\begin{definition}
\label{gg-def-oidx}
For \(U \leq W \leq S_d\), the \define{orbit index of \(U\) in \(W\)} is the index \((W : U')\) where \[U' = \operatorname{Stab}_W \operatorname{Orbits}(U) = \braces{w \in W \,:\, X \in \operatorname{Orbits}(U), x \in X \Rightarrow w(x) \in X}\] and is denoted \((W:U)^{\operatorname{orb}}\). The \define{remaining orbit index of \(U\) in \(W\)} is \((W:U)/(W:U)^{\operatorname{orb}} = (U':U)\). If \(\mathcal{X}\) is a partition of \(\braces{1,\ldots,d}\), then it is a \define{subgroup partition for \(W\)} if there exists \(U \leq W\) such that \(\mathcal{X} = \operatorname{Orbits}(U)\). The \define{index} \((W:\mathcal{X})\) of a subgroup partition \(\mathcal{X}\) is \((W : \operatorname{Stab}_W(\mathcal{X}))\).
\end{definition}
For each divisor \(n \mid \abs{W}\) and \(r \mid n\), produces a tranche containing all the subgroups of \(W\) of index \(n\) and of remaining orbit index \(r\).
We find empirically that restricting to small \(r\), such as \(\operatorname{val}_p(r)\le1\), typically results in an algorithm which still terminates, and does so more quickly because it generates many fewer groups.
To produce the tranche corresponding to a given \((n,r)\), we compute the subgroup partitions \(\mathcal{X}\) of \(\braces{1,\ldots,d}\) such that \((W:\operatorname{Stab}_W(\mathcal{X})) = m := \tfrac{n}{r}\), and then compute the subgroups of \(\operatorname{Stab}_W(\mathcal{X})\) of index \(r\). To efficiently compute the subgroup partitions of \(W\) of a given index, we use the special form of \(W\). If \(W\) is a wreath product, direct product, or symmetric group, then we can use the algorithms in the rest of this section to reduce the problem to computing subgroup partitions of smaller groups. For these smaller groups, we compute the subgroup partitions by explicitly enumerating all the subgroups.
\begin{lemma}[Partitions of direct products]
\label{gg-lem-subpar-dp}
Suppose \(W_i \leq S_{d_i}\) for \(i=1,\ldots,k\) (each symmetric group acting on a disjoint set) and \(W = W_1 \times \cdots \times W_k\). If \(\mathcal{X}_i\) is a partition for \(W_i\) of orbit index \(m_i\) then \(\bigcup_i \mathcal{X}_i\) is a partition for \(W\) of orbit index \(\prod_i m_i\). Every partition for \(W\) is of this form.
\end{lemma}
\begin{proof}
By definition \(m_i = (W_i : \operatorname{Stab}_{W_i}(\mathcal{X}_i))\). Now \[\operatorname{Stab}_W(\bigcup_i \mathcal{X}_i) = \prod_i \operatorname{Stab}_{W_i}(\mathcal{X}_i)\] and the result follows. For the converse, take any \(U \leq W\), consider its projections \(U_i\) to \(W_i\), and let \(\mathcal{X}_i = \operatorname{Orbits}(U_i)\); then clearly \(\operatorname{Orbits}(U) = \bigcup_i \mathcal{X}_i\).
\end{proof}
\begin{algorithm}[Partitions of direct products]
\label{gg-alg-subpar-dp}
Given \(W_i \leq S_{d_i}\) for \(i=1,\ldots,k\) and an integer \(m \mid \prod_i \abs{W_i}\), this returns all the partitions for \(W = W_1 \times \cdots \times W_k\) of index \(m\).
\begin{algorithmic}[1]
\If{\(k = 0\)}
\State\Return \(\braces{\emptyset}\)
\EndIf
\State \(S \leftarrow \emptyset\)
\ForAll{\(m_1 \mid \gcd(m, \abs{W_1})\)}
\State \(S_1 \leftarrow\) partitions of \(W_1\) of index \(m_1\)
\State \(S_2 \leftarrow\) partitions of \(W_2 \times \cdots \times W_k\) of index \(m_2 = \tfrac{m}{m_1}\)
\State \(S \leftarrow S \cup \braces{\mathcal{X}_1 \cup \mathcal{X}_2 \,:\, \mathcal{X}_1 \in S_1, \mathcal{X}_2 \in S_2}\)
\EndFor
\State\Return \(S\)
\end{algorithmic}
\end{algorithm}
\begin{lemma}[Partitions of wreath products]
\label{gg-lem-subpar-wr}
Suppose \(A,B\) are permutation groups, let \(\mathcal{X}\) be a subgroup partition for \(B\), and for each \(X \in \mathcal{X}\) let \(\mathcal{Y}_X\) be a subgroup partition for \(A\). Then \(\mathcal{Z} = \braces{X \times Y \,:\, X \in \mathcal{X}, Y \in \mathcal{Y}_X}\) is a subgroup partition for \(W = A \wr B\), its index is \((B:\mathcal{X}) \prod_{X \in \mathcal{X}} (A:\mathcal{Y}_X)^{\abs{X}}\), and all subgroup partitions are of this form up to conjugacy.
\end{lemma}
\begin{proof}
If \(A\) acts on \(\{1,\ldots,d\}\) and \(B\) acts on \(\{1,\ldots,e\}\), then elements of \(A \wr B\) can be defined as elements of the cartesian product \(A^e \times B\) acting on \(\{1,\ldots,e\} \times \{1,\ldots,d\}\) as \[(a_1,\ldots,a_e,b)(x,y) = (b x, a_x y).\] This implies the group operation is \[(a'_1,\ldots,a'_e,b')(a_1,\ldots,a_e,b) = (a'_{b1}a_1,\ldots,a'_{be}a_e,b'b).\]
Suppose \(\mathcal{Z}\) is defined as above, and take any \((x,y),(x',y') \in X \times Y \in \mathcal{Z}\). Choose \(b \in \operatorname{Stab}_B(\mathcal{X})\) such that \(b(x)=x'\), which is possible since \(\operatorname{Stab}_B(\mathcal{X})\) acts transitively on \(X\) by definition of a subgroup partition. Choose \(a_x \in \operatorname{Stab}_A(\mathcal{Y}_X)\) such that \(a_x(y)=y'\), and choose all other \(a_{x''} \in \operatorname{Stab}_A(\mathcal{Y}_{\mathcal{X}(x'')})\) arbitrarily (e.g. the identity). Defining \(g=(a_1,\ldots,a_e,b)\), then \(g(x,y) = (bx,a_xy) = (x',y')\) and by construction \(g \in \operatorname{Stab}_W(\mathcal{Z})\). We conclude that \(\operatorname{Stab}_W(\mathcal{Z})\) acts transitively on each element of \(\mathcal{Z}\), and so \(\mathcal{Z}\) is a subgroup partition of \(W\) as claimed.
Expressing \(A \wr B\) as a semidirect product \(A^e \rtimes B\), then \(\operatorname{Stab}_W(\mathcal{Z})\) is the subgroup \[\parens{\prod_{x \in \{1,\ldots,e\}} \operatorname{Stab}_A(\mathcal{Y}_{\mathcal{X}(x)})} \rtimes \operatorname{Stab}_B(\mathcal{X})\] where \(\mathcal{X}(x)\) is the \(X \in \mathcal{X}\) such that \(x \in X\). The index \((W:\mathcal{Z})\) follows.
Suppose \(G \leq W\). We want to show that a conjugate of \(G\) has orbits of the form \(\mathcal{Z}\). Letting \(\pi : A \wr B \to B\) be the natural projection \((a_1,\ldots,a_e,b) \mapsto b\), let \(\mathcal{X} = \operatorname{Orbits}(\pi(G))\), which is a subgroup partition of \(B\). For each \(X \in \mathcal{X}\), fix a representative \(x_X \in X\), and for each \(x \in X\), fix some \(g_x = (a_{x,1},\ldots,a_{x,e},b_x) \in G\) such that \(\pi(g_x)(x_X) = x\). Define \(\hat a_x = a_{x,x_X}\) and \(\hat g = (\hat a_1,\ldots,\hat a_e,id) \in W\); then by construction \[g_x^{-1} \hat g (x,y) = (x_X, y).\] Define \(\mathcal{Y}_X\) such that \(\{x_X\} \times Y\) is an orbit of \(S_X := \operatorname{Stab}_G(\{x_X\} \times \{1,\ldots,d\})\) for each \(Y \in \mathcal{Y}_X\). We claim that \[\operatorname{Orbits}(G^{\hat g}) = \mathcal{Z} = \{X \times Y \,:\, Y \in \mathcal{Y}_X, X \in \mathcal{X}\}.\] Note that if \(g^{\hat g}(x,y)=(x',y')\) then \(\pi(g^{\hat g})(x)=\pi(g)(x)=x'\) and so \(\mathcal{X}(x)=\mathcal{X}(x')=X\) say. For any \((x,y),(x',y')\) with \(x,x'\in X \in \mathcal{X}\), there exists \(g \in G\) such that \(g^{\hat g}(x,y) = (x',y')\) iff there is \(g\) such that \((g_{x'}^{-1} g g_x) g_x^{-1} \hat g (x, y) = g_{x'}^{-1} \hat g (x', y')\), i.e. such that \((g_{x'}^{-1} g g_x)(x_X, y) = (x_X, y').\) This occurs iff there is \(g \in S_X\) such that \(g(x_X,y)=(x_X,y')\), which occurs iff \(\mathcal{Y}_X(y)=\mathcal{Y}_X(y')=Y\) say, in which case \((x,y),(x',y')\in X \times Y\). This proves the claim.
\end{proof}
\begin{algorithm}[Partitions of wreath products]
\label{gg-alg-subpar-wr}
Given \(A \leq S_d, B \leq S_e\) and an integer \(m \mid \abs{A}^e \abs{B}\), this returns all the partitions for \(A \wr B\) of index \(m\) up to conjugacy.
\begin{algorithmic}[1]
\State \(S \leftarrow \emptyset\)
\ForAll{\(m' \mid m\)}
\State \(S' \leftarrow\) partitions for \(B\) of index \(m'\)
\ForAll{\(\mathcal{X} \in S'\)}
\ForAll{factorizations of \(\tfrac{m}{m'}\) of the form \(\prod_{X \in \mathcal{X}} m_X^{\abs{X}}\)}
\ForAll{\(X \in \mathcal{X}\)}
\State \(S_X \leftarrow\) partitions for \(A\) of index \(m_X\)
\EndFor
\ForAll{\((\mathcal{Y}_X)_X \in \prod_X S_X\)}
\State include \(\braces{X \times Y \,:\, X \in \mathcal{X}, Y \in \mathcal{Y}_X}\) in \(S\)
\EndFor
\EndFor
\EndFor
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
\begin{remark}
The preceding algorithm may produce multiple representatives per conjugacy class. With a little more care, we can return just one as follows.
Having chosen \(\mathcal{X}\), we partition it into \(B\)-conjugacy classes \(\mathcal{X}_i=\{X_{i,j}\}\). Then we consider all factorizations of \(m/m'\) of the form \(\prod_{\mathcal{X}_i} m_i^{\abs{X_{i,1}}}\), and then all factorizations of \(m_i\) of the form \(\prod_{X_{i,j}\in\mathcal{X}_i} m_{X_{i,j}}\) with \(m_{i,1} \le m_{i,2} \le \ldots\). Hence we have a factorization of \(m/m'\) of the form \(\prod_{X \in \mathcal{X}} m_{X}^{\abs{X}}\) as above. Note that this includes all factorizations of this form exactly once up to reordering conjugate blocks \(X \in \mathcal{X}\).
For such a factorization, we partition \(\mathcal{X}_i\) further into classes \(\mathcal{X}_{i,j}=\{X_{i,j,k}\}\) such that \(m_{i,j}:=m_{X_{i,j,k}}\) is constant within a class. Similar to before, we let \(S_{i,j} = \{\mathcal{Y}_{i,j,\ell}\}\) be all partitions for \(A\) of index \(m_{i,j}\), and consider all \((\mathcal{Y}_{i,j,\ell_k})_{i,j,k} \in \prod_{i,j,k} S_{i,j}\) with \(\ell_1 \le \ell_2 \le \ldots\). Note that this includes all \((\mathcal{Y}_X)_X \in \prod_X S_X\) as above precisely once up to reordering conjugate blocks \(X \in \mathcal{X}\).
Letting \(\mathcal{Z} = \{X_{i,j} \times Y \,:\, Y \in \mathcal{Y}_{i,j,\ell_k}\}\) be the corresponding partition, then all such \(\mathcal{Z}\) are not conjugate in \(A \wr B\), and they cover all conjugacy classes up to reordering conjugate blocks of \(\mathcal{X}\). Define \(S \leq S_d \wr S_e\) to be the group isomorphic to \(1_d \wr \prod_i 1_{\abs{\mathcal{X}_i}} \wr S_{\abs{X_{i,1}}}\) which reorders conjugate blocks of \(\mathcal{X}\), where \(1_d\) denotes the trivial subgroup of \(S_d\). Then we find all \(\mathcal{Z}\) up to \(A\wr B\) conjugacy by finding all \(S\)-conjugates of \(\mathcal{Z}\) up to \(A\wr B\) conjugacy as follows.
Let \(H_0 = \operatorname{Stab}_{A \wr B}(\mathcal{Z})\); then we want all \(S\)-conjugates of \(H_0\) up to \(A\wr B\)-conjugacy. Note that if \(n \in N_S(H_0)\) and \(g \in (A \wr B) \cap S\) then \(H_0^{nsg} \sim_{A\wr B} H_0^s\), so it suffices to consider double coset representatives \(s\) of \(N_S(H_0) \backslash S / ((A \wr B) \cap S)\). We compute \(H_0^s\) for all such \(s\) and remove duplicates up to \(A \wr B\)-conjugacy.
\end{remark}
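One small sub-step of \cref{gg-alg-subpar-wr} is enumerating the factorizations of \(\tfrac{m}{m'}\) of the form \(\prod_{X \in \mathcal{X}} m_X^{\abs X}\). A Python sketch of this enumeration (the function name is ours; it does not attempt the deduplication of conjugate blocks discussed in the remark above is left out, so all ordered tuples are produced):

```python
def exp_factorizations(target, exps):
    """Yield all tuples (m_1, ..., m_k) of positive integers with
    m_1**exps[0] * ... * m_k**exps[k-1] == target."""
    if not exps:
        if target == 1:
            yield ()
        return
    e, m = exps[0], 1
    while m ** e <= target:
        if target % (m ** e) == 0:
            for rest in exp_factorizations(target // m ** e, exps[1:]):
                yield (m,) + rest
        m += 1
```

For instance, with blocks of sizes \(2\) and \(1\) and target \(12\), the factorizations \(m_1^2 m_2 = 12\) are \((1,12)\) and \((2,3)\).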
\begin{lemma}[Partitions of symmetric groups]
\label{gg-lem-subpar-sym}
Any partition \(\mathcal{X}\) of \(\braces{1,\ldots,d}\) is a subgroup partition for \(S_d\) and it has orbit index \(d! / \prod_{X \in \mathcal{X}} \abs{X}!\).
\end{lemma}
\begin{proof}
Indeed \(\operatorname{Stab}_{S_d}(\mathcal{X}) = \prod_{X \in \mathcal{X}} S_X\).
\end{proof}
\begin{algorithm}[Partitions of symmetric groups]
\label{gg-alg-subpar-sym}
Given integers \(d \geq 0, m \mid d!\), returns all partitions for \(S_d\) of index \(m\) up to conjugacy.
\begin{algorithmic}[1]
\If{\(d=0\)}
\State\Return \(\braces{\emptyset}\)
\EndIf
\State \(S \leftarrow \emptyset\)
\ForAll{\(d_1=1,\ldots,d\)}
\If{\(d!/d_1!(d-d_1)! \mid m\)}
\State \(S_2 \leftarrow\) partitions of \(S_{d-d_1}\) of index \(m d_1! (d-d_1)! / d!\) up to conjugacy
\State \(S \leftarrow S \cup \braces{\braces{1,\ldots,d_1} \cup \mathcal{X}_2 \,:\, \mathcal{X}_2 \in S_2}\)
\EndIf
\EndFor
\State\Return \(S\)
\end{algorithmic}
\end{algorithm}
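By \cref{gg-lem-subpar-sym}, the recursion of \cref{gg-alg-subpar-sym} is equivalent to filtering integer partitions of \(d\) by their multinomial index. A Python sketch (names are ours), returning the block-size shapes, one per conjugacy class:

```python
from math import factorial

def partitions(n, maxpart=None):
    """Integer partitions of n as descending tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def sym_partitions(d, m):
    """Block-size shapes of the subgroup partitions for S_d of index m,
    one representative per conjugacy class."""
    out = []
    for shape in partitions(d):
        index = factorial(d)  # the multinomial d! / prod |X|!
        for part in shape:
            index //= factorial(part)
        if index == m:
            out.append(shape)
    return out
```

For example, for \(d = 4\) and \(m = 6\) the only shape is \((2,2)\), since \(4!/2!\,2! = 6\).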
\input{gg-sec-implementation-v2.tex}
\section{Auxiliary algorithms}
\label{gg-sec-auxillary}
A collection of algorithms used elsewhere in this article.
\subsection{Group embeddings}
\begin{algorithm}[Embed into direct product]
\label{gg-alg-embed-dp}
Given \(G \leq S_d\), returns \(s \in S_d\) and transitive \(G_1,\ldots,G_r\) such that \(\sum_i \deg G_i = d\) and \(G^s \leq G_1 \times \ldots \times G_r\).
The algorithm finds the image of the group acting on each of its orbits, then takes the direct product.
\begin{algorithmic}[1]
\State \(X_1=\{x_{1,1},\ldots\},\ldots,X_r \leftarrow \operatorname{Orbits}(G)\)
\State \(D \leftarrow G|_{X_1} \times \ldots \times G|_{X_r}\)
\State \(s \leftarrow\) permutation sending \(x_{i,j}\) to \(\sum_{i'<i} \abs{X_{i'}} + j\).
\State \Return \(D, s^{-1}\)
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[Embed into wreath product]
\label{gg-alg-embed-wr}
Given transitive \(G \leq S_d\), returns \(s\in S_d\) and primitive \(G_1,\ldots,G_r\) such that \(\prod_i \deg G_i = d\) and \(G^s \leq G_r \wr \ldots \wr G_1\).
We choose a minimal non-trivial block-partition of \(G\) and use this to embed \(G\) into \(A \wr B\) with the same block structure. We then recurse on \(B\).
Note that, unlike in the direct product case, there is no canonical ``best'' (smallest) choice for the factors. Indeed, suppose we are given a group of the form \((A_1 \times \ldots \times A_e) \rtimes B \leq S_d \wr S_e\); then we could embed it into \(A \wr B\) where \(A = \angles{A_1^{s_1},\ldots,A_e^{s_e}}\) for any \(s_i \in S_d\). Minimizing \(A\) is difficult, but one cheap heuristic is the following: for each \(i>1\), choose \(g_i \in G\) such that \(g_i(1) \in \{d(i-1)+1,\ldots,di\}\), and then reorder \((d(i-1)+1,\ldots,di)\) to \((g_i(1),\ldots,g_i(d))\) (i.e.\ let \(s_i\) permute \(j \mapsto g_i(j)-d(i-1)\)). If the best \(A\) is cyclic \(C_d\), this is guaranteed to find it.
\begin{algorithmic}[1]
\If{\(G\) is primitive}
\State \Return \(G, id\)
\EndIf
\State \(P \leftarrow\) minimal partition of \(G\)
\State Fix an ordering \((B_1,\ldots,B_e)\) on \(P\)
\State Fix an ordering \((x_{i,1},\ldots,x_{i,d})\) on each \(B_i\)
\For{i = 1,\ldots,e}
\State \(g_i \leftarrow\) an element of \(G\) such that \(g_i(x_{1,1}) = x_{i,1}\)
\EndFor
\State \(s \leftarrow\) the permutation \(g_i(x_{1,j}) \mapsto (i-1)d+j\)
\State \(G' \leq S_d \wr S_e \leftarrow G^s\)
\State \(q : G' \to S_e\) the quotient
\State \(b : S_e \to S_d \wr S_e\) the canonical lift such that \(b(\sigma)((i-1)d+j)=(\sigma(i)-1)d+j\)
\State \(B \leftarrow q(G')\)
\State \(\mathcal{A} \leftarrow \emptyset\)
\For{each generator \(g\) of \(G\)}
\State \(g' \leftarrow g b(q(g))^{-1}\)
\State \(\mathcal{A} \leftarrow \mathcal{A} \cup \{j \mapsto g'((i-1)d+j)-(i-1)d \,:\, i=1,\ldots,e\} \subset S_d\)
\EndFor
\State \(A \leq S_d \leftarrow \angles{\mathcal{A}}\)
\State \((W_r,\ldots,W_1), s' \leftarrow\) embedding of \(B\) into a wreath product
\State \Return \((A,W_r,\ldots,W_1), sb(s')\)
\end{algorithmic}
\end{algorithm}
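The quotient \(q\) onto the action on blocks admits a short sketch (again 0-indexed, with a helper name of our choosing):

```python
def block_action(gens, blocks):
    """Induced permutations of a block system: `blocks` partitions
    {0,...,d-1} and each generator (a list) must permute the blocks;
    returns, for each generator, its action on block indices."""
    index = {}
    for i, blk in enumerate(blocks):
        for x in blk:
            index[x] = i
    # the image of a block is determined by the image of any one point in it
    return [[index[g[blk[0]]] for blk in blocks] for g in gens]
```

For example, the 4-cycle \((0\,1\,2\,3)\) swaps the two blocks \(\{0,2\}\) and \(\{1,3\}\).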
\subsection{Combinatorial}
\begin{algorithm}[Linear divisions]
\label{gg-alg-lindiv}
Given \(n \in \mathbb{N}\) and multiset \(N \subset \mathbb{N}\), returns all subsets \(M \subset N\) such that \(\sum M = n\).
We represent multisets of integers as a sorted sequence, with the largest element first. We loop over possible choices of the first division, and then recurse to assign the rest. An additional optional parameter \(L\) is such a sequence, and restricts any returned \(M\) to be at most \(L\) in the lexicographic ordering (i.e. if \(M \ne L\), then at the first place they disagree, \(M\) must be smaller). The default \(L\) is \(\{n\}\), which is no restriction.
\begin{algorithmic}[1]
\State \(S \leftarrow \emptyset\)
\For{distinct \(m_1 \in N\)}
\If{\(m_1 \le \min(n, L_1)\)}
\For{linear divisions \((m_2,\ldots)\) of \(n-m_1\) from \(N-\{m_1\}\) with limit \((L_2,\ldots)\) if \(m_1 = L_1\) or else limit \((m_1,m_1,\ldots)\)}
\State Append \((m_1,m_2,\ldots)\) to \(S\)
\EndFor
\EndIf
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
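A Python sketch of this recursion (ours), with multisets represented as nonincreasing lists and the same lexicographic limit:

```python
def linear_divisions(n, N, limit=None):
    """All ways to pick a sub-multiset of N (positive integers) summing
    to n, returned as nonincreasing tuples.  `limit` bounds the results
    lexicographically, as in the algorithm above."""
    if limit is None:
        limit = (n,)
    if n == 0:
        return [()]
    out = []
    L1 = limit[0] if limit else 0
    for m1 in sorted(set(N), reverse=True):
        if m1 <= min(n, L1):
            rest = list(N)
            rest.remove(m1)
            # if we matched the limit exactly, the tail is bounded by the
            # rest of the limit; otherwise only by m1 itself
            sub_limit = limit[1:] if m1 == L1 else (m1,) * n
            for tail in linear_divisions(n - m1, rest, sub_limit):
                out.append((m1,) + tail)
    return out
```

For \(n=5\) and \(N=\{3,2,2,1\}\) the divisions are \((3,2)\) and \((2,2,1)\).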
\begin{algorithm}[Rectangle divisions]
\label{gg-alg-recdiv}
Given \(w,h \in \mathbb{Z}\) and multiset \(A \subset \mathbb{Z}\), returns all multisets \(\{(w_i,\{h_{i,j}\})\}\) such that \(\sum_i w_i = w\), \(\sum_j h_{i,j} = h\) for each \(i\) and \(\{w_i h_{i,j} \,:\, i,j\} = A\). See \cref{gg-fig-recdiv}.
As with the previous algorithm, multisets are represented as sorted sequences. We loop over possible choices of the first division, and then recurse to assign the rest. Optional parameter \(L=(w_L,\{h_{L,j}\})\) limits the allowed divisions, with default \((w,\{h\})\).
\begin{algorithmic}[1]
\State \(S \leftarrow \emptyset\)
\For{distinct divisors \(w_1\) of some \(a \in A\)}
\If{\(w_1 \le \min(w, w_L)\)}
\For{linear divisions \((h_{1,1},h_{1,2},\ldots)\) of \(h\) from \(\{a/w_1 \,:\, a \in A, w_1 \mid a\}\) with limit \(h_{L,j}\) if \(w_1=w_L\) or else no limit}
\For{rectangle divisions \(\{(w_2,\{h_{2,j}\}),\ldots\}\) of width \(w-w_1\), height \(h\), areas \(A-\{w_1h_{1,j}\}\), limit \((w_1,\{h_{1,j}\})\)}
\State Append \((w_1,\{h_{1,j}\},\ldots)\) to \(S\)
\EndFor
\EndFor
\EndIf
\EndFor
\State \Return \(S\)
\end{algorithmic}
\end{algorithm}
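For small inputs the output of this algorithm can be checked against a naive exponential search; the following Python sketch (ours, not the implementation) enumerates column widths and cell heights directly:

```python
def rectangle_divisions_bruteforce(w, h, areas):
    """All ways to split a w-by-h rectangle into columns of widths w_i
    (summing to w) and each column into cells of heights h_{i,j}
    (summing to h), so that the multiset of cell areas w_i * h_{i,j}
    equals `areas`.  Exponential brute force; illustrative only."""
    results = set()

    def rec(w_left, remaining, cols):
        if w_left == 0:
            if not remaining:
                results.add(tuple(sorted(cols)))
            return
        for wi in range(1, w_left + 1):
            # choose heights for one column of width wi
            def pick(h_left, pool, hs):
                if h_left == 0:
                    rec(w_left - wi, pool, cols + [(wi, tuple(sorted(hs)))])
                    return
                for a in sorted(set(pool)):
                    if a % wi == 0 and a // wi <= h_left:
                        rest = list(pool)
                        rest.remove(a)
                        pick(h_left - a // wi, rest, hs + [a // wi])
            pick(h, list(remaining), [])

    rec(w, sorted(areas), [])
    return sorted(results)
```

For a \(2 \times 2\) rectangle with areas \(\{1,1,2\}\), the only division has two columns of width 1, one split into cells of heights \(\{1,1\}\) and the other a single cell of height 2.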
\begin{algorithm}[Binning]
\label{gg-alg-binning}
Suppose we are given integers \((m_1,\ldots,m_r)\) and \((n_1,\ldots,n_s)\) such that we have \(m_i\) indistinguishable copies of some item \(i\), and \(n_j\) indistinguishable copies of some bin \(j\). A \define{binning} is some \((m'_1,\ldots,m'_r)\) with \(0 \leq m'_i \leq m_i\) for all \(i\). Suppose we are given a function \(V\) such that when \(V((m'_1,\ldots,m'_r),j)\) is true, we define the binning to be \define{valid for bin \(j\)}. Suppose we are given a function \(S\) such that whenever \((m'_1,\ldots,m'_r)\) is valid for bin \(j\) and \(0 \leq m''_i \leq m'_i\), then \(S((m''_1,\ldots,m''_r),j)\) is true. Such a binning is \define{semi-valid}. A \define{total valid binning} is a sequence of length \(s\), whose \(j\)th entry is a multiset of \(n_j\) valid binnings for bin \(j\), and such that all of these binnings sum to \((m_1,\ldots,m_r)\). This algorithm returns all total valid binnings.
Optionally, a \define{partial semi-valid binning} \(B\) can be given (like a total valid binning, except the binnings are only semi-valid and only sum to at most \(m_1,\allowbreak\ldots,\allowbreak m_r\)) and this algorithm only returns total valid binnings which extend it.
Optionally, a limit \(N\) can be given (defaulting to \(\infty\)) and this algorithm returns at most this many total binnings.
This algorithm works by choosing an item and considering all bins it could be added to. For each choice, we add this to the partial semi-valid binning, and recursively find all the total binnings extending it.
In order to avoid duplicated effort, when adding an item \(i\) to one of several indistinguishable bins, we only consider additions which make the bin's \(i\)th entry either equal to, or one greater than, the largest \(i\)th entry among the bins which agree elsewhere.
Since items are assigned in order, they should be given to the algorithm in whatever order is likely to lead to a contradiction quickest (in terms of not being semi-valid). This usually means the ``largest'' items should come first, because these will ``fill'' the bins quicker.
\begin{algorithmic}[1]
\LineComment{Check semi-valid}
\If{\(B\) is not semi-valid}
\State\Return \(\emptyset\)
\EndIf
\LineComment{Base case: nothing more to bin}
\If{\(m_i=0\) for all \(i\)}
\If{\(B\) is a total valid binning}
\State\Return \(\{B\}\)
\Else
\State\Return \(\emptyset\)
\EndIf
\EndIf
\LineComment{General case}
\State \(\mathcal{R} \leftarrow \emptyset\)
\State \(i \leftarrow \min \{i \,:\, m_i \ne 0\}\)
\State \(m_i \leftarrow m_i-1\)
\LineComment{Put an item \(i\) into one of the \(j\) bins}
\For{\(j=1,\ldots,s\)}
\State \(\mathcal{B} \leftarrow B_j\) (the multiset of \(n_j\) binnings for bin \(j\))
\State \(\mathcal{B}' \leftarrow\) the set of binnings in \(\mathcal{B}\) with entry \(i\) set to 0
\For{\(b' \in \mathcal{B}'\)}
\LineComment{Increase the highest value}
\State \(m''_i \leftarrow\) the largest value of \(b_i\) among all \(b \in \mathcal{B}\) agreeing with \(b'\) away from the \(i\)th entry
\State \(b_0 \leftarrow\) \(b'\) with the \(i\)th entry set to \(m''_i\)
\State \(b \leftarrow\) \(b'\) with the \(i\)th entry set to \(m''_i+1\)
\State \(\mathcal{R} \leftarrow \mathcal{R} \cup\) all total valid binnings extending \(B\) with \(b_0\) replaced by \(b\) in \(B_j\)
\LineComment{Increase the next one down}
\State \(b_{-1} \leftarrow\) \(b'\) with the \(i\)th entry set to \(m''_i-1\)
\If{\(b_{-1} \in \mathcal{B}\)}
\State \(\mathcal{R} \leftarrow \mathcal{R} \cup\) all total valid binnings extending \(B\) with \(b_{-1}\) replaced by \(b_0\) in \(B_j\)
\EndIf
\EndFor
\EndFor
\State \Return \(\mathcal{R}\)
\end{algorithmic}
\end{algorithm}
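For small inputs, the pruned search above can be checked against a naive enumeration; the following Python sketch (ours, exponential) makes the definitions concrete:

```python
from itertools import combinations_with_replacement, product

def total_valid_binnings(m, n, valid):
    """Naive enumeration of total valid binnings: m = (m_1,...,m_r) are
    the item counts, n = (n_1,...,n_s) the bin counts, and
    valid(binning, j) the validity predicate for bin j.  The algorithm
    above prunes this search via semi-validity; this exponential
    version is for illustration only."""
    r = len(m)
    # all candidate binnings (m'_1,...,m'_r) with 0 <= m'_i <= m_i
    cands = list(product(*(range(mi + 1) for mi in m)))
    results = []

    def rec(j, left, chosen):
        if j == len(n):
            if all(x == 0 for x in left):
                results.append(tuple(chosen))
            return
        ok = [c for c in cands if valid(c, j)]
        # a multiset of n_j valid binnings for the n_j copies of bin j
        for combo in combinations_with_replacement(ok, n[j]):
            used = tuple(sum(c[i] for c in combo) for i in range(r))
            if all(used[i] <= left[i] for i in range(r)):
                rec(j + 1, tuple(left[i] - used[i] for i in range(r)),
                    chosen + [combo])

    rec(0, tuple(m), [])
    return results
```

For instance, with two copies of one item and two copies of one bin, each bin holding at most one item, the only total valid binning puts one item in each bin.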
\section{Implementation and results}
\label{gg-sec-implementation}
These algorithms have been implemented \cite{galoiscode} for the Magma computer algebra system \cite{magma}. Our main \code{GaloisGroup} routine takes two arguments: a polynomial over a \(p\)-adic field, and a string describing the parameterization of the algorithm to use.
Our algorithm is by design highly modular, with each piece of the parameterization as independent as possible from the rest. This means that if one has a new algorithm for evaluating resolvents for instance, one simply needs to implement this algorithm satisfying a particular interface, and then add a line of code to the parameterization parser.
The main omission from our implementation is that the \code{SinglyWild} global model algorithm is not available in full generality, which means that for wild extensions our global model will usually use symmetric groups. Over \(\mathbb{Q}_2\) with a \(2 \times \ldots \times 2\) ramification filtration this is not a problem, but for coarser filtrations, \(S_8\) is much larger than \(C_2^3\) for example, and \(S_7\) is much larger than \(C_7\), and so our global models are far from optimal. A special case of \code{SinglyWild} has been implemented and is discussed specifically in \cref{gg-sec-impl-sw}.
All experiments reported on in this section were performed on a 2.7GHz Intel Xeon. Any timings are given in core-seconds. Tables of Galois groups have been produced from all runs in this section and are available from the implementation website \cite{galoiscode}.
Unless otherwise stated, all experiments use the ``exact'' \(p\)-adic polynomial type made available by the \texttt{ExactpAdics} package \cite{exactpadics}. This uses infinite-precision arithmetic and its routines are designed to give provably correct results (modulo coding errors) and hence our algorithm also yields provably correct results except for \cref{gg-rmk-resolvent-complex-precision}.
See \cite[Ch. II, \S13]{DPhD} for a more detailed account.
\subsection{Some particular parameterizations}
Six parameterizations we will consider are named A0, B0, A1, B1, A2 and B2. These parameterizations all try three algorithms in turn: \code{Tame} (\cref{gg-sec-tame}), \code{Singly\-Ra\-mi\-fied} (\cref{gg-sec-singlyramified}) and \code{ResolventMethod} (\cref{gg-sec-arm}). The resolvent method evaluates resolvents using a global model which first factorizes the polynomial, then finds the ramification tower of the field defined by each factor, then finds a global model for each segment of the tower.
For the A parameterizations, this global model is \code{Symmetric}. For the B parameterizations, we use the \code{RootOfUnity}, \code{RootOfUniformizer} or \code{Symmetric} global model, depending on whether the segment is unramified, tame or wild.
The number part of the parameterization name controls the group theory part of the algorithm. For A0 and B0, we enumerate \code{All} possible Galois groups, then eliminate candidates based on the \code{FactorDegrees} statistic for resolvents of all subgroups. For A1 and B1, we do the same except using the \code{OrbitIndex} method to only generate resolvents for subgroups whose remaining orbit index \(r\) satisfies \(v_p(r) \le 1\). For A2 and B2, instead of enumerating all possible Galois groups, we work down the graph of possibilities using \code{Maximal2}.
We shall also consider the parameterization 00, which is the same as A0, but which uses a \code{Symmetric} global model for each factor and the \code{RootsMaximal} group theory algorithm \cite[Ch. II, \S5.4]{DPhD} which mimics Stauduhar's original absolute resolvent method \cite{Stauduhar73}.
\subsection{Up to degree 12 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}, \texorpdfstring{\(\mathbb{Q}_3\)}{Q3} and \texorpdfstring{\(\mathbb{Q}_5\)}{Q5}}
\label{gg-sec-d12}
The local fields database (LFDB) \cite{LFDB} tabulates data about all extensions of degree up to 12 over \(\mathbb{Q}_p\) for all \(p\) including a defining polynomial, residue and ramification degrees, Galois and inertia groups, and the Galois slope content which summarizes the ramification polygon of the Galois closure.
We have run our algorithm with the eight parameterizations \texttt{Naive}, 00 and A0 to B2 on all defining polynomials from the LFDB of degrees 2 to 12 over \(\mathbb{Q}_2\), \(\mathbb{Q}_3\) and \(\mathbb{Q}_5\). We also ran with the parameterization A0 but using Magma's default inexact polynomial representation, which does not guarantee correctness, which we denote A0*. In all cases, the Galois group agrees with that reported in the LFDB.
The mean run times of these are given in Tables~\ref{gg-tbl-d12-q2}, \ref{gg-tbl-d12-q3} and \ref{gg-tbl-d12-q5}. In each case, the times within 10\% of the smallest are shown in bold. Counts marked with an asterisk (*) represent a random sample of all possibilities. Times marked with a numeric superscript mean that the algorithm failed to find the Galois group for this many polynomials; these are not included in the mean. A dash (---) means the corresponding algorithm was not tried. A cross (\texttimes) means the corresponding runs were prohibitively slow. Times preceded by \(\approx\) are the mean of a small number of runs, the rest being prohibitively slow. This notation is reused in subsequent tables.
\begin{table}
\centering
\input{gg-tbl-d12-q2.tex}
\caption[Timings on polynomials up to degree 22 over \(\mathbb{Q}_2\)]{Mean run times for some parameterizations on polynomials defining fields of given degrees over \(\mathbb{Q}_2\).}
\label{gg-tbl-d12-q2}
\end{table}
\begin{table}
\centering
\begin{tabular}{rrrrrrrrrrr}
\hline
Deg & \# & \multicolumn{9}{l}{Run time (seconds)} \\
& & Naive & 00 & A0* & A0 & B0 & A1 & B1 & A2 & B2 \\
\hline
2 & 3 & \bfseries 0.04 & 0.11 & 0.07 & 0.10 & 0.11 & 0.12 & 0.12 & 0.11 & 0.11 \\
3 & 10 & 0.05 & 0.07 & \bfseries 0.04 & 0.06 & 0.06 & 0.06 & 0.06 & 0.05 & 0.06 \\
4 & 5 & 0.10 & 0.10 & \bfseries 0.05 & 0.08 & 0.08 & 0.08 & 0.12 & 0.09 & 0.09 \\
5 & 2 & \bfseries 0.08 & 0.16 & 0.10 & 0.15 & 0.16 & 0.15 & 0.16 & 0.14 & 0.16 \\
6 & 75 & 0.66 & 0.29 & \bfseries 0.13 & 0.31 & 0.33 & 0.34 & 0.32 & 0.30 & 0.32 \\
7 & 2 & 0.12 & 0.17 & \bfseries 0.10 & 0.15 & 0.18 & 0.19 & 0.15 & 0.16 & 0.17 \\
8 & 8 & 0.10 & 0.09 & \bfseries 0.06 & 0.09 & 0.08 & 0.09 & 0.08 & 0.08 & 0.08 \\
9 & 795 & \(\approx 400\) & \(\approx 100\) & --- & \bfseries 0.63 & \bfseries 0.64 & \bfseries 0.67 & \bfseries 0.66 & \bfseries 0.66 & 0.73 \\
10 & 6 & 0.14 & 0.09 & \bfseries 0.08 & 0.09 & 0.09 & 0.09 & 0.10 & 0.09 & 0.10 \\
11 & 2 & 0.15 & 0.16 & \bfseries 0.11 & 0.17 & 0.17 & 0.18 & 0.19 & 0.21 & 0.20 \\
12 & 785 & \texttimes & \texttimes & --- & \bfseries 1.52 & \bfseries 1.57 & 1.90 & 2.24 & 2.21 & 2.54 \\
\hline
\end{tabular}
\caption[Timings on polynomials up to degree 12 over \(\mathbb{Q}_3\)]{Mean run times for some parameterizations on polynomials defining fields of given degrees over \(\mathbb{Q}_3\). There were 11 polynomials of degree 12 for which A0, A1 and A2 did not succeed due to a bug in Magma; these are not included in timings.}
\label{gg-tbl-d12-q3}
\end{table}
\begin{table}
\centering
\begin{tabular}{rrrrrrrrrrr}
\hline
Deg & \# & \multicolumn{9}{l}{Run time (seconds)} \\
& & Naive & 00 & A0* & A0 & B0 & A1 & B1 & A2 & B2 \\
\hline
2 & 3 & \bfseries 0.04 & 0.11 & 0.07 & 0.12 & 0.28 & 0.11 & 0.11 & 0.12 & 0.11 \\
3 & 2 & \bfseries 0.09 & 0.14 & \bfseries 0.10 & 0.15 & 0.15 & 0.15 & 0.15 & 0.20 & 0.16 \\
4 & 7 & \bfseries 0.03 & 0.07 & 0.04 & 0.07 & 0.07 & 0.08 & 0.07 & 0.09 & 0.08 \\
5 & 26 & 0.12 & 0.05 & \bfseries 0.02 & 0.05 & 0.06 & 0.05 & 0.06 & 0.05 & 0.06 \\
6 & 7 & 0.07 & 0.09 & \bfseries 0.05 & 0.08 & 0.08 & 0.08 & 0.09 & 0.08 & 0.08 \\
7 & 2 & 0.12 & 0.17 & \bfseries 0.10 & 0.15 & 0.16 & 0.16 & 0.16 & 0.15 & 0.21 \\
8 & 11 & 0.07 & 0.09 & \bfseries 0.05 & 0.08 & 0.07 & 0.08 & 0.08 & 0.07 & 0.09 \\
9 & 3 & 0.12 & 0.11 & \bfseries 0.09 & 0.15 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 \\
10 & 258 & \(\approx 100\) & \texttimes & --- & \bfseries 2.09 & \bfseries 1.93 & 3.00 & 2.76 & 16.02 & 11.87 \\
11 & 2 & 0.15 & 0.17 & \bfseries 0.11 & 0.18 & 0.17 & 0.17 & 0.19 & 0.18 & 0.44 \\
12 & 17 & 0.16 & \bfseries 0.08 & \bfseries 0.09 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 \\
\hline
\end{tabular}
\caption[Timings on polynomials up to degree 12 over \(\mathbb{Q}_5\)]{Mean run times for some parameterizations on polynomials defining fields of given degrees over \(\mathbb{Q}_5\).}
\label{gg-tbl-d12-q5}
\end{table}
Over \(\mathbb{Q}_2\), we have also run the algorithm on a selection of reducible polynomials whose irreducible factors have a given set of degrees. For example, we consider all pairs \(F_1,F_2 \in K[x]\) of quadratic polynomials defining quadratic fields over \(\mathbb{Q}_2\) and run the algorithm on \(F(x) = F_1(x) F_2(x+1)\). Note that the offset \(x+1\) ensures that \(F(x)\) is squarefree in case \(F_1=F_2\). Mean run times are given in \cref{gg-tbl-d12-q2}, where for example degree ``\(2+2=4\)'' means products of quadratics.
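The squarefreeness claim is easy to sanity-check with exact rational arithmetic, since \(F\) is squarefree iff \(\gcd(F, F')\) is a nonzero constant. The following Python sketch (ours, not the Magma implementation) verifies this for \(F_1 = F_2 = x^2 + x + 1\), with polynomials as coefficient lists in increasing degree:

```python
from fractions import Fraction
from math import comb

def pmul(a, b):
    """Product of polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pderiv(a):
    """Formal derivative."""
    return [i * c for i, c in enumerate(a)][1:]

def pshift(a):
    """Coefficients of a(x+1), via the binomial theorem."""
    out = [Fraction(0)] * len(a)
    for i, ai in enumerate(a):
        for j in range(i + 1):
            out[j] += ai * comb(i, j)
    return out

def pmod(a, b):
    """Remainder of a modulo b over the rationals."""
    a = a[:]
    while a and a[-1] == 0:
        a.pop()
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= q * b[i]
        a.pop()
        while a and a[-1] == 0:
            a.pop()
    return a

def pgcd(a, b):
    """Euclidean gcd; constant (length <= 1) iff a is squarefree
    when called as pgcd(a, pderiv(a))."""
    while b:
        a, b = b, pmod(a, b)
    return a

# F1 = F2 = x^2 + x + 1: F1(x)*F2(x) would be a square, but the
# offset makes F = F1(x)*F2(x+1) squarefree.
F1 = [Fraction(1), Fraction(1), Fraction(1)]
F = pmul(F1, pshift(F1))
assert len(pgcd(F, pderiv(F))) <= 1
```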
Observe that A0* is generally faster than A0, suggesting there is some overhead due to using exact arithmetic. However, this overhead is around a factor of two in the worst case and usually less, so not too significant.
There is little variation in timings between the six parameterizations A0 to B2. This suggests that for small degrees, there is little overhead in writing down all possible Galois groups \(G \leq W\), or in enumerating all subgroups of \(W\) of a given index.
Unsurprisingly, the run time increases in both the degree \(d\) and in \(v_p(d)\), the latter being the number of wild ramification breaks possible.
Not displayed in the table is that the variance in these run times is low. In particular, the maximum run time is always within a factor of 3 of the mean, and is usually less.
For small degrees, the simple parameterization 00 is comparable to the other parameterizations. However it quickly becomes infeasible as the degree increases, taking for example about 50 seconds at degree 8 over \(\mathbb{Q}_2\).
The same is true for the \texttt{Naive} algorithm. Indeed, for small degrees this is often the fastest but becomes infeasibly slow above degree about 10.
\subsection{Degree 14 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}}
\label{gg-sec-d14}
There are two types of wildly ramified extensions \(L/K=\mathbb{Q}_2\) of degree 14: those with \(e(L/K)=2\) and those with \(e(L/K)=14\). In the former case, \(L\) is a ramified quadratic extension of the unique unramified extension \(U/K\) of degree 7. In the latter case, \(L\) is a ramified quadratic extension of the unique (tamely) ramified extension \(T=K(\sqrt[7]2)/K\) of degree 7. We refer to these as Type 14u and Type 14t respectively.
Using the \code{AllExtensions} intrinsic in Magma we have generated all such extensions up to \(K\)-conjugacy, and have run our algorithm on all of these. The timings are given in \cref{gg-tbl-d12-q2} separately for the two types.
As a point of comparison, \cite{AwtreyD14} uses a degree 364 resolvent relative to \(W=S_{14}\) and a few other invariants to compute the same Galois groups, taking around 20 hours per polynomial whereas our algorithm takes around 2 seconds. Our results are consistent with \cite[Table 3]{AwtreyD14}.
We see that for Type 14t, using a more sophisticated global model \code{Root\-Of\-Uni\-formizer} for \(T/K\) in the B parameterizations instead of \texttt{Symmetric} in the A parameterizations makes a marked improvement to the run-time. Even when we do use \texttt{Symmetric}, we get an improvement for using more sophisticated group theory, comparing A0, A1 and A2.
In contrast, for Type 14u using a more sophisticated global model \texttt{RootOfUnity} actually made the run time worse. In this case, with parameterization B0, most of the run time is spent computing complex approximations to resolvents, despite generally using fewer resolvents and using a lower complex precision. This suggests that the implementation of \texttt{RootOfUnity} needs to be optimized.
\subsection{Degree 16 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}}
\label{gg-sec-d16}
Recall (e.g. \cite{PS} or \cite{extensions}) that to an extension of \(p\)-adic fields, we can attach a ramification polygon, which is an invariant of the extension. By attaching further residual information such as the residual polynomials of each face of the ramification polygon, we can form a finer invariant.
Using the \texttt{pAdicExtensions} package \cite{extensionscode}, which implements these invariants, we generated all possible equivalence classes of the finest such invariant, called the \define{fine ramification polygon with residues and uniformizer residue} in \cite{extensions}, for totally ramified extensions of degree 16 of \(\mathbb{Q}_2\).
For each class, we selected at random one Eisenstein polynomial generating a field with this invariant, giving us a sample of 447 polynomials.
We divide these polynomials into three types. Writing \(L=L_t/\ldots/L_0=K=\mathbb{Q}_2\) for the ramification filtration of the field they generate, then Type 16a polynomials have \((L_i:L_{i-1})=2\) for all \(i\) (and hence \(t=4\)), Type 16b polynomials are those remaining with \((L_i:L_{i-1})\mid4\) for all \(i\), and Type 16c are the rest (so \((L_i:L_{i-1})=8\) or \(16\) for some \(i\)). There are 64, 253 and 130 polynomials of each type respectively.
In total, there are 4,008,960 degree 16 extensions of \(\mathbb{Q}_2\) inside \(\bar \mathbb{Q}_2\) of Type 16a, 1,857,120 of Type 16b and 155,024 of Type 16c \cite{Sinclair}.
Per an earlier remark, we do not have \texttt{SinglyWild} global models fully implemented and so use the less efficient \texttt{Symmetric} instead. We expect run times for Types 16b and 16c to be worse than Type 16a, since the former will work relative to groups like \(W=S_4 \wr S_4\) or \(S_2 \wr S_8\) which are larger than \(W=S_2 \wr S_2 \wr S_2 \wr S_2\) of the latter. We expect that with \texttt{SinglyWild} fully implemented, the overgroup for Types 16b or 16c will be smaller not larger than for Type 16a, and that Types 16b and 16c will therefore actually become the easier classes. See \cref{gg-sec-impl-sw} for some evidence supporting this claim.
Our algorithm has been run on these polynomials with the 6 parameterizations A0 to B2. \Cref{gg-tbl-d16-q2} summarizes the results, with the polynomials grouped by type. Mean timings are also given in \cref{gg-tbl-d12-q2} for comparison. Some of these runs failed to find the Galois group, because the parameterization ran out of resolvents to try; the number of failures is given in the table. The timings only include successful runs. To give an idea of the variance in run time, we report the median and maximum time as well as the mean.
\begin{table}
\centering
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{lrrrrrr}
\hline
& A0 & B0 & A1 & B1 & A2 & B2 \\
\hline
\multicolumn{7}{l}{Type 16a (64 polynomials)} \\
Number failed & 0 & 0 & 4 & 4 & 4 & 4 \\
Mean run time & 53.65 & 54.54 & 17.47 & 18.21 & 7.25 & 7.59 \\
Median run time & 27.87 & 28.64 & 16.69 & 17.00 & 6.06 & 6.34 \\
Maximum run time & 311.86 & 252.39 & 31.57 & 56.59 & 22.99 & 21.76 \\
\hline
\multicolumn{7}{l}{Type 16b (253 polynomials)} \\
Number failed & 0 & 0 & 7 & 7 & 7 & 7 \\
Mean run time & 304.97 & 288.25 & 42.37 & 34.90 & 25.47 & 29.40 \\
Median run time & 18.20 & 14.77 & 12.25 & 10.38 & 8.02 & 7.65 \\
Maximum run time & 4016.19 & 3721.84 & 432.85 & 1182.44 & 1063.16 & 1616.56 \\
\hline
\multicolumn{7}{l}{Type 16c (130 polynomials)} \\
Number failed & --- & --- & 23 & 23 & 4 & 23 \\
Mean run time & --- & --- & 133.29 & 195.59 & 115.38 & 150.83 \\
Median run time & --- & --- & 10.50 & 1.58 & 1.43 & 1.36 \\
Maximum run time & --- & --- & 2502.06 & 7949.19 & 12432.12 & 4368.25 \\
\hline
\end{tabular}
\end{adjustbox}
\caption[Timings on polynomials of degree 16 over \(\mathbb{Q}_2\)]{Run times in seconds for a selection of parameterizations on a sample of polynomials defining fields of degree 16 over \(\mathbb{Q}_2\) divided into three types.}
\label{gg-tbl-d16-q2}
\end{table}
The run times are significantly higher at degree 16 than at lower degrees, and there are now pronounced differences between the parameterizations, with A0 and B0 being the slowest and A2 and B2 the fastest.
As predicted, Type 16a polynomials are the fastest. For this type, the median is usually close to the mean and the maximum is not much larger, indicating this is a low-variance regime. Elsewhere, the median is smaller and the maximum is a lot higher, so the variance is greater.
\subsection{Degree 18 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}}
\label{gg-sec-d18}
Using the \texttt{pAdicExtensions} package \cite{extensionscode}, we have generated all ramification polygons of totally ramified extensions \(L/\mathbb{Q}_2\) of degree 18. These have vertices of the form
\[(1,J), (2,0), (18,0)\]
where the discriminant valuation is \(18+J-1\). Note that these extensions are of the form \(L/T/\mathbb{Q}_2\) where \(T/\mathbb{Q}_2\) is the unique tame extension of degree 9 and \(L/T\) is quadratic.
For each polygon, we have generated a set of polynomials generating all extensions with this ramification polygon, and run our algorithm on them all with parameterizations A0 to B2. There are 2046 polynomials in total.
Mean timings are given in \cref{gg-tbl-d12-q2}. Note that the B parameterizations are far quicker than A as a result of using the \code{RootOfUniformizer} global model instead of \code{Symmetric} for \(T/\mathbb{Q}_2\).
In \cref{gg-tbl-d18-q2} we give the number of polynomials for each ramification polygon (parameterized by \(J\)) and the count of the T-numbers of their Galois groups.
\begin{table}
\centering
\begin{tabular}{rrp{17em}}
\hline
\(J\) & \# & Groups \\
\hline
1 & 2 & \(433\), \(434\) \\
3 & 4 & \(98\), \(101\), \(588\), \(592\) \\
5 & 8 & \(433^2\), \(434^2\), \(588^2\), \(592^2\) \\
7 & 16 & \(433^4\), \(434^4\), \(588^4\), \(592^4\) \\
9 & 32 & \(45^2\), \(147^2\), \(512^{14}\), \(656^{14}\) \\
11 & 64 & \(433^8\), \(434^8\), \(512^{16}\), \(588^8\), \(592^8\), \(656^{16}\) \\
13 & 128 & \(433^{16}\), \(434^{16}\), \(512^{32}\), \(588^{16}\), \(592^{16}\), \(656^{32}\) \\
15 & 256 & \(98^2\), \(101^2\), \(147^4\), \(588^{62}\), \(592^{62}\), \(656^{124}\) \\
17 & 512 & \(433^{32}\), \(434^{32}\), \(512^{64}\), \(588^{96}\), \(592^{96}\), \(656^{192}\) \\
18 & 1024 & \(45^{4}\), \(147^{12}\), \(512^{252}\), \(656^{756}\) \\
\hline
Total & 2046 & \(45^{6}\), \(98^{3}\), \(101^{3}\), \(147^{18}\), \(433^{63}\), \(434^{63}\), \(512^{378}\), \(588^{189}\), \(592^{189}\), \(656^{1134}\) \\
\hline
\end{tabular}
\caption[Totally ramified Galois groups of degree 18 over \(\mathbb{Q}_2\)]{Totally ramified Galois groups of degree 18 over \(\mathbb{Q}_2\).}
\label{gg-tbl-d18-q2}
\end{table}
Since \(L/T\) is Galois and \(T/\mathbb{Q}_2\) has only the trivial automorphism, \(\operatorname{Aut}(L/\mathbb{Q}_2) \cong C_2\), and so each \(L/\mathbb{Q}_2\) has 9 conjugates inside \(\bar\mathbb{Q}_2\). The number of polynomials generated times 9 is equal to the number of extensions of degree 18 in \(\bar\mathbb{Q}_2\), from which we deduce we have exactly one polynomial per isomorphism class.
\subsection{Degree 20 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}}
\label{gg-sec-d20}
As in \cref{gg-sec-d18}, we have generated all ramification polygons of totally ramified extensions \(L/\mathbb{Q}_2\) of degree 20. For each we have produced a set of generating polynomials, 511,318 in total.
We have computed the Galois groups of all of these polynomials \(F(x)\), which required several parameterizations of our algorithm to cover all cases. This also occasionally required computing \(\operatorname{Gal}(F/K)\) where \(K = \mathbb{Q}_2(\sqrt[3]2,\zeta_3)\), for which we can compute a more efficient global model than \(F/\mathbb{Q}_2\), at the expense of some more group theory computation.
By \cite[Theorem 1]{Monge11} there are 259,968 isomorphism classes of such extensions \(L/\mathbb{Q}_2\) so we have over-counted by a factor of about 2.
Average timings are given in \cref{gg-tbl-d12-q2} and counts of Galois groups are given in \cref{gg-tbl-d20-q2}.
\begin{table}
\centering
\begin{tabular}{rp{24em}}
\hline
\# & Groups \\
\hline
511,318 & \(16^{4}\), \(18^{8}\), \(19^{6}\), \(20^{8}\), \(42^{48}\), \(61^{3}\), \(68\), \(77^{15}\), \(80^{15}\), \(129^{78}\), \(131^{90}\), \(132^{78}\), \(137^{90}\), \(173^{30}\), \(186^{120}\), \(189^{120}\), \(194^{180}\), \(195^{114}\), \(196^{180}\), \(261^{720}\), \(282^{90}\), \(305^{120}\), \(306^{105}\), \(309^{240}\), \(312^{240}\), \(317^{140}\), \(330^{35}\), \(332^{240}\), \(338^{140}\), \(351^{120}\), \(406^{630}\), \(411^{1440}\), \(416^{1440}\), \(417^{1440}\), \(419^{1440}\), \(420^{1440}\), \(422^{630}\), \(434^{720}\), \(435^{1440}\), \(437^{1440}\), \(441^{1440}\), \(443^{720}\), \(444^{562}\), \(447^{720}\), \(448^{720}\), \(449^{562}\), \(471^{85}\), \(472^{225}\), \(510^{1920}\), \(511^{2880}\), \(512^{1920}\), \(514^{1920}\), \(515^{2880}\), \(516^{2880}\), \(517^{1920}\), \(518^{2880}\), \(519^{1920}\), \(520^{1920}\), \(523^{1920}\), \(524^{1920}\), \(526^{1396}\), \(528^{840}\), \(529^{1920}\), \(530^{1920}\), \(632^{11520}\), \(633^{11520}\), \(634^{11520}\), \(678^{255}\), \(683^{675}\), \(847^{5760}\), \(850^{5760}\), \(851^{5760}\), \(854^{5760}\), \(906^{42240}\), \(907^{42240}\), \(908^{34560}\), \(909^{42240}\), \(910^{42240}\), \(911^{34560}\), \(946^{161280}\) \\
\hline
\end{tabular}
\caption[Totally ramified Galois groups of degree 20 over \(\mathbb{Q}_2\)]{Totally ramified Galois groups of degree 20 over \(\mathbb{Q}_2\).}
\label{gg-tbl-d20-q2}
\end{table}
\subsection{Degree 22 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}}
\label{gg-sec-d22}
As in \cref{gg-sec-d18}, we have generated all ramification polygons of totally ramified extensions \(L/\mathbb{Q}_2\) of degree 22; these have vertices of the form
\[(1,J),(2,0),(22,0),\]
and for each we have produced a set of generating polynomials. Again, we have precisely one polynomial per isomorphism class, 8190 in total.
Timings with parameterizations B0 to B2 are given in \cref{gg-tbl-d12-q2} and counts of Galois groups are given in \cref{gg-tbl-d22-q2}.
\begin{table}
\centering
\begin{tabular}{rrl}
\hline
\(J\) & \# & Groups \\
\hline
1 & 2 & \(34\), \(35\) \\
3 & 4 & \(34^{2}\), \(35^{2}\) \\
5 & 8 & \(34^{4}\), \(35^{4}\) \\
7 & 16 & \(34^{8}\), \(35^{8}\) \\
9 & 32 & \(34^{16}\), \(35^{16}\) \\
11 & 64 & \(6^{2}\), \(37^{62}\) \\
13 & 128 & \(34^{32}\), \(35^{32}\), \(37^{64}\) \\
15 & 256 & \(34^{64}\), \(35^{64}\), \(37^{128}\) \\
17 & 512 & \(34^{128}\), \(35^{128}\), \(37^{256}\) \\
19 & 1024 & \(34^{256}\), \(35^{256}\), \(37^{512}\) \\
21 & 2048 & \(34^{512}\), \(35^{512}\), \(37^{1024}\) \\
22 & 4096 & \(6^{4}\), \(37^{4092}\) \\
\hline
Total & 8190 & \(6^{6}\), \(34^{1023}\), \(35^{1023}\), \(37^{6138}\) \\
\hline
\end{tabular}
\caption[Totally ramified Galois groups of degree 22 over \(\mathbb{Q}_2\)]{Totally ramified Galois groups of degree 22 over \(\mathbb{Q}_2\).}
\label{gg-tbl-d22-q2}
\end{table}
\subsection{Degree 32 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}}
\label{gg-sec-d32}
Our algorithm can compute some non-trivial Galois groups in degree 32. For example, consider \(F(x) = x^{16} + 32 x + 2\), which is Eisenstein with Galois group 16T1638 of index \(8=2^3\) in \(C_2^{\wr 4}\). Using A2, we find the Galois group of \(F(x^2)\) is 32T2583443 of index \(2^{10}\) in \(C_2^{\wr 5}\). This took about 125 seconds, which breaks down as follows.
\begin{center}
\begin{tabular}{lrr}
\hline
& Run time (seconds) & Share of run time \\
\hline
Start resolvent algorithm & 23.28 & 18.6\% \\
Choose subgroup & 91.44 & 73.0\% \\
Compute resolvent & 1.39 & 1.1\% \\
Process resolvent & 6.84 & 5.5\% \\
Other & 2.37 & 1.9\% \\
\hline
Total & 125.32 & \\
\hline
\end{tabular}
\end{center}
Here, ``start resolvent algorithm'' includes initially factorizing the polynomial, finding the extensions defined by the factors, finding their ramification filtrations, and computing a corresponding global model. ``Choose subgroup'' means time spent by the subgroup choice algorithm choosing a subgroup \(U \leq W\) from which to form a resolvent. ``Compute resolvent'' is the time spent computing a resolvent \(R(x)\) given an invariant for the subgroup \(U\). ``Process resolvent'' is the time spent by the group theory algorithm deducing information about the Galois group from a resolvent, and so in particular includes finding the degrees of the factors of the resolvent and computing maximal preimages. ``Other'' is everything else, including initializing the group theory algorithm and computing invariants.
This used 104 resolvents in total: 82 of degree 2, 9 of degree 4, 7 of degree 8, 2 of degree 16 and 4 of degree 32. The maximum complex precision used was 4056 decimal digits.
The run time is dominated by time spent choosing subgroups \(U \leq W\), suggesting that this should be the focus for future improvement. The next most dominant part is time spent starting the resolvent algorithm, but this part is essentially independent of the Galois group. Very little time is spent actually computing resolvents, which is perhaps surprising given that this is the part that uses the complex embeddings of global models.
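As a routine check on the indices quoted for this example (assuming, as is standard, that \(C_2^{\wr n}\) denotes the \(n\)-fold iterated wreath product, i.e. the Sylow 2-subgroup of \(S_{2^n}\), of order \(2^{2^n-1}\)), the stated indices pin down the group orders:

```latex
\[
  |C_2^{\wr 4}| = 2^{15}, \qquad |16\mathrm{T}1638| = 2^{15}/2^{3} = 2^{12},
\]
\[
  |C_2^{\wr 5}| = 2^{31}, \qquad |32\mathrm{T}2583443| = 2^{31}/2^{10} = 2^{21}.
\]
```

In particular the group being computed has order \(2^{21}\), well beyond what a naive enumeration of transitive groups of degree 32 could handle.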
\subsection{A special case of \texttt{SinglyWild}}
\label{gg-sec-impl-sw}
We have implemented \texttt{SinglyWild} in the special case \(p=2\) for totally wildly ramified extensions \(L/K\) which are Galois. Hence \(\operatorname{Gal}(L/K) \cong C_2^k\) where \((L:K)=p^k\).
We now define three more parameterizations C0, C1 and C2 which are the same as B0, B1 and B2 except that the \texttt{Symmetric} global model on the wild part is replaced by \texttt{SinglyWild}.
It is well-known (e.g. \cite[Ch. IV, \S2, Prop. 7]{SerLF}) that for such an extension \(L/K\) there is an injective group homomorphism \(\operatorname{Gal}(L/K) \to \mathbb{F}_K^+\), and hence \(\operatorname{Gal}(L/K)\) is isomorphic to an \(\mathbb{F}_p\)-subspace of \(\mathbb{F}_K\). In particular, \((\mathbb{F}_K : \mathbb{F}_p) \ge k\) and so \(K/\mathbb{Q}_p\) has residue degree at least \(k\).
Using the \texttt{pAdicExtensions} package \cite{extensionscode}, we have generated defining polynomials which between them generate all extensions of the form \(L/U/\mathbb{Q}_2\) where \(U/\mathbb{Q}_2\) is unramified of some degree and \(L/U\) is singly wildly ramified and Galois of some degree.
For example when \(k=2\) and \((U:\mathbb{Q}_2)=4\), then the global model in C0 gives the overgroup \(W = C_2^2 \wr C_4\) of order \(2^{10}\), which is somewhat smaller than the overgroup \(W = S_4 \wr C_4\) of order \(2^{14} \cdot 3^4\) from B0.
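The order comparison is immediate from the formula \(|A \wr C_4| = |A|^4 \cdot |C_4|\) for the imprimitive wreath product:

```latex
\[
  |C_2^2 \wr C_4| = (2^2)^4 \cdot 4 = 2^{10},
  \qquad
  |S_4 \wr C_4| = 24^4 \cdot 4 = 2^{14} \cdot 3^4,
\]
```

so the overgroup from C0 is smaller by a factor of \(2^4 \cdot 3^4 = 1296\).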
We have run our algorithm with the 9 parameterizations A0 to C2 on these polynomials. Mean timings are given in \cref{gg-tbl-sw-q2}.
\begin{table}
\newcommand{\makebox[0pt][l]}{\makebox[0pt][l]}
\newcommand{\makebox[0pt][r]}{\makebox[0pt][r]}
\newcommand{\z}[1]{\makebox[0pt][l]{\textsuperscript{#1}}}
\centering
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{rrrrrrrrrrrr}
\hline
Deg & \(k\) & \# & \multicolumn{9}{l}{Run time (seconds)} \\
& & & A0 & B0 & C0 & A1 & B1 & C1 & A2 & B2 & C2 \\
\hline
8 & 2 & 4 & \bfseries 0.53 & \bfseries 0.56 & 0.61 & \bfseries 0.57 & \bfseries 0.57 & 0.66 & 0.62 & \bfseries 0.58 & 0.68 \\
12 & 2 & 28 & 2.36 & 2.34 & 0.71 & 3.41 & 3.73 & \bfseries 0.64 & 4.57 & 4.79 & \bfseries 0.66 \\
16 & 2 & 140 & \(\times\) & \(\times\) & 1.23 & 80.02\z1 & 23.27\z1 & \bfseries 0.95 & \(\times\) & \(\times\) & \bfseries 0.98 \\
24 & 3 & 8 & \(\times\) & \(\times\) & \bfseries 12.55 & \(\times\) & \(\times\) & \bfseries 12.75 & \(\times\) & \(\times\) & \bfseries 12.42 \\
32 & 3 & 120 & --- & --- & 40.49 & --- & --- & 31.34 & --- & --- & \bfseries 23.68 \\
\hline
\end{tabular}
\end{adjustbox}
\caption[Timings on polynomials using \texttt{SinglyWild} over \(\mathbb{Q}_2\)]{Mean run times for a selection of parameterizations on polynomials defining fields of the form \(L/U/\mathbb{Q}_2\) where \(U/\mathbb{Q}_2\) is unramified and \(L/U\) is singly wildly ramified with Galois group \(C_2^k\). At degree 32, there were four polynomials which did not succeed due to a bug in Magma; these are not included in the mean.}
\label{gg-tbl-sw-q2}
\end{table}
Except at degree 8, the C parameterizations are by far the quickest.
\subsection{\texttt{Tame}}
\label{gg-sec-tame}
As explained in the introduction, if the irreducible factors of \(F(x)\) all generate tamely ramified extensions of \(K\), then its Galois group can be computed directly.
\section*{Acknowledgements}
This work was partially supported by a grant from GCHQ.
\bibliography{refs-bibtex}
\end{document}
\section{Introduction}
Khintchine's theorem \cite{Khi1924} is the foundational result of metric diophantine approximation. For $d \in \mathbb N$, we denote by $\mu_d$ the $d$-dimensional Lebesgue measure. For $x \in \mathbb R$, we write $\| x \| = \inf_{m \in \mathbb Z} |x-m|$. The abbreviation i.o. stands for `infinitely often'. Throughout, let $k \ge 2$ be an integer, and let $\psi: \mathbb N \to [0,1/2]$.
\begin{thm} [Variant of Khintchine, 1924]\label{thm: Khintchine} If $\psi$ is non-increasing then
\[
\mu_1(\{ {\alpha}} \def\bfalp{{\boldsymbol \alpha} \in [0,1]: \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha} \| < \psi(n) \quad \mathrm{i.o.} \} )
=
\begin{cases}
1, &\text{if } \sum_{n=1}^\infty \psi(n) = \infty \\
0, &\text{if } \sum_{n=1}^\infty \psi(n) < \infty.
\end{cases}
\]
\end{thm}
Gallagher's theorem \cite{Gal1962} is one of the standard generalisations of Khintchine's theorem, and is related to a famous conjecture of Littlewood. For $d \in \mathbb N$ and ${\alpha}} \def\bfalp{{\boldsymbol \alpha}_1, \ldots, {\alpha}} \def\bfalp{{\boldsymbol \alpha}_d \in \mathbb R$, write ${\boldsymbol{\alp}} = ({\alpha}} \def\bfalp{{\boldsymbol \alpha}_1,\ldots,{\alpha}} \def\bfalp{{\boldsymbol \alpha}_d)$.
\begin{thm}[Gallagher, 1962]\label{thm: Gallagher}
If $\psi$ is non-increasing then
\begin{align*}
&\mu_k (\{ {\boldsymbol{\alp}} \in [0,1]^k: \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_k \| < \psi(n) \quad \mathrm{i.o.}\}) \\
&= \begin{cases} 1, &\text{if } \sum_{n=1}^\infty \psi(n) (\log n)^{k-1} = \infty \\
0, &\text{if } \sum_{n=1}^\infty \psi(n) (\log n)^{k-1} < \infty.
\end{cases}
\end{align*}
\end{thm}
\begin{conj}[Littlewood, c. 1930]
If ${\alpha}} \def\bfalp{{\boldsymbol \alpha}, {\beta}} \def\bfbet{{\boldsymbol \beta} \in \mathbb R$ then
\[
\liminf_{n \to \infty} n \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha} \| \cdot \| n {\beta}} \def\bfbet{{\boldsymbol \beta} \| = 0.
\]
\end{conj}
As well as having infinitely many good rational approximations, one might be interested in the number of such approximations up to a given height. Schmidt~\cite{Sch1960} demonstrated such a result; see \cite[Theorem 4.6]{Har1998}.
\begin{thm} [Variant of Schmidt, 1960]
\label{thm: Schmidt}
For $N \in \mathbb N$ and ${\boldsymbol{\alp}} \in \mathbb R^k$,
denote by $S({\boldsymbol{\alp}},N)$ the number of $n \in \mathbb N$ such that
\[
n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N, \qquad \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_i \| < \psi(n)
\quad (1 \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec k).
\]
Assume that $\psi$ is non-increasing, assume that
\[
\Psi_k(N) := \sum_{n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N} (2 \psi(n))^k
\]
is unbounded, and let $\varepsilon > 0$. Then, for almost all ${\boldsymbol{\alp}} \in \mathbb R^k$, we have
\[
S({\boldsymbol{\alp}},N) = \Psi_k(N) + O_{k,\varepsilon}(\sqrt{\Psi_k(N)} (\log \Psi_k(N))^{2+\varepsilon})
\qquad (N \to \infty).
\]
\end{thm}
Wang and Yu \cite{WY1981} established a counting version of Gallagher's theorem. We state a variant of this below, deducing it from \cite[Theorem 4.6]{Har1998} in the appendix. For $N \in \mathbb N$ and ${\boldsymbol{\alp}}, {\boldsymbol{\gam}} \in \mathbb R^k$,
denote by $S^\times_{\boldsymbol{\gam}}({\boldsymbol{\alp}},N,\psi)$ the number of $n \in \mathbb N$
satisfying
\begin{equation}\label{eq: Littlewood product small}
n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N, \qquad \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 - {\gamma}} \def\Gam{{\Gamma}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_k - {\gamma}} \def\Gam{{\Gamma}_k \| < \psi(n).
\end{equation}
For $N \in \mathbb N$, define
\[
\Psi_k^\times(N) = \frac{2^k}{(k-1)!} \sum_{n \leqslant N} \psi(n) (- \log(2^k \psi(n)))^{k-1}
\]
and
\[
\tilde \Psi_k^\times(N) = \sum_{n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N} \psi(n) (\log n)^{k-1}.
\]
In our definition of $\Psi_k^\times(N)$, we adopt the convention that
\[
x (- \log x)^d \mid_{x = 0} \: = 0 \qquad (d \in \mathbb R),
\]
in accordance with the limit $\lim_{x \to 0^+} x (-\log x)^d = 0$.
\begin{remark}
In standard settings, we have
\[
-\log \psi(n) \asymp \log n
\qquad (n \ge 2),
\]
and correspondingly
\[
\Psi_k^\times(N) \asymp \tilde \Psi_k^\times(N).
\]
\end{remark}
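The remark above can be made concrete (a routine check, not needed in the sequel): taking \(\psi(n) = \min\{1/2, n^{-1}\}\), we have \(-\log(2^k \psi(n)) = \log n - k \log 2 \asymp \log n\) for \(n \ge 4^k\), whence

```latex
\[
  \Psi_k^\times(N)
  \asymp \sum_{n \leqslant N} \frac{(\log n)^{k-1}}{n}
  \asymp (\log N)^{k}
  \asymp \tilde \Psi_k^\times(N).
\]
```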
\begin{thm} [Variant of Wang--Yu, 1981] \label{WangYu}
Assume that $\psi$ is non-increasing, that $\psi(n) \to 0$ as $n \to \infty$, and that $\Psi_k^\times(N)$ is unbounded. Then, uniformly for almost all ${\boldsymbol{\alp}} \in \mathbb R^k$, we have
\[
S^\times_\mathbf 0({\boldsymbol{\alp}},N,\psi) \sim \Psi_k^\times (N)
\qquad (N \to \infty).
\]
\end{thm}
\begin{remark} Without the assumption that $\psi(n) \to 0$, there is a less explicit asymptotic main term, namely $T_k(N)$ as defined in the appendix. As can be seen from the proof therein, the assumption that $\psi(n) \to 0$ is necessary for Theorem \ref{WangYu} as stated.
\end{remark}
In this note, we address natural inhomogeneous and fibre refinements of this problem, as popularised by Beresnevich--Haynes--Velani \cite{BHV2020}. Our findings are enumerative versions of some of our previous results \cite{Cho2018, CT2019, CT2021}.
\begin{thm} \label{MainThm} Let ${\boldsymbol{\gam}} \in \mathbb R^k$ with ${\gamma}} \def\Gam{{\Gamma}_k = 0$, and let ${\kappa} > 0$. Assume that $\psi$ is non-increasing, that $\psi(n) < n^{-{\kappa}}$ for all $n$, and that $\tilde \Psi_k^\times(N)$ is unbounded. Then, for almost all ${\boldsymbol{\alp}} \in \mathbb R^k$, we have
\[
S^\times_{\boldsymbol{\gam}}({\boldsymbol{\alp}},N,\psi) \gg \tilde \Psi_k^\times (N) \qquad (N \to \infty).
\]
The implied constant only depends on $k$.
\end{thm}
The \emph{multiplicative exponent} of ${\boldsymbol{\alp}} \in \mathbb R^d$ is
\[
{\omega}} \def\Ome{{\Omega}} \def\bfome{{\boldsymbol \ome}^\times({\boldsymbol{\alp}}) =
\sup \{ w: \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_d \| < n^{-w} \quad \mathrm{i.o.} \}.
\]
Specialising $k = d$ and $\psi(n)=
(n (\log n)^{d+1})^{-1}$
in Gallagher's Theorem \ref{thm: Gallagher}, we see that ${\omega}} \def\Ome{{\Omega}} \def\bfome{{\boldsymbol \ome}^\times({\boldsymbol{\alp}})=1$
for almost all ${\boldsymbol{\alp}}\in \mathbb R^d$. Thus, Theorem \ref{MainThm} is implied by the following fibre statement.
\begin{thm} \label{FibreThm} Let ${\kappa} > 0$. Assume that $\psi$ is non-increasing, that $\psi(n) < n^{-{\kappa}}$ for all $n$, and that $\tilde \Psi_k^\times(N)$ is unbounded. Let ${\gamma}} \def\Gam{{\Gamma}_1,\ldots,{\gamma}} \def\Gam{{\Gamma}_{k-1} \in \mathbb R$, and suppose $({\alpha}} \def\bfalp{{\boldsymbol \alpha}_1,\ldots,{\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1})$ has multiplicative exponent $w < \frac{k-1}{k-2}$, where $\frac{k-1}{k-2} \big |_{k=2} = \infty$. Then, for almost all ${\alpha}} \def\bfalp{{\boldsymbol \alpha}_k$, we have
\[
S^\times_{({\gamma}} \def\Gam{{\Gamma}_1,\ldots,{\gamma}} \def\Gam{{\Gamma}_{k-1},0)}(({\alpha}} \def\bfalp{{\boldsymbol \alpha}_1,\ldots,{\alpha}} \def\bfalp{{\boldsymbol \alpha}_k),N,\psi) \gg \tilde \Psi_k^\times (N) \qquad (N \to \infty).
\]
The implied constant only depends on $k, w,$ and ${\kappa}$.
\end{thm}
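For completeness, here is the convergence computation behind the almost-everywhere statement \(\omega^\times({\boldsymbol{\alp}}) = 1\) quoted above: with \(k = d\) and \(\psi(n) = (n(\log n)^{d+1})^{-1}\),

```latex
\[
  \sum_{n \ge 2} \psi(n) (\log n)^{d-1}
  = \sum_{n \ge 2} \frac{1}{n (\log n)^{2}} < \infty,
\]
```

so the convergence case of Theorem \ref{thm: Gallagher} gives \(\| n \alpha_1 \| \cdots \| n \alpha_d \| \ge \psi(n)\) for all large \(n\), for almost all \({\boldsymbol{\alp}}\); since \(n^{-w} < \psi(n)\) eventually for any \(w > 1\), this forces \(\omega^\times({\boldsymbol{\alp}}) \leqslant 1\), while \(\omega^\times({\boldsymbol{\alp}}) \ge 1\) follows from Dirichlet's theorem applied to \(\alpha_1\) alone.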
A natural strategy to prove
Theorem \ref{FibreThm} is
to isolate the metric parameter ${\alpha}} \def\bfalp{{\boldsymbol \alpha}_k$
to one side of the inequality
\eqref{eq: Littlewood product small}. Indeed, defining
\begin{equation} \label{PhiDef}
\Phi(n)= \frac{\psi(n)}{\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 - {\gamma}} \def\Gam{{\Gamma}_1 \| \cdots
\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1} - {\gamma}} \def\Gam{{\Gamma}_{k-1} \|},
\end{equation}
the quantity
\[
S^\times_{({\gamma}} \def\Gam{{\Gamma}_1,\ldots,{\gamma}} \def\Gam{{\Gamma}_{k-1},0)}(({\alpha}} \def\bfalp{{\boldsymbol \alpha}_1,\ldots,{\alpha}} \def\bfalp{{\boldsymbol \alpha}_k),N,\psi)
\]
counts positive integers $n\leqslant} \def \geq {\geqslant N$ satisfying
$\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k} \| < \Phi(n)$. If $\Phi$ were monotonic, then one could try to apply
Theorem \ref{thm: Schmidt}. The basic problem with this approach is that $\Phi$
is far from being monotonic.
Khintchine's theorem is false without the monotonicity assumption, as was shown by Duffin and Schaeffer~\cite{DS1941}. They proposed a modification of it, not requiring monotonicity of the approximating function, that was open for almost 80 years and only recently settled
by Koukoulopoulos and Maynard \cite{KM2020}.
We rely heavily on a very recent quantification
by Aistleitner, Borda, and Hauke. Recall that $\psi: \mathbb N \to [0,1/2]$.
\begin{thm} [Aistleitner--Borda--Hauke, 2022+]
\label{ABH}
Let $C>0$.
For $\alpha \in \mathbb R$, let $S(\alpha, N)$ denote
the number of coprime pairs $(a,n) \in \mathbb Z \times \mathbb N$
such that
$$
n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N, \qquad
\left \vert \alpha - \: \frac{a}{n} \right \vert \leqslant} \def \geq {\geqslant \frac{\psi(n)}{n} .
$$
If
\[
\Psi(N):= \sum_{n\leqslant} \def \geq {\geqslant N} 2\frac{\varphi (n)}{n}\psi(n)
\]
is unbounded then, for almost all ${\alpha}} \def\bfalp{{\boldsymbol \alpha} \in \mathbb R$, we have
$$
S(\alpha, N) = \Psi(N) (1+O_C((\log \Psi(N))^{-C}))
$$
as $N\rightarrow \infty$.
\end{thm}
Our proof of Theorem \ref{FibreThm} also involves the theory of Bohr sets, as developed in our previous work \cite{Cho2018, CT2019}, which we use to verify the unboundedness condition in Theorem \ref{ABH}. In general $\Phi(n)$, as defined in \eqref{PhiDef}, will not lie in $[0,1/2]$, but the condition $\psi(n) < n^{-{\kappa}}$ enables us to circumvent this and ultimately to apply Theorem \ref{ABH} to an allied approximating function.
\begin{remark} We do not believe that the condition
\[
\psi(n) \in [0,1/2] \qquad (n \in \mathbb N)
\]
is necessary in Theorem \ref{ABH}, though it is currently an assumption. It is necessary for many of the other theorems stated here, owing to the use of the distance to the nearest integer function $\| \cdot \|$. If one could relax this condition, then the condition that $\psi(n) < n^{-{\kappa}}$ for all $n$ could be removed from Theorems \ref{MainThm} and \ref{FibreThm} but, instead of using $S_{\boldsymbol{\gam}}^\times({\boldsymbol{\alp}},N,\psi)$, one would need to count pairs $(n,a_k) \in \mathbb N \times \mathbb Z$ such that
\[
n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N, \qquad
\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 - {\gamma}} \def\Gam{{\Gamma}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1} - {\gamma}} \def\Gam{{\Gamma}_{k-1} \| \cdot |n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_k - a_k| < \psi(n).
\]
The latter counting function is greater than or equal to the former, so the reader should not be alarmed that our lower bound for it could far exceed $\Psi_k^\times(N)$ if $\psi$ were to be constant or decay very slowly.
\end{remark}
A natural `uniform' companion to
$S^\times_{{\boldsymbol{\gam}}}
({\boldsymbol{\alp}},N,\psi)$ replaces $\psi(n)$ by $\psi(N)$ in the definition, giving rise to the counting function
\[
S^\times_{{\boldsymbol{\gam}},\mathrm{unif}}
({\boldsymbol{\alp}},N,\psi):=
\# \{
n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N:
\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 - {\gamma}} \def\Gam{{\Gamma}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_k - {\gamma}} \def\Gam{{\Gamma}_k \| < \psi(N)
\}.
\]
When $\psi$ is not decaying too rapidly,
lattice point counting can be successfully used
to obtain asymptotic formulas for
$S^\times_{\mathbf{0},\mathrm{unif}}({\boldsymbol{\alp}},N,\psi)$.
We refer to the works of Widmer~\cite{Wid2017} and Fregoli~\cite{Fre2021}.
\subsection*{Notation.}
For complex-valued functions $f$ and $g$, we write $f \ll g$ or $f = O(g)$ if $|f| \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec C|g|$ pointwise for some constant $C$, sometimes using a subscript to record dependence on parameters, and $f \asymp g$ if $f \ll g \ll f$. We write $f \sim g$ if $f/g \to 1$, and $f = o(g)$ if $f/g \to 0$.
\subsection*{Funding and acknowledgements}
NT was supported by a Schr\"odinger Fellowship of the Austrian Science Fund (FWF): project J 4464-N.
We thank Jakub Konieczny for raising the question, as well as for feedback on an earlier version of this manuscript, and we thank Christoph Aistleitner for a helpful conversation.
\section{Counting approximations on fibres}
In this section, we prove Theorem \ref{FibreThm}.
Fix $\varepsilon > 0$ such that
\begin{equation} \label{epsrange}
10 k \sqrt{\varepsilon}
\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec \min \left \{ \frac{1}{w} \: - \: \frac{k-2}{k-1}, {\kappa} \right \} \in (0,1).
\end{equation}
We write
\[
{\boldsymbol{\alp}} = ({\alpha}} \def\bfalp{{\boldsymbol \alpha}_1,\ldots,{\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1}),
\qquad
{\boldsymbol{\gam}} = ({\gamma}} \def\Gam{{\Gamma}_1,\ldots,{\gamma}} \def\Gam{{\Gamma}_{k-1}).
\]
Define
\[
G = \{ n \in \mathbb N: \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_i - {\gamma}} \def\Gam{{\Gamma}_i \| \ge n^{-\sqrt \varepsilon} \quad (1 \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec k-1) \}
\]
and
\[
U_N({\boldsymbol{\alp}},{\boldsymbol{\gam}},\psi) = \sum_{\substack{n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N \\ n \in G}} \frac{\varphi(n) \psi(n)}{n \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 - {\gamma}} \def\Gam{{\Gamma}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1} - {\gamma}} \def\Gam{{\Gamma}_{k-1} \|}.
\]
We showed in \cite[Equation (6.3)]{CT2019} that
\[
U_N({\boldsymbol{\alp}},{\boldsymbol{\gam}},\psi) \gg_{\boldsymbol{\alp}} \: \tilde \Psi_k^\times(N),
\]
so the unboundedness assumption needed to apply Theorem \ref{ABH} to the approximating function
\[
n \mapsto \frac{\psi(n)}{ \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 - {\gamma}} \def\Gam{{\Gamma}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1} - {\gamma}} \def\Gam{{\Gamma}_{k-1} \| } 1_G(n) \in [0,1/2]
\]
is met. Thus, for almost all ${\alpha}} \def\bfalp{{\boldsymbol \alpha}_k$, we have
\begin{equation} \label{ABHapp}
S^\times_{({\gamma}} \def\Gam{{\Gamma}_1,\ldots,{\gamma}} \def\Gam{{\Gamma}_{k-1},0)}(({\alpha}} \def\bfalp{{\boldsymbol \alpha}_1,\ldots,{\alpha}} \def\bfalp{{\boldsymbol \alpha}_k),N,\psi) \gg U_N({\boldsymbol{\alp}},{\boldsymbol{\gam}},\psi).
\end{equation}
The implied constant in \cite[Equation (6.3)]{CT2019} was allowed to depend on ${\boldsymbol{\alp}}$; however, the following more uniform statement holds with essentially the same proof.
\begin{lemma} \label{uniform} Assume that $\psi$ is non-increasing.
Let ${\boldsymbol{\alp}} = (\alpha_1,\ldots,\alpha_{k-1})$ be a real vector such that $\omega^\times({\boldsymbol{\alp}}) = w < \frac{k-1}{k-2}$, and let ${\boldsymbol{\gam}} = (\gamma_1,\ldots,\gamma_{k-1}) \in \mathbb R^{k-1}$. Then there exist $c = c(k,w,{\kappa}) > 0$ and $N_0 = N_0({\boldsymbol{\alp}})$ such that
\[
U_N({\boldsymbol{\alp}},{\boldsymbol{\gam}},\psi) \ge c \sum_{n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N} \psi(n) (\log n)^{k-1} \qquad (N \ge N_0).
\]
\end{lemma}
\begin{proof}
Recall that $\varepsilon > 0$ satisfies \eqref{epsrange}.
First, we verify that the implicit constants in the
`inner structure' (\cite[Lemma 3.1]{CT2019})
and `outer structure'
(\cite[Lemma 3.2]{CT2019}) lemmas
depend only on $k$. That is, there exist positive constants $c_1 = c_1(k)$ and $c_2 = c_2(k)$ such that
if
\begin{equation} \label{delrange}
N^{-\sqrt \varepsilon} \leqslant \delta_i \leqslant 1/2 \quad (1\leqslant i \leqslant k-1),
\qquad N\geq N_0
\end{equation}
then
\begin{equation}\label{eq: explicit order of magn}
c_1 \leqslant} \def \geq {\geqslant \frac{
\# B_{\boldsymbol{\alpha}}^{\mathbf 0} (N;{\boldsymbol{\del}})}
{{\delta}} \def\Del{{\Delta}_1 \ldots {\delta}} \def\Del{{\Delta}_{k-1}N }
\leqslant} \def \geq {\geqslant c_2,
\end{equation}
where
\[
B_{\boldsymbol{\alp}}^{{\boldsymbol{\gam}}}(N;{\boldsymbol{\del}})
= \{ n \in \mathbb Z: |n| \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N,
\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_i - {\gamma}} \def\Gam{{\Gamma}_i \| \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec {\delta}} \def\Del{{\Delta}_i \: (1 \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec i \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec k-1) \}.
\]
The proofs of \cite[Lemma 3.1]{CT2019} and \cite[Lemma 3.2]{CT2019} are quite similar to one another, so we confine our discussion to the former. The only essential source of implied constants in its proof comes from the first finiteness theorem, and that implied constant only depends on $k$. The other implied constants that we introduced can easily be made absolute. For example, with ${\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda},{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}_1$ as defined in \cite[\S 3.1]{CT2019}, the upper bound
\[
\| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_1 \| \cdots \| n {\alpha}} \def\bfalp{{\boldsymbol \alpha}_{k-1} \|
\leqslant} \def \geq {\geqslant (\lambda_1/(10 \lambda))^{k-1}
{\delta}} \def\Del{{\Delta}_1 \cdots {\delta}} \def\Del{{\Delta}_{k-1} \leqslant} \def \geq {\geqslant
(\lambda_1/ \lambda)^{k-1}
\]
and, for $n\geq N_{0}$, the lower bound
\[
\| n \alpha_1 \| \cdots \| n \alpha_{k-1} \|
\geq n^{-w-\varepsilon}
\geq (N \lambda_1/(10\lambda))^{-w-\varepsilon}.
\]
We thus have \eqref{eq: explicit order of magn}.
The construction of the base point $b_0$ in \cite[Section 3.2]{CT2019}
only requires $N_0$ to be large, and does not affect the implied constants as long as $N \ge N_0$. Thus, we have
\[
c_1 \leqslant} \def \geq {\geqslant \frac{
\# B_{\boldsymbol{\alpha}}^{{\boldsymbol{\gam}}} (N;{\boldsymbol{\del}})}
{{\delta}} \def\Del{{\Delta}_1 \cdots {\delta}} \def\Del{{\Delta}_{k-1}N }
\leqslant} \def \geq {\geqslant c_2,
\]
subject to \eqref{delrange}.
With this at hand, the argument of \cite[Section 4]{CT2019} yields
$$
\sum_{n\in \hat{B}_{\boldsymbol{\alpha}}^{{\boldsymbol{\gam}}} (N;{\boldsymbol{\del}})}
\frac{\varphi(n)}{n}
\gg_{k,\varepsilon} \: {\delta}} \def\Del{{\Delta}_1 \ldots {\delta}} \def\Del{{\Delta}_{k-1} N,
$$
where $\hat{B}_{\boldsymbol{\alpha}}^{{\boldsymbol{\gam}}} (N;{\boldsymbol{\del}})
= B_{\boldsymbol{\alpha}}^{{\boldsymbol{\gam}}} (N;{\boldsymbol{\del}})
\cap [N^{\sqrt{\varepsilon}},N]$.
The implied constant comes from Davenport's lattice point counting estimate \cite[Theorem 4.2]{CT2019} and the value of
$\displaystyle \sum_{p \text{ prime}} p^{-1-\varepsilon}$,
and therefore only depends on $k,\varepsilon$.
Decomposing the range of summation into $(k-1)$-tuples of dyadic ranges for $({\delta}} \def\Del{{\Delta}_1,\ldots,{\delta}} \def\Del{{\Delta}_{k-1})$, together with partial summation, as in \cite[Sections 5 and 6]{CT2019}, then gives
\[
U_N({\boldsymbol{\alp}},{\boldsymbol{\gam}},\psi) \gg_{k,\varepsilon} \: \sum_{n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N} \psi(n) (\log n)^{k-1} \qquad (N \ge N_0).
\]
Indeed, this process involves at least
\[
c_3 (\log N)^{k-1}
\]
many $(k-1)$-tuples of dyadic ranges, where $c_3 = c_3(k,\varepsilon) > 0$.
Finally, note that $\varepsilon$ can be chosen to only depend on $k,w,{\kappa}$.
\end{proof}
Combining Lemma \ref{uniform} with \eqref{ABHapp} completes the proof of Theorem \ref{FibreThm}.
\begin{appendix}
\section{Computing a volume}
Here we deduce Theorem \ref{WangYu} from \cite[Theorem 4.6]{Har1998} and the argument of Wang and Yu \cite[Section 1]{WY1981}. For ${\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} > 0$, define
\[
\mathcal B_k({\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}) = \{ \mathbf x \in [0,1]^k: 0 \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec x_1 \cdots x_k \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} \}
\]
and
\[
\mathcal C_k(\lambda) = \{ \mathbf x \in [0,1/2]^k: 0 \leqslant x_1 \cdots x_k \leqslant \lambda \}.
\]
By symmetry and \cite[Theorem 4.6]{Har1998}, for any fixed $\varepsilon > 0$ and almost all ${\boldsymbol{\alp}} \in \mathbb R^k$, we have
\[
S_\mathbf 0^\times({\boldsymbol{\alp}}, N, \psi) = T_k(N) + O(\sqrt{T_k(N)} (\log T_k(N))^{2+\varepsilon}),
\]
where
\[
T_k(N) = 2^k \sum_{n \leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec N} \mu_k(\mathcal C_k(\psi(n))).
\]
Thus, it remains to show that
\begin{equation} \label{vol}
T_k(N) \sim \Psi_k^\times(N) \qquad (N \to \infty).
\end{equation}
\begin{lemma} For $k \in \mathbb N$ and ${\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} > 0$, we have
\[
\mu_k(\mathcal B_k({\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})) =
\begin{cases}
1, &\text{if } {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} \ge 1 \\
{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} \sum_{s=0}^{k-1} \frac{(-\log {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})^s}{s!},
&\text{if } 0 < {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} < 1.
\end{cases}
\]
\end{lemma}
\begin{proof} We induct on $k$. The base case is clear: $\mu_1(\mathcal B_1({\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})) = \min \{ {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}, 1\}$. Now let $k \ge 2$, and suppose the conclusion holds with $k-1$ in place of $k$. We may suppose that $0 < {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} < 1$. We compute that
\begin{align*}
\mu_k(\mathcal B_k({\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})) &= \int_0^1 \mu_{k-1} (\mathcal B_{k-1}({\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}/x)) {\,{\rm d}} x = {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} + \int_{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}^1 \frac {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} x \sum_{s = 0}^{k-2} \frac{(\log(x/{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}))^s} {s!} {\,{\rm d}} x \\
&= {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} + \sum_{s=0}^{k-2} \frac{{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}}{s!} \int_1^{1/{\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}} \frac{(\log y)^s}y {\,{\rm d}} y = {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} + {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} \sum_{s=0}^{k-2} \frac{ (- \log {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})^{s+1}}{(s+1)!} \\
&= {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda} \sum_{t=0}^{k-1} \frac{(-\log {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})^t}{t!}.
\end{align*}
\end{proof}
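As a quick numerical sanity check of the lemma (not part of the proof), one can compare the closed form for \(k=2\), namely \(\mu_2(\mathcal B_2(\lambda)) = \lambda(1-\log \lambda)\), against a direct Riemann-sum evaluation of \(\int_0^1 \min\{\lambda/x, 1\}\,\mathrm{d}x\); the function names below are ours, introduced for illustration only.

```python
import math

def vol_B2_exact(lam):
    """Closed form mu_2(B_2(lam)) = lam * (1 - log lam), valid for 0 < lam < 1."""
    return lam * (1 - math.log(lam))

def vol_B2_numeric(lam, steps=200_000):
    """Midpoint Riemann sum for integral_0^1 min(lam/x, 1) dx,
    i.e. the area of {(x1, x2) in [0,1]^2 : x1 * x2 <= lam}."""
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps  # midpoint of the i-th subinterval
        total += min(lam / x, 1.0)
    return total / steps

lam = 0.1
exact = vol_B2_exact(lam)      # = 0.1 * (1 + log 10) ~ 0.3303
approx = vol_B2_numeric(lam)
```

The two values agree to three decimal places, and the same comparison works for any \(0 < \lambda < 1\).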
In view of Schmidt's Theorem \ref{thm: Schmidt}, we may assume that $k \ge 2$. Now, as
\[
\mu_k(\mathcal C_k({\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda}))
= 2^{-k} \mu_k(\mathcal B_k(2^k {\lambda}} \def\Lam{{\Lambda}} \def\Lamtil{{\widetilde \Lambda})),
\]
and as $\psi(n) < 2^{-k}$ for large $n$, we have
\begin{align*}
T_k(N) &= O_{k,\psi}(1) + 2^k \sum_{n\leqslant N} \psi(n) \sum_{s=0}^{k-1} \frac{(- \log (2^k \psi(n)))^s}{s!} \\
&= \Psi_k^\times(N) + O_k(\Psi_{k-1}^\times(N)) + O_{k,\psi}(1).
\end{align*}
Since $\psi(n) \to 0$ as $n \to \infty$, we have $\Psi_{k-1}^\times(N) + 1 = o(\Psi_k^\times(N))$ and hence \eqref{vol}, completing the proof of Theorem \ref{WangYu}.
\end{appendix}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\section*{Appendix \thesection\protect\indent \parbox[t]{11.715cm} {#1}}
\addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1} }
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\def^{\dagger}{^{\dagger}}
\renewcommand\theenumi{\arabic{enumi}}
\newenvironment{romanlist}{%
\def\theenumi{\roman{enumi}}\def\theenumii{\alph{enumii}}%
\def\labelenumi{(\theenumi)}\def\labelenumii{(\theenumii)}%
\begin{enumerate}
}{%
\end{enumerate}}
\newenvironment{alphlist}{%
\def\theenumi{\alph{enumi}}\def\theenumii{\roman{enumii}}%
\def\labelenumi{(\theenumi)}\def\labelenumii{(\theenumii)}%
\begin{enumerate}%
}{%
\end{enumerate}}
\newbox\ncintdbox \newbox\ncinttbox
\setbox0=\hbox{$-$} \setbox2=\hbox{$\displaystyle\int$}
\setbox\ncintdbox=\hbox{\rlap{\hbox to
\wd2{\hskip-.125em\box2\relax \hfil}}\box0\kern.1em}
\label{sec:intro}
On general grounds, upon taking the low energy limit of any UV complete quantum theory of gravity, we expect to generate higher derivative corrections to the leading two-derivative Einstein-Hilbert action with matter:
\begin{equation}
I= \frac{1}{16\pi\, G_N^{(d)}}\
\int d^dx \sqrt{-g}~ \left( R + \mathcal{L}_{matter} \right) \,.
\label{einhilactn0}
\end{equation}
The precise form of the higher derivative corrections depends on the nature of the UV completion. While it is expected that string theory provides the desired UV complete description, we do not yet know how these corrections are organized. It would therefore be useful if, using general principles, we could constrain the space of admissible low energy theories. This strategy has previously been employed with some success in both non-gravitational \cite{Adams:2006sv} and gravitational \cite{Camanho:2014apa} settings.
The second law of thermodynamics is one such principle that we can test on low energy solutions of gravity. As is well known, black holes are thermodynamic objects \cite{Hawking:1971tu, Bardeen:1973gs, Bekenstein:1973ur, Hawking:1974sw}. We can associate extensive parameters, such as energy, and other conserved charges, in terms of asymptotic data, while intensive parameters
such as temperature and chemical potential refer to properties of the horizon.
In the Einstein-Hilbert theory Eq.~\eqref{einhilactn0} we associate a notion of entropy, given by the area of the event horizon. Thanks to Hawking's famous area theorem, which states that the horizon area is non-decreasing in any physical process (for matter satisfying suitable energy conditions), this notion of entropy satisfies the second law of thermodynamics (see e.g., \cite{Hawking:1973uf}).
A natural question then is how this picture extends to higher derivative theories. This question is especially salient since a study of higher derivative black hole dynamics in the non-linear regime
could help us constrain such terms via gravitational wave observations. Whether higher derivative black holes are sensible endpoints of dynamics, in a second law sense, is thus an interesting question to answer. The question of extending black hole thermodynamics to higher derivative gravity was first tackled by various people more than two decades ago,
and culminated in a beautiful application of the Noether procedure in the covariant phase space formalism \cite{Wald:1993nt, Iyer:1994ys}. In particular,
Wald argued that for stationary black hole solutions of higher derivative gravity theories, the entropy is a Noether charge associated with time translations along the horizon generating Killing field. This Wald entropy was constructed to explicitly satisfy the first law of thermodynamics, which, being an equilibrium statement, can be understood in the stationary solution.
An open question since then has been whether there is a notion of second law of black hole mechanics in higher derivative gravity. This question was addressed in various guises over the years, \cite{Jacobson:1993xs,Jacobson:1993vj,Iyer:1994ys, Jacobson:1995uq, Kolekar:2012tq, Sarkar:2013swa, Bhattacharjee:2015yaa, Wall:2015raa, Bhattacharjee:2015qaa}. We will say more about these works below, but first let us appreciate the issues involved.
The second law involves dynamics. In its crudest version it says that, in a physical process in which a system evolves from one equilibrium configuration to another, the total entropy must increase. Whilst this is a non-local statement, comparing only the initial and final configurations, one can ensure this by simply exhibiting a function of the system variables, the \emph{entropy function},
which is monotone under time evolution, and reduces in equilibrium to the familiar notion of entropy. These requirements, per se, seem quite unrestrictive, for we make no demand of uniqueness.
Constructing an entropy function in a higher derivative theory will convince us of the validity of the second law. Conversely, if we can show that no such entropy function exists, then we would be forced to conclude that the higher derivative theory is physically unacceptable.\footnote{ We do not wish to suggest that the existence of the entropy function is the most stringent condition one could impose. Demanding causality and unitarity may imply stronger constraints, which may rule out a vaster swathe of the higher derivative landscape. It is nevertheless interesting to ask if the entropy function itself serves to give us interesting constraints on the admissible higher derivative terms.} All told, we wish to examine whether dynamical black holes in higher derivative theories can indeed be viewed as thermodynamic objects.
In gravity, a configuration in thermal equilibrium translates to a geometry with a Killing horizon. Recall that the event horizon is a null surface, ruled by null geodesics. The equilibrium configuration is one where the null generators are along a Killing direction. In Einstein-Hilbert theory Eq.~\eqref{einhilactn0}, the area of the horizon provides a suitable entropy function, thanks to the aforementioned area increase theorem. Assuming the existence of a Killing horizon, Wald derived
the Wald entropy function \cite{Wald:1993nt,Iyer:1994ys}, which gives us a notion of equilibrium entropy in higher derivative theories. We then need to find its extension to the dynamical setting, satisfying the desired monotonicity conditions.
Let us try to organize the discussion in a useful manner. Firstly, higher derivative terms come with characteristic length scales (to soak up dimensions). Let us assume that these are all calibrated against a single `string' scale $\ell_s$, so that we have a bunch of dimensionless couplings, collectively denoted as $\alpha$, which parametrize the higher derivative theory. We pick a stationary black hole in this theory, which will determine the initial equilibrium configuration. To study the second law, we should now be perturbing away from equilibrium, which itself can be parametrized by a) the amplitude, $\mathfrak{a}$, of the departure from equilibrium, and b) the characteristic frequency, $\omega$, of the disturbance.\footnote{ We assume for the moment that the frequency scale is commensurate with spatial momenta in the case of non-compactly generated horizons.}
A putative entropy function should, at the very least, carry information about these three parameters. The following summarizes what is known to date:
\begin{itemize}
\item The Wald entropy can be constructed for arbitrary $\alpha$ with $\mathfrak{a}=0$.
\item In \cite{Jacobson:1993xs,Jacobson:1995uq} the authors construct entropy functions for theories where the higher derivative Lagrangian is written solely in terms of the Ricci scalar (the so-called $f(R)$ theories), for a finite range of $\alpha$, and arbitrary $\mathfrak{a}$ and $\omega$.
\item In recent years, \cite{Kolekar:2012tq} first constructed an entropy functional that was valid for higher derivative interactions of the Lovelock form (see below) for small amplitude departures from equilibrium $\mathfrak{a}\ll 1$. This was generalized subsequently to $f(\text{Lovelock})$ theories in \cite{Sarkar:2013swa}.
\item These constructions were further examined in \cite{Bhattacharjee:2015yaa} who showed that by fixing various ambiguities associated with foliations of the event horizon, one could construct an entropy function for four-derivative theories of gravity (in spherically symmetric situations). Again the range of validity was $\mathfrak{a} \ll 1$.
\item In a more interesting development, \cite{Wall:2015raa} showed that for any higher derivative theory of gravity, a particular correction to the Wald functional due to \cite{Fursaev:2013fta,Dong:2013qoa,Camps:2013zua}, which construct functions for computing entanglement entropy in the holographic context, provides an entropy function to linear order in amplitudes $\mathfrak{a} \ll 1$.
\item While there is no explicit discussion in the literature, the construction of an entropy current \cite{Bhattacharyya:2008xc} in the fluid/gravity correspondence \cite{Bhattacharyya:2008jc,Hubeny:2011hd} is strongly
suggestive of an entropy function in higher derivative gravity perturbatively in the couplings and the frequency, $\alpha \ll 1, \,\omega \ell_s \ll1$, but valid for arbitrary amplitudes.\footnote{ In the hydrodynamic regime, the transport coefficients become unphysical, e.g., the shear viscosity goes negative, for finite values of the higher derivative coupling \cite{Brigante:2007nu}. A negative value of viscosity leads to entropy destruction, but this constraint, one must remark, is far weaker than demanding that the theory respect causality, either in the effective field theory sense \cite{Brigante:2007nu}, or from a fundamental perspective \cite{Camanho:2014apa}.}
\end{itemize}
In this note, we shall construct an entropy function for the Lovelock family of higher derivative Lagrangians. We will work perturbatively in the higher derivative interactions, treating, as one should, the corrections to Einstein-Hilbert theory in a gradient expansion.
The effective small parameter governing our perturbation theory is going to be
$\omega\, \ell_s$ which is dimensionless. We however will make no assumption about the amplitude $\mathfrak{a}$ of the perturbation away from equilibrium. In fact, we allow arbitrary time evolution away from equilibrium; our only proviso is that this time evolution be sensibly captured by the low energy effective field theory. In geometric terms, we will allow fluctuations of the black hole horizon which, at the horizon scale, are small enough, so that the leading Einstein-Hilbert term dominates over the higher derivative terms.
Despite the two-derivative theory dominating on the horizon scale, we will need to modify the entropy to respect the second law. This follows because, while the leading area contribution is per se large, it remains possible that under evolution, the area variation is anomalously small. This can then be overwhelmed by the higher derivative
$\mathcal{O}(\omega \ell_s)$ contributions, spoiling the monotonicity of the entropy. To ensure that this does not happen we will need to shift the entropy function away from the Wald form with suitable corrections.
The outline of this paper is as follows. We begin in \S\ref{sec:state} with a basic statement of the problem we study, and a complete summary of our results. We then describe in \S\ref{sec:GaussBonnet} a construction of an all-order entropy function for Gauss-Bonnet theory. Our analysis will give an entropy function with monotonicity properties in spherically symmetric configurations, but we note an interesting obstruction (in the form of a total derivative), which precludes us from making a general statement. We elaborate on this in the course of our discussion. We then show in \S\ref{sec:lovelock} how to extend the construction to higher Lovelock terms, once again encountering a total derivative term. The final result for an arbitrary combination of Lovelock terms can be compactly packaged in terms of variations of the gravitational Lagrangian with respect to the curvatures, see Eq.~\eqref{eq:SfinalIntro}. In general, as long as the total derivative obstruction term\footnote{ This term can also be expressed in terms of the variation of the Lagrangian with respect to curvatures, multiplying a certain tensor built out of the extrinsic curvatures, cf., Eq.~\eqref{eq:Xobsgen}.} is negative definite (or vanishes, as in spherical symmetry), we have our desired entropy function. Finally, in \S\ref{sec:discussion} we comment on potential generalizations and open issues. The various appendices collect some useful technical information relating to the computations.
\subsection{Statement of the problem}
\label{sec:state}
Let us first define the problem we tackle more precisely. We consider higher derivative corrections to Eq.~\eqref{einhilactn0}. Schematically, suppressing all indices, these corrections will be of the form $\alpha_k\,\ell_s^k \, D^k \, R$. $\ell_s$ here is the length scale at which higher derivative corrections appear. Rather than focusing on all possible corrections of this form, we will restrict attention to the family of Lovelock theories, whose action can be written as:\footnote{ To keep the expressions readable we have chosen to work in natural entropy units by setting $G_N^{(d)} = \frac{1}{4}$. Thus an overall dimensionful dependence on the Planck scale is suppressed in our analysis, which however, can be restored easily if desired.}
\begin{equation}
\begin{split}
I &= \frac{1}{4\pi}\ \int d^dx\, \sqrt{-g} \left(R + \sum_{m=2}^\infty \, \alpha_m\,
\ell_s^{2m-2} \, \mathcal{L}_m + \mathcal{L}_{matter} \right)\\
\mathcal{L}_m &=
\delta^{\mu_1\nu_1\cdots \mu_m \nu_m}_{\rho_1\sigma_1\cdots \rho_m \sigma_m} \ R^{\rho_1}{}_{\mu_1}{}^{\sigma_1}{}_{\nu_1} \ \cdots\ R^{\rho_m}{}_{\mu_m}{}^{\sigma_m}{}_{\nu_m} \,.
\end{split}
\label{eq:action1}
\end{equation}
We have written the Lovelock action in terms of the generalized Kronecker symbol
$\delta_{\rho_1 \sigma_1\cdots \rho_m \sigma_m}^{\mu_1 \nu_1\cdots \mu_m \nu_m}$.\footnote{ $\delta^{\mu_1\mu_2 \cdots \mu_m}_{\nu_1\nu_2\cdots \nu_m}$ is the determinant of an $m\times m$ matrix whose $(ij)^{\rm th}$ element is given by $\delta^{\mu_i}_{\nu_j}$.
Hence $\delta_{\rho_1 \sigma_1\cdots \rho_m \sigma_m}^{\mu_1 \nu_1\cdots \mu_m \nu_m}$ is completely antisymmetric both in all its upper and lower indices.}
In particular, the first non-trivial contribution at the four-derivative order is given by the Gauss-Bonnet theory
\begin{equation}
\mathcal{L}_2 \equiv \mathcal{L}_{GB} =
R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\alpha\beta} R^{\mu\nu\alpha\beta} \,.
\label{eq:gb}
\end{equation}
The matter Lagrangian $\mathcal{L}_{matter}$ will play a peripheral role in our analysis. The only assumption we make is that the matter we include satisfies the null energy condition (NEC), $T^{matter}_{\mu\nu} \,k^\mu k^\nu \geq 0$ for any null $k^\mu$.
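As a consistency check of the normalization in Eq.~\eqref{eq:action1}, the identity $\mathcal{L}_2 = \mathcal{L}_{GB}$ can be verified numerically: contracting the determinant-valued generalized Kronecker delta against two copies of any tensor with the algebraic symmetries of the Riemann tensor (including the first Bianchi identity) reproduces the combination in Eq.~\eqref{eq:gb}. The following Python/NumPy sketch is purely illustrative and not part of the derivation; it works with flat indices and generates test tensors as sums of Kulkarni-Nomizu squares:

```python
# Illustrative numerical check (flat indices) that the m = 2 Lovelock
# density of Eq. (eq:action1),
#   L_2 = delta^{mu1 nu1 mu2 nu2}_{rho1 sig1 rho2 sig2}
#           R^{rho1}_{mu1}{}^{sig1}_{nu1} R^{rho2}_{mu2}{}^{sig2}_{nu2},
# equals the Gauss-Bonnet combination R^2 - 4 Ric^2 + Riem^2 of Eq. (eq:gb).
import numpy as np
from itertools import permutations

d = 5
rng = np.random.default_rng(0)

# A random tensor with all Riemann symmetries (pair antisymmetry, pair
# exchange, first Bianchi): a sum of Kulkarni-Nomizu squares
#   R_{abcd} = S_{ac} S_{bd} - S_{ad} S_{bc},  S symmetric.
R = np.zeros((d, d, d, d))
for _ in range(4):
    S = rng.standard_normal((d, d))
    S = S + S.T
    R += np.einsum('ac,bd->abcd', S, S) - np.einsum('ad,bc->abcd', S, S)

def perm_sign(p):
    """Sign of a permutation, via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# Generalized Kronecker delta as the determinant of the 4x4 matrix of
# ordinary deltas (cf. the footnote below Eq. (eq:action1)); axis order is
# delta4[mu1, nu1, mu2, nu2, rho1, sigma1, rho2, sigma2].
eye = np.eye(d)
lower = 'wxyz'
delta4 = np.zeros((d,) * 8)
for p in permutations(range(4)):
    subs = ','.join(f'{u}{lower[p[i]]}' for i, u in enumerate('abcd'))
    delta4 += perm_sign(p) * np.einsum(subs + '->abcdwxyz', eye, eye, eye, eye)

# Contract as in Eq. (eq:action1): R^{rho1}_{mu1}{}^{sig1}_{nu1} -> R[p,m,q,n]
L2 = np.einsum('mnabpqcd,pmqn,cadb->', delta4, R, R)

Ric = np.einsum('aman->mn', R)
GB = np.einsum('mm->', Ric) ** 2 - 4 * np.einsum('mn,mn->', Ric, Ric) \
     + np.einsum('abcd,abcd->', R, R)
print(np.isclose(L2, GB))   # True
```

The equality holds without any extra numerical factor precisely because the determinant convention for the generalized delta carries no $1/m!$ normalization.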
For the initial part of the paper we will explain the construction in the case of the Gauss-Bonnet theory \S\ref{sec:GaussBonnet}, and only thence generalize to the Lovelock case \S\ref{sec:lovelock}.
As we are interested in analyzing black hole geometries, we will assume that we have been handed a solution with spacetime metric $g_{\mu\nu}$ which has a regular horizon. We pick a coordinate chart for this geometry which manifests regularity at the horizon (similar to the ingoing Eddington-Finkelstein chart), and parameterize the metric as\footnote{ We thank Shiraz Minwalla for numerous discussions on the structure of the entropy function, and setting up useful coordinate charts to construct candidate functions.}
%
\begin{equation}
\begin{split}
&ds^2 = 2\, dv~ dr - f(r,v, {\bf x})\, dv^2 + 2 \, k_A (r,v, {\bf x}) \, dv~ dx^A + h_{AB}(r,v, {\bf x})\,dx^A dx^B\,.
\end{split}
\label{metricintro}
\end{equation}
The null hypersurface of the horizon $\mathcal{H}^+$ is the locus $r=0$, where several of these functions vanish, namely
\begin{equation}\label{metricintroa}
\begin{split}
f(r,v, {\bf x})\big|_{\mathcal{H}^+}= k_A (r,v, {\bf x})\big|_{\mathcal{H}^+} = \partial_r f(r,v, {\bf x})\big|_{\mathcal{H}^+} =0\,.
\end{split}
\end{equation}
In \S\ref{sec:geom} we shall see that such a coordinate choice is always possible if the spacetime contains a null hypersurface. From Eq.~\eqref{metricintro} it follows that the horizon is at $r=0$, with $\partial_v$ being the affinely parametrized null generators of the horizon. This choice is quite natural from the fluid/gravity intuition (it was also previously employed in \cite{Wall:2015raa}). Spatial sections of the horizon at constant $v$ slices are denoted as $\Sigma_v$.
Our goal is to construct an entropy function, $S_{\text{total}}$, in terms of the derivatives of metric components evaluated on the horizon, subject to the following conditions:
\begin{itemize}
\item \label{cond1} {Condition 1 {\bf (C1)}:} $\partial_v S_{\text{total}} \ge 0$:
on every black hole solution captured by a metric of the form Eq.~\eqref{metricintro}, and for every finite value of $v$.
\item{Condition 2 {\bf (C2)}:} \label{cond2}$S_{\text{total}}$ reduces to the Wald entropy $S_\text{Wald}$ in equilibrium (when $\partial_v$ is a Killing vector).
\end{itemize}
As indicated, our analysis will be carried out in a gradient expansion, with the perturbation parameter being $\omega \,\ell_s$, consistent with the rules of effective field theory. Here $\ell_s$ is some UV (string) scale at which the higher derivative corrections start to dominate.
\subsection{Summary of results}
\label{sec:summaryresult}
We record here the final results of our analysis for quick reference. We find a parameterization of our entropy functional in terms of a scalar function evaluated on spatial sections of the future horizon, denoted $\mathfrak{s}$. To wit,
\begin{equation}
S_{\text{total}} = \int_{\Sigma_v}\, d^{d-2}x\, \sqrt{h} \; \Big( 1 + \mathfrak{s} \Big) \,.
\label{eq:parsol}
\end{equation}
Our parameterization is such that the equilibrium contribution from the Wald analysis is subsumed into $\mathfrak{s}$.
In particular, $\mathfrak{s} = \mathfrak{s}_\text{Wald} + \mathfrak{s}_\text{cor}$, where $ \mathfrak{s}_\text{cor}$ denotes the correction terms we add, which involve a series of gradient corrections suppressed by $\ell_s \omega$, as will be made more explicit below.\footnote{ While this is still a gradient expansion, it is valid as long as the low energy effective field theory makes sense. It differs from the fluid/gravity regime in that the frequencies are only constrained to be smaller than the UV scale set by $\ell_s$, and not an IR scale like the temperature, which is set by local thermal equilibrium.} We will derive the following results:
\begin{enumerate}
\item The general answer for the entropy function to all orders in $\ell_s\omega$ for Lovelock theories can be expressed as ($\mathfrak{s}_\text{total} = 1 + \mathfrak{s}$)
\begin{equation}
\begin{split}
\mathfrak{s}_\text{total} = \frac{\delta {\cal L}_\text{grav}}{\delta R^v{}_v{}^r{}_r} \bigg{|}_{R\rightarrow {\cal R}} \,
+ \sum_{n=0}^\infty \kappa_n
\left[ \ell_s^n \partial_v^n \left( \frac{1}{2}\, \frac{\delta^2 {\cal L}_\text{grav}}{
\delta R^A{}_{A_1}{}^{C_1}{}_v \, \delta R^{v}{}_{B_1}{}^{D_1}{}_{B} } \bigg{|}_{R\rightarrow {\cal R}} \, \mathcal{K}_{A_1}^{C_1}\overline{\mathcal{K}}_{B_1}^{D_1}
\right) \right]^2
\end{split}
\label{eq:SfinalIntro}
\end{equation}
%
$\mathcal{L}_\text{grav}$ is the purely geometric part of the action, comprising only the metric contributions. The replacement rule ${R\rightarrow {\cal R}} $ indicates that once we vary the Lagrangian, we replace all the curvature tensors of the spacetime with those intrinsic to the hypersurface $\Sigma_v$ (see Eq.~\eqref{defbkint} below).
The coefficients $\kappa_n$ are some arbitrary $\mathcal{O}(1)$ constants satisfying the following recursive inequalities\footnote{ A particular solution is provided by auxiliary parameters $A_{-4}=\frac{1}{3},\, A_{-3} = -\frac{1}{12}$, so that $ A_{-2} = -4,\, A_n = -1$ for $n \geq -1$, which leads to the values of the parameters $\kappa_n$ being:
$\kappa_{-2} =-\frac{1}{2} ,\, \kappa_{-1}=-2 ,\, \kappa_{n} = -1$ for $n\geq 0$ (and $\kappa_{-3} = -1$ for consistency).}
%
\begin{equation}\label{condsgb}
\begin{split}
A_{n} &=2 \,\kappa_{n} -\frac { \kappa_{n-1}^2 }{ A_{n-2}} \le 0\,, \qquad \text{for}\; n =-2,-1,0, \cdots. \\
& \text{initial condition}: \quad \kappa_{-2} = -\frac{1}{2} \,, \quad \kappa_{-1} = -2 \,.
\end{split}
\end{equation}
By construction, it satisfies {\bf C2}. Furthermore, for $SO(d-1)$ spherically symmetric, time dependent geometries, it also satisfies {\bf C1}.
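As a quick sanity check of the inequalities Eq.~\eqref{condsgb}, one can iterate the recursion directly with the particular solution quoted in the footnote; the short Python sketch below (illustrative only, using exact rational arithmetic) confirms that all the $A_n$ remain non-positive:

```python
# Iterate Eq. (condsgb), A_n = 2 kappa_n - kappa_{n-1}^2 / A_{n-2}, with the
# footnote's particular solution, and check A_n <= 0 throughout.
from fractions import Fraction

kappa = {-3: Fraction(-1), -2: Fraction(-1, 2), -1: Fraction(-2)}
A = {-4: Fraction(1, 3), -3: Fraction(-1, 12)}

for n in range(-2, 20):
    k_n = kappa.get(n, Fraction(-1))            # kappa_n = -1 for n >= 0
    k_prev = kappa.get(n - 1, Fraction(-1))
    A[n] = 2 * k_n - k_prev ** 2 / A[n - 2]
    assert A[n] <= 0, (n, A[n])

print(A[-2], A[-1], A[0], A[5])   # -4 -1 -1 -1
```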
\item In general, Eq.~\eqref{eq:SfinalIntro} is an entropy function satisfying {\bf C1}, provided a particular total derivative term (see Eq.~\eqref{eq:Xobsgen}), which we call the \emph{obstruction term}, can be bounded to be negative semidefinite. We find that the obstruction vanishes for spherically symmetric configurations, but we have not been able to gain control of this term efficiently.
\item The first term in Eq.~\eqref{eq:SfinalIntro} is just the Dong-Camps (or Jacobson-Myers functional) appearing in the holographic entanglement entropy computations, see \cite{Fursaev:2013fta, Dong:2013qoa,Camps:2013zua}. This owes to our replacement of the spacetime curvatures by the intrinsic ones on the codimension-2 hypersurface.\footnote{ As noted first by \cite{Jacobson:1993xs} this also works for the stationary black holes where the extrinsic curvatures vanish on the bifurcation surface of the horizon.} As noted earlier, \cite{Wall:2015raa} has earlier demonstrated that this functional serves to uphold the second law to linear order in amplitudes, in arbitrary higher derivative theories. Thus to leading order in the expansion our result is consistent with \cite{Wall:2015raa}.
\item More generally, the structure of the Dong functional also has contributions that resemble the second term of Eq.~\eqref{eq:SfinalIntro}. They involve two variations of the Lagrangian with respect to the curvature, contracted with the extrinsic curvatures. However, despite superficial similarities, there are some subtle differences, especially in the index structure of the correction term (and, in particular, the fact that we organize it in the form of a negative semidefinite quadratic form). Nevertheless, given the result of \cite{Wall:2015raa}, it is interesting to further consider whether one can find another entropy function based on the Dong functional.
\item To exemplify the answer, we can restrict to Gauss-Bonnet theory, where we provide an explicit expression for $\mathfrak{s}_2$ to all orders in $\ell_s\omega$. Explicitly, our solution reads\footnote{ For the most part we will work in a coordinate space representation where $\partial_v$ will play the role of the frequency $\omega$.}
\begin{equation} \label{scor}
\begin{split}
\mathfrak{s}_2&= 2\, \alpha_2 \,\ell_s^2\;\delta^{A_1 B_1}_{C_1 D_1}\; {{{\mathcal{R}^{C_1}}_{A_1}}^{D_1}}_{B_1}
+\sum_{n=0}^{\infty}\kappa_n \; \ell_s^{2n}\
\partial_v^n \HnG{0}^A_B \; \partial_v^n \HnG{0}^B_A
\end{split}
\end{equation}
This is written in terms of data on the surface $\Sigma_v$ (see \S\ref{sec:notation} for a summary of our notation):
\begin{equation}\label{defbkint}
\begin{split}
&{\cal R}_{ABCD}=\text{intrinsic Riemann tensor on}\ \Sigma_v \
(\text{associated with}\ h_{AB})
\,,\\
& \HnG{0}^{A}_{B} = \alpha_2 \, \ell_s^2\; \delta^{ A A_1 A_2}_{ BB_1 B_2} \;
\mathcal{K}^{B_1}_{A_1}~\overline{\mathcal{K}}^{B_2}_{A_2}, \\
&\mathcal{K}_{AB}= \frac{1}{2}\partial_v h_{AB}\vert_{r=0}, \qquad
\overline{\mathcal{K}}_{AB}= \frac{1}{2}\partial_r h_{AB}\vert_{r=0}
\end{split}
\end{equation}
The first term in Eq.~\eqref{scor} agrees with the expression for Wald entropy in the Gauss-Bonnet theory in equilibrium.
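This agreement can be made explicit: for any tensor with the algebraic symmetries of Riemann, $\delta^{A_1 B_1}_{C_1 D_1}\, \mathcal{R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} = \mathcal{R}$, the intrinsic Ricci scalar, so the equilibrium piece of Eq.~\eqref{scor} is just the Jacobson-Myers density $2\,\alpha_2\,\ell_s^2\,\mathcal{R}$. A quick numerical check (illustrative sketch, flat indices):

```python
# Illustrative check (flat indices): for any tensor Rc with Riemann
# symmetries,  delta^{AB}_{CD} Rc[C, A, D, B] = intrinsic Ricci scalar,
# so the first term of Eq. (scor) is 2 alpha_2 l_s^2 times the Ricci
# scalar of Sigma_v.
import numpy as np

n = 4
rng = np.random.default_rng(1)
S = rng.standard_normal((n, n))
S = S + S.T
# Kulkarni-Nomizu square: carries all the algebraic Riemann symmetries.
Rc = np.einsum('ac,bd->abcd', S, S) - np.einsum('ad,bc->abcd', S, S)

# delta^{AB}_{CD} = delta^A_C delta^B_D - delta^A_D delta^B_C, contracted
# with Rc[C, A, D, B]:
lhs = np.einsum('aabb->', Rc) - np.einsum('abba->', Rc)
Rscalar = np.einsum('abab->', Rc)
print(np.isclose(lhs, Rscalar))   # True
```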
\item When the evolution breaks $SO(d-1)$ spherical symmetry, we find that $S_{\text{total}}$ still satisfies {\bf C1} to all orders in $\ell_s\omega$,
provided the following total derivative term can be bounded to be negative semidefinite:
\begin{equation}\label{expJ0int}
\begin{split}
(\partial^2_v \mathfrak{s}_2)_{\nabla} &= 4\, \alpha_2 \, \ell_s^2\; \nabla_A\nabla_B \left[ \mathcal{K}\Kt^{AB}- \mathcal{K}^{A}_{C} \mathcal{K}^{BC} -\frac{h^{AB}}{2}\left( \mathcal{K}^2 - \mathcal{K}_{CD} \mathcal{K}^{CD}\right) \right] .
\end{split}
\end{equation}
Here $\nabla_A$ is the covariant derivative with respect to the spatial metric $h_{AB}$ on $\Sigma_v$ and $\mathcal{K} = \mathcal{K}^C_C$. Curiously, this obstruction term only appears to enter at the leading order in the gradient expansion $\mathcal{O}(\omega\,\ell_s)$.\footnote{ It is also worth remarking that the term is quadratic in the amplitude away from equilibrium, and thus is invisible to the earlier linearized analysis, e.g.,
\cite{Wall:2015raa}. We are also agnostic as to the particular black hole background about which we perturb, in contrast to the discussion of \cite{Sarkar:2013swa,Bhattacharjee:2015qaa}.}
\end{enumerate}
\subsection{Notation}
\label{sec:notation}
\begin{itemize}
\item Spacetime indices: Lowercase Greek $\{\alpha,\beta, \cdots, \mu,\nu, \cdots\}$.
\item Spacetime metric: $g_{\mu\nu}$ and spacetime covariant derivative $D_\alpha$.
\item Indices on codimension-two surfaces: Uppercase Latin $\{A,B,\cdots\}$.
\item Horizon generator: $t^\mu$, $t^\mu\,t_\mu =0$. We fix
$t^\mu = \left(\frac{\partial}{\partial v} \right)^\mu$, and abbreviate to $\partial_v$.
\item Normalizer of horizon generator $n^\mu$, $n^\mu n_\mu = 0$ and $n^\mu t_\mu = 1$. We fix $n^\mu = \left(\frac{\partial}{\partial r} \right)^\mu$, and abbreviate to $\partial_r$.
\item Horizon tangent space generated by
$e^\mu_A \equiv \left(\frac{\partial}{\partial x^A} \right)^\mu$ and abbreviated to $\partial_A$.
\item Intrinsic metric on the horizon $h_{AB}$ and associated covariant derivative $\nabla_A$.
\item Extrinsic curvatures of the horizon:
\begin{equation}
\mathcal{K}_A^B\, e_B^\nu = e^\mu_A \, D_\mu t^\nu\,, \qquad
\overline{\mathcal{K}} _A^B\, e_B^\nu = e^\mu_A \, D_\mu n^\nu \,, \qquad
\omega^A\, e_A^\nu = n^\mu \, D_\mu n^\nu
\label{eq:KKbdef}
\end{equation}
Note that $\mathcal{K}_{AB}$ vanishes in equilibrium. For the Lovelock theories we do not find a role for the H\'{a}j\'{i}\v{c}ek one-form $\omega^A$.
\end{itemize}
\section{Set-up and strategy}
\label{sec:setup}
We begin with a general discussion of coordinate choices adapted to a black hole spacetime with a regular event horizon. Specifically, we aim to establish our choice of the metric Eq.~\eqref{metricintro} as the most general form admissible. We then explain the strategy we employ, in abstract, for the construction of an entropy function.
\subsection{Horizon adapted coordinates}
\label{sec:geom}
Our interest is in dynamical black hole spacetimes which admit a regular event horizon. The horizon is a distinguished codimension-1 null hypersurface, $\mathcal{H}^+$, which is the boundary of the causal past of future null infinity. Being the boundary of a causal set, it is null, and is ruled by null geodesics. Our interest is in the situation where this hypersurface is dynamical, evolving non-trivially under the dynamics governed by the action Eq.~\eqref{eq:action1}.
We first choose coordinates so that $\mathcal{H}^+$ lies on a constant coordinate locus, w.l.o.g., say $r=0$. The null generators of the horizon will be taken to be the vector field $t^\mu = (\partial_v)^\mu$; since $\mathcal{H}^+$ is null, $\partial_v$ is orthogonal to every tangent vector field on the horizon, including $\partial_v$ itself. We choose the coordinate $v$ such that it is the affine parameter along these null generators. Then constant $v$ slices of the horizon form a $(d-2)$-dimensional spatial manifold, denoted $\Sigma_v$. On this spacetime codimension-2 hypersurface we choose $(d-2)$ vector fields $e_A^\mu = (\partial_A)^\mu$ to span the tangent space. Integral curves of $\partial_A$ give us the spatial coordinates $x^A$ along $\Sigma_v$. The vectors $\partial_A$ are orthogonal to $\partial_v$, so the metric on $\mathcal{H}^+$ takes the following degenerate form, appropriate for a null hypersurface:
\begin{equation}\label{hormet}
ds^2\vert_{\mathcal{H}^+} = e^A_\mu \, e^B_\nu \, h_{AB}\, dx^\mu dx^\nu = h_{AB}(v,{\bf x}) ~dx^A dx^B \,.
\end{equation}
Now that we have the geometry of the future event horizon, we can construct the spacetime metric in its vicinity. To define the coordinate $r$, we construct a congruence of null geodesics piercing through $\mathcal{H}^+$, at an angle fixed by the choices of the inner products between the vector field $n^\mu =(\partial_r)^\mu$ and the tangent vectors of the horizon, (i.e., $\partial_v$ and $\partial_A$). We make the following choice:
\begin{equation}
n^\mu t_\mu = (\partial_r,\partial_v) \Big|_{\mathcal{H}^+} =1 \,,
\qquad
n_\mu e_A^\mu = (\partial_r,\partial_A)\Big|_{\mathcal{H}^+}=0
\label{eq:normalize}
\end{equation}
%
With this choice, the spacetime metric $g_{\mu\nu}$ in the vicinity of the horizon takes the form:
\begin{align}\label{metch2}
ds^2 &= 2 \,j(r,v,{\bf x}) \,dv \, dr + 2\,J_A(r,v,{\bf x}) \,dr \, dx^A -f(r,v,{\bf x}) \,dv^2 \nonumber \\
& \qquad \qquad + 2\,k_A (r,v,{\bf x}) \, dv \, dx^A+
h_{AB}(r,v,{\bf x}) \,dx^A dx^B\,,
\nonumber \\
j(r,v,{\bf x})\big|_{\mathcal{H}^+}& \equiv 1\,, \qquad J_A(r,v,{\bf x})\big|_{\mathcal{H}^+} = f(r,v,{\bf x}) \big|_{\mathcal{H}^+}= k_A (r,v,{\bf x}) \big|_{\mathcal{H}^+}=0 \,.
\end{align}
%
By construction we have chosen $\partial_r$ to be a null vector in Eq.~\eqref{metch2}. Further imposing that the null geodesics along $\partial_r$ be affinely parametrized demands that the $r$-derivatives of $j(r,v,{\bf x})$ and $J_A(r,v,{\bf x})$ vanish everywhere, implying
%
\begin{equation}
j(r,v,{\bf x}) =1\,, \qquad J_A(r,v,{\bf x}) =0 \,.
\label{parraffcond}
\end{equation}
Finally, we make the choice that $v$ be the affine parameter along the null geodesics with tangent $\partial_v$. Consequently, we set $\partial_r f(r,v,{\bf x})\vert_{\mathcal{H}^+}=0$. Upon imposing all these choices, we arrive at the metric as described earlier in Eq.~\eqref{metricintro}, which we summarize here for convenience:
\begin{equation}
\begin{split}
ds^2 &= 2\, dv~ dr - f(r,v, {\bf x})\,dv^2 + 2 \,k_A (r,v, {\bf x}) \,dv~ dx^A + h_{AB}(r,v, {\bf x})\,dx^A dx^B\,, \\
& \qquad f(r=0,v, {\bf x})= k_A (r=0,v, {\bf x})= \partial_r f(r=0,v, {\bf x})=0\,.
\end{split}
\label{eq:metric}
\end{equation}
The detailed steps leading to the above constraints are given in Appendix \ref{appC}.
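As an illustrative consequence of these boundary conditions (a sketch; the coefficient functions $f^{(2)}$ and $k_A^{(1)}$ below are merely our notation for the leading Taylor data, and are not fixed by the constraints), smooth metric functions admit near-horizon expansions of the form
$$f(r,v,{\bf x}) = r^2\, f^{(2)}(v,{\bf x}) + \mathcal{O}(r^3)\,, \qquad k_A(r,v,{\bf x}) = r\, k_A^{(1)}(v,{\bf x}) + \mathcal{O}(r^2)\,,$$
reflecting the affine parametrization of the generators (the analogue of surface gravity vanishes in these coordinates); the non-trivial horizon dynamics is then encoded in $h_{AB}(r,v,{\bf x})$ and the subleading coefficients.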
\subsection{Strategy of the proof}
\label{sec:strat}
As already mentioned in \S\ref{sec:state}, we would like to prove the existence of an entropy function satisfying
$$\frac{\partial S_{\text{total}}}{\partial v} \ge 0 \,,$$
for all finite $v$. While this a priori only requires that we present a quantity which can be evaluated on different $\Sigma_v$ slices, we are going to assume that it is in turn defined in terms of a local density function $\Theta$.\footnote{ We might actually have desired the existence of a local entropy current, a much stronger requirement, based on the fluid/gravity intuition. The dual of the entropy current would be a $(d-2)$-form which we could have integrated over arbitrary slices of the horizon, without a-priori picking a foliation as we have done. See \S\ref{sec:discussion} for further comments.} We write:
\begin{equation}
\label{deftheta}
\frac{\partial S_{\text{total}}}{\partial v} =\int_{\Sigma_v} d^{d-2}x \, \sqrt{h}\; \Theta\,.
\end{equation}
In Einstein-Hilbert theory $\Theta$ is the expansion of the null congruence along $\partial_v$.
At this point, we simply need to show $\Theta \ge 0$. However, in analogy with the proof of the area increase theorem in Einstein-Hilbert theory, we are going to follow a different strategy. Rather than bound $\Theta$, we are going to show that within the validity of our higher derivative perturbation scheme, we have
\begin{equation}\label{restrtheta}
\partial_{v}\Theta \le 0 \,,
\end{equation}
up to the required order in the derivative expansion. Thus $\Theta$ is a monotonically decreasing function of the horizon time, $v$. Further, assuming that in the future the spacetime approaches an equilibrium configuration, and hence $\Theta$ vanishes, i.e.,
\begin{equation}
\lim_{v\to \infty} \Theta(v) \; \rightarrow \;0 \,,
\end{equation}
we shall conclude that $\Theta$ is non-negative at every finite $v$. Eq.~\eqref{deftheta} then implies that the rate of change of total entropy, $\partial_v S_{\text{total}}$, is itself non-negative. We therefore conclude that $S_{\text{total}}$ is non-decreasing in time.
Let us recall how this strategy works in Einstein-Hilbert theory Eq.~\eqref{einhilactn0} to prove the area theorem. Firstly, one constructs $S_{\text{total}} = \int_{\Sigma_v} \, d^{d-2}x\, \sqrt{h}$ and obtains therefrom
\begin{equation}
\begin{split}
\Theta_{Einstein} &= \mathcal{K}^A_A = \frac{\partial \log\text{Area}(\Sigma_v)}{\partial v} \,, \\
\qquad \mathcal{K}_{AB} &= \frac{1}{2} \,t^\mu D_\mu h_{AB} = \frac{1}{2} \partial_v h_{AB} \,.
\end{split}
\label{eq:defB}
\end{equation}
Note that $\mathcal{K}_{AB}$ is the Lie drag of the horizon spatial metric along the null generators and therefore it vanishes in equilibrium.\footnote{ An equilibrium black hole is a stationary solution. By virtue of stationarity, a compactly generated horizon is also a Killing horizon, generated by a Killing vector field, $t^\mu$. Furthermore, this field has a vanishing locus on a spacetime codimension-2 surface, the bifurcation surface which lies on the horizon. This has the advantage that we can evaluate various quantities on the bifurcation surface and then Lie drag them along $t^\mu$, which leaves them unchanged everywhere else on slices of $\mathcal{H}^+$. We should also hasten to add that stationarity alone does not imply that the horizon is Killing when the horizon is non-compact, as exemplified by black funnel solutions (see \cite{Marolf:2013ioa} for a review and references). This will not bother us since we start from an equilibrium configuration where the horizon will be stationary and Killing, even if non-compactly generated. }
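As a quick check of the first equality in Eq.~\eqref{eq:defB}, using only the definition of $\mathcal{K}_{AB}$ and the standard variation of a determinant:
$$\partial_v \sqrt{h} = \frac{1}{2}\,\sqrt{h}\; h^{AB}\, \partial_v h_{AB} = \sqrt{h}\; \mathcal{K} \quad\Longrightarrow\quad \partial_v\, \text{Area}(\Sigma_v) = \int_{\Sigma_v} d^{d-2}x \,\sqrt{h}\; \mathcal{K}\,,$$
so $\Theta_{Einstein}=\mathcal{K}$ indeed integrates to the growth rate of the horizon area.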
From here it is a simple geometric computation to show that the rate of change of $\Theta$ is given by (see e.g., \cite{wald2010general})
\begin{equation}
\begin{split}
\partial_v\Theta_{Einstein} &= -\mathcal{K}_{AB} \mathcal{K}^{AB}- R_{vv}\,, \\
& = -\mathcal{K}_{\langle AB\rangle} \mathcal{K}^{\langle AB\rangle} - \frac{1}{d-2} \, \mathcal{K}^2 - R_{vv} \,,
\end{split}
\label{incrEina}
\end{equation}
where $R_{vv} = R_{\mu\nu} (\partial_v)^\mu (\partial_v)^\nu$. All indices are raised and lowered with the metric $h_{AB}$ on $\Sigma_v$ which is defined in Eq.~\eqref{hormet}.
The reader will recognize this to be the famous Raychaudhuri equation; we have split the extrinsic curvature tensor $\mathcal{K}_{AB}$ into a symmetric-traceless shear $\mathcal{K}_{\langle AB\rangle} $, and a scalar expansion $\mathcal{K}$, via:
\begin{equation}
\mathcal{K}_{AB} \equiv \mathcal{K}_{\langle AB\rangle} + \frac{1}{d-2}\, h_{AB} \, \mathcal{K}
\,.
\label{eq:shearexp}
\end{equation}
Up to now we have not made use of the actual dynamical equations of motion. Using Einstein's equations, we can express the result in terms of the matter energy-momentum tensor:
\begin{equation}
T_{\mu\nu} \equiv - \frac{1}{\sqrt{-g}} \frac{\delta \left(\sqrt{-g}\, \mathcal{L}_{matter}\right) }{\delta g^{\mu\nu}} \,.
\label{eq:Tdef}
\end{equation}
\begin{equation}
\partial_v\Theta_{Einstein} = -\mathcal{K}_{AB} \mathcal{K}^{AB}- T_{vv}\,,
\end{equation}
where we accounted for $g_{vv}\vert_{\mathcal{H}^+} =0$. We will assume that the matter is sensible, and in particular, the energy-momentum tensor satisfies the null energy condition (NEC), viz.,
\begin{equation}
T_{\mu\nu} \xi^\mu \xi^\nu \geq 0 \,, \;\; \text{for} \;\;g_{\mu\nu} \xi^\mu\,\xi^\nu =0\,
\quad \;\; \Longrightarrow \;\;
T_{vv} \geq 0 \,.
\label{eq:NEC}
\end{equation}
With the assumption of the null energy condition, we immediately see that
\begin{equation}
\partial_v \Theta_{Einstein} = -\mathcal{K}_{AB}\, \mathcal{K}^{AB} - T_{vv}
\leq 0 \,, \qquad \text{(assuming NEC)}
\label{eq:incrEin}
\end{equation}
which is the result we seek. This then implies the area theorem as detailed above.
The obvious obstacle in extending this result to higher derivative theories lies in exchanging $R_{vv}$, via the equations of motion, with the energy momentum tensor. In general Eq.~\eqref{eq:incrEin} will have higher order terms proportional to $\alpha$, the coupling constants of the higher derivative action (and we have to also correct for the equilibrium entropy).
At this point, the reader might be lulled into thinking that, since we are working in $\omega\ell_s$ perturbation theory, as long as the leading terms on the r.h.s. of Eq.~\eqref{eq:incrEin} are non-zero, the higher derivative corrections should be irrelevant. Naively, the argument goes, the lower order terms in perturbation theory should continue to dominate. However, this logic is fallacious, for it can happen that in the course of evolution the r.h.s. of Eq.~\eqref{eq:incrEin} becomes of $\mathcal{O}(\omega^2 \ell_s^2)$, or worse still, vanishes. Recall that $\mathcal{K}_{AB}$ vanishes in equilibrium, so it is at least of $\mathcal{O}(\omega\ell_s)$, but it could indeed be smaller, depending on the particularities of the evolution.
A simple illustration of this scenario is provided by linear (in amplitude $\mathfrak{a}$) departures from equilibrium. Then the r.h.s. of Eq.~\eqref{eq:incrEin} is $\mathcal{O}(\mathfrak{a}^2)$, but higher derivative terms can, and do, contribute at linear order $\mathcal{O}(\mathfrak{a})$. In fact, demanding that such linear terms vanish, fixes some ambiguities in the Wald functional, called the JKM ambiguities after \cite{Jacobson:1993vj}, as has been described in \cite{Sarkar:2013swa, Bhattacharjee:2015yaa, Wall:2015raa}.\footnote{ In particular, \cite{Wall:2015raa} demonstrates how one can always ensure the absence of such linear entropy production contributions by working with the Wald-Dong functional.}
Working then to quadratic order in $\mathfrak{a}$, we also have contributions from the r.h.s. of Eq.~\eqref{eq:incrEin} and this generically dominates over all higher derivative corrections.
However, there are still exceptions, for in the course of evolution, at a given instant of time $\mathcal{K}_{AB}$ can become of order ${\cal O}\left(\mathfrak{a}\,\omega\ell_s\right)$, while its derivatives remain unsuppressed. To exemplify the situation, consider a hypothetical case where for the Wald entropy (with appropriate corrections to handle the terms linear in $\mathfrak{a}$), we find
$$\partial_v\Theta_{Wald} = -h^{AA'} h^{BB'}\mathcal{K}_{A'B'}\left(\mathcal{K}_{AB} +\alpha
\, \ell_s^2~\partial_r\partial_v \mathcal{K}_{AB}\right) - T_{vv} \,.$$
In the above equation, the ${\cal O}(\alpha \ell_s^2)$ correction term can dominate over the original $\mathcal{K}_{AB}$ term, if at a given point of time and in its neighborhood, $\mathcal{K}_{AB} \sim {\cal O}(\alpha \ell_s^2)\, \partial_r\partial_v \mathcal{K}_{AB}$. Such a situation is not forbidden from appearing under evolution, and could clearly overwhelm the leading order Einstein-Hilbert contribution.
Our goal is to find a way to handle such situations. We need to correct the Wald entropy so that situations such as those sketched above do not arise. The strategy will be to add the minimal set of corrections necessary, so that the net contribution of all terms ensures non-negative definite entropy production. We will do this by suitably combining the correction terms and the contributions from the Wald entropy, so that $\partial_v\Theta$ can be expressed as a sum of perfect squares with an overall negative sign.\footnote{ This strategy, as the astute reader might appreciate, precisely follows the logic for construction of an entropy current in hydrodynamics, cf., \cite{Bhattacharyya:2008xc}.}
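Schematically (a toy model of the manipulation, not the actual horizon expression), the completion of squares works as in the elementary identity
$$-a^2 - 2\,\epsilon\, a\, b = -\left(a+\epsilon\, b\right)^2 + \epsilon^2\, b^2\,,$$
with $a$ playing the role of $\mathcal{K}_{AB}$, $b$ that of the higher derivative structures, and $\epsilon\sim\alpha\,\ell_s^2$; the leftover $\epsilon^2\, b^2$ is of higher order and is dealt with by iterating the procedure, order by order in $\epsilon$.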
Before getting into the details, let us record a few salient points of our method:
\begin{enumerate}
\item Our construction will be perturbative in $\omega\ell_s$, which is a natural parameter for the gradient expansion. We make no assumption about the amplitude of the time dependent perturbation. It therefore is {\it not} an expansion around any given static or stationary background solution of the full higher derivative theory. Consequently, there is no way that the form of our corrections, or their coefficients, could depend on any details of a particular background solution.
\item Note that what we really want to prove is just $\partial_v S_{\text{total}}\geq 0$. The strategy we are employing in fact demands something \emph{much stronger}. In a nutshell, we are demanding:
\begin{itemize}
\item $\partial_v^2 S_{\text{total}} \le 0$, i.e., $\partial_v S_{\text{total}}$ is monotonically decreasing.
\item Assuming $\partial_v S_{\text{total}}$ vanishes as $v\rightarrow\infty$, this implies that $\partial_v S_{\text{total}}$ is non-negative for all finite $v$.
\end{itemize}
\item While this proof-method is standard (owing, in part, to the way the area theorem is proved), we must nevertheless admit that our inability to find such a function is by no means a counter-example to an entropy increase theorem. After all, it is possible for a function to be positive definite, but still not monotonically decreasing. In other words, although the monotone decrease of $\partial_v S_{\text{total}}$ is a sufficient condition for entropy increase, it is definitely not a necessary one. This fact is a bit frustrating since it precludes us from making strong statements about the nature of low energy effective field theory. We cannot rule out a class of higher derivative terms, and thus place constraints on the low energy limits of quantum gravity, just on the basis that our proof-method fails to construct an entropy function for them.
\end{enumerate}
%
\section{Analysis for Gauss-Bonnet theory}
\label{sec:GaussBonnet}
We will begin our discussion with the Gauss-Bonnet theory,
\begin{equation}
\begin{split}
I &=\frac{1}{4\pi} \, \int d^dx\, \sqrt{-g} \left(R + \alpha_2\, \ell_s^2\, \mathcal{L}_2 + \mathcal{L}_{matter}\right)\,,\\
\mathcal{L}_2 &= R^2 - 4 R_{\mu\nu} R^{\mu\nu} +R_{\mu\nu\alpha\beta} R^{\mu\nu\alpha\beta}=\delta^{\mu_1 \nu_1\mu_2 \nu_2}_{\rho_1 \sigma_1\rho_2 \sigma_2}~ {{{R^{\rho_1}}_{\mu_1}}^{\sigma_1}}_{\nu_1}\; {{{R^{\rho_2}}_{\mu_2}}^{\sigma_2}}_{\nu_2} \,.
\end{split}
\label{GB1}
\end{equation}
%
The generalized Kronecker symbol is defined in \S\ref{sec:summaryresult} (below Eq.~\eqref{scor}), and its use will help us to generalize easily to Lovelock gravity in \S\ref{sec:lovelock}.
As explained in \S\ref{sec:intro}, we would like to construct an expression for total entropy $S_{\text{total}}$ such that in equilibrium it reduces to Wald entropy, $S_\text{Wald}$, and once we move away from equilibrium the inequality Eq.~\eqref{restrtheta} is satisfied. We work in the coordinate system described in \S\ref{sec:geom}. This will suffice to illustrate the general ideas, though we could equally phrase the computation in a more natural geometric language (as will be clear from the final answer).
We will separate the discussion into two parts. Firstly, in \S\ref{sec:deriwald}, we shall compute the Wald entropy for Gauss-Bonnet action and we shall see that it does not satisfy Eq.~\eqref{restrtheta} once we depart from equilibrium. From this analysis, we shall conclude that we need to correct the Wald entropy. Then in the second part, in \S\ref{sec:scorrGB}, we shall give our proposal for the correction, and show how, in the course of time evolution, the corrected entropy function does satisfy the inequality Eq.~\eqref{restrtheta}, provided a very particular total derivative term vanishes.
We will then explain the circumstances in which this obstruction term does not pose a problem and the lessons to be gleaned from it.
\subsection{Part I: Time variation of Wald entropy}
\label{sec:deriwald}
Let us first compute the time derivative of Wald entropy along spatial sections of the horizon, assuming that the black hole is dynamical (i.e., $\partial_v$ is not a Killing vector). The expression for the equilibrium Wald entropy follows from Eq.~\eqref{GB1}, using the technique of \cite{Wald:1993nt, Iyer:1994ys}. This reads:
\begin{equation}
S_\text{Wald}= \int_{\Sigma_v} d^{d-2} x\, \sqrt{h}\; (1+\alpha_2 \,\ell_s^2\; \mathfrak{s}_{2,\text{eq}}) \,.
\label{enteq}
\end{equation}
For the Gauss-Bonnet theory this evaluates to (see e.g., \cite{Jacobson:1993xs})
\begin{equation}
\mathfrak{s}_{2,\text{eq}} = 2 \, \delta^{A_1 B_1}_{C_1 D_1}\; {{{R^{C_1}}_{A_1}}^{D_1}}_{B_1}
\stackrel{\text{equilibrium}}{=} \; \; 2 \, \delta^{A_1 B_1}_{C_1 D_1}\; {{{{\cal R}^{C_1}}_{A_1}}^{D_1}}_{B_1}.
\label{defrho1}
\end{equation}
In the above equation, all the Riemann tensors are projected on the spatial sections $\Sigma_v$. For stationary solutions, the projection will result in the intrinsic Riemann tensor of $\Sigma_v$\footnote{ It is important to distinguish between an intrinsic quantity, denoted by an italicized symbol like $\mathcal{R}_{A_1B_1 C_1 D_1}$, and the non-intrinsic $R_{A_1B_1 C_1 D_1}$ which depends on the full spacetime data.}, and therefore the second equality follows.
We recall that in equilibrium, $\partial_v$ is a Killing vector field, which guarantees that motion between two slices $\Sigma_v$ and $\Sigma_{v'}$ is achieved by Lie drag along a symmetry direction, which results in no changes. Let us now turn to the case where $\partial_v$ is no longer a Killing vector. The second equality of Eq.~\eqref{defrho1} does not hold anymore and there is always an ambiguity of how to lift the equilibrium Wald entropy to the non-equilibrium situation \cite{Jacobson:1993vj}.
However, $\mathfrak{s}_{2,\text{eq}}$ is only the starting point of our construction. In a time-dependent situation we anyway have to add some corrections to $\mathfrak{s}_{2,\text{eq}}$; these corrections will necessarily vanish in equilibrium. While the form of the correction depends on the starting point, it is clear that we can w.l.o.g.\ start with Eq.~\eqref{defrho1}. Having obtained a correction from this particular starting point, we can always finesse the end result, to begin from a different initial guess for the entropy. As explained in \S\ref{sec:summaryresult}, we are therefore effectively starting with the Dong functional, which we know satisfies the second law to linear order in amplitudes, thanks to the result of \cite{Wall:2015raa}.
With this in mind, let us assume, as our ansatz,
that $\mathfrak{s}_{2,\text{eq}}$ is given by the last expression of Eq.~\eqref{defrho1} even outside equilibrium, i.e., in $\mathfrak{s}_{2,\text{eq}}$ all the Riemann tensors will be intrinsic:
\begin{equation}
\mathfrak{s}_{2,\text{eq}} \equiv 2 \, \delta^{A_1 B_1}_{C_1 D_1}~ {{{{\cal R}^{C_1}}_{A_1}}^{D_1}}_{B_1}\,.
\label{defrho}
\end{equation}
Following Eq.~\eqref{deftheta}, we define the equilibrium value of $\Theta$ through the temporal derivative of the Wald entropy:
\begin{equation}
\frac{\partial S_\text{Wald}}{\partial v} =\int_{\Sigma_v}\, d^{d-2} x\, \sqrt{h}~\Theta_{2,\text{eq}} \,.
\label{defthetaeq}
\end{equation}
The explicit expression for $\Theta_{2,\text{eq}}$ in the present case takes the form
\begin{equation}
\Theta_{2,\text{eq}} = \mathcal{K}\,( 1 + \alpha_2 \,\ell_s^2\; \mathfrak{s}_{2,\text{eq}}) - 2\, \alpha_2 \,\ell_s^2\; \left(\frac{\delta \mathfrak{s}_{2,\text{eq}}}{\delta h^{AB}}\right)\mathcal{K}^{AB} \,,
\label{eqtheta1}
\end{equation}
where $\mathcal{K}_{AB}$ is defined in Eq.~\eqref{eq:defB}, and indices are raised/lowered by $h_{AB}$, since we view all data as being defined on $\Sigma_v$.
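For orientation, Eq.~\eqref{eqtheta1} follows by direct differentiation of Eq.~\eqref{enteq}: using $\partial_v \sqrt{h}= \sqrt{h}\,\mathcal{K}$ and $\partial_v h^{AB} = -2\,\mathcal{K}^{AB}$,
$$\frac{1}{\sqrt{h}}\,\partial_v\Big[\sqrt{h}\,\big(1+\alpha_2\,\ell_s^2\; \mathfrak{s}_{2,\text{eq}}\big)\Big] = \mathcal{K}\,\big(1+\alpha_2\,\ell_s^2\; \mathfrak{s}_{2,\text{eq}}\big) - 2\,\alpha_2\,\ell_s^2\,\left(\frac{\delta \mathfrak{s}_{2,\text{eq}}}{\delta h^{AB}}\right) \mathcal{K}^{AB}\,,$$
where the variation acts on all occurrences of the metric, including those inside the intrinsic curvature (cf.\ Eq.~\eqref{drdh1}).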
Next we compute
\begin{equation}
\frac{\partial \mathfrak{s}_{2,\text{eq}} }{\partial h^{AB} }= 2\,\delta^{A_1 B_1 }_{C_1 D_1} \; \frac{\partial h^{D_1 F_1} }{\partial h^{AB} } {\mathcal{R}^{C_1}}_{A_1 F_1 B_1} + 2\, \delta^{A_1 B_1 }_{C_1 D_1}\; h^{D_1 F_1} \; \frac{\partial}{\partial h^{AB} }\left( {\mathcal{R}^{C_1}}_{A_1 F_1 B_1}\right) \,.
\label{drdh1}
\end{equation}
The second term on the r.h.s.\ is a total derivative. As long as the horizon is compactly generated, i.e., the spatial sections $\Sigma_v$ are compact, we can safely discard this term. For non-compactly generated horizons (e.g., planar AdS black holes) we should impose suitable fall-off conditions along the spatial directions. In any event, dropping this term, we find that we can write:
\begin{equation}
\Theta_{2,\text{eq}} = \mathcal{K}+2 \,\alpha_2 \, \ell_s^2\;
\mathcal{K}_A^C ~\delta^{A A_1 B_1}_{C C_1 D_1}~ {{{\mathcal{R}^{C_1}}_{A_1}}^{D_1}}_{B_1}\,.
\label{eqtheta}
\end{equation}
Taking another time derivative we finally arrive at:
\begin{equation}
\partial_{v} \Theta_{2,\text{eq}} = \underbrace{\partial_{v} \mathcal{K} }_{\text{Term 1}} +
\underbrace{2 \,\alpha_2 \, \ell_s^2\ \partial_{v} \mathcal{K}_A^C ~\delta^{A A_1 B_1}_{C C_1 D_1}~{{{\mathcal{R}^{C_1}}_{A_1}}^{D_1}}_{B_1}}_{\text{Term 2}} + \underbrace{2\, \alpha_2 \, \ell_s^2\ \mathcal{K}_A^C ~\delta^{A A_1 B_1}_{C C_1 D_1}~\partial_{v} {{{\mathcal{R}^{C_1}}_{A_1}}^{D_1}}_{B_1}}_{\text{Term 3}}
\label{reldlth}
\end{equation}
We will now explicitly evaluate each of the three terms on the r.h.s of Eq.~\eqref{reldlth}. The reader is directed to Appendix \ref{appdtlcalc} for further details of this calculation (and also for the intermediate steps leading to Eq.~\eqref{reldlth} itself). We find
\begin{subequations}
\begin{align}
\text{Term 1} &= -\mathcal{K}^{AB}\mathcal{K}_{AB} \,- T_{vv} -2\, \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2} \left( \frac{\partial \mathcal{K}^{D_1}_{A_1} }{\partial v} + \mathcal{K}_{A_1C}\mathcal{K}^{D_1C} \right) {{{{R^{C_2}}_{A_2}}^{D_2}}_{B_2}}
\nonumber \\
&\qquad +\; 4 \, \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~ \nabla_{A_2} \mathcal{K}^{D_1}_{A_1}~\nabla^{D_2}\mathcal{K}^{C_2}_{B_2} \,,
\label{cont1} \\
\text{Term 2} &= 2 \, \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~ \partial_{v} \mathcal{K}^{D_1}_{A_1} ~{{{\mathcal{R}^{C_2}}_{A_2}}^{D_2}}_{B_2} \,,
\label{cont2} \\
\text{Term 3} &= -4 \, \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2} \left[ \mathcal{K}^{D_1}_{A_1} \mathcal{K}^{D_2 F_2} {\mathcal{R}^{C_2}}_{A_2F_2 B_2} +\nabla_{A_2} \mathcal{K}^{D_1}_{A_1}\nabla^{D_2}\mathcal{K}^{C_2}_{B_2} \right] + \nabla_A \mathcal{X}^A\,.
\label{cont3}
\end{align}
\end{subequations}
where we introduce the \emph{obstruction term}:
\begin{equation}
\mathcal{X}^{A} = 4 \, \alpha_2\, \ell_s^2\;\delta^{A_1 A B_2}_{D_1 C_2 D_2} \; \mathcal{K}^{D_1}_{A_1}\nabla^{D_2}\mathcal{K}^{C_2}_{B_2}
\label{eq:XGBdef}
\end{equation}
We will explain the issues with this term at the end of our analysis, which should clarify its implications. It is worth remarking that Term 1 has been manipulated using the equations of motion of the Gauss-Bonnet theory; in particular, we need the component of the equation projected onto $\mathcal{H}^+$ (the $vv$-component), see Eq.~\eqref{gbeom2}.
Adding up the three terms we obtain
\begin{align}
\partial_{v} \Theta_{2,\text{eq}} &= -\mathcal{K}^{AB}\mathcal{K}_{AB} - T_{vv} +\nabla_A \mathcal{X}^A
\nonumber \\
& -2 \,\alpha_2\, \ell_s^2 \;\delta^{A_1 A_2 B_2}_{D_1 C_2 D_2} \bigg\{
\partial_{v} \mathcal{K}^{D_1}_{A_1} ~ \big[{{{R^{C_2}}_{A_2}}^{D_2}}_{B_2} -
{{{\mathcal{R}^{C_2}}_{A_2}}^{D_2}}_{B_2}\big] +
\mathcal{K}_{A_1C}\mathcal{K}^{D_1C} {{{R^{C_2}}_{A_2}}^{D_2}}_{B_2}
\nonumber \\
& \hspace{1.5in} -2\, \mathcal{K}^{D_1}_{A_1} \mathcal{K}^{D_2 F_2} {\mathcal{R}^{C_2}}_{A_2F_2 B_2}
\bigg\} \,.
\label{reldlth1}
\end{align}
The last term on the r.h.s of Eq.~\eqref{reldlth1} can be further simplified using the following geometric identity
\begin{equation}
\delta^{A_1 A_2 B_2}_{D_1 C_2 D_2} \big[{{{R^{C_2}}_{A_2}}^{D_2}}_{B_2} - {{{\mathcal{R}^{C_2}}_{A_2}}^{D_2}}_{B_2}\big] = -2\, \overline{\mathcal{K}}^{D_2}_{A_2}\, \mathcal{K}^{C_2}_{B_2} \; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}\,.
\label{intcurid}
\end{equation}
In writing this expression we have introduced the second extrinsic curvature of the codimension-2 surface $\Sigma_v$, this time along the normal $n^\mu = (\partial_r)^\mu$ (the first, $\mathcal{K}_{AB}$, was defined earlier in Eq.~\eqref{eq:defB}):
\begin{equation}
\overline{\mathcal{K}}_{AB} = \frac{1}{2}\, n^\mu D_\mu h_{AB} = \frac{1}{2}\, \partial_r h_{AB}
\label{eq:defKa}
\end{equation}
We give a detailed derivation of Eq.~\eqref{intcurid} in Appendix~\ref{appallcurv}.
There we also list the curvature tensors (evaluated on the horizon) for the metric coordinatized as in Eq.~\eqref{metricintro}.
Finally, when all the dust settles, we find that Eq.~\eqref{reldlth1} can be written in the following form
\begin{align}
&\partial_{v} \Theta_{2,\text{eq}} = \underbrace{-\mathcal{K}^{AB}\mathcal{K}_{AB} - T_{vv}}_{\text{L1}}
+ \underbrace{\nabla_A \mathcal{X}^A}_{\text{O}}
\nonumber \\
&\underbrace{-2\, \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~ \mathcal{K}_{A_1C}\mathcal{K}^{D_1C} {{{R^{C_2}}_{A_2}}^{D_2}}_{B_2} -4\, \alpha_2 \, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~ \mathcal{K}^{D_1}_{A_1} \mathcal{K}^{D_2 F_2} {\mathcal{R}^{C_2}}_{A_2 F_2 B_2}}_{\text{L2}}
\nonumber \\
& \underbrace{+4\, \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~\overline{\mathcal{K}}^{D_2}_{A_2} \mathcal{K}^{C_2}_{B_2} ~ \partial_{v} \mathcal{K}^{D_1}_{A_1}}_{\text{L3}}
\label{reldlth1a}
\end{align}
We now want to show that the r.h.s. of Eq.~\eqref{reldlth1a} is negative semidefinite.
Let us analyze this expression term-wise to get some intuition. The first term being a perfect square is negative semidefinite owing to the explicit sign.
So is the second term, once we assume the NEC Eq.~\eqref{eq:NEC}. This is indeed unsurprising, since these two terms are the only ones that show up in the proof of the area theorem, cf., \S\ref{sec:strat}. The remaining terms in the second and third lines are of order ${\cal O}(\ell_s^2)$, and naively, each of them could have either sign. The obstruction term (O) we set aside for now.
Let us unpack the $\mathcal{O}(\ell_s^2)$ terms. We will first show that the terms in the second line of Eq.~\eqref{reldlth1a} are always negligible compared to those in the first line. To appreciate this, consider grouping the terms, to write the first two lines as
\begin{equation}
\begin{split}
\text{L1+ L2}&=- \mathcal{K}_{P_1 P_2} \mathcal{K}_{Q_1 Q_2}\left[h^{P_1Q_1} h^{P_2 Q_2} + \alpha_2\, \ell_s^2\; M^{P_1Q_1P_2Q_2}\right] - T_{vv} \\
M^{P_1Q_1P_2Q_2} &= 2\,\delta^{A_1 A_2 B_2}_{D_1 C_2 D_2} \bigg( \delta^{P_1}_{A_1}h^{Q_1D_1}h^{Q_2P_2} {{{R^{C_2}}_{A_2}}^{D_2}}_{B_2}+2\, \delta^{P_2}_{A_1}h^{P_1D_1}h^{D_2Q_1} {{{R^{C_2}}_{A_2}}^{Q_2}}_{B_2}
\bigg) \,.
\end{split}
\label{arg1}
\end{equation}
The r.h.s. of Eq.~\eqref{arg1} can be viewed as a quadratic form, built out of $h_{AB}$ and the curvature tensor, defined on the $(d-2)^2$-dimensional space of two-tensors $\mathcal{K}_{AB}$. Let us pass to a coordinate gauge where $h_{AB} = \delta_{AB}$, so that the quadratic form in question is a combination of the identity and another symmetric matrix $M$. A further linear orthogonal basis transformation brings $M$ to its diagonal form $\text{diag}\{\lambda_M^{(1)}, \cdots, \lambda_M^{((d-2)^2)}\}$, without affecting the identity part. In sum, by a bit of linear algebra we can diagonalize the quadratic form, whose components then behave as:
$$ 1+ \alpha_2\, \ell_s^2\; \lambda_M^{(k)} \,, \qquad k = 1, 2, \cdots, (d-2)^2\,.$$
Generically the $\lambda_M^{(k)}$ are all of $\mathcal{O}(1)$, but the pre-multiplicative factor of $\ell_s^2$ suppresses them in our gradient expansion. This ensures that the quadratic form is close to the identity in this basis, and hence we conclude that the second term (L2) remains sub-dominant to the leading piece (L1).
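Concretely (a schematic bound, with $\lambda_{\rm max}$ denoting the largest eigenvalue magnitude, $\lambda_{\rm max} \equiv \max_k \vert\lambda_M^{(k)}\vert$), in the diagonal basis one has
$$\text{L1}+\text{L2} \;\le\; -\big(1- \vert\alpha_2\vert\,\ell_s^2\,\lambda_{\rm max}\big)\, \mathcal{K}_{AB}\,\mathcal{K}^{AB} - T_{vv} \;\le\; 0\,,$$
which holds whenever $\vert\alpha_2\vert\,\ell_s^2\,\lambda_{\rm max} <1$, i.e., precisely within the regime of validity of the gradient expansion.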
This then leaves us with the term L3. While it too is generally suppressed, being of $\mathcal{O}(\ell_s^2)$, it differs from the terms in L1 by involving an explicit factor of $\partial_v \mathcal{K}_C^D$. This is dangerous; there are configurations where $\mathcal{K}_A^B$ becomes anomalously small locally, without a compensating suppression of $\partial_v \mathcal{K}_C^D$. It is even possible for this term to end up dominating with the wrong sign, spoiling the monotone decrease of $\partial_v S_\text{total}$. In other words, the problematic regime is one where locally
\begin{equation}
\begin{split}
&\mathcal{K}^{B_2}_{C_2}\sim \alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~\overline{\mathcal{K}}^{D_2}_{A_2} ~ \partial_{v} \mathcal{K}^{D_1}_{A_1} \,,\\
\text{or},~~
& \mathcal{K}_{AB}\mathcal{K}^{AB}\sim 4 \,\alpha_2\, \ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2} \nabla_{A_2} \left( \mathcal{K}^{D_1}_{A_1}\nabla^{D_2}\mathcal{K}^{C_2}_{D_2} \right) \,.
\end{split}
\label{compterm1}
\end{equation}
Should this come to pass, then L3 could potentially dominate over L1 in Eq.~\eqref{reldlth1a} and change the overall sign of the expression. This would lead to a violation of Eq.~\eqref{restrtheta}. To ensure positivity in such situations we need to correct $S_\text{Wald}$, which we do by adding a correction term
$S_\text{cor}$.
The contributions to $S_\text{cor}$ will be engineered to be such that terms of the form L3, as well as potential non-negligible contributions from $S_\text{cor}$ itself, combine with L1 of Eq.~\eqref{reldlth1a} to produce a sum of squares with a negative semidefinite coefficient. Furthermore, $S_\text{cor}$ will be required to vanish for a stationary geometry, so that each of its terms, by construction, carries at least one $\partial_v$.
A couple of comments about our strategy for determining $S_\text{cor}$ are in order:
\begin{enumerate}
\item We make no claim regarding the uniqueness of $S_\text{cor}$. While we have found a particular choice, based on its efficacy in generating all order corrections (in $\alpha_2$), other choices are possible. Indeed, as presaged in \S\ref{sec:intro} there is no requirement for the entropy function to be unique.
\item Our construction has an obstruction in the form of the total derivative term in Eq.~\eqref{reldlth1a} (labeled as O). We have not been able to bound this term, so we can make a statement in the circumstances where this term vanishes. We will say more about this in due course (see also \S\ref{sec:discussion}).
\item We will also see some curious similarities between the analysis herein and that in hydrodynamics \cite{Bhattacharyya:2013lha,Bhattacharyya:2014bha,Haehl:2015pja}. This is despite the perturbative scheme being different, as well as, the fact that one deals with a local entropy current instead of the total entropy in the latter context.
\end{enumerate}
\subsection{Part II: Temporal gradient corrections to Wald entropy }
\label{sec:scorrGB}
We now turn to the determination of $S_\text{cor}$ for the Gauss-Bonnet theory. The final result has already been quoted in Eq.~\eqref{scor}, which we rewrite here for convenience:
\begin{equation}
\begin{split}
S_{\text{total}} &= S_\text{Wald} + S_\text{cor}\\
S_\text{Wald}&=
\int_{\Sigma_v} \, d^{d-2} x\, \sqrt{h} \left(1+ \alpha_2\, \ell_s^2\; \mathfrak{s}_{2,\text{eq}}\right) \,, \quad
\mathfrak{s}_{2,\text{eq}} \equiv 2\, \delta^{A_1 B_1}_{C_1 D_1}~ {{{{\cal R}^{C_1}}_{A_1}}^{D_1}}_{B_1} \\
S_\text{cor} &=
\int_{\Sigma_v} \, d^{d-2} x\, \sqrt{h} ~\mathfrak{s}_{2,\text{cor}} \,, \quad
\mathfrak{s}_{2,\text{cor}}=\sum_{n=0}^{\infty} \, \kappa_n \, \ell_s^{2n}\; \partial_{v}^n \HnG{0}^{A}_{B}~ \partial_{v}^n \HnG{0}^{B}_{A} \,.
\end{split}
\label{scorall}
\end{equation}
We have written the correction terms in a gradient expansion, where the perturbation parameter is $\ell_s \partial_v$ as explained earlier. The tensors $\HnG{0}^A_B$ are defined below. The coefficients $\kappa_n$ are taken to be $\mathcal{O}(1)$ numbers. Our task is to infer whether a suitable choice of these can be made to render $\partial_v \Theta \leq 0$.
In the process, we have introduced some notation which we will adhere to in the future to keep the expressions from getting cluttered. We define:
\begin{equation}
\begin{split}
\HnG{-1}^B_A & \equiv \ell_s^2\, \mathcal{K}^B_A\\
\HnG{0}^A_B &= \alpha_2 \, \ell_s^2 \, \delta^{ A A_1 A_2}_{ BB_1 B_2}\; \mathcal{K}^{B_1}_{A_1}~\overline{\mathcal{K}}^{B_2}_{A_2} \,, \\
\HnG{n}^A_B &= \partial_v^n \HnG{0}^A_B \,.
\end{split}
\label{eq:Hndef}
\end{equation}
Recalling that $\mathcal{K}^A_B$ involves an explicit $v$-derivative, cf.~Eq.~\eqref{eq:defB}, it is $\mathcal{O}(\omega)$ in the gradient counting, as is $\overline{\mathcal{K}}_{AB}$. By suitably supplying powers of $\ell_s$ we have ensured that $\HnG{n}$ has mass dimension $n$. This uniform notation is useful in the perturbation scheme, since $\ell_s^n \HnG{n}$ contributes at $n^{\rm th}$ order in our gradient perturbation theory. Indeed, note that we can write:
\begin{equation}
\mathfrak{s}_{2,\text{cor}} = \sum_{n=0}^\infty\, \kappa_n \, \ell_s^{2n}\, \HnG{n}^2 \,, \qquad \HnG{n}^2 \equiv \HnG{n}^A_B \, \HnG{n}^B_A \,.
\end{equation}
With these definitions in place, we are ready to do the computation. We need
\begin{equation}
\Theta = \Theta_{2,\text{eq}} + \Theta_{2,\text{cor}}\,, \qquad \Theta_{2,\text{cor}} =\frac{1}{\sqrt h} \partial_v\bigg({\sqrt{h}}~\mathfrak{s}_{2,\text{cor}}\bigg)
\label{thetacor}
\end{equation}
In Eq.~\eqref{eqtheta} and Eq.~\eqref{reldlth1a} we already have the contributions to $\Theta_{2,\text{eq}}$ and $\partial_v\Theta_{2,\text{eq}}$, respectively. All that remains is obtaining similar expressions for $\Theta_{2,\text{cor}}$ and its time derivative. At an abstract level this is easy, for by explicit differentiation:
\begin{equation}
\begin{split}
\partial_v \Theta_{2,\text{cor}} = \mathfrak{s}_{2,\text{cor}}\; \partial_v \mathcal{K}+ \mathcal{K} \; \partial_v \mathfrak{s}_{2,\text{cor}} + \partial_v^2 \mathfrak{s}_{2,\text{cor}} \,.
\end{split}
\label{GBthetaf}
\end{equation}
Using our ansatz for $\mathfrak{s}_{2,\text{cor}}$ in Eq.~\eqref{scorall}, each term in Eq.~\eqref{GBthetaf} can be computed separately:
\begin{equation}
\begin{split}
\mathcal{K}\; \partial_{v} \mathfrak{s}_{2,\text{cor}} &=
2 \, \mathcal{K} \, \sum_{n=0}^{\infty} \kappa_n\, \ell_s^{2n}\;
\HnG{n+1}^{A}_{B} \; \HnG{n}^{B}_{A} \\
\mathfrak{s}_{2,\text{cor}}\, \partial_{v}\mathcal{K} &=
\partial_{v}\mathcal{K}^C_C\, \sum_{n=0}^{\infty} \kappa_n\, \ell_s^{2n}\; \HnG{n}^2 \\
\partial_{v}^2 \mathfrak{s}_{2,\text{cor}} &=
2 \, \sum_{n=0}^{\infty}\kappa_n\, \ell_s^{2n}\; \bigg\{ \HnG{n+1}^2 + \HnG{n}^{A}_{B} \; \HnG{n+2}^{B}_{A} \bigg\} \,.
\end{split}
\label{relscorall}
\end{equation}
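The identities in Eq.~\eqref{relscorall} are simply repeated applications of the Leibniz rule to the trace $\HnG{n}^A_B \HnG{n}^B_A$. As a sanity check, here is a minimal sympy sketch, with a generic $2\times 2$ matrix of arbitrary functions of $v$ standing in for $\HnG{0}^A_B$:

```python
import sympy as sp

v = sp.symbols('v')
# generic 2x2 stand-in for the tensor H(0)^A_B, with arbitrary entries f_ij(v)
H0 = sp.Matrix(2, 2, lambda i, j: sp.Function(f'f{i}{j}')(v))
H = lambda n: H0 if n == 0 else sp.diff(H0, v, n)   # H(n) = d^n/dv^n H(0)

s = (H(0) * H(0)).trace()        # the n = 0 term of s_cor: H(0)^A_B H(0)^B_A

# first derivative: d_v s = 2 H(1)^A_B H(0)^B_A
assert sp.expand(sp.diff(s, v) - 2 * (H(1) * H(0)).trace()) == 0
# second derivative: d_v^2 s = 2 ( H(1)^2 + H(0)^A_B H(2)^B_A )
assert sp.expand(sp.diff(s, v, 2)
                 - 2 * ((H(1) * H(1)).trace() + (H(0) * H(2)).trace())) == 0
```

The same two identities, applied termwise with the weights $\kappa_n\,\ell_s^{2n}$, reproduce each line of Eq.~\eqref{relscorall}.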
Putting together the contributions from Eq.~\eqref{relscorall} and Eq.~\eqref{reldlth1a} we finally obtain an expression for $\partial_v\Theta$:
\begin{equation}
\begin{split}
\frac{d\Theta}{d v}
& =
-\mathcal{K}_{AB} \mathcal{K}^{AB} -T_{vv} + \nabla_{A} \mathcal{X}^{A}
\\
&\quad
-4 \, \ell_s^{-2}\; \HnG{1}^{A_2}_{C_2}\, \HnG{-1}^{C_2}_{A_2} +
4 \,\tilde{\kappa}_0 \, \ell_s^{-2}\; \HnG{0}^2
\\
& \quad
+ \sum_{n=0}^\infty 2\, \kappa_n \, \ell_s^{2n} \bigg\{
\HnG{n+1}^2 + \HnG{n}^{A}_{B} \; \HnG{n+2}^{B}_{A}
\bigg\}
+ (\partial_v \Theta)_\text{neg}
\end{split}
\label{dthall1}
\end{equation}
%
where
\begin{equation}
\begin{split}
(\partial_v \Theta)_\text{neg} &
= - 4\, \tilde{\kappa}_0 \, \ell_s^{-2}\, \HnG{0}^2 -2 \, \alpha_2 \, \ell_s^2\;
\delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}
\bigg\{ \mathcal{K}_{A_1C}\mathcal{K}^{D_1C} \, {{{R^{C_2}}_{A_2}}^{D_2}}_{B_2}
\\
& \hspace{1in}+2\, \mathcal{K}^{D_1}_{A_1} \mathcal{K}^{D_2 F_2} {\mathcal{R}^{C_2}}_{A_2 F_2 B_2} -
2 \, \mathcal{K}^{D_1}_{A_1} \mathcal{K}^{C_2}_{A_2} \partial_{v}\overline{\mathcal{K}}^{D_2}_{B_2} \bigg\} \, \\
& + \partial_{v}\mathcal{K}^C_C\, \sum_{n=0}^{\infty} \kappa_n\, \ell_s^{2n}\; \HnG{n}^2 \\
& + 2 \, \mathcal{K} \, \sum_{n=0}^{\infty} \kappa_n\, \ell_s^{2n}\;
\HnG{n+1}^{A}_{B} \; \HnG{n}^{B}_{A}
\end{split}
\label{pvTignore}
\end{equation}
Various simple algebraic manipulations have been performed in the process of getting to Eq.~\eqref{dthall1}. Firstly, we simplified the L3 term of Eq.~\eqref{reldlth1a} using the identity
\begin{equation}
\begin{split}
-\text{L3} &= 4\,\alpha_2 \ell_s^2 \left( \partial_{v} \mathcal{K}^{D_1}_{A_1} \right) \, \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}~\overline{\mathcal{K}}^{D_2}_{B_2} \mathcal{K}^{C_2}_{A_2}
\\
&= 4 \, \HnG{1}^{A_2}_{C_2}\; \mathcal{K}^{C_2}_{A_2} -\; 4 \, \alpha_2 \,\ell_s^2\; \delta^{A_1 A_2 B_2}_{D_1 C_2 D_2}\, \mathcal{K}^{D_1}_{A_1} \, \mathcal{K}^{C_2}_{A_2} \, \partial_{v}\overline{\mathcal{K}}^{D_2}_{B_2}\,.
\end{split}
\label{simpli}
\end{equation}
We then added and subtracted a term proportional to $\tilde{\kappa}_0$ in the expression. One part we have left explicit in Eq.~\eqref{dthall1}, and the other part we subsumed into $ (\partial_v \Theta)_\text{neg} $. This will be helpful for writing the final expression in a neat form.
Let us understand some of the structure here, especially the rationale for introducing
$ (\partial_v \Theta)_\text{neg} $. We claim these are terms that can be neglected in our analysis as they will never dominate in the $(\omega\ell_s )$ perturbation theory over terms that we explicitly retain in Eq.~\eqref{dthall1}. The rationale for dropping these is as follows:
%
\begin{enumerate}
\item All terms in the first two lines of Eq.~\eqref{pvTignore} have at least two powers of $\mathcal{K}_{AB}$ multiplying other tensors. As such, they are negligible compared to the leading $\mathcal{K}_{AB} \mathcal{K}^{AB}$ for reasons elaborated in \S\ref{sec:deriwald}, and so can safely be dropped.
\item The terms in the third line proportional to $\partial_v \mathcal{K} $ are controlled as follows. Let us contrast them against the term proportional to $\HnG{n+1}^2$ in Eq.~\eqref{dthall1}. Juxtaposing these two terms, we would have a contribution
\begin{equation}
\sum_{n=1}^\infty\, \, \ell_s^{2n-2}
\left(2 \, \kappa_{n-1} + \kappa_n\, \ell_s^2 \,\partial_v\mathcal{K} \right) \HnG{n}^2 + \kappa_0\,\partial_v\mathcal{K} \, \HnG{0}^2 \,.
\end{equation}
In the infinite sum, each term is dominated by the $\kappa_{n-1}$ term which is the one we have retained. The isolated piece involving $ \kappa_0\,\partial_v\mathcal{K} \, \HnG{0}^2$ has again two powers of $\mathcal{K}_A^B$ in the tensor $\HnG{0}$, and therefore is negligible in our expansion scheme.
\item The terms in the last line which contain $\mathcal{K}\; \HnG{n+1}^A_B \,\HnG{n}_A^B$ need more care, but they too can in the end be neglected. To appreciate this let us contrast them against the first term, $-\mathcal{K}_{AB} \, \mathcal{K}^{AB}$. Isolating just these two terms we have the sub-expression $T_1$ of Eq.~\eqref{dthall1}
\begin{equation}
T_1 \equiv- \mathcal{K}^{B}_{A} \left[ \mathcal{K}^A_B - \delta^A_B \, \sum_{n=0}^{\infty} \,\kappa_n\, \ell_s^{2n} \, \HnG{n+1}^C_D \, \HnG{n}^D_C \right]
\end{equation}
It is then clear that a term in the infinite sum will dominate only when the leading $\mathcal{K}_A^B$ becomes of the same order as the term in the parentheses. At this point we can combine them and learn that
\begin{equation}
\begin{split}
&\mathcal{K}^A_B \sim \delta^A_B\, \kappa_n \, \ell_s^{2n}\, \HnG{n+1}^C_D \, \HnG{n}^D_C \\
& \Longrightarrow
\; T_1 \sim \kappa_n^2 \, \ell_s^{4n} \, \left(\HnG{n+1}^A_B \, \HnG{n}^B_A\right)^2
\end{split}
\end{equation}
However, now this term is subdominant to the $\HnG{n}^2$ term which we have retained, and thus may be safely neglected.
\end{enumerate}
The remaining terms, which we have left explicit in Eq.~\eqref{dthall1}, are the ones we should examine closely. While some are naively suppressed in powers of
$\omega\ell_s$, they have the potential to reverse the sign of the entropy production by dominating over the leading order terms. Let us collect these terms into a useful form, which will enable us to bring them to a sum of perfect squares. We use Eq.~\eqref{eq:Hndef}
to write the leading $\mathcal{K}_{AB}\, \mathcal{K}^{AB}$ contribution in terms of $\HnG{-1}$.
Dropping $ (\partial_v \Theta)_\text{neg} $, the time variation of the entropy production is now given by
%
\begin{equation}
\begin{split}
&\frac{d\Theta}{d v} =
-T_{vv} + \nabla_{A} \mathcal{X}^A - \ell_s^{-4}\, \HnG{-1}^{A}_{B}\HnG{-1}^{B}_{A} -
4 \, \ell_s^{-2}
\, \HnG{1}_{B}^{A} \HnG{-1}^{B}_{A}\\
& + 4\,\tilde{\kappa}_0\, \ell_s^{-2}\, \HnG{0}^2 +2 \sum_{n=1}^{\infty} \kappa_{n-1} \, \ell_s^{2n-2}
\HnG{n}^2 + 2 \sum_{n=0}^{\infty}\kappa_n\, \ell_s^{2n} \HnG{n}^{A}_{B} \HnG{n+2}^{B}_{A}
\end{split}
\label{dthall3}
\end{equation}
Furthermore, we introduce
\begin{equation}
\label{defkaps}
\kappa_{-2} = -\,\frac{1}{2} \,, \qquad \tilde{\kappa}_0 = -1\,, \qquad \kappa_{-1} = -2\,.
\end{equation}
After a bit more manipulation it is possible to cast the final expression for $\partial_{v}\Theta$ as
\begin{equation}
\begin{split}
\frac{d\Theta}{d v}
&= -T_{vv} + \mathcal{J}+ \nabla_{A} \mathcal{X}^A \,
\\
\mathcal{J} &= 2 \sum_{n=-1}^{\infty} \ell_s^{2n-2} \left[
\kappa_{n-1} \, \HnG{n}^2 + \kappa_{n} \, \ell_s^2\, \HnG{n}^{B}_{A} \HnG{n+2}^{A}_{B} \right]\,.
\end{split}
\label{dthallf}
\end{equation}
We have now successfully isolated the contributions of the $\HnG{n}$ into $\mathcal{J}$ defined above. Importantly, in obtaining Eq.~\eqref{dthallf}, we have succeeded in getting a quadratic polynomial in the tensors $\HnG{n}^B_A$. The quadratic form is given by a matrix $\mathcal{M}$, which is band-diagonal in its coefficients, since we have couplings between $\HnG{n}$ and $\HnG{n+2}$ at most (and $n \geq -1$). We have
\begin{itemize}
\item the diagonal entries of $\mathcal{M}$ given by the first term on the r.h.s of $\mathcal{J}$ in Eq.~\eqref{dthallf}, viz., $2\, \kappa_{n-1} \, \ell_s^{2n-2}$,
\item the off-diagonal entries of $\mathcal{M}$ come from the second term on the r.h.s of $\mathcal{J}$ in Eq.~\eqref{dthallf}: each entry equals $\kappa_{n} \, \ell_s^{2n}$, one half of the symmetrized cross-term coefficient $2\kappa_{n} \, \ell_s^{2n}$, for $n = -1,0,1,2,\cdots$.
\end{itemize}
The matrix $\mathcal{M}$ can thus be explicitly constructed:
{\footnotesize
\begin{equation} \label{defmatrix}
\mathcal{M} = \left( \begin{array}{cccccccc}
2 \kappa_{-2} \,\ell_s^{-4} & 0 & \kappa_{-1} \,\ell_s^{-2} & 0 & 0 & 0 & 0 & \cdots \\
0 & 2 \kappa_{-1} \ell_s^{-2} & 0 & \kappa_{0} & 0 & 0 & 0 & \cdots\\
\kappa_{-1} \ell_s^{-2} & 0 & 2 \kappa_{0} & 0 & \kappa_{1} \ell_s^{2} & 0 & 0 & \cdots \\
0 & \kappa_{0} & 0 & 2 \kappa_{1} \ell_s^{2} & 0 & \kappa_{2} \ell_s^{4} & 0 & \cdots \\
0 & 0 & \kappa_{1} \ell_s^{2} & 0 & 2 \kappa_{2} \ell_s^{4} & 0 & \kappa_{3} \ell_s^{6} & \cdots \\
0 & 0 & 0 & \kappa_{2} \ell_s^{4} & 0 & 2 \kappa_{3} \ell_s^{6} & 0 & \cdots \\
0 & 0 & 0 & 0 & \kappa_{3} \ell_s^{6} & 0 & \ddots & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array} \right) .
\end{equation}
}
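Before turning to the choice of couplings, truncations of this band matrix are easy to probe numerically. The following is a minimal sketch with $\ell_s$ set to $1$; the base values are those of Eq.~\eqref{defkaps}, while the choice $\kappa_n = -2^{n+2}$ for $n \geq 0$ is purely illustrative (hypothetical), not a choice made in the text. It checks that the truncated $\mathcal{M}$ has strictly negative eigenvalues:

```python
import numpy as np

N = 12                                # truncation size; row i corresponds to H_{i-1}
ls = 1.0                              # set ell_s = 1 for the numerical check
kappa = {-2: -0.5, -1: -2.0}          # values fixed in Eq. (defkaps)
for n in range(0, N):
    kappa[n] = -(2.0 ** (n + 2))      # hypothetical trial choice for n >= 0

M = np.zeros((N, N))
for i in range(N):
    # diagonal entry for H_n (n = i-1): 2 kappa_{n-1} ls^{2n-2}
    M[i, i] = 2 * kappa[i - 2] * ls ** (2 * i - 4)
    # off-diagonal entry linking H_n and H_{n+2}: kappa_n ls^{2n}
    if i + 2 < N:
        M[i, i + 2] = M[i + 2, i] = kappa[i - 1] * ls ** (2 * i - 2)

assert np.all(np.linalg.eigvalsh(M) < 0)   # the truncation is negative definite
```

Note that the distance-two band structure means $\mathcal{M}$ decouples into two tridiagonal chains (even and odd $n$), which is what makes the completing-the-square analysis tractable.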
As long as we can exhibit a choice of $\kappa_n$ such that the matrix $\mathcal{M}$ defines a negative semidefinite quadratic form, and thus $\mathcal{J} \leq 0$, we can declare victory. This can be done by suitably grouping terms as follows:
\begin{equation}
\begin{split}
\mathcal{J} &= \sum_{n=-2}^\infty \, \ell_s^{2n}\, A_{n} \left[\HnG{n+1} +
\frac {\kappa_{n+1} }{ A_{n}} \, \,\ell_s^2\, \HnG{n+3} \right]^2
\end{split}
\label{dthallf1}
\end{equation}
where $A_{n}$ is defined through a pair of continued fractions\footnote{We would like to thank Dileep Jatkar for useful discussions about this point.}
\begin{equation}
\begin{split}
A_{2p} &= 2\, \kappa_{2p} - { \kappa_{2p-1}^2 \over 2 \kappa_{2p-2} - { \kappa_{2p-3}^2 \over 2 \kappa_{2p-4} -{ \ddots \over \ddots - {\kappa_{1}^2 \over 2 \kappa_{0} -{\kappa_{-1}^2 \over 2 \kappa_{-2}} }}}}\,,\\
A_{2p-1} &= 2 \kappa_{2p-1} - { \kappa_{2p-2}^2 \over 2 \kappa_{2p-3} - { \kappa_{2p-4}^2 \over 2 \kappa_{2p-5} -{ \ddots \over \ddots - {\kappa_{2}^2 \over 2 \kappa_{1} -{\kappa_{0}^2 \over 2 \kappa_{-1}} }}}}\,.
\end{split}
\end{equation}
It is also useful to note that, with $A_{-2} = 2\kappa_{-2}$ and $A_{-1} = 2\kappa_{-1}$ terminating the continued fractions, they satisfy the recursion relation
\begin{equation}
\begin{split}
A_{n} &=2 \kappa_{n} - { \kappa_{n-1}^2 \over A_{n-2}} \,, \qquad \text{for}\; n =0,1,2, \cdots.
\end{split}
\end{equation}
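The equivalence of the quadratic form Eq.~\eqref{dthallf} and the completed-square form Eq.~\eqref{dthallf1}, with the $A_n$ generated by this recursion (seeded by $A_{-2} = 2\kappa_{-2}$ and $A_{-1} = 2\kappa_{-1}$, where the continued fractions terminate), can be verified symbolically. A minimal sympy sketch, truncated by setting modes above $\HnG{K}$ to zero and with $\ell_s = 1$:

```python
import sympy as sp

K = 4   # keep modes H_{-1}, ..., H_K; higher modes are set to zero
kappa = {n: sp.Symbol(f'kappa_{n}') for n in range(-2, K + 1)}
H = {n: (sp.Symbol(f'H_{n}') if n <= K else 0) for n in range(-1, K + 4)}

# A_n from the recursion A_n = 2 kappa_n - kappa_{n-1}^2 / A_{n-2}
A = {-2: 2 * kappa[-2], -1: 2 * kappa[-1]}
for n in range(0, K):
    A[n] = 2 * kappa[n] - kappa[n - 1] ** 2 / A[n - 2]

# quadratic form J of Eq. (dthallf) ...
J_quad = sum(2 * (kappa[n - 1] * H[n] ** 2 + kappa[n] * H[n] * H[n + 2])
             for n in range(-1, K + 1))
# ... and its completed-square form, Eq. (dthallf1)
J_square = sum(A[m] * (H[m + 1] + kappa[m + 1] / A[m] * H[m + 3]) ** 2
               for m in range(-2, K))

assert sp.cancel(sp.expand(J_quad - J_square)) == 0
```

The cancellation works for symbolic $\kappa_n$, line by line in powers of $H$, so the identity is an algebraic consequence of the recursion alone.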
Therefore, the constraints we need are obtained by demanding that
\begin{equation}
\begin{split}
&A_{n} \le 0 \,, \qquad \text{for}\; n \geq -2\,.
\end{split}
\end{equation}
These will suffice to ensure that $ \mathcal{J}\le 0$. Furthermore, once we assume the NEC to guarantee $T_{vv} \ge 0$, we see that $\partial_v \Theta $ in Eq.~\eqref{dthallf} is negative semidefinite, modulo the total derivative term $\nabla_A \mathcal{X}^A$. As long as we can control this term, we have achieved our stated goal of constructing a suitable entropy function for Gauss-Bonnet theory.
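That such a choice of $\kappa_n$ exists is easy to check numerically. In the sketch below the base values are those fixed by Eq.~\eqref{defkaps}, while the choice $\kappa_n = -2^{n+2}$ for $n \geq 0$ is purely illustrative; the recursion then keeps every $A_n$ strictly negative:

```python
# base values fixed by Eq. (defkaps); kappa_n = -2^(n+2) for n >= 0 is an
# illustrative (hypothetical) choice, not one singled out in the text
kappa = {-2: -0.5, -1: -2.0}
for n in range(0, 40):
    kappa[n] = -(2.0 ** (n + 2))

# A_{-2}, A_{-1} terminate the continued fractions; then the recursion takes over
A = {-2: 2 * kappa[-2], -1: 2 * kappa[-1]}        # A_{-2} = -1, A_{-1} = -4
for n in range(0, 40):
    A[n] = 2 * kappa[n] - kappa[n - 1] ** 2 / A[n - 2]

assert all(a < 0 for a in A.values())             # constraints A_n <= 0 satisfied
```

For this particular choice the even chain in fact sits at a fixed point, $A_{2p} = \kappa_{2p}$, while the odd chain converges to $A_{2p-1}/\kappa_{2p-1} \to 1$ from above, so negativity persists for all $n$.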
The total derivative term, which is our obstruction to giving a clean proof, can be re-expressed as follows:
\begin{equation}
\begin{split}
\nabla_A \mathcal{X}^A &= \nabla_A \nabla_B \mathcal{Y}^{AB} \\
\mathcal{Y}^{AB} &= 2\, \alpha_2\, \ell_s^2 \left[ 2\,\mathcal{K}\,\mathcal{K}^{AB}- 2\mathcal{K}^{A}_{C} \mathcal{K}^{BC} -h^{AB} \left((\mathcal{K}^{C}_{C})^2 - \mathcal{K}_{CD} \mathcal{K}^{CD}\right) \right]
\end{split}
\label{expJ0}
\end{equation}
We require this obstruction term, $\nabla_A \mathcal{X}^A $, to be negative semidefinite. An easy way to achieve this is, in fact, to make it vanish. This happens, for instance, if we consider $SO(d-1)$ spherically symmetric solutions, where time evolution preserves the spherical symmetry. Owing to the spatial symmetry, the geometric structures on $\Sigma_v$ are then constrained. In particular, $\mathcal{Y}_{AB} \propto h_{AB}$ and thus the gradient term vanishes.
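As a numerical illustration of this statement, one can substitute the shear-free, pure-expansion ansatz $\mathcal{K}_{AB} = \phi\, h_{AB}$ into Eq.~\eqref{expJ0} and check that $\mathcal{Y}^{AB} \propto h^{AB}$. The explicit coefficient $\left(2(d-2) - 2 - (d-2)(d-3)\right)\phi^2$ used in the assertion is our own evaluation, here for the sample value $d = 6$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6                                # spacetime dimension (sample value)
dm2 = d - 2                          # dim(Sigma_v)
phi = 0.7                            # expansion scalar in K_AB = phi * h_AB

m = rng.normal(size=(dm2, dm2))
h = m @ m.T + dm2 * np.eye(dm2)      # generic positive-definite induced metric h_AB
hinv = np.linalg.inv(h)

K_low = phi * h                      # K_{AB} (shear-free, pure expansion)
K_mix = hinv @ K_low                 # K^A_B
K_up = hinv @ K_low @ hinv           # K^{AB}
trK = np.trace(K_mix)                # K = K^C_C

# Y^{AB} of Eq. (expJ0), with the overall prefactor 2*alpha_2*ell_s^2 stripped
Y = (2 * trK * K_up
     - 2 * K_up @ h @ K_up                           # K^A_C K^{BC}
     - (trK ** 2 - np.trace(K_mix @ K_mix)) * hinv)  # h^{AB} (K^2 - K_CD K^CD)

coeff = (2 * dm2 - 2 - dm2 * (dm2 - 1)) * phi ** 2
assert np.allclose(Y, coeff * hinv)  # Y^{AB} is a multiple of h^{AB}
```

Since in the spherically symmetric case $\phi$ is constant on $\Sigma_v$, the resulting double divergence $\nabla_A\nabla_B \mathcal{Y}^{AB}$ vanishes identically.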
More generally, for compact horizon topology with shear-free null generators, $\mathcal{K}_{\langle AB\rangle} =0$, we can infer that the vanishing of $\nabla_A \mathcal{X}^A$ in Eq.~\eqref{expJ0} is equivalent to demanding that $\mathcal{K}^2$ be a harmonic function on $\Sigma_v$. Since all harmonic functions on compact spaces are constants, it follows that as long as the horizon is generated by shear-free generators $t^\mu$ with constant expansion, we have an entropy function, since $\nabla_A \mathcal{X}^A =0$.
We find it curious that the term of interest, $\mathcal{X}^A$, appears only at linear order in the gradient expansion. It would be interesting to characterize the solution space of $\nabla_A \mathcal{X}^A \leq 0$ more generally, a task we leave for future investigation.
\section{Higher order Lovelock terms}
\label{sec:lovelock}
Having explained how to construct an entropy function for Gauss-Bonnet gravity in some detail, we now turn to higher order Lovelock terms. The action for these theories is given by
\begin{equation} \label{eq:LoveAction}
\begin{split}
I_m &\equiv \frac{1}{4\pi} \, \int \sqrt{-g} \; \left[ R + \alpha_m \, \ell_s^{2m-2}\,
\mathcal{L}_m + \mathcal{L}_{matter} \right] \\
\mathcal{L}_m &= \delta^{\mu_1\nu_1\cdots \mu_m \nu_m}_{\rho_1\sigma_1\cdots \rho_m \sigma_m} R^{\rho_1}{}_{\mu_1}{}^{\sigma_1}{}_{\nu_1} \cdots R^{\rho_m}{}_{\mu_m}{}^{\sigma_m}{}_{\nu_m} \,.
\end{split}
\end{equation}
where $\delta^{\mu_1\nu_1\cdots \mu_m \nu_m}_{\rho_1\sigma_1\cdots \rho_m \sigma_m}$, the generalized Kronecker symbol, is defined below Eq.~\eqref{scor}. While \S\ref{sec:GaussBonnet} focused on $m=2$, we now explain how to extend these considerations to $m>2$. We will first work out the story for a single Lovelock term and then consider the general theory Eq.~\eqref{eq:action2}.
Modulo some additional tensor structures, this will work in a very similar vein, so our discussion will be brief.
\subsection{Part I: Time variation of Wald entropy}
The Wald entropy function for Lovelock theories of gravity evaluates to
\begin{equation}
\begin{split}
S_\text{Wald}^{(m)} &\equiv \int_{\Sigma_v} d^{d-2}x \,\sqrt{h} \; \left( 1 + \alpha_m \, \ell_s^{2m-2} \, \mathfrak{s}_{m,\text{eq}} \right) \\
\mathfrak{s}_{m,\text{eq}} & = m \, \delta^{A_1B_1\cdots A_{m-1} B_{m-1}}_{C_1D_1 \cdots C_{m-1} D_{m-1}} \, {\cal R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}}
\end{split}
\end{equation}
Defining $\Theta_{m,\text{eq}}$ through the derivative of $S_\text{Wald}^{(m)}$ as in Eq.~\eqref{defthetaeq}, we find after a simple series of manipulations
\begin{equation}
\begin{split}
\Theta_{m,\text{eq}} &= \mathcal{K}( 1 + \alpha_m\, \ell_s^{2m-2}\, \mathfrak{s}_{m,\text{eq}}) -
2\, \alpha_m\, \ell_s^{2m-2} \left(\frac{\delta \mathfrak{s}_{m,\text{eq}}}{\delta h^{AB}}\right)\mathcal{K}^{AB} \\
&= \mathcal{K}+ m \, \alpha_m\, \ell_s^{2m-2} \, \mathcal{K}^C_A\; \delta^{AA_1B_1\cdots A_{m-1} B_{m-1}}_{CC_1D_1 \cdots C_{m-1} D_{m-1}} \, \left( {\cal R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \right) \,.
\end{split}
\label{eq:ThetaLove}
\end{equation}
As in \S\ref{sec:deriwald}, we will take this result, which is written in terms of the intrinsic data on $\Sigma_v$, as our seed ansatz for the entropy function. This functional satisfies the second law to linear order in amplitudes \cite{Wall:2015raa}. We now wish to find corrections to it, so as to have a second law for arbitrary amplitude departures away from equilibrium within the remit of effective field theory.
In the following we will make use of some abbreviations to declutter the notation. In particular, let us introduce the shortcuts\footnote{ Since the Lovelock terms are built from Euler densities in even dimensions, this notation is naturally suggestive of the underlying structure in terms of the curvature 2-form. }
\begin{equation}
\label{eq:abbreviations}
\begin{split}
\left( {\cal R}^{m-1} \right)^A_C & \equiv \delta^{AA_1B_1\cdots A_{m-1} B_{m-1}}_{CC_1D_1 \cdots C_{m-1} D_{m-1}} \, \left( {\cal R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \right) \,, \\
\left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} & \equiv \delta^{AA_1B_1\cdots A_{m-1} B_{m-1}}_{CC_1D_1 \cdots C_{m-1} D_{m-1}} \, \left( {\cal R}^{C_2}{}_{A_2}{}^{D_2}{}_{B_2} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \right) \,,\\
\left( {\cal R}^{m-3} \right)^{AA_1B_1A_2B_2}_{CC_1D_1C_2D_2} & \equiv \delta^{AA_1B_1\cdots A_{m-1} B_{m-1}}_{CC_1D_1 \cdots C_{m-1} D_{m-1}} \, \left( {\cal R}^{C_3}{}_{A_3}{}^{D_3}{}_{B_3} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \right)\,.
\end{split}
\end{equation}
Using this shorthand notation, we can take one more derivative of Eq.~\eqref{eq:ThetaLove}:
\begin{equation}
\begin{split}
\partial_v \Theta_{m,\text{eq}}
&= \underbrace{\partial_v \mathcal{K} }_{\text{Term 1}} + \underbrace{m \, \alpha_m \, \ell_s^{2m-2} \, \partial_v \mathcal{K}^C_A\; \left( {\cal R}^{m-1} \right)^{A}_{C}}_{\text{Term 2}} \\
&\quad + \underbrace{m(m-1) \, \alpha_m \, \ell_s^{2m-2} \, \mathcal{K}^C_A\; \left( \partial_v {\cal R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} \right) \left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1}}_{\text{Term 3}} \,.
\end{split}
\label{eq:ThetaCalc}
\end{equation}
The three different terms can again be simplified in much the same fashion as before. We have:
\begin{subequations}
\begin{align}
\text{Term 1} & = - \mathcal{K}_{AC} \, \mathcal{K}^{AC}- T_{vv} + \alpha_m\, \ell_s ^{2m-2} \, \mathcal{E}^{(m)}_{vv} \\
\text{Term 2} &=
m \, \alpha_m\, \ell_s^{2m-2} \; \partial_v \mathcal{K}^C_A\; \left( {\cal R}^{m-1} \right)^{A}_{C} \,, \\
\text{Term 3} &=
-2m(m-1) \, \alpha_m\, \ell_s^{2m-2} \left[
\left( \nabla_{A_1} \mathcal{K}_A^C \right) \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right)
+ \mathcal{K}^C_A \, \mathcal{K}_D^{D_1} {\cal R}^{C_1}{}_{A_1}{}^{D}{}_{B_1} \right]
\nonumber \\
& \hspace{2in} \times
\left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1}
+ \nabla_{A} \widetilde{\mathcal{X}}_{(m)}^{A} \,,
\label{eq:Lt123}
\end{align}
\end{subequations}
We have made several simplifications in writing the final form of these terms. For one,
in the first term we have used the temporal components of the Lovelock equations of motion (see Appendix \ref{app:loveDvv} for details). These read:\footnote{ By writing the equations of motion in this form, we are assuming $m\geq 3$. For $m=2$ we revert to the Gauss-Bonnet case, which differs only in that the product of curvatures in the first line of Eq.~\eqref{eq:love1} is not present.}
\begin{equation}
\begin{split}
\alpha_m\, \ell_s^{2m-2}\, \mathcal{E}_{vv}^{(m)} &= -m \,\alpha_m\, \ell_s^{2m-2}\, \left( {R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} \\
&\qquad \times \left[ \left(\partial_v \mathcal{K}_A^C + \mathcal{K}_A^{E} \, \mathcal{K}^C_E \right) {R}^{C_{1}}{}_{A_{1}}{}^{D_{1}}{}_{B_{1}}- 2\,(m-1)\, \left( \nabla_{A_1} \mathcal{K}^C_A \right) \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right) \right] \,.
\end{split}
\label{eq:love1}
\end{equation}
The tensors like $\left( {R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} $ are defined as in Eq.~\eqref{eq:abbreviations}, but with the projected curvature tensor, $R_{ABCD}$, instead of the intrinsic one, ${\cal R}_{ABCD}$.
For another, to evaluate the third term, we employ a strategy analogous to the discussion around Eq.~\eqref{cont3}. This leads to the total derivative piece, where the vector field $\widetilde{\mathcal{X}}_{(m)}^A$ now has the expression:
\begin{equation}
\widetilde{\mathcal{X}}^{A_1}_{(m)} \equiv 2\,m \, (m-1)\, \alpha_m \, \ell_s^{2m-2} \;
\mathcal{K}_A^C \, \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right) \left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} \,.
\end{equation}
Again, this total derivative term will be an obstruction to giving a clean result for an entropy function.
Adding up the contributions from the three terms given above, we find in total
\begin{align}
&\partial_v \Theta_{m,\text{eq}} = - T_{vv} - \mathcal{K}_{AC} \, \mathcal{K}^{AC} +
\nabla_A \widetilde{\mathcal{X}}^A_{(m)}
- m \, \alpha_m \, \ell_s^{2m-2}\, \bigg\{
\partial_v \mathcal{K}_A^C \,
\left[ \left( {R}^{m-1} \right)^{A}_{C} - \left( {\cal R}^{m-1} \right)^{A}_{C} \right]
\nonumber \\
&\qquad\qquad\;\;- 2(m-1) \left( \nabla_{A_1} \mathcal{K}^C_A \right) \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right)
\left[ \left( {R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} - \left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} \right]
\nonumber \\
&\qquad\qquad\;\; + \mathcal{K}_A^E \, \mathcal{K}_F^G \Big[ \delta_E^F \delta_G^C \, {R}^{C_{1}}{}_{A_{1}}{}^{D_{1}}{}_{B_{1}} \left( {R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1}
\nonumber \\
&\qquad\qquad\qquad\qquad\qquad+2(m-1) \, \delta_E^C \delta^{D_1}_G \; {\cal R}^{C_{1}}{}_{A_{1}}{}^{F}{}_{B_{1}}\left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} \Big]
\bigg\} \,.
\label{eq:dvthetalove1}
\end{align}
We can further simplify this expression. First, note that the last two terms contain explicit
factors of two extrinsic curvature tensors, and therefore remain suppressed relative to the leading
$\mathcal{K}_{AC}\mathcal{K}^{AC}$ term (see the discussion around Eq.~\eqref{arg1}). This leaves us with simplifying the two terms in the first two lines, which involve differences of powers of the projected and intrinsic curvature tensors. We expand these differences using standard geometric identities in terms of extrinsic curvatures, and then argue that all but the leading term are irrelevant. Explicitly,
%
\begin{align}
&\left( {R}^{m-1} \right)^{A}_{C} - \left( {\cal R}^{m-1} \right)^{A}_{C} \nonumber \\
&\quad = \delta^{AA_1B_1\cdots A_{m-1} B_{m-1}}_{CC_1D_1 \cdots C_{m-1} D_{m-1}} \Big[ R^{C_{1}}{}_{A_{1}}{}^{D_{1}}{}_{B_{1}} \cdots R^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \nonumber \\
&\qquad\qquad\qquad\qquad\qquad\quad - {\cal R}^{C_{1}}{}_{A_{1}}{}^{D_{1}}{}_{B_{1}} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \Big] \nonumber \\
&\quad= -2(m-1) \, \overline{\mathcal{K}}_{A_1}^{D_1} \, \mathcal{K}_{B_1}^{C_1} \; \delta^{AA_1B_1\cdots A_{m-1} B_{m-1}}_{CC_1D_1 \cdots C_{m-1} D_{m-1}} \, {\cal R}^{C_{2}}{}_{A_{2}}{}^{D_{2}}{}_{B_{2}} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \nonumber \\
&\qquad\, + \mathcal{O}(\overline{\mathcal{K}}_{AB}\mathcal{K}_{CD})^2 \nonumber \\
&\quad = -2(m-1) \, \overline{\mathcal{K}}_{A_1}^{D_1} \, \mathcal{K}_{B_1}^{C_1} \left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1}
+ \mathcal{O}(\overline{\mathcal{K}}_{AB}\mathcal{K}_{CD})^2 \,.
\end{align}
Using this formula (and a similar one for the term involving $m-2$ powers of curvature) we can bring Eq.~\eqref{eq:dvthetalove1} to the form:
\begin{align}
\partial_v \Theta_{m,\text{eq}} &=
- T_{vv} - \mathcal{K}_{AC} \, \mathcal{K}^{AC} + \nabla_A \widetilde{\mathcal{X}}^A_{(m)} \nonumber \\
&
+2 m(m-1) \, \alpha_m\,\ell_s^{2m-2} \, \left( {\cal R}^{m-3} \right)^{AA_1B_1A_2B_2}_{CC_1D_1C_2D_2} \nonumber \\
&
\qquad \times \Big\{
\partial_v \mathcal{K}_A^C \;
\overline{\mathcal{K}}_{A_1}^{D_1} \, \mathcal{K}_{B_1}^{C_1} \; {\cal R}^{C_{2}}{}_{A_{2}}{}^{D_{2}}{}_{B_{2}}
-2(m-2) \left( \nabla_{A_1} \mathcal{K}^C_A \right) \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right)
\overline{\mathcal{K}}_{A_2}^{D_2} \, \mathcal{K}_{B_2}^{C_2} \Big\} \nonumber \\
& + \mathcal{O}\left(\alpha_m\, \ell_s^{2m-2} \mathcal{K}_{AC}^2\right) .
\label{eq:dvthetalove2}
\end{align}
Here, we discarded higher order terms, which are at least of the order indicated, as they are always suppressed compared to the bare $-\mathcal{K}_{AC}\mathcal{K}^{AC}$ contribution, which is negative semidefinite.
Finally, we can rewrite the second term in the curly bracket in Eq.~\eqref{eq:dvthetalove2} as a total derivative by observing that
\begin{equation}
\begin{split}
& \left( {\cal R}^{m-3} \right)^{AA_1B_1A_2B_2}_{CC_1D_1C_2D_2} \Big\{
\left( \nabla_{A_1} \mathcal{K}^C_A \right) \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right)
\overline{\mathcal{K}}_{A_2}^{D_2} \, \mathcal{K}_{B_2}^{C_2} \Big\} \\
&\quad = \frac{1}{2}\, \nabla_{A_1} \Big\{ \left( {\cal R}^{m-3} \right)^{AA_1B_1A_2B_2}_{CC_1D_1C_2D_2} \mathcal{K}_A^C \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right) \!
\overline{\mathcal{K}}_{A_2}^{D_2} \mathcal{K}_{B_2}^{C_2} \Big\} + \mathcal{O}(\mathcal{K}_{AC}^2) \,.
\end{split}
\label{eq:LRKKb}
\end{equation}
This means we can ignore it in the analysis at the expense of amending our obstruction from
$\widetilde{\mathcal{X}}^A_{(m)} \mapsto \mathcal{X}^A_{(m)}$. All told, we find that
Eq.~\eqref{eq:dvthetalove2} can be simplified to
\begin{equation}
\begin{split}
\partial_v \Theta_{m,\text{eq}} &= - T_{vv} - \mathcal{K}_{AC} \, \mathcal{K}^{AC} + \nabla_A \mathcal{X}^A_{(m)} \\
&\quad +2 m(m-1) \, \alpha_m\,\ell_s^{2m-2} \,\left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1}\, \Big\{
\partial_v \mathcal{K}_A^C \;
\overline{\mathcal{K}}_{A_1}^{D_1} \, \mathcal{K}_{B_1}^{C_1} \Big\}
\end{split}
\label{eq:dvthetalove3}
\end{equation}
where the total derivative term now reads
\begin{equation}
\begin{split}
\mathcal{X}_{(m)}^{A_1}& \equiv
2\,m \, (m-1)\, \alpha_m \, \ell_s^{2m-2} \; \\
& \times \left( {\cal R}^{m-3} \right)^{AA_1B_1A_2B_2}_{CC_1D_1C_2D_2} \, \mathcal{K}_A^C \, \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right) \left( {\cal R}^{C_2}{}_{A_2}{}^{D_2}{}_{B_2} - (m-2) \, \overline{\mathcal{K}}_{A_2}^{D_2} \, \mathcal{K}_{B_2}^{C_2} \right) \,.
\end{split}
\label{eq:LXterm}
\end{equation}
In this boundary term, the piece proportional to $(m-2)$ is new compared to the Gauss-Bonnet analysis and has resulted from Eq.~\eqref{eq:LRKKb}.
\subsection{Part II: Temporal gradient corrections to Wald entropy}
In writing the final expression for $\partial_v \Theta_{m,\text{eq}}$ in Eq.~\eqref{eq:dvthetalove3}, we have already discarded terms which are subleading relative to the leading contribution $-\mathcal{K}_{AC}\mathcal{K}^{AC}$ (which is manifestly negative semidefinite). However, as discussed at length earlier, this term can be overwhelmed by the contribution from the second line of Eq.~\eqref{eq:dvthetalove3}. Therefore as in \S\ref{sec:scorrGB} we are going to add corrections to the Wald entropy to construct an entropy function.
Our candidate entropy function is given as:
\begin{equation}\label{eq:LovelockEntropy}
\begin{split}
S_{\text{total}}^{(m)} &\equiv S_\text{Wald}^{(m)} + S_\text{cor}^{(m)} \equiv
\int_{\Sigma_v} d^{d-2}x \,
\sqrt{h} \; \bigg[ \left( 1 + \alpha_m\, \ell_s^{2m-2} \; \mathfrak{s}_{m,\text{eq}} \right)+
\mathfrak{s}_{m,\text{cor}} \bigg]\\
\mathfrak{s}_{m,\text{eq}} &= m \; \delta^{A_1B_1\cdots A_{m-1} B_{m-1}}_{C_1D_1 \cdots C_{m-1} D_{m-1}} \, {\cal R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \,,\\
\mathfrak{s}_{m,\text{cor}} &= \sum_{n=0}^\infty \kappa_n \, \ell_s^{2n} \HnL{n}^A_B \HnL{n}^B_A
\end{split}
\end{equation}
where once again we introduce some shorthand notation with
\begin{equation}
\begin{split}
\HnL{-1}^A_C &\equiv \ell_s^2\, \mathcal{K}^A_C \,,\\
\HnL{0} ^A_C&\equiv \frac{1}{2} \, m(m-1) \, \alpha_m \, \ell_s^{2m-2}\; \mathcal{K}_{A_1}^{C_1} \, \overline{\mathcal{K}}_{B_1}^{D_1} \,
\left( {\cal R}^{m-2} \right)^{AA_1B_1}_{CC_1D_1} \,,\\
\HnL{n}^A_C &= \partial_v^n \HnL{0}^A_C \,.
\end{split}
\end{equation}
Our task is to determine $\kappa_n$ so as to ensure that the resulting entropy function has the right monotonicity properties.
Given this expression, we compute the temporal derivatives of $\mathfrak{s}_{m,\text{cor}}$ and combine them with the result in Eq.~\eqref{eq:dvthetalove3} to obtain the answer for $\partial_v\Theta^{(m)}$, where $\Theta^{(m)} = \Theta_{m,\text{eq}} + \Theta_{m,\text{cor}}$. We will already write this in a compact form for further analysis:
\begin{equation} \label{eq:dvthetalove4}
\begin{split}
\partial_v \Theta^{(m)} &= - T_{vv} - \mathcal{K}_{AC} \, \mathcal{K}^{AC} + \nabla_A \mathcal{X}^A_{(m)} \\
&\quad
- 4\, \ell_s^{-2} \, \HnL{1}^A_C \, \HnL{-1}_{A}^{C}
+4\, \tilde{\kappa}_0 \, \ell_s^{-2} \HnL{0}^2
\\
& \quad
+ \sum_{n=0}^\infty \, \kappa_n \, \ell_s^{2n}
\Big\{
2 \, \HnL{n+1}^2 + 2\, \HnL{n}^A_B \, \HnL{n+2}^B_A \Big\} \\
& \quad
+ \sum_{n=0}^\infty \, \kappa_n \, \ell_s^{2n}
\Big\{ 2\, \mathcal{K}\, \HnL{n}^A_B\,\HnL{n+1}^B_A
+ \partial_v \mathcal{K}\, \HnL{n}^2
\Big\}
\end{split}
\end{equation}
The first term in the second line stems from rewriting Eq.~\eqref{eq:dvthetalove3} up to terms of order
$\mathcal{O}(\alpha_m\mathcal{K}_{AC}^2 (\omega\ell_s)^p)$ for some $p\geq 0$ (cf.\ the similar discussion around Eq.~\eqref{simpli}). We have again included a subleading term (proportional to $\tilde{\kappa}_0$) to put the final answer in a useful form. The terms in the last line may be ignored within our gradient expansion; the arguments for these are exactly as in the Gauss-Bonnet case, so we won't repeat ourselves here. All told, we can write the final expression in a compact form:
%
\begin{equation}
\begin{split}
\partial_v \Theta^{(m)} &= - T_{vv} + \mathcal{J} + \nabla_A \mathcal{X}^A_{(m)} \\
\mathcal{J} &=
2 \sum_{n=-1}^\infty \ell_s^{2n-2}
\Big\{ \kappa_{n-1}\, \HnL{n}^2 +
\kappa_n \, \ell_s^2\, \HnL{n}^A_B \, \HnL{n+2}^B_A \Big\}
\end{split}
\label{eq:LoveFinal}
\end{equation}
%
Note that we have set specific values $\kappa_{-2} = -\frac{1}{2}$, $\kappa_{-1}=-2$ and
$\tilde{\kappa}_0 =-1$ to bring the expression into the structure of a well-defined quadratic form.
At this point we are done. Structurally, Eq.~\eqref{eq:LoveFinal} is identical to the analogous expression obtained for the Gauss-Bonnet theory, so the very same argument given in \S\ref{sec:scorrGB} will suffice
to demonstrate that the result is negative semidefinite.
Of course, we still have to tackle the boundary term involving the vector
$\mathcal{X}_{(m)}^A$, which is given in Eq.~\eqref{eq:LXterm}. Once again the comments made at the end of \S\ref{sec:GaussBonnet} continue to apply, owing to the occurrence of the combination $\mathcal{K}^C_A\, \nabla^{D_1} \mathcal{K}^{C_1}_{B_1}$ in the expression for $\mathcal{X}^{A}_{(m)}$ (with appropriate contractions). However, we do not as yet discern a structure which ensures that this boundary contribution also remains negative, which prevents us from presenting a clean entropy function.
\subsection{Generalizations: Sums of Lovelock theories}
\label{sec:general}
If we view higher derivative theories of gravity as toy models for the subleading effects in a UV complete theory of quantum gravity, it is natural to consider not just an isolated Lovelock term of a given order $m$, but general linear combinations of such terms. In this section we will argue that our analysis for isolated Lovelock terms generalizes to this case with only minor modifications. The main insight will be a formula for the entropy functional written directly in terms of the Lagrangian.
Let us consider an arbitrary Lovelock combination of the form
\begin{equation}
\begin{split}
I &= \frac{1}{4\pi}\ \int d^dx\, \sqrt{-g} \left({\cal L}_\text{grav} + \mathcal{L}_\text{matter} \right)\\
&\, {\cal L}_\text{grav} \equiv R + \sum_{m=2}^\infty \, \alpha_m\,
\ell_s^{2m-2} \, \mathcal{L}_m \,,\\
&\qquad\; \equiv R + \sum_{m=2}^\infty \, \alpha_m\,
\ell_s^{2m-2} \,
\delta^{\mu_1\nu_1\cdots \mu_m \nu_m}_{\rho_1\sigma_1\cdots \rho_m \sigma_m} \ R^{\rho_1}{}_{\mu_1}{}^{\sigma_1}{}_{\nu_1} \ \cdots\ R^{\rho_m}{}_{\mu_m}{}^{\sigma_m}{}_{\nu_m} \,.
\end{split}
\label{eq:action2}
\end{equation}
We start with the observation that the objects appearing in the proposed Lovelock entropy functional Eq.~\eqref{eq:LovelockEntropy} can be written in terms of derivatives of the corresponding Lagrangian:
\begin{equation}
\frac{\delta {\cal L}_{m}}{\delta R^v{}_v{}^r{}_r} = m \; \delta^{A_1B_1\cdots A_{m-1} B_{m-1}}_{C_1D_1 \cdots C_{m-1} D_{m-1}} \, {\cal R}^{C_1}{}_{A_1}{}^{D_1}{}_{B_1} \cdots {\cal R}^{C_{m-1}}{}_{A_{m-1}}{}^{D_{m-1}}{}_{B_{m-1}} \,,
\end{equation}
where the factors of $\alpha_m\,\ell_s^{2m-2}$ are supplied explicitly when these derivatives are assembled below (so that they are not counted twice). Define also the general expression for $\HnL{0}$, which is our basic building block:
\begin{equation}
\HnL{0}^A_B \equiv
- \frac{1}{2}\, \alpha_m \ell_s^{2m-2} \;
\frac{\delta^2 {\cal L}_m}{
\delta R^A{}_{A_1}{}^{C_1}{}_v \, \delta R^{v}{}_{B_1}{}^{D_1}{}_{B} } \bigg{|}_{R\rightarrow {\cal R}} \, \mathcal{K}_{A_1}^{C_1}\overline{\mathcal{K}}_{B_1}^{D_1} \,.
\end{equation}
We can use this observation to suggest a natural entropy functional for theories which involve several Lovelock terms:
\begin{equation} \label{eq:Sfinal}
\begin{split}
S_\text{total} &= \int_{\Sigma_v} \sqrt{h} \; \Bigg\{1+ \sum_{m=2}^\infty \alpha_m\, \ell_s^{2m-2} \frac{\delta {\cal L}_{m}}{\delta R^v{}_v{}^r{}_r} \bigg{|}_{R\rightarrow {\cal R}} \\
&\qquad + \sum_{n=0}^\infty \kappa_n \left[ \ell_s^{n}\, \partial_v^n \left(
\sum_{m=2}^\infty \frac{1}{2}\alpha_m \ell_s^{2m-2} \frac{\delta^2 {\cal L}_m}{\delta R^A{}_{A_1}{}^{C_1}{}_v \, \delta R^{v}{}_{B_1}{}^{D_1}{}_{B} } \bigg{|}_{R\rightarrow {\cal R}} \, \mathcal{K}_{A_1}^{C_1}\overline{\mathcal{K}}_{B_1}^{D_1}
\right) \right]^2
\Bigg\} \,.
\end{split}
\end{equation}
In this expression, we recognize the first line as the Wald entropy.
Since our analysis in \S\ref{sec:lovelock} was linear in its use of the equations of motion, we expect the Wald entropy term in the first line (which is linear in ${\cal L}_m$) to superpose correctly across the individual Lovelock terms.
The second line provides the corrections in our perturbative framework, now expressed directly in terms of the Lagrangian. More succinctly, we can rewrite Eq.~\eqref{eq:Sfinal} as
\begin{equation}
\begin{split}
S_\text{total} &= \int_{\Sigma_v} \sqrt{h} \, \left\{ \frac{\delta {\cal L}_\text{grav}}{\delta R^v{}_v{}^r{}_r} \bigg{|}_{R\rightarrow {\cal R}}
+ \sum_{n=0}^\infty \kappa_n
\left[ \ell_s^n \partial_v^n \left( \frac{1}{2}\, \frac{\delta^2 {\cal L}_\text{grav}}{\delta R^A{}_{A_1}{}^{C_1}{}_v \, \delta R^{v}{}_{B_1}{}^{D_1}{}_{B} } \bigg{|}_{R\rightarrow {\cal R}} \, \mathcal{K}_{A_1}^{C_1}\overline{\mathcal{K}}_{B_1}^{D_1}
\right) \right]^2 \right\}
\end{split}
\label{eq:Sfinal2}
\end{equation}
%
This equation should be viewed as the most compact and general result in our analysis.
Let us now understand more explicitly why $S_\text{total}$ as defined in Eq.~\eqref{eq:Sfinal2} satisfies the second law. By superposing Lovelock terms from our previous analysis, we find that Eq.~\eqref{eq:Sfinal2} leads to the following:
\begin{equation} \label{eq:dvthetalove4Mod}
\begin{split}
\partial_v \Theta
& = - T_{vv} -\mathcal{K}^A_C \mathcal{K}^C_A + \nabla_A \mathcal{X}^A_{_\text{sum}}
\\
& \qquad - 4\, \ell_s^{-2}\, \Hntot{1}^A_C\Hntot{-1}^C_A
+ 4\, \tilde{\kappa}_0 \Hntot{0}^2 \\
&\qquad + \sum_{n=0}^\infty \, \kappa_n\,\ell_s^{2n} \left\{2\, \Hntot{n+1}^2 +\Hntot{n}^A_C\, \Hntot{n+2}^C_A\right\} \\
&\qquad+\sum_{n=0}^\infty \kappa_n \, \ell_s^{2n}\, \Big\{
2\,\mathcal{K}\Hntot{n}^A_C \, \Hntot{n+1}^C_A
+ \partial_v \mathcal{K} \Hntot{n}^2 \Big\} \\
&
\qquad + \mathcal{O}(\alpha \, \mathcal{K}_{AC}^2 (\omega\ell_s)^p)\,,
\end{split}
\end{equation}
where $p \geq 0$. We define here the tensors:
\begin{equation}\label{eq:Hntotdef}
\begin{split}
\Hntot{-1}^A_C \equiv \ell_s^2\, \mathcal{K}^A_C \,,\qquad
\Hntot{n\geq 0}^A_C \equiv \sum_{m=2}^\infty \,\HnL{n}^A_C \,.
\end{split}
\end{equation}
The boundary term can also be understood in terms of the Lagrangian:
{\footnotesize
\begin{equation}
\begin{split}
\mathcal{X}^{A_1}_{_\text{sum}} &= - \sum_{m=2}^\infty 2\,m(m-1) \alpha_m\, \ell_s^{2m-2}\, \left( {\cal R}^{m-3} \right)^{AA_1B_1A_2B_2}_{CC_1D_1C_2D_2} \, \mathcal{K}_A^C \, \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right) \\
&\hspace{3in}\times \left( {\cal R}^{C_2}{}_{A_2}{}^{D_2}{}_{B_2} - (m-2) \, \overline{\mathcal{K}}_{A_2}^{D_2} \, \mathcal{K}_{B_2}^{C_2} \right) \\
&= \left[ \frac{\delta^2 {\cal L}_\text{grav}}{\delta R^A{}_{A_1}{}^{C_1}{}_v \, \delta R^{v}{}_{B_1}{}^{D_1}{}_{B} } - \frac{\delta^3 {\cal L}_\text{grav}}{\delta R^A{}_{A_1}{}^{C_1}{}_v \, \delta R^{v}{}_{B_1}{}^{D_1}{}_{B} \, \delta R^{C_2}{}_{A_2}{}^{D_2}{}_{B_2}} \, \overline{\mathcal{K}}_{A_2}^{D_2} \, \mathcal{K}_{B_2}^{C_2}\right]_{R\rightarrow {\cal R}} \mathcal{K}_A^B \, \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right)
\end{split}
\label{eq:Xobsgen}
\end{equation}
}
As far as the second law is concerned, it is immediately clear that the expression Eq.~\eqref{eq:dvthetalove4Mod} can be assembled into a sum of squares with negative coefficients (up to negligible terms) in exactly the same way as we did in the case of a single Gauss-Bonnet or Lovelock term. The only difference lies in the more complicated building blocks Eq.~\eqref{eq:Hntotdef}.
With regard to the obstruction term, one notes a general pattern, but we have not yet been able to extract a useful bound from it. What is clear is that one should perhaps try to control the tensor $\mathcal{K}_A^B \, \left( \nabla^{D_1} \mathcal{K}_{B_1}^{C_1} \right)$ on $\Sigma_v$. Modulo this issue, we have exhibited, as before, a general entropy function for an arbitrary Lovelock theory.
\section{Discussion}
\label{sec:discussion}
We have constructed an entropy function for Lovelock theories of gravity. By construction, this entropy function always increases under any time evolution in which the horizon remains spherically symmetric.
It is possible to drop the assumption of spherical symmetry, provided one can show that the total derivative obstruction term is negative semidefinite. While we do not, as yet, have a general solution to this constraint, we think it plausible that such an entropy function can indeed be constructed. We now turn to a few subtleties regarding our construction, and also sketch some potential extensions of the analysis herein.
\paragraph{Metric redefinition:}
If we take the low energy limit of any UV complete theory of gravity, the classical action will generically have higher derivative corrections to all orders in the $\ell_s$ expansion. As is well known, there exists a freedom of metric redefinition by terms of order ${\cal O}(\ell_s^n),~n>0$, which rearranges the form of the action. For example, we know that by a metric redefinition of the form
$$g_{\mu\nu}\rightarrow\tilde g_{\mu\nu} = g_{\mu\nu} + \ell_s^2\left(\tilde a_1 R_{\mu\nu} + \tilde a_2 R~ g_{\mu\nu}\right)$$
we can transform any four-derivative pure gravity action to Gauss-Bonnet. This, in turn, implies that in a gravity action where terms of all orders in $\ell_s$ are allowed, at four-derivative order there can be at most one free coefficient of physical significance; the remaining two can always be absorbed in an appropriate field redefinition (in the absence of matter).\footnote{However, this metric-redefinition freedom does not exist if we demand that our gravity action truncates at four-derivative order, which, one suspects, does not give rise to a consistent quantum theory \cite{Camanho:2014apa}.}
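Schematically, the mechanism is the following (the signs below depend on variational conventions, so the expression should be read as illustrative rather than definitive). Writing the general four-derivative action as $R + \ell_s^2\,(a_1 R^2 + a_2 R_{\mu\nu}R^{\mu\nu} + a_3 R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma})$, the redefinition shifts the quadratic couplings through the Einstein-Hilbert variation:

```latex
% Shift of the action induced by g -> g + l_s^2 (a1~ Ricci + a2~ R g):
\delta I = - \int d^dx\, \sqrt{-g}\;\ell_s^2
\left( \tilde a_1\, R^{\mu\nu} + \tilde a_2\, R\, g^{\mu\nu} \right)
\left( R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu} \right)
+ \mathcal{O}(\ell_s^4)\,.
```

This shifts $a_2$ (by $-\tilde a_1$) and $a_1$ (by a linear combination of $\tilde a_1$ and $\tilde a_2$), while leaving $a_3$ untouched; choosing $\tilde a_1$ and $\tilde a_2$ appropriately brings the quadratic couplings to the Gauss-Bonnet ratio $a_3\,(1,-4,1)$.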
The situation is less clear at higher orders in $\ell_s$; at six-derivative order it appears that by field redefinitions one can reduce the number of independent data to two functional forms. One of these is the Lovelock term $\mathcal{L}_3$, but the other is a quasi-topological form involving a different index contraction of curvatures \cite{Oliva:2010eb,Myers:2010ru,Oliva:2011xu}. It is clear from this analysis that with increasing derivative order, more tensor structures are possible (see \cite{Deser:2016tgn} for related comments). So within effective field theory, we should a-priori allow other possible forms of the gravitational interactions, differing from the Lovelock terms considered herein.
Now, under an arbitrary metric redefinition, a null hypersurface embedded in a dynamical spacetime might not remain null. Our analysis heavily used the fact that the horizon is a null surface throughout the time evolution. In other words, we have chosen to fix our field redefinitions so as to ensure that the metric which appears in the action is the very same one that imposes the causal structure on the spacetime. We require the horizon to be a null hypersurface with respect to this choice. It is an interesting open question, as to how to properly assign an entropy functional to an action, without a-priori fixing a field redefinition frame. Such a viewpoint would be really useful in ascertaining an entropy function for arbitrary higher derivative interactions, an ambitious task whose surface we have barely scratched.
\paragraph{Coordinate choice:}
Throughout our analysis we have chosen a very special gauge adapted to the horizon. The spacetime is foliated by constant $r$ slices such that the event horizon is at $r=0$. But our choice does not fix the coordinate transformation freedom completely. For example, a simple constant scaling (see \cite{Wall:2015raa}) of $v\rightarrow\tilde v =\lambda~ v$ and $r\rightarrow\tilde r= \frac{r}{\lambda}$, coupled with a change $f(r,v)\rightarrow \tilde f(\tilde r,\tilde v)= \frac{f(r,v)}{\lambda^2}$, keeps the form of the metric invariant (see Eq.~\eqref{metricintro}). In particular, since $f(r=0,v)=0$, on the horizon one does not even need to change any of the metric components, and the metric is exactly invariant at $r=0$. However, the corrected entropy density, though defined only on the horizon, is not automatically invariant under this transformation unless we also scale the $\kappa_n$'s appropriately.
It would be interesting to classify all such transformations that keep the form of the metric invariant and to see how the formula of $\partial_v\Theta$ changes as we change the foliation of the spacetime.
Another potential drawback is that we have had to pick a particular foliation of the
horizon, with leaves given by $\Sigma_v$, and thence construct a scalar function on these leaves. A more covariant procedure, suggested by the fluid/gravity analogy, would have been to construct directly a spacetime codimension-2 form, which could then be integrated on any arbitrary spatial section of $\mathcal{H}^+$. Rather curiously, we were unable to construct such an entropy-form (whose dual would be the entropy current). While we do not have a formal no-go result, indications are that such a covariant object does not exist, a fact that we find rather surprising.\footnote{ For instance, using the fluid/gravity map in asymptotically AdS spacetimes \cite{Hubeny:2011hd}, we could have pushed forward the entropy current onto the horizon. We thank Shiraz Minwalla for numerous discussions regarding this issue. }
\paragraph{Total derivative terms:}
We have seen that if we depart from spherical symmetry, then for Lovelock theories our entropy function works provided a very particular term, which takes the form of a total derivative, either vanishes or is always non-positive. Note that even though we have assumed that our horizon is a compact hypersurface and we are concerned only with the total entropy, this total derivative term cannot be discarded, since it appears under a $v$ derivative in the integrand of $\partial_v S_{\text{total}}$ (see Eq.~\eqref{reldlth1a}). Note also that this particular term occurs at the same order in the gradient expansion at which the higher derivative terms first start contributing to the rate of entropy change.
It would be very interesting to know whether this term has any physical significance and says something important about these theories of gravity. It is also possible that, by some readjustment of our construction, this term too could be absorbed into a sum of complete squares. Perhaps a redefinition of the foliation of the spacetime (as mentioned in the previous paragraph), along with the addition of another infinite series in the expression for the entropy, will suffice to construct a function with a negative semidefinite second derivative.
\paragraph{Other possible methods:}
Another possibility is that, instead of proving $\partial_v \Theta \leq 0$, we could try to prove $\partial_v (Z\Theta) \leq 0$ with $Z$ a positive function. As we review in Appendix
\ref{sec:fR}, the standard proof of the second law in $f(R)$ theories \cite{Jacobson:1995uq} proceeds exactly by this method. More generally, while we have found an obstruction to
proving the second law using the usual Raychaudhuri equation, our work does not preclude that there might be some other way of proving it. We hope our work will be a stepping stone
and an inspiration for an eventual proof.
\paragraph{Future directions:} Other future directions include examining our results in the fluid/gravity regime
\cite{Hubeny:2011hd}, or in the context of large $D$ black holes dual to membrane dynamics \cite{Bhattacharyya:2016nhn}. In particular, it would be interesting to examine the total derivative obstruction term in these settings, to see whether it can actually be used to engineer entropy destruction in generic solutions. We leave these studies for future work.
\acknowledgments
It is a great pleasure to thank Shiraz Minwalla for initiating discussions on this topic, for an extended collaboration, and for his numerous suggestions and enlightening comments throughout the course of this work.
We would also like to thank Arpan Bhattacharyya, Jyotirmoy Bhattacharya, Srijit Bhattacharjee, Joan Camps, Veronika Hubeny, Dileep Jatkar, Juan Maldacena,
Sudipta Sarkar, Ashoke Sen, Aninda Sinha, and Tadashi Takayanagi for illuminating discussions.
FH gratefully acknowledges support through a fellowship by the Simons Foundation.
MR would like to thank IAS and ICTS for hospitality during the course of this project. Research of N.K. is supported by the JSPS Grant-in-Aid for Scientific Research (A) No.16H02182. N.K. acknowledges the support he received from his previous affiliation HRI, Allahabad. N.K. would also like to thank IITK for hospitality during the course of this project. The work of S.B. was supported by an India Israel (ISF/UGC) joint research grant (UGC/PHY/2014236). S.B. and R.L. would also like to acknowledge our debt to the people of India for their steady and generous support to research in the basic sciences.
\label{Intr}
The Langevin equation (i.e., a sto\-chastic ordinary differential equation) is widely used for studying stochastic systems in physics, chemistry, engineering and other areas \textcolor{blue} {\cite{CoKaWa2004}}. In the simplest case when the random force noise is Gaussian and white the dynamics of the system is Markovian and its probability density function (PDF) satisfies the Fokker-Planck equation \textcolor{blue}{\cite{Risk1989, HoLe2006, Gard2009}}. One of the advantages of this approach is that the Fokker-Planck equation can often be solved analytically, especially in the stationary regime.
If Gaussian noise is colored, then the system dynamics becomes non-Markovian and the corresponding PDF obeys the integro-differential master equation, which under certain conditions can be reduced to the differential one by the Kramers-Moyal expansion \textcolor{blue}{\cite{CoKaWa2004, Risk1989}}. Since, in general, this differential equation is of infinite order, several approximation schemes for its simplification were proposed \textcolor{blue} {\cite{SaMi1989, LiWe1990, HaJu1995}} (for a recent theoretical and numerical analysis see, e.g., Refs.\ \textcolor{blue} {\cite{WaTaTa_PRL2013, MaGrTa_JCP2018}} and references therein). Note also that in some very special cases when the Langevin equation is solved analytically the PDFs can be determined straightforwardly \textcolor{blue} {\cite{HoLe2006, DeHo_PRE2002a, DeHo_PRE2002b, Vitr_PA2006}}.
The Langevin equation driven by Poisson white noise (sometimes called a train of delta pulses), which is a particular case of non-Gaussian white noises, plays an important role in describing the jump processes and phenomena induced by this noise in different systems (see, e.g., Refs.\ \textcolor{blue} {\cite{Han_ZPhys1980, HePeRo_PRE1987, LuBaHa_EPL1995, Grig2002, Gitt2005}}). More recent studies include noise-induced transport \textcolor{blue} {\cite{BaSo_PRE2013, SHL_PRE2014}}, stochastic resonance \textcolor{blue} {\cite{HXSJ_ND2017}}, vibro-impact response \textcolor{blue} {\cite{Zhu_ND2015, YXHH_MPE2018}} and ecosystem dynamics \textcolor{blue}{\cite{PZ_AMS2014, JXL_Entr2018}}, to name only a few. The determination of the corresponding PDF is a much more difficult problem than for Gaussian white noise, because the master equation is integro-differential. Note in this connection that even for the first-order Langevin equation the master equation reduces to the integro-differential Kolmogorov-Feller equation, whose exact stationary solutions are known only in a few cases \textcolor{blue} {\cite{Vast_IJNLM1995, Prop_IJNLM2003, DeHoHa_EPJB2009, RuDuGu_DM2016, DuRuGu_PRE2016}}.
Often the bounded processes more adequately describe the stochastic behavior of real systems than the unbounded ones \textcolor{blue}{\cite{d'Onofr2013}}. But the bounded jump processes driven by Poisson white noise, which could be used, for example, to model the destruction phenomena, have not been studied in depth. As far as we know, our recent paper \textcolor{blue} {\cite{DeBy_PhysA2018}} is the only one devoted to the analytical study of the statistical properties of such processes. It has been shown, in particular, that the jump character and boundedness of these processes are responsible for the nonzero probability of their extremal values and nonuniformity of their distribution inside a bounded domain.
In this work, we generalize the difference Langevin equation describing bounded jump processes driven by Poisson white noise, derive the corresponding Kolmogorov-Feller equation and solve it analytically in the stationary state for the case of uniform distribution of pulse sizes. The paper is organized as follows. In \textcolor{blue}{Section \ref{Model}}, using the saturation function, we introduce the difference Langevin equation driven by Poisson white noise, whose solutions are bounded. The Kolmogorov-Feller equation that corresponds to this Langevin equation is derived in \textcolor{blue}{Section \ref{KFeq}}. In the same section, we cast the stationary solution of the Kolmogorov-Feller equation as a sum of singular terms defining the probability of the extremal values of the bounded process and a regular part representing the non-normalized PDF of this process inside a bounded domain. In \textcolor{blue}{Section \ref{ExSol}}, which is the main section of the paper, we solve analytically the integral equation for the non-normalized PDF and calculate the extreme values probability in the case of uniform distribution of pulse sizes. Here, we show that the ratio of the saturation function width to the half-width of the pulse-size distribution is the only parameter that determines all features of the non-normalized PDF, including its explicit form and complexity. Finally, our main findings are summarized in \textcolor{blue}{Section \ref{Concl}}.
\section{Model for bounded stochastic processes}
\label{Model}
A variety of continuous-time processes in physics, biology, economics and other areas can be described by the first-order Langevin equation
\begin{equation}
\frac{d}{dt}X_{t} = F(X_{t}) + \xi(t),
\label{Lang}
\end{equation}
which, for convenience, is often written in difference form
\begin{equation}
X_{t + \tau} = X_{t} + F(X_{t})\tau +
\Delta_{\tau}.
\label{Lang_dif}
\end{equation}
Here, $X_{t}$ ($t \geq 0$) is a random process, $F(x)$ is a given deterministic function, $\xi(t)$ is a stationary white noise, $\tau$ is an infinitesimal time interval, and $\Delta_{\tau}$ is a random variable defined as
\begin{equation}
\Delta_{\tau} = \int_{t}^{t+\tau}
\xi(t')\,dt' = \int_{0}^{\tau}\xi(t')\,dt'.
\label{def_Delta}
\end{equation}
The realizations of $X_{t}$ can be either continuous (as in the case of Gaussian white noise) or discontinuous (as in the cases, e.g., of L\'{e}vy and Poisson white noises). These realizations are, in general, unbounded, i.e., the probability that $|X_{t}|$ exceeds a given level is nonzero. In order to extend the Langevin approach to the description of random processes in bounded domains, we introduce instead of \textcolor{blue}{Eq.\ (\ref{Lang_dif})} a more general difference Langevin equation
\begin{equation}
X_{t + \tau} = S(X_{t} + F(X_{t})\tau +
\Delta_{\tau}),
\label{eq_X}
\end{equation}
where
\begin{equation}
S(x) = \left\{\! \begin{array}{ll}
x, & |x| \leq l,
\\ [6pt]
\mathrm{sgn}(x)\,l, & |x| > l
\end{array}
\right.
\label{S(x)}
\end{equation}
is the saturation function, $2l$ is its width (domain size), and $\mathrm{sgn} (x)$ is the signum function. According to \textcolor{blue}{Eq.\ (\ref{eq_X})} and definition \textcolor{blue}{(\ref{S(x)})}, the nonlinear random process $X_{t}$ is bounded, i.e., if $X_{0} \in [-l, l]$, then $X_{t}$ evolves in such a way that $|X_{t}| \leq l$ for all $t \geq 0$. Note that this equation reduces to \textcolor{blue}{Eq.\ (\ref{Lang_dif})} when $l \to \infty$.
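In code, the update rule \textcolor{blue}{(\ref{eq_X})} amounts to a clipped Euler step. The sketch below is our own minimal illustration (the function names, the value of $l$, and the choice of a Gaussian noise increment are ours, not part of the model); it makes the boundedness manifest:

```python
import random

L_DOMAIN = 1.0  # half-width l of the bounded domain (illustrative value)

def saturation(x, l=L_DOMAIN):
    """The saturation function S(x): identity on [-l, l], clipped to +/-l outside."""
    return max(-l, min(l, x))

def bounded_step(x, tau, F, sample_increment, l=L_DOMAIN):
    """One update X_{t+tau} = S(X_t + F(X_t)*tau + Delta_tau) of the bounded
    difference Langevin equation; sample_increment(tau) draws Delta_tau."""
    return saturation(x + F(x) * tau + sample_increment(tau), l)

# Example: no drift, Gaussian increments of variance tau (any noise is allowed here)
rng = random.Random(0)
x = 0.0
for _ in range(1000):
    x = bounded_step(x, tau=0.01, F=lambda y: 0.0,
                     sample_increment=lambda tau: rng.gauss(0.0, tau ** 0.5))
```

However many steps are taken, the trajectory never leaves $[-l, l]$, exactly as required of the bounded process.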
Although in \textcolor{blue}{Eq.\ (\ref{eq_X})} any noise can be used, next we explore Poisson white noise only, which is defined as a sequence of delta pulses (see, e.g., Ref.\ \textcolor{blue} {\cite{Grig2002}} and references therein):
\begin{equation}
\xi(t) = \sum_{i=1}^{n(t)}z_{i}
\delta(t-t_{i}).
\label{def_xi}
\end{equation}
Here, $n(t)$ denotes the Poisson counting process, which is characterized by the probability $Q_{n}(t) = (\lambda t)^{n} e^{-\lambda t}/ n!$ that $n \geq 0$ events occur at random times $t_{i}$ within a given time interval $(0, t]$, $\lambda$ is the rate parameter, $\delta(\cdot)$ is the Dirac $\delta$ function, and $z_{i}$ are independent random variables distributed with the same probability density $q(z)$ [$z \in (-\infty, \infty)$]. It is also assumed that this probability density is symmetric, $q(-z) = q(z)$, and $\xi(t) =0$ if $n(t) =0$. From \textcolor{blue} {(\ref{def_Delta})} and \textcolor{blue} {(\ref{def_xi})} it follows that in the case of Poisson white noise the random variable $\Delta_{\tau}$ is the compound Poisson process \textcolor{blue}{\cite{Grig2002}}, i.e.,
\begin{equation}
\Delta_{\tau} =
\left\{\! \begin{array}{ll}
0, & n(\tau)=0,
\\
\sum\nolimits_{i=1}
^{n(\tau)}\!z_{i},
& n(\tau) \geq 1.
\end{array}
\right.
\label{Delta}
\end{equation}
Since $\tau \to 0$, the probability density $p_{\tau}(z)$ that $\Delta_{\tau} = z$ is written in the linear approximation in $\tau$ as \textcolor{blue}{\cite{DeHoHa_EPJB2009}}
\begin{equation}
p_{\tau}(z) = (1 - \lambda \tau) \delta(z)
+ \lambda \tau q(z).
\label{p2}
\end{equation}
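A minimal sampler for the compound Poisson increment $\Delta_\tau$ in \textcolor{blue}{(\ref{Delta})} (the helper names are ours, and the pulse-size distribution is supplied by the caller) lets one check the linear-in-$\tau$ approximation \textcolor{blue}{(\ref{p2})} numerically: the increment vanishes with probability $e^{-\lambda\tau} = 1 - \lambda\tau + \mathcal{O}(\tau^2)$.

```python
import math
import random

def sample_poisson(mean, rng):
    """Number of pulses n(tau) in a window, with mean = lambda*tau (Knuth's method)."""
    limit = math.exp(-mean)
    n, prod = 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    return n

def sample_increment(lam, tau, sample_z, rng):
    """Compound Poisson increment Delta_tau: the sum of n(tau) pulse sizes z_i,
    equal to zero when no pulse arrives in the window."""
    return sum(sample_z(rng) for _ in range(sample_poisson(lam * tau, rng)))

# Empirical check of the zero-increment probability for small lambda*tau
rng = random.Random(1)
draws = [sample_increment(1.0, 0.01, lambda r: r.uniform(-1.0, 1.0), rng)
         for _ in range(100_000)]
zero_fraction = sum(1 for d in draws if d == 0) / len(draws)
```

For $\lambda\tau = 0.01$ the zero fraction comes out close to $e^{-0.01} \approx 0.990$, consistent with the weight $(1-\lambda\tau)$ of the $\delta(z)$ term in \textcolor{blue}{(\ref{p2})}.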
\section{Kolmogorov-Feller equation}
\label{KFeq}
\subsection{Time-dependent case}
Our next aim is to derive the Kolmogorov-Feller equation for the normalized time-dependent PDF $P_{t}(x)$ of the bounded process $X_{t}$ governed by \textcolor{blue}{Eq.\ (\ref{eq_X})}. Using the definition $P_{t}(x) = \langle \delta(x - X_{t}) \rangle$, where $x \in [-l, l]$ and the angular brackets denote averaging over all realizations of $X_{t}$, together with the two-step averaging procedure for $\langle \delta(x - X_{t+\tau}) \rangle$ \textcolor{blue} {\cite{DeViHo_PRE2003}}, we can write
\begin{align}
P_{t+\tau}(x) &= \langle
\delta [x - S(X_{t} + F(X_{t})\tau + \Delta_{\tau})]
\rangle
\nonumber \\[3pt]
&= \int_{-l}^{l}\! P_{t}(x')\,
\Bigg(\! \int_{-\infty}^{\infty}\! p_{\tau}(z)
\delta [x - S(x' + F(x')\tau + z)]\,dz
\Bigg)\,dx'.
\label{P1}
\end{align}
Taking also into account the representation
\begin{equation}
P_{t}(x) = \int_{-l}^{l}\! P_{t}(x')\,
\Bigg(\! \int_{-\infty}^{\infty}\! p_{\tau}(z)
\delta (x - x')\,dz \Bigg)\,dx'
\label{P2}
\end{equation}
(it holds due to the normalization condition $\int_{-\infty}^ {\infty} p_{\tau}(z)\,dz = 1$ and shifting property of the $\delta$ function) and the definition $(\partial/ \partial t)P_{t}(x) = \lim_{\tau \to 0} [P_{t+\tau}(x) - P_{t}(x)]/ \tau$, from \textcolor{blue}{(\ref{P1})} and \textcolor{blue} {(\ref{P2})} one obtains
\begin{equation}
\frac{\partial}{\partial t}P_{t}(x) =
\int_{-l}^{l}K(x,x')P_{t}(x')\,dx',
\label{eq_P}
\end{equation}
where
\begin{equation}
K(x,x') = \lim_{\tau \to 0} \frac{1}
{\tau}\! \int_{-\infty}^{\infty}p_{\tau}
(z) \{ \delta [x - S(x' + F(x')\tau + z)]
- \delta (x-x')\}\,dz
\label{K1}
\end{equation}
($x,x' \in [-l,l]$) is the kernel of the master equation \textcolor{blue}{(\ref{eq_P})}.
In order to derive the Kolmogorov-Feller equation associated with \textcolor{blue}{Eq.\ (\ref{eq_X})} at $\tau \to 0$, we first substitute the probability density \textcolor{blue} {(\ref{p2})} into \textcolor{blue} {(\ref{K1})}. After integration over $z$ one gets
\begin{align}
K(x,x') = &\lim_{\tau \to 0} \frac{1}
{\tau} \bigg\{ (1 - \lambda \tau)\delta
[x - S(x' + F(x')\tau)] - \delta (x - x')
\nonumber \\[3pt]
&+ \lambda \tau \int_{-\infty}^{\infty}
q(z) \delta [x - S(x' + F(x')\tau + z)]\,dz
\bigg\}.
\label{K1_1}
\end{align}
Then, replacing $S(x' + F(x')\tau + z)$ by $S(x' + z)$ (this is possible because only terms of the order of $\tau$ in braces contribute to the limit) and taking into account that $S(x') = x'$ and, in the linear approximation,
\begin{equation}
\delta[x - S(x' + F(x')\tau)] = \delta
(x - x') - \tau \frac{\partial} {\partial x}
\delta(x - x') F(x'),
\label{expr}
\end{equation}
the kernel \textcolor{blue} {(\ref{K1_1})} can be rewritten in the form
\begin{equation}
K(x,x') = -\frac{\partial} {\partial x}
\delta(x - x')F(x') - \lambda \delta
(x - x') + \lambda \int_{-\infty}^{\infty}
\!q(z) \delta [x - S(x' + z)]\,dz.
\label{K1_2}
\end{equation}
Finally, using in \textcolor{blue} {(\ref{K1_2})} the representation
\begin{align}
\int_{-\infty}^{\infty}q(z) \delta
[x - S(x' + z)]\,dz =
& \, \delta (x + l) \int_{-\infty}^
{-l-x'}q(z) dz + \delta (x - l)
\int_{l-x'}^{\infty}q(z)\,dz
\nonumber \\[3pt]
& + \int_{-l - x'}^{l - x'}
q(z) \delta (x - x' - z)\,dz,
\label{int}
\end{align}
which directly follows from the definition \textcolor{blue} {(\ref{S(x)})} of the saturation function, the integral formula $\int_{-l - x'}^{l - x'} q(z) \delta (x - x' - z)\,dz = q(x - x')$, and the exceedance probability defined as
\begin{equation}
R(z) = \int_{z}^{\infty} q(z')\,dz'
\label{def_R}
\end{equation}
[$R(-\infty)=1$, $R(0)=1/2$, $R(\infty)=0$], we obtain
\begin{align}
K(x,x') = &- \frac{\partial}{\partial x}
\delta(x-x') F(x') - \lambda \delta(x-x')
+ \lambda \delta(x-l) R(l-x')
\nonumber \\[3pt]
& + \lambda \delta(x+l)R(l+x') +
\lambda q(x-x').
\label{K2}
\end{align}
Now, substituting this kernel into \textcolor{blue}{Eq.\ (\ref{eq_P})}, we get the Kolmogorov-Feller equation
\begin{align}
\frac{1}{\lambda}\frac{\partial}
{\partial t} P_{t}(x) &+ \frac{1}
{\lambda}\frac{\partial}{\partial x}
F(x) P_{t}(x) + P_{t}(x) = \delta(x-l)
\!\int_{-l}^{l} R(l-x') P_{t}(x')\,dx'
\nonumber \\[3pt]
&+ \delta(x+l)\! \int_{-l}^{l} R(l+x')
P_{t}(x')\,dx' + \int_{-l}^{l} q(x-x')
P_{t}(x')\,dx',
\label{KF}
\end{align}
which corresponds to the difference Langevin equation \textcolor{blue} {(\ref{eq_X})} with $\tau \to 0$ (note that the Kolmogorov-Feller equation for $F(x)=0$ was derived in Ref.\ \textcolor{blue} {\cite{DeBy_PhysA2018}}). As usual, \textcolor{blue}{Eq.\ (\ref{KF})} should be supplemented by the normalization, $\int_{-l}^{l} P_{t}(x)\,dx = 1$, and initial, $P_{0}(x) = \delta(x-X_{0})$, conditions. It should also be emphasized that, according to \textcolor{blue} {\cite{DeBy_PhysA2018}}, no boundary conditions at $x=\pm l$ are needed to solve this equation.
\subsection{Stationary PDF and its representation}
Our future efforts will be focused only on the stationary PDF $P_{\mathrm{st}}(x) = \lim_{t \to \infty} P_{t}(x)$ at $F(x)=0$. Since by assumption $q(-z) = q(z)$, in this case the stationary PDF is symmetric, $P_{\mathrm{st}} (-x) = P_{\mathrm{st}} (x)$, and, as it follows from \textcolor{blue}{Eq.\ (\ref{KF})}, satisfies the integral equation
\begin{equation}
P_{\mathrm{st}}(x) = [\delta(x-l) +
\delta(x+l)]\! \int_{-l}^{l} R(l-x')
P_{\mathrm{st}}(x')\,dx' + \!\int_{-l}^{l}
q(x-x')P_{\mathrm{st}}(x')\,dx'.
\label{KFst}
\end{equation}
According to \textcolor{blue}{\cite{DeBy_PhysA2018}}, the general solution of \textcolor{blue}{Eq.\ (\ref{KFst})} can be represented in the form
\begin{equation}
P_{\mathrm{st}}(x) = a[\delta(x-l) +
\delta(x+l)] + f(x),
\label{Pst}
\end{equation}
where $a$ is the probability that $X_{t}$ in the stationary state equals $l$ (or $-l$), and the non-normalized probability density $f(x)$ is symmetric, $f(-x) = f(x)$, and is governed by the integral equation
\begin{equation}
f(x) = a[q(x-l) + q(x+l)] + \int_{-l}^{l}
q(x-x')f(x')\,dx'.
\label{f_eq}
\end{equation}
Using \textcolor{blue} {(\ref{Pst})} and the normalization condition $\int_{-l}^{l} P_{\mathrm{st}}(x)\,dx = 1$, the probability $a$ of the extremal values of the process $X_{t}$ in the stationary state can be expressed through the non-normalized PDF $f(x)$ as follows:
\begin{equation}
a = \frac{1}{2} - \int_{0}^{l} f(x)\,dx.
\label{def_a}
\end{equation}
\section{Exact solutions for uniform jumps}
\label{ExSol}
\subsection{Basic equations}
\label{BasEq}
In order to solve \textcolor{blue}{Eq.\ (\ref{f_eq})} analytically, we restrict ourselves to the case when the jump magnitudes $z_{i}$ are uniformly distributed on the interval $[-c, c]$ ($c>0$ is the half-width of this distribution). In other words, we assume that the probability density $q(z)$ is given by
\begin{equation}
q(z) = \left\{\! \begin{array}{ll}
1/2c, & |z| \leq c,
\\
0, & |z| > c.
\end{array}
\right.
\label{q(z)}
\end{equation}
Depending on the value of $c$, \textcolor{blue}{Eq.\ (\ref{f_eq})} can be rewritten in three different forms. First, if $c>2l$, then
\begin{equation}
q(x-l) = q(x+l) = q(x-x') = \frac{1}{2c}
\label{q1}
\end{equation}
for all $x,x' \in [-l, l]$, and \textcolor{blue}{Eq.\ (\ref{f_eq})} reduces to
\begin{equation}
f(x) = \frac{a}{c} + \frac{1}{c}
\int_{0}^{l}f(x')\,dx'.
\label{f_eq1}
\end{equation}
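Note that the right-hand side of \textcolor{blue}{Eq.\ (\ref{f_eq1})} does not depend on $x$, so in this case $f(x) = f_{0} = \mathrm{const}$, and the equation together with \textcolor{blue}{(\ref{def_a})} is solved immediately (a short calculation included here for illustration):

```latex
% Constant ansatz f(x) = f_0 in Eq. (f_eq1), then normalization via Eq. (def_a):
\begin{equation*}
f_{0} = \frac{a}{c} + \frac{l}{c}\, f_{0}
\;\;\Longrightarrow\;\;
f_{0} = \frac{a}{c-l}\,, \qquad
a = \frac{1}{2} - l\, f_{0}
\;\;\Longrightarrow\;\;
a = \frac{c-l}{2c}\,, \quad f_{0} = \frac{1}{2c}\,.
\end{equation*}
```

Thus, for $c > 2l$ the regular part of the stationary PDF is uniform on $(-l, l)$, and the probability $a$ of each extremal value grows towards $1/2$ as $c/l \to \infty$.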
Second, if $c \in (l, 2l)$, then
\begin{equation}
q(x-l) = \left\{\! \begin{array}{ll}
0, & x \in [-l, l-c),
\\
1/2c, & x \in [l-c, l],
\end{array}
\right.
\quad
q(x+l) = \left\{\! \begin{array}{ll}
1/2c, & x \in [-l, c-l],
\\
0, & x \in (c-l, l]
\end{array} \label{q2}
\right.
\end{equation}
and
\begin{equation}
\int_{-l}^{l} q(x-x')f(x')\,dx' =
\frac{1}{2c} \times
\left\{\! \begin{array}{ll}
\int_{-l}^{x+c}f(x')\,dx',
& x \in [-l, l-c],
\\ [3pt]
\int_{-l}^{l}f(x')\,dx',
& x \in [l-c, c-l],
\\ [3pt]
\int_{x-c}^{l}f(x')\,dx',
& x \in [c-l, l].
\end{array}
\right.
\label{Int_q2}
\end{equation}
Using these results, from \textcolor{blue}{Eq.\ (\ref{f_eq})} one obtains the following integral equations:
\begin{subequations}
\label{f_eq2}
\begin{equation}
\label{f_eq2a}
f(x) = \frac{a}{2c} + \frac{1}{2c}
\int_{-l}^{x+c}f(x')\,dx'
\end{equation}
for $x \in [-l, l-c)$,
\begin{equation}
\label{f_eq2b}
f(x) = \frac{a}{c} + \frac{1}{2c}
\int_{-l}^{l}f(x')\,dx'
\end{equation}
for $x \in (l-c, c-l)$, and
\begin{equation}
\label{f_eq2c}
f(x) = \frac{a}{2c} + \frac{1}{2c}
\int_{x-c}^{l}f(x')\,dx'
\end{equation}
for $x \in (c-l, l]$.
\end{subequations}
And third, if $c \in (0, l)$, then the probability densities $q(x-l)$ and $q(x+l)$ are given by the same formulas \textcolor{blue} {(\ref{q2})}, and
\begin{equation}
\int_{-l}^{l} q(x-x')f(x')\,dx' =
\frac{1}{2c} \times
\left\{\! \begin{array}{ll}
\int_{-l}^{x+c}f(x')\,dx',
& x \in [-l, c-l],
\\ [3pt]
\int_{x-c}^{x+c}f(x')\,dx',
& x \in [c-l, l-c],
\\ [3pt]
\int_{x-c}^{l}f(x')\,dx',
& x \in [l-c, l].
\end{array}
\right.
\label{Int_q3}
\end{equation}
Hence, in this case \textcolor{blue}{Eq.\ (\ref{f_eq})} yields
\begin{subequations}
\label{f_eq3}
\begin{equation}
\label{f_eq3a}
f(x) = \frac{a}{2c} + \frac{1}{2c}
\int_{-l}^{x+c}f(x')\,dx'
\end{equation}
for $x \in [-l, c-l)$,
\begin{equation}
\label{f_eq3b}
f(x) = \frac{1}{2c}
\int_{x-c}^{x+c}f(x')\,dx'
\end{equation}
for $x \in (c-l, l-c)$, and
\begin{equation}
\label{f_eq3c}
f(x) = \frac{a}{2c} + \frac{1}{2c}
\int_{x-c}^{l}f(x')\,dx'
\end{equation}
for $x \in (l-c, l]$.
\end{subequations}
A remarkable advantage of \textcolor{blue}{Eqs.\ (\ref{f_eq1})}, \textcolor{blue} {(\ref{f_eq2})} and \textcolor{blue} {(\ref{f_eq3})} is that they can be solved analytically; moreover, the choice of $q(z)$ in the form \textcolor{blue} {(\ref{q(z)})} permits us to characterize the complexity of the function $f(x)$ by a single ratio parameter $\sigma = 2l/c$. In particular, it will be demonstrated that, if $\sigma \in (n-1, n)$ with $n= \overline{1, \infty}$, then $f(x)$ is a piecewise continuous function which, in general, consists of $2n-1$ branches. These branches are separated from each other by the $2(n-1)$ points $\pm x_{k}$, where $k = \overline{1, n-1}$ ($n\geq 2$) and $x_{k} = |2k/\sigma -1|l$, at which the function $f(x)$ can be either continuous or discontinuous (with a jump discontinuity). The number of branches changes at the critical values $\sigma_{\mathrm{ cr}} = n-1$ of the ratio parameter $\sigma$. Next, we determine the function $f(x)$ for $n=1,2,3$ and $n \to \infty$, calculate the probability $a$, and compare the analytical results with those obtained by numerical simulations of \textcolor{blue}{Eq.\ (\ref{eq_X})}.
\subsection{Solution at \texorpdfstring{$\sigma \in (0,1)$}{Lg}}
The condition $n=1$ [i.e., $\sigma \in (0,1)$] means that $c>2l$ and hence the function $f(x)$ obeys \textcolor{blue}{Eq.\ (\ref{f_eq1})}, according to which $f(x) = f = \mathrm{const}$. The substitution of $f(x) = f$ into \textcolor{blue}{Eq.\ (\ref{f_eq1})} and condition \textcolor{blue}{(\ref{def_a})} yields a set of equations $f = a/c + fl/c$ and $a = 1/2 -fl$. Solving it with respect to $f$ and $a$ and introducing the reduced non-normalized probability density $\tilde{f}( \tilde{x})$ as $\tilde{f} (\tilde{x}) = f(l\tilde{x})l$\, ($\tilde{x} = x/l$) and $\tilde{f}$ as $\tilde{f} = fl$, we obtain
\begin{equation}
\tilde{f} = \frac{\sigma}{4},
\quad
a= \frac{1}{2} - \frac{\sigma}{4}.
\label{f1,a1}
\end{equation}
From this, using a general representation
\begin{equation}
\tilde{P}_{\mathrm{st}}(\tilde{x}) =
a[\delta(\tilde{x}-1) + \delta(
\tilde{x}+1)] + \tilde{f} (\tilde{x})
\label{red_Pst}
\end{equation}
of the reduced PDF $\tilde{P}_{\mathrm{st}} (\tilde{x}) = P_{\mathrm{st}} (l\tilde{x})l$, one gets
\begin{equation}
\tilde{P}_{\mathrm{st}}(\tilde{x}) =
\bigg( \frac{1}{2} - \frac{\sigma}{4}
\bigg)\, [\delta(\tilde{x}-1) + \delta(
\tilde{x}+1)] + \frac{\sigma}{4}.
\label{Pst1}
\end{equation}
Thus, at $n=1$ the non-normalized probability density is uniform, i.e., $\tilde{f} (\tilde{x}) = \tilde{f}$ for all $|\tilde{x}| \leq 1$ (only one branch exists in this case). According to \textcolor{blue} {(\ref{f1,a1})}, the probability density $\tilde{f}$ decreases and the probability $a$ increases as the ratio parameter $\sigma$ decreases. For small $\sigma$, these results can be understood by noting that the mean value of $|z_{i}|$, which we denote as $Z$, is inversely proportional to $\sigma$. Indeed, since $Z = \int_{-\infty}^ {\infty} |z|q(z)\,dz = l/ \sigma$, the higher $Z$ is (i.e., the lower $\sigma$ is), the higher is the probability $a$ and hence the lower is the probability density $\tilde{f}$. As illustrated in \textcolor{blue}{Fig.\ \ref{fig1}}, the above theoretical results are in complete agreement with those obtained by solving \textcolor{blue}{Eq.\ (\ref{eq_X})} numerically.
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig1.eps}
\caption{Reduced non-normalized probability
density $\tilde{f}(\tilde{x})$ as a function of
the reduced variable $\tilde{x} = x/l$ for
$\sigma = 0.4$ and $\sigma = 0.8$. The solid
horizontal lines represent the theoretical
result \textcolor{blue} {(\ref{f1,a1})}
for $\tilde{f}$, and triangle symbols represent
the results of numerical simulations of \textcolor{blue}
{Eq.\ (\ref{eq_X})}. The theoretical values
of the probability $a$ ($a=0.4$ for $\sigma =
0.4$ and $a=0.3$ for $\sigma = 0.8$) are also
in good agreement with the numerical ones
($a \approx a_{-} \approx a_{+}$).}
\label{fig1}
\end{figure}
In order to derive these and other numerical results, we proceed as follows (see also Ref.\ \textcolor{blue} {\cite{DeBy_PhysA2018}}). First, choosing the time step $\tau = 10^{-3}$ and setting $l=1$, $\lambda=1$ and $X_{0}=0$ (here, the model parameters are chosen to be dimensionless), from \textcolor{blue}{Eq.\ (\ref{eq_X})} we find $X_{M\tau}$ for $N = 10^{6}$ simulation runs. Because we are concerned with the stationary state, the number of steps is taken to be large enough: $M=10^{4}$. Then, the interval $(-1,1)$ is divided into $K=50$ subintervals of width $\delta = 2/K$, and the reduced non-normalized probability density is defined as $\tilde{f} (\overline{x}_{m}) = N_{m}/(\delta N)$, where $\overline{x}_{m}$ is the middle position of the $m$-th subinterval, $m = \overline{1,K}$, and $N_{m}$ is the number of runs for which $X_{M\tau}$ belongs to the $m$-th subinterval. Finally, the probability $a$ is defined as $a = (a_{-} + a_{+})/2$, where $a_{-} = N_{-}/N$, $a_{+} = N_{+}/N$, and $N_{-}$ and $N_{+}$ are the numbers of runs for which $X_{M\tau} = -l$ and $X_{M\tau} = l$, respectively.
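This procedure can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' code: since \textcolor{blue}{Eq.\ (\ref{f_eq})} is the invariant-measure condition of a jump-and-clip chain, we assume here that sampling the stationary state amounts to repeatedly adding a pulse $z \sim \mathrm{Uniform}(-c,c)$ and clipping the result to $[-l,l]$ (a fixed number of pulses replaces the Poisson clock, which does not affect the stationary law; all names and parameter values are illustrative).

```python
import numpy as np

def simulate_a(sigma, n_runs=200_000, n_jumps=60, seed=0):
    """Monte Carlo estimate of the extreme-value probability a for the
    bounded jump process, assuming the jump-and-clip dynamics described
    above (l = 1; names and parameter values are illustrative)."""
    l = 1.0
    c = 2.0 * l / sigma                      # sigma = 2l/c
    rng = np.random.default_rng(seed)
    x = np.zeros(n_runs)                     # all runs start at X_0 = 0
    for _ in range(n_jumps):
        x = np.clip(x + rng.uniform(-c, c, n_runs), -l, l)
    a_minus = np.mean(x == -l)               # np.clip returns the bounds exactly
    a_plus = np.mean(x == l)
    return 0.5 * (a_minus + a_plus)

# For sigma in (0, 1) the theory above gives a = 1/2 - sigma/4.
a_est = simulate_a(0.8)
```

For $\sigma = 0.8$ the estimate should approach the theoretical value $a = 1/2 - \sigma/4 = 0.3$ quoted in the caption of Fig.~\ref{fig1}.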
\subsection{Solution at \texorpdfstring{$\sigma \in (1,2)$}{Lg}}
If $n=2$, then $\sigma \in (1,2)$ and so $c \in (l, 2l)$. Therefore, in this case the non-normalized probability density $f(x)$ must satisfy \textcolor{blue}{Eqs.\ (\ref{f_eq2})}. Assuming that $x \in [-l, l-c)$ and taking into account that $\int_{-l}^{0} f(x')\,dx' = (1-2a)/2$, we can rewrite \textcolor{blue}{Eq.\ (\ref{f_eq2a})} in the form
\begin{equation}
f(y) = \frac{1}{4c} + \frac{1}{2c}
\int_{0}^{c-l}f(x')\,dx' + \frac{1}{2c}
\int_{c-l}^{y+c}f(x')\,dx'.
\label{f_eq2a1}
\end{equation}
Here, for convenience of future calculations, we temporarily replaced the variable $x$ by $y$. By differentiating \textcolor{blue}{Eq.\ (\ref{f_eq2a1})} with respect to $y$, we get the equation
\begin{equation}
\frac{d}{dy}f(y) = \frac{1}
{2c}f(y+c),
\label{f_eq2a2}
\end{equation}
which belongs to a class of differential difference equations (see, e.g., Ref.\ \textcolor{blue} {\cite{BeCo1963}}).
If $x \in (l-c, c-l)$, then, using \textcolor{blue} {(\ref{f_eq2b})} and condition \textcolor{blue} {(\ref{def_a})}, we immediately find
\begin{equation}
f(x) = \frac{1}{2c}.
\label{f2b}
\end{equation}
With this result, \textcolor{blue}{Eq.\ (\ref{f_eq2a1})} is reduced to
\begin{equation}
f(y) = \frac{1}{2c} - \frac{l}
{4c^{2}} + \frac{1}{2c}
\int_{c-l}^{y+c}f(x')\,dx'.
\label{f_eq2a3}
\end{equation}
Finally, if $x \in (c-l, l]$, then it is reasonable to divide the interval of integration in \textcolor{blue}{Eq.\ (\ref{f_eq2c})} into three subintervals $(x-c, l-c)$, $(l-c, 0)$ and $(0, l]$. This, together with the above results \textcolor{blue}{(\ref{def_a})} and \textcolor{blue}{(\ref{f2b})} and the substitution $x=y+c$, permits us to represent \textcolor{blue}{Eq.\ (\ref{f_eq2c})} in the form
\begin{equation}
f(y+c) = \frac{1}{2c} - \frac{l}
{4c^{2}} + \frac{1}{2c}
\int_{y}^{l-c}f(x')\,dx',
\label{f_eq2c1}
\end{equation}
which, after differentiating with respect to $y$, yields the following differential difference equation:
\begin{equation}
\frac{d}{dy}f(y + c) =
- \frac{1}{2c}f(y).
\label{f_eq2c2}
\end{equation}
The set of differential difference equations \textcolor{blue} {(\ref{f_eq2a2})} and \textcolor{blue} {(\ref{f_eq2c2})} determines the non-normalized probability density on the intervals $[-l, l-c)$ and $(c-l, l]$. Its remarkable feature is that it can be reduced (by a single differentiation of these equations with respect to $y$) to a set of independent ordinary differential equations
\begin{subequations}\label{f_eq2ac}
\begin{gather}
\frac{d^{2}}{dy^{2}}f(y) +
\frac{1}{4c^{2}}f(y) =0,
\label{f_eq2a4}\\
\frac{d^{2}}{dy^{2}}f(y + c) +
\frac{1}{4c^{2}}f(y + c) =0.
\label{f_eq2c3}
\end{gather}
\end{subequations}
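To make the reduction explicit: differentiating \textcolor{blue}{Eq.\ (\ref{f_eq2a2})} once more with respect to $y$ and eliminating the shifted term by means of \textcolor{blue}{Eq.\ (\ref{f_eq2c2})} gives
\begin{displaymath}
\frac{d^{2}}{dy^{2}}f(y) = \frac{1}{2c}\,
\frac{d}{dy}f(y+c) = -\frac{1}{4c^{2}}f(y),
\end{displaymath}
which is \textcolor{blue}{Eq.\ (\ref{f_eq2a4})}; differentiating \textcolor{blue}{Eq.\ (\ref{f_eq2c2})} and using \textcolor{blue}{Eq.\ (\ref{f_eq2a2})} yields \textcolor{blue}{Eq.\ (\ref{f_eq2c3})} in the same way.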
Since $-y \in (c-l, l]$ and $f(-y) = f(y)$, \textcolor{blue}{Eq.\ (\ref{f_eq2c3})} is equivalent to \textcolor{blue}{Eq.\ (\ref{f_eq2a4})}. Therefore, returning to the variable $x$, from the equation
\begin{equation}
\frac{d^{2}}{dx^{2}}f(x) +
\frac{1}{4c^{2}}f(x) =0
\label{f_eq2gen}
\end{equation}
we find the function $f(x)$ at $x \in [-l, l-c)$,
\begin{equation}
f(x) = \alpha \cos{\frac{x}{2c}}
+ \beta \sin{\frac{x}{2c}}
\label{f2a}
\end{equation}
($\alpha$ and $\beta$ are parameters to be determined), and at $x \in (c-l, l]$,
\begin{equation}
f(x) = \alpha' \cos{\frac{x}{2c}}
+ \beta' \sin{\frac{x}{2c}}.
\label{f2c}
\end{equation}
Taking also into account that $f(-x) = f(x)$, one can verify that $\alpha' = \alpha$ and $\beta' = - \beta$. Thus, collecting the above results, for the non-normalized probability density $f(x)$ we obtain a general representation
\begin{equation}
f(x) =
\left\{\! \begin{array}{ll}
\alpha \cos{(x/2c)}
+ \beta \sin{(x/2c)},
& x \in [-l, l-c),
\\
1/2c, & x \in (l-c, c-l),
\\
\alpha \cos{(x/2c)}
- \beta \sin{(x/2c)},
& x \in (c-l, l].
\end{array}
\right.
\label{f2}
\end{equation}
To find the parameters $\alpha$ and $\beta$, we use \textcolor{blue}{Eq.\ (\ref{f_eq2a3})} with $y = x \in [-l, l-c)$. Substituting \textcolor{blue}{(\ref{f2})} into \textcolor{blue}{Eq.\ (\ref{f_eq2a3})}, we arrive at the equation
\begin{gather}
\left[ \alpha \left(1 -\sin{\frac{1}{2}}
\right) - \beta \cos{\frac{1}{2}} \right]
\cos{\frac{x}{2c}} + \left[ \beta \left(1 +
\sin{\frac{1}{2}}\right) - \alpha \cos{
\frac{1}{2}} \right] \sin{\frac{x}{2c}}
\nonumber \\[4pt]
+\, \alpha \sin{\frac{c-l}{2c}} + \beta
\cos{\frac{c-l}{2c}} - \frac{1}{2c} +
\frac{l}{4c^{2}} = 0.
\label{eq_ab1}
\end{gather}
It holds for all $x$ only if three conditions
\begin{subequations}
\begin{gather}
\alpha \left(1 -\sin{\frac{1}{2}}
\right) - \beta \cos{\frac{1}{2}} = 0,
\quad
\beta \left(1 + \sin{\frac{1}{2}}\right)
- \alpha \cos{\frac{1}{2}} = 0,
\label{eq_ab2}
\\[4pt]
\alpha \sin{\frac{c-l}{2c}} + \beta
\cos{\frac{c-l}{2c}} - \frac{1}{2c} +
\frac{l}{4c^{2}} = 0
\label{eq_ab2'}
\end{gather}
\end{subequations}
are simultaneously satisfied. Since conditions in \textcolor{blue} {(\ref{eq_ab2})} are equivalent (this can be verified directly), we can consider one of them (e.g., the first one) and condition \textcolor{blue} {(\ref{eq_ab2'})} as a set of linear equations for $\alpha$ and $\beta$. The straightforward solution of these equations leads to
\begin{equation}
\alpha = \frac{1}{l}\frac{\sigma(1- \sigma/4)\cos{[(\pi-1)/4]}}
{4\cos{[(\sigma + \pi - 1)/4]}},
\quad
\beta = \frac{1}{l}\frac{\sigma(1-
\sigma/4)\sin{[(\pi-1)/4]}}
{4\cos{[(\sigma + \pi - 1)/4]}}.
\label{alpha,beta}
\end{equation}
Formulas \textcolor{blue} {(\ref{f2})} and \textcolor{blue} {(\ref{alpha,beta})} completely determine the non-normalized probability density function $f(x)$ in the case when $\sigma \in (1,2)$. Since $f(x)$ is expressed in terms of trigonometric functions, the integral in \textcolor{blue} {(\ref{def_a})} can be calculated analytically, yielding
\begin{equation}
a = \frac{\sigma}{4} - \sqrt{2}\,
\frac{(1-\sigma/4) \sin{[(\sigma-1)
/4]}}{\cos{[(\sigma + \pi - 1)/4]}}.
\label{a2}
\end{equation}
For convenience of analysis, we rewrite the non-normalized probability density \textcolor{blue} {(\ref{f2})} in the reduced form
\begin{equation}
\tilde{f}(\tilde{x}) =
\left\{\! \begin{array}{ll}
\alpha l\cos{(\sigma \tilde{x}/4)}
+ \beta l\sin{(\sigma \tilde{x}/4)},
& \tilde{x} \in [-1, -\tilde{x}_{1}),
\\
\sigma/4, & \tilde{x} \in (-\tilde{x}_{1},
\tilde{x}_{1}),
\\
\alpha l\cos{(\sigma \tilde{x}/4)}
- \beta l\sin{(\sigma \tilde{x}/4)},
& \tilde{x} \in (\tilde{x}_{1}, 1],
\end{array}
\right.
\label{red_f2}
\end{equation}
where $\tilde{x}_{1} = |2/\sigma - 1|$ (this definition of $\tilde{x}_{1}$ will be used for $\sigma \in (2,3)$ as well). The properties of this probability density are rather unexpected. Indeed, in contrast to the previous case, the function $\tilde{f} (\tilde{x})$ now has three branches and is discontinuous at $\tilde{x} = \pm \tilde{x}_{1}$. We emphasize that this qualitative change in the behavior of $\tilde{f} (\tilde{x})$ occurs when the ratio parameter $\sigma$ exceeds the critical value $\sigma_{\mathrm{cr}} = 1$. Using \textcolor{blue} {(\ref{red_f2})} and \textcolor{blue} {(\ref{alpha,beta})}, it can be shown that
\begin{equation}
\tilde{f}(\pm 1) = \frac{\sigma}{4}\left(
1 - \frac{\sigma}{4} \right), \quad
\tilde{f}(\pm \tilde{x}_{1} \pm 0) = \frac{
\sigma}{4}\left(1 - \frac{\sigma}{4} \right)
\tan{\frac{\sigma + \pi -1}{4}}
\label{f2_lim}
\end{equation}
and $\tilde{f}(\pm 1) < \tilde{f}(\pm \tilde{x}_{1} \pm 0) < \sigma/4$. With increasing $\sigma$ from $1$ to $2$, the width of the intervals $[-1, -\tilde{x}_{1})$ and $(\tilde{x}_{1}, 1]$, where $\tilde{f}(\tilde{x})$ nonlinearly depends on $\tilde{x}$, increases from $0$ to $1$, and the width of the interval $(-\tilde{x}_{1}, \tilde{x}_{1})$, where $\tilde{f}(\tilde{x})$ does not depend on $\tilde{x}$, decreases from $2$ to $0$.
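These limiting values are easy to check directly from the explicit branches. The following sketch (not part of the original analysis; $l=1$, names are illustrative) evaluates the outer branch of $\tilde{f}(\tilde{x})$ at $\tilde{x}=1$ and at $\tilde{x} = \tilde{x}_{1}+0$ using $\alpha$ and $\beta$ from \textcolor{blue} {(\ref{alpha,beta})}:

```python
import math

def outer_branch_limits(sigma):
    """Evaluate the outer branch of the reduced density f~(x~) at the
    endpoint x~ = 1 and just outside the discontinuity at x~ = x1,
    using the expressions for alpha and beta derived above
    (l = 1, sigma in (1, 2); names are illustrative)."""
    pref = sigma * (1.0 - sigma / 4.0) / (4.0 * math.cos((sigma + math.pi - 1.0) / 4.0))
    al = pref * math.cos((math.pi - 1.0) / 4.0)   # alpha * l
    bl = pref * math.sin((math.pi - 1.0) / 4.0)   # beta * l
    x1 = abs(2.0 / sigma - 1.0)
    f_edge = al * math.cos(sigma / 4.0) - bl * math.sin(sigma / 4.0)        # f~(1)
    f_jump = al * math.cos(sigma * x1 / 4.0) - bl * math.sin(sigma * x1 / 4.0)  # f~(x1 + 0)
    return f_edge, f_jump

sigma = 1.7
f_edge, f_jump = outer_branch_limits(sigma)
# Theoretical values quoted in the text:
base = (sigma / 4.0) * (1.0 - sigma / 4.0)
f_jump_theory = base * math.tan((sigma + math.pi - 1.0) / 4.0)
```

Both evaluations reproduce the quoted closed forms, and the ordering $\tilde{f}(\pm 1) < \tilde{f}(\pm \tilde{x}_{1} \pm 0) < \sigma/4$ holds.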
For the sake of illustration, in \textcolor{blue}{Fig.\ \ref{fig2}} we show the behavior of the reduced non-normalized probability density \textcolor{blue}{(\ref{red_f2})} for two values of the ratio parameter $\sigma$ (solid lines). In order to verify these theoretical results, we performed numerical simulations of \textcolor{blue}{Eq.\ (\ref{eq_X})}, paying a special attention to the vicinities of the points of discontinuity $\pm \tilde{x}_{1}$. As seen from this figure, the numerical results (denoted by triangle symbols) are fully consistent with the theoretical ones.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{Reduced non-normalized probability
density $\tilde{f}(\tilde{x})$ as a function of
the reduced variable $\tilde{x} = x/l$ for
$\sigma = 1.3$ (a) and $\sigma = 1.7$ (b). The
solid lines show theoretical dependencies
obtained from \textcolor{blue}{ (\ref{red_f2})}
and \textcolor{blue} {(\ref{alpha,beta})},
and the triangle symbols indicate the results
obtained by numerical simulations of
\textcolor{blue}{Eq.\ (\ref{eq_X})}.}
\label{fig2}
\end{figure}
\subsection{Solution at \texorpdfstring{$\sigma \in (2,3)$}{Lg}}
To determine the non-normalized probability density $f(x)$ at $n=3$, i.e., when $\sigma \in (2,3)$ or, equivalently, when $c \in (2l/3, l)$, we should use \textcolor{blue}{Eqs.\ (\ref{f_eq3})}. Since in this case the chain of inequalities $-l < l-2c < c-l < l-c < 2c-l < l$ holds, it is reasonable to divide the interval $[-l, l]$ into five subintervals $[-l, l-2c)$, $(l-2c, c-l)$, $(c-l, l-c)$, $(l-c, 2c-l)$ and $(2c-l, l]$. Then, using formula \textcolor{blue}{(\ref{def_a})}, from \textcolor{blue}{Eq.\ (\ref{f_eq3a})} one can derive the equations
\begin{equation}
f(y_{1}) = \frac{1}{4c} + \frac{1}
{2c} \int_{0}^{y_{1}+c}f(x')\,dx'
\label{f_eq3a1}
\end{equation}
and
\begin{equation}
\frac{d}{dy_{1}}f(y_{1}) = \frac{1}{2c}
f(y_{1}+c),
\label{f_eq3a2}
\end{equation}
if $y_{1} \in [-l, l-2c)$, and the equations
\begin{equation}
f(y_{2}) = \frac{1}{4c} + \frac{1}
{2c} \int_{0}^{l-c}f(x')\,dx' +
\frac{1}{2c} \int_{l-c}^{y_{2}+c}f(x')
\,dx'
\label{f_eq3a3}
\end{equation}
and
\begin{equation}
\frac{d}{dy_{2}}f(y_{2}) = \frac{1}{2c}
f(y_{2}+c),
\label{f_eq3a4}
\end{equation}
if $y_{2} \in (l-2c, c-l)$ (we temporarily use the variables $y_{1}$ and $y_{2}$ instead of the variable $x$).
Similarly, \textcolor{blue}{Eq.\ (\ref{f_eq3b})} at $x = y_{1} +c \in (c-l, l-c)$ yields the equation
\begin{equation}
f(y_{1}+c) = \frac{1}{2c}(1-2a) - \frac{1}
{2c} \int_{-l}^{y_{1}}f(x')\,dx' - \frac{1}
{2c} \int_{y_{1} + 2c}^{l}f(x')\,dx',
\label{f_eq3b1}
\end{equation}
from which one immediately obtains
\begin{equation}
\frac{d}{dy_{1}}f(y_{1} + c) = -\frac{1}{2c}
f(y_{1}) + \frac{1}{2c} f(y_{1} + 2c).
\label{f_eq3b2}
\end{equation}
Finally, from \textcolor{blue}{Eq.\ (\ref{f_eq3c})} we find the equations
\begin{equation}
f(y_{2}+c) = \frac{1}{4c} + \frac{1}
{2c} \int_{c-l}^{0}f(x')\,dx' +
\frac{1}{2c}\int_{y_{2}}^{c-l}f(x')\,dx'
\label{f_eq3c1}
\end{equation}
and
\begin{equation}
\frac{d}{dy_{2}}f(y_{2} +c) = -\frac{1}{2c}
f(y_{2}),
\label{f_eq3c2}
\end{equation}
if $x = y_{2} + c \in (l-c, 2c-l)$, and the equations
\begin{equation}
f(y_{1} + 2c) = \frac{1}{4c} + \frac{1}
{2c} \int_{y_{1} + c}^{0}f(x')\,dx'
\label{f_eq3c3}
\end{equation}
and
\begin{equation}
\frac{d}{dy_{1}}f(y_{1} +2c) = -\frac{1}{2c}
f(y_{1}+c),
\label{f_eq3c4}
\end{equation}
if $x = y_{1} + 2c \in (2c-l, l]$.
Let us first consider two sets of the above differential difference equations, namely, a set of \textcolor{blue}{Eqs.\ (\ref{f_eq3a2})}, \textcolor{blue} {(\ref{f_eq3b2})} and \textcolor{blue} {(\ref{f_eq3c4})}, and a set of \textcolor{blue}{Eqs.\ (\ref{f_eq3a4})} and \textcolor{blue} {(\ref{f_eq3c2})}. Remarkably, each of these sets can also be reduced to a set of independent ordinary differential equations that are easily solved. In particular, by differentiating \textcolor{blue}{Eq.\ (\ref{f_eq3b2})} with respect to $y_{1}$ and using \textcolor{blue}{Eqs.\ (\ref{f_eq3a2})} and \textcolor{blue} {(\ref{f_eq3c4})}, we get
\begin{equation}
\frac{d^{2}}{dy_{1}^{2}}f(y_{1} + c)
+ \frac{1}{2c^{2}} f(y_{1} + c) =0.
\label{eq_f1}
\end{equation}
Returning to the variable $x = y_{1} + c$, we can represent the symmetric solution of this equation as
\begin{equation}
f(x) = \mu \cos{\frac{x}{\sqrt{2}c}},
\label{f3a}
\end{equation}
where $x \in (c-l, l-c)$ and $\mu$ is a parameter to be determined. Then, substituting $f(y_{1} + c)$ from \textcolor{blue}{Eq.\ (\ref{f_eq3a2})} into \textcolor{blue}{Eq.\ (\ref{eq_f1})} and returning to the variable $x$, one obtains the equation
\begin{equation}
\frac{d^{3}}{dx^{3}}f(x)
+ \frac{1}{2c^{2}} \frac{d}
{dx} f(x) =0,
\label{eq_f2}
\end{equation}
which holds for both $x \in [-l, l-2c)$ and $x \in (2c-l, l]$. Using the symmetry condition $f(-x) = f(x)$, the solution of this equation can be written in the form
\begin{equation}
f(x) = \left\{\! \begin{array}{ll}
\eta \cos{(x/\!\sqrt{2}c)}
+ \kappa \sin{(x/\!\sqrt{2}c)}
+ \gamma, & x \in [-l, l-2c),
\\
\eta \cos{(x/\!\sqrt{2}c)}
- \kappa \sin{(x/\!\sqrt{2}c)}
+ \gamma, & x \in (2c-l, l].
\end{array}
\right.
\label{f3b}
\end{equation}
Similarly, it can be shown that the set of \textcolor{blue}{Eqs.\ (\ref{f_eq3a4})} and \textcolor{blue} {(\ref{f_eq3c2})} is reduced to \textcolor{blue}{Eq.\ (\ref{f_eq2gen})}, which holds on the intervals $(l-2c, c-l)$ and $(l-c, 2c-l)$. The solution of this equation, satisfying the condition $f(-x) = f(x)$, is given by
\begin{equation}
f(x) = \left\{\! \begin{array}{ll}
\nu \cos{(x/2c)} + \chi \sin{(x/2c)},
& x \in (l-2c, c-l),
\\
\nu \cos{(x/2c)} - \chi \sin{(x/2c)},
& x \in (l-c, 2c-l).
\end{array}
\right.
\label{f3c}
\end{equation}
To determine the unknown parameters in \textcolor{blue} {(\ref{f3a})}, \textcolor{blue} {(\ref{f3b})} and \textcolor{blue} {(\ref{f3c})}, we use \textcolor{blue}{Eqs.\ (\ref{f_eq3a1})}, \textcolor{blue} {(\ref{f_eq3a3})} and \textcolor{blue} {(\ref{f_eq3b1})}. Substituting $f(x)$ from \textcolor{blue} {(\ref{f3a})}, \textcolor{blue} {(\ref{f3b})} and \textcolor{blue} {(\ref{f3c})} into these equations and omitting technical details, we obtain the following representation for the non-normalized probability density:
\begin{equation}
f(x) =
\left\{\! \begin{array}{ll}
(\mu/\!\sqrt{2})\sin{[(c+x)/\!\sqrt{2}c]}
+ 1/4c, & x \in [-l, l-2c),
\\
\nu \left( \cos{(x/2c)} + \sin
{[(c+x)/2c]}\right),
& x \in (l-2c, c-l),
\\
\mu\cos{(x/\!\sqrt{2}c)},
& x \in (c-l, l-c),
\\
\nu \left( \cos{(x/2c)} + \sin
{[(c-x)/2c]}\right),
& x \in (l-c, 2c-l),
\\
(\mu/\!\sqrt{2})\sin{[(c-x)/\!\sqrt{2}c]}
+ 1/4c, & x \in (2c-l, l],
\end{array}
\right.
\label{f3}
\end{equation}
where
\begin{equation}
\mu = \frac{\sigma}{8l}\frac{1}
{\sin{[(\sigma/2 -1)/\!\sqrt{2}]}}
\frac{\sigma/2 -3 + 2
\cot{[(\sigma + \pi -3) /4]}}
{\cot{[(\sigma/2 -1)/
\!\sqrt{2}]} - \!\sqrt{2}
\cot{[(\sigma + \pi -3)/4]}}
\label{mu}
\end{equation}
and
\begin{equation}
\nu = \frac{\sigma}{16l}\frac{\cos{(1/4)}
- \sin{(1/4)}} {\cos{(1/2)}\sin{[(\sigma +
\pi -3) /4]}} \frac{\sigma/2 -3 +
\!\sqrt{2} \cot{[(\sigma/2 -1) /\!\sqrt{2}]}}
{\cot{[(\sigma/2 -1)/\!\sqrt{2}]} - \!\sqrt{2}
\cot{[(\sigma + \pi -3)/4]}}.
\label{nu}
\end{equation}
Finally, by direct integration of $f(x)$, from \textcolor{blue} {(\ref{def_a})} one gets
\begin{equation}
a = \frac{3}{2} - \frac{\sigma}{4}
- \frac{\sqrt{2}}{4} \frac{\sigma/2 -3 +\sqrt{2}
\cot{[(\sigma/2 -1) /\!\sqrt{2}]}}{\cot{[
(\sigma/2 -1)/\!\sqrt{2}]} - \!\sqrt{2} \cot{
[(\sigma + \pi -3)/4]}} \cot{\frac{\sigma +
\pi -3}{4}}.
\label{a3}
\end{equation}
In the reduced form, the non-normalized probability density \textcolor{blue} {(\ref{f3})} is rewritten as
\begin{equation}
\tilde{f}(\tilde{x}) =
\left\{\! \begin{array}{ll}
(\mu l/\!\sqrt{2})\sin{[(2+\sigma\tilde{x})/2\sqrt{2}]}
+ \sigma/8, & \tilde{x} \in [-1, -\tilde{x}_{2}),
\\
\nu l\left( \cos{(\sigma\tilde{x}/4)} + \sin
{[(2+\sigma\tilde{x})/4]}\right),
& \tilde{x} \in (-\tilde{x}_{2}, -\tilde{x}_{1}),
\\
\mu l\cos{(\sigma\tilde{x}/2\sqrt{2})},
& \tilde{x} \in (-\tilde{x}_{1}, \tilde{x}_{1}),
\\
\nu l\left( \cos{(\sigma\tilde{x}/4)} + \sin
{[(2-\sigma\tilde{x})/4]}\right),
& \tilde{x} \in (\tilde{x}_{1}, \tilde{x}_{2}),
\\
(\mu l/\!\sqrt{2})\sin{[(2-\sigma\tilde{x})/2\sqrt{2}]}
+ \sigma/8, & \tilde{x} \in (\tilde{x}_{2}, 1],
\end{array}
\right.
\label{red_f3}
\end{equation}
where $\tilde{x}_{2} = |4/\sigma - 1|$ and $\tilde{x}_{1} < \tilde{x}_{2} < 1$. In accordance with the general rule formulated at the end of \textcolor{blue}{Section \ref{BasEq}}, in this case the function $\tilde{f}( \tilde{x})$ has five branches separated from each other by four points $\pm \tilde{x}_{1}$ ($\tilde{f}( \tilde{x})$ is discontinuous at $\tilde{x} = \pm \tilde{x}_{1}$) and $\pm \tilde{x}_{2}$ ($\tilde{f}( \tilde{x})$ is continuous at $\tilde{x} = \pm \tilde{x}_{2}$). The intervals $[-1, -\tilde{x}_{2})$, $(-\tilde{x}_{1}, \tilde{x}_{1})$ and $(\tilde{x}_{2}, 1]$ have the same width $2 - 4/\sigma$, which increases from $0$ to $2/3$ as the ratio parameter $\sigma$ grows from $2$ to $3$. In contrast, the width $6/\sigma - 2$ of the intervals $(-\tilde{x}_{2}, -\tilde{x}_{1})$ and $(\tilde{x}_{1}, \tilde{x}_{2})$ decreases from $1$ to $0$. As in the previous cases, the theoretical results obtained for $\sigma \in (2,3)$ are confirmed by numerical simulations, see \textcolor{blue}{Fig.\ \ref{fig3}}.
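The branch geometry quoted above reduces to simple arithmetic on $\tilde{x}_{1}$ and $\tilde{x}_{2}$; a minimal sketch in reduced units (illustrative names, not part of the original analysis):

```python
def branch_structure(sigma):
    """Branch-separating points x_k = |2k/sigma - 1| (k = 1, 2) and the
    widths of the five branch intervals in reduced units, for
    sigma in (2, 3); names are illustrative."""
    x1 = abs(2.0 / sigma - 1.0)
    x2 = abs(4.0 / sigma - 1.0)
    w_outer = 1.0 - x2        # width of [-1, -x2) and of (x2, 1]
    w_inner = 2.0 * x1        # width of (-x1, x1)
    w_mid = x2 - x1           # width of (-x2, -x1) and of (x1, x2)
    return x1, x2, w_outer, w_inner, w_mid

x1, x2, w_outer, w_inner, w_mid = branch_structure(2.5)
```

For any $\sigma \in (2,3)$ the outer and inner widths equal $2-4/\sigma$, the intermediate widths equal $6/\sigma - 2$, and the five widths sum to the full length $2$ of the reduced interval $[-1,1]$.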
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{Reduced non-normalized probability
density $\tilde{f}(\tilde{x})$ as a function of
the reduced variable $\tilde{x} = x/l$ for
$\sigma = 2.3$ (a) and $\sigma = 2.7$ (b). The
solid lines represent the theoretical results
obtained using \textcolor{blue}{(\ref{red_f3})},
\textcolor{blue}{(\ref{mu})} and \textcolor{blue}
{(\ref{nu})}, and the triangle symbols show the
results obtained by numerical
simulations of \textcolor{blue}{Eq.\ (\ref{eq_X})}.}
\label{fig3}
\end{figure}
\subsection{Solution at \texorpdfstring{$\sigma \to \infty$}{Lg}}
The above results indicate that the local behavior of the non-normalized PDF $f(x)$ becomes more and more complex as the parameter $\sigma$ increases, because the number of its branches grows (recall that $\sigma$ is the ratio of the domain size $2l$ of the bounded process $X_{t}$ to the half-width $c$ of the uniform distribution of jump magnitudes $z_{i}$). For this reason, we were not able to solve \textcolor{blue}{Eqs.\ (\ref{f_eq3})} analytically for arbitrarily large values of $\sigma$ (we solved them for $n=4$ as well, but the results are too cumbersome to present here). However, the function $f(x)$ in the limit $\sigma \to \infty$ approaches a constant, which can be determined as follows. First, using \textcolor{blue} {(\ref{q2})}, we find
\begin{equation}
\int_{-l}^{l} q(x-x')\,dx' =
\frac{1}{2c} \times
\left\{\! \begin{array}{ll}
l+c+x, & x \in [-l, c-l],
\\
2c, & x \in [c-l, l-c],
\\
l+c-x, & x \in [l-c, l].
\end{array}
\right.
\label{Int_q}
\end{equation}
Then, assuming that $f(x) = h = \mathrm{const}$ and substituting expressions \textcolor{blue} {(\ref{q2})} and \textcolor{blue} {(\ref{Int_q})} into \textcolor{blue}{Eq.\ (\ref{f_eq})}, one can verify that at $x \in (-l+c, l-c)$ this equation is satisfied identically, while at $x \in [-l, -l+c]$ it reduces to
\begin{equation}
h = \frac{a}{2c} + h\, \frac{l+c+x}{2c}.
\label{f_eq4}
\end{equation}
(Note, at $x \in [l-c, l]$ \textcolor{blue}{Eq.\ (\ref{f_eq})} reduces to \textcolor{blue}{Eq.\ (\ref{f_eq4})} with $x$ replaced by $-x$.)
As follows from \textcolor{blue}{Eq.\ (\ref{f_eq4})}, our assumption that $f(x)$ does not depend on $x$ is, strictly speaking, incorrect. Nevertheless, if $c \ll l$ (i.e., $\sigma \gg 1$), it can be used as a first approximation. Indeed, taking into account that, according to \textcolor{blue} {(\ref{def_a})}, $a = 1/2 - hl$, from \textcolor{blue}{Eq.\ (\ref{f_eq4})} one obtains
\begin{equation}
h = \frac{1}{2l} \frac{1}{1 +
(c-l-x)/l}.
\label{f_4}
\end{equation}
Since the values of $c-l-x$ for $x \in [-l, -l+c]$ belong to the interval $[0,c]$ and the condition $c \ll l$ holds, we get $h = 1/2l$ and $a=0$ as $\sigma \to \infty$. Hence, in this limit the reduced PDF \textcolor{blue} {(\ref{red_Pst})} takes the form
\begin{equation}
\tilde{P}_{\mathrm{st}}(\tilde{x})\,
|_{\sigma \to \infty}
= \frac{1}{2} \quad (|\tilde{x}| \leq 1).
\label{Pst_lim}
\end{equation}
Our numerical simulations show that, for $\sigma \gtrsim 50$, this result is reproduced with an accuracy of a few percent or better [to estimate the accuracy analytically, one can use formula \textcolor{blue} {(\ref{h})}]. It should be noted in this regard that with increasing $\sigma$ the number of time steps $M$ necessary to reach the stationary state increases as well. We also stress that the same result \textcolor{blue} {(\ref{Pst_lim})} holds for the bounded process $X_{t}$ driven by Gaussian white noise \textcolor{blue} {\cite{Gard2009}}.
\subsection{Extreme values probability}
The probability $a$ that in the stationary state $X_{t} = -l$ (or $X_{t} = l$) is determined by the formulas \textcolor{blue} {(\ref{f1,a1})}, \textcolor{blue} {(\ref{a2})} and \textcolor{blue} {(\ref{a3})} for $\sigma \in (0,1)$, $\sigma \in (1,2)$ and $\sigma \in (2,3)$, respectively. Using these formulas, it can be directly shown that $a|_{\sigma=1-0} = a|_{\sigma=1+0}$ and $a|_{\sigma=2-0} = a|_{\sigma=2+0}$, i.e., $a$ is a continuous function of $\sigma$ at the critical points $\sigma_{\mathrm{ cr}} = 1$ and $\sigma_{\mathrm{ cr}} = 2$, and $a$ monotonically decreases as the ratio parameter $\sigma$ increases from $0$ to $3$. As \textcolor{blue}{Fig.\ \ref{fig4}} illustrates, our theoretical results \textcolor{blue} {(\ref{f1,a1})}, \textcolor{blue} {(\ref{a2})} and \textcolor{blue} {(\ref{a3})} are in excellent agreement with the simulation data.
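This continuity is also easy to confirm numerically. The sketch below (illustrative names; reduced, $l$-independent form, not part of the original analysis) collects the three closed-form expressions for $a$ derived above into a single function and evaluates it on both sides of the critical points:

```python
import math

def a_closed(sigma):
    """Closed-form extreme-value probability a: the three expressions
    derived above for sigma in (0,1], (1,2] and (2,3); names and the
    branch layout are illustrative."""
    if sigma <= 1.0:
        return 0.5 - sigma / 4.0
    if sigma <= 2.0:
        return (sigma / 4.0
                - math.sqrt(2.0) * (1.0 - sigma / 4.0)
                * math.sin((sigma - 1.0) / 4.0)
                / math.cos((sigma + math.pi - 1.0) / 4.0))
    cot = lambda t: math.cos(t) / math.sin(t)
    u = (sigma / 2.0 - 1.0) / math.sqrt(2.0)
    v = (sigma + math.pi - 3.0) / 4.0
    return (1.5 - sigma / 4.0
            - (math.sqrt(2.0) / 4.0)
            * (sigma / 2.0 - 3.0 + math.sqrt(2.0) * cot(u))
            / (cot(u) - math.sqrt(2.0) * cot(v))
            * cot(v))

# Jumps of a across the critical points sigma_cr = 1 and sigma_cr = 2
# should vanish (evaluated slightly off the singular points):
jump1 = abs(a_closed(1.0 - 1e-8) - a_closed(1.0 + 1e-8))
jump2 = abs(a_closed(2.0 - 1e-6) - a_closed(2.0 + 1e-6))
```

The computed jumps are negligible, and the sampled values decrease monotonically with $\sigma$, in line with Fig.~\ref{fig4}.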
\begin{figure}[ht]
\centering
\includegraphics[width=6.8cm]{fig4.eps}
\caption{Probability $a$ of the extremal
values of the bounded process $X_{t}$ as
a function of the ratio parameter $\sigma$.
The solid lines represent the theoretical
results \textcolor{blue}{(\ref{f1,a1})},
\textcolor{blue} {(\ref{a2})} and \textcolor{blue}
{(\ref{a3})} for $\sigma \in (0,1)$, $\sigma
\in (1,2)$ and $\sigma \in (2,3)$,
respectively. The results obtained via
numerical simulations of \textcolor{blue}{Eq.\
(\ref{eq_X})} are marked by triangle symbols.
Inset: probability $a$ vs.\ $\sigma$ for large
values of $\sigma$. The solid line represents
the asymptotic formula $a=1/(2\sigma)$.}
\label{fig4}
\end{figure}
Although we have no explicit expressions for the probability $a$ at $\sigma > 3$, it can be easily seen that $a$ as a function of $\sigma$ is continuous at the critical points $\sigma_{\mathrm{ cr}} = \overline{3,\infty}$ as well. Indeed, according to the properties of $f(x)$ formulated in \textcolor{blue}{Section \ref{BasEq}}, at $\sigma = \sigma_{\mathrm{ cr}} + 0$ the function $f(x)$ acquires new branches (compared to $\sigma = \sigma_{\mathrm{ cr}} - 0$) whose supports shrink to separate points. Since these points do not contribute to the integral in \textcolor{blue} {(\ref{def_a})}, one may conclude that $a|_{\sigma = \sigma_{\mathrm{ cr}} - 0} = a|_{\sigma = \sigma_{ \mathrm{ cr}} + 0}$, i.e., the probability $a$ as a function of the ratio parameter $\sigma$ is continuous at all critical points $\sigma = \sigma_{\mathrm{ cr}}$.
Using results of the previous section, we can also estimate the dependence of $a$ on $\sigma$ for $\sigma \gg 1$. To this end, we first note that, according to our assumption $f(x) = h = \mathrm{const}$, the condition $\int_{-l}^{l} f(x)\,dx = 2lh$ must hold. On the other hand, from the above results it follows that
\begin{equation}
\int_{-l}^{l} f(x)\,dx = 2(l-c)h + \frac{1}{2}
\int_{-l}^{-l+c} \frac{dx}{c-x} + \frac{1}{2}
\int_{l-c}^{l} \frac{dx}{c+x} .
\label{Int}
\end{equation}
Performing integration and equating the right-hand side of \textcolor{blue} {(\ref{Int})} to $2lh$, we obtain
\begin{equation}
h = \frac{\sigma}{4l} \ln{\! \bigg( 1 +
\frac{2}{\sigma} \bigg)}
\label{h}
\end{equation}
and, since $a = 1/2 - lh$,
\begin{equation}
a = \frac{1}{2} - \frac{\sigma}{4} \ln{\! \bigg( 1 +
\frac{2}{\sigma} \bigg)}.
\label{a4}
\end{equation}
Taking into account that the ratio parameter $\sigma$ is assumed to be large enough, from \textcolor{blue} {(\ref{a4})} one gets, in the first nonvanishing approximation, $a = 1/(2\sigma)$ as $\sigma \to \infty$. Our numerical results confirm this theoretical prediction, see the inset in \textcolor{blue}{Fig.\ \ref{fig4}} (note that, to reach the stationary state at $\sigma \in (50,150)$, the number of steps $M$ was chosen to be $3\cdot 10^{8}$).
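The accuracy of this asymptotic estimate can be probed directly; a minimal numerical sketch (illustrative names, not part of the original analysis):

```python
import math

def a_large_sigma(sigma):
    """Large-sigma estimate a = 1/2 - (sigma/4) ln(1 + 2/sigma)
    derived above; log1p keeps the small-argument logarithm accurate."""
    return 0.5 - (sigma / 4.0) * math.log1p(2.0 / sigma)

# Deviation from the leading asymptotic term 1/(2 sigma); the next
# correction in the expansion is of order 1/sigma^2.
err_100 = abs(a_large_sigma(100.0) - 1.0 / 200.0)
err_1000 = abs(a_large_sigma(1000.0) - 1.0 / 2000.0)
```

The deviation from $1/(2\sigma)$ shrinks roughly as $1/\sigma^{2}$, consistent with the inset of Fig.~\ref{fig4}.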
\section{Conclusions}
\label{Concl}
We have studied the statistical properties of a class of bounded jump processes governed by a special case of the difference Langevin equation driven by Poisson white noise, i.e., a random sequence of delta pulses. In contrast to the ordinary Langevin equation, this equation, due to the use of the saturation function, has only bounded solutions. We have derived the Kolmogorov-Feller equation for the normalized probability density function (PDF) of these processes and found its stationary solutions in the case of a symmetric uniform distribution of pulse sizes. It has been shown explicitly that the stationary PDF can be decomposed into two singular terms defining the probability of the process extreme values and a regular part representing the non-normalized PDF inside the bounded domain. Remarkably, the non-normalized PDF proves to be a rather intricate piecewise function with jump discontinuities.
One of the most remarkable findings is that the ratio of the width of the saturation function to the half-width of the uniform distribution of pulse sizes is the only parameter controlling all properties of the stationary PDF. In particular, the ratio parameter determines the number of branches of the non-normalized PDF and the coordinates of the points separating these branches. It has also been established that, as the ratio parameter increases, two new branches are created each time it passes through a natural number. Interestingly, although this enhances the local complexity of the stationary PDF, the PDF approaches a constant in the limit of large values of the ratio parameter. All our theoretical predictions have been confirmed by numerical simulations of the difference Langevin equation.
To the best of our knowledge, the proposed Langevin model of bounded jump processes driven by Poisson white noise is the first one that allows one to study the nontrivial statistical properties of these processes in great analytical detail.
\section*{Acknowledgment}
This work was partially supported by the Ministry of Education and Science of Ukraine under Grant No. 0119U100772.
\section{Introduction}
\label{sec:intro}
In recent years there has been considerable interest in the possibility that dark matter (DM) could form bound states, which are ubiquitous in the Standard Model of particle physics. Such bound states could be a consequence of weakly-coupled interactions between the DM and a light mediator (e.g. \cite{Pospelov:2008jd,Kaplan:2009de,Wise:2014jva,Gresham:2017zqi}), or alternatively a strongly-interacting dark sector (e.g. \cite{Frigerio:2012uc,Detmold:2014qqa,Detmold:2014kba,Francis:2018xjd}). For sufficiently heavy dark matter, even interactions through the electroweak gauge bosons are sufficient to support bound states (e.g. \cite{Asadi:2016ybp, Mitridate:2017izz}).
The presence of bound states could lead to novel signatures across a wide range of observational probes, including colliders \cite{Drees:1993uw,Martin:2008sv,Shepherd:2009sa,Kats:2009bv,Kats:2012ym,An:2015pva,Tsai:2015ugz,Elor:2018xku,Krovi:2018fdr} and direct-detection experiments (e.g. \cite{Kaplan:2009de,Laha:2013gva}). In indirect detection, formation of unstable bound states constitutes an additional annihilation channel for the DM, which in some circumstances can dominate over direct annihilation \cite{an2016effects,an2017strong,cirelli2017dark}. With sufficiently good experimental energy resolution, the decays of bound states could be distinguished from direct annihilation; the soft particles radiated in transitions into the bound state, and between bound states in the spectrum, could also lead to observable signals \cite{MarchRussell:2008tu}. Formation of bound states could modify the cosmological history of DM (e.g. \cite{vonHarling:2014kha,Wise:2014jva}), and if the bound states are stable, their presence could also have astrophysical effects in the late universe (e.g. \cite{Fan:2013yva,Gresham:2018anj}). Finally, the presence of bound states is connected to the existence of DM self-interaction, which could have striking effects on the distribution of dark matter at Galactic scales (see \cite{Buckley:2017ijx} for a recent review).
One simple illustrative model for self-interacting DM is where the DM is a member of a multiplet charged under some dark gauge group, with a small breaking of the gauge symmetry conferring a small mass on the corresponding gauge boson. The breaking of the symmetry relating the different components of the multiplet will also generically lead to a mass splitting between those components, with the lightest playing the role of the DM. This scenario is realized in the case of wino or higgsino DM \cite{jungman1996supersymmetric}, and extensions to the case of higher representations of the electroweak gauge group \cite{Cirelli:2005uq} or simple dark sectors (e.g. \cite{ArkaniHamed:2008qn, Cheung:2009qd, Cirelli:2010nh}) have been studied in the literature.
DM annihilation and self-interaction in such models has been studied previously, taking into account the presence of the mass splittings as well as the long-range interaction \cite{Hisano:2003ec,hisano2005nonperturbative,slatyer2010sommerfeld, schutz2015self, Zhang:2016dck, Vogelsberger:2018bok}. In this work we will often refer to the Sommerfeld enhancement, by which we simply mean the enhancement of short-range annihilation processes due to the long-range potential from vector exchange. However, our current analytic understanding of the bound states in such scenarios is largely limited to the case without mass splittings \cite{Petraki:2015hla, Asadi:2016ybp, Petraki:2016cnz, Mitridate:2017izz, Harz:2018csl}; where mass splittings are present, previous studies have relied on numerical work.
In this work, we seek to address this gap. We consider a simple low-energy scenario containing two nearly-degenerate Majorana fermions interacting through vector exchange. The lighter fermion is the DM, the heavier can be thought of as an excited state of the DM. We calculate numerically how the mass splitting between the states alters the bound state spectrum and capture rate relative to the case with only a single state, and develop a simple analytic understanding of the main effects. If dark matter is a Majorana fermion, the interaction with the gauge bosons must be off-diagonal in nature as the DM cannot carry a conserved charge \cite{slatyer2010sommerfeld,schutz2015self}; this setup is naturally realized where the DM is a Dirac fermion charged under the dark gauge symmetry at high energies, and separates into two nearly-degenerate Majorana fermions at low energies due to the symmetry breaking (see e.g. \cite{Finkbeiner:2010sm} for a detailed discussion). Previous studies of this simple model \cite{slatyer2010sommerfeld,schutz2015self} have considered $s$-wave annihilation and scattering; we go beyond by considering bound state formation, and including higher partial waves as well.
We demonstrate that we can analytically estimate the shift to the bound state energies in the presence of a mass splitting, and identify regions of parameter space where mass-splitting-induced changes to the capture cross section follow characteristic patterns. We show that the changes to the capture cross section are dominated by the behavior of the initial-state wavefunction, and the resulting cross section is simply related to the Sommerfeld-enhanced direct annihilation rate (for the same partial wave in the initial state) up to a phase-space factor, to a good approximation. We apply our understanding from this simple model to the case of wino dark matter, and demonstrate that our analytic approximations to the bound state energies in the latter case compare well to previous numerical results \cite{Asadi:2016ybp}.
In Section \ref{sec:toy}, we describe the pseudo-Dirac dark matter model, lay out the relevant non-relativistic Hamiltonian for the problem, and discuss its general properties. In Section \ref{sec:bound}, we discuss the general structure of the bound state spectrum, describe our method for numerically obtaining the bound-state energies, provide an analytic estimate for the shift in bound state energies as a function of the mass splitting, and compare analytic and numerical results for the binding energies and the effect on the position of resonances in the scattering rate. In Section \ref{sec:wino} we apply our analytical insights from this toy model to the case of the wino, as a test case for bound states of electroweakly interacting DM. In Section \ref{sec:capture} we numerically compute the capture rate in this model, and then characterize and discuss the new features relative to the case with no mass splitting, in particular relating the capture rate to the Sommerfeld enhancement. We summarize our conclusions in Section~\ref{sec:conclusion}.
\section{Pseudo-Dirac dark matter: general considerations}\label{sec:toy}
\subsection{The model}
We consider a pseudo-Dirac fermion, charged under a $U(1)$ gauge group in the dark sector, which acquires a Majorana mass term at low energies due to the breaking of the $U(1)$ symmetry by an Abelian dark Higgs. Consequently, the Dirac fermion splits into two non-degenerate Majorana fermion mass eigenstates. We will be solely interested in the low-energy, long-range behavior of the system in the non-relativistic limit, encoded in the potential; consequently, there is considerable freedom in the details of the dark Higgs sector. For example, if the dark Higgs has (dark $U(1)$) charge 1, a small Majorana mass term can be induced by a dimension-5 operator \cite{Finkbeiner:2010sm}; if the dark Higgs instead has charge 2, a Yukawa-type coupling yields a Majorana mass when the Higgs acquires a VEV, as in \cite{Elor:2018xku} (in this case the small splitting between eigenstates may be due to a small Dirac mass, as the Majorana mass is not suppressed by a high scale). We label the lower and upper mass eigenstates as $\chi$ and $\chi^*$ respectively, and henceforth denote the mass eigenvalues as $(m_{\chi}, m_\chi + \delta)$, consistent with the $2\delta$ excess mass of the two-body $\ket{\chi^*\chi^*}$ state below.
More explicitly, at high energies, writing the Dirac fermion as $\Psi$ and its Dirac mass as $m_D$, and the gauge boson of the $U(1)$ symmetry as $\phi$, the Lagrangian can be written as:
\begin{equation} \mathcal{L} = i \bar{\Psi} \gamma^\mu (\partial_\mu - i g_D \phi_\mu) \Psi - m_D \bar{\Psi} \Psi + \mathcal{L}_\text{Higgs} + \mathcal{L}_\text{gauge-kin},\end{equation}
where $g_D$ is the dark-sector coupling, and we have omitted the details of the model-dependent Higgs sector. The gauge kinetic term is by default just $\mathcal{L}_\text{gauge-kin} = -\frac{1}{4} F^D_{\mu \nu} F^{D\mu \nu}$, where $F^D_{\mu \nu}$ is the dark field strength associated with the $\phi^\mu$ field, but it could also include e.g. kinetic mixing with the Standard Model gauge bosons. Writing $\Psi$ as a Weyl fermion pair $(\zeta,\eta^\dagger)$ (see Ref.~\cite{Dreiner:2008tw} for an extended discussion), turning on a Higgs-sector-induced Majorana mass $m_M$ for $\Psi$ yields a mass matrix of the form \cite{Finkbeiner:2010sm,Elor:2018xku}:
\begin{equation} \frac{1}{2} \begin{pmatrix} \zeta & \eta \end{pmatrix} \begin{pmatrix} m_M & m_D \\ m_D & m_M\end{pmatrix} \begin{pmatrix} \zeta \\ \eta \end{pmatrix} + h.c.\end{equation}
This leads to mass eigenstates that are a 45$^\circ$ rotation of $\zeta$, $\eta$, i.e. $\chi^* = (\eta + \zeta)/\sqrt{2}$, with mass $m_M + m_D$, and $\chi = i (\eta - \zeta)/\sqrt{2}$, with mass $|m_M - m_D|$. This rotation converts the gauge boson-fermion interaction $g_D \bar{\Psi} \gamma^\mu \phi_\mu \Psi$ into the form \cite{Elor:2018xku}:
\begin{equation} \mathcal{L}_\text{fermion-gauge} = - i g_D \phi_\mu \left((\chi^*)^\dagger\bar{\sigma}^\mu \chi - \chi^\dagger \bar{\sigma}^\mu \chi^*\right).\end{equation}
We observe that the interaction between the mass eigenstates via $\phi_\mu$ is off-diagonal in nature, i.e. it couples the $\chi$ and $\chi^*$ fields. Since Majorana fermions cannot carry a conserved charge, this off-diagonal interaction is generic for any system where two Majorana fermions interact through vector exchange.
This off-diagonal interaction structure gives rise to two distinct sectors of two-body states comprised of $\chi$ and $\chi^*$. $\ket{\chi\chi^*}$ states are maintained (converted into the identical state $\ket{\chi^*\chi}$) under the exchange of a vector boson. In contrast, such a vector exchange transforms $\ket{\chi\chi}$ states into $\ket{\chi^*\chi^*}$; thus the potential mixes the non-degenerate $\ket{\chi\chi}$ and $\ket{\chi^*\chi^*}$ states. The $\ket{\chi\chi^*}$ states experience a simple attractive Yukawa potential in the non-relativistic limit, whereas the admixed $\alpha \ket{\chi\chi} + \beta \ket{\chi^*\chi^*}$ states evolve under a more complex potential.
For the purposes of our study, we will be primarily interested in the dynamics of the two-state system spanned by $\ket{\chi\chi}$ and $\ket{\chi^{*}\chi^{*}}$, and the effect of the mass splitting between these states. In the non-relativistic limit, the Schr\"{o}dinger equation for this system contains a matrix potential of the following form \cite{arkani2009theory}:
\begin{equation}\label{potential}
V(r)=\begin{bmatrix}
0 & -\hbar c\alpha\frac{e^{-m_{\phi}cr/\hbar}}{r} \\
-\hbar c\alpha\frac{e^{-m_{\phi}cr/\hbar}}{r} & 2\delta c^2
\end{bmatrix}
\end{equation}where $\alpha$ denotes the dark coupling constant $g_D^2/4\pi$, $m_\phi$ is the mass of the dark gauge boson, and $r$ is the inter-particle separation. The first row (and column) corresponds to the two-body $\ket{\chi \chi}$ state and the second row (and column) to the two-body $\ket{\chi^*\chi^*}$ state. The off-diagonal terms represent the conversion of $\ket{\chi \chi}$ into $\ket{\chi^*\chi^*}$, and vice versa, via the vector exchange, while the $2\delta$ term describes the increased mass of the $\ket{\chi^*\chi^*}$ state.
As in Ref.~\cite{slatyer2010sommerfeld}, we scale the coordinate $r$ by $\frac{\hbar}{m_{\chi}\alpha c}$, thereby obtaining the following radial equation for the reduced wavefunction $\psi(r)$ in the centre-of-mass frame,
\begin{equation}\label{matrix}
\psi''(r)=\begin{bmatrix}
\frac{l(l+1)}{r^2}-\epsilon_{v}^2 & -\frac{e^{-\epsilon_{\phi}r}}{r}\\
-\frac{e^{-\epsilon_{\phi}r}}{r} & \epsilon_{\delta}^2+\frac{l(l+1)}{r^2}-\epsilon_{v}^2
\end{bmatrix}\psi(r)
\end{equation}with the dimensionless parameters defined as
\begin{align}\label{eq:dimensionlessparams}
\epsilon_{v}&=\frac{v}{c\alpha}&
\epsilon_{\phi}&=\frac{m_{\phi}}{m_{\chi}\alpha}&
\epsilon_{\delta}=\sqrt{\frac{2\delta}{m_{\chi}\alpha^2}}
\end{align}
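For orientation, the dimensionless parameters can be evaluated at a sample parameter point; the numbers below are purely illustrative choices of ours, not values used in the text:

```python
import math

# Illustrative values only (natural units, c = 1); none are from the text
m_chi = 1000.0   # GeV   dark matter mass
m_phi = 1.0      # GeV   dark photon mass
alpha = 0.01     #       dark fine-structure constant
delta = 50e-6    # GeV   chi-chi* mass splitting (50 keV)
v = 1e-3         #       typical halo velocity in units of c

eps_v = v / alpha
eps_phi = m_phi / (m_chi * alpha)
eps_delta = math.sqrt(2.0 * delta / (m_chi * alpha**2))
print(eps_v, eps_phi, eps_delta)  # 0.1, 0.1, ~0.032
```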
\subsection{Eigenvectors and eigenvalues of the potential}\label{sec:general considerations}
Eq.~\ref{matrix} cannot be solved analytically in general, since the diagonalizing matrix is itself position-dependent. However, it is still helpful to examine the eigenvalues and eigenvectors of the potential, which are respectively given by \cite{slatyer2010sommerfeld,schutz2015self}:
\begin{equation}\label{eigenvectors}
\lambda_{\pm}=-\epsilon_{v}^2+\frac{l(l+1)}{r^2}+\frac{\epsilon_{\delta}^2}{2}\pm\sqrt{\bigg(\frac{\epsilon_{\delta}^2}{2}\bigg)^2+\bigg(\frac{e^{-\epsilon_{\phi}r}}{r}\bigg)^2}\text{,}~\eta_{\pm}(r)=\frac{1}{\sqrt{2}}\begin{bmatrix}
\mp\sqrt{1\mp\frac{1}{\sqrt{1+\big(\frac{2e^{-\epsilon_{\phi}r}}{r\epsilon_{\delta}^2}\big)^2}}}\\
\sqrt{1\pm\frac{1}{\sqrt{1+\big(\frac{2e^{-\epsilon_{\phi}r}}{r\epsilon_{\delta}^2}\big)^2}}}
\end{bmatrix}
\end{equation}
The expressions in Eq.~\ref{eigenvectors} allow the identification of two interesting regimes, where the eigenvectors are nearly $r$-independent and the problem is approximately diagonalizable:\begin{itemize}
\item \textbf{Small $r$ regime}: For sufficiently small $r$, the Yukawa potential dominates the mass-splitting term, yielding: \begin{equation}\label{smallr}
\lambda_{\pm}\approx-\epsilon_{v}^2+\frac{l(l+1)}{r^2}+\frac{\epsilon_{\delta}^2}{2}\pm\frac{e^{-\epsilon_{\phi}r}}{r}+\mathcal{O}(r),~\eta_{\pm}(r)\approx\frac{1}{\sqrt{2}}\begin{bmatrix}
\mp1\\1\end{bmatrix}+\mathcal{O}(r)
\end{equation}The eigenvectors of the potential are those of $V(r)=-\frac{1}{r}\sigma_{x}$, and the eigenvalues physically correspond to the repulsive and attractive potential appropriate to same-sign or opposite-sign scattering respectively. This regime corresponds to the restoration of the $U(1)$ symmetry in the ultraviolet.
\item\textbf{Large $r$ limit}: Far away from the origin, one observes that the mass splitting term dominates the now weak Yukawa potential, leading to \begin{align}\label{adia}
\lambda_{+}&\approx-\epsilon_{v}^2+\frac{l(l+1)}{r^2}+\epsilon_{\delta}^2+\frac{e^{-2\epsilon_{\phi}r}}{\epsilon_{\delta}^2r^2}+\mathcal{O}\bigg(\frac{1}{r^4}\bigg),\quad\quad \eta_{+}(r)\approx\begin{bmatrix}0\\1\end{bmatrix}+\mathcal{O}\bigg(\frac{1}{r}\bigg)\\
\lambda_{-}&\approx-\epsilon_{v}^2+\frac{l(l+1)}{r^2}-\frac{e^{-2\epsilon_{\phi}r}}{\epsilon_{\delta}^2r^2}+\mathcal{O}\bigg(\frac{1}{r^4}\bigg),\quad\quad \eta_{-}(r)\approx\begin{bmatrix}1\\0\end{bmatrix}+\mathcal{O}\bigg(\frac{1}{r}\bigg)
\end{align}
\end{itemize}
Thus, we find that the eigenvectors undergo a rotation at a radius where the Yukawa potential and the mass splitting are comparable in size, as also noted in Ref.~\cite{slatyer2010sommerfeld}.
For bound states where the support of the wavefunction lies primarily within this radius, we can guess that the potential in the small-$r$ regime will be a reasonable approximation when computing the bound-state spectrum, allowing us to ignore the radial variation of the eigenstates/eigenvalues.
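The closed-form eigenvalues and the two limiting regimes above are easy to verify numerically. The following sketch (our own illustration, with arbitrarily chosen parameter values) checks Eq.~\ref{eigenvectors} against direct diagonalization of the matrix in Eq.~\ref{matrix}:

```python
import numpy as np

def potential_matrix(r, eps_phi, eps_delta, eps_v=0.0, l=0):
    """The matrix on the right-hand side of Eq. (matrix)."""
    y = np.exp(-eps_phi * r) / r
    d = l * (l + 1) / r**2 - eps_v**2
    return np.array([[d, -y], [-y, d + eps_delta**2]])

def lambdas(r, eps_phi, eps_delta, eps_v=0.0, l=0):
    """Closed-form eigenvalues (lambda_-, lambda_+) from Eq. (eigenvectors)."""
    y = np.exp(-eps_phi * r) / r
    mid = -eps_v**2 + l * (l + 1) / r**2 + eps_delta**2 / 2.0
    root = np.sqrt((eps_delta**2 / 2.0)**2 + y**2)
    return mid - root, mid + root

eps_phi, eps_delta = 0.1, 0.3
for r in (0.01, 1.0, 100.0):   # small-r, intermediate, and large-r regimes
    assert np.allclose(lambdas(r, eps_phi, eps_delta),
                       np.linalg.eigvalsh(potential_matrix(r, eps_phi, eps_delta)))
# large r: lambda_+ -> eps_delta^2 and lambda_- -> 0, the splitting-dominated regime
print(lambdas(100.0, eps_phi, eps_delta))
```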
\section{The bound-state spectrum}\label{sec:bound}
As discussed previously, there are two types of bound states supported by the dark vector exchange, $\ket{\chi\chi^*}$ states which are supported by a simple Yukawa potential, and states consisting of an admixture of $\ket{\chi\chi}$, $\ket{\chi^*\chi^*}$, which evolve under the potential given in Eq.~\ref{potential}. The admixed states consist of pairs of identical fermions, and thus must be in an antisymmetric configuration; this corresponds to requiring the sum of their orbital and spin angular momentum quantum numbers $L+S$ to be even (see Ref.~\cite{Beneke:2014gja} for a general discussion of the different potentials experienced by even- and odd-$L+S$ states).
These bound states can be produced by radiative capture from scattering states. The dominant contribution to such processes arises from electric-dipole-like transitions with emission of a single particle. These transitions change the angular momentum of the incoming state by $\pm 1$ if a vector particle (such as a photon or dark photon) is emitted. This restricts the possible types of transitions. For our purposes, we shall assume the scattering state to be $\chi\chi$ at large enough interparticle separation (since only $\chi$ particles are present in the DM halo, unless the mass splitting is small enough to be comparable to the typical DM kinetic energy). The capture is then into bound states in the $\chi\chi^*$ sector, via the emission of a single dark photon. Since the initial wavefunction must have even $L+S$, the bound state in this case has odd $L+S$; for example, the $s$-wave ($L=0$) contribution to the scattering state only receives contributions from spin-singlet states ($S=0$), and capture could occur into an $np$ spin-singlet bound state ($L=1$,$S=0$) where $n\geq 2$. Similarly, to form a spin-triplet bound state by direct radiative capture, the dominant process is capture into $ns$ states with $n \geq 1$, from the $p$-wave or $d$-wave components of the initial state.
The presence of bound states in the $\ket{\chi\chi}+\ket{\chi^*\chi^*}$ sector, with energies close to zero, can lead to resonant enhancement of the radiative capture cross section at low velocities. When there is a bound state with near-zero energy present in the spectrum of $L+S$-even states, the scattering wavefunction for $L+S$-even states is enhanced at short distances, leading to the resonance peaks in the Sommerfeld enhancement \cite{Hisano:2003ec} and also enhancing the radiative capture rate into the $L+S$-odd bound states. The energies of these bound states, and thus the conditions under which resonances occur, depend both on the force carrier mass $m_\phi$ and on the mass splitting $\delta$ between the states.
While analytic solutions do not exist for bound states in an attractive Yukawa potential, they closely resemble bound states in the Hulth\'{e}n potential, which is given by\begin{equation}
V_\text{H}(r)=-\frac{\alpha_{H} m_{\text{H}}}{e^{m_{\text{H}}r}-1}
\end{equation}where $\alpha_{H}$ is the relevant coupling and $m_{H}$ characterizes the range of the potential. The Hulth\'{e}n potential has the desired asymptotic behaviour, resembling a Coulomb potential in the low-$r$ limit and decreasing exponentially in the large-$r$ limit, similar to the Yukawa potential. Exact solutions for the $s$-wave states and approximate ones for the higher-$l$ states are known for this potential \cite{hamzavi2012approximate,Asadi:2016ybp}. Thus, one can approximate the binding energies for the $\ket{\chi\chi^*}$ states -- relative to the sum of the free-particle masses, $2 m_\chi + \delta$ -- by the corresponding Hulth\'{e}n-potential results:
\begin{align}\label{eq:hulthenenergy}
E_n&=\frac{\kappa_n^2}{m_{\chi}}\\
\kappa_n&=\frac{1}{2}\bigg(\frac{\alpha_{\text{H}}m_{\chi}-n^2 m_{\text{H}}}{n}\bigg)
\end{align}Here $n$ is the principal quantum number of the bound state. To accurately approximate the Yukawa potential by its Hulth\'{e}n counterpart, a normalization condition has to be imposed upon $m_{H}$, which we shall take to be $m_{H}=\frac{\pi^2}{6}m_{\phi}$, as argued for in \cite{Cassel:2009wt}. With this choice, we can simply replace the coupling $\alpha_H$ by $\alpha$.
The bound states in the $L+S$-even $\ket{\chi\chi}+ \ket{\chi^*\chi^*}$ sector are less amenable to analytic approximation due to the presence of the mass splitting in the potential (Eq.~\ref{potential}). However, their presence is crucial in setting the resonance positions. We will thus seek to study the energy spectrum of these states both analytically and numerically.
\emph{A note on binding energy conventions:} Hereafter we will always state binding energies $E$ relative to $2 m_\chi$, in order to have a common mass scale, even when the bound state has $\chi^*$ constituents. We will also quote the binding energies as positive values. In other words, we choose $E$ so that the mass of the bound state is $2 m_\chi - E$. Under this convention, Eq.~\ref{eq:hulthenenergy} gives an estimate for the binding energies of $L+S$-odd states as:
\begin{align}\label{eq:oddjstates}
E&\approx \frac{\alpha^2 m_\chi}{4 n^2} \bigg(1 - \frac{(\pi^2/6) n^2 m_\phi}{\alpha m_\chi}\bigg)^2 - \delta
\end{align}
We can define a dimensionless binding-energy parameter $\epsilon_E \equiv E/(\alpha^2 m_\chi)$; the approximate $L+S$-odd bound-state spectrum then becomes:
\begin{align}\label{eq:oddjstatesnodim}
\epsilon_E &\approx \frac{1}{4 n^2} \bigg(1 - \frac{\pi^2}{6} n^2 \epsilon_\phi\bigg)^2 - \frac{\epsilon_\delta^2}{2}
\end{align}
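The estimate in Eq.~\ref{eq:oddjstatesnodim} is straightforward to tabulate; the helper below (an illustration of ours) also makes explicit that a level is bound, within this approximation, only while $\epsilon_E > 0$:

```python
import math

def eps_E_odd(n, eps_phi, eps_delta):
    """Hulthen-based estimate, Eq. (eq:oddjstatesnodim), for the dimensionless
    binding energy of the n-th L+S-odd (chi chi*) level."""
    coulomb = (1.0 - (math.pi**2 / 6.0) * n**2 * eps_phi)**2 / (4.0 * n**2)
    return coulomb - eps_delta**2 / 2.0

# Coulomb limit: eps_E = 1/(4 n^2)
print([round(eps_E_odd(n, 0.0, 0.0), 4) for n in (1, 2, 3)])  # [0.25, 0.0625, 0.0278]
# a finite mediator mass and splitting push levels toward (and past) threshold
print(eps_E_odd(2, 0.1, 0.0), eps_E_odd(2, 0.1, 0.2))
```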
\subsection{Numerical calculation of the $L+S$-even spectrum}
We are interested in solving the eigenvalue problem \begin{equation}\label{eq:eigenvalue}
\hat{H}\ket{\psi}=-E \ket{\psi}
\end{equation}where $E > 0$ is the binding energy (as defined above) and $\hat{H}$ is the Hamiltonian corresponding to the potential in Eq.~\ref{potential}. Inserting a complete set of states $\ket{j}$, we obtain that \begin{equation}
H_{ij}c_j=-E c_{i}
\end{equation}where $H_{ij} = \langle i | H | j \rangle$ and $c_{i}=\braket{i|\psi}$. A complete set of states would exactly solve this problem, but we can determine the eigenvalues $E_n$ to any desired accuracy by using a sufficiently large finite number of states $|i\rangle$ \cite{Asadi:2016ybp}.
Following \cite{Asadi:2016ybp}, we use the bound-state wavefunctions of the Coulomb potential with strength $\alpha$ as our basis set; that is, the potential for the basis states is $\frac{1}{r}$ in the scaled coordinates. This is motivated by the fact that in the limit $\epsilon_\delta=\epsilon_\phi=0$, the matrix potential decouples into an attractive and a repulsive Coulomb potential. More explicitly, the basis consists of $\begin{bmatrix} \psi_{nlm}\\
0\end{bmatrix}$ and $\begin{bmatrix} 0\\
\psi_{nlm}\end{bmatrix}$, where $\psi_{nlm}$ is the Coulombic bound-state wavefunction characterized by the quantum numbers $n,l,m$. We simplify our work further by observing that for a bound state characterized by angular momentum $l$, it suffices to use only the Coulomb bound states with the same quantum number $l$. Note that throughout, we have also fixed $m=0$, since it is easily seen that the binding energies are degenerate in the quantum number $m$. To obtain converged eigenvalues, we used 30 such states. As a numerical check, we ensured that the Coulombic binding energies were recovered in the limit $\epsilon_{\phi}=\epsilon_{\delta}=0$. As a further check, we verified convergence by confirming that the binding energies were stable when the number of basis states was increased to 40, 50, and 60.
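As a cross-check on a basis-set calculation of this kind, the coupled radial problem can also be diagonalized by brute force on a uniform grid. The sketch below is our own simplified illustration (finite differences with illustrative grid parameters, rather than the Coulomb basis described above); in the limit $\epsilon_\phi=\epsilon_\delta=0$ it reproduces the Coulomb value $\epsilon_E=1/4$ for the $1s$ state:

```python
import numpy as np

def even_sector_levels(eps_phi, eps_delta, l=0, r_max=30.0, n_grid=600):
    """Bound-state energies (units of alpha^2 m_chi, relative to 2 m_chi) of the
    coupled |chi chi>, |chi* chi*> system, from a finite-difference
    discretization of Eq. (matrix) with Dirichlet boundary conditions."""
    h = r_max / (n_grid + 1)
    r = h * np.arange(1, n_grid + 1)
    lap = (np.diag(-2.0 * np.ones(n_grid))
           + np.diag(np.ones(n_grid - 1), 1)
           + np.diag(np.ones(n_grid - 1), -1)) / h**2
    yukawa = np.exp(-eps_phi * r) / r
    cent = l * (l + 1) / r**2
    H = np.block([
        [-lap + np.diag(cent), -np.diag(yukawa)],
        [-np.diag(yukawa), -lap + np.diag(cent + eps_delta**2)],
    ])
    ev = np.linalg.eigvalsh(H)
    return ev[ev < 0.0]        # negative eigenvalues correspond to bound states

print(even_sector_levels(0.0, 0.0)[:2])   # ~[-0.25, -0.0625]: Coulomb 1s, 2s
print(even_sector_levels(0.0, 0.3)[0])    # shifted by ~ +eps_delta^2/2 = +0.045
```

The shift of the lowest level by $\approx \epsilon_\delta^2/2$ anticipates the analytic estimate derived in the next subsection.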
\begin{figure}[tbp]
\includegraphics[width=\textwidth]{"figure_1".pdf}
\caption{Dimensionless binding energy $\epsilon_E \equiv E(\epsilon_\phi,\epsilon_\delta)/m_{\chi}\alpha^2$ vs $\epsilon_{\delta}$ for the 1$s$ (\emph{top left}), 2$s$ (\emph{top right}), 2$p$ (\emph{bottom left}), and 3$p$ (\emph{bottom right}) states at fixed $\epsilon_\phi$. Blue, green, and orange lines correspond to $\epsilon_{\phi}= 0, 0.01, 0.1$ respectively. The dashed gray lines show the corresponding analytic estimates $(\epsilon_{\delta},\widetilde{E}_b(\epsilon_\phi,0)-\epsilon_{\delta}^2/2)$, where $\widetilde{E}_b$ is the dimensionless binding energy. The curves corresponding to a shift in energy of $\epsilon_{\delta}^2/2$ match their numerical counterparts well for small $\epsilon_{\delta}$. Dotted lines indicate the function $\epsilon_\delta^4$ in all four panels; in the upper two panels, we also plot $\text{min}(\epsilon_\phi^2/\ln(\epsilon_\phi/\epsilon_\delta^2)^2,\ \epsilon_{\phi}^2)$ for $\epsilon_\phi=0.1$ (for smaller $\epsilon_\phi$ this constraint is not relevant for any of the bound states we examine). The expected region of validity for the analytic estimates is above and to the left of these curves.
\label{fig:binding energy}}
\end{figure}
\subsection{Analytic estimates for the $L+S$-even spectrum}
As discussed previously, the bound-state wavefunction should largely have support at small $r$, where the potential is large compared to the mass splitting. In this small-$r$ regime, the mixing between the eigenstates is suppressed, as discussed in Section \ref{sec:general considerations}. Accordingly, we can associate the bound state entirely with the eigenstate that experiences an attractive potential, and read off the potential for that eigenstate from the eigenvalue $\lambda_-$ in Eq.~\ref{smallr}, with the replacement of $-\epsilon_v^2$ with the dimensionless binding-energy parameter $\epsilon_E$ ($\equiv E/\alpha^2 m_\chi$):
\begin{equation} \lambda_- \approx \epsilon_E +\frac{l(l+1)}{r^2}+\frac{\epsilon_{\delta}^2}{2}- \frac{e^{-\epsilon_{\phi}r}}{r}+\mathcal{O}(r).\end{equation}
We see that to lowest order, the effect of switching on the mass splitting in this case is to simply shift the binding energy parameter $\epsilon_E$ by $\epsilon_\delta^2/2$. Explicitly, suppose the Yukawa potential with the same $\epsilon_\phi$ but no mass splitting has a spectrum of bound states with energies $\epsilon_E^\prime$. Then the spectrum of bound states in the case with a mass splitting will (to the degree that this approximation is valid) satisfy $\epsilon_E + \epsilon_\delta^2/2 = \epsilon_E^\prime$.
In terms of the physical bound-state energies, we can write this result as:
\begin{equation}\label{eq:approx}
E(\epsilon_{\phi},\epsilon_{\delta})=E(\epsilon_{\phi},0)-\delta.
\end{equation}
We expect this approximation to break down once the radius of the bound state becomes comparable to the crossover radius where the mass splitting term is comparable to the Yukawa potential. We can estimate the typical momentum of a particle in the bound state as $p\sim\sqrt{m_\chi E}$; rescaling the radial coordinate as previously, we obtain an estimate for the dimensionless radius of the bound state:
\begin{equation} r_B \equiv \alpha m_\chi (m_\chi E)^{-1/2} = (\alpha^2 m_\chi/E)^{1/2} = 1/\sqrt{\epsilon_E}.\end{equation}
The crossover radius $r_C$, in the dimensionless rescaled units, is defined by $\epsilon_\delta^2 = e^{-\epsilon_\phi r_C}/r_C$. If $\epsilon_\delta^2 \gg \epsilon_\phi$, then we have $r_C \approx 1/\epsilon_\delta^2$, as the exponent in the Yukawa potential is negligible at $r=r_C$ in this case. The validity condition $r_B \lesssim r_C$ translates in this case to $\epsilon_E \gtrsim \epsilon_\delta^4$. In the opposite case, where $\epsilon_\delta^2 \ll \epsilon_\phi$, the crossover will be induced by the exponential suppression, and we expect $r_C\sim 1/\epsilon_\phi$. We can get a somewhat better estimate by substituting $r_C\approx 1/\epsilon_\phi$ where it appears \emph{outside} the exponent, in the defining equation $\epsilon_\delta^2 = e^{-\epsilon_\phi r_C}/r_C$. Thus we obtain $r_C \approx (1/\epsilon_\phi) \ln (\epsilon_\phi/\epsilon_\delta^2)$. Requiring $r_B \lesssim r_C$ then demands that $\epsilon_E \gtrsim \epsilon_\phi^2/\ln(\epsilon_\phi/\epsilon_\delta^2)^2$, in the regime where $\epsilon_\phi \gg \epsilon_\delta^2$.
Thus in general this approximation should be valid when:
\begin{equation}\label{eq:validity} \epsilon_E = \frac{E}{\alpha^2 m_\chi} \gg \begin{cases} \epsilon_\phi^2/\ln(\epsilon_\phi/\epsilon_\delta^2)^2, & \epsilon_\delta^2 \ll \epsilon_\phi, \\
\epsilon_\delta^4 & \epsilon_\delta^2 \gg \epsilon_\phi.\end{cases}
\end{equation}
For the intermediate region where $\epsilon_\delta^2 \sim \epsilon_\phi$, both limits become $\epsilon_E \gg \epsilon_\phi^2$. Thus we can estimate the validity constraint over the full region as:
\begin{equation}\epsilon_E \gg \text{max}\left(\epsilon_\delta^4,\text{min}\left[\epsilon_\phi^2,\epsilon_\phi^2/\ln(\epsilon_\phi/\epsilon_\delta^2)^2\right]\right).
\end{equation}
We have confirmed numerically that the right-hand side approximates $1/r_C^2$ to within a few tens of percent, so this condition is a good proxy for $\epsilon_E \gg 1/r_C^2$.
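This check is easy to reproduce: the sketch below (our own illustration) solves the defining equation $\epsilon_\delta^2 = e^{-\epsilon_\phi r_C}/r_C$ by bisection and compares the result with the two limiting estimates:

```python
import math

def r_crossover(eps_phi, eps_delta, lo=1e-8, hi=1e8):
    """Solve eps_delta^2 = exp(-eps_phi * r) / r for r_C; the right-hand side
    is monotonically decreasing, so geometric bisection suffices."""
    target = eps_delta**2
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if math.exp(-eps_phi * mid) / mid > target:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# eps_delta^2 >> eps_phi: expect r_C ~ 1/eps_delta^2
print(r_crossover(0.01, 0.5), 1.0 / 0.5**2)
# eps_delta^2 << eps_phi: expect r_C ~ ln(eps_phi/eps_delta^2)/eps_phi
print(r_crossover(0.1, 0.1), math.log(0.1 / 0.1**2) / 0.1)
```

In the second regime the logarithmic estimate overshoots the exact root by a few tens of percent, consistent with the accuracy quoted above.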
We show the results of the analytic approximation, together with the numerically computed binding energies, in Fig.~\ref{fig:binding energy}. We display results for the $1s, 2s, 2p$ and $3p$ states, as a function of $\epsilon_{\delta}$ at several fixed values of $\epsilon_{\phi}$. To employ Eq.~\ref{eq:approx}, we compute $E(\epsilon_\phi,0)$ numerically in each case (this term can also be estimated analytically using results for the Hulth\'{e}n potential, as we will discuss below). We see that indeed the approximation quite accurately captures the shift in the bound state energies, with the expected breakdown of the approximation once the binding energies become sufficiently small. We overplot dotted lines corresponding to the validity criteria $\epsilon_E \gtrsim \epsilon_\delta^4$ and $\epsilon_E \gtrsim \text{min}\left[\epsilon_\phi^2,\epsilon_\phi^2/\ln(\epsilon_\phi/\epsilon_\delta^2)^2\right]$; the approximation is expected to be valid in the region well above both lines. These estimates seem adequate to characterize roughly where there is a $\mathcal{O}(1)$ divergence between the true and predicted bound-state energies.
For sufficiently shallow bound states, where this approximation will eventually break down, it may be possible to make further progress using the universal characterization of the near-threshold bound-state properties in terms of the scattering length (which approaches infinity for zero-energy bound states) \cite{Braaten:2013tza,Laha:2013gva}. We leave this to future work.
\subsection{Deriving the energy shifts with first-order perturbation theory}
The constant offset in the bound-state energies due to the mass splitting (e.g. Eq.~\ref{eq:approx}) can also be obtained from first-order perturbation theory; this approach makes it easy to see how this result generalizes beyond the pseudo-Dirac case. Suppose we can write the non-relativistic Hamiltonian in the form \begin{equation}
H=H_0+\Delta H
\end{equation}where $H_0$ is the Hamiltonian in the zero mass-splitting case, and $\Delta H$ is the constant offset matrix induced by mass splitting. In the pseudo-Dirac case,
\begin{equation} \Delta H = 2 \delta \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.\end{equation}
If we treat $\Delta H$ as a perturbation, then using first-order perturbation theory we find that the correction to the bound-state energy is given by \begin{align}
\Delta E&=\bra{\psi}\Delta H\ket{\psi}
\end{align}where $\ket{\psi}=\begin{pmatrix}\psi_1(r)\\ \psi_2(r)\end{pmatrix}$ is the bound-state wavefunction in the absence of the mass splitting.
If it is the case that (1) $H_0$ admits approximately $r$-independent eigenvectors $\eta$, so that we can write $\ket{\psi}=\psi(r)\eta$ where $\psi(r)$ is a canonically normalized scalar wavefunction and $\eta$ is an $r$-independent vector with $\eta^\dagger \eta = 1$, and (2) $\Delta H$ is independent of $r$, fixed solely by the mass splitting, then we can write:
\begin{equation} \Delta E = \eta^\dagger \Delta H \eta \int d^3 r \psi^*(r) \psi(r) = \eta^\dagger \Delta H \eta.\end{equation}
Thus the first-order shift is simply determined by the degree to which the bound state overlaps with the mass splitting matrix. If the bound state is completely constituted of one of the two-body states in the spectrum, its energy is offset by exactly the mass splitting of that state from $2 m_\chi$, as one would expect. Where there are only two possible two-particle states (for even $L+S$) and the mass splitting is $2\delta$, the first-order shift is given simply by $|C|^2 \times 2 \delta$, where $C$ is the second component of $\eta$.
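As a concrete sanity check, the overlap formula above can be evaluated mechanically for the pseudo-Dirac case. The following sketch (illustrative only; $\delta=1$ is an arbitrary test value) confirms that $\Delta E = |C|^2 \times 2\delta = \delta$ for $\eta = (1,1)/\sqrt{2}$:

```python
import numpy as np

# First-order energy shift Delta E = eta^dagger . Delta_H . eta for a
# two-state system with mass-splitting matrix Delta_H = diag(0, 2*delta).
delta = 1.0                          # arbitrary test value
Delta_H = np.diag([0.0, 2.0 * delta])

# Pseudo-Dirac eigenvector of the zero-splitting Hamiltonian (even L+S):
eta = np.array([1.0, 1.0]) / np.sqrt(2.0)

shift = eta @ Delta_H @ eta          # eta is real, so no conjugation needed
C = eta[1]                           # second component of eta

assert np.isclose(shift, abs(C)**2 * 2.0 * delta)
assert np.isclose(shift, delta)      # shift equals delta, as in the text
```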
For the dark $U(1)$ (pseudo-Dirac) case with even $L+S$, \begin{align}
H_0&=-\nabla^2-\frac{e^{-\epsilon_{\phi}r}}{r}\sigma_x,& \eta&=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix}
\end{align}giving $|C|^2=1/2$, so that the binding energy is given approximately by:\begin{equation}
E=E_0-\delta
\end{equation}which we previously obtained directly from the eigenvalues (Eq.~\ref{eq:approx}).
This alternative derivation has the advantage of being easily applicable to general Hamiltonians of similar form; it manifestly demonstrates that the prefactor in the constant energy offset is determined purely by the fraction of the bound state in the heavier two-particle states of the system.
This approach also clarifies the expected range of validity of the approximation; in the pseudo-Dirac case, it is valid up to $\mathcal{O}(\delta^2) \sim \mathcal{O}(\epsilon_\delta^4)$ terms, and so we expect the corrections to the binding energy from these next-order terms to become large when $\epsilon_E \sim \epsilon_\delta^4$, as previously discussed. This argument further suggests that in the pseudo-Dirac case the constant-energy-shift approximation may still be valid regardless of $\epsilon_\phi$, provided only that $\epsilon_E \gg \epsilon_\delta^4$, since for this $H_0$ the eigenvectors are $r$-independent for arbitrary $\epsilon_\phi$ so long as $\epsilon_\delta = 0$.
\subsection{Shifts to the resonance positions}
The simple behavior of the bound-state energies in the presence of the mass splitting gives us some analytic control over the positions of the resonances in the Sommerfeld enhancement, corresponding to zero-energy bound states, although (from the arguments in the previous subsection) we expect our approximation to eventually fail in the zero-binding-energy limit. We can test this by comparing the values of $\epsilon_\phi$ for which $\epsilon_E(\epsilon_\phi,\epsilon_\delta)=0$, computed fully numerically as a function of $\epsilon_\delta$, with the values obtained by solving $\epsilon_E(\epsilon_\phi,\epsilon_\delta)=0$ using Eq.~\ref{eq:approx}. In the latter case, we determine $\epsilon_E(\epsilon_\phi,0)$ numerically as a function of $\epsilon_\phi$, and solve for $\epsilon_E(\epsilon_\phi,0) = \epsilon_\delta^2/2$ as a function of $\epsilon_\delta$. For the bound states $1s,2s,2p,3s,3p$, we find reasonable agreement of the resonance positions with the semi-analytic prediction, at least for small $\epsilon_{\delta}$, as is shown in Fig.~\ref{fig:respos}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{"figure_2".pdf}
\caption{Resonance positions for the $1s,2s,3s$ singlet states (\emph{left panel}) and $2p, 3p$ triplet states (\emph{right panel}). The plots show the value of the dimensionless $\epsilon_{\phi}$ parameter at which the bound state energy becomes zero, as a function of $\epsilon_\delta$. The solid lines indicate the solution for $\epsilon_\phi$ obtained by solving Eq.~\ref{eq:approx} for $E=0$, while the dots indicate numerical calculation of the resonance positions from directly evaluating the bound state energies as a function of $\epsilon_{\phi}$ and $\epsilon_\delta$.
}
\label{fig:respos}
\end{figure}
From Fig.~\ref{fig:respos} we can also infer that the shift in the resonance value of $\epsilon_\phi$ is roughly linear in $\epsilon_\delta$. This behavior was already found for $s$-wave resonances in Ref.~\cite{slatyer2010sommerfeld}. We can see why this occurs analytically from the estimate for the binding energies of the Hulth\'{e}n potential given in Eq.~\ref{eq:hulthenenergy}. Choosing $\alpha_H=\alpha$ as previously, and writing $\epsilon_{H}=m_H/\alpha m_{\chi}$, we can write the Hulth\'{e}n spectrum in the form:
\begin{equation} \epsilon_E(\epsilon_H,0) \approx
\frac{\left(1 - n^2 \epsilon_H \right)^2}{4 n^2} \end{equation}
Suppose we approximate the Yukawa potential (with no mass splittings) by a Hulth\'{e}n potential with $\epsilon_H = q \epsilon_\phi$ for some constant $q$ (usually taken to be $q=\pi^2/6$), so $\epsilon_E(\epsilon_\phi,0) \approx \left(1 - n^2 q \epsilon_\phi \right)^2/4 n^2$. Then using Eq.~\ref{eq:approx}, the resonance condition $\epsilon_E(\epsilon_\phi,\epsilon_\delta)=0$ becomes:
\begin{equation}\label{eq:eh}
\epsilon_\phi = \frac{1}{n^2 q} \left(1 - \sqrt{2} n \epsilon_\delta \right)
\end{equation}
This yields the expected linear scaling with $\epsilon_\delta$. The slope of the scaling relation depends on the normalization factor $q$ in a trivial way; $q=\pi^2/6$ works well for the $l=0$ case studied in Ref.~\cite{slatyer2010sommerfeld}, but for higher partial waves, we find that slightly modified values of $q$ may be preferred. (Furthermore, for higher partial waves Eq.~\ref{eq:hulthenenergy} is not exact, even for the Hulth\'{e}n potential.) Nonetheless, this argument is sufficient to demonstrate that the approximately linear shift in the resonance positions with increasing $\epsilon_\delta$ is expected to hold true for all $l$, and to a first approximation depends only on $\epsilon_\delta$ and the principal quantum number of the relevant bound state.
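The linear scaling is easy to verify numerically. The sketch below (a minimal check, not part of the analysis above) scans the Hulth\'{e}n-approximated binding energy for the resonance condition $\epsilon_E(\epsilon_\phi,0)=\epsilon_\delta^2/2$ and compares the result against the closed form of Eq.~\ref{eq:eh}:

```python
import numpy as np

# Resonance condition from the Hulthen approximation (a sketch):
# eps_E(eps_phi, 0) = (1 - n^2 q eps_phi)^2 / (4 n^2), and the constant
# shift eps_E -> eps_E - eps_delta^2/2 gives a zero-energy bound state
# when eps_E(eps_phi, 0) = eps_delta^2 / 2.
q = np.pi**2 / 6.0

def eps_phi_resonance(n, eps_delta):
    """Closed-form resonance position, linear in eps_delta (Eq. eq:eh)."""
    return (1.0 - np.sqrt(2.0) * n * eps_delta) / (n**2 * q)

# Brute-force check: scan eps_phi and find where the condition holds.
n, eps_delta = 2, 0.05
grid = np.linspace(1e-6, 1.0 / (n**2 * q), 200_001)
lhs = (1.0 - n**2 * q * grid)**2 / (4.0 * n**2)
found = grid[np.argmin(np.abs(lhs - eps_delta**2 / 2.0))]

assert np.isclose(found, eps_phi_resonance(n, eps_delta), rtol=1e-4)
```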
\section{Application to electroweak bound states}
\label{sec:wino}
The basic principles laid out in the previous section are not specific to the model we consider. As another example where they may be applicable, consider the case of $SU(2)_{L}$ triplet wino dark matter. The DM is a Majorana fermion, denoted $\chi^0$, and the lightest member of the multiplet; the rest of the multiplet forms a charged Dirac fermion which we will denote $\chi^-$, the chargino. The gauge group is $SU(2)_{L}\times U(1)_{Y}$, and interactions involving the DM and the chargino are mediated by the massive gauge bosons $W^{\pm}$ and $Z$, as well as the photon. The presence of charged gauge bosons allows capture into bound states via a new channel wherein $W^{\pm}$ could itself emit a photon \cite{Asadi:2016ybp}.
As in the simple pseudo-Dirac model, the relevant potentials differ for $L+S$-even and $L+S$-odd states. The neutral bound states supported by the latter potential are again relatively simple, since the two-state system with $L+S$-odd has no $\chi^0\chi^0$ component, and so consists only of $\chi^+ \chi^-$ bound states supported by $Z$ and $\gamma$ exchange (as studied in e.g. Ref.~\cite{Zhang:2013qza}). The DM-chargino mass splitting thus plays no interesting role for this sector.
Consequently, we focus again on the case of even $L+S$, where the potential \cite{Asadi:2016ybp} is given by \begin{align}
V_{\text{even }L+S}(r)=\begin{bmatrix}
0 & -\sqrt{2}\alpha_{W}\frac{e^{-m_{W}r}}{r}\\
-\sqrt{2}\alpha_{W}\frac{e^{-m_{W}r}}{r}& 2\delta-\frac{\alpha}{r}-\frac{\alpha_{W}\cos^2{\theta_{w}}e^{-m_{Z}r}}{r}
\end{bmatrix}
\end{align}
Let us scale the coordinate $r$ by $\frac{\hbar}{\alpha_W m_{\chi}c}$, yielding the dimensionless analog of Eq.~\ref{matrix} for the wino case, which is \begin{equation}
V_{\text{even }L+S}(r)=\begin{bmatrix}
0 & -\sqrt{2}\frac{e^{-\epsilon_{W}r}}{r}\\
-\sqrt{2}\frac{e^{-\epsilon_{W}r}}{r}& \epsilon_{\delta}^2-\frac{\sin^2\theta_w}{r}-\frac{\cos^2{\theta_{w}}e^{-\epsilon_{Z}r}}{r}
\end{bmatrix}
\end{equation}where we have defined \begin{align}
\epsilon_{W}&=\frac{m_{W}}{m_{\chi}\alpha_W}&
\epsilon_{Z}&=\frac{m_{Z}}{m_{\chi}\alpha_W}&
\epsilon_{\delta}&=\sqrt{\frac{2\delta}{m_{\chi}\alpha_W^2}}
\end{align}
Note that the binding energies are now expressed as multiples of $m_{\chi}\alpha_W^2$, $\epsilon_E = E/(\alpha_W^2 m_\chi)$. We can again evaluate the eigenvalues of the potential matrix:
\begin{align}\label{eweigenvalues}
\lambda_{\pm}(r)=\frac{1}{2}\bigg(\epsilon_{\delta}^2-\frac{\sin^2\theta_w}{r}-\frac{\cos^2{\theta_{w}}e^{-\epsilon_{Z}r}}{r}\bigg)\pm \sqrt{\frac{1}{4}\bigg(\epsilon_{\delta}^2-\frac{\sin^2\theta_w}{r}-\frac{\cos^2{\theta_{w}}e^{-\epsilon_{Z}r}}{r}\bigg)^2+\bigg(\sqrt{2}\frac{e^{-\epsilon_{W}r}}{r}\bigg)^2}
\end{align}
Again, we will be interested in the behavior of the potential over the support of the bound state, i.e. $r \lesssim 1/\sqrt{\epsilon_E}$. In the regime where $\epsilon_E \gg \epsilon_W^2, \epsilon_Z^2,\epsilon_\delta^4$, the Yukawa terms may be replaced by Coulombic terms within this radius, and furthermore these Coulombic terms dominate over the $\epsilon_\delta^2$ term. In this case, the eigenvalues become:
\begin{align}
\lambda_{\pm}(r)\approx&\frac{1}{2}\bigg(\epsilon_{\delta}^2-\frac{\sin^2\theta_w}{r}-\frac{\cos^2{\theta_{w}}}{r}\bigg)\pm \sqrt{\frac{1}{4}\bigg(\epsilon_{\delta}^2-\frac{\sin^2\theta_w}{r}-\frac{\cos^2{\theta_{w}}}{r}\bigg)^2+\bigg(\frac{\sqrt{2}}{r}\bigg)^2}\nonumber\\
\approx&\frac{1}{2}\bigg(\epsilon_{\delta}^2-\frac{1}{r}\bigg)\pm\sqrt{\frac{1}{4}\bigg(\epsilon_{\delta}^2-\frac{1}{r}\bigg)^2+\frac{2}{r^2}}\nonumber\\
\approx&\frac{1}{2}\bigg(\epsilon_{\delta}^2-\frac{1}{r}\bigg)\pm\frac{3}{2r}\sqrt{1-\frac{2\epsilon_{\delta}^2 r}{9}}\nonumber\\
\approx&\frac{1}{2}\bigg(\epsilon_{\delta}^2-\frac{1}{r}\bigg)\pm\bigg(\frac{3}{2r}-\frac{\epsilon_{\delta}^2}{6}\bigg)\nonumber
\end{align}
Thus we find one attractive and one repulsive potential, with the attractive one being stronger, and with offsets due to the mass splitting similar to those we observed in our toy model:
\begin{align}
\lambda_{-}(r)\approx-\bigg(\frac{2}{r}-\frac{2\epsilon_{\delta}^2}{3}\bigg)& &\lambda_{+}(r)\approx\frac{1}{r}+\frac{\epsilon_{\delta}^2}{3}
\end{align}
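As a quick numerical check of this expansion, one can compare the exact eigenvalues of Eq.~\ref{eweigenvalues} in the Coulombic limit against the linearized forms above. The sketch below (with an illustrative value of $\epsilon_\delta^2$) confirms agreement when $\epsilon_\delta^2 r$ is small:

```python
import numpy as np

# Sketch: Coulombic-limit eigenvalues of the wino potential matrix
#   [[0, -sqrt(2)/r], [-sqrt(2)/r, d2 - 1/r]]   with d2 = eps_delta^2,
# i.e. lam_pm = (d2 - 1/r)/2 +/- sqrt((d2 - 1/r)^2/4 + 2/r^2),
# compared with the linearized forms quoted in the text.
def lam_exact(r, d2):
    a = 0.5 * (d2 - 1.0 / r)
    return a - np.sqrt(a**2 + 2.0 / r**2), a + np.sqrt(a**2 + 2.0 / r**2)

def lam_approx(r, d2):
    return -(2.0 / r - 2.0 * d2 / 3.0), 1.0 / r + d2 / 3.0

# The expansion should hold when eps_delta^2 * r is small:
d2 = 0.01
for r in [0.1, 1.0, 5.0]:
    (em, ep), (am, ap) = lam_exact(r, d2), lam_approx(r, d2)
    assert abs(em - am) < 0.1 * d2 and abs(ep - ap) < 0.1 * d2
```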
In the limit of zero mass splitting (or small $r$), these potentials correspond to full restoration of the $SU(2)_L$ symmetry \cite{Asadi:2016ybp}. Since the repulsive potential cannot accommodate bound states, the effect of the mass splitting is simply to shift the bound state energies supported by the attractive potential to $\epsilon_E(\epsilon_\delta) \approx \epsilon_E(\epsilon_\delta=0) -(2/3) \epsilon_\delta^2$, or in terms of the energies without rescaling,
\begin{equation} E(\epsilon_\delta) \approx E(\epsilon_\delta=0) - (4/3) \delta.\label{eq:winoapprox}\end{equation}
We can also apply the perturbation-theory approach here, in the limit where $\epsilon_W$, $\epsilon_Z$ approach zero and so the potential can be diagonalized in a $r$-independent way. As in the pseudo-Dirac case, we have $\Delta H = \begin{pmatrix} 0 & 0 \\ 0 & 2\delta\end{pmatrix}$. We can rewrite the Hamiltonian (in rescaled coordinates) in the form:
\begin{align}
H_0&\approx-\nabla^2-\frac{1}{r}\begin{pmatrix}0&\sqrt{2}\\
\sqrt{2}&1\end{pmatrix}, & \eta&=\frac{1}{\sqrt{3}}\begin{pmatrix}1\\\sqrt{2}\end{pmatrix}
\end{align}so that the binding energies are approximately given by \begin{equation}
E=E_0-\frac{2}{3} \times 2 \delta
\end{equation} as in Eq.~\ref{eq:winoapprox}.
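This diagonalization is simple enough to verify mechanically. The following sketch recovers the attractive coupling strength $2$, the eigenvector $\eta = (1,\sqrt{2})/\sqrt{3}$, and the first-order shift $(2/3)\times 2\delta$:

```python
import numpy as np

# Sketch: diagonalize the Coulombic-limit wino coupling matrix and read
# off the first-order shift from the mass-splitting matrix.
K = np.array([[0.0, np.sqrt(2.0)],
              [np.sqrt(2.0), 1.0]])       # coefficient matrix of -1/r in H_0
vals, vecs = np.linalg.eigh(K)

# Attractive channel = largest eigenvalue of K (coupling strength 2):
i = np.argmax(vals)
eta = vecs[:, i]
assert np.isclose(vals[i], 2.0)
assert np.allclose(np.abs(eta), np.array([1.0, np.sqrt(2.0)]) / np.sqrt(3.0))

# First-order shift for Delta_H = diag(0, 2*delta):
delta = 1.0                               # arbitrary test value
shift = eta @ np.diag([0.0, 2.0 * delta]) @ eta
assert np.isclose(shift, 4.0 * delta / 3.0)
```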
These arguments only hold in full when the potential is essentially Coulombic over the support of the bound states. If the masses of the $W$ and $Z$ bosons are large enough, relative to the DM mass, to significantly perturb the bound state energies, that must also be taken into account. An ad hoc estimate for this effect can be obtained by replacing the Coulombic bound-state energies with those for the Hulth\'{e}n potential, with $m_H \rightarrow (\pi^2/6) m_W$ and $\alpha_H \rightarrow \alpha_W$. Combining this prescription with the shift due to the mass splitting, we obtain the estimate:
\begin{equation}
E \approx m_{\chi}\alpha_{W}^2\left(\frac{1}{n}-\frac{n\ m_{W}\pi^2}{12\alpha_W m_{\chi}}\right)^2-\frac{4}{3}\delta, \quad \epsilon_E =\frac{1}{n^2} \left(1-\frac{\pi^2 n^2 \epsilon_W }{12}\right)^2-\frac{2}{3}\epsilon_\delta^2
\label{eq:winobound}
\end{equation}
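For orientation, Eq.~\ref{eq:winobound} is straightforward to evaluate. A minimal sketch, using the parameter values quoted in the following paragraph (the function name is our own), is:

```python
import numpy as np

# Sketch implementing the estimate of Eq. (eq:winobound) for the even-L+S
# wino bound-state energies.
m_W, delta, alpha_W = 80.38, 0.17, 0.0335   # GeV, GeV, dimensionless

def binding_energy(n, m_chi):
    """Estimated binding energy in GeV for principal quantum number n."""
    eps_W = m_W / (m_chi * alpha_W)
    eps_delta_sq = 2.0 * delta / (m_chi * alpha_W**2)
    eps_E = (1.0 - np.pi**2 * n**2 * eps_W / 12.0)**2 / n**2 \
            - (2.0 / 3.0) * eps_delta_sq
    return m_chi * alpha_W**2 * eps_E

# Example: 1s state for a 50 TeV wino; roughly 51.5 GeV for these inputs.
E_1s = binding_energy(1, 50_000.0)
```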
In Fig.~\ref{fig:evenls}, we compare the numerically computed bound-state energies for even $L+S$ to this estimate, and find remarkably good agreement across a broad wino mass range. Throughout, we fix $m_W=80.38$ GeV, $m_Z=91.19$ GeV, $\delta=0.17$ GeV, and $\alpha_W=0.0335$. For our numerical computation we use 30 basis states as previously, but with the coupling for the basis wavefunctions set to $2\alpha_W$, as discussed in Ref.~\cite{Asadi:2016ybp}.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{"figure_3".pdf}
\caption{The wino bound-state energy as a function of the wino mass $m_{\chi}$. Here the overplotted points represent numerical evaluations of the bound state energy, while the solid lines are the analytic estimates of Eq.~\ref{eq:winobound}. From top to bottom, the blue, red and green lines describe the $1s$ singlet, $2p$ triplet and $3d$ singlet bound states, respectively.}
\label{fig:evenls}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=1.1]{"figure_4".pdf}
\caption{The wino bound-state energy when varying the mass splitting parameter $\epsilon_{\delta}$. The overplotted points represent numerical evaluations of the bound state energy, while the solid lines are the analytic estimates $\epsilon_E(\epsilon_\delta)=\epsilon_E(\epsilon_\delta=0) -(2/3) \epsilon_\delta^2$. In the \emph{left panel} the parameters are $m_{\chi}=50$ TeV, $\epsilon_{W}=0.0479671$; in the \emph{right panel} they are $m_{\chi}=70$ TeV, $\epsilon_{W}=0.0342622$.}
\label{fig:winodel}
\end{figure}
As a further check, we examined the behavior of the estimate,
\begin{equation}
\epsilon_E(\epsilon_\delta) = \epsilon_E(0) -\frac{2}{3} \epsilon_\delta^2
\end{equation}(holding all other parameters fixed) by numerically computing the bound-state energies as a function of the mass splitting $2\delta$, and comparing them to this analytic estimate, for the even-$L+S$ spectrum. We tested the $2s$ singlet and $2p$ triplet cases, and again found reasonable agreement for this simple estimate, as shown in Fig.~\ref{fig:winodel}. Thus, this approach works well even for a more complicated gauge structure such as the wino / $SU(2)$ triplet DM. The approximation of a constant mass-splitting dependent shift to the bound state energies is a generic feature, as demonstrated by the argument from first-order perturbation theory.
The analysis for the $L+S$-odd case is again simpler; we need only recall that since binding energies are defined relative to $2 m_\chi$, the potential for the odd-$L+S$ case contains a constant offset of $2\delta$ due to the higher mass of its constituents. In the regime where the potential is largely Coulombic, and the mass of the $Z$ boson can be neglected, the binding energies defined relative to the sum of the constituent masses, $2 m_\chi + 2 \delta$, follow the usual Coulomb form $E_n = \alpha_W^2 m_\chi/4 n^2$. Thus under our convention, where the total mass of the state is $2 m_\chi - E$, the binding energies $E$ can be approximated in this case as: \begin{align}
E =\frac{\alpha_{W}^2m_{\chi}}{4n^2}-2\delta, \quad \epsilon_E = \frac{1}{4 n^2} - \epsilon_\delta^2
\end{align}
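As an illustration (the threshold condition here is our own back-of-the-envelope addition, not a result quoted above), requiring that such a state lie below the $2m_\chi$ threshold, $E>0$, sets a minimum DM mass:

```python
# Sketch: odd-L+S (chi+ chi-) binding energies in the Coulombic regime,
# relative to 2*m_chi, and the minimum wino mass for such a state to lie
# below the chi0 chi0 threshold (illustrative, using the couplings from
# the text: alpha_W = 0.0335, delta = 0.17 GeV).
alpha_W, delta = 0.0335, 0.17

def E_odd(n, m_chi):
    """Binding energy (GeV) relative to 2*m_chi for the odd-L+S state."""
    return alpha_W**2 * m_chi / (4.0 * n**2) - 2.0 * delta

# E_odd(n, m_chi) > 0 requires m_chi > 8 * delta * n**2 / alpha_W**2:
m_min = 8.0 * delta / alpha_W**2         # n = 1; about 1.2 TeV
assert E_odd(1, 1.01 * m_min) > 0 > E_odd(1, 0.99 * m_min)
```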
In the limit where the $Z$ boson mass is large relative to $\alpha_W m_\chi$, the photon-mediated Coulomb interaction will dominate the potential, and the bound states will follow a similar pattern, but with $\alpha_W \rightarrow \alpha$. This transition is demonstrated numerically in Fig.~\ref{fig:alpha}.
\begin{figure}[tb]
\centering
\includegraphics[scale=1]{"figure_5".pdf}
\caption{The binding energy for the $1s$ spin-triplet bound state (i.e. with odd $L+S$) for wino DM, relative to $2m_\chi + 2 \delta$ (note the binding energies under our usual convention can thus be obtained by subtracting $2\delta$ from this curve), as a function of the wino mass $m_{\chi}$. Here the black points represent numerical evaluations of the bound state energy, while the solid blue lines are the Coulombic limits with two different couplings, $E=\alpha^2m_{\chi}/4$ and $E=\alpha_W^2m_{\chi}/4$. The effective coupling strength transitions from $\alpha$ at low DM masses (where the $\gamma$ exchange dominates) to $\alpha_W$ at high masses (where $SU(2)$ symmetry is restored).}
\label{fig:alpha}
\end{figure}
\section{The cross section for radiative formation of bound states}
\label{sec:capture}
Having derived and understood the bound state spectrum, we now return to the simple pseudo-Dirac model to explore the impact of turning on the mass splitting on the cross section for radiative bound-state formation. There is a subtlety here: using only the ingredients of our model described thus far, the relevant process is emission of a dark photon with mass $m_\phi$, which in the low-velocity limit requires that the binding energy $E$ exceed $m_\phi$, i.e. $m_\phi$ is parametrically of size $\alpha^2 m_\chi$ or smaller. However, to the degree that we wish to use this model as a testbed for other scenarios with mass splittings between DM states, it is useful to consider the possibility that some lighter (or even massless) particle could couple to the $\chi$ or $\chi^*$ states and be radiated to allow formation of a bound state. For example, a similar behavior occurs naturally in the case of wino or higgsino DM: while the DM itself does not couple to the photon, only to the weak gauge bosons (and possibly the Higgs), it has nearly-degenerate heavier charged partners that play a similar role to the $\chi^*$ in our present scenario, and these partners can emit massless photons to allow for bound-state formation. In this way, the presence of a light particle can be crucial for bound-state formation, even if it does not dominate the potential experienced by the DM particles.
In what follows, in order to build intuition for these expanded cases, we will leave open the possibility that a light vector is emitted, with a mass and coupling to the DM that do not match those of the dark photon that dominates the potential; equivalently, when we study how the squared matrix element for this process is modified by the presence of a mass splitting, we will also show results for regimes where the phase-space factor would vanish if the particle being emitted is a dark photon with mass $m_\phi$. However, we will work under the assumption that the particle emitted is a vector rather than a scalar, and thus the selection rules are the same as they would be for dark photon emission; violating this assumption would modify the matrix element, not just the phase-space factor (see Ref.~\cite{Oncala:2018bvl} for a discussion of bound-state formation through scalar emission).
As in Ref.~\cite{Asadi:2016ybp}, we define the matrix element $\bar M$ for radiative capture in the dipole approximation as:
\begin{equation}\label{matrixelement}
\bar M\equiv \frac{\varepsilon}{\mu}\cdot\int d^3 r~ \Psi_{\text{scat}}^{\dagger}(r)\ \hat{C}\ \nabla_{r}\Psi_{\text{bound}}(r)
\end{equation} where $\varepsilon$ describes the polarization of the emitted light vector, $\mu = m_\chi/2$ is the reduced mass of the incoming DM particles, and $\hat{C}$ is the matrix that couples the $\ket{\chi\chi^*}$ sector to the $\ket{\chi\chi} + \ket{\chi^* \chi^*}$ sector. If we expand the previous basis to include $\ket{\chi\chi^*}$, writing the state $\alpha \ket{\chi \chi} + \beta \ket{\chi^*\chi^*} + \gamma \ket{\chi \chi^*}$ as $\begin{pmatrix} \alpha \\ \beta \\ \gamma\end{pmatrix}$, then $\hat{C}$ takes the form:
\begin{equation}\hat{C} =\frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \end{equation}
The factor of $1/\sqrt{2}$ is needed to account for the fact that the $\ket{\chi \chi^*}$ state could equivalently be described as $\ket{\chi^* \chi}$ \cite{Beneke:2014gja}.
The bound states relevant for the single-dipole-vector capture process (which is expected to dominate the overall rate) lie in the $L+S$-odd $\ket{\chi\chi^*}$ sector, since the initial state is $L+S$-even and the dipole photon radiation gives $\Delta L = \pm 1$, $\Delta S = 0$.
We solved for the scattering wavefunctions numerically, using the $L+S$-even potential and employing the method of variable phase, as elucidated in detail in Ref.~\cite{Asadi:2016ybp}. This numerical approach was chosen for its superior stability at large $r$, compared to solving the Schr\"{o}dinger equation via brute force using the \texttt{NDSolve} function in \texttt{Mathematica}. For an initial state consisting of two identical fermions,\footnote{Note that as discussed in Ref.~\cite{Asadi:2016ybp}, the presence of identical fermions in the initial state means this cross section is larger than that for non-identical initial-state particles by a factor of 2 when the initial state has $L+S$ even, and is zero for $L+S$ odd.} the matrix element in Eq.~\ref{matrixelement} can be translated to a physical cross section as:
\begin{equation}\label{xsec}
\sigma v_{\text{rel}}=\frac{\alpha_\text{rad} k}{\pi}\int d\Omega~|\bar{M}|^2.
\end{equation}Here $v_{\text{rel}}$ is the relative velocity between the particles, $\alpha_\text{rad}$ describes the coupling of the emitted light vector to the DM (which may or may not be equal to the coupling $\alpha$ that controls the potential), and the energy of the emitted particle is denoted $k$ and given by:
\begin{align}
k&=E_{\text{scat}}-(2 m_{\chi} - E)\\
&=m_{\chi}\alpha^2(\epsilon_v^2+\epsilon_E)
\end{align}
for a massless or near-massless particle, where $E_\text{scat}$ is the total (mass+kinetic) energy of the initial state, and $E$ is the binding energy of the final-state bound state. (Recall that $\epsilon_E$ contains a contribution of $-\epsilon_\delta^2/2$, as in Eq.~\ref{eq:oddjstatesnodim}, since the bound state is composed of $\chi$ and $\chi^*$.) If the emitted particle has a non-zero mass $m_\text{rad}$, its energy should be modified to:
\begin{align}
E_k&=\sqrt{k^2+m_\text{rad}^2}\nonumber\\
&=\frac{E_{\text{scat}}^2+m_\text{rad}^2-(2 m_{\chi}- E)^2}{2E_{\text{scat}}}\nonumber\\
&\approx \left(E_{\text{scat}}-(2 m_{\chi} - E)\right)\left(1-\frac{m_{\chi}\alpha^2(\epsilon_v^2+\epsilon_E)}{2 E_{\text{scat}}}\right)+\frac{m_\text{rad}^2}{2E_{\text{scat}}}\nonumber\\
&\approx m_{\chi}\alpha^2\left(\epsilon_v^2+\epsilon_E+\frac{\alpha^2 \epsilon_\text{rad}^2}{4}\right),
\end{align}
where $\epsilon_\text{rad} \equiv m_\text{rad}/(\alpha^2 m_\chi)$. Consequently the emitted particle's momentum $k$ is modified to:
\begin{align} k = \sqrt{E_k^2 - m_\text{rad}^2} \approx m_\chi \alpha^2 \sqrt{(\epsilon_v^2 + \epsilon_E + \alpha^2 \epsilon_\text{rad}^2/4)^2 - \epsilon_\text{rad}^2}.\end{align}
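These kinematic expansions can be checked against the exact two-body formula; the sketch below uses illustrative parameter values (our own choices, not fits to the figures):

```python
import numpy as np

# Sketch: exact two-body emission kinematics vs. the expanded expressions
# for the emitted particle's energy E_k and momentum k.
m_chi, alpha = 1000.0, 0.01
eps_v, eps_E, eps_rad = 0.3, 0.2, 0.1

E_scat = 2.0 * m_chi + m_chi * alpha**2 * eps_v**2   # mass + kinetic energy
M_B = 2.0 * m_chi - m_chi * alpha**2 * eps_E         # bound-state mass
m_rad = alpha**2 * m_chi * eps_rad

E_k_exact = (E_scat**2 + m_rad**2 - M_B**2) / (2.0 * E_scat)
k_exact = np.sqrt(E_k_exact**2 - m_rad**2)

E_k_approx = m_chi * alpha**2 * (eps_v**2 + eps_E + alpha**2 * eps_rad**2 / 4.0)
k_approx = m_chi * alpha**2 * np.sqrt(
    (eps_v**2 + eps_E + alpha**2 * eps_rad**2 / 4.0)**2 - eps_rad**2)

assert np.isclose(E_k_exact, E_k_approx, rtol=1e-3)
assert np.isclose(k_exact, k_approx, rtol=1e-3)
```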
As in \cite{Asadi:2016ybp}, we can simplify Eq.~\ref{xsec} further by writing\begin{equation}
\bar{M}=A\epsilon\cdot \hat{r}_{m}
\end{equation}for capture into the bound state characterized by the quantum numbers $\{n,l,m\}$, and we perform the angular integral separately. Using considerations of spherical symmetry, Eq.~\ref{xsec} then simplifies to \begin{equation}
\sigma v_{\text{rel}}= \frac{8\alpha_\text{rad} k|A|^2}{3}
\label{eq:xsecsimple}
\end{equation}
Again, this cross section is valid under the assumption that the initial state contains two identical fermions and has $L+S$ even; in a situation where the initial state contains two distinguishable particles, this cross section should be divided by two (and need not be zero for $L+S$ odd). Note this is the cross section for an initial state of fixed spin, rather than a spin-averaged cross section (performing the spin average would introduce another factor of $1/4$ or $3/4$, for $L$-even and $L$-odd initial states respectively).
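The factor of $8/3$ in Eq.~\ref{eq:xsecsimple} arises from the polarization sum and angular integral: for a massless emitted vector, summing $|\varepsilon\cdot\hat{r}|^2$ over the two transverse polarizations gives $1 - |\hat{k}\cdot\hat{r}|^2$, and $\int d\Omega\,(1 - |\hat{k}\cdot\hat{r}|^2) = 8\pi/3$. A quick Monte Carlo sketch confirms this:

```python
import numpy as np

# Sketch: the angular/polarization factor behind Eq. (eq:xsecsimple).
# Integrating 1 - (k_hat . r_hat)^2 over solid angle yields 8*pi/3, so
# sigma v = (alpha_rad k / pi) |A|^2 * (8 pi / 3) = 8 alpha_rad k |A|^2 / 3.
rng = np.random.default_rng(0)
N = 400_000
cos_t = rng.uniform(-1.0, 1.0, N)                 # k_hat . r_hat
mc = 4.0 * np.pi * np.mean(1.0 - cos_t**2)        # Monte Carlo d-Omega integral
assert abs(mc - 8.0 * np.pi / 3.0) < 0.05
```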
We numerically evaluated this cross section for a scan over $(\epsilon_v, \epsilon_{\delta}, \epsilon_{\phi})$, using the phase-space factor for a massless vector to examine the maximum amount of parameter space (our results can trivially be rescaled to a different phase-space factor using Eq.~\ref{eq:xsecsimple}). We chose to examine the region where there is a large Sommerfeld enhancement, and the Born approximation for the scattering cross sections is insufficient, i.e. where the dimensionless parameters are less than unity \cite{slatyer2010sommerfeld}. Contour plots for the dimensionless cross section $\sigma v_{\text{rel}}m_{\chi}^2c/\hbar^2$, broken down by the initial and final quantum numbers, are shown in Fig.~\ref{fig:contour plots}. Throughout, we have set $\alpha_{\text{rad}}=\alpha=0.01$ in our numerical evaluations.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{"figure_6".pdf}
\caption{The bound-state formation cross section, plotted as $\log_{10}(\sigma v_\text{rel} m_\chi^2 c /\hbar^2)$, for $\epsilon_\delta=0$ (\emph{top row}), $\epsilon_\delta=0.01$ (\emph{middle row}) and $\epsilon_\delta=0.1$ (\emph{bottom row}). We show results for capture into the $2p$ spin-singlet bound state from the $s$-wave component of an initial plane wave (\emph{left column}) and for capture into the $1s$ spin-triplet bound state from the $p$-wave component of the initial state (\emph{right column}). The white line indicates the locus of $\epsilon_{v}\epsilon_{\phi}=\epsilon_{\delta}^2$; the cross section in the region below the line is double the cross section at zero mass splitting, at velocities far from resonances (see text for details).
\label{fig:contour plots}}
\end{figure}
Several broad features emerge from these plots. Let us first comment briefly on the difference between the capture from $s$-wave and $p$-wave initial states, which is manifest even for $\epsilon_\delta=0$. For an $s$-wave initial state, we expect (in the absence of a long-range potential) $\sigma v_\text{rel} \propto v_\text{rel}^0$, while for the $p$-wave contribution we expect $\sigma v_\text{rel} \propto v_\text{rel}^2$. This behavior is observed in the limit where $\epsilon_v \ll \epsilon_\phi$, the upper left corners of the contour plots. However, in the opposite limit where $\epsilon_v \gg \epsilon_\phi$ (lower right corner of the contour plots), we recover Coulombic behavior where the potential is effectively long-range. In this regime, as we will discuss in more detail below, all partial waves contribute with the same $1/v$ scaling. Thus in the $\epsilon_\delta=0$ case we expect (and observe) $\sigma v_\text{rel}$ to rise with decreasing velocity for both $s$- and $p$-wave initial states, before flattening out for the $s$-wave case, and being suppressed at low velocity for the $p$-wave case.
Upon turning on a mass splitting, we find that:
\begin{itemize}
\item In the case of non-zero $\delta$, in the region of parameter space where $\epsilon_{v}\epsilon_{\phi}\lesssim \epsilon_{\delta}^2$, the cross section is doubled relative to the case with zero mass splitting. This is most clearly illustrated by comparing the first and second rows of Fig.~\ref{fig:contour plots}, and in the second row, observing the cross section at fixed $\epsilon_{v}$ as $\epsilon_{\phi}$ is varied across the boundary $\epsilon_\phi \sim \epsilon_\delta^2/\epsilon_v$.
This behavior has been noted previously in the context of the Sommerfeld enhancement, and can be explained by a transition between ``adiabatic'' and ``non-adiabatic'' regimes \cite{schutz2015self,zhang2017self}, with regard to the rotation of the eigenvectors with radial distance from the origin. The argument is that if the diagonalizing matrix varies sufficiently slowly with respect to $r$, then a large-$r$ asymptotic state consisting purely of $\ket{\chi\chi}$ (the lower-energy state at large $r$) will smoothly transition to a small-$r$ state that is dominated by the eigenvector experiencing an attractive potential (the lower-energy state at small $r$). In contrast, in the case with $\epsilon_\delta=0$, there is no evolution of the diagonalizing matrix with $r$, and the small-$r$ wavefunction has the same contributions from the repulsed and attracted eigenvectors as the large-$r$ state -- in this model, those contributions are equal. Since only the attracted eigenvector yields a significant contribution to bound-state formation, the probability of bound-state formation is reduced by a factor of 2 in the Coulombic case relative to the adiabatic case.
The adiabatic regime is characterized by the criterion \cite{schutz2015self}: \begin{equation}\label{eq:adiabatic_criterion}
\epsilon_{v}\epsilon_{\phi}\leq\epsilon_{\delta}^2
\end{equation}
In the wino case, we expect similar behavior in the same regime, except that the cross section will be enhanced by a factor of 3 (rather than 2) relative to the case with no mass splitting. The argument is almost identical to that of the dark $U(1)$ case, and the factor is 3 in this case since the large-$r$ asymptotic state has only a $1/3$ overlap with the attracted eigenvector \cite{Asadi:2016ybp}. This factor-of-3 enhancement is observed in the numerical results of Ref.~\cite{Asadi:2016ybp}.
\item As the mass splitting turns on, the resonances in the cross section for capture are enhanced, and the resonance positions undergo a shift relative to the Coulombic case, according to Eq.~\ref{eq:eh}, when the kinetic energy becomes negligible compared to the mass splitting, as illustrated in Fig.~\ref{fig:adiaplot}. This behavior is due to a modification to the initial-state wavefunction, with similar effects being observed in studies of the Sommerfeld enhancement \cite{slatyer2010sommerfeld}; thus the resonance positions are determined by bound states in the $L+S$-even sector, even though the bound state being formed in this case has $L+S$ odd. Because the resonance positions now also depend on the importance of the mass splitting term compared to the kinetic energy, it is possible for a particular choice of $(\epsilon_\phi, \epsilon_\delta)$ to lie on a resonance at one velocity but not at a lower velocity, leading to non-monotonicity in the capture rate even for an $s$-wave initial state (similar behavior is again observed in the multi-state Sommerfeld case \cite{slatyer2010sommerfeld}). This behavior is demonstrated in the right panel of Fig.~\ref{fig:adiaplot}.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{"figure_7".pdf}
\caption{Radiative capture cross section $\sigma v$ at fixed $\epsilon_{v}$ (\emph{left panel}) and fixed $\epsilon_{\phi}$ (\emph{right panel}) for an initial $s$-wave state and $2p$ bound state, taking $\alpha_\text{rad}=0.01$ and assuming emission of a massless vector (these results can be rescaled to a massive vector by insertion of a modified phase space factor). In the right panel, colored dashed vertical lines indicate the locus of $\epsilon_v=\epsilon_{\delta}^2/\epsilon_{\phi}$, with the region on their left corresponding to the adiabatic regime. We observe the enhancement of the cross section for non-zero $\delta$ in the adiabatic regime, in the right panel, and the $\delta$-dependent shift in the resonance positions, in the left panel.}
\label{fig:adiaplot}
\end{figure}
\end{itemize}
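The regime boundaries discussed above can be summarized in a small helper (a sketch; the enhancement factors apply at low velocity and away from resonances):

```python
# Sketch: the adiabaticity criterion eps_v * eps_phi <= eps_delta^2 and
# the corresponding enhancement of the capture cross section relative to
# the zero-mass-splitting case (2 for the pseudo-Dirac model, 3 for wino).
def capture_enhancement(eps_v, eps_phi, eps_delta, model="pseudo-Dirac"):
    """Expected far-from-resonance enhancement factor: 2 (pseudo-Dirac)
    or 3 (wino) in the adiabatic regime, 1 otherwise."""
    adiabatic = eps_v * eps_phi <= eps_delta**2
    if not adiabatic:
        return 1
    return {"pseudo-Dirac": 2, "wino": 3}[model]

assert capture_enhancement(0.01, 0.05, 0.1) == 2      # below the white line
assert capture_enhancement(0.3, 0.5, 0.01) == 1       # non-adiabatic regime
assert capture_enhancement(0.01, 0.05, 0.1, "wino") == 3
```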
The two generic behaviors described above (adiabatic enhancement and shifting of the resonance structure) originate from the properties of the initial-state wavefunction, and are thus also seen in the calculation of the Sommerfeld enhancement to short-distance annihilation processes (which is set by the initial-state wavefunction evaluated at the origin).
The impact of changing the initial-state wavefunction on the matrix element for bound-state formation is in principle not identical to its effect on the matrix element for short-distance processes such as annihilation; the former involves a non-trivial integration with respect to $r$, as the bound-state formation process is localized within the region $r \lesssim 1/(\alpha m_\chi)$, rather than at the origin. However, if the scale $1/(\alpha m_\chi)$ is small compared to the scales over which the initial-state wavefunction varies significantly, we might expect the fact that the interaction is not zero-range to have only a mild effect. The unperturbed initial-state wavefunction has a natural scale of $1/(m_\chi v)$, the de Broglie wavelength of the incoming particles, and so we might expect the bound-state formation process to be effectively short-range when $m_\chi v \ll m_\chi \alpha$, i.e. $\epsilon_v \ll 1$ (this argument was also made in Ref.~\cite{Finkbeiner:2010sm}). However, it is not clear that this argument will still hold when the initial-state wavefunction is significantly deformed by the potential.
To test the degree to which the capture rate and Sommerfeld enhancement are related, we compare the numerically calculated radiative capture rate to the Sommerfeld enhancement, normalized by the ratio between these two quantities in the Coulomb regime:
\begin{equation}
\sigma v_{\text{rel}}=\left(\sigma v_{\text{rel}}\right)_C\frac{S}{S_{\text{C}}}.
\end{equation}
Here $\left(\sigma v_{\text{rel}}\right)_C$ is the capture cross section for a particular initial-state partial wave in the Coulombic limit ($\epsilon_{\phi}=\epsilon_{\delta}=0$), while $S$ and $S_C$ are the Sommerfeld enhancement factors (for the same partial wave) in the general pseudo-Dirac case and the Coulombic limit, respectively.
For an $s$-wave initial state, there is a semi-analytic approximation for pseudo-Dirac dark matter \cite{slatyer2010sommerfeld}, but in practice we use the numerically computed value in the comparison for both $s$- and $p$-wave initial-state contributions. The Sommerfeld enhancement for the $s$-wave case in the Coulombic limit is given by \cite{Cirelli:2007xd,ArkaniHamed:2008qn,slatyer2010sommerfeld}: \begin{equation}
S_{\text{C}}=\frac{\pi/\epsilon_v}{1-e^{-\pi/\epsilon_v}} \rightarrow \frac{\pi}{\epsilon_v}, \, \epsilon_v \ll 1,
\end{equation} whereas for the $p$-wave case we have \cite{Cassel:2009wt}:\begin{equation}
S_{\text{C}}=\frac{\pi/\epsilon_v}{(1-e^{-\pi/\epsilon_v})}\left(1+\frac{1}{4\epsilon_v^2}\right) \rightarrow \frac{\pi}{4\epsilon_v^3}, \, \epsilon_v \ll 1.
\end{equation}
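For a quick numerical check of these limits (an illustrative sketch, not part of the original analysis), the Coulombic Sommerfeld factors can be evaluated directly:

```python
import math

def sommerfeld_s(eps_v):
    # s-wave Coulomb Sommerfeld factor: S_C = (pi/eps_v) / (1 - exp(-pi/eps_v))
    x = math.pi / eps_v
    return x / (1.0 - math.exp(-x))

def sommerfeld_p(eps_v):
    # p-wave case carries the extra factor (1 + 1/(4 eps_v^2))
    return sommerfeld_s(eps_v) * (1.0 + 1.0 / (4.0 * eps_v**2))

eps_v = 1e-3
print(sommerfeld_s(eps_v) / (math.pi / eps_v))           # close to 1: limit pi/eps_v
print(sommerfeld_p(eps_v) / (math.pi / (4 * eps_v**3)))  # close to 1: limit pi/(4 eps_v^3)
```

The check confirms that for $\epsilon_v \ll 1$ the quoted limiting forms are accurate to corrections of order $\epsilon_v^2$.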
The capture rate is also analytically computable in the Coulombic limit, and takes a particularly simple form in the limit of small $\epsilon_v$. Using the results from Ref.~\cite{Asadi:2016ybp} we have:
\begin{itemize}
\item $s \rightarrow 2p$:
\begin{align} A &\approx \frac{2^5\sqrt{2 \pi}}{3} \alpha^{-1/2} m_\chi^{-3/2} \Gamma(1 - i / 2 \epsilon_v) e^{\pi/(4 \epsilon_v)} e^{-4}, \nonumber \\
(\sigma v_\text{rel})_{s \rightarrow p} & \approx \frac{\alpha_\text{rad}}{v_\text{rel}} \frac{ \alpha^2 }{m_\chi^2} \frac{2^{11} \pi^2}{3^2} e^{-8} \end{align}
This is the cross section for an initial $\chi^0 \chi^0$ spin-singlet state, summed over the three $2p$ bound states ($m=0,\pm 1$). To obtain the contribution to the spin-averaged cross section, one would multiply this cross section by a factor of 1/4. We have also chosen $k=\alpha^2 m_\chi/16$, as appropriate for an $n=2$ state in the Coulomb limit.
\item $p \rightarrow 1s$:
\begin{align} A &\approx 2^4 \sqrt{\pi} \alpha^{-1/2} m_\chi^{-3/2} \Gamma(1 - i / 2 \epsilon_v) e^{\pi/(4 \epsilon_v)} e^{-2}, \nonumber \\
(\sigma v_\text{rel})_{p \rightarrow s} & \approx \frac{\alpha_\text{rad}}{v_\text{rel}} \frac{\alpha^2 }{m_\chi^2} \frac{2^{10} \pi^2}{3} e^{-4} \end{align}
This is again the cross section for an initial state of fixed spin, in this case the spin-triplet configuration. To obtain the contribution to the spin-averaged cross section, this result should thus be multiplied by 3/4. We have set $k=\alpha^2 m_\chi/4$, since the bound state has $n=1$ in this case.
\end{itemize}
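Since both Coulombic rates share the prefactor $(\alpha_\text{rad}/v_\text{rel})(\alpha^2/m_\chi^2)$, their ratio is a pure number; a small illustrative script (not from the original text):

```python
import math

# Dimensionless coefficients multiplying (alpha_rad / v_rel) * (alpha^2 / m_chi^2)
c_s_to_2p = 2**11 * math.pi**2 / 3**2 * math.exp(-8)  # s -> 2p capture
c_p_to_1s = 2**10 * math.pi**2 / 3 * math.exp(-4)     # p -> 1s capture

# Ratio (p -> 1s) / (s -> 2p) = (3/2) e^4, independent of alpha, m_chi, v_rel
ratio = c_p_to_1s / c_s_to_2p
print(ratio)  # ~81.9
```

In the small-$\epsilon_v$ Coulombic limit, capture from a $p$-wave initial state into the $1s$ state is thus nearly two orders of magnitude faster than $s \rightarrow 2p$ capture, before the spin-averaging factors of $1/4$ and $3/4$ are applied.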
As shown in Fig.~\ref{fig:sommerfeld}, the rescaled Sommerfeld enhancement has a very similar behaviour to the full bound-state formation rate. The principal difference between the two is captured in the phase-space factor in the bound state case (or rather, the ratio of the phase-space factor to its value in the Coulomb regime), which ensures that the bound state actually exists; this factor is simple and analytically calculable as soon as the bound-state energies are known. In the low-velocity limit for a massless emitted particle, we can approximate it by:
\begin{equation}E/(\alpha^2 m_\chi/4 n^2) = 4 n^2 \epsilon_E \approx \left(1 - \frac{\pi^2}{6} n^2 \epsilon_\phi\right)^2 - 2 n^2 \epsilon_\delta^2.\end{equation}
This result suggests that at least in this simple Abelian model and for capture into low-lying bound states, the ratio -- in a given partial wave -- of the bound-state formation rate to the Sommerfeld-enhanced annihilation rate is nearly independent of the parameters $\epsilon_\phi$ and $\epsilon_\delta$, and can be quite well-described by the (analytically calculable) ratio for the Coulombic limit with $\epsilon_\phi,\epsilon_\delta \rightarrow 0$, up to the phase-space factor. Thus if the same partial wave dominates both the Sommerfeld-enhanced annihilation rate and the bound-state formation, whichever process has a larger rate in the Coulomb case will also dominate for the bulk of the parameter space with $\epsilon_\phi,\epsilon_\delta \ne 0$, at least in the regime we have studied where bound states exist and the Sommerfeld enhancement is large. This conclusion need not hold, however, if e.g. the Sommerfeld-enhanced annihilation is dominated by $s$-wave processes but the bound-state formation is dominated by $p$-wave initial states capturing into an $s$-wave bound state.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{"figure_8".pdf}
\caption{Comparison of the true capture cross section, $\log_{10}(\sigma v_{\text{rel}}m_{\chi}^2c/\hbar)$, and an estimate using the rescaled Sommerfeld enhancement, $\log_{10}\left((\sigma v_{\text{rel}})_{\text{C}}m_{\chi}^2c/\hbar \frac{S}{S_{\text{C}}}\right)$, as a function of $\log_{10}(\epsilon_v)$ and $\log_{10}(\epsilon_{\phi})$. The solid orange lines are the numerical bound-state capture rate, and the dashed black lines are the rescaled Sommerfeld enhancement. The upper row indicates the $s\rightarrow 2p$ capture, while the lower is for the $p\rightarrow 1s$ transition. The columns are arranged in order of increasing mass splitting $\epsilon_{\delta}$ with the values being $0,0.01,0.1$ respectively. We find that they have similar behaviour, up to the phase space factor appearing in the bound-state formation rate.
\label{fig:sommerfeld}}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
Using a simple model of pseudo-Dirac dark matter interacting with a light vector boson as a testbed, we have derived an analytic expression for the shift in bound-state energies when the bound state has constituents of slightly different masses. In the two-state case, with a mass splitting of $2 \delta$ between the available two-particle configurations, and bound states comprised equally of these configurations, the shift in the binding energies $E$ is given by:
\begin{equation}
E(\epsilon_{\phi},\epsilon_{\delta})=E(\epsilon_{\phi},0)-\delta
\end{equation}where $E$ is the binding energy and $\epsilon_{\phi}$ and $\epsilon_{\delta}$ are as defined in Eq.~\ref{eq:dimensionlessparams}. Thus, the mass splitting can be simply accounted for by subtracting half the mass splitting from the binding energy in the degenerate (mass) case. The expression is approximate and holds only for small mass splitting and small force-carrier mass, and we expect the regime of validity to be given roughly by:\begin{equation}
\frac{E}{m_{\chi}\alpha^2}\gg \max\left(\min\left[\epsilon_{\phi}^2/\ln(\epsilon_\phi/\epsilon_\delta^2)^2,\epsilon_\phi^2\right],\ \epsilon_{\delta}^4\right).
\end{equation}
More generally, if there are $N$ two-body states involved, and the mass splittings between the states are captured in the Hamiltonian for these states as a $N\times N$ constant matrix $\Delta H$, a bound state characterized by a (unit normalized) $N$-vector $\eta$ in the space of two-body states experiences an offset in its binding energy of $\eta^\dagger \Delta H \eta$.
In the case of wino DM, $L+S$-even bound states are an admixture of $\ket{\chi^0 \chi^0}$ and $\ket{\chi^+ \chi^-}$ two-body states, with coefficients $1/\sqrt{3}$ and $\sqrt{2/3}$ respectively. Since the $\chi^+ \chi^-$ state is heavier by a mass splitting $2\delta$, the binding energies for these states are offset by $2 \delta \times 2/3 = (4/3)\delta$. Combining this estimate with an approximate solution for bound-state energies in the Hulth\'en potential, we obtain an analytic estimate for the energies of $L+S$-even wino bound states:
\begin{equation}
E_n=m_{\chi}\alpha_{W}^2\left(\frac{1}{n}-\frac{n\ m_{W}\pi^2}{12\alpha_W m_{\chi}}\right)^2-\frac{4}{3}\delta
\end{equation} where $n$ is the principal quantum number of the bound state, $m_W$ is the mass of the $W^{\pm}$ boson, $\alpha_W$ is the electroweak coupling constant, and $\delta$ is the chargino-neutralino mass splitting between $\chi^0$ and $\chi^-$. Our estimate is in excellent agreement with previous numerical calculations \cite{Asadi:2016ybp}.
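As an illustration, this estimate can be evaluated numerically; the parameter values below ($\alpha_W \simeq 1/30$, $m_\chi = 3$ TeV, $\delta = 0.17$ GeV) are assumptions chosen for the example, not values fixed by the text:

```python
import math

def wino_bound_state_energy(n, m_chi, alpha_W=1/30, m_W=80.4, delta=0.17):
    """Estimate of the L+S-even wino bound-state energy E_n (all quantities in GeV).
    Valid only while the bracket is positive, i.e. n^2 < 12 alpha_W m_chi / (pi^2 m_W)."""
    bracket = 1.0 / n - n * m_W * math.pi**2 / (12.0 * alpha_W * m_chi)
    return m_chi * alpha_W**2 * bracket**2 - 4.0 / 3.0 * delta

E1 = wino_bound_state_energy(1, m_chi=3000.0)
print(E1)  # positive -> an n = 1 bound state exists in this estimate
```

For these assumed numbers only the $n = 1$ state survives the $(4/3)\delta$ offset; the mass splitting simply shifts every level down by the same constant.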
This analytic expression for the binding energies enables us to estimate the locations of resonances in the scattering cross section due to near-zero energy bound states. For our model of pseudo-Dirac DM, we find that the resonances in the $\left(\epsilon_{\phi},\ \epsilon_{\delta}\right)$ plane obey the linear relationship \begin{equation}\label{eq:resshift}
\epsilon_\phi = \frac{1}{n^2 q} \left(1 - \sqrt{2} n \epsilon_\delta \right)
\end{equation}where the numeric factor $q$ is approximately $\pi^2/6$ for the $s$ states, and is roughly of the same order for the higher $l$ states. This extends the observation of linear resonance shifts induced by a mass splitting, for the case of $l=0$, made in Ref.~\cite{slatyer2010sommerfeld}.
We also analyzed the effect of the mass splitting on the cross section for radiative capture into bound states in our model of pseudo-Dirac DM. We find that in this case, the effects of the mass splitting on the capture rate are essentially identical to its effects on the Sommerfeld enhancement (for the same partial wave), up to phase space factors that depend on the mass of the emitted particle and the energy of the bound state being formed. Consequently, numerical calculations and analytic estimates for the Sommerfeld enhancement can be equally applied to estimate the bound-state formation rate. Furthermore, it is not feasible to significantly enhance bound-state formation relative to Sommerfeld-enhanced annihilation by turning on a mass splitting, unless different partial waves dominate the Sommerfeld enhancement and the bound-state formation rate.
The features inherited from the Sommerfeld enhancement include:
\begin{itemize}
\item The resonances are enhanced relative to the zero mass splitting case, and undergo a shift at low velocities prescribed by Eq.~\ref{eq:resshift}.
\item As a consequence of the shift in resonances, the bound-state formation rate at small velocities can develop a non-monotonic velocity dependence, even if it was monotonic with velocity for zero mass splitting.
\item There is an ``adiabatic'' regime at non-zero mass splitting, where under appropriate values of the force carrier mass and relative velocity (as dictated by Eq.~\ref{eq:adiabatic_criterion}) the cross section is doubled relative to the corresponding zero-mass-splitting case. This is a generic feature of such multi-state systems, although the size of the enhancement varies (for example, it is a factor of three for wino DM).
\end{itemize}
\acknowledgments We thank Pouya Asadi, Rakhi Mahbubani, Gregory Ridgway and Chih-Liang Wu for helpful comments and discussions. This work was supported by the Office of High Energy Physics of the U.S. Department of Energy under grant Contract Numbers DE-SC0012567 and DE-SC0013999. TRS is partially supported by a John N. Bahcall Fellowship. TRS thanks the Galileo Galilei Institute for Theoretical Physics and the Kavli Institute for Theoretical Physics for hospitality during the completion of this work, and acknowledges partial support from the INFN, the Simons Foundation (341344, LA), and the National Science Foundation under Grant No. NSF PHY-1748958.
\bibliographystyle{unsrt}
HZ~Her/Her~X-1 is an intermediate-mass X-ray binary consisting of a $1.8-2.0\, M_{\odot}$ evolved sub-giant star and a $1.0-1.5\ M_{\odot}$ neutron star observed as an X-ray pulsar \citep{1972ApJ...174L.143T}. The binary orbital period is $P_b=1.7$ days, and the X-ray pulsar spin period is $P_x=1.24$ seconds.
The optical star HZ Her fills its Roche lobe \citep{1972ApJ...178L..65C} to form an accretion disk around the neutron star.
Before X-ray observations, HZ Her had been classified as an irregular variable.
Due to the X-ray irradiation, the optical flux from HZ Her is strongly modulated with the orbital period, as was first found by inspecting archive photo plates \citep{1972IBVS..720....1C}. Using the phase connection technique, the timing analysis of the \textit{RXTE} and \textit{INTEGRAL} observations of Her X-1 enabled the orbital ephemeris of Her X-1 to be updated, the secular decay of the orbital period to be improved, and the orbital eccentricity to be measured \citep{2009A&A...500..883S}.
The X-ray light curve of Her X-1 is additionally modulated with an approximately 35-day period \citep{1973ApJ...184..227G}. Most of the 35-day cycles last 20.0, 20.5 or 21.0 orbital periods \citep{1983A&A...117..215S,1998MNRAS.300..992S,Klochkov2006}. The cycle consists of a 7-orbit \say{main-on} state and a 5-orbit \say{short-on} state of lower intensity, separated by 4-orbit intervals during which the X-ray flux vanishes completely.
Since the first \textit{Uhuru} observations \citep{1973ApJ...184..227G}, the nearly regular 35-day X-ray light curve behaviour of Her X-1 has attracted much attention.
It is now recognized that the 35-day superorbital cycle of Her X-1 can be explained by the retrograde orbital precession of the accretion disk \citep{1976ApJ...209..562G,1999ApL&C..38..165S}. The 35-d cycle X-ray turn-ons most frequently occur at the orbital phases $\sim 0.2$ or $\sim 0.7$, owing to the tidal nutation of the outer parts of the disk with double orbital frequency when the viewing angle of the outer parts of the disk changes most rapidly \citep{1973NPhS..246...87K,1982ApJ...262..294L,1978pans.proc.....G}. The disk retrograde precession results in consecutive opening and screening of the central X-ray source \citep{1978pans.proc.....G}.
The X-ray light curve is asymmetric between the \say{on} states due to the scattering of X-ray radiation in a hot rarefied corona above the disk. Indeed, the X-ray \say{turn-on} at the beginning of the \say{main-on} state is accompanied by a significant decrease in the soft X-ray flux because of strong absorption. No essential spectral change during the X-ray flux decrease is observed, suggesting the photon scattering on free electrons of the hot corona close to the disk inner edge \citep{1977ApJ...214..879B,1977MNRAS.178P...1D,1980MNRAS.192..311P,2005A&A...443..753K}.
Soon after the discovery of the X-ray pulsar, the neutron star free precession was suggested as a possible explanation of the observed 35-day modulation \citep{1972Natur.239..325B, 1973AZh....50..459N}. Later on, the EXOSAT observations of the evolution of the X-ray pulse profiles of Her X-1 with the 35-day cycle phase were also interpreted in terms of the neutron star free precession \citep{1986ApJ...300L..63T}. \cite{1995pns..book...55S} studied the influence of the free precession of the neutron star on its rotational period. This was further investigated by \cite{2009A&A...494.1025S} and \cite{2013MNRAS.435.1147P} using an extensive set of \textit{RXTE} data. \cite{2013A&A...550A.110S}, however, showed that the possibly existing two \say{35-d clocks}, i.e. the precession of the accretion disk and the precession of the neutron star, are extremely well synchronized -- they show exactly the same irregularity. This requires a strong physical coupling mechanism which could, for example, be provided by the gas-dynamical coupling between the variable X-ray illuminated atmosphere of HZ Her and gas streams forming the outer part of the accretion disk \citep{1999A&A...348..917S,2013A&A...550A.110S}.
The X-ray pulse profiles are observed to vary strongly with the 35-day phase \citep{1986ApJ...300L..63T,1998ApJ...502..802D,2000ApJ...539..392S,2013yCat..35500110S}, differing significantly between the main-on and the short-on states.
Such changes are difficult to explain by the precessing disk only. As was shown by \citet{2013MNRAS.435.1147P}, the X-ray \textit{RXTE}/PCA pulse evolution with the 35-day phase could be explained by the neutron star free precession with a complex magnetic field structure on the neutron star surface. In this model, in addition to the canonical poles (a dipole magnetic field), arc-like magnetic regions around the magnetic poles are included, which is a consequence of a likely non-dipole surface magnetic field of the neutron star \citep{1991SvAL...17..339S,1994A&A...286..497P}.
Multiyear X-ray observations show that there were long (up to 1.5-year) turn-offs of the X-ray source, during which the X-ray irradiation effect was nevertheless present \citep{1985Natur.313..119P, 1994ApJ...436L...9V, 2000ApJ...543..351C, 2004ATel..307....1B, 2004ApJ...606L.135S}. This is probably due to a decrease of the angle between the disk and the orbital plane: the X-ray source remains obscured by the disk while this angle is close to zero.
Photographic plate data show that there were also periods with no X-ray irradiation effect at all \citep{1973ApJ...182L.109J, 1976BAICz..27..325H}, which means that accretion vanished during those periods.
Let us now return to the 35-day modulation of the X-ray flux. In the present paper, we have analyzed extensive optical photometric observations of HZ Her collected from the literature and obtained by the authors. We have found that the model of a precessing tilted accretion disk around a freely precessing neutron star with a complex surface magnetic field is able to explain the detailed photometric light curves of HZ Her constructed from all available observations.
\section{Free precession of neutron star in Her X-1}
Free precession occurs when a non-spherical solid body rotates around an axis misaligned with its principal inertia axes.
Consider the two-axial precession of a neutron star rotating with the angular frequency $\omega$, see Fig. \ref{f:NS_free_precession}. If the moments of inertia are $I_1 = I_2 \neq I_3$ and the relative difference between the moments of inertia is small, $(I_1-I_3)/I_1\ll 1$, the angular velocity of the free precession reads
\begin{equation}
\Omega_p = \omega\,\frac{I_1 - I_3}{I_1} \cos \gamma
\label{e:Omegap}
\end{equation}
where $\gamma$ is the angle between the $I_3$ inertia axis and the total angular momentum vector which in this case ($\Omega_p \ll \omega$) almost coincides with the instantaneous spin axis $\omega$.
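Identifying the free-precession period with the 35-day cycle gives a feeling for the required oblateness; the numbers below (a 35-day precession period and $\gamma = 50\degree$, as in the schematic of Fig. \ref{f:NS_free_precession}) are illustrative assumptions:

```python
import math

P_spin = 1.24             # neutron star spin period, s
P_prec = 35.0 * 86400.0   # assumed free-precession period (35 d), s
gamma = math.radians(50)  # assumed angle between I_3 and the spin axis

# From Omega_p = omega (I_1 - I_3)/I_1 cos(gamma):
oblateness = P_spin / (P_prec * math.cos(gamma))
print(oblateness)  # of order 1e-7: a tiny asymmetry of the moments of inertia suffices
```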
From the analysis of the X-ray pulses, \citet{2013MNRAS.435.1147P} recovered the map of emitting regions on the neutron star surface and the angle between the spin axis and the inertia axis of the neutron star. The emitting regions include the north and south magnetic poles surrounded by horseshoe-like arcs. The geometry of these regions is due to a complicated non-dipole magnetic field near the neutron star surface. In this model, the emitting arcs enclose the inertia axis, and therefore the storage of accreted matter can produce asymmetry in the principal moments of inertia. In this case, the sign of the precession frequency in equation \ref{e:Omegap} is positive, i.e. the direction of the free precession motion coincides with that of the neutron star rotation. In the general case, the neutron star can perform a more complicated three-axial free precession \citep{1998A&A...331L..37S}.
The equality of the periods of the neutron star free precession and the disk precession is likely not coincidental. During the neutron star free precession, the irradiation of the donor star surface strongly changes. The stellar atmosphere heating determines the initial velocity and direction of gas streams flowing through the vicinity of the inner Lagrangian point. In the general case, the gas streams flow off the orbital plane to form the outer parts of a tilted accretion disk. The dynamical action of the streams affects the disk precession, and therefore in such a system the disk precession can occur synchronously with the neutron star free precession.
\section{Magnetic forces}
\label{magnetic_forces}
The location of the inner edge of the disk is defined by the break of the disk flow near the magnetospheric boundary at a distance of about 100 neutron star radii ($\sim 10^8$ cm). The magnetic field induces a torque on the inner parts of the disk.
In the model of interaction of a diamagnetic thin accretion disk with a magnetic dipole \citep{1976PAZh....2..343L,1981AZh....58..765L,1987anz..book.....L}, the magnetic torque averaged over the neutron star spin period reads:
\begin{equation}
\label{e:Km}
\mathbf{K_m} = \frac{4 \mu^2}{3 \pi R^3_d} \cos \alpha \, (3 \cos^2 \beta - 1) \, [\mathbf{n}_{\omega}, \mathbf{n}_{d}].
\end{equation}
Here, $\mu$ is the magnetic moment of the neutron star, $R_d$ is the inner radius of the disk, $\alpha$ is the angle between the neutron star rotational axis and the inner disk axis, and $\beta$ is the angle between the neutron star spin axis and the magnetic dipole. $\mathbf{n}_{\omega}$ is the unit vector along the neutron star spin, and $\mathbf{n}_{d}$ is the unit vector along the normal to the inner disk.
The magnetic torque $K_m$ vanishes if $\alpha = 0\degree$, $\alpha = 90\degree$ or $\beta = \beta_0 = \arccos{(\sqrt{3}/3)} \approx 54.7\degree$. If $\alpha \neq 0\degree$, $\alpha \neq 90\degree$ and $\beta \neq \beta_0$, the magnetic torque is non-zero. The sign of the magnetic torque changes when $\beta$ crosses the critical angle $\beta_0$.
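The $\beta$-dependent factor $3\cos^2\beta - 1$ indeed vanishes at $\beta_0 = \arccos(\sqrt{3}/3)$ and changes sign there; a minimal check (illustrative sketch only; the $\alpha = 0\degree$ case is killed separately by the vanishing cross product $[\mathbf{n}_{\omega}, \mathbf{n}_{d}]$):

```python
import math

beta0 = math.acos(math.sqrt(3.0) / 3.0)  # critical angle, ~54.7 degrees

def torque_angular_factor(alpha, beta):
    # Angular part of the magnetic torque magnitude: cos(alpha) * (3 cos^2(beta) - 1)
    return math.cos(alpha) * (3.0 * math.cos(beta)**2 - 1.0)

alpha = math.radians(30)
print(math.degrees(beta0))                        # ~54.7
print(torque_angular_factor(alpha, beta0))        # ~0 at the critical angle
print(torque_angular_factor(alpha, beta0 - 0.1))  # > 0 below beta0
print(torque_angular_factor(alpha, beta0 + 0.1))  # < 0 above beta0
```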
In our model, the angle $\beta$ changes because of the neutron star free precession, see Fig. \ref{f:NS_free_precession}, and the angle $\alpha$ changes because of the disk precession. As a result, the function $K_m(\alpha, \beta)$ must be quite complicated, see Fig. \ref{f:theta_Z}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{NS_free_precession.pdf}
\caption{Schematic of the free precession of a neutron star. The angle between the inertia axis $I_3$ and the neutron star spin axis is $50\degree$. The angle between the radius-vector to the north magnetic pole and the $I_3$ axis is $30\degree$. In the course of free precession, the north magnetic pole draws a circle around the $I_3$ axis on the neutron star surface (solid line), and the angle $\beta$ crosses the critical angle $\beta_{0}$ twice. The trail of the magnetic dipole axis on the surface of the neutron star is shown by the smallest circle (gray line). The label ``eq.'' denotes the equator of the neutron star.}
\label{f:NS_free_precession}
\end{center}
\end{figure}
A non-zero magnetic torque forces the inner edge of the disk warp with respect to the outer edge. When the magnetic torque vanishes, null warp is expected (the disk becomes flat), which should affect the optical light curve of HZ Her. Below we construct a geometrical model that takes into account the effect of variable irradiation of HZ Her caused by the warped precessing accretion disk with an account of the
periodically variable X-ray beam from the freely precessing neutron star.
\section{\textit{B} and \textit{V} optical observations}\label{s:BV}
To construct optical light curves of HZ Her, the following $B$ and $V$ photometrical observations were used. The 1972 -- 1998 data were compiled from \cite{1973ApJ...181L..39P}, \cite{1972ApJ...177L..97D}, \cite{1973ApJ...186..617B}, \cite{Lyutyj73_PZ}, \cite{1974ApJ...190..365G}, \cite{Lyutyj73_AZH}, \cite{Cherepashhuk_etal74}, \cite{1990PAZh...16..625V}, \cite{1989PAZh...15..806L}, \cite{1978SvAL....4..191K}, \cite{1980PAZh....6..717K}, \cite{1988PAZh...14..438K}, \cite{Kilyachkov94}, \cite{1980A&A....90...54K}, \cite{Gladyshev81}, \cite{1986AZh....63..113M}, \cite{Goransky_Karitskaya86}. In total, the 1972 -- 1998 data include $5771$ individual observations in $B$ and $5333$ in $V$ bands. The 2010 -- 2018 observations were carried out by the authors and include $14034$ $B$ and $8661$ $V$ individual measurements.
To construct orbital light curves of HZ Her at different phases of the 35-day cycle, we have used the orbital ephemeris of the binary system from \cite{2009A&A...500..883S}. The 35-day phases were calculated using X-ray turn-ons of Her X-1 as measured by the \textit{Uhuru}, \textit{Swift}, \textit{RXTE}, \textit{BATSE} (Burst And Transient Experiment) and \textit{INTEGRAL} X-ray observatories, kindly provided by R\"{u}diger Staubert. The individual measurements occurring during 35-day cycles with unknown turn-ons have been ignored. The resulting orbital $B$, $V$ light curves in 20 35-day intervals can be found on the GitHub Repository\footnote{ \url{https://github.com/eliseys/data}} and are shown in Fig. \ref{f:phases} by dots.
\section{The model}
The numerical approach for calculating the optical light curves of HZ Her is similar to that used by \citet{1971ApJ...166..605W}, \citet{1983MNRAS.202..347H}. Following this method, the surface of the optical star has been split into small areas. Fluxes from the areas visible to the observer sum up to produce the synthetic light curve. It is assumed that the areas have a blackbody spectrum with a temperature defined by the local surface gravity and the X-ray irradiation from the neutron star. Besides, our model implements the complex X-ray shadow formed by the warped disk and the non-isotropic X-ray intensity pattern of the neutron star. The C and Python codes of the model are available on the GitHub Repository\footnote{\url{https://github.com/eliseys/discostar}}.
\subsection{Geometry of the donor star}
We assume that the optical star is bounded by an equipotential surface corresponding to the Roche potential. In the Cartesian coordinates $xyz$ with the origin at the center of mass of the optical star which is rotating synchronously with the binary system, the Roche potential is \citep{1959cbs..book.....K}:
\begin{equation}
\label{e:roshe_potential}
\psi = \frac{G m_1}{r_1} + \frac{G m_2}{r_2} + \frac{\omega^2}{2} \left[ \left( x - \frac{m_2 a}{m_1+m_2} \right)^2 + y^2 \right] \, ,
\end{equation}
where the $xy$ axes lie in the orbital plane, the $x$ axis is directed towards the center of the secondary star, $r_1 = \sqrt{x^2+y^2+z^2}$ and $r_2 = \sqrt{(a - x)^2+y^2+z^2}$ are the distances from the centers of mass of the first and the second star to a given point, respectively, and $a$ is the distance between the centers of mass of the stars, see Fig.~\ref{f:irradiation}.
In the dimensionless units ($a = 1$), the Roche potential reads (see, e.g., \cite{Cherep2013}):
\begin{multline}
\Omega = \frac{\psi}{G m_1} - \frac{m_2^2}{2 m_1 (m_1 + m_2)} = \frac{1}{r} + q \left( \frac{1}{\sqrt{1-2x+r^2}} - x \right) + \\
+ \frac{1}{2}(1+q)(x^2+y^2) \, ,
\end{multline}
where $r = \sqrt{x^2 + y^2 + z^2}$ is the distance from the center of mass of the optical star $m_1$, $q = m_2/m_1$ is the compact to optical component mass ratio.
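As a small numerical sketch (not the paper's actual code), the dimensionless potential can be implemented directly; here we also locate the inner Lagrangian point $L_1$ on the $x$ axis, where $d\Omega/dx = 0$, by bisection (the mass ratio $q = 0.7$ is an illustrative value):

```python
def roche_potential(x, y, z, q):
    """Dimensionless Roche potential Omega(x, y, z) for mass ratio q = m2/m1 (a = 1)."""
    r = (x * x + y * y + z * z) ** 0.5
    r2 = ((1.0 - x) ** 2 + y * y + z * z) ** 0.5
    return 1.0 / r + q * (1.0 / r2 - x) + 0.5 * (1.0 + q) * (x * x + y * y)

def l1_point(q, tol=1e-10):
    # On the x axis between the stars dOmega/dx changes sign exactly once, at L1.
    def dodx(x, h=1e-7):
        return (roche_potential(x + h, 0, 0, q) - roche_potential(x - h, 0, 0, q)) / (2 * h)
    a, b = 0.01, 0.99
    while b - a > tol:
        m = 0.5 * (a + b)
        if dodx(a) * dodx(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

print(l1_point(1.0))  # equal masses: L1 at the midpoint of the line of centers
print(l1_point(0.7))  # lighter secondary: L1 shifts towards it (x > 0.5)
```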
The value of $\Omega$ is defined through the Roche lobe fill fraction:
\begin{equation}
\label{mu}
\mu = \frac{R_0^\star}{R_0} \, ,
\end{equation}
where $R_0$ is the polar radius of the Roche lobe and $R_0^{\star}$ is the polar radius of the star. The two parameters $q$ and $\mu$ completely define the shape of the donor star.
The unit normal vector to a point on the surface of the star is defined by the Roche potential gradient:
\begin{equation}
\label{n}
\mathbf{n} = \frac{\nabla \Omega}{|\nabla \Omega|} \, .
\end{equation}
The gravity acceleration vector is:
\begin{equation}
\label{g}
\mathbf{g} = - \nabla \Omega \, .
\end{equation}
The vector of a surface element in the spherical coordinate system with the origin in the star's barycenter is:
\begin{equation}
\label{dS}
d\mathbf{S} = \frac{\mathbf{n}\,dS}{\mathbf{n}\cdot\mathbf{r}}
\end{equation}
where $dS = r^2\,d\varphi\,d\theta\cos\theta$ is the surface area element; the factor $1/(\mathbf{n}\cdot\mathbf{r})$ is introduced because the surface of the star is not perpendicular to the vector $\mathbf{r}$, see Fig.~\ref{f:irradiation}.
The projection of the surface element vector onto the sky plane is:
\begin{equation}
\label{dS_projection}
\mathbf{n}_o \cdot d\mathbf{S} \, ,
\end{equation}
where $\mathbf{n}_o$ is the unity vector pointed to the observer.
\subsection{Surface temperature of the donor star}
\label{surface_temperature}
We assume that the donor star surface at any point emits a blackbody spectrum. The radiation flux from the blackbody surface element $dS$ is defined by its temperature $T$:
\begin{equation}
dF = \sigma_B T^4 dS\,,
\end{equation}
where $\sigma_B$ is the Stefan-Boltzmann constant. Due to the temperature variation over the stellar surface, different parts of the star contribute differently to the total flux. The gravitational darkening and X-ray irradiation are taken into account in the calculation of the surface temperature.
The gravitational darkening depends on the surface gravity $g$:
\begin{equation}
T = T_0 \left( \frac{g}{g_0} \right)^{\beta} \, ,
\end{equation}
where $T_0$ and $g_0$ are the temperature and gravity acceleration at the donor star pole, respectively. The coefficient $\beta$ is set to $0.08$ \citep{1924MNRAS..84..665V}. The polar temperature $T_0$ is a free parameter, and $g$ is calculated by differentiating equation \ref{e:roshe_potential}.
\begin{figure}
\includegraphics[width=\columnwidth]{irradiation.pdf}
\caption{Schematic of the donor star illuminated by the central X-ray source (NS).}
\label{f:irradiation}
\end{figure}
In the presence of X-ray irradiation, the surface element $dS$ is illuminated by the external radiation flux $A dF_x$, where $A$ is the fraction of the thermalised X-ray flux. Thus, the total radiation flux from the surface element reads:
\begin{equation}
\sigma_B T_{irr}^4 dS = dF + A dF_x \, ,
\end{equation}
where $T_{irr}$ is the effective temperature of the element with an account of the X-ray irradiation.
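Per unit surface area this relation gives the irradiated temperature directly; a minimal sketch (the numerical values are illustrative, not fitted parameters of the model):

```python
SIGMA_B = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def irradiated_temperature(T, A, flux_x):
    """Effective temperature [K] of an element with intrinsic temperature T [K]
    that thermalises a fraction A of the incident X-ray flux flux_x [W m^-2]."""
    return (T**4 + A * flux_x / SIGMA_B) ** 0.25

print(irradiated_temperature(8000.0, 0.5, 0.0))  # no irradiation: unchanged
print(irradiated_temperature(8000.0, 0.5, 5e8))  # strong irradiation raises T
```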
\subsection{Geometry of the accretion disk}
Here we introduce a formalism which enables us to calculate the X-ray shadow produced by a warped, inclined disk with a finite width of the outer edge.
The accretion disk is modelled by $N$ circular rings and an external cylindrical belt centered on the neutron star. The orientation of each individual ring $i$ is determined by the unit normal vector $\mathbf{d}_{i}$, $i = \{1,2\dots\,N\}$. The external cylindrical belt is described by the radius $R$ and height $H$.
We assume that the disk is mostly warped near its inner edge because of the interaction with the neutron star magnetosphere. The radii of the circles representing such a disk are ordered as $r_1 \gg r_{2} > r_{3} > \, \ldots \, > r_N$, where $r_1$ is the radius of the outermost ring and the external belt, $r_N$ is the radius of the innermost ring.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{disk_Z.pdf}
\caption{Model of the disk. The disk is modelled by $N$ circular rings. The central part is shown not to scale. The radius of the outer ring is much larger than the radii of the rest of the rings: $r_1 \gg r_2 > r_3 \dots r_N$. The outer and inner rings have the nodal lines $A_{out}$ and $A_{in}$, respectively. The angle between $A_{out}$ and $A_{in}$ is the twist angle $Z$.}
\label{f:disk_Z}
\end{center}
\end{figure}
In our model, we assume that the coordinates $\{\theta_i, \varphi_i\}$ of the vector $\mathbf{d}_{i}$ change linearly with the index $i$:
\begin{equation}
\theta_i = \theta_{1} + (i-1) \, \frac{\theta_{N} - \theta_{1}}{N-1}\,,
\end{equation}
\begin{equation}
\varphi_i = \varphi_{1} + (i-1) \, \frac{\varphi_{N} - \varphi_{1}}{N-1}\,,
\end{equation}
so that the boundary orientations are reproduced at $i=1$ and $i=N$. This enables us to specify not the orientation of each ring but only those of the innermost $\mathbf{d}_{N} \equiv \mathbf{d}_{in}$ and the outermost $\mathbf{d}_{1} \equiv \mathbf{d}_{out}$ rings.
To describe the disk twist, we introduce the twist angle $Z \equiv \varphi_{out} - \varphi_{in}$ between the nodal lines of the outermost and innermost rings, see Fig. \ref{f:disk_Z}.
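The linear interpolation of the ring orientations and the twist angle can be sketched as follows. This is an illustrative Python helper, not the implementation used in the paper; the function names and the use of radians are our choices.

```python
import numpy as np

def ring_orientations(theta_out, phi_out, theta_in, phi_in, N):
    """Spherical angles (theta_i, phi_i) of the ring normals d_i,
    interpolated linearly from the outermost ring (i = 1) to the
    innermost ring (i = N).  Angles are in radians."""
    i = np.arange(1, N + 1)
    theta = theta_out + (i - 1) * (theta_in - theta_out) / (N - 1)
    phi = phi_out + (i - 1) * (phi_in - phi_out) / (N - 1)
    return theta, phi

def twist_angle(phi_out, phi_in):
    """Twist angle Z between the nodal lines of the outermost and
    innermost rings."""
    return phi_out - phi_in
```

Only the two boundary orientations need to be supplied; all intermediate rings follow from the interpolation.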
The X-ray flux from the neutron star passing between the $i$-th and the $(i+1)$-th rings is assumed to be blocked. The cylindrical belt also screens the X-ray radiation from the central neutron star within the solid angle $2\pi H/R$.
\subsection{Transit and eclipse of the accretion disk}
During the orbital motion, the accretion disk and the X-ray source are eclipsed by the donor star near orbital phase $0$. Conversely, the disk and the X-ray source transit in front of the donor star near orbital phase $0.5$. This gives rise to the main and secondary minima of the light curves at orbital phases $0$ and $0.5$, respectively.
In the present study, we have not modelled the primary minima of the orbital light curves and have excluded the orbital phases 0.0--0.13 and 0.87--1.0 from calculations of the synthetic orbital light curves.
To model the secondary minimum of the orbital light curves we have used the ray-marching technique.
The contribution from the disk $F_d$ to the observed light curve is defined by the dimensionless parameters $F_B$ and $F_V$:
\begin{equation}
F_{B,V} = \left( \frac{F_d}{F_0} \right)_{B,V} \,.
\end{equation}
Here $F_0$ is the flux from the optical star at the orbital phase $0$. The parameters $F_B$ and $F_V$ are assumed to be constant during the orbital period but can vary with the 35-day phase.
\subsection{X-ray emission from the neutron star}
To calculate the temperature distribution over the surface of the donor star illuminated by X-rays from the neutron star, we have used the non-isotropic X-ray source intensity adopted from the model constructed by \cite{2013MNRAS.435.1147P}. The X-ray intensity pattern was derived from the analysis of the evolution of the X-ray pulse of Her~X-1 over the 35-day period. In this model, the neutron star undergoes free precession with a period close to 35 days, see Fig.~\ref{f:NS_map}. According to the model, the X-ray emission leaves the neutron star perpendicular to its surface in \say{narrow pencil beams}, so that gravitational light bending can be neglected. The X-ray diagram illuminating the optical star HZ~Her is obtained by averaging the emission from the neutron star surface over the fast neutron star spin period of $1.24\,\mathrm{s}$. The averaged intensity is modulated by the neutron star free precession.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{xray_diagram.pdf}
\caption{X-ray intensity from the neutron star surface in Her X-1 as a function of spherical coordinates at the free precession phases $\Psi=0.0$, $0.25$, $0.5$ and $0.75$ \citep{2013MNRAS.435.1147P}. The latitude of the neutron star's north pole is $90\degree$. The longitude $180\degree$ corresponds to the meridian passing through the poles and the magnetic dipole at the free precession phase $\Psi=0$. This figure was produced by summing the intensities of all emitting regions on the surface of the neutron star. The white ring is the same as the solid ring in Fig.~\ref{f:NS_free_precession}.}
\label{f:NS_map}
\end{center}
\end{figure}
Following \cite{2013MNRAS.435.1147P}, we set the angle $\theta_{ns}$ between the rotation axis of the neutron star and the sky plane to $-3\degree$. The minus sign means that the northern hemisphere is directed away from the observer. The second angle, $\kappa_{ns}$, the position angle of the rotation axis of the neutron star, is a free parameter, see Fig. \ref{f:kappa_theta}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{NS_kappa_theta.pdf}
\caption{
Orientation of the neutron star angular momentum $\mathbf{J}$ relative to the observer. The origin of the coordinate system is at the neutron star center. The axes $x$ and $z$ lie in the sky plane. The axis $x$ is pointed to the observer. The axis $z$ goes along the projection of the binary orbital momentum vector to the sky plane. The angle $\theta_{ns}$ is the angle between $\mathbf{J}$ and the sky plane. The angle $\kappa_{ns}$ is the angle between the projection of $\mathbf{J}$ on the sky plane and the $z$ axis.
}
\label{f:kappa_theta}
\end{center}
\end{figure}
In the case of non-isotropic X-ray emission from the neutron star, the irradiation flux impinging on a surface element of the optical star is
\begin{equation}
\label{dF_non_iso}
dF_{\mathrm{x}} = I_{\mathrm{x}}(\mathbf{r}_2) \, \frac{d\mathbf{S} \cdot \mathbf{r}_2}{r_2^3} \, ,
\end{equation}
where $I_{\mathrm{x}}(\mathbf{r}_2)=dL_x(\mathbf{r}_2)/d\Omega$ is the X-ray intensity in the direction $\mathbf{r}_2$ (see Fig. \ref{f:irradiation}).
\section{Modelling}
\label{modelling}
We assume that the retrograde disk precession phase $\Phi$ changes linearly with time. The phase angle of the outer disk edge is defined as a function of the 35-day phase interval number $n \in \{0,1,2\dots 19\}$:
\begin{equation}
\label{e:Phi}
\Phi = - \, n \frac{2\pi}{N} + \Phi_0\,,
\end{equation}
where $N = 20$ is the number of discrete phases of the 35-day cycle. The phase angle $\Phi_0$ is the initial disk precession angle, at which the disk is most open to the observer. The value of $\Phi_0$ is treated as a free parameter.
The neutron star free precession is assumed to be a prograde linear function of $n$ with the initial phase angle $\Psi_0$:
\begin{equation}
\Psi = n \frac{2\pi}{N} - \Psi_0\,.
\end{equation}
$\Psi_0$ is the phase angle at which the north magnetic pole of the neutron star passes most closely to the neutron star equator. Following \cite{2013MNRAS.435.1147P} we set $\Psi_0=2\pi/20$.
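The two linear phase laws can be sketched together in Python. This is an illustrative snippet (not code from the paper); the function name is ours, and the default initial phases are the values adopted in Table \ref{tab:parameters1}.

```python
import numpy as np

def precession_phases(n, N=20, Phi0=2 * np.pi / 5, Psi0=2 * np.pi / 20):
    """Disk precession angle Phi (retrograde) and neutron-star free
    precession angle Psi (prograde) for the 35-day phase interval n,
    following the linear laws of the text."""
    Phi = -n * 2 * np.pi / N + Phi0
    Psi = n * 2 * np.pi / N - Psi0
    return Phi, Psi
```

The opposite signs of the $n$-dependent terms encode the retrograde disk precession versus the prograde free precession of the neutron star.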
\subsection{Free parameters of the model}
The model parameters are summarized in Tables \ref{tab:parameters1} and \ref{tab:parameters2}, which list the parameters held fixed and those varying during the 35-day cycle, respectively. The fixed model parameters have been taken from the literature, and the varying parameters have been found by best-fitting the model optical light curves within the limits shown in the second column of Table \ref{tab:parameters2}.
\begin{table*}
\caption{The model parameters fixed during the $35^d$ cycle}
\label{tab:parameters1}
\begin{tabular}{l|l|l|l}
\hline
\hline
Parameter & Symbol & Value & Ref.\\
\hline
Semi-major axis & $a$ & $6.502 \times 10^{11}$ cm & \cite{2014ApJ...793...79L} \\
Mass ratio, $M_x/M_v$ & $q$ & $0.6448$ & \cite{2014ApJ...793...79L}\\
Roche lobe filling factor & $\mu$ & $1.0$ & assumed\\
Gravity darkening coefficient & $\beta$ & $0.08$ & assumed\\
X-ray reprocessing factor & $A$ & $0.5$ & assumed \\
Disk radius & $R/a$ & $0.24$ & assumed\\
Outer disk thickness & $H/R$ & $0.15$ & assumed \\
NS orientation angle & $\kappa_{ns}$ & $10\degree$ & assumed\\
NS orientation angle & $\theta_{ns}$ & $-3\degree$ & \cite{2013MNRAS.435.1147P} \\
NS initial phase angle & $\Psi_0$ & $2\pi/20$ & \cite{2013MNRAS.435.1147P}\\
Star's polar temperature & $T_0$ & $7794.0$ K & \cite{2014ApJ...793...79L}\\
Binary inclination & $i$ & $88.93\degree$ & calculated to reproduce correct main-on/short-on beginning\\
Disk max opening phase angle & $\Phi_0$ & $2\pi/5$& assumed \\
\hline
\hline
\end{tabular}
\end{table*}
\begin{table}
\caption{The model parameters changing with the $35^d$ cycle phase}
\label{tab:parameters2}
\begin{tabular}{l|l|l}
\hline
\hline
Parameter & Symbol & Limits \\
\hline
NS X-ray luminosity & $L_x$ & $0.1\,...\,10 \times 10^{37}$ erg/s\\
Disk outer edge tilt & $\theta_{out}$ & $0\,...\,40\degree$\\
Disk inner edge tilt & $\theta_{in}$ & $0\,...\,40\degree$\\
Disk normalized $B$-flux & $F_B$ & $0\,...\,4$\\
Disk normalized $V$-flux & $F_V$ & $0\,...\,4$\\
Disk phase angle & $\Phi$ & $-20\degree \dots 20\degree$ \\
&& dev. from linear law\\
Disk twist $\varphi_{out}-\varphi_{in}$ & $Z$ & $-90\,...\,+90\degree$\\
&& with $10\degree$ step\\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Results of the modelling}
The results of the best fits of the model parameters to the observed $B$ and $V$ light curves constructed for 20 intervals of the 35-day cycle are presented in Figures \ref{f:theta_Z}, \ref{f:F}, \ref{f:Lx}, \ref{f:epsilon} and \ref{f:phi}. The observed light curves themselves, constructed according to the procedure described above in Section \ref{s:BV}, are presented in Fig. \ref{f:phases}.
\subsubsection{Disk tilt angles}
In Fig. \ref{f:theta_Z} we show the disk tilt angles to the orbital plane (the inner and outer disk tilts $\theta_{in}, \theta_{out}$) and the disk twist angle $Z$ as functions of the 35-day cycle phase number $n$ for the three best-fit models. The model light curves were calculated on a uniform grid of $Z$ from $-90\degree$ to $+90\degree$ with a $10\degree$ step.
Three model light curves with $Z$ producing the minimal reduced $\chi^2$ values are shown in
Fig. \ref{f:phases} by solid lines.
The $\chi^2$ is calculated as
\begin{equation}
\chi^2 = \frac{1}{N - N_{var}} \sum_{i=1}^{N} (y_i - f(x_i))^2 \,
\end{equation}
where $N$ is the number of observed points in each 35-day phase interval (typically several hundred), $N_{var}=6$ is the number of fitting parameters for each twist angle $Z$ (see Table \ref{tab:parameters2}), $y_i$ and $x_i$ are the observed flux and orbital phase, respectively, and $f(x)$ is the synthetic light curve.
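As a sketch, the statistic can be computed as follows. The function name is ours, and, following the formula as written, no per-point error weighting is applied.

```python
import numpy as np

def reduced_chi2(y_obs, y_model, n_var=6):
    """Reduced chi-square as defined in the text: the sum of squared
    residuals divided by (N - N_var)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    return np.sum((y_obs - y_model) ** 2) / (y_obs.size - n_var)
```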
The filled circles in Fig.~\ref{f:theta_Z} are gray-coded by their reduced $\chi^2$ values (the scale at the bottom of the Figure) and correspond to the three best-fit light curves from Fig. \ref{f:phases}. The phase interval marked with $n=0$ corresponds to the beginning of the main-on state. The light gray vertical strips mark the main-on and short-on states of Her X-1.
Fig. \ref{f:theta_Z} suggests that the outer disk tilt $\theta_{out}$ (the upper panel of the Figure) stays at about $15\degree$ during the main-on and tends to lower values of about $10\degree$ during the short-on. The inner disk tilt $\theta_{in}$ (the middle panel) varies between $\sim 15\degree$ and $\sim 5\degree$. The $Z$-angle describing the disk twist (the lower panel of the Figure) changes strongly between $\sim -90\degree$ and $\sim +90\degree$. In our model, zero twist angle corresponds to a null magnetic force moment $K_m$ (the right scale of the bottom panel), for which the accretion disk is the least warped. The solid line in the bottom panel shows the expected magnetic torque acting on the inner edge of the disk (see equation \ref{e:Km}) with the adopted fixed parameters of the neutron star shown in Table \ref{tab:parameters1}.
Therefore, the changes of the parameters of the warped twisted accretion disk with the 35-day phase shown in Fig. \ref{f:theta_Z} are in qualitative agreement with our physical model.
\subsubsection{Disk fluxes}
Fig. \ref{f:F} shows the proper disk fluxes $F_B$ (the upper panel) and $F_V$ (the lower panel) in units of the optical $B$, $V$ fluxes from the star at orbital phase 0 (the primary orbital minimum). The filled gray-scaled circles show the same models as in Fig. \ref{f:theta_Z}. The disk flux is seen to be maximal during the main-on state, and there is evidence for a second maximum at 35-day phases 0.65--0.70, at the beginning of the short-on. The first maximum is higher because of the additional (on top of the purely geometrical view) irradiation of the outer parts of the disk by the central X-ray source. Such behaviour of the disk flux is expected in our model because the zero phase of the neutron star free precession is close to the beginning of the main-on.
\subsubsection{X-ray luminosity}
In Fig.~\ref{f:Lx} we plot the total X-ray luminosity of the neutron star as a function of the 35-day phase. The meaning of the filled circles is the same as in the two previous Figures. The spread in the X-ray luminosity found from the best fits to the optical light curves can be due to the construction of the observed light curves from different 35-day cycles. The total X-ray luminosity clearly grows from the main-on to the short-on state. Physically, this may be connected with the storage of matter in the disk during the main-on, when the optical star is most strongly illuminated by the X-ray source and the gas stream from the optical star through the Lagrange point $L_1$ is most powerful. The time delay between the main-on and short-on states roughly corresponds to the viscous time of the accretion disk \citep{Klochkov2006}.
\subsubsection{Outer and inner disk viewing angles}
In Fig. \ref{f:epsilon} we show the angles between the line of sight and the outer ($\epsilon_{out}$) and inner ($\epsilon_{in}$) disk planes (the upper and bottom panels, respectively). In the upper panel, the light hatched strip marks the range of $\epsilon_{out}$ inside which the X-ray source is screened by the outer disk (the low states of Her X-1). It is seen that $\epsilon_{out}$ behaves with the 35-day phase in a way enabling the main-on state. As for the short-on, several points appear inside the screened area, which would mean that no X-ray radiation should be visible to the observer. This apparent disagreement with the obtained best-fit $\epsilon_{out} < -H/R \sim -8\degree$ can be related to the different 35-day cycles used to construct the optical light curves and to the likely variability of the disk thickness with the 35-day phase. In the bottom panel, the angle $\epsilon_{in}$ vanishes by the end of the main-on, which is indeed expected because X-ray spectroscopic observations \citep{2005A&A...443..753K} suggest that the hot inner parts of the accretion disk screen the X-ray source at the main-on termination.
\subsubsection{Outer disk retrograde precession}
Fig. \ref{f:phi} shows the phase angle of the outer disk with the 35-day phase (the upper panel). Clearly, this plot confirms our assumption of an almost even rate of the outer disk retrograde precession. Deviations in the disk phase angle $\Delta \Phi$ from the linear law $\Phi=\Omega t$ are within the narrow range $\pm 20\degree$ (the bottom panel), which may be due either to physical variability or to the large collection of different 35-day cycles used to construct the optical light curves.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{theta_Z.pdf}
\caption{
Disk tilt angles to the orbital plane $\theta_{in}, \theta_{out}$ and the disk twist angle $Z$ as functions of the 35-day cycle phase number $n$ (the upper, middle and bottom panels, respectively). The solid line in the bottom panel shows the expected magnetic torque $K_m$ acting on the inner disk (in dimensionless units, right axis) from the freely precessing neutron star with parameters from Table \ref{tab:parameters1}. The gray vertical strips mark the main-on and short-on states of Her X-1. The filled circles, gray-coded by the reduced $\chi^2$ values (the scale at the bottom), correspond to the three best-fit model light curves presented in Fig. \ref{f:phases}.
}
\label{f:theta_Z}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{F_20_072.pdf}
\caption{The proper optical B,V-fluxes from the disk for the models described in Fig. \ref{f:theta_Z} normalized to the optical star flux at the primary eclipse.}
\label{f:F}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Lx_20_072}
\caption{Total X-ray luminosity of the neutron star for the models described in Fig. \ref{f:theta_Z}.}
\label{f:Lx}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{epsilon_20_0072.png}
\caption{The angles between the line of sight and the outer and inner disk edges ($\epsilon_{out}$ and $\epsilon_{in}$; the upper and bottom panels, respectively) for the models from Fig. \ref{f:theta_Z}. The hatched area in the upper panel indicates the region where the X-ray radiation is blocked by the outer edge of the disk.
}
\label{f:epsilon}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{phases20BV.pdf}
\caption{Orbital $B$, $V$ light curves constructed in 20 phase intervals of the 35-day cycle. $B$ points are shifted by 2 units for clarity. The fluxes are normalized to the primary minimum. The 35-day intervals are marked with numbers $n$ according to equation \ref{e:Phi}. The solid curves show the best-fit models calculated for the grid of the twist angle $Z$ as shown in the bottom panel in Fig. \ref{f:theta_Z}.}
\label{f:phases}
\end{center}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{phi.pdf}
\caption{The phase angle of the outer disk edge $\Phi$ as a function of the 35-day phase interval $n$ (the upper panel) and its dispersion $\Delta \Phi$ relative to the linear law (the bottom panel) for the same models as in Fig. \ref{f:theta_Z}.}
\label{f:phi}
\end{figure}
\section{Discussion}
The results presented in the present paper support the physical model of the 35-day cycle of HZ~Her/Her~X-1 based on the free precession of the neutron star and retrograde precession of the surrounding twisted warped accretion disk
\citep{1990ESASP.311..235P, 2000astro.ph.10035K, 2009A&A...494.1025S, 2013MNRAS.435.1147P}. In this model, the synchronization between the neutron star free precession period and the precession period of the accretion disc is mediated by variable gas stream from the Lagrange point $L_1$. The properties of the stream are modulated with the neutron star free precession period because of variable X-ray illumination of the optical star atmosphere.
In the present study we have made several assumptions, which we now discuss.
First, we have fixed the binary parameters of HZ Her/Her X-1, the neutron star parameters (viewing angle, location of emitting regions on the surface around magnetic poles, X-ray emission diagram, etc.), accretion disk radius and outer disk thickness, which were adopted from previous works (see Table \ref{tab:parameters1}). Of them, the neutron star parameters could mostly affect the synthesized light curves because they determine the X-ray illumination of the optical star atmosphere.
Second, we have ignored the detailed temperature distribution across the accretion disk, which may be quite complicated. Therefore, we have not modelled orbital phases 0.0--0.13 and 0.87--1.0 during which eclipsing effects by the optical star are important.
The binary inclination $i$ is known to be close to $90\degree$. In our modelling, we computed it from the following conditions: 1) the outer disk precession is uniform; 2) the outer disk thickness is constant; 3) the X-ray source is opened by the outer disk edge both in the main-on and short-on states; 4) the maximum disk opening occurs at the precession phase angle $\Phi_0 = 2\pi/5$. Clearly, each of these conditions is approximate, but changing $i$ by $1\degree$ does not alter the qualitative conclusions of our modelling.
We also discuss a possible synchronisation mechanism between the neutron star free precession and the accretion disc precession.
The observed periodic change of the X-ray pulse shape favours the neutron star free precession. In this model, the neutron star free precession period is very close to that of the accretion disk precession, suggesting a synchronization mechanism between the neutron star and accretion disk motion.
The inner part of the accretion disk is warped by the magnetic torque of the neutron star magnetosphere. The torque changes during the neutron star free precession period which affects the character and degree of the curvature of the inner disk parts.
Dynamic action of accretion streams interacting with the disk tends to increase the disk precession period. Without the dynamic action of accretion streams, the disk precession period would be determined by the tidal torques only and would be shorter than observed. We suggest that the neutron star -- disk synchronization is possible if the periods of the neutron star free precession and accretion disk are close to each other.
However, the neutron star crust is subject to cracking because of variable torques. A crust crack can abruptly change the neutron star free precession period, see equation \ref{e:Omegap}. If a crack is large enough, the synchronization can break, decreasing the anisotropy of the disk shadow on the optical star atmosphere and hence the outer accretion disk tilt to the orbital plane. In this case, an anomalous low state of Her X-1, in which the X-ray emission is fully blocked by the accretion disk, can occur. However, during the anomalous low state the inner part of the accretion disk, subjected to the magnetic torque from the neutron star magnetosphere, remains warped, blocking the X-ray emission from the neutron star in some directions, and the disk shadow on the optical star surface still remains asymmetrical relative to the orbital plane. Therefore, the emerging gas pressure gradients near the inner Lagrangian point will deflect the accretion stream from the orbital plane. Ultimately, such deflected accretion streams may tilt the outer parts of the disk again, enabling the opening of the X-ray source to the observer during the disk precessional motion.
According to our modelling, the magnetic dipole axis far from the surface of the neutron star (at the distance $\sim 100\,R_{ns}$) should be misaligned with the direction to the north magnetic pole on the neutron star surface by about $10\degree$--$15\degree$ (see Fig.~\ref{f:NS_free_precession}). It may be due to a complex structure of the magnetic field near the neutron star surface, as suggested by the modelling of X-ray pulse profiles \citep{2013MNRAS.435.1147P}.
In our model, the neutron star X-ray luminosity changes by a factor of $\sim 3$, with the maximum at the short-on phase. During the short-on phase, the north magnetic pole of the freely precessing neutron star is closest to the neutron star spin axis. It means that at these phases most of the X-ray radiation is directed away from the optical star and the observer. This is why the increase in $L_x$ at these phases affects neither the optical light curve shape nor the observable X-ray flux.
Our model suggests that during the main-on state, when the X-ray irradiation of the optical star is maximal, additional matter accumulates in the disk. This excess matter accretes onto the neutron star on the viscous time scale of the disk, which should be of the order of many orbital periods.
\section{Conclusions}
In this paper, we calculated orbital light curves of HZ~Her/Her~X-1 for 20 phases of 35-day superorbital cycle. The main features of the model include a tilted, warped and precessing accretion disk and a freely precessing neutron star. The precessing accretion disk produces a complex varying shadow on the atmosphere of the optical star shaping the optical light curve.
The freely precessing neutron star serves as the clock mechanism providing the long-term stability of the 35-day cycle.
We find that the model with a warped precessing tilted disk can adequately reproduce the photometric behaviour of the observed light curves under a physically motivated choice of the model parameters.
The precession phase angle of the disk changes linearly with the 35-day phase, with the maximum opening of the X-ray source to the observer during the main-on state of Her X-1 at phase $\simeq 0.2$. Geometrical parameters of the disk vary with the 35-day cycle. Only the inner regions of the disk are warped, due to the magnetic torque from the neutron star magnetosphere. The warp magnitude and sign are in agreement with the predicted behaviour of the magnetic torque as a function of the angle between the magnetic dipole axis and the neutron star spin axis, which changes periodically with the neutron star free precession phase.
The model parameters depending on the 35-day phase change smoothly. The spread of some parameters is due to the use of observations taken over a long period of time covering a large number of 35-day cycles.
The high X-ray luminosity obtained for the source during the short-on phase is likely due to the accumulation of matter in the outer disk during the main-on and the later accretion of this matter on the viscous time scale of the disk. During the short-on state, the north magnetic pole of the neutron star has the largest viewing angle and is directed away from the optical star. Therefore, no significant changes in the optical light curve amplitude or X-ray flux are observed.
We discuss the possible synchronization mechanism between the neutron star free precession and the disk precession.
It can be related to the dynamical interaction of gas streams from the inner Lagrangian point with the outer parts of the disk. The streams can flow out of the orbital plane because of the uneven X-ray illumination of the optical star atmosphere produced by the changing shadow of the tilted twisted accretion disk. In this model, the anomalous low states of Her X-1 could be a consequence of crust glitches of the freely precessing neutron star. Such a glitch abruptly changes the neutron star precession period and hence breaks the neutron star -- accretion disk synchronization. This results in a more even illumination of the optical star atmosphere and a decrease of the disk tilt. In the absence of the neutron star free precession, the recovery of the disk tilt and the reappearance of the X-ray source would be impossible.
\section*{Acknowledgements}
The research is supported by the RFBR grant No.\,18-502-12025 (carrying out of the observations and data processing), the DFG grant No. 259364563 (processing of the X-ray data) and program of Leading Science School MSU (Physics of Stars, Relativistic Compact Objects and Galaxies). N. Shakura acknowledges a partial support by the Russian Government Program of Competitive Growth of Kazan Federal University. I. Bikmaev and E. Irtuganov thank TUBITAK, IKI, KFU, and AST for partial support in using RTT150 (the Russian-Turkish 1.5-m telescope in Antalya). The work of I. Bikmaev and E. Irtuganov was partially funded by the subsidy 671-2020-0052 allocated to Kazan Federal University for the state assignment in the sphere of scientific activities. The observations of I. Volkov were fulfilled with the 1-m reflector of the Simeiz observatory of INASAN. The work of I. Volkov was partially supported by a scholarship of the Slovak Academic Information Agency SAIA. The work of S. Yu. Shugarov was supported by the Slovak Academy of Sciences grant VEGA No. 2/0008/17 and by the Slovak Research and Development Agency under the contract No. APVV-15-0458.
\section*{Data Availability}
The data and code underlying this article are available in the GitHub Repository at \url{https://github.com/eliseys/data} and \url{https://github.com/eliseys/discostar} respectively.
\bibliographystyle{mnras}
\section{Introduction}
Quantum Field Theories (QFT) are celebrated for being the framework of the Standard Model and for making predictions which coincide with experimental data to extreme accuracy. Yet QFT is perceived as difficult to learn by many and this seems to be at least partially due to the lack of visualization options which would help to develop an intuition for the theory. In quantum mechanics (or `first quantization'), for example, a localized particle is associated with a $\delta$-function and a uniformly moving particle with a plane wave, both of which are relatively easy to depict in a graph. Many textbooks show the energy eigenstates of hydrogen atoms, harmonic oscillators or quantum wells as plotted wave functions. A survey of visualization methods in quantum mechanics can be found in Thaller's books \cite{Thaller}$^,$\cite{Thaller2}. In QFT, on the other hand, visualizations of the fundamental concepts seem to be scarce. One prominent exception are the Feynman diagrams, which are an important tool in studying processes of interacting particles. Yet these diagrams are firmly connected to the specific computational method of perturbation theory and they are not meant as a graphical representation of the quantum field itself. Another approach is to compute and plot statistical quantities, like the particle density or the two point correlation function of a many body system. But a lot of information is lost when moving from the full description of its state to these aggregated quantities. Therefore they tend to be more useful to understand macroscopic properties of the system rather than the underlying details.
The difficulties in visualizing a quantum field obviously stem from the fact that such a system has too many degrees of freedom: Even if the field is simplified to a discrete lattice with $N$ atoms oscillating in only one space dimension each (like, e.g., in the `quantized mattress' model in Zee's book\cite{Zee}), the quantum state is given by a wave function $\Psi: \mathbb{R}^N \rightarrow \mathbb{C}$, which cannot be plotted in a straight-forward way for any $N > 2$.
Such lattices of coupled quantum oscillators are also studied in several other areas of physics and especially in solid state physics they are applied to model phonons in a crystal.
Johnson and Gutierrez\cite{JohnsonGutierrez} visualize phonon states of a one-dimensional quantum lattice by projecting the probability density of the system onto each of the one-dimensional spaces in which the atoms oscillate.
In the present article we propose a new way of visualizing the state of a bosonic quantum field and apply it to the example of the harmonic chain, which is essentially a discretized, one-dimensional model for a boson field. The target audience for this material consists mainly of graduate students who have completed courses on quantum mechanics and who are now about to advance towards quantum field theory. We assume that the reader is familiar in particular with the treatment of the quantum harmonic oscillator and with base changes in Hilbert space via Fourier transformation.
The article is organized as follows: In Section \ref{SecMonteCarlo} we present a Monte Carlo method for visualizing quantum wave functions in (first quantization) quantum mechanics. Then we introduce a quantum harmonic chain in Section \ref{SecLinHarChain} and we compute its dynamics by diagonalizing its Hamiltonian via a Fourier transformation. In Section \ref{ConnectingDots} we explain how the harmonic chain can be interpreted in the pictures of solid state physics (where each atom in the chain is a particle) and in quantum field theory (where a particle is an excitation of the chain). Section \ref{SecVisualizationOfChain} contains our main result - a new visualization method for the state of a quantum harmonic chain and the application to several interesting special cases. We conclude with a discussion of our findings and an outlook to future research in Section \ref{SecDiscussion}.
\section{Monte Carlo plot of wave functions} \label{SecMonteCarlo}
De Aquino et al. have proposed the following method to visualize the wave function $\Psi: \mathbb{R}^2 \rightarrow \mathbb{C}$ of a single quantum particle in two dimensions\cite{DeAquinoEtAl}: Associate $\mathbb{R}^2$ with the two orthogonal axes of a scatter plot and then draw many dots into this plane, with the location of each dot being randomly chosen according to the probability density function $|\Psi|^2$. The resulting charts show a high density of dots where $|\Psi|^2$ is relatively large and only very few dots where $\Psi$ is close to zero.
In our visualization method we follow a similar Monte Carlo approach, but we impose the following rules:
\begin{enumerate}
\item Choose the locations of the dots randomly according to a uniform probability distribution in a rectangular window within $\mathbb{R}^2$.
\item Represent the value of $\Psi$ at each dot via its color. For energy eigenstates, which can always be written as real-valued functions, such a visualization is possible on a black and white scale. For complex-valued functions a color spectrum can be used.
\end{enumerate}
To start with a simple example from ordinary quantum mechanics, consider a two-dimensional harmonic quantum oscillator with the Hamiltonian
\begin{equation}
\hat H = \frac{\hat{p_1}^2}{2m}+\frac{\hat{p_2}^2}{2m} + \frac{1}{2} \kappa (\hat{q_1}^2 + \hat{q_2}^2),
\end{equation}
where $q_l$ with $l \in \{1;2\}$ are the position coordinates of the particle with mass $m$, $\hat{p_l} = -i \partial/\partial q_l$ are the momentum operators, and $\kappa$ controls the strength of the attractive potential. Here and in the rest of the article we use units of measure such that $\hbar = 1$. It is well known\cite{Liboff} that the Hamiltonian is separable and the (non-normalized) energy eigenstates of the system can be written as
\begin{equation}
\Psi_{\nu_1,\nu_2} = \prod_{l=1,2}{H_{\nu_l}(\sqrt{m\omega}\,q_l)e^{-m\omega q_l^2/2}},
\end{equation}
where $\omega^2 = \kappa/m$, $H_\nu$ is the $\nu$-th order Hermite polynomial and $\nu_{1,2} \in \mathbb{N}_0$ are the two quantum numbers which enumerate the energy eigenstates of the system.
As an example, Fig. \ref{2DHarmonic} shows a Monte Carlo plot of the energy state $\Psi_{2,1}$ created according to the two rules above. \BW{Black (white)}\color{Blue (red)} dots represent negative (positive) values of $\Psi_{2,1}$ and dots where $\Psi_{2,1} \approx 0$ blend in with the chart's \BW{grey}\color{white} background\BW{, i.e., the brighter a dot is, the larger is the value of $\Psi_{2,1}$ in the respective point of $\mathbb{R}^2$}. We have omitted a quantitative color scale since the normalization of the wave function is not important. It is easy to recognize the general shape of $\Psi_{2,1}$ and its nodal lines.
Obviously such a Monte Carlo plot is only of limited relevance in two dimensions, since we could have simply colored the whole chart area according to the value of $\Psi_{2,1}$ instead of just picking a few points, or we could have drawn the wave function in a 3D plot. We will see later that such a Monte Carlo visualization actually becomes quite useful when moving beyond two dimensions.
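For readers who want to experiment, the two rules above can be sketched in a few lines of Python. The window $[-3,3]^2$, the sample size, and the choice of units $m = \omega = 1$ are illustrative; the plotting itself (one colored dot per sample) is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(0)
m = omega = 1.0  # illustrative choice of units (hbar = 1)

def psi_21(q1, q2):
    """Non-normalized eigenstate Psi_{2,1} of the 2D harmonic oscillator."""
    s = np.sqrt(m * omega)
    H2 = 4 * (s * q1) ** 2 - 2      # physicists' Hermite polynomial H_2
    H1 = 2 * (s * q2)               # physicists' Hermite polynomial H_1
    return H2 * H1 * np.exp(-m * omega * (q1 ** 2 + q2 ** 2) / 2)

# Rule 1: uniformly random dots in a rectangular window
q1, q2 = rng.uniform(-3, 3, size=(2, 5000))
psi = psi_21(q1, q2)

# Rule 2: color each dot by the value of Psi, e.g. with matplotlib:
#   plt.scatter(q1, q2, c=psi, cmap="bwr", s=4)
print(psi.min() < 0 < psi.max())  # both signs occur: True
```

The nodal lines of $\Psi_{2,1}$ show up as the zero sets of the two Hermite factors: $q_2 = 0$ and $q_1 = \pm 1/\sqrt{2}$ (in these units).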
\begin{figure}[h!]
\centering
\BW{\includegraphics[width=180mm]{2DHarmonic-BW.eps}}
\color{\includegraphics[width=180mm]{2DHarmonic-COL.eps}}
\caption{Monte Carlo representation of the two-dimensional harmonic oscillator energy eigenstate $\Psi_{2,1}$. \BW{Bright (dark)}\color{Red (blue)} dots correspond to positive (negative) values of the wave function. Horizontal and vertical nodal lines of the function can be recognized as areas without any visible dots. As expected, the state $\Psi_{2,1}$ has two nodal lines intersecting with the $q_1$ axis and one intersecting with the $q_2$ axis.}
\label{2DHarmonic}
\end{figure}
\section{Quantized Linear Harmonic Chain} \label{SecLinHarChain}
Consider a linear chain of $N$ coupled harmonic oscillators whose positions are defined by the coordinates $q_n$ with $n = 1,...,N$, respectively. In classical mechanics this system's Hamiltonian is
\begin{equation}\label{EqClassicalH}
H = \sum_{n=1}^N \left(\frac{p_n^2}{2m} + \frac{1}{2} \kappa q_n^2 + \frac{1}{2} \gamma (q_n - q_{n+1})^2\right),
\end{equation}
where $p_n = m \dot q_n$. $\kappa > 0$ determines the strength of binding between the oscillating masses and their respective neutral positions, while $\gamma > 0$ controls the coupling between neighbours. For reasons of simplicity we impose periodic boundary conditions, i.e. $q_0$ is the same as $q_N$ and $q_{N+1}$ means just $q_1$. This system is the one-dimensional version of the `mattress' that Zee\cite{Zee} uses to introduce quantum field theory, or it can be seen as a simple model for phonons traveling through a crystal lattice.
We shall now follow the standard procedure: We will decompose the system's dynamics into normal modes which can be excited and evolve in time independently of one another. Mathematically speaking, this means to diagonalize the Hamiltonian $H$ via a Fourier transformation, revealing the system's equivalence to $N$ decoupled harmonic oscillators. One option is to expand $\vec q$ in real-valued sine and cosine modes, like in the article of Johnson and Gutierrez\cite{JohnsonGutierrez}. Alternatively, a complex-valued decomposition into modes of $e^{ikn}$ can be applied like in the book of Greiner\cite{Greiner} on p.18. We shall also follow the latter approach and we will exhibit the technical steps in detail in order to make this article as self-contained as possible. We will deviate slightly from Greiner's path since we also need to find (among other things) the explicit coordinate transformation between the harmonic chain and the decoupled oscillators (eq. \ref{EqQtildeq} below).
We write the Hamiltonian (\ref{EqClassicalH}) as
\begin{equation}
H = \frac{|\vec{p}|^2}{2m} + \frac{1}{2} {\vec{q}}^T D \vec{q},
\end{equation}
where $D$ is an $N \times N$ matrix taking care of the terms with $\kappa$ and $\gamma$ in (\ref{EqClassicalH}):
\begin{equation}
D =
\begin{pmatrix}
\kappa+2\gamma & -\gamma & 0 & 0 & \dots & 0 & -\gamma\\
-\gamma & \kappa+2\gamma & -\gamma & 0 & \dots &0 & 0\\
0 & -\gamma & \kappa+2\gamma & -\gamma & \dots &0 & 0\\
\dots & \dots & \dots & \dots & \dots &\dots & \dots
\end{pmatrix}
\end{equation}
The matrix $D$ is a circulant, which means that each row vector is rotated one element to the right compared to the preceding row vector. In this sense $D$ is generated from the $N$-vector $(\kappa + 2\gamma, -\gamma, 0, 0, ..., 0 , -\gamma)$ which forms the top line of the matrix. Now remember that any circulant matrix can be diagonalized via a discrete Fourier transform\cite{Davis} which, in turn, means to expand the position coordinates $q_n$ with respect to the orthonormal basis $\{\vec{f^{(k)}}\}$ with
\begin{equation} \label{EqONB}
f^{(k)}_n = \frac{1}{\sqrt N} e^{-\frac{2\pi i}{N}k n} \quad \textrm{for} \quad k = -\frac{(N-1)}{2}, \dots, \frac{(N-1)}{2}.
\end{equation}
The vectors $\vec{f^{(k)}}$ are also known as the `normal modes' of the system. We have made the assumption that $N$ is an odd number in order to save us a cumbersome treatment of special cases. Since $N$ is considered to be a large number in all practically relevant cases, this is no real restriction.
We call the position coordinates in the new basis $Q_k$ and they are related to the old ones by
\begin{equation}\label{EqFourierDecomp}
q_n = \sum_k Q_k f^{(k)}_n = \frac{1}{\sqrt N} \sum_k Q_k e^{-\frac{2\pi i}{N}k n}.
\end{equation}
Here and in the rest of the article a sum over $k$ always means that $k$ runs from $-(N-1)/2$ to $+(N-1)/2$ unless explicitly stated otherwise. Note that the $Q_k$ are complex numbers. In order to ensure that the coordinates $q_n$ remain real-valued, we have to impose the constraint
\begin{equation} \label{EqConstraint}
Q_k = \overline{Q_{-k}}.
\end{equation}
An analogous transformation maps the momentum $\vec p$ to the momentum $\vec P$ with $P_k = m \dot Q_k$. In the new coordinates the Hamiltonian takes the form
\begin{equation} \label{EqHnew}
H = \sum_k \left(\frac{|P_k|^2}{2m} + \frac{1}{2} m \omega_k^2 |Q_k|^2 \right).
\end{equation}
Here we have used the fact that our transformation was essentially a discrete Fourier transformation and it diagonalized the circulant matrix $D$. This implies that the eigenvectors of $D$ are the base vectors $\vec{f^{(k)}}$. We have chosen to write the respective eigenvalues as $m \omega_k^2$, so that the $\omega_k$ will turn out to be the eigenfrequencies of the decoupled oscillators below. Since $D$ is real symmetric and positive definite (its eigenvalues are $\kappa + 2\gamma(1-\cos\frac{2\pi k}{N}) > 0$), the $\omega_k$ can be chosen real and positive. We have
\begin{equation}
m \omega_k^2 \vec{f^{(k)}} = D \vec{f^{(k)}} = D \overline{\vec{f^{(-k)}}} = \overline{D \vec{f^{(-k)}}} = \overline{m \omega_{-k}^2 \vec{f^{(-k)}}} = m \omega_{-k}^2 \vec{f^{(k)}}
\end{equation}
and thus $\omega_k=\omega_{-k}$. In order to get rid of the complex coordinates before doing a canonical quantization, we perform a second coordinate transformation, splitting up each $Q_k$ into two real-valued coordinates:
\begin{equation} \label{EqQtilde}
Q_k = \frac 12 (1+i) \tilde Q_k + \frac 12 (1-i) \tilde Q_{-k} \quad\textrm{with } \tilde Q_k \in \mathbb{R}.
\end{equation}
The constraint (\ref{EqConstraint}) is automatically observed then. By putting (\ref{EqQtilde}) into (\ref{EqFourierDecomp}) we get the transformation rule between the $q_n$ and the $\tilde Q_k$:
\begin{equation} \label{EqQtildeq}
q_n = \sum_k \tilde Q_k \frac{1}{2} \left((1+i) f^{(k)}_n+(1-i)f^{(-k)}_n\right) =: \sum_k \tilde Q_k \tilde f^{(k)}_n
\end{equation}
We note that the $\{\tilde f^{(k)}_n\}$, in which we have developed $\vec q$, form another orthonormal basis in $\mathbb{R}^N$. This can be verified by direct computation, using the fact that the $\{f^{(k)}_n\}$ are also orthonormal.
The system's Hamiltonian (\ref{EqHnew}) looks very similar in the new coordinates:
\begin{equation} \label{EqHnew2}
H = \sum_k \left(\frac{\tilde P_k^2}{2m} + \frac{1}{2} m \omega_k^2 \tilde Q_k^2\right),
\end{equation}
where $\tilde P_k = m \dot{\tilde Q}_k$. $H$ is now separated into $N$ one-dimensional harmonic oscillators with the eigenfrequencies $\omega_k$. Next, we perform a canonical quantization of $H$, promoting the position and momentum coordinates to Hilbert space operators. As a result we get a set of $N$ decoupled quantum oscillators. Solving the harmonic oscillator is one of the most common introductory problems in quantum mechanics and it is presented in many textbooks\cite{Liboff}. It can be achieved either by solving the oscillator's differential (Schr\"odinger) equation, or in an algebraic way based on commutation relations. Following the latter approach, we introduce the usual creation/annihilation operators $a^\dagger_k$ and $a_k$, so that the quantum version of (\ref{EqHnew2}) becomes
\begin{equation} \label{EqHnew3}
\hat H = \sum_k \omega_k \left(a^\dagger_k a_k + \frac 12 \right).
\end{equation}
The excitations of the harmonic chain created by the $a_k^\dagger$ are also called `particles' since this is how they often manifest themselves in experiments.
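The diagonalization just performed is easy to check numerically. The eigenvalues of the circulant matrix $D$ have the closed form $\kappa + 2\gamma\left(1-\cos\frac{2\pi k}{N}\right)$, and the real modes $\tilde f^{(k)}_n = \frac{1}{\sqrt N}\left(\cos\frac{2\pi k n}{N} + \sin\frac{2\pi k n}{N}\right)$ form an orthonormal basis that diagonalizes $D$. The following Python sketch (with arbitrary illustrative parameters) verifies both claims:

```python
import numpy as np

N, kappa, gamma = 15, 1.0, 0.5   # illustrative parameters, N odd as in the text

# circulant D generated from the row (kappa+2*gamma, -gamma, 0, ..., 0, -gamma)
I = np.eye(N)
D = (kappa + 2 * gamma) * I - gamma * (np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1))

ks = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
n = np.arange(N)

# closed-form eigenvalues of the circulant, and the real modes f~^(k)
eig_closed = kappa + 2 * gamma * (1 - np.cos(2 * np.pi * ks / N))
M = (np.cos(2 * np.pi * np.outer(n, ks) / N)
     + np.sin(2 * np.pi * np.outer(n, ks) / N)) / np.sqrt(N)

print(np.allclose(np.sort(np.linalg.eigvalsh(D)), np.sort(eig_closed)))  # True
print(np.allclose(M.T @ M, I))                        # modes are orthonormal: True
print(np.allclose(M.T @ D @ M, np.diag(eig_closed)))  # D is diagonal in this basis: True
```

The same check also confirms that the eigenvalues come in degenerate pairs for $\pm k$, as derived above.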
\section{Connecting the dots} \label{ConnectingDots}
By now we have established all the machinery needed to advance to the main ideas of this article, the Monte Carlo visualization of bosonic quantum fields. But let's first step back and review how our work so far is related to first and second quantization. This section is meant to remind ourselves of the `big picture' and provide some orientation to novices in the field. It can be omitted by the advanced reader.
We had started with one quantum particle which was free to move in two dimensions $q_1$ and $q_2$ subject to an harmonic potential. As known from introductory quantum mechanics courses, the particle's wave function (or, more precisely, the square of its absolute value) tells us the probability density of finding the particle at a given point in space when performing a measurement. We have visualized such a wave function for the specific eigenstate $\Psi_{2,1}$ in Fig. \ref{2DHarmonic}. The coordinates $q_1$ and $q_2$ are linked to location operators $\hat q_1$ and $\hat q_2$ which in turn describe a quantum measurement of the particle's position in the respective coordinate.
Then we have shown how to solve the dynamics of the harmonic oscillator chain, which can serve for at least two physical models:
On the one hand, we can think of each point in the chain as being an actual physical particle like the atoms in a crystal lattice. In this case $q_n$ is still the spatial coordinate of each mass point relative to its equilibrium position, just like in the two-dimensional case before, and $n$ simply enumerates the atoms of the chain. We are still in the `first quantization world' and if we were to measure the exact positions of the $N$ particles at the same time, the wave function of the system would tell us the probability of finding the chain in a certain configuration. The $\hat q_n$ remain simple location operators which describe a position measurement of the $n$th particle. For example, if the system's normalized state is $\Psi$ and we measure the position of the third particle, then the expectation value of the result is $\langle \Psi, \hat q_3 \Psi \rangle$. The normal modes of oscillation in the chain, which we have identified in Section \ref{SecLinHarChain}, are called phonons - an important concept in solid state physics.
On the other hand, the chain can serve as a model for a bosonic quantum field, which belongs to the realm of `second quantization'. In this case $n$ becomes the discrete version of a spatial coordinate and $q_n$ is the field strength at the point $n$. The wave function of the system would give us the probability to encounter the field in a given classical state if it were possible to measure the field strength at all points in space at the same time. The operators $\hat q_n$ are now called `field operators' and they correspond to a measurement of the field strength at the spatial point $n$. In quantum field theory one would typically proceed to the continuum limit, replacing $n$ by $\vec x \in \mathbb{R}^3$ and $\hat q_n$ by some $\hat\phi(\vec x)$, for example. So all we have is a field, but where are the particles - the bosons - which we wanted to describe? It turns out that what we call `particles' are simply excitation modes of the quantum field. Such excitations can be limited to a region in space and they can travel through the field, very similarly to how particles in first quantization can be more or less localized and move in space. But, in contrast to first quantization, excitations can also become stronger or weaker - which means that particles can be created or destroyed.
In the following section we shall see some examples of such excitations, how they can be visualized, and how they correspond to physical particles.
\section{Visualization of the Quantum Harmonic Chain} \label{SecVisualizationOfChain}
\subsection{Visualization Method}
We have noted in Section \ref{SecLinHarChain} that the Hamiltonian of the quantum harmonic chain of length $N$ is equivalent to the $N$-dimensional quantum harmonic oscillator (\ref{EqHnew2}) up to a coordinate transformation. The latter has the convenient property of being separable, so that its energy eigenstates are simply products of the familiar one-dimensional harmonic oscillator states:
\begin{equation}
\Psi_{\{\nu_k\}} = \prod_{k}{H_{\nu_k}(\sqrt{m\omega_k}\,\tilde Q_k)e^{-m\omega_k\tilde Q_k^2/2}},
\end{equation}
where $H_\nu$ is the $\nu$-th order Hermite polynomial and $\nu_k \in \mathbb{N}_0$ for $k = -(N-1)/2,\dots,(N-1)/2$ are the quantum numbers.
Any wave function $\Psi$ which is given in terms of the coordinates $\tilde Q_k$ can be transformed back to the original coordinates $q_n$ via (\ref{EqQtildeq}). Once $\Psi(\vec q)$ is known, we can apply the Monte Carlo visualization which we have introduced in Section \ref{SecMonteCarlo}. Compared to our simple example in Fig. \ref{2DHarmonic}, the main difference is that we now have $N$ coordinates instead of only two. Therefore we have to adapt our visualization prescription as follows:
\begin{enumerate}
\item Choose points $\vec q$ randomly according to a uniform probability distribution in a rectangular window within $\mathbb{R}^N$.
\item Each point $\vec q$ is visualized as a polyline in a parallel axes plot and the value of $\Psi(\vec q)$ is represented by the color of the polyline.
\end{enumerate}
The visualization in a parallel axes diagram has the advantage that each point $\vec q$ is represented by a polyline which can be intuitively associated with the corresponding state of a classical (i.e., non-quantum) oscillator chain.
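In Python, for example, this prescription can be sketched as follows for the (everywhere positive) ground-state wave function. The parameters are illustrative and the actual drawing of the colored polylines is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, kappa, gamma = 15, 1.0, 1.0, 0.5   # illustrative parameters

ks = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
n = np.arange(N)

# real normal modes f~^(k)_n and the corresponding oscillator frequencies
M = (np.cos(2 * np.pi * np.outer(n, ks) / N)
     + np.sin(2 * np.pi * np.outer(n, ks) / N)) / np.sqrt(N)
omega = np.sqrt((kappa + 2 * gamma * (1 - np.cos(2 * np.pi * ks / N))) / m)

def psi0(q):
    """Non-normalized ground-state wave function at the configuration q."""
    Qt = M.T @ q                          # normal coordinates of the polyline
    return np.exp(-np.sum(m * omega * Qt ** 2) / 2)

# Rule 1: random polylines (configurations) in a window of R^N
qs = rng.uniform(-2, 2, size=(400, N))
vals = np.array([psi0(q) for q in qs])

# Rule 2: draw each configuration as a polyline over n, colored by vals, e.g.
#   for q, v in zip(qs, vals): plt.plot(n, q, color=colormap(v))
print(vals.max() <= psi0(np.zeros(N)))   # q = 0 maximizes the ground state: True
```

Excited states only change the function evaluated per polyline; the sampling and coloring stay the same.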
In order to plot such visualizations we have developed a small program in the numerically oriented programming language Scilab\cite{Scilab}. The source code is available on request from the author of this article.
\subsection{Ground State}
As a first example of the results, Fig. \ref{GroundState} represents the ground state $\Psi_0 = \Psi_{\{\nu_k=0\,\forall k\}}$ of a quantum chain consisting of $N = 15$ oscillators. In the language of QFT, this state is called the `vacuum' since no excitations (=particles) are present. The wave function of the ground state is unique only up to multiplication with a complex number and in this case it has been chosen such that $\Psi_0$ is real and positive. Each line in the plot represents one possible configuration of the 15 oscillators' positions. The \BW{brightness}\color{color} of each line corresponds to the value of the wave function $\Psi_0$ for the respective configuration. The color scale is chosen such that the value $\Psi_0 = 0$ blends in with the \BW{grey}\color{white} background and the \BW{brighter}\color{more colorful} the line, the higher the value of $\Psi_0$. Each polyline is plotted on top of the lines with lower $|\Psi_0|$, so that the most important lines (i.e. those with high $|\Psi_0|$) are more clearly visible in the chart. Those lines represent the most probable configurations of the chain in the sense of a quantum mechanical measurement and they tend to be relatively close to the $\vec q = 0$ line, which is the classical equilibrium state of the system.
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{GroundState-BW-v2.eps}}
\color{\includegraphics{GroundState-COL-v2.eps}}
}
\caption{Monte Carlo representation of the ground state of a quantum harmonic chain. The wave function has been chosen real-valued and positive, thus all the lines are \BW{brighter than the background}\color{of the same color}. The \BW{brightest} lines \color{in deepest red }correspond to those $\vec q$ where the wave function assumes the highest values. Not surprisingly, in the ground state these are the lines close to $\vec q = 0$. Actually, a straight horizontal line corresponding exactly to $\vec q = 0$ would be drawn in the \BW{brightest}\color{deepest} color since it represents the global maximum of the ground state wave function. But due to the random nature of the Monte Carlo approach this line happens not to be drawn in the chart.}
\label{GroundState}
\end{figure}
\subsection{Particles at rest}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{OnePartRest-BW-v2.eps}}
\color{\includegraphics{OnePartRest-COL-v2.eps}}
}
\caption{Monte Carlo representation of the first excited state of the harmonic chain with wave number zero. In the most prominent configurations of the chain, i.e., those corresponding to \BW{very bright}\color{deeply red} or \BW{very dark}\color{deeply blue} lines, the $q_n$'s tend to be either jointly positive or all negative. Thus, if we could perform a quantum measurement of the exact position of the harmonic chain, we would most likely either find it entirely shifted towards the positive $q$-direction or entirely the other way. In the particle interpretation of a quantum field this state would be called `one particle at rest'.}
\label{OnePartRest}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{TwoPartRest-BW-v2.eps}}
\color{\includegraphics{TwoPartRest-COL-v2.eps}}
}
\caption{The second excited state of the harmonic chain with wave number zero is similar to the first one shown in Fig. \ref{OnePartRest}. But where the first excited state has only one `nodal line' at $\vec q = 0$ (or, much more precisely, one nodal hypersurface which contains the point $\vec q = 0$), we can now recognize two such `nodal lines' where the graph turns from being mostly \BW{bright}\color{red} to mostly \BW{dark}\color{blue} and back again. In the particle interpretation of a quantum field, this state would be called `two particles at rest'.}
\label{TwoPartRest}
\end{figure}
As further examples we consider excitations of the chain which can be interpreted as one or two particles at rest, respectively. The one-particle state $\Psi = a_0^\dagger \Psi_0$ is visualized in Fig. \ref{OnePartRest} and the two-particle state $\Psi = (a_0^\dagger)^2 \Psi_0$ in Fig. \ref{TwoPartRest}. As before, the color scale is chosen such that configurations with $\Psi = 0$ blend in invisibly with the \BW{grey}\color{white} background, and the \BW{brightest/darkest} lines \color{in deepest red/blue }correspond to the largest (positive) / smallest (negative) values of $\Psi$. Similar to the ground state, the configurations with the highest absolute value of $\Psi$ tend to be those with little fluctuation along the $n$-axis. But in the one-particle case we note that the polylines with mostly positive (negative) $q_n$ tend to correspond to positive (negative) values of $\Psi$. In the two-particle case the polylines close to the $\vec q = 0$ line tend to correspond to negative values of $\Psi$ while those farther away from that line are rather associated with positive values of $\Psi$. This pattern obviously resembles the positive and negative parts of the energy eigenfunctions of a one-dimensional harmonic oscillator. The classical analog to Fig. \ref{OnePartRest} and Fig. \ref{TwoPartRest} is a chain whose point masses oscillate synchronously, and a stronger excitation of the chain as a whole corresponds to a higher number of particles in the quantum field.
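The sign pattern described above can be made quantitative with a small Python check: for a single mode, $a^\dagger \varphi_0 \propto \tilde Q\, \varphi_0$ with a positive constant, and the $k=0$ real mode is constant, $\tilde f^{(0)}_n = 1/\sqrt N$. Hence the sign of the one-particle wave function $a_0^\dagger \Psi_0$ is simply the sign of the chain's mean displacement:

```python
import numpy as np

N = 15
# The k = 0 real mode is constant: f~^(0)_n = 1/sqrt(N).  Up to a positive
# constant one has (a_0^dag Psi_0)(q) = Qt_0 * Psi_0(q), and Psi_0 > 0
# everywhere, so the sign of the wave function is the sign of Qt_0 alone:
def sign_one_particle(q):
    Qt0 = q.sum() / np.sqrt(N)   # k = 0 normal coordinate = scaled mean shift
    return np.sign(Qt0)

print(sign_one_particle(np.full(N, +0.3)))   # chain shifted up:  1.0
print(sign_one_particle(np.full(N, -0.3)))   # chain shifted down: -1.0
```

This is exactly the behavior visible in Fig. \ref{OnePartRest}: uniformly raised chains carry one sign of $\Psi$, uniformly lowered chains the other.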
\subsection{Particle with Non-Zero Wave Number}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{OnePartKOne-BW-v2.eps}}
\color{\includegraphics{OnePartKOne-COL-v2.eps}}
}
\caption{The first excited state of the harmonic chain with wave number $k=1$ corresponds to a standing wave with two nodes in classical mechanics. It can be interpreted as one quantum particle in an energy eigenstate of the lowest non-zero kinetic energy.}
\label{OnePartKOne}
\end{figure}
Now we turn to excitations of the chain with non-zero wave number. For example, the state $\Psi = a_1^\dagger \Psi_0$ (which we can also write as $\Psi_{\{\nu_k\}}$ where $\nu_k$ is equal to $1$ for $k=1$ and zero else) is represented in Fig. \ref{OnePartKOne} for $N=15$. Its classical analog is a standing wave in the oscillator chain. The location of the nodes near $n = 6$ and $n = 13$ can be explained by transforming back to the $q_n$ coordinates: In our state $a_1^\dagger \Psi_0$ the oscillating system is excited along the $\tilde Q_1$ coordinate, since this is how we defined the creation operators in (\ref{EqHnew3}). Using (\ref{EqQtildeq}) we see that an oscillation along $\tilde Q_1$ transforms back to an oscillation along the vector $\vec q$ with
\begin{equation}
q_n = \frac{1}{2}\left((1+i) f_n^{(1)} + (1-i) f_n^{(-1)}\right)= \frac{1}{\sqrt{N}}\left(\cos\frac{2\pi n}{N} + \sin \frac{2\pi n}{N}\right),
\end{equation}
which is zero for $n = 3N/8 \approx 5.6$ or for $n = 7N/8 \approx 13.1$.
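This node condition is easy to confirm numerically; the following Python snippet (for $N=15$) finds the two lattice sites where $|q_n|$ is smallest:

```python
import numpy as np

N = 15
n = np.arange(1, N + 1)
qn = (np.cos(2 * np.pi * n / N) + np.sin(2 * np.pi * n / N)) / np.sqrt(N)

# cos + sin vanishes at the angles 3*pi/4 and 7*pi/4, i.e. at n = 3N/8 = 5.625
# and n = 7N/8 = 13.125; the lattice sites closest to the nodes should be 6 and 13
nodes = sorted(n[np.argsort(np.abs(qn))[:2]].tolist())
print(nodes)  # [6, 13]
```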
A graph of the state $a_{-1}^\dagger \Psi_0$ would look very similar to Fig. \ref{OnePartKOne}, except that the standing wave is shifted by an offset of $N/4$ along the $n$-axis compared to the state $a_1^\dagger \Psi_0$. In the particle language of excitations, $a_1^\dagger \Psi_0$ and $a_{-1}^\dagger \Psi_0$ are both one-particle states, as is clear from the fact that each is constructed by applying a single creation operator.
We can also construct the quantum analog to a progressive wave in the chain, for example the state $(a_1^\dagger + i a_{-1}^\dagger) \Psi_0$. But the resulting wave function would not be real-valued anymore. A complex-valued $\Psi$ can be visualized with our method by using different colors, but for the present article we limit ourselves to real-valued $\Psi$ functions which can be visualized \BW{in gray scales}\color{with two colors only}.
\subsection{Localized Particle}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{OnePartLocalized-BW-v6.eps}}
\color{\includegraphics{OnePartLocalized-COL-v6.eps}}
}
\caption{Monte Carlo plot of a local excitation corresponding to one localized particle at the position $n=5$. As opposed to the other states depicted so far, this state is not an energy eigenstate of the harmonic chain, which means that it is not essentially time-independent but would rather evolve into some spread-out oscillation pattern rapidly.}
\label{OnePartLocalized}
\end{figure}
Our next example is a state with one spatially localized particle. Recall that we had expressed $\vec q$ in the orthonormal basis $\{\tilde f^{(k)}_n\}$ in (\ref{EqQtildeq}) and note that the function $\delta_{n,n_0}$ (which is equal to one for $n=n_0$ and else zero) can be expanded in this basis as
\begin{equation} \label{EqDelta}
\delta_{n,n_0} = \sum_k \langle \tilde f^{(k)}_\cdot ,\delta_{\cdot,n_0} \rangle \tilde f^{(k)}_n = \sum_k \tilde f^{(k)}_{n_0} \tilde f^{(k)}_n.
\end{equation}
Remember that our operators $a^\dagger_k$ create a particle `in the state' $\tilde f^{(k)}_n$. Thus, in analogy to (\ref{EqDelta}) we construct new creation operators
\begin{equation} \label{EqCreatorsLocalized}
b^\dagger_n := \sum_k \tilde f^{(k)}_n a^\dagger_{k}
\end{equation}
and we expect $b^\dagger_n$ to create a particle localized in space at the position $n$. Fig. \ref{OnePartLocalized} is a visualization of the state
\begin{equation} \label{EqPsiLoc}
\Psi = b^\dagger_{5} \Psi_0.
\end{equation}
It is apparent that the sign of $\Psi(\vec q)$ depends only on the sign of the single coordinate $q_5$. This is exactly what one should expect from a quantum oscillator which has $N = 11$ (in the case of Fig. \ref{OnePartLocalized}) degrees of freedom, but which is excited only along the fifth coordinate. Yet the effect of the coupling between the point masses is also visible: At least at the positions $n=4$ and $n=6$ there is a notable correlation of the polylines' \BW{brightness}\color{color} with those at $n=5$ in the sense that \BW{white (black)}\color{red (blue)} lines dominate where $q > 0$ $(q<0)$. It looks like the excitation of the quantum field is `blurred' around that point. Where does this blur come from, given that $\Psi$ in (\ref{EqPsiLoc}) is constructed in close analogy to the $\delta$ function (\ref{EqDelta}) with its sharp peak?
The deeper reason is that $a^\dagger_{k} \Psi_0$ is not only associated with a wave $\tilde f^{(k)}_n$, which stems from the Fourier transformation and which gives rise to the analogy between (\ref{EqDelta}) and (\ref{EqPsiLoc}), but also with an oscillation amplitude in the direction of the $\tilde Q_k$ coordinate. This amplitude depends on the strength of the harmonic potential in the $\tilde Q_k$ direction, which in turn depends (via the diagonalization of the system's Hamiltonian) on the coupling constant $\gamma$ between the mass points. Only in the trivial case $\gamma = 0$ is the potential radially symmetric in the space spanned by the $q_n$ or the $\tilde Q_k$ coordinates, which implies that the oscillation amplitudes of excited states don't depend on $n$ or $k$, so that the analogy between (\ref{EqDelta}) and (\ref{EqPsiLoc}) holds perfectly. In this case the `blurring' of the particle's position in Fig. \ref{OnePartLocalized} vanishes, which is also clear from the fact that for $\gamma = 0$ there is not even the concept of two `neighboring' point masses built into the physical system.
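This `blurring' can be computed explicitly. Using the single-mode relation $a^\dagger \varphi_0 \propto \sqrt{2m\omega}\,\tilde Q\,\varphi_0$, one finds $(b^\dagger_{n_0}\Psi_0)(\vec q) \propto (\vec v \cdot \vec q)\,\Psi_0(\vec q)$ with $v_n = \sum_k \tilde f^{(k)}_{n_0}\sqrt{2m\omega_k}\,\tilde f^{(k)}_n$. The following Python sketch (illustrative parameters; the index $n_0 = 4$ corresponds to the site $n=5$ of the figure, since Python counts from zero) shows that $\vec v$ collapses to a sharp peak for $\gamma = 0$ and acquires non-zero neighbors for $\gamma > 0$:

```python
import numpy as np

def blur_profile(N=11, m=1.0, kappa=1.0, gamma=0.5, n0=4):
    """Coefficients v with (b_{n0}^dag Psi_0)(q) ~ (v . q) * Psi_0(q)."""
    ks = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
    n = np.arange(N)
    M = (np.cos(2 * np.pi * np.outer(n, ks) / N)
         + np.sin(2 * np.pi * np.outer(n, ks) / N)) / np.sqrt(N)
    omega = np.sqrt((kappa + 2 * gamma * (1 - np.cos(2 * np.pi * ks / N))) / m)
    # v_n = sum_k f~^(k)_{n0} * sqrt(2 m omega_k) * f~^(k)_n
    return M @ (np.sqrt(2 * m * omega) * M[n0])

v = blur_profile(gamma=0.5)              # coupled chain: blurred excitation
v0 = blur_profile(gamma=0.0)             # uncoupled chain: sharp peak
print(int(np.argmax(np.abs(v))))         # the peak stays at n0 = 4
print(np.allclose(np.delete(v0, 4), 0))  # for gamma = 0 the blur vanishes: True
```

Only the $k$-dependence of the frequencies $\omega_k$, i.e. the coupling $\gamma$, spreads the excitation over neighboring sites.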
\subsection{Two Localized Particles}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{TwoPartLocalized-large-dist-BW-v3.eps}}
\color{\includegraphics{TwoPartLocalized-large-dist-COL-v3.eps}}
}
\caption{A state with two particles, one of them localized at $n=3$ and the other at $n=8$. While the peaks in excitation at those two locations are clearly visible, the detailed structure of the plot may seem chaotic at first glance. A closer inspection reveals a pattern in the \BW{white and black}\color{red and blue} lines which becomes much more evident in Fig. \ref{TwoPartLocalized-small-dist} below.}
\label{TwoPartLocalized-large-dist}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{TwoPartLocalized-small-dist-BW-v2.eps}}
\color{\includegraphics{TwoPartLocalized-small-dist-COL-v3.eps}}
}
\caption{Two particles are localized very closely to each other at $n=5$ and $n=6$, respectively. The wave function assumes relatively large positive values for configurations of the harmonic chain where the atoms at $n=5$ and $n=6$ are strongly and synchronously displaced (i.e. where $q_5$ and $q_6$ are either both positive or both negative and of relatively large absolute value). It assumes negative values especially for configurations where the same two atoms are rather weakly and asynchronously displaced (i.e. $q_5$ is slightly positive and $q_6$ is slightly negative, or vice versa).}
\label{TwoPartLocalized-small-dist}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{0.99\textwidth}{!}{
\BW{\includegraphics{TwoPartLocalized-no-dist-BW-v3.eps}}
\color{\includegraphics{TwoPartLocalized-no-dist-COL-v3.eps}}
}
\caption{Two particles are in the same state, localized at $n=5$. The wave function $\Psi$ has two changes of sign along the $q_5$ axis: $\Psi$ is negative where $q_5 \approx 0$ and it is positive where the absolute value of $q_5$ is sufficiently large. This pattern resembles the second excited state of a one-dimensional quantum harmonic oscillator.}
\label{TwoPartLocalized-no-dist}
\end{figure}
In our final example we consider two localized particles, which are described by the state
\begin{equation} \label{EqPsi2Loc}
\Psi = b^\dagger_{n_1} b^\dagger_{n_2} \Psi_0.
\end{equation}
This case is particularly interesting because we can visualize how two bosons move gradually from two different states into one and the same. Figures \ref{TwoPartLocalized-large-dist}, \ref{TwoPartLocalized-small-dist} and \ref{TwoPartLocalized-no-dist} contain Monte Carlo visualizations for different values of the pair $(n_1,n_2)$, namely for $(3,8)$, $(5,6)$ and $(5,5)$, respectively. The two particles in Fig. \ref{TwoPartLocalized-large-dist} are relatively far apart and each of the two localized excitations resembles the localized one-particle state from Fig. \ref{OnePartLocalized}, even though the phase structure of the $\Psi$ function (i.e. \BW{white}\color{red} lines vs. \BW{black}\color{blue} lines) is much more complicated. In Fig. \ref{TwoPartLocalized-no-dist}, on the other hand, two particles are located at the same point in space. Note the similarity of this plot with the two-particle state shown in Fig. \ref{TwoPartRest}: Both exhibit the pattern of an harmonic oscillator's second excited state (a function with two changes of sign), but in this case the excitation is limited to a small neighborhood of the point $n=5$. Fig. \ref{TwoPartLocalized-small-dist} shows how the transition between the two extreme cases (two particles far apart vs. localized in the same point) comes about.
\section{Discussion of Results} \label{SecDiscussion}
We have presented a new way of visualizing states of a quantum field and we conclude this article with a discussion of advantages and limitations of the new method, as well as an outlook for future investigation.
We hope that our visualizations can help students of QFT and solid state physics to develop a better intuitive understanding of some of their fundamental concepts. Compared to the method presented by Johnson and Gutierrez \cite{JohnsonGutierrez} we feel that our graphs are a slightly more direct representation of the quantum state, since they indicate the value of $\Psi$ for (a randomly chosen set of) points in the state space directly, instead of showing a projection of $\Psi$ onto the $q_n$ coordinates. For example, the spatial `blurriness' of a localized particle is clearly visible in Fig. \ref{OnePartLocalized} but not in the analogous Fig. 23 of the paper by Johnson and Gutierrez.
An inherent limitation of our method is that the graphs are not exactly reproducible. Due to the random nature of the Monte-Carlo approach, two graphs generated with the exact same parameters will look similar, but not exactly the same. Also, the graphs are quite sensitive to the parameters chosen in the Monte-Carlo algorithm. For example, if the number of random points chosen is too small, then none of these points might be in a region where $|\Psi|$ is large and the resulting graph will look quite noisy and without any clear structure. Whether that number is `small' also depends on the number $N$ of point masses. The volume from which interesting points can be chosen (e.g. the set of all $\vec q \in \mathbb{R}^N$ with $|q_j| < 1$ for all $j = 1,\dots,N$) grows exponentially with $N$. Consequently, the necessary number of random points and the computation time of the algorithm also grow exponentially with $N$. Finally, for too `complex' states (e.g. those with more than just a few particles of different wave numbers) the graphs become too busy to be readable.
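The exponential growth of the useful sampling volume can be illustrated with a toy model. The sketch below is our own illustration, not part of the visualization code: it assumes a product-Gaussian $|\Psi|$ of width $\sigma$ centered at the origin (mimicking a ground-state-like wave functional) and estimates the fraction of uniformly drawn points in $[-1,1]^N$ that land where $|\Psi|$ exceeds half its peak value:

```python
import math
import random

def hit_fraction(N, trials=20000, sigma=0.3):
    """Fraction of uniform random points in [-1,1]^N lying where a toy
    product-Gaussian |Psi| exceeds half its peak value.

    For a Gaussian, |Psi| > Psi_max/2  iff  sum_j q_j^2 < 2 sigma^2 ln 2.
    """
    r2 = 2.0 * sigma**2 * math.log(2.0)
    hits = 0
    for _ in range(trials):
        q = [random.uniform(-1.0, 1.0) for _ in range(N)]
        if sum(x * x for x in q) < r2:
            hits += 1
    return hits / trials
```

Already for $N=8$ point masses the hit fraction in this toy model drops to order $10^{-6}$, so the number of trial points (and hence the run time) must grow roughly exponentially with $N$, in line with the discussion above.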
In principle, our method could be extended to two-dimensional fields, where the polylines from the plots in the present paper would be replaced by overlapping surfaces in a 3D plot. In order to avoid a very busy and confusing graph, the number of surfaces drawn would have to be limited strictly to just a few with the very highest values of $|\Psi|$.
An interesting subject for future work will be the extension of the method presented here to more complex phenomena in quantum fields, for example by considering interactions between fields or by making the shift from bosons to fermions.
\section{Acknowledgments}
The author is grateful to the anonymous referees for thorough reviews and several helpful suggestions which have made this article better.
\section{Introduction}
\label{sec:1}
Our Universe seems to be populated exclusively with matter and essentially no antimatter. Although this asymmetry is maximal today, at high temperatures ($T\gtrsim 1$ GeV) when quark-antiquark pairs were abundant in the thermal plasma, the baryon asymmetry observed today corresponds to a tiny asymmetry at recombination~\cite{Ade:2015xua}:
\begin{align}
\eta_B \ \equiv \ \frac{n_B-n_{\overline{B}}}{n_\gamma} \ = \ \left(6.105^{+0.086}_{-0.081}\right)\times 10^{-10} \; , \label{baryo}
\end{align}
where $n_{B(\overline{B})}$ is the number density of baryons (antibaryons) and $n_\gamma=2T^3 \zeta(3)/\pi^2$ is the number density of photons, $\zeta(x)$ being the Riemann zeta function, with $\zeta(3)\approx 1.20206$. {\em Baryogenesis} is the mechanism by which the observed baryon asymmetry of the Universe (BAU) given by Eq.~\eqref{baryo} can arise dynamically from an initially baryon symmetric phase of the Universe, or even irrespective of any initial asymmetry.
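For orientation, the numbers entering Eq.~\eqref{baryo} are easy to reproduce. The following sketch (our own illustration, in natural units $\hbar=c=k_B=1$ with temperatures in GeV) evaluates $\zeta(3)$ by direct summation and the equilibrium photon number density $n_\gamma$:

```python
from math import pi

# Riemann zeta(3) from its defining series; 10^6 terms give ~12 correct digits.
ZETA3 = sum(1.0 / n**3 for n in range(1, 10**6 + 1))

def photon_number_density(T):
    """Equilibrium photon number density n_gamma = 2 zeta(3) T^3 / pi^2.

    T in GeV; returns n_gamma in GeV^3 (natural units).
    """
    return 2.0 * ZETA3 * T**3 / pi**2
```

At $T=1$ GeV this gives $n_\gamma \approx 0.24$ GeV$^3$; multiplying $n_\gamma$ by $\eta_B$ then yields the tiny net baryon density that survives annihilation.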
This necessarily requires the fulfillment of three basic Sakharov conditions~\cite{Sakharov:1967dj}: (i) baryon number ($B$) violation, which is essential for the Universe to evolve from a state with net baryon number $B=0$ to a state with $B\neq 0$; (ii) $C$ and $CP$ violation, which allow particles and anti-particles to evolve differently so that we can have an asymmetry between them; (iii) departure from thermal equilibrium, which ensures that the asymmetry does not get erased completely. The Standard Model (SM) has, in principle, all these basic ingredients, namely, (i) the triangle anomaly violates $B$ through a non-perturbative instanton effect~\cite{tHooft:1976up}, which leads to effective ($B+L$)-violating sphaleron transitions for $T\gtrsim 100$ GeV~\cite{Kuzmin:1985mm}; (ii) there is maximal $C$ violation in weak interactions and $CP$ violation due to the Kobayashi-Maskawa phase in the quark sector~\cite{Agashe:2014kda}; (iii) the departure from thermal equilibrium can be realized at the electroweak phase transition (EWPT) if it is sufficiently first order~\cite{Cohen:1990it}. However, the SM $CP$ violation turns out to be too small to account for the observed BAU~\cite{Gavela:1993ts}. In addition, for the observed value of the Higgs mass, $m_H=(125.09\pm 0.24)$ GeV~\cite{Aad:2015zhl}, the EWPT is not first order, but only a smooth crossover~\cite{Kajantie:1996mn}. Therefore, {\em the observed BAU provides strong evidence for the existence of new physics beyond the SM.}
Many interesting scenarios for successful baryogenesis have been proposed in beyond SM theories; see e.g.~\cite{Cline:2006ts}. Here we will focus on the mechanism of {\em leptogenesis}~\cite{Fukugita:1986hr}, which is an elegant framework to explain the BAU, while connecting it to another seemingly disparate piece of evidence for new physics beyond the SM, namely, non-zero neutrino masses; for reviews on leptogenesis, see e.g.~\cite{Davidson:2008bu, Blanchet:2012bk}. The minimal version of leptogenesis is based on the {\em type I seesaw} mechanism~\cite{seesaw}, which requires heavy SM gauge-singlet Majorana neutrinos $N_\alpha$ (with $\alpha=1,2,3$) to explain the observed smallness of the three active neutrino masses at tree-level. The out-of-equilibrium decays of these heavy Majorana neutrinos in an expanding Universe create a lepton asymmetry, which is reprocessed into a baryon asymmetry through the equilibrated $(B+L)$-violating electroweak sphaleron
interactions~\cite{Kuzmin:1985mm}.
In the original scenario of thermal leptogenesis~\cite{Fukugita:1986hr}, the heavy Majorana neutrino masses
are typically close to the Grand Unified Theory (GUT) scale,
as suggested by natural GUT embedding of the seesaw
mechanism. In fact, for a hierarchical heavy neutrino spectrum, i.e. ($m_{N_1} \ll m_{N_{2}} < m_{N_{3}}$),
the light neutrino oscillation data impose a
{\it lower} limit on $m_{N_1} \gtrsim 10^9$ GeV~\cite{Davidson:2002qv}. As a consequence,
such `vanilla' leptogenesis scenarios~\cite{Buchmuller:2004nz} are very
difficult to test in any foreseeable experiment. Moreover, these high-scale thermal leptogenesis scenarios run into difficulties
when embedded within supergravity models of inflation. In particular, they lead to a potential conflict with an {\em upper} bound on the reheat
temperature of the Universe, $T_R \lesssim 10^6$--$10^9$ GeV, as required to avoid overproduction of gravitinos whose late decays may otherwise
ruin the success of Big Bang
Nucleosynthesis~\cite{Khlopov:1984pf}.
An attractive scenario that avoids the aforementioned problems is {\em resonant leptogenesis} (RL)~\cite{Pilaftsis:1997dr, Pilaftsis:2003gt}, where the $\varepsilon$-type $CP$ asymmetries due to the self-energy
effects~\cite{Flanz:1994yx, Covi:1996wh, Buchmuller:1997yu} in the heavy Majorana neutrino decays get resonantly enhanced. This happens when the masses of at least two of the heavy neutrinos become quasi-degenerate, with a mass difference comparable to their decay widths~\cite{Pilaftsis:1997dr}. The resonant enhancement of the $CP$ asymmetry allows one to avoid the lower bound on $m_{N_1} \gtrsim 10^9$ GeV~\cite{Davidson:2002qv} and have successful leptogenesis at an experimentally accessible energy scale~\cite{Pilaftsis:2003gt, Pilaftsis:2005rv}, while retaining perfect agreement with the light-neutrino oscillation data. The level of testability is further extended in the scenario of Resonant $l$-Genesis (RL$_{l}$), where the final lepton asymmetry is dominantly generated and stored in a {\it single} lepton flavor $l$~\cite{Pilaftsis:2004xx, Deppisch:2010fr, Dev:2014laa, Dev:2015wpa}. In such models, the heavy neutrinos could be as light as the electroweak scale, while still having sizable couplings to other charged-lepton flavors $l'\neq l$, thus giving rise to potentially large lepton flavor violation (LFV) effects.
In this mini-review, we will mainly focus on low-scale leptogenesis scenarios, which may be directly tested at the Energy~\cite{Deppisch:2015} and Intensity~\cite{Alekhin:2015byh} frontiers. For brevity, we will only discuss the type I seesaw-based leptogenesis models; for other leptogenesis scenarios, see e.g.~\cite{Davidson:2008bu, Hambye:2012fh}.
\section{Basic Picture}
Our starting point is the minimal type I seesaw extension of the SM Lagrangian:
\begin{align}
{\cal L} \ = \ {\cal L}_{\rm SM}+ i\overline{N}_{{\rm R}, \alpha}\gamma_\mu \partial ^\mu N_{{\rm R}, \alpha} - h_{l\alpha}\overline{L}_l\widetilde{\Phi}N_{{\rm R}, \alpha} - \frac{1}{2}\overline{N}^C_{{\rm R}, \alpha}(M_{N})_{\alpha\beta}N_{{\rm R}, \beta} +{\rm H.c.} \; ,
\end{align}
where $N_{{\rm R},\alpha} \equiv \frac{1}{2}(\mathbf{1}+\gamma_5) N_\alpha$ are the
right-handed (RH) heavy neutrino fields, $L_l\equiv (\nu_l ~~ l)_L^{\sf T}$ are the $SU(2)_L$ lepton doublets (with $l=e,\mu,\tau$) and $\widetilde{\Phi}\equiv i\sigma_2\Phi^*$, $\Phi$ being the SM Higgs doublet and $\sigma_2$ being the second Pauli matrix. The complex Yukawa couplings $h_{l\alpha}$ induce $CP$-violating decays of the heavy Majorana neutrinos, if kinematically allowed: $N_\alpha\to L_l\Phi$ with a decay rate $\Gamma_{l\alpha}$ and the $CP$-conjugate process $N_\alpha \to L_l^c\Phi^c$ with a decay rate $\Gamma^c_{l\alpha}$ (the shorthand notation $c$ denotes $CP$). The flavor-dependent $CP$ asymmetry can be defined as
\begin{align}
\varepsilon_{l\alpha} \ = \ \frac{\Gamma_{l\alpha}
- \Gamma^c_{l\alpha}}
{\sum_{k}\big(\Gamma_{k\alpha}
+ \Gamma^{c}_{k\alpha}\big)} \
\equiv \ \frac{\Delta \Gamma_{l\alpha}}{\Gamma_{N_\alpha}}\;,
\label{epsdef}
\end{align}
where $\Gamma_{N_\alpha}$ is the total decay width of the heavy
Majorana neutrino $N_\alpha$ which, at tree-level, is given by $\Gamma_{N_\alpha}=\frac{m_{N_{\alpha}}}{8\pi}(h^\dag h)_{\alpha\alpha}$.
A non-zero $CP$ asymmetry arises at one-loop level due to the interference between the tree-level graph with either the vertex or the self-energy graph. Following the terminology used for $CP$ violation in neutral meson systems, we denote these two contributions as $\varepsilon'$-type and $\varepsilon$-type $CP$ violation respectively. In the two heavy-neutrino case ($\alpha,\beta=1,2$; $\alpha\neq \beta$), they can be expressed in a simple analytic form~\cite{Covi:1996wh, Pilaftsis:2003gt}:
\begin{eqnarray}
\varepsilon'_{l\alpha} \ &=& \ \frac{{\rm Im}
\left[({h}^*_{l\alpha} {h}_{l\beta})({h}^\dag {h})_{\alpha\beta}\right]}
{8\pi \: ({h}^\dag {h})_{\alpha\alpha}} \frac{m_{N_\beta}}{m_{N_\alpha}}\left[1-\bigg(1+\frac{m^2_{N_\beta}}{m^2_{N_\alpha}}\bigg)\ln\bigg(1+ \frac{m^2_{N_\alpha}}{m^2_{N_\beta}}\bigg)\right]
\; , \label{epsp}\\
\varepsilon_{l\alpha} \ &=& \ \frac{{\rm Im}\left[(h^*_{l\alpha} {h}_{l\beta})({h}^\dag {h})_{\alpha\beta}\right]+\frac{m_{N_\alpha}}{m_{N_\beta}}\:{\rm Im}\left[({h}^*_{l\alpha} {h}_{l\beta})({h}^\dag {h})_{\beta\alpha}\right]}
{8\pi \: ({h}^\dag {h})_{\alpha\alpha}} \frac{(m^2_{N_\alpha}-m^2_{N_\beta})
m_{N_\alpha}m_{N_\beta}}
{(m^2_{N_\alpha}-m^2_{N_\beta})^2
+m^2_{N_\alpha}\Gamma^2_{N_\beta}} \; . \nonumber \\
\label{eps}
\end{eqnarray}
In the quasi-degenerate limit $|m_{N_\alpha}-m_{N_\beta}|\sim \frac{1}{2}\Gamma_{N_{\alpha,\beta}}$, the $\varepsilon$-type contribution becomes resonantly enhanced, as is evident from the second denominator in Eq.~\eqref{eps}.
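As a numerical illustration (a sketch under our own conventions, not taken from any public leptogenesis code), the resonant self-energy asymmetry of Eq.~\eqref{eps} can be evaluated directly from a given Yukawa matrix $h_{l\alpha}$ and heavy-neutrino masses:

```python
import numpy as np

def total_width(h, m_N, alpha):
    """Tree-level width Gamma_N = m_N/(8 pi) (h^dag h)_{alpha alpha}."""
    hh = h.conj().T @ h
    return m_N[alpha] / (8.0 * np.pi) * hh[alpha, alpha].real

def eps_self_energy(h, m_N, l, alpha, beta):
    """epsilon-type CP asymmetry of Eq. (eps) for N_alpha decaying to
    lepton flavor l, resonantly enhanced by mixing with N_beta (alpha != beta)."""
    hh = h.conj().T @ h
    ma, mb = m_N[alpha], m_N[beta]
    Gb = total_width(h, m_N, beta)
    num = (np.imag(np.conj(h[l, alpha]) * h[l, beta] * hh[alpha, beta])
           + (ma / mb) * np.imag(np.conj(h[l, alpha]) * h[l, beta] * hh[beta, alpha]))
    denom = 8.0 * np.pi * hh[alpha, alpha].real
    # Resonant regulator: maximal when |m_a^2 - m_b^2| ~ m_a Gamma_b
    resonance = (ma**2 - mb**2) * ma * mb / ((ma**2 - mb**2)**2 + ma**2 * Gb**2)
    return num / denom * resonance
```

As expected for a $CP$-conserving theory, the asymmetry vanishes identically for real Yukawa couplings, and it is maximized when $|m_{N_\alpha}^2-m_{N_\beta}^2| \sim m_{N_\alpha}\Gamma_{N_\beta}$.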
Due to the Majorana nature of the heavy neutrinos, their decays to lepton and Higgs fields violate lepton number which, in the presence of $CP$ violation, leads to the generation of a lepton (or $B-L$) asymmetry. Part of this asymmetry is washed out due to the inverse decay processes $L\Phi\to N, L^c\Phi^c \to N$ and various $\Delta L=1$ (e.g. $NL\leftrightarrow Q u^c$) and $\Delta L=2$ (e.g. $L\Phi \leftrightarrow L^c\Phi^c$) scattering processes. In the flavor-diagonal limit, the total amount of $B-L$ asymmetry generated at a given temperature can be calculated by solving the following set of coupled Boltzmann equations~\cite{Buchmuller:2004nz}:
\begin{align}
\frac{d{\cal N}_{N_\alpha}}{dz} \ & = \ -(D_\alpha+S_\alpha)({\cal N}_{N_\alpha}-{\cal N}_{N_\alpha}^{\rm eq}) \; , \label{be1}\\
\frac{d{\cal N}_{B-L}}{dz} \ & = \ \sum_{\alpha} \varepsilon_\alpha D_\alpha ({\cal N}_{N_\alpha}-{\cal N}_{N_\alpha}^{\rm eq}) - {\cal N}_{B-L} \sum_\alpha W_\alpha \; , \label{be2}
\end{align}
where $z\equiv m_{N_1}/T$, with $N_1$ being the lightest heavy neutrino, ${\cal N}_X$ denotes the number density of $X$ in a portion of co-moving volume containing one heavy neutrino in the ultra-relativistic limit, so that ${\cal N}^{\rm eq}_{N_\alpha}(T\gg m_{N_\alpha})=1$, $\varepsilon_\alpha\equiv \sum_l(\varepsilon_{l\alpha}+\varepsilon'_{l\alpha})$ is the total $CP$ asymmetry due to the decay of $N_\alpha$, and $D_\alpha, S_\alpha, W_\alpha$ denote the decay, scattering and washout rates, respectively. Given the Hubble expansion rate $H(T)\simeq 1.66\sqrt{g_*}\,\frac{T^2}{M_{\rm Pl}}$, where $g_*$ is the total number of relativistic degrees of freedom and $M_{\rm Pl}=1.22\times 10^{19}$ GeV is the Planck mass, we define the decay parameters $K_\alpha \equiv \frac{\Gamma_{D_\alpha}(T=0)}{H(T=m_{N_\alpha})}$, where $\Gamma_{D_\alpha}\equiv \sum_l(\Gamma_{l\alpha}+\Gamma^c_{l\alpha})$. For $K_\alpha\gg 1$, the system is in the {\em strong washout} regime, where the final lepton asymmetry is insensitive to any initial asymmetry present. The decay rates are given by $D_\alpha \equiv \frac{\Gamma_{D_\alpha}}{Hz} = K_\alpha x_\alpha z \frac{{\cal K}_1(z)}{{\cal K}_2(z)}$, where $x_\alpha \equiv \frac{m^2_{N_\alpha}}{m^2_{N_1}}$ and ${\cal K}_n(z)$ is the $n$th-order modified Bessel function of the second kind. Similarly, the washout rates induced by inverse decays are given by $W_\alpha^{\rm ID} = \frac{1}{4}K_\alpha\sqrt{x_\alpha} {\cal K}_1(z_\alpha)z_\alpha^3$, where $z_\alpha\equiv z\sqrt{x_\alpha}$. Other washout terms due to $2\leftrightarrow 2$ scattering can be similarly calculated~\cite{Buchmuller:2004nz}. The final $B-L$ asymmetry is given by
${\cal N}_{B-L}^{\rm f} = \sum_\alpha \varepsilon_\alpha \kappa_\alpha(z\to \infty)$,
where $\kappa_\alpha(z)$'s are the efficiency factors, obtained from Eqs.~\eqref{be1} and \eqref{be2}:
\begin{align}
\kappa_\alpha(z) \ = \ -\int_{z_{\rm in}}^z dz'\: \frac{D_\alpha(z')}{D_\alpha(z')+S_\alpha(z')}\: \frac{d{\cal N}_{N_\alpha}}{dz'} \: \exp\bigg[-\int_{z'}^z dz''\sum_\alpha W_\alpha(z'')\bigg] \; .
\end{align}
At temperatures $T\gg 100$ GeV, when the $(B+L)$-violating electroweak sphalerons are in thermal equilibrium, a fraction $a_{\rm sph}=\frac{28}{79}$ of the $B-L$ asymmetry is reprocessed to a baryon asymmetry~\cite{Khlebnikov:1988sr}. There is an additional entropy dilution factor $f=\frac{{\cal N}_\gamma^{\rm rec}}{{\cal N}_{\gamma,*}} = \frac{2387}{86}$ due to the standard photon production from the onset of leptogenesis till the epoch of recombination~\cite{Kolb:1990vq}. Combining all these effects, the predicted baryon asymmetry due to the mechanism of leptogenesis is given by
\begin{align}
\eta_B \ = \ \frac{a_{\rm sph}}{f}{\cal N}^{\rm f}_{B-L} \ \simeq \ 10^{-2} \: \sum_\alpha \varepsilon_\alpha \kappa_\alpha(z\to \infty) \; ,
\end{align}
which has to be compared with the observed BAU given by Eq.~\eqref{baryo}.
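A minimal numerical sketch of this pipeline, keeping only decays and inverse decays (i.e. setting $S_\alpha$ and the scattering washout to zero, an assumption made here purely for illustration) for a single heavy neutrino with thermal initial abundance, solves Eqs.~\eqref{be1} and \eqref{be2} with $\varepsilon$ factored out, so that the integration returns the efficiency factor $\kappa$ directly:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn  # modified Bessel functions K_n(z)

def efficiency(K, z_start=0.05, z_max=50.0):
    """Efficiency kappa for one heavy neutrino with decay parameter K,
    decays + inverse decays only, thermal initial abundance."""
    def rhs(z, y):
        NN, NBL = y
        Neq = 0.5 * z**2 * kn(2, z)        # normalized so N_eq -> 1 as z -> 0
        D = K * z * kn(1, z) / kn(2, z)    # decay term D(z)
        W = 0.25 * K * z**3 * kn(1, z)     # inverse-decay washout W^ID(z)
        dNN = -D * (NN - Neq)
        dNBL = D * (NN - Neq) - W * NBL    # eps factored out: N_{B-L}(inf) = kappa
        return [dNN, dNBL]
    y0 = [0.5 * z_start**2 * kn(2, z_start), 0.0]
    sol = solve_ivp(rhs, (z_start, z_max), y0, method="LSODA",
                    rtol=1e-8, atol=1e-12)
    return sol.y[1, -1]
```

In the strong-washout regime ($K\gg 1$) this reproduces the familiar suppression $\kappa\sim 10^{-2}$--$10^{-3}$, so matching the observed $\eta_B\simeq 6\times 10^{-10}$ via $\eta_B\simeq 10^{-2}\,\varepsilon\,\kappa$ requires $\varepsilon\kappa\sim 6\times 10^{-8}$.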
\section{Flavor Effects}
Flavor effects in both heavy-neutrino and charged-lepton sectors, as well as the interplay between them, can play an important role in determining the final lepton asymmetry, especially in low-scale leptogenesis models~\cite{Blanchet:2012bk}. These intrinsically-quantum effects can, in principle, be accounted for by extending the flavor-diagonal Boltzmann equations \eqref{be1} and \eqref{be2} for the number densities of individual flavor species to a semi-classical evolution equation for a {\it matrix of number densities}, analogous to the formalism presented in~\cite{Sigl:1993} for light neutrinos. This so-called `density matrix' formalism has been adopted to describe flavor effects in various leptogenesis scenarios~\cite{Abada:2006fw, Nardi:2006fx, Akhmedov:1998qx}. It was recently shown~\cite{Dev:2014laa, Dev:2014tpa}, in a semi-classical approach, that a consistent treatment of {\em all} pertinent flavor effects, including flavor mixing, oscillations and off-diagonal (de)coherences, necessitates a {\em fully} flavor-covariant formalism. It was further shown that the resonant mixing of different heavy-neutrino flavors and coherent oscillations between them are two {\em distinct} physical phenomena, whose contributions to the $CP$ asymmetry could be of similar order of magnitude in the resonant regime. Note that this is analogous to the experimentally-distinguishable phenomena of mixing and oscillations in the neutral meson systems~\cite{Agashe:2014kda}.
One can go beyond the semi-classical `density-matrix' approach to leptogenesis by means of a quantum field-theoretic analogue of the Boltzmann equations, known as the Kadanoff-Baym (KB) equations~\cite{Baym:1961zz}. Such `first-principles' approaches to leptogenesis~\cite{Buchmuller:2000nd} are, in principle, capable of accounting consistently for all flavor effects, in addition to off-shell and finite-width effects, including thermal corrections. However, it is often necessary to use truncated gradient expansions and quasi-particle ans\"atze to relate the propagators appearing in the KB equations to particle number densities. Recently, using a perturbative formulation of thermal field theory~\cite{Millington:2012pf}, it was shown~\cite{Dev:2014wsa} that quantum transport equations for leptogenesis can be obtained from the KB formalism without the need for gradient expansion or quasi-particle ans\"atze, thereby capturing fully the pertinent flavor effects.
Specifically, the source term for the lepton asymmetry obtained, at leading order, in this KB approach~\cite{Dev:2014wsa} was found to be exactly the same as that obtained in the semi-classical flavor-covariant approach of~\cite{Dev:2014laa}, confirming that flavor mixing and oscillations are indeed two {\em physically-distinct} phenomena, at least in the weakly-resonant regime. The proper treatment of these flavor effects can lead to a significant enhancement of the final lepton asymmetry, as compared to partially flavor-dependent or flavor-diagonal limits~\cite{Dev:2014laa, Dev:2015wpa}, thereby enlarging the viable parameter space for models of RL and enhancing the prospects of testing the leptogenesis mechanism.
\section{Phenomenology}
As an example of a testable scenario of leptogenesis, we consider a minimal $\mathrm{RL}_l$ model that possesses an approximate SO(3)-symmetric heavy-neutrino sector at some high scale $\mu_X$, with mass matrix $\bm{M}_N(\mu_X)=m_N\bm{1}_3+\bm{\Delta M}_N$~\cite{Pilaftsis:2005rv, Deppisch:2010fr}, where the SO(3)-breaking mass term is of the form $\bm{\Delta M}_N=\mathrm{diag}(\Delta M_1,\Delta M_2/2,-\Delta M_2/2)$~\cite{Dev:2015wpa}. By virtue of the renormalization group running, an additional mass splitting term
\begin{equation}
\bm{\Delta M}_N^{\rm RG} \ \simeq \ - \,\frac{m_N}{8\pi^2}
\ln\left(\frac{\mu_X}{m_N}\right)
{\rm Re}\left[\bm{h}^\dag(\mu_X) \bm{h}(\mu_X)\right]
\label{deltam}
\end{equation}
is induced at the scale relevant to RL. In order to ensure the smallness of the light-neutrino masses, we also require the heavy-neutrino Yukawa sector to have an approximate leptonic U(1)$_l$ symmetry. As an explicit example, we consider an RL$_\tau$ model, with the following Yukawa coupling structure~\cite{Pilaftsis:2004xx,Pilaftsis:2005rv}:
\begin{eqnarray}
\bm{h} \ = \ \left(\begin{array}{ccc}
\epsilon_e & a \,e^{-i\pi/4} & a\,e^{i\pi/4}\\
\epsilon_\mu & b\,e^{-i\pi/4} & b\,e^{i\pi/4}\\
\epsilon_\tau & c\,e^{-i\pi/4} & c\,e^{i\pi/4}
\end{array}\right) \; ,
\label{yuk}
\end{eqnarray}
where $a,b,c$ are arbitrary complex parameters and $\epsilon_{e,\mu,\tau}$ are the perturbation terms that break the U(1)$_l$ symmetry. In order to be consistent with the observed neutrino-oscillation data, we require $|a|,|b|\lesssim 10^{-2}$ for electroweak-scale heavy neutrinos. In addition, in order to protect the $\tau$ asymmetry from large washout effects, we require $|c|\lesssim 10^{-5}\ll|a|,|b|$ and $|\epsilon_{e,\mu,\tau}|\lesssim 10^{-6}$.
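The RG-induced splitting of Eq.~\eqref{deltam} is straightforward to evaluate numerically; the following sketch (our own minimal implementation) returns the full matrix of mass shifts for a given Yukawa matrix at the scale $\mu_X$:

```python
import numpy as np

def delta_M_RG(h, m_N, mu_X):
    """RG-induced heavy-neutrino mass-shift matrix of Eq. (deltam):
    -(m_N / 8 pi^2) ln(mu_X / m_N) Re[h^dag h].  Masses and scales in GeV."""
    return -(m_N / (8.0 * np.pi**2)) * np.log(mu_X / m_N) * (h.conj().T @ h).real
```

Even ${\cal O}(10^{-2})$ Yukawa couplings generate splittings of this kind, which is why the RG term has to be taken into account alongside the tree-level pattern $\bm{\Delta M}_N$ when imposing the resonance condition.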
A choice of benchmark values for these parameters, satisfying all the current experimental constraints and allowing successful leptogenesis, is given below:
\begin{align}
& m_N \ = \ 400~{\rm GeV}, \quad
\frac{\Delta M_1}{m_N} \ = \ -3\times 10^{-5}, \quad
\frac{\Delta M_2}{m_N} \ = \ (-1.21+0.10\,i)\times 10^{-9}, \nonumber \\
& a \ = \ (4.93-2.32 \, i)\times 10^{-3}, \quad
b \ = \ (8.04 - 3.79 \, i)\times 10^{-3}, \quad
c \ =\ 2\times 10^{-7}, \nonumber \\
& \epsilon_e \ = \ 5.73\, {i}\times 10^{-8}, \quad
\epsilon_\mu \ =\ 4.30\, {i}\times 10^{-7}, \quad
\epsilon_\tau \ = \ 6.39\, {i}\times 10^{-7} .
\end{align}
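With the benchmark values above, the Yukawa texture of Eq.~\eqref{yuk} can be assembled explicitly and the tree-level widths $\Gamma_{N_\alpha}=\frac{m_{N_\alpha}}{8\pi}(h^\dag h)_{\alpha\alpha}$ evaluated (a consistency sketch of our own, not a full leptogenesis computation):

```python
import numpy as np

m_N = 400.0                         # GeV
a = 4.93e-3 - 2.32e-3j
b = 8.04e-3 - 3.79e-3j
c = 2.0e-7
eps_e, eps_mu, eps_tau = 5.73e-8j, 4.30e-7j, 6.39e-7j
p = np.exp(-1j * np.pi / 4.0)       # the e^{-i pi/4} phase of Eq. (yuk)

# Yukawa matrix h_{l alpha} with l = e, mu, tau (rows), alpha = 1, 2, 3 (columns)
h = np.array([[eps_e,   a * p, a * np.conj(p)],
              [eps_mu,  b * p, b * np.conj(p)],
              [eps_tau, c * p, c * np.conj(p)]])

hh = h.conj().T @ h
widths = m_N / (8.0 * np.pi) * np.diag(hh).real   # Gamma_{N_alpha} in GeV
```

One finds $\Gamma_{N_{2,3}}\approx 1.7\times 10^{-3}$ GeV, set by the ${\cal O}(10^{-2})$ couplings $a,b$, while $\Gamma_{N_1}$ is many orders of magnitude smaller, reflecting the tiny U(1)$_l$-breaking perturbations.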
The corresponding predictions for some low-energy LFV observables are given by
\begin{align}
& {\rm BR}(\mu\to e\gamma) \ = \ 1.9\times 10^{-13}, \quad
{\rm BR}(\mu^- \to e^-e^+e^-) \ =\ 9.3\times 10^{-15}, \nonumber \\
& R_{\mu\to e}^{\text{Ti}} \ = \ 2.9\times 10^{-13}, \quad
R_{\mu\to e}^{\text{Au}} \ = \ 3.2\times 10^{-13}, \quad
R_{\mu\to e}^{\text{Pb}} \ = \ 2.2\times 10^{-13},
\end{align}
all of which can be probed in the future at the Intensity frontier~\cite{Hewett:2014qja}. Similarly, sub-TeV heavy Majorana neutrinos with ${\cal O}(10^{-2})$ Yukawa couplings are directly accessible in the run-II phase of the LHC~\cite{Dev:2013wba} as well as at future lepton colliders~\cite{Banerjee:2015gca}.
In general, any observation of lepton number violation (LNV) at the LHC will yield a lower bound on the washout factor for the lepton asymmetry and could falsify {\em high}-scale leptogenesis as a viable mechanism behind the observed BAU~\cite{Deppisch:2013jxa}. However, one should keep in mind possible exceptions to this general argument, e.g. scenarios where LNV is confined to a specific flavor sector, models with new symmetries and/or conserved charges which could stabilize the baryon asymmetry against LNV washout, and models where lepton asymmetry can be generated below the observed LNV scale. An important related question is whether {\em low}-scale leptogenesis models could be ruled out from experiments. This has been investigated~\cite{Frere:2008ct, Dev:2014iva, Dev:2015vra, Dhuria:2015cfa} in the context of Left-Right symmetric models
and it was shown that the minimum value of the RH gauge boson mass for successful leptogenesis, while satisfying all experimental constraints in the low-energy sector, is about 10 TeV~\cite{Dev:2015vra}. Thus, any positive signal for an RH gauge boson at the LHC might provide a litmus test for the mechanism of leptogenesis.
\section{Conclusion}
Leptogenesis is an attractive mechanism for dynamically generating the observed baryon asymmetry of the Universe, while relating it to the origin of neutrino mass. Resonant leptogenesis allows the relevant energy scale to be as low as the electroweak scale, thus offering a unique opportunity to test this idea in laboratory experiments. Flavor effects play an important role in the predictions for the lepton asymmetry, and hence, for the testability of the low-scale leptogenesis models. We have illustrated that models of resonant leptogenesis could lead to observable effects in current and future experiments, and may even be falsified in certain cases.
\begin{acknowledgement}
I thank the organizers of the XXI DAE-BRNS HEP Symposium for the invitation and IIT, Guwahati for the local hospitality.
This work was supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental
Physics under STFC grant ST/L000520/1.
\end{acknowledgement}
\input{lepto-ref}
\end{document}
\section{INTRODUCTION}
Magnetic skyrmions, local spin textures possessing topological protection and a particle-like nature \cite{Bogdanov1989, Bogdanov1994}, have been extensively studied in a broad range of materials including bulk chiral magnets \cite{Muhlbauer2009}, ferromagnetic (FM) thin films grown on single crystals \cite{Heinze2011, Romming2015}, polycrystalline multilayer systems \cite{Chen2015, Moreau-Luchaire2016, Soumyanarayanan2017}, ferrimagnets and, more recently, synthetic antiferromagnets \cite{Dohi2019, Legrand2020}. Even though the field of skyrmions is relatively young, a tremendous amount of progress has been made in terms of both skyrmion stabilization and skyrmion dynamics. For the specific case of thin-film multilayered materials, in less than one decade they evolved from hosting atomic-scale skyrmions that are stable only in very high fields at low temperature \cite{Heinze2011, Romming2015} to nanometer-sized skyrmions stabilized by low magnetic fields at room temperature, achieved by strategically utilizing interfacial effects such as the Dzyaloshinskii-Moriya interaction (DMI) between a FM layer and an adjacent heavy metal \cite{Moreau-Luchaire2016, Soumyanarayanan2017}.
Due to their small size and the ease with which they can be transported, room-temperature magnetic skyrmions may serve as information carriers in very compact and energetically efficient storage such as racetrack memory \cite{Parkin2008, Fert2013, Tomasello2014S, Yu2017}. There are, however, two fundamental limitations to using skyrmions as magnetic bits in such a device: 1) their motion cannot be restricted to straight paths because of the skyrmion Hall effect, leading to skyrmions being lost at the device edge \cite{Jiang2017, Litzius2017}; one possible solution is the use of antiferromagnetically-coupled skyrmions \cite{Dohi2019, Legrand2020}; and 2) inter-skyrmion distances are not stable, leading to fluctuating distances among bits; a potential solution was revealed with the experimental observation of coexisting skyrmions and chiral bobbers in B20-type crystalline materials at low temperatures \cite{Zheng2018}, alluding to the possibility that a chain of binary data bits could be encoded by two different soliton states. Ideally, however, the two states would appear at room temperature in multilayer systems that allow an immediate integration into current device technology. In our previous study \cite{Mandru2020}, using FM Ir/Fe/Co/Pt multilayers with N{\'e}el skyrmions stabilized by interfacial DMI as building blocks, we developed a FM/Ferrimagnetic/FM (FM/FI/FM) trilayer system that can host two distinct skyrmion types at room temperature and can serve as a solution to the second limitation. The two types represent a tubular skyrmion (running through the entire trilayer) and an incomplete skyrmion (existing in the FM layers only), as revealed from magnetic force microscopy (MFM) data and micromagnetic simulations.
Having established such a trilayer as a platform for hosting two skyrmion types, its implementation into memory devices and even beyond (e.g., logic devices \cite{Zhang2015}, unconventional computing architectures \cite{Prychynenko2018, Pinna2018} and transistor applications \cite{Zhang2015_2}) requires tunable systems and good control over the bit type and density. To this end, we explore the magnetic parameter space of the individual layers within the trilayer structure, revealing that the coexistence range of the two skyrmions can be tuned by altering the magnetic properties of the FM layers. More specifically, we investigate the trilayer system when changing the thickness of the FM layers by using a combination of MFM, vibrating sample magnetometry (VSM) and Brillouin light scattering (BLS) experiments together with micromagnetic simulations. We show how the competition between different magnetic energies present in this system allows for the stabilization of either incomplete or tubular skyrmions, or a combination of both with varying ratios, providing a useful avenue for tuning the skyrmion type and density within the same material. Having a good understanding of the interplay between different magnetic properties can lead to better-tailored systems and possibly bring skyrmions a step closer to applications.
\section{RESULTS AND DISCUSSION}
The FM/FI/FM trilayer schematic is shown in Fig. \ref{fig:Fig1}. The samples consist of a FI [(TbGd)(0.2)/Co(0.4)]$_{\times 6}$/(TbGd)(0.2) layer sandwiched between two skyrmion-generating FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers; all nominal thicknesses in parentheses are in nm and $x$ is the Fe-sublayer thickness, the only parameter that is varied for the different samples. The Fe thickness was varied since it has been previously shown that thickness changes in this particular sublayer lead to a more dramatic change in the skyrmion density than changes in the Co sublayer \cite{Soumyanarayanan2017}. The thickness and magnetic properties of the FI layer are kept the same for all samples (see Fig. S1 in the supplementary information).
\begin{figure}[t]
\includegraphics[width=0.7\textwidth]{Fig1.png}
\caption{[Ir/Fe($x$)/Co/Pt]$_{\times 5}$/[(TbGd)/Co]$_{\times 6}$/(TbGd)/[Ir/Fe($x$)/Co/Pt]$_{\times 5}$ FM/FI/FM trilayer sample schematic with $x$ being the Fe-sublayer thickness that is modified among different samples. The various skyrmion types that could appear in such samples are also indicated: top and bottom FM layer skyrmions along with incomplete (in both FM layers) and tubular (in all FM and FI layers) skyrmions.}
\label{fig:Fig1}
\end{figure}
In order to establish a base for discussion, we next summarize our earlier results \cite{Mandru2020} on a sample similar to that shown in Fig. \ref{fig:Fig1}, having an Fe thickness $x = 0.29\,$nm and comparable magnetic properties for the FI layer (see section 1 in the supplementary information). Zero-field MFM investigations on this sample revealed, as the lowest energy configuration, a maze domain pattern with two different types of domains having lower and higher frequency shift $|\Delta f|$ contrast. In an increasing applied magnetic field, these two domain types give rise to two distinct skyrmions, also having lower and higher $|\Delta f|$ contrast and corresponding to smaller- and larger-diameter skyrmions, respectively. The locations of the two types of skyrmions throughout the trilayer structure were determined by preparing a subset of layers from the original sample and by performing MFM investigations using the same tip and the same tip-sample distance for all samples. One of the most relevant samples was a trilayer with non-magnetic Ta instead of the FI, i.e. FM/Ta(3.8)/FM, revealing that its MFM $|\Delta f|$ contrast matched best that of the low contrast skyrmion in the original FM/FI/FM structure, thus indicating that such an incomplete skyrmion exists in both the top and bottom FM layers, but not in the FI. Another relevant sample was the FM [Ir(1)/Fe(0.29)/Co(0.6)/Pt(1)]$_{\times 5}$ since it revealed, based on its much lower $|\Delta f|$ contrast, that the incomplete skyrmion contrast cannot correspond to skyrmions in the top FM layer of the FM/FI/FM structure. Regarding the high contrast skyrmions, no other subset of samples showed such a large contrast and therefore this type of skyrmion was attributed to a tubular skyrmion that is present in all three layers of the structure. The incomplete and tubular skyrmions are shown schematically in Fig. \ref{fig:Fig1}.
Using micromagnetic simulations, we found that such a trilayer structure can indeed host the two types of skyrmions and that one of the key ingredients required to stabilize a tubular skyrmion is to have sufficient DMI in the FI layer, namely $D_\text{FI}$ = 0.8\,mJ/m$^{2}$. Note that in addition to incomplete and tubular skyrmions, such samples could in principle also host only top or bottom FM layer skyrmions (also shown schematically in Fig. \ref{fig:Fig1}). We have found top FM layer skyrmions in our earlier study that are very few in number (about 8\,\% of the total number of skyrmions in a given area, see the supplementary information of ref.\,\cite{Mandru2020}). However, we could not detect any bottom FM layer skyrmions: although the top FM layer skyrmions could easily be detected by MFM at a distance of 21\,nm (6\,nm Pt capping layer\,+\,15\,nm tip-sample distance) from the top-most FM layer, the bottom ones would not be so easily discernible due to the fact that the MFM signal decays exponentially with distance and they would have to be detected at 39.3\,nm (21\,nm\,+\,14.5\,nm\,+\,3.8\,nm), making the MFM experiments quite challenging. Nonetheless, if they exist, it is expected that their number is also very small compared to the total number of incomplete and tubular skyrmions observed in such trilayers.
Figures \ref{fig:Fig2}(a)-(q) show MFM images of the trilayer samples for six Fe-sublayer thicknesses, taken in zero field (top row), at intermediate fields (middle row) and at the first fields where solely skyrmions are nucleated (bottom row). A common feature of all samples is that they show a maze domain pattern in zero field. Some of the samples, however, exhibit two different types of domains having higher and lower $|\Delta f|$ contrast, as visible in Figs. \ref{fig:Fig2}(g)-(j) taken at intermediate magnetic fields, where, apart from extended domains, some skyrmions appear. As the field is increased further, the domains shrink to the point where only skyrmions exist [Figs. \ref{fig:Fig2}(l)-(q)]. It is clear from Figs. \ref{fig:Fig2}(m)-(p) that the two different $|\Delta f|$ contrast domains from Figs. \ref{fig:Fig2}(g)-(j) give rise to two distinct skyrmion types. Figures \ref{fig:Fig2}(l) and (q), on the other hand, indicate the nucleation of mainly single-contrast skyrmions.
\begin{figure}[t]
\includegraphics[width=1\textwidth]{Fig2.png}
\caption{(a)-(f) 4\,{\textmu}m\,$\times$\,4\,{\textmu}m zero-field MFM images of the trilayer samples for six selected Fe-sublayer thicknesses, taken at remanence after out-of-plane saturation in a 450\,mT magnetic field (above the saturation field of all samples, see Fig. S2 in the supplementary information). (g)-(k) Intermediate-field MFM images taken on the same area as the images shown in (a)-(f); note that there is no intermediate-field MFM image for the $x = 0.18\,$nm sample. (l)-(q) Subsequent MFM images recorded at the fields where only skyrmions exist (no maze domains left). The inset in (a) shows a simplified schematic of the trilayer structure. All images were taken under the same conditions, with the same tip and at the same tip-sample distance. The $|\Delta f|$ contrast scale is the same for all images.}
\label{fig:Fig2}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.6\textwidth]{Fig3.png}
\caption{MFM $|\Delta f|$ contrast comparison between the three types of skyrmions observed in the trilayer samples and individual [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ samples as a function of the Fe-sublayer thickness $x$. Regions I, II, and III correspond to the experimental observation of mainly incomplete skyrmions, coexisting incomplete and tubular skyrmions, and mainly tubular skyrmions, respectively.}
\label{fig:Fig3}
\end{figure}
By comparing the MFM $|\Delta f|$ contrasts between different samples and also with a selected subset of layers grown for comparison, we can establish what types of skyrmions are present in our films, following a similar line of experiments and arguments as for the initial study of ref.\,\cite{Mandru2020} described above. The MFM $|\Delta f|$ contrast comparison plot is presented in Fig. \ref{fig:Fig3}; the analyses were performed on the data shown in Figs. \ref{fig:Fig2}(l)-(q) for the trilayers and in Figs. \ref{fig:Fig4}(g)-(l) for the corresponding FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ skyrmion-generating layers. Note that such a comparison is only possible under the same MFM imaging conditions, at the same tip-sample distance \cite{Zhao2018}, using the same tip and for the same capping-layer thickness among these samples (see Methods for further details). The highest-contrast points (light blue symbols) correspond to tubular skyrmions and the lower-contrast points (dark blue symbols) are attributed to incomplete skyrmions. The green symbols, corresponding to the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ samples, are also shown to either validate or discard the possibility of top FM layer skyrmions. Considering the $x = 0.28\,$nm sample [Figs. \ref{fig:Fig2}(d), (i) and (o)], analogous to the sample presented in our previous study for a very similar (0.29\,nm) Fe-sublayer thickness \cite{Mandru2020}, the two observed skyrmion contrasts correspond to an incomplete skyrmion (with lower $|\Delta f|$, existing in the FM layers only) and a tubular skyrmion (with higher $|\Delta f|$, running through the entire structure). Three other samples, i.e. with $x$ = 0.21, 0.26 and 0.31\,nm, also show two types of domains/skyrmions that correspond to incomplete and tubular skyrmions. For these four samples showing the coexistence of the two skyrmion types, the density of both skyrmions increases with increasing Fe thickness. 
If the Fe-sublayer thickness is increased further to $x = 0.35\,$nm, we observe mainly one type of contrast. Considering the contrast evolution of both the tubular and incomplete skyrmions in the samples with smaller Fe thicknesses, we conclude that these are tubular skyrmions; they have the lowest $|\Delta f|$ contrast among all samples that host tubular skyrmions. Note that this sample also hosts a very small number of incomplete skyrmions (not shown as a contrast data point in Fig. \ref{fig:Fig3}). Finally, for the sample with the smallest Fe thickness, i.e. $x = 0.18\,$nm, the MFM data and contrast analysis show incomplete skyrmions that have the lowest density of all samples (except the one with $x = 0.35\,$nm). However, this sample shows another contrast (black symbol in Fig. \ref{fig:Fig3}) that corresponds to a top FM skyrmion, since it has about the same value as the FM [Ir(1)/Fe(0.18)/Co(0.6)/Pt(1)]$_{\times 5}$ sample (green symbol). At very isolated locations on the $x = 0.18\,$nm sample, we observe (not shown) a very small number of tubular skyrmions that have a higher contrast than those found in the $x = 0.21\,$nm sample, which is expected considering the contrast trends in Fig. \ref{fig:Fig3}. We therefore identify three separate regimes for the stabilization of different skyrmion types as a function of the Fe-sublayer thickness: I) mainly incomplete skyrmions, II) coexistence of incomplete and tubular skyrmions, and III) mainly tubular skyrmions.
The MFM $|\Delta f|$ contrast for all skyrmions (including those in the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ samples - discussed in more detail below) decreases with increasing Fe thickness. Since both the magnetization and the total thickness change very little between these samples (see Figs. S2 and S3 in the supplementary information), the decay of the MFM contrast for all skyrmions with increasing Fe thickness can be attributed to a smaller skyrmion diameter. This conclusion follows from the relationship between the MFM contrast and the skyrmion size, as explained in the following. The MFM contrast depends on the effective magnetic surface charge, the spatial wavelengths of the imaged features, the tip-sample distance, and the tip transfer function (i.e. the decay of the measured frequency-shift signal with decreasing spatial wavelength of the stray field at the location of the tip \cite{Feng2022}). Note that the stray field decays exponentially with increasing tip-sample distance, at a rate set by the spatial wavelength. Therefore, if the tip-sample distance is kept constant at $z\,\approx 12\,$nm for all measured samples \cite{Zhao2018} and the magnetization and total thickness of the samples are about the same, then the observed reduction in MFM skyrmion contrast can be attributed to a reduced skyrmion diameter. The same argument also applies to the incomplete and tubular skyrmion diameters within a given sample: the total length of a tubular skyrmion is only a factor of $\approx$\,1.1 larger than that of the incomplete skyrmion for the $x = 0.28\,$nm sample. The contrast generated by the tubular skyrmion, however, exceeds that of the incomplete one by a much larger factor ($\approx$\,2.4), which must thus arise from a larger diameter of the tubular skyrmion (as also confirmed by micromagnetic simulations).
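As a toy model of the contrast-versus-diameter argument (not the full tip-transfer-function analysis of ref.\,\cite{Feng2022}), one can represent a skyrmion of diameter $d$ by a single Fourier component of spatial wavelength $\approx 2d$ and evaluate its decay over the fixed tip-sample distance; the diameters below are assumed, illustrative values:

```python
import math

def relative_contrast(diameter_nm, z_nm=12.0):
    """Toy model: approximate a skyrmion of diameter d by its dominant
    Fourier component with spatial wavelength lambda ~ 2*d, which decays
    as exp(-2*pi*z/lambda) over the tip-sample distance z (~12 nm here)."""
    lam = 2.0 * diameter_nm
    return math.exp(-2.0 * math.pi * z_nm / lam)

# Hypothetical diameters (assumed for illustration, not measured values):
for d_nm in (150, 100, 60):
    print(d_nm, round(relative_contrast(d_nm), 2))
```

The monotonic drop of the computed values with shrinking diameter reproduces the qualitative trend used in the argument: at fixed tip-sample distance, smaller skyrmions produce weaker MFM contrast.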
Since the properties of the FI layer are kept the same for all current samples, we conclude that the FM skyrmion-generator layers with varying Fe thickness are solely responsible for the three different regimes determined from MFM imaging and observed contrast evolution with Fe thickness. Therefore, along with MFM imaging, we have further investigated the individual [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers by extracting magnetic parameters for each sample from vibrating sample magnetometry (VSM) measurements and by measuring DMI constants using Brillouin light scattering (BLS) experiments (see Methods and Sections 3 and 4 in the supplementary information).
Varying the Fe-sublayer thickness has an impact on the magnetic properties of the FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers (see Fig. S3 in the supplementary information) and therefore affects the skyrmion density and size. The zero-field MFM images in Figs. \ref{fig:Fig4}(a)-(f) for these layers show regular maze domain patterns that become denser as the Fe thickness is increased. For the fields at which extended domains have been erased and solely skyrmions exist [Figs. \ref{fig:Fig4}(g)-(l)], this translates into an increasing skyrmion density and decreasing skyrmion size, in agreement with the observed trend for the MFM contrast shown with green symbols in Fig. \ref{fig:Fig3} and also with previous observations \cite{Soumyanarayanan2017}. As seen in Fig. \ref{fig:Fig4}(f), the domain pattern is not as regular and continuous as for the other samples. In the corresponding skyrmion-only image in Fig. \ref{fig:Fig4}(l), the skyrmions are not as easily discernible (they are very small and arranged in a very dense pattern) and they also have the smallest MFM contrast [$|\Delta f| = (0.25\pm0.1)\,$Hz in Fig. \ref{fig:Fig3}] compared to the other samples. Note that the observed changes cover an Fe-sublayer thickness range that is varied in sub-\AA\, increments (see Methods also), making this structure extremely sensitive to changes in magnetic parameters while allowing a very fine tuning of the skyrmion density and size.
\begin{figure}[t]
\includegraphics[width=\textwidth]{Fig4.png}
\caption{(a)-(f) 4\,{\textmu}m$\,\times$\,4\,{\textmu}m zero-field MFM images performed on individually-grown FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers for the same Fe thicknesses $x$ as in Fig. \ref{fig:Fig2}; all data are taken at remanence after out-of-plane saturation in a 450\,mT magnetic field. (g)-(l) Subsequent MFM images recorded at the fields where mainly skyrmions exist and no maze domains are left. The inset in (a) shows a simplified schematic of the FM layers. The $|\Delta f|$ contrast scale is the same for all images. All MFM data are taken with the same tip, at the same tip-sample distance and under the same conditions as the images from Fig. \ref{fig:Fig2}.}
\label{fig:Fig4}
\end{figure}
To address the MFM observations in the single FM layers and ultimately connect them with the trilayer results, we next discuss the implications of the Fe-thickness variation for the magnetic parameters and DMI constants in the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers. More specifically, the DMI, exchange and anisotropy energies are contained within the material parameter $\kappa = \frac{\pi D}{4\sqrt{A K_\text{eff}}}$ \cite{Kiselev2011, Rohart2013, Leonov2016, Bogdanov1994, Heide2008}, where $D$ is the DMI constant, $A$ is the exchange stiffness and $K_\text{eff}$ is the magnetic perpendicular anisotropy (which is the same as the total effective magnetic anisotropy for the case of ultra-thin films \cite{Johnson1996, Lemesh2017, Wang2021}). The $K_\text{eff}$ values were extracted from in-plane M-H loops (see Fig. S3 and corresponding text in the supplementary material) and are presented in Fig. \ref{fig:Fig5}(a). We find that $K_\text{eff}$ decreases from 150\,$\pm$\,25\,kJ/m$^3$ to 7\,$\pm$\,3\,kJ/m$^3$ with increasing Fe thickness; this trend is in agreement with other reports for similar systems \cite{Katayama1991, Chen2015, Soumyanarayanan2017}. Figure \ref{fig:Fig5}(b) shows the $D$ values, which we find to vary between 1.7\,$\pm$\,0.2\,mJ/m$^{2}$ and 2.0\,$\pm$\,0.2\,mJ/m$^{2}$, with the highest values for the thicker Fe layers. Note that both the decreasing anisotropy and the increasing DMI lower the domain-wall energy with increasing Fe thickness and thus favor smaller domains, which reduce the magnetostatic energy of the system. Being an interfacial effect, the DMI generally decreases with increasing FM-layer thickness, as does the perpendicular magnetic anisotropy. Although the opposite behaviour is observed here, we attribute this to an incomplete Fe layer at the Ir interface even for our largest Fe-sublayer thickness. Therefore, it is expected that the DMI will increase as a more continuous Fe layer is formed. 
Note that our $D$ values are comparable to those reported in ref.\,\cite{Soumyanarayanan2017} for similar Fe and Co thicknesses; there, [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)] layers with 20 repeats were used instead of 5. Interestingly, in our case we observe an increase in both $D$ and $K_\text{eff}$ as the repetition number increases for a fixed Fe-sublayer thickness (see Fig. S5 in the supplementary information for further details and for MFM investigations on these samples). Having determined the experimental values for $K_\text{eff}$ and $D$, we can now calculate the parameter $\kappa$ of each FM layer. As previously established \cite{Rohart2013, Soumyanarayanan2017}, $\kappa$ is an indication of the skyrmion stability: for 0 $<$ $\kappa$ $<$ 1, metastable and isolated skyrmions can be stabilized, whereas for $\kappa$ $\geq$ 1 a stable and dense skyrmion lattice can exist in an applied field. Note that for the $\kappa$ calculations we used an estimated value for $A$ of 15\,pJ/m \cite{Sampaio2013, Metaxas2007, Vidal-Silva2017, Wang2018} (see Section 3 in the supplementary information). The trend in $\kappa$ is plotted in Fig. \ref{fig:Fig5}(a); it increases from $\approx$ 1 to $\approx$ 5, consistent with the evolution from less- to highly-dense skyrmion arrays observed by MFM (Fig. \ref{fig:Fig4}) with increasing Fe thickness. Since $D$ does not vary substantially between our samples, $K_\text{eff}$ appears to be the main reason for the increase in $\kappa$ and skyrmion density, as also observed for [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 20}$ \cite{Soumyanarayanan2017} and for Pt/Co/Ta(/MgO) multilayers \cite{Wang2021, Wang2019}.
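The quoted $\kappa$ range follows directly from the measured parameter ranges; a minimal numerical check of $\kappa = \pi D / (4\sqrt{A K_\text{eff}})$, using the same estimated $A$ = 15\,pJ/m, is:

```python
import math

def kappa(D_mJ_per_m2, K_eff_kJ_per_m3, A_pJ_per_m=15.0):
    """Skyrmion stability parameter kappa = pi*D / (4*sqrt(A*K_eff)),
    evaluated in SI units from the units used in the text."""
    D = D_mJ_per_m2 * 1e-3        # J/m^2
    K = K_eff_kJ_per_m3 * 1e3     # J/m^3
    A = A_pJ_per_m * 1e-12        # J/m
    return math.pi * D / (4.0 * math.sqrt(A * K))

# End points of the experimentally measured ranges quoted in the text:
k_thin  = kappa(1.7, 150.0)   # thinnest Fe: D ~ 1.7 mJ/m^2, K_eff ~ 150 kJ/m^3
k_thick = kappa(2.0, 7.0)     # thickest Fe: D ~ 2.0 mJ/m^2, K_eff ~ 7 kJ/m^3
print(round(k_thin, 2), round(k_thick, 2))  # roughly 1 and 5, as quoted
```

The two central values reproduce the $\kappa \approx 1$ to $\kappa \approx 5$ range stated above, and show explicitly that the drop in $K_\text{eff}$, not the modest rise in $D$, drives the increase.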
\begin{figure}[t]
\includegraphics[width=\textwidth]{Fig5.png}
\caption{(a) Perpendicular magnetic anisotropy $K_\text{eff}$ and skyrmion stability parameter $\kappa$ values together with (b) DMI constants $D$ for the FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers as a function of the Fe-sublayer thickness.}
\label{fig:Fig5}
\end{figure}
Finally, since $K_\text{eff}$ and $D$ are known to play a key role in the skyrmion properties \cite{Soumyanarayanan2017, Wang2021, Wang2018, Wang2019}, the question is what combination of these parameters in the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers gives rise to the observed coexistence, or lack thereof, of the incomplete and tubular skyrmions in the FM/FI/FM trilayer samples. To address this question, we performed micromagnetic simulations (see Methods); the results are summarized in Table \ref{tab:Table1}. Note that: i) the FI alone supports very large (tens of microns) perpendicular domains and its DMI constant has been set to $D_\text{FI}$ = +0.8\,mJ/m$^{2}$ \cite{Mandru2020}; ii) the DMI constant of the FM layers is actually negative, supporting clockwise N{\'e}el skyrmions \cite{Soumyanarayanan2017, Mandru2020}. By varying $K_\text{eff}$, we determine for which $D$ values the system can support one, both or none of the two skyrmion types, identifying three main regimes: high, mid and low $K_\text{eff}$. In the high-$K_\text{eff}$ regime, a uniform FM state, where no skyrmions are stable and the system behaves as a saturated ferromagnet, is obtained for $\lvert$$D$$\rvert$ values $\leq$\,2.2\,mJ/m$^{2}$. Since a high $K_\text{eff}$ is not favorable for skyrmion formation, $D$ values below this minimum do not permit any skyrmions to be stable, particularly with the FI layer supporting a larger domain size and thus a homogeneous magnetization. For larger $D$, i.e. 2.2\,$<$\,$\lvert$$D$$\rvert$\,$<$\,2.5\,mJ/m$^2$, an incomplete skyrmion becomes stable. Beyond an even higher DMI threshold, $\lvert$$D$$\rvert$\,$\geq$\,2.5\,mJ/m$^2$, the coexistence of incomplete and tubular skyrmions is achieved. 
In this case, due to the high $D$ of the FM layers and the existing $D_\text{FI}$, the skyrmions in the FM layers also imprint a skyrmion in the FI layer, making tubular skyrmions energetically favorable along with the incomplete skyrmions. Notably, in this regime no $D$ value can stabilize the tubular skyrmion as a single phase. The existence of DMI threshold values is a clear signature that the skyrmion types are sustained by the DMI in the high-$K_\text{eff}$ regime, i.e. the DMI dominates over the dipolar interactions. In the mid-$K_\text{eff}$ regime, the coexistence of tubular and incomplete skyrmions occurs beyond a DMI threshold $\lvert$$D$$\rvert$\,$\geq$\,1.4\,mJ/m$^2$, lower than the coexistence threshold of the high-$K_\text{eff}$ regime. Since a lower $K_\text{eff}$ is favorable for skyrmion formation, $D$ can also be lower in this case, indicating the increasing role of the dipolar interactions in skyrmion stabilization. In fact, as additional evidence, incomplete skyrmions are no longer stable in this regime, and for $\lvert$$D$$\rvert$ values lower than 1.4\,mJ/m$^2$ we observe (magnetostatically stabilized) tubular skyrmions only. Eventually, in the low-$K_\text{eff}$ regime, only the coexistence of tubular and incomplete skyrmions is observed, independent of the DMI value, with dipolar interactions now dominating over the DMI.
Since our experimental values lie between 150\,$\pm$\,25\,kJ/m$^3$ and 7\,$\pm$\,3\,kJ/m$^3$ for $K_\text{eff}$ and between 1.7\,$\pm$\,0.2\,mJ/m$^{2}$ and 2.0\,$\pm$\,0.2\,mJ/m$^{2}$ for $D$, it is now clear why all of our samples fall within the coexistence range, although approaching an incomplete-only state for small and a tubular-only state for large Fe thicknesses [see Figs. \ref{fig:Fig2} and \ref{fig:Fig3}]. Even though we already reach the right $K_\text{eff}$ values to allow the stabilization of tubular skyrmions only, none of the samples has $D$ lower than 1.5\,mJ/m$^{2}$ (taking into account the error in the BLS measurements). Regarding the stabilization of incomplete skyrmions only, even though in principle we could experimentally obtain an even higher $K_\text{eff}$ than 150\,$\pm$\,25\,kJ/m$^3$ by slightly decreasing the Fe-sublayer thickness to below 0.18\,nm, the $D$ constant would most likely still be comparable to that of the 0.18\,nm sample, thus hindering the stabilization of incomplete skyrmions only.
\begin{table}[h]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|c|c|cl}
& \multicolumn{1}{l|}{\textbf{\,\,\,\,\,\,\,high $K_\text{eff}$ (230\,kJ/m$^3$)}} & \multicolumn{1}{l|}{\textbf{\,mid $K_\text{eff}$ (55\,kJ/m$^3$)\,}} & \multicolumn{1}{l}{\textbf{\,low $K_\text{eff}$ (-10\,kJ/m$^3$)\,}} & \\ \cline{1-4}
\textit{uniform FM} & $\lvert$$D$$\rvert$\,$\leq$\,2.2\,mJ/m$^2$ & $\times$ & $\times$ & \\
\textit{incomplete only\,} &\,2.2\,mJ/m$^2$\,$\textless$\,$\lvert$$D$$\rvert$\,$\textless$\,2.5\,mJ/m$^2$\, & $\times$ & $\times$ & \\
\textit{coexistence} &\,$\lvert$$D$$\rvert$\,$\geq$\,2.5\,mJ/m$^2$ &\,$\lvert$$D$$\rvert$\,$\geq$\,1.4\,mJ/m$^2$ &\,$\lvert$$D$$\rvert$\,$\geq$\,0 & \\
\textit{tubular only} & $\times$ &\,$\lvert$$D$$\rvert$\,$\textless$\,1.4\,mJ/m$^2$ & $\times$
&
\end{tabular}
}
\caption{Summary of results obtained from micromagnetic simulations performed in a 130\,mT applied field, covering a large range of $K_\text{eff}$ values. Four different states (uniform FM, incomplete skyrmions only, coexisting skyrmions, and tubular skyrmions only) can be stabilized depending on the combination of $K_\text{eff}$ and $D$ values of the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers. The $\times$ symbol indicates that the corresponding state is not stable in that regime.}
\label{tab:Table1}
\end{table}
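The regimes of the simulation table can be restated as a small look-up function, useful for checking where a given ($K_\text{eff}$, $\lvert D \rvert$) combination falls; the three $K_\text{eff}$ values are the representative ones used in the simulations:

```python
def state(K_eff_kJ, absD_mJ):
    """Look-up of the simulated phase table: returns which state is stable
    for one of the three representative K_eff values (in kJ/m^3) and the
    magnitude of the FM-layer DMI constant |D| (in mJ/m^2)."""
    if K_eff_kJ == 230:                      # high-K_eff regime
        if absD_mJ <= 2.2:
            return "uniform FM"
        if absD_mJ < 2.5:
            return "incomplete only"
        return "coexistence"
    if K_eff_kJ == 55:                       # mid-K_eff regime
        return "coexistence" if absD_mJ >= 1.4 else "tubular only"
    if K_eff_kJ == -10:                      # low-K_eff regime: always coexistence
        return "coexistence"
    raise ValueError("K_eff outside the simulated set")

print(state(230, 2.0))   # uniform FM
print(state(55, 1.8))    # coexistence -- the regime of all measured samples
```

With the experimental $\lvert D \rvert$ of 1.7-2.0\,mJ/m$^2$, the mid- and low-$K_\text{eff}$ rows both return coexistence, matching the conclusion drawn from the measurements.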
The stability of the different skyrmion types depends on the competition among magnetostatic, exchange, anisotropy and DMI energies, together with the interlayer exchange coupling of the FM layers (via the Pt and Ir interfaces) with the FI layer. The micromagnetic simulations also provide insight into the size and chirality of the different skyrmion types as determined by these competing energies. Cross-sections of the trilayer structure for each simulation result discussed above, together with further details, are given in Fig. S6 of the supplementary information and the corresponding text. As a final note, simulations were also performed for large negative $K_\text{eff}$ values of -176 and -231\,kJ/m$^{3}$ (not shown), finding that for any $D$ value the system does not support skyrmions, but rather stripe (in place of tubular skyrmions) and vortex-like (in place of incomplete skyrmions) states, which is compatible with what we would expect for in-plane systems.
\section{SUMMARY AND OUTLOOK}
In summary, we have developed trilayers consisting of skyrmion-hosting Ir/Fe/Co/Pt ferromagnetic multilayers separated by a ferrimagnetic layer. Such structures can simultaneously host two types of skyrmions: incomplete - existing in the Ir/Fe/Co/Pt layers only, and tubular - present throughout the whole film. By keeping the properties of the ferrimagnetic layer fixed, we have demonstrated that changing the magnetic properties of the skyrmion-hosting layers can steer the coexistence of the two skyrmion types towards a single type, either incomplete or tubular. The experimental findings were further explored by micromagnetic simulations, establishing the combinations of perpendicular magnetic anisotropy $K_\text{eff}$ and DMI constant $D$ needed in the Ir/Fe/Co/Pt layers to stabilize the two skyrmion types either separately or collectively. The stabilization of purely incomplete skyrmions takes place for very high $K_\text{eff}$ and high $D$ values. Stabilizing purely tubular skyrmions is possible for lower $K_\text{eff}$ and lower $D$. The coexistence of incomplete and tubular skyrmions can occur over the whole range of $K_\text{eff}$ and for a wide range of $D$ values. Such heterostructure engineering allows effective control of the skyrmion type and density that can be obtained within the same material and could have useful implications for designing future skyrmion-based devices. Tuning the anisotropy may be achieved by irradiation or hydrogen chemisorption/desorption \cite{Chen2022}, which can be applied locally on the same structure. Such precise control would permit the fabrication of, for example, racetrack structures where two input branches, one supporting tubular and the other incomplete skyrmions, are merged into a single output branch that equally supports both skyrmion types.
\section{ACKNOWLEDGMENTS}
O.Y. and A.-O.M. thank Empa for financial support. P.M.V. and T.D. acknowledge the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement number 754364. This work has also been supported by the Project entitled “The Italian factory of micromagnetic modeling and spintronics” No. PRIN 2020LWPKH7 funded by the Italian Ministry of University and Research.
\section{Methods}
\subsection{Magnetic force microscopy measurements}
The MFM measurements were performed using a home-built high-vacuum ($\approx$ 10$^{-6}$\,mbar) system equipped with an in-situ out-of-plane magnetic field of up to $\approx$ 300\,mT. By operating the MFM in vacuum, we obtain a mechanical quality factor $Q$ for the cantilever of $\approx$ 200k, which increases the sensitivity by a factor of $\approx$ 40 compared to MFM performed under ambient conditions, and also allows the use of thin magnetic coatings on the tip to minimize the influence of its stray field on the micromagnetic state of the sample. SS-ISC cantilevers from Team Nanotech GmbH with a tip radius below 5\,nm (without any coating) were used. In order to make the cantilever tip sensitive to magnetic fields, we coated the tip with a Ta(2\,nm)/Co(3\,nm)/Ta(4\,nm) layer at room temperature using DC magnetron sputtering. A Zurich Instruments phase-locked loop (PLL) system was used to oscillate the cantilever on resonance at a constant amplitude of $A_{\rm rms} = 5\,$nm and to measure the frequency shift arising from the tip-sample interaction force derivative. Note that the frequency shift is negative (positive) for an attractive (repulsive) force (derivative). For the MFM data shown in Figs. 2, 3 and in Fig. S5, an up field was applied and an MFM tip with an up magnetization was used. The skyrmions therefore have a down magnetization, just as the ones in the micromagnetic simulations (Fig. S6). The up tip magnetization and the down magnetization of the skyrmions then generate a positive frequency-shift contrast. Note that in order to quantitatively compare the MFM contrasts from different samples, we used the frequency-modulated distance feedback method described in ref.\,\cite{Zhao2018}. This method allows keeping the tip-sample distance constant with a precision of $\approx$ 0.5\,nm over many days, even after re-approaching the same tip on different samples and in applied magnetic fields. 
All this can be achieved without ever bringing the tip in contact with the sample surface such that the magnetic coating of the tip remains intact and therefore the same tip can be used for all samples.
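The quoted sensitivity gain is consistent with the $1/\sqrt{Q}$ scaling of the thermal-noise-limited detectable force gradient in frequency-modulated operation; in this sketch the ambient-condition $Q \approx 125$ is an assumed, typical value chosen for illustration, not a number given in the text:

```python
import math

def sensitivity_gain(Q_vacuum, Q_ambient):
    """Thermal-noise-limited force-gradient sensitivity in FM detection
    scales as 1/sqrt(Q), so the gain between two Q values is sqrt(Q1/Q2)."""
    return math.sqrt(Q_vacuum / Q_ambient)

# Q ~ 200k in vacuum (from the text); Q ~ 125 in air is an assumed value.
gain = sensitivity_gain(200_000, 125)
print(round(gain))  # ~40, the factor quoted in the text
```

This back-of-the-envelope check shows why vacuum operation is essential for quantitatively comparing the weak $|\Delta f|$ contrasts of the different skyrmion types.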
\subsection{Sample preparation}
All films were grown using DC magnetron sputtering under a 2\,{\textmu}bar Ar atmosphere using an AJA Orion-8 system with a base pressure of ${\approx}$ 1\,$\times$\,10$^{-9}$\,mbar. All multilayers were deposited onto thermally oxidized Si(100) substrates coated with Ta(3\,nm)/Pt(10\,nm) seed layers; after growth, all films were capped with Pt(6\,nm) for oxidation protection. The substrates were annealed at $\approx$ 100\,$^{\circ}$C for an hour and cooled down to temperatures close to room temperature prior to each deposition. The thickness of each sublayer was determined by regular calibrations performed using X-ray reflectivity on samples containing single layers of each individual element. Since the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ FM layers are particularly sensitive to the Fe and Co thicknesses, the sub-\AA\, variation in the Fe thickness $x$ was controlled by varying the deposition time between 29\,s ($x$ = 0.18\,nm) and 56\,s ($x$ = 0.35\,nm), while keeping the shutter opening/closing time $\textless$\,1\,s. Moreover, the reproducibility of the FM layers was verified periodically by re-growing a FM sample with $x = 0.28\,$nm Fe thickness and performing MFM measurements under the same conditions as for the target samples. Similarly, the reproducibility of the FI layer was tested by periodically re-growing the same layer and performing magnetometry measurements (see also Section 1 in the supplementary information).
\subsection{Magnetometry measurements}
The bulk magnetic properties of all samples were determined by vibrating sample magnetometry (VSM) using a 7\,T Quantum Design system. The measurements were performed at 300\,K for in-plane (IP) and out-of-plane (OOP) sample geometries and in fields of up to 2\,T. All samples were measured using the same VSM holders (i.e. one dedicated for IP and another one for OOP measurements). In addition, the background signal coming from the VSM holder and bare Si substrates was periodically checked \cite{Mandru2020-VSM} to ensure a clean magnetic signal coming from only the trilayers or from the individual FM and FI layers.
\subsection{Brillouin light scattering measurements}
The effective DMI constants of the FM layer samples were determined from BLS measurements of thermally excited spin waves (wavevector $k$) in the backscattering geometry. A monochromatic laser beam (wavelength $\lambda$ = 532\,nm, power 150\,mW) was focused on the sample surface through a camera objective of numerical aperture NA = 0.24. The scattered light was frequency-analyzed by a Sandercock-type (3\,+\,3)-tandem Fabry-Perot interferometer \cite{Mock1987}. An in-plane field sufficient to saturate all samples ($\mu_\text{0}H$ = $\pm$0.7\,T) was applied along the $z$-axis, while the incident light ($k_{i}$) was swept along the perpendicular direction ($x$-axis), corresponding to the Damon-Eshbach (DE) geometry [see Section 4 in the supplementary information and the corresponding figures for further details].
\subsection{Micromagnetic simulations details}
The micromagnetic computations were carried out by means of the state-of-the-art micromagnetic solver PETASPIN \cite{Giordano2012}, which is based on a finite-difference scheme and numerically integrates the Landau-Lifshitz-Gilbert (LLG) equation using the Adams-Bashforth time-stepping scheme:
\begin{equation}
\frac{d{\bf m}}{d\tau} = -({\bf m} \times {\bf h}_{\rm eff}) + \alpha _{\rm G} \left( {\bf m} \times \frac{d{\bf m}}{d\tau} \right)\,,
\label{Giovannis eq}
\end{equation}
where $\alpha _{\rm G}$ is the Gilbert damping, ${\bf m } = {\bf M} / M_{\rm s}$ is the normalized magnetization, and $\tau = \gamma _0 M_{\rm s} t$ is the dimensionless time, with $\gamma _0$ being the gyromagnetic ratio, and $M_{\rm s}$ the saturation magnetization. $ {\bf h}_{\rm eff} $ is the normalized effective field in units of $M_{\rm s}$, which includes the exchange, interfacial DMI, magnetostatic, anisotropy and external fields \cite{Li2019,Tomasello2014}. The DMI is implemented as:
\begin{equation}
\epsilon_{\rm InterDMI} = D\left[ m_z \nabla \cdot {\bf m} - ({\bf m} \cdot \nabla) m_z \right]\,.
\label{Dieters eq}
\end{equation}
The [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ FM layers are simulated by 5 repetitions of a 1\,nm-thick CoFe ferromagnet separated by a 2\,nm-thick Ir/Pt non-magnetic layer. Each FM layer is coupled to the others by means of the magnetostatic field only (exchange decoupled); for simplicity, we neglect any Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions between the repeats. For the FM layers, we used the experimentally obtained physical parameters given in the inset tables of Fig. S3 (except for the exchange constant $A$, which was kept at 15\,pJ/m for all simulations). The ferrimagnetic [(TbGd)(0.2)/Co(0.4)]$_{\times 6}$/(TbGd)(0.2)] multilayer is simulated by a 4\,nm-thick magnetic layer, with physical parameters given in the inset table of Fig. S1; an exchange constant $A$ = 4\,pJ/m was used, in agreement with our prior work on rare-earth-transition-metal alloy layers \cite{Zhao2019}. We use a discretization cell size of $3\,\times\,3\,\times\,1\,$nm$^3$. The topmost ferromagnetic repeat of the bottom FM multilayer is coupled to the first 1\,nm of the ferrimagnetic layer via an RKKY-like interlayer exchange coupling \cite{Tomasello2017}: the bottom Ir/Fe/Co/Pt stack terminates with a 1\,nm-thick Pt layer, which is known to lead to a large RKKY exchange coupling to the FI layer, and we set a positive (ferromagnetic) coupling constant of $0.8\,$mJ/m$^2$ following \cite{Omelchenko2018}. The top FM multilayer grown on top of the FI layer starts with a 1\,nm-thick Ir layer; since the RKKY coupling through 1\,nm of Ir is known to be very weak \cite{Meijer2020}, it has been neglected. In all simulations, an out-of-plane external field $H_{\rm ext} = 130$\,mT is applied.\\
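As a minimal illustration of the dimensionless LLG equation above, a single macrospin (no exchange, DMI or magnetostatics, with hypothetical parameters) can be integrated with a two-step Adams-Bashforth scheme; this is a sketch of the time-stepping idea only, not of the PETASPIN solver:

```python
import math

def llg_rhs(m, h, alpha):
    """Explicit form of the dimensionless LLG equation:
    dm/dtau = -(1/(1+alpha^2)) * ( m x h + alpha * m x (m x h) )."""
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    mxh = cross(m, h)
    mxmxh = cross(m, mxh)
    f = 1.0 / (1.0 + alpha * alpha)
    return tuple(-f * (mxh[i] + alpha * mxmxh[i]) for i in range(3))

def integrate(m, h, alpha=0.1, dtau=0.01, steps=5000):
    """Two-step Adams-Bashforth integration of the macrospin LLG equation,
    renormalizing |m| = 1 after every step."""
    prev = llg_rhs(m, h, alpha)
    m = tuple(m[i] + dtau * prev[i] for i in range(3))          # Euler start-up step
    for _ in range(steps):
        cur = llg_rhs(m, h, alpha)
        m = tuple(m[i] + dtau * (1.5 * cur[i] - 0.5 * prev[i]) for i in range(3))
        prev = cur
        n = math.sqrt(sum(c * c for c in m))
        m = tuple(c / n for c in m)
    return m

# A single macrospin relaxing toward an out-of-plane effective field:
m_final = integrate(m=(1.0, 0.0, 0.01), h=(0.0, 0.0, 1.0))
print(tuple(round(c, 3) for c in m_final))  # precesses and relaxes toward (0, 0, 1)
```

The damped precession toward the field direction is the elementary behavior that the full solver reproduces cell by cell, with the effective field additionally containing the exchange, DMI, magnetostatic and anisotropy contributions listed above.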
\section{Section 1: Reproducibility of the FI layers}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{FigS1.png}
\caption{(a) out-of-plane (OOP) and (b) in-plane (IP) magnetization measurements of three FI [(TbGd)(0.2)/Co(0.4)]$_{\times 6}$/(TbGd)(0.2)] layers grown several samples apart. The inset in (b) shows the magnetic parameters calculated from the magnetization loops (* except for $A$, which is taken from ref.\,\cite{Zhao2019}).}
\label{fig:FigS1}
\end{figure}
The reproducibility of the ferrimagnetic (FI) layers was tested throughout this study by periodically re-growing them and performing magnetometry measurements. Given in Figs. \ref{fig:FigS1}(a) and (b) are the out-of-plane (OOP) and in-plane (IP) magnetization measurements performed on three different FI samples, demonstrating that the magnetic properties of the FI layers can be considered constant for all trilayer samples prepared for this study. The minute differences that can be seen from the magnetization curves remain within the uncertainty levels given in the table shown as an inset in Fig. \ref{fig:FigS1}(b). We also note that the magnetic properties of the FI layers in this study are very close to the values of the FI in our previous investigations \cite{Mandru2020}, i.e. $M_\text{S}$ = 490\,$\pm$\,30\,kA/m and $K_\text{eff}$ = 340\,$\pm$\,30\,kJ/m$^3$.
\clearpage
\section{Section 2: Magnetic properties of the trilayer heterostructures}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{FigS2.png}
\caption{(a)-(f) IP (black) and OOP (red) hysteresis measurements of the trilayer heterostructures consisting of a FI layer sandwiched between two [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ FM layers, with the Fe-sublayer thickness $x$ varied from 0.18\,nm to 0.35\,nm. Surface-area-normalized magnetic moment values are given on the $y$-axis. Corresponding FM-only and FI-only layer measurements are given in Figs. \ref{fig:FigS3} and \ref{fig:FigS1}, respectively.}
\label{fig:FigS2}
\end{figure}
\clearpage
\section{Section 3: Magnetic properties of the FM layers}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{FigS3.png}
\caption{(a)-(f) IP (black) and OOP (red) magnetization curves of the individual [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ FM samples with varied Fe thickness $x$. Insets in each graph show the magnetic parameters extracted from the corresponding measurement (* except for $A$, which is taken from refs. \cite{Sampaio2013, Metaxas2007, Vidal-Silva2017, Wang2018}).}
\label{fig:FigS3}
\end{figure}
Figure \ref{fig:FigS3} displays the IP (black) and OOP (red) magnetometry measurements of the FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ layers for all six investigated Fe thicknesses. The insets in each graph give the magnetic parameters of each sample extracted from the corresponding data. Increasing the Fe thickness within the investigated range does not have a major impact on the saturation magnetization $M_\text{S}$, as the variations mostly stay within the uncertainty. The anisotropy field $H_{A}$ (the saturation field of the in-plane loop), on the other hand, decreases with increasing Fe thickness. By using the $M_\text{S}$ and $H_{A}$ values obtained from the magnetometry data, $K_\text{eff}$ values were calculated using the following equation \cite{Johnson1996, Lemesh2017, Wang2021}:
\begin{equation}
K_\text{eff} = \frac{1}{2}\mu_\text{0} H_{A} M_\text{S}\,,
\label{Eq:Eq1}
\end{equation}
where $\mu_\text{0}$ is the permeability of vacuum. Note that the magnetic perpendicular anisotropy that should technically appear in the $\kappa$ formula in the main text is $K = K_\text{u} - \frac{1}{2}\mu_\text{0} M_\text{S}^{2}$, with $K_\text{u}$ being the uniaxial magnetic anisotropy and $\frac{1}{2}\mu_\text{0} M_\text{S}^{2}$ the magnetostatic energy. However, the magnetic perpendicular anisotropy $K$ is the same as the total effective magnetic anisotropy $K_\text{eff}$ given in eq.\,(1) above for the case of ultra-thin films \cite{Johnson1996, Lemesh2017} and that is why we use the notation $K_\text{eff}$ in the main text.
Regarding the exchange stiffness constant $A$: for similar systems it is typically estimated to be around 15\,pJ/m \cite{Sampaio2013, Metaxas2007, Vidal-Silva2017, Wang2018}, and this is also the value that we used in this study for both the $\kappa$ calculations and the micromagnetic simulations. However, since all of the other magnetic parameters used in this study were experimentally obtained, we also attempted to estimate $A$ for each FM layer by fitting $M_\text{S}$\,vs.\,$T$ measurements (with $T$ varied from 10\,K to 400\,K) to Bloch's $T^{\frac{3}{2}}$ law \cite{Zeissler2017}. From these fits, we obtain $A$ values varying from 15.8\,pJ/m to 16.4\,pJ/m. Even though this rough approximation may not be suitable for thin-film systems \cite{Yastremsky2019}, whatever errors come from this method would be the same for every sample. Since the estimated $A$ values remain virtually unchanged with varying Fe-sublayer thickness, using the same value for all films is justified. In order to be consistent with our previous work \cite{Mandru2020} and with other studies, we chose to use the 15\,pJ/m value.
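Equation\,(1) is simple enough to evaluate directly. The short Python sketch below computes $K_\text{eff}$ from a saturation magnetization and an anisotropy field; the numerical values are illustrative placeholders, not the measured parameters of any sample in this study.

```python
def k_eff(mu0_H_A, M_S):
    """Effective anisotropy K_eff = (1/2) * (mu0*H_A) * M_S  [J/m^3].

    mu0_H_A : anisotropy field expressed as mu0*H_A, in tesla
              (the saturation field of the in-plane loop)
    M_S     : saturation magnetization, in A/m
    """
    return 0.5 * mu0_H_A * M_S

# Illustrative (assumed) values: M_S = 1.0 MA/m, mu0*H_A = 0.6 T
K = k_eff(0.6, 1.0e6)
print(f"K_eff = {K / 1e3:.0f} kJ/m^3")  # 300 kJ/m^3
```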
\clearpage
\section{Section 4: Determination of DMI constants from BLS measurements}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{FigS4.png}
\caption{(a) Schematic diagram of the BLS measurement geometry, indicating the incident ($k_i$) and backscattered ($k_s$) light wave vectors, along with the magnetization and applied field directions; the spin waves propagating along the $x$-axis involved in the Stokes (anti-Stokes) process, i.e. in the generation (annihilation) of a magnon, are also shown. (b) Example BLS frequency spectra for the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ FM sample with $x$ = 0.18\,nm obtained for two opposite field directions. Dashed lines highlight the peak positions for different field directions and are given as a guide to the eyes.}
\label{fig:FigS4}
\end{figure}
In order to quantitatively estimate the magnetic parameters of the samples, we used the analytical expression of the spin waves in the DE configuration, valid for an in-plane magnetized FM film of thickness $t$:
\begin{multline}
f(k) = f_{0}(k)\,\pm\,f_{DMI}(k) = \\
\frac{\gamma \mu_{0}}{2\pi}\sqrt{\left(H+Ak^{2}+\frac{M_{S}kt}{2}\right)\cdot\left[H-\frac{2K_\text{u}}{M_{S}}+Ak^{2}+M_{S}\left(1-\frac{kt}{2}\right)\right]}\,\pm\,\frac{\gamma Dk}{\pi M_{S}}\,,
\label{Eq:Eq2}
\end{multline}
where $K_\text{u}$ and $\gamma$ are the uniaxial magnetic anisotropy constant and the gyromagnetic ratio, respectively. These analyses were carried out by fixing the exchange constant $A$ to the value of 15\,pJ/m and $M_{S}$ to values measured by VSM (Fig. \ref{fig:FigS3}). The value of $\gamma$ is 176\,GHz/T, as obtained from the dependence of the average frequency $f_{0}(k)$ measured for different values of the applied magnetic field $H$.
Then, for each sample we obtained the value of $D$ from the frequency asymmetry $f_{DMI}(k)$ between the Stokes and anti-Stokes peaks measured for $\mu_\text{0}H$ = $\pm$0.7\,T using $D = \frac{\pi M_{S}f_{DMI}}{\gamma k}$. The absolute values of $D$ for the different samples are reported in Fig. 3(b) of the main text and the error reflects the uncertainties in $M_{S}$ and in determining $f_{DMI}$. The measured sign of $f_{DMI}$, and therefore that of $D$, accounts for a favored clockwise (CW) domain wall chirality in our samples, which is in agreement with previous results on Co films with a Pt overlayer. Note that the sign of $D$ depends on the convention chosen in relation to the definition of the DMI Hamiltonian and of the BLS interaction geometry. Here, consistent with previous studies of domain walls, a negative/positive $D$ corresponds to N{\'e}el domain walls having CW/counter-clockwise (CCW) chirality. However, one should take into account that, in most of the previous BLS studies, the opposite convention is adopted, as summarized in a recent review paper \cite{Kuepferling2022}.
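The extraction of $D$ from the Stokes/anti-Stokes asymmetry amounts to the one-line inversion $D = \pi M_{S} f_{DMI}/(\gamma k)$ quoted above. The Python sketch below illustrates the arithmetic with assumed (not measured) inputs: a 532\,nm probe at 45$^\circ$ incidence and round numbers for $M_{S}$ and $f_{DMI}$; the backscattering wave vector $k = 4\pi\sin\theta/\lambda$ is the standard BLS expression, not a value from this experiment.

```python
import math

def bls_wavevector(wavelength, theta_deg):
    """Spin-wave wave vector probed in backscattering BLS: k = 4*pi*sin(theta)/lambda."""
    return 4 * math.pi * math.sin(math.radians(theta_deg)) / wavelength

def dmi_constant(M_S, f_DMI, gamma, k):
    """Invert f_DMI = gamma*D*k/(pi*M_S)  ->  D = pi*M_S*f_DMI/(gamma*k)  [J/m^2]."""
    return math.pi * M_S * f_DMI / (gamma * k)

# Illustrative (assumed) numbers: 532 nm laser at 45 deg incidence,
# M_S = 1.0 MA/m, Stokes/anti-Stokes asymmetry f_DMI = 0.5 GHz, gamma = 176 GHz/T
k = bls_wavevector(532e-9, 45.0)
D = dmi_constant(1.0e6, 0.5e9, 176e9, k)
print(f"k = {k:.3g} rad/m, |D| = {D * 1e3:.2f} mJ/m^2")
```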
\clearpage
\section{Section 5: Comparison between 5, 10 and 20 repeats FM layers}
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{FigS5.png}
\caption{(a)-(c) 4\,{\textmu}m\,$\times$\,4\,{\textmu}m zero-field MFM images measured on FM [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times N}$ layers with $N$ = 5, 10, and 20 for the same Fe thicknesses $x$ = 0.28\,nm; all images are taken at remanence after out-of-plane saturation in 450\,mT magnetic field. (d)-(f) Subsequent MFM images recorded at the fields where mainly skyrmions exist and no maze domains are left. The $|\Delta f|$ contrast scale is the same for all images. All images are taken with the same tip, at the same tip-sample distance and under the same conditions as the images from Figs. 2 and 3 in the main text. (g)-(i) Corresponding IP and OOP magnetometry loops with insets showing the relevant magnetic parameters for these samples.}
\label{fig:FigS5}
\end{figure}
\clearpage
We have also investigated the impact of the repetition number $N$ on [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times N}$ FM layers for a fixed Fe-sublayer thickness $x$ = 0.28\,nm. A summary of the results is shown in Fig. \ref{fig:FigS5} for $N$ = 5, 10, and 20. MFM investigations on these samples reveal that with increasing $N$, the skyrmion density decreases, consistent with an increase in $K_\text{eff}$ [see insets from Figs. \ref{fig:FigS5}(g)-(i)], just as for the [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 5}$ with varying Fe thickness described in the main text. The field that is required to attain a skyrmion-only state increases with increasing $N$, due mainly to the gain in magnetostatic energy, also consistent with the increased MFM $|\Delta f|$ contrast from 5 to 10 to 20 repeats [compare Figs. \ref{fig:FigS5}(d)-(f), all shown on the same contrast scale]. Interestingly, the $D$ value also increases with increasing $N$, from 1.7\,$\pm$\,0.2\,mJ/m$^{2}$ for $N$ = 5 to 2.2\,$\pm$\,0.2\,mJ/m$^{2}$ for $N$ = 20. These observations are in contrast to the Pt/Co/Ta/MgO system, where $D$ was found to be unaffected by the repetition number \cite{Wang2021} (note that here the $D$ constant was also determined by BLS). When comparing our values for $N$ = 20 with the identical [Ir(1)/Fe($x$)/Co(0.6)/Pt(1)]$_{\times 20}$ system from ref.\,\cite{Soumyanarayanan2017} for $x$ = 0.3\,nm, we find that our value of 2.2\,$\pm$\,0.2\,mJ/m$^{2}$ is larger than the one reported there, which is 1.9\,$\pm$\,0.2\,mJ/m$^{2}$. However, $D$ was estimated in ref.\,\cite{Soumyanarayanan2017} by comparing domain periodicities from MFM with those from micromagnetic simulations; different methods of obtaining $D$ could lead to slightly different results.
\clearpage
\section{Section 6: Further micromagnetic simulations results}
\begin{figure}[htbp]
\includegraphics[width=0.97\textwidth]{FigS6.png}
\caption{Cross-sections of the trilayer heterostructures obtained from micromagnetic simulations for different $K_\text{eff}$ and $D$ values for the FM layers; the parameters for the FI layer remain fixed. The three main regions are divided according to the $K_\text{eff}$ used in the simulations: high $K_\text{eff}$ region (a),(d) and (g); mid $K_\text{eff}$ region (b), (e) and (h); and low $K_\text{eff}$ region (c), (f) and (i). The dashed line that divides the coexistence row separates the incomplete case (higher panel) from the tubular case (lower panel) and is meant as a guide to the eyes. The $\boldmath\times$ symbol indicates the type of skyrmion that is not stable. Individual sublayers T1-T5 and B1-B5 (lower panel) as well as the strength of the IEC through Ir and Pt (upper panel) are indicated in (f).}
\label{fig:FigS6}
\end{figure}
\clearpage
We perform systematic micromagnetic simulations for a range of uniaxial anisotropy constants $K_\text{u}$ and different (negative) interfacial DMI parameters $D$ of the FM layers. The magnetic parameters of the FI layer remain fixed and $D_\text{FI}$ is set to +0.8\,mJ/m$^{2}$. For each pair of $K_\text{u}$-$D$ values, we relax the system by starting the simulations from two different initial states, i.e. the incomplete skyrmion [clockwise (CW) N{\'e}el skyrmions in the FM layers, but no skyrmion is imposed in the FI] and the tubular skyrmion (again, CW skyrmions in the FM layers, but this time a skyrmion is also placed in the FI layer). In this way, we can confirm which state is stable. Even though $K_\text{u}$ is used in our modeling work (along with $M_{S}$, $D$ and $A$), we refer to the value of the total effective anisotropy constant $K_\text{eff}$ (as explained above in section 3 of the supplementary information) to facilitate the comparison with the experimental data. As already discussed in the main manuscript, we identify three regions, high, mid, and low $K_\text{eff}$ with corresponding $D$ values, leading to the stabilization of different skyrmion types. Shown in Fig. \ref{fig:FigS6} are the cross-sections of the spin structures that can be stabilized in our system and already highlighted in Table I of the main text.
In terms of size, we find that as $K_\text{eff}$ reduces, the diameter of both types of skyrmions increases. Note that from the MFM measurements and contrast analysis shown in the main text (Figs. 2 and 3), with decreasing $K_\text{eff}$ (or increasing Fe thickness) the skyrmion diameter becomes smaller, seemingly in contrast with the aforementioned simulation results. However, the simulations were all performed for a field of 130\,mT, whereas the fields required in the experiment to collapse all stripe domains and obtain a skyrmion-only magnetic state increase with decreasing $K_\text{eff}$, exceeding 130\,mT for some of the samples. For simulations performed in larger fields the skyrmion diameter will become smaller, in agreement with our experimental results.
We also explored the skyrmion chirality for the three $K_\text{eff}$ regimes and corresponding $D$ values of the FM layers. We first describe the case of high $K_\text{eff}$ and high $D$ shown in Fig. \ref{fig:FigS6}(d), which is the same as in our previous study \cite{Mandru2020}. For the case of incomplete skyrmions, the strong negative $D$ (in this case $\lvert$$D$$\rvert$\,$\geq$\,2.5\,mJ/m$^2$) favors CW skyrmions in all the Fe/Co sublayers of the top (T) and bottom (B) FM layers, except for the top-most sublayer of the bottom FM layer, i.e. B5; the skyrmion in this sublayer is suppressed due to the large interlayer exchange coupling (IEC) between the FI (which in this case has a uniform magnetization) and the last Pt sublayer of the bottom FM layer. Note that a CW chirality is observed in all these sublayers due to the dominating effect of DMI over magnetostatics. For the tubular case, skyrmions now exist not only in sublayers B1-B4, but also in B5, albeit with a Bloch-type chirality arising from the competition between positive DMI in the FI, IEC through Pt and negative DMI in the FM layer. Sublayers B2-B4 have a CW chirality, whereas sublayer B1 now has a counter-clockwise (CCW) chirality, opposite to what is expected for negative $D$; we attribute this to an improved flux closure at the bottom layer that reduces the stray field below layer B1 and thus the overall magnetostatic energy. The chirality of the skyrmion that is now present in the FI is hybrid between CCW N{\'e}el and Bloch. Finally, the chirality of the top Fe/Co sublayers T1-T5 remains CW (the IEC of the FI through the first Ir sublayer of the top FM layer is weaker than the IEC through Pt of the bottom FM layer). For the case of high $K_\text{eff}$ and lower $D$ shown in Fig. \ref{fig:FigS6}(a), only the incomplete skyrmion can be stabilized and its chirality is the same as for the incomplete case in Fig. \ref{fig:FigS6}(d).
Considering the case of mid $K_\text{eff}$ and $D$ values $\lvert$$D$$\rvert$\,$\geq$\,1.4\,mJ/m$^2$, lower than the coexistence $D$ value for the high $K_\text{eff}$ regime, the contributions from magnetostatics become more pronounced. This leads to more complex spin textures of the skyrmions in the different sublayers of the top and bottom FM layers [compare Figs. \ref{fig:FigS6}(e) and \ref{fig:FigS6}(d)]. The incomplete skyrmion is now composed of two skyrmions with hybrid chiralities (some layers are CW and others CCW) and the tubular skyrmion has more bottom Fe/Co sublayers with a CCW chirality. For mid $K_\text{eff}$ but for $\lvert$$D$$\rvert$ values $\textless$\,1.4\,mJ/m$^2$ [Fig. \ref{fig:FigS6}(h)], only a tubular skyrmion can be stabilized with a uniform CCW chirality in the bottom FM layer (dictated by the strong IEC through Pt) and a uniform CW chirality in the top FM layer (due to the magnetostatic field of the bottom FM layer). This indicates that the IEC of the FI through Pt and also magnetostatics become more pronounced than the DMI contributions. For the last case, i.e. low $K_\text{eff}$ and any $D$ values shown in Fig. \ref{fig:FigS6}(f), the incomplete skyrmion is again hybrid, but with more top and bottom CCW sublayers compared to the one in Fig. \ref{fig:FigS6}(e). The tubular skyrmion on the other hand shows a very similar spin profile to that shown in Fig. \ref{fig:FigS6}(h).
\clearpage
\section{Introduction.}
Quantum information opens new perspectives in the fields of communication and computation \cite{NC00}. More efficient algorithms \cite{Sho97,Gro97} and secure cryptography \cite{BB84,Eke91} are only some of the applications of quantum mechanics to information technologies.
As the physical realization of quantum computers comes closer, the interest in quantum computer networks is steadily growing. Small operational quantum key distribution networks have already been built \cite{Ell02,EPT03,Ell04,ECP05} and there are proposals for qubit teleportation networks compatible with the existing optical fibre infrastructure \cite{Sha02,YS03,LSF04}. Network elements are being generalized to the quantum case \cite{TK02,CW06}, and new applications like delayed commutation \cite{GC06b}, which uses quantum mechanics to perform tasks impossible with classical networks, have appeared. There are also advances in the related topic of quantum computer architecture, which treats the communication of the different inner blocks of a quantum computer \cite{OCC02,COI03}.
In multiuser networks, many users might want to share the channel, and multiple access issues arise. Quantum communication results that expand classical information transmission capacity, like superdense coding \cite{BW92}, can also be applied to multiple access scenarios using higher-dimensional systems \cite{LLT02,GW02,FZL06}. There exist information theoretical results with bounds on the capacity of quantum multiple access channels transmitting classical information \cite{Win01}, including derivations for quantum optical channels \cite{CP04}. The classical information capacity of a quantum multiple access channel can be increased by quantum channel coding. There are practical optical schemes that exploit the superadditivity property of quantum coding, i.e. the ability to obtain an increase in the capacity more than proportional to the size of the code, to improve the information transmission performance \cite{FTM03,BVF00}. With adequate quantum code design and decoding schemes, an increase in the word size, which can also be seen as the use of symbols from a higher-dimensional Hilbert space, makes it possible to optimise the channel capacity, and even to adapt its distribution between two senders \cite{HZH00}. Under this formulation, superdense coding can be defined and the properties of multiple-access bosonic channels have been studied \cite{YS04,YS05}.
In communication networks there is a limited amount of resources that the users have to share. It is in this framework that \emph{multiple access} techniques appear \cite{Skl83,Skl00}. Most communication systems employ one form or another of \emph{multiplexing}, i.e. the sharing of the channel by various users. In multiplexing, the information of many users is transmitted by a single channel. The information is transformed in order to reduce the use of the scarcest resource of the communication system of interest, and to take full advantage of the channel information capacity.
To avoid the interference between users, the signals in each subchannel must present some orthogonality properties in at least one domain. Common multiple access methods include time division multiple access, TDMA, where each user transmits in a different time slot so that their information does not overlap, frequency division multiple access, FDMA, where the signals are moved into different bands of the available spectrum, and code division multiple access, or CDMA where the algebraic properties of the signals allow to send them at the same time and in the same frequency range and to separate them at the receiver. Wavelength division multiple access, WDMA, the preferred method in optical networks, is a form of FDMA where the stress is put on the wavelength instead of on the related frequency parameter.
Other multiple access methods are space division multiple access, SDMA, used mostly in wireless networks with adaptive antennas, and polarization division multiple access, PDMA, where the separation of signals is made using orthogonal polarizations.
These techniques can also be applied to quantum communication. A probabilistic SDMA scheme can be employed in passive optical networks \cite{Tow97,FGC05}. WDMA has been demonstrated in quantum key distribution networks \cite{BBG03,BBG04}, and in classical-quantum multiplexing, combining quantum key distribution with the transmission of classical data \cite{Tow97b,NTR05}. There are also non-optical multiplexing schemes, like the proposal for a magnetic version of FDMA in quantum communication with spin chains \cite{WLK06}.
In this context, it is natural to consider the new resources quantum systems offer as possible candidates for new multiplexing schemes. The structure of the Hilbert space presents some interesting properties of orthogonality that can be exploited. In this paper, we will propose a scheme for Hilbert space division multiple access, HDMA, in quantum networks used for \emph{quantum information transmission}. In HDMA the information of many qubits is carried by a single element in a higher-dimensional Hilbert space. This d-dimensional information unit, or \emph{qudit}, will carry the same information as the original group of qubits.
Section \ref{notation} introduces the gates and symbols used in the rest of the paper. Section \ref{QMUXGate} presents a Quantum Multiplexer Gate, and derives a general multiplexing scheme based on it. Section \ref{qubitqudit} compares the properties of bidimensional information units to those of d-dimensional systems. In section \ref{QST} a general quantum state transfer circuit for qudits is put forward. Section \ref{QMUX} derives a general multiplexer/demultiplexer scheme based on quantum state transfer. An example for a three channel multiplexer is given in section \ref{eg}. Finally, section \ref{discussion} discusses the results and examines quantum applications like superdense coding and classical multiplexing schemes in the light of the more general HDMA framework.
\section{Notation and gates.}
\label{notation}
We will work with qubits and qudits. Qubits are binary quantum information units that can exist in a superposition of states of the form $\ket{\psi}=\alpha \ket{0}+\beta \ket{1}$, where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of finding $\ket{0}$ and $\ket{1}$ respectively. Qudits are d-dimensional quantum information units that, in the most general form, can be written as \[\ket{\psi}^d=\sum_{i=0}^{d-1}\alpha_i \ket{i}^d \textrm{, with } \sum_{i=0}^{d-1} |\alpha_i|^2 = 1.\] Qubits are the particular case for $d=2$. The coefficients $\alpha_i$ are complex numbers whose squared moduli give the probability of finding each value and which also carry important phase information. The state of the qubits will be expressed in the usual ket notation whereas qudits will have an extra superscript to indicate the dimension of the Hilbert space, e.g. $\ket{\psi}$ for qubits, but $\ket{\psi}^d$ for a qudit state.
All the operations in the multiplexer will be given in terms of controlled NOT operations. For qubits the controlled NOT, CX, or CNOT, is an operator acting on a pair of qubits that preserves the state of the first qubit, the control qubit, and has at the output of the second qubit, or target, the logical XOR of both input qubits. It can also be seen as a modulo 2 addition: $CNOT\ket{x}\ket{y}=\ket{x}\ket{x\oplus y}=\ket{x}\ket{x + y \textrm{ (mod 2)}}$. This gate can be generalized for qudits as $CX^d\ket{x}^d\ket{y}^d=\ket{x}^d\ket{x + y \textrm{ (mod d)}}^d$. Through the paper $\oplus$ will be used to indicate modulo $d$ addition, where $d$ can be determined from the superscript of the ket, and is taken to be 2 if nothing is specified. These operators can also be written in vectorial notation as $d^2 \times d^2$ unitary matrices.
The inverse operator of any quantum gate $U$ is its inverse matrix. For unitary evolution this inverse is $U^{\dag}$. For $CX^d$, ${CX^d}^{\dag}\ket{x}^d\ket{y}^d=\ket{x}^d\ket{y\ominus x}^d$, where $\ominus$ represents modulo $d$ subtraction. All the given qudit definitions hold for qubits ($d=2$), for which the CNOT operation is its own inverse.
The NOT gate, $X\ket{x}=\ket{x \oplus 1}$, can be generalized for qudits as $X_{m}^{d}\ket{x}^d=\ket{x\oplus m}^d$, for $m<d$. Thus, $X^2_{1}=NOT$. We will also define a new qudit control $C^{\mathcal{S}}U$, meaning that the operation $U$ will be applied to the target qudit if and only if the control qudit is in a state contained in the set $\mathcal{S}$. If no set is given, $\mathcal{S}=\{\ket{1}\}$ is assumed for two-dimensional systems, and for qudits the previous modulo-$d$ addition $\ket{x\oplus y}^d$ is kept. This way, CU recovers, for qubits, the meaning of a gate that is only applied when the control qubit is in $\ket{1}$. With this notation CNOT can also be put as $CX^{2}_{1}$.
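These definitions can be made concrete by writing the gates out as matrices. The following NumPy sketch (purely illustrative, not part of the original text) builds $X^d_m$ and $CX^d$ as permutation matrices and checks that the $d=2$ case reduces to the familiar CNOT.

```python
import numpy as np

def X_d(d, m):
    """X^d_m : |x> -> |x + m mod d>, as a d x d permutation matrix."""
    M = np.zeros((d, d))
    for x in range(d):
        M[(x + m) % d, x] = 1.0
    return M

def CX_d(d):
    """CX^d : |x>|y> -> |x>|x + y mod d>, on the d^2-dimensional product space.

    Basis ordering: |x>|y> <-> index x*d + y.
    """
    M = np.zeros((d * d, d * d))
    for x in range(d):
        for y in range(d):
            M[x * d + ((x + y) % d), x * d + y] = 1.0
    return M

# d = 2 reduces to the usual CNOT matrix
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
assert np.array_equal(CX_d(2), CNOT)

# Real permutation matrices are unitary: the inverse (modulo-d subtraction)
# is just the transpose.
d = 3
assert np.allclose(CX_d(d).T @ CX_d(d), np.eye(d * d))
```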
\section{Quantum Multiplexer Gate.}
\label{QMUXGate}
Multiplexers play an essential role in classical multiple access systems, as well as in many digital circuits. Multiplexers are blocks that take a group of input channels into a smaller number of outputs. The term multiplexer, or MUX, usually refers to the system that combines $n$ inputs into a single output. Combinations of this simpler multiplexer can give conversions between any desired number of inputs and outputs. The inverse operation, the expansion of one channel into the original $n$ signals, is performed by a demultiplexer, or DEMUX. Multiplexers and demultiplexers are usually represented as isosceles trapezoids with the longer side facing the $n$ lines, and the shorter side facing the single line. Figure \ref{muxgeneral}.a shows the usual configuration in classical communication systems.
\begin{figure}[ht!]
\centering
\includegraphics{muxgeneral.eps}
\caption{Multiplexing schemes for classical and quantum communications.\label{muxgeneral}}
\end{figure}
Similarly, a quantum MUX gate can be defined. There are a few restrictions, though. Quantum gates must be reversible. In the same way that quantum evolution alone does not allow us to build a direct equivalent of the AND gate, an additional tool is needed to reduce the number of lines. Figure \ref{muxgeneral}.b gives a quantum multiplexer gate, QMUX, that transfers the state to a qudit and leaves the qubit channels in a fixed state, in this case $\ket{0}$. Then we apply a demultiplexer gate, $QDEMUX=QMUX^{\dag}$, that inverts the process. As the intermediate state of the $n$ qubits is known and there is no entanglement with the qudit, we can measure them on the transmitter side. Later, at the receiver, we only need to generate $\ket{0}$ qubits to undo the transformation (Fig. \ref{muxgeneral}.c). This will be the architecture in our quantum multiplexer proposal.
\section{Qubits and qudits.}
\label{qubitqudit}
Qubits are the preferred quantum information unit. They are binary digital units encoded into two-dimensional systems. They are the generalization of the classical binary information unit, the bit. It is also possible to do computation with analog variables \cite{How05}, or with multivalued logic \cite{Smi81,Hur84,Smi88}, with more than only two possible states. The first digital computers, like the ENIAC, were, in fact, decimal \cite{GG96}. There are also proposals for quantum computation with continuous variables \cite{SB99,BP00a,BV05}, and qudits \cite{BGS02,BOB05,OBB06}, but, as occurs in the classical case, digital binary logic is the more widespread option. The simplicity of the gates and the binary logic intuition inherited from classical computers made it the starting point for the theoretical developments, and most of the physical implementation attempts have been made for qubits. There is no preferred multivalued logic to compete with qubits. Continuous variables proposals are better developed than qudit systems, but, as happens in classical computers, it is not likely that we will see in the next few years a multilevel or continuous system with the same degree of technological development and understanding we have of two-level systems.
Qudits can provide advantages from the point of view of communications. Their higher dimensionality gives a more compact mean of transmitting the same amount of information, increasing the information transmission rate. Furthermore, quantum communication protocols with qudits, like quantum coin tossing \cite{MVU05}, improve the performance of certain applications with respect to their qubit counterparts. Quantum cryptography using higher-dimensional alphabets \cite{BKB01} makes it easier to detect eavesdroppers \cite{BT00,WLA06}, is more robust against coherent attacks \cite{BM02,CBK02}, and, in general, brings a higher degree of security \cite{BP00b, BKB02}. Multilevel encoding also permits the incorporation of superdense coding schemes into secure direct communication \cite{WDL05}.
It would be desirable to have at our disposal an element that can convert the qubits used for the local computations, with the better known qubit circuits, into flying qudits that take advantage of their superior communication properties. The quantum multiplexer will be defined in this context. The information of $n$ qubits will be carried in a $2^n$-dimensional qudit. A $2^n$-dimensional Hilbert space is the smallest that can describe the global state of $n$ arbitrary qubits. Imagine we use the qubits for classical information transmission. To be able to differentiate between the $2^n$ possible values of $n$ bits, we would need at least $2^n$ states. A system with a smaller number of states would not allow for a deterministic recovery of the data, although a probabilistic coding can be obtained \cite{GWC06}.
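This counting argument is easy to see in code: the joint state vector of $n$ qubits already has $2^n$ amplitudes, which can be read directly as a qudit state. A small NumPy sketch with arbitrary (illustrative) amplitudes:

```python
import numpy as np

# Two single-qubit states with illustrative amplitudes
q0 = np.array([1.0, 1.0]) / np.sqrt(2)          # (|0> + |1>)/sqrt(2)
q1 = np.array([np.sqrt(0.8), np.sqrt(0.2)])     # sqrt(0.8)|0> + sqrt(0.2)|1>

# The joint 2-qubit state already lives in a 2^2 = 4-dimensional space:
# its amplitude vector can be read directly as a d = 4 qudit state.
qudit = np.kron(q0, q1)
assert qudit.shape == (4,)
assert np.isclose(np.sum(np.abs(qudit) ** 2), 1.0)   # still normalized

# Index b1*2 + b0 of the qudit basis state encodes both bit values, so a
# 2^n-level system is the smallest that distinguishes all n-bit words.
```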
\section{Quantum state transfer.}
\label{QST}
The transfer of information from one qubit to another is the starting point of many quantum information applications and is a primitive that appears in teleportation and superdense coding \cite{Mer01,Mer02,Mer03}. Particularly important is the quantum swap circuit (Fig. \ref{SWAP}, left), the quantum generalization of the classical reversible XOR swapping. In the figure, we see the usual representation for the CNOT gate, with a dot on the control qubit and a $\oplus$ on the target. The circuit acts in three stages: $\ket{x}\ket{y}\rightarrow \ket{x\oplus y}\ket{y} \rightarrow \ket{x\oplus y}\ket{y\oplus x\oplus y}=\ket{x\oplus y}\ket{x} \rightarrow \ket{y}\ket{x}$. The same applies to superpositions.
\begin{figure}[ht!]
\centering
\includegraphics{swap.eps}
\caption{Swap circuit.\label{SWAP}}
\end{figure}
Figure \ref{SWAP} (right) shows how the circuit can be simplified if we are allowed to choose one of the input states, in this case choosing $\ket{0}$ for the second qubit. When controlled by $\ket{0}$, the CNOT operation is equivalent to the identity gate. The circuit on the right implements a quantum state transfer. The second CNOT gate is essential for a complete transfer. The first step already gives $\ket{x}\ket{x}$, but, without the last CNOT, the upper qubit is still entangled with the lower line and we have an entangling gate instead of a transfer in the proper sense.
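The transfer, and the role of the second CNOT, can be checked numerically. The NumPy sketch below (with illustrative amplitudes) applies the two remaining gates of the simplified circuit to $\ket{\psi}\ket{0}$ and verifies that the intermediate state is entangled while the final one is not.

```python
import numpy as np

# Two-qubit basis |q1 q2> with index 2*q1 + q2
CNOT_12 = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                    [0, 0, 0, 1], [0, 0, 1, 0]], float)  # control q1, target q2
CNOT_21 = np.array([[1, 0, 0, 0], [0, 0, 0, 1],
                    [0, 0, 1, 0], [0, 1, 0, 0]], float)  # control q2, target q1

alpha, beta = 0.6, 0.8                        # illustrative amplitudes
inp = np.kron([alpha, beta], [1.0, 0.0])      # |psi>|0>

mid = CNOT_12 @ inp                           # alpha|00> + beta|11>: entangled
schmidt = np.linalg.svd(mid.reshape(2, 2), compute_uv=False)
assert np.count_nonzero(schmidt > 1e-12) == 2  # Schmidt rank 2 -> entangled

out = CNOT_21 @ mid                           # |0>|psi>: transfer completed
assert np.allclose(out, np.kron([1.0, 0.0], [alpha, beta]))
```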
This qubit state transfer can be extended to qudits generalizing the quantum swap circuit and performing a similar simplification (Fig. \ref{QuditST}). Here, qudits are represented as thicker lines with their dimension $d$ written at the beginning of the line. The control qudit is represented by a dot with a vertical line going to the controlled gate on the target qudit.
\begin{figure}[ht!]
\centering
\includegraphics{quditstatetransfer.eps}
\caption{Qudit state transfer circuit.\label{QuditST}}
\end{figure}
The $CX^d$ gate can be decomposed in a number of ways. The easiest to see is the concatenation of $C^{\mathcal{S}}X^d_x$ gates, each controlled by the corresponding $\ket{x}$ state, so that $CX^d\ket{x}\ket{y}=(\prod_{i=0}^{d-1}C^{\{\ket{i}\}}X^d_i)\ket{x}\ket{y}$. Each gate is controlled by a different qudit state, so each acts on a different subspace of the tensor product that gives the composite system's state $\ket{x}^d\ket{0}$, and the order of the gates is not important, i.e. they commute. In figure \ref{expQuditST} this qudit state control is represented by a circle containing the state number, the $X^d_m$ gate by a $+m$ and the ${X^d_m}^{\dag}$ gate by a $-m$. Although, strictly speaking, only modulo $d$ addition and subtraction give a unitary evolution, for HDMA simple addition and subtraction, without the circular wrap-around, will suffice. This can be an advantage. In many physical systems qudits are implemented by restricting to only a few states of higher-dimensional systems (even infinite-dimensional spaces), where addition and subtraction are easy while modular arithmetic is not. For instance, if the qudits are encoded in the orbital angular momentum of photons, the state number can be easily increased or decreased by a fixed amount using holograms \cite{LPB02}, while there is no evident method for modulo $d$ operations.
\begin{figure}[ht!]
\centering
\includegraphics{2nquditstatetransfer.eps}
\caption{Expanded qudit state transfer circuit. The +0 and -0 gates have no effect and can be omitted.\label{expQuditST}}
\end{figure}
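The decomposition can be checked numerically. In this Python/NumPy sketch (an illustration, not code from the paper), $CX^d$ is built both directly, as a block sum applying $X^d_x$ for each control state $\ket{x}$, and as a product of the commuting state-controlled shifts, for $d=4$:

```python
import numpy as np

d = 4

def X(d, m):
    """Cyclic shift gate X^d_m: maps |y> to |y + m mod d>."""
    return np.roll(np.eye(d), m, axis=0)

# Direct form: apply X^d_x on the target when the control qudit is |x>.
CX = np.zeros((d * d, d * d))
for i in range(d):
    P = np.zeros((d, d)); P[i, i] = 1           # projector on control state |i>
    CX += np.kron(P, X(d, i))

# Product of state-controlled shifts C^{|i>}X^d_i; they commute, so the
# order of the factors is irrelevant.
CX_prod = np.eye(d * d)
for i in range(d):
    P = np.zeros((d, d)); P[i, i] = 1
    Ci = np.kron(P, X(d, i)) + np.kron(np.eye(d) - P, np.eye(d))
    CX_prod = Ci @ CX_prod
assert np.allclose(CX, CX_prod)

# Sanity check on a basis state: |3>|1> -> |3>|(3+1) mod 4> = |3>|0>.
v = np.kron(np.eye(d)[3], np.eye(d)[1])
assert np.allclose(CX @ v, np.kron(np.eye(d)[3], np.eye(d)[0]))
```

The basis ordering $\ket{x}\ket{y}\mapsto xd+y$ is an assumption of the sketch.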
In this application the main concern is an \emph{$n$ qubits to qudit state transfer}, and we will work with qudits such that $d=2^n$. In fact, $n$ qubits are a possible embodiment of a power-of-two-dimensional qudit. The multiplexer will convert this qudit into a more compact form. From that point of view, it is better to think in terms of a binary decomposition. For a qudit state $\ket{x}$, with $x_{n-1}x_{n-2}\ldots x_{1}x_{0}$ the binary string for $x$, we will define the sets $\mathcal{S}_i=\{ \ket{x} : x $ div $ 2^i$ is odd $\}$, where $a$ div $b$ denotes the quotient of the integer division of $a$ by $b$. The $C^{\mathcal{S}_i}X_{2^i}^d$ gate is a $X^d_{2^i}$ gate controlled by the value of the $i$-th digit of the binary expression of the qudit state. Here, a state $\ket{x}^d$ acts in a distributed way, and the $+x$ addition is done by powers of two. After the $n$ steps, additions have only happened where there was a 1 in the binary decomposition, and each $\ket{x}$ has performed a $+x$ addition in the form $+x_{n-1}2^{n-1}+x_{n-2}2^{n-2}+\ldots+x_1 2 + x_0$. Fig. \ref{binQuditST} shows the quantum circuit using a square with the bit position index in the control qudit.
\begin{figure}[ht!]
\centering
\includegraphics{binaryquditstatetransfer.eps}
\caption{Qudit state transfer with a binary expansion in the control.\label{binQuditST}}
\end{figure}
This circuit gives a decomposition where the number of gates has been reduced from $2^n$ to $n$ and has an intuitive interpretation in terms of qubit-controlled gates. The procedure can be translated to any base-$l$ expansion, producing circuits for state transfer between $\log_l d$ $l$-level systems and a qudit in $\log_l d$ steps, for $d$ a power of $l$.
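The reduction from $2^n$ to $n$ gates can be verified directly. In this Python/NumPy sketch (hypothetical, not from the paper), the $n$ bit-controlled power-of-two shifts compose to the full $CX^d$ gate for $n=3$:

```python
import numpy as np

n = 3
d = 2 ** n

def X(d, m):
    """Cyclic shift gate X^d_m: maps |y> to |y + m mod d>."""
    return np.roll(np.eye(d), m, axis=0)

# n bit-controlled shifts instead of 2^n state-controlled ones:
# when bit i of the control state is 1, add 2^i to the target.
CX_bin = np.eye(d * d)
for i in range(n):
    P = np.diag([float((x >> i) & 1) for x in range(d)])   # projector on S_i
    gate = np.kron(P, X(d, 2 ** i)) + np.kron(np.eye(d) - P, np.eye(d))
    CX_bin = gate @ CX_bin

# Direct definition |x>|y> -> |x>|x + y mod d>.
CX_direct = np.zeros((d * d, d * d))
for x in range(d):
    for y in range(d):
        CX_direct[x * d + (x + y) % d, x * d + y] = 1

assert np.allclose(CX_bin, CX_direct)   # 3 gates reproduce the 8-state sum
```

Each control state $\ket{x}$ contributes $+x_{n-1}2^{n-1}+\ldots+x_0$, exactly the binary expansion described above.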
\section{Quantum multiplexer.}
\label{QMUX}
At this point, all the elements of the quantum multiplexer are given. The scheme will consist of an $n$ qubits to qudit state transfer at the transmitter side, and the inverse qudit to $n$ qubits state transfer at the receiver side. The transfer will be done qubit by qubit, by means of partial state transfer circuits, which can be written as pairs of entangler and correlation-eraser gates, similar to the $\ket{x}\ket{0}\rightarrow \ket{x}\ket{x}$ and $\ket{x}\ket{x}\rightarrow\ket{0}\ket{x}$ pair that appears in qubit state transfer.
The gates of the circuit of Fig. \ref{binQuditST} can be put in terms of these partial transfer subblocks if the gates are reordered. This reordering is valid because some of the gates commute. The last ${C^{\mathcal{S}_{n-1}}X_{2^{n-1}}^{d}}^{\dag}$ will only have an effect on states $\ket{x}^d$ with $x\geq 2^{n-1}$. The input state for the lower line is $\ket{0}^d$. If we exclude the first gate, even all the other gates combined cannot increase the value of $\ket{0}^d$ up to that number. So, we can move the subtraction gate near to the $C^{\mathcal{S}_{n-1}}X_{2^{n-1}}^{d}$ gate. We can recursively apply this reasoning until we find a convenient reordering for the MUX gate. Looking back at the construction of Fig. \ref{muxgeneral}.c we can give the complete multiplexer and demultiplexer scheme for HDMA (Figure \ref{muxdemux}).
\begin{figure}[ht!]
\centering
\includegraphics{muxdemux.eps}
\caption{Quantum multiplexer and demultiplexer schemes.\label{muxdemux}}
\end{figure}
Two types of gate are used: qubit-controlled addition and subtraction modulo $d$ over the qudit, $CX^d_m$ and ${CX^d_m}^{\dag}$, and qudit-controlled NOT, $C^{\mathcal{S}_i}X^2_1$. The number $i$ on the qudit control line indicates, for a qudit state $\ket{x}^d$, control by the $i$-th digit of the binary representation of $x$. The right part of the circuit, on the receiver side, implements the inverse qudit transfer from $\ket{\psi}^d$ to the qudit formed by the global state of the $n$ qubits.
The separation in partial state transfer gate pairs is important when it comes to the scaling of the solution. The same configurable block is valid for all the channels. An additional user can be added with an extra partial state transfer and an erasure at both sides of the transmission. If the qudit is a virtual collection of states from an infinite-dimensional space, as in optical angular momentum qudits, the qudit line can be kept and the system can accommodate a different number of users without a new qudit design.
Moreover, each of the $CX^d_m$ gates acts only on the value of one of the digits of the binary expansion of $\ket{x}^d$, different for each partial transfer, and its effect on the qudit is ignored by all of the qudit-controlled CNOT gates associated with the other qubits. This means that the multiplexing and demultiplexing blocks for different channels commute, and we can add or recover a channel at any point of the transmission, instead of multiplexing or demultiplexing only at two fixed points.
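The commutation of the channel blocks can be checked on the protocol's inputs. In this hypothetical Python/NumPy sketch for two channels and $d=4$, `mux_block(i)` implements the partial transfer and erasure pair of channel $i$, and the two blocks give the same result in either order whenever the qudit line starts in $\ket{0}^4$:

```python
import numpy as np

n, d = 2, 4
D = 2 ** n * d                         # basis |q1 q0>|c>, index q*d + c

def mux_block(i):
    """Partial transfer plus erasure for channel i: CX^d_{2^i} controlled
    by qubit i, then a NOT on qubit i controlled by qudit bit i."""
    U = np.zeros((D, D))
    for q in range(2 ** n):
        for c in range(d):
            c2 = (c + ((q >> i) & 1) * 2 ** i) % d   # add 2^i if qubit i is |1>
            q2 = q ^ (((c2 >> i) & 1) << i)          # erase via qudit bit i
            U[q2 * d + c2, q * d + c] = 1
    return U

M0, M1 = mux_block(0), mux_block(1)

# On protocol inputs (qudit prepared in |0>^4) the two channel blocks
# can be applied in either order: users may join at any point.
for q in range(2 ** n):
    v = np.zeros(D)
    v[q * d] = 1                       # |q1 q0>|0>^4
    assert np.allclose(M0 @ (M1 @ v), M1 @ (M0 @ v))
```

The check is deliberately restricted to inputs with the qudit in $\ket{0}^4$: on those states each block touches only its own binary digit, which is the situation that arises in the protocol.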
\section{Example.}
\label{eg}
Suppose a local network where three users share a single output to the network. Following the architecture of the previous section, the multiplexer circuit of figure \ref{mux3ex} transfers the state to the qudit line, and the original channels are recovered at the receiver.
\begin{figure}[ht!]
\centering
\includegraphics{mux3.eps}
\caption{Quantum multiplexer for 3 qubits.\label{mux3ex}}
\end{figure}
The operation can be divided into partial state transfers for each of the channels. First we transfer the state of the qubit $\ket{\psi_2}$ onto the qudit (with $\ket{0}^8$ for $\ket{0}$, and $\ket{4}^8$ for $\ket{1}$).
\begin{equation}
\ket{\psi_2}\ket{\psi_1}\ket{\psi_0}\ket{0}^8 = (\alpha_2 \ket{0}+\beta_2 \ket{1})\ket{\psi_1}\ket{\psi_0}\ket{0}^8 \stackrel{CX^8_4}{\longrightarrow} \alpha_2 \ket{0} \ket{\psi_1}\ket{\psi_0}\ket{0}^8+\beta_2 \ket{1} \ket{\psi_1}\ket{\psi_0}\ket{4}^8.
\end{equation}
The next gate will erase the correlation to the original qubit, $\ket{\psi_2}$,
\begin{equation}
\stackrel{C^{\mathcal{S}_2}X^2_1}{\longrightarrow} \alpha_2 \ket{0} \ket{\psi_1}\ket{\psi_0}\ket{0}^8+\beta_2 \ket{0} \ket{\psi_1}\ket{\psi_0}\ket{4}^8=\ket{0}\ket{\psi_1}\ket{\psi_0} ( \alpha_2 \ket{0}^8+\beta_2 \ket{4}^8)=\ket{0} (\alpha_1 \ket{0}+\beta_1 \ket{1})\ket{\psi_0}(\alpha_2 \ket{0}^8+\beta_2 \ket{4}^8).
\end{equation}
The same process can be applied to the qubit $\ket{\psi_1}$:
\begin{equation}
\stackrel{CX^8_2}{\longrightarrow} \ket{0}(\alpha_1\alpha_2\ket{0}\ket{\psi_0}\ket{0}^8 + \alpha_1\beta_2\ket{0}\ket{\psi_0}\ket{4}^8 +\beta_1\alpha_2\ket{1}\ket{\psi_0}\ket{2}^8 + \beta_1\beta_2\ket{1}\ket{\psi_0}\ket{6}^8).
\end{equation}
Now we have four states. For $\ket{2}$ and $\ket{6}$, 010 and 110 in binary, there is a 1 in the second binary position, while the others have a 0. Only in these cases, for which the qubit is in $\ket{1}$, does the state undergo a transformation (into $\ket{0}$) and the correlations are erased.
\begin{equation}
\stackrel{C^{\mathcal{S}_1}X^2_1}{\longrightarrow} \ket{0}\ket{0}\ket{\psi_0}(\alpha_1\alpha_2\ket{0}^8 + \alpha_1\beta_2\ket{4}^8 +\beta_1\alpha_2\ket{2}^8 + \beta_1\beta_2\ket{6}^8).
\end{equation}
The partial state transfer from the $\ket{\psi_0}$ qubit completes the encoding of all the information into the qudit.
\begin{eqnarray}
\stackrel{CX^8_1}{\longrightarrow} \ket{0}\ket{0}(&\alpha_0 \alpha_1\alpha_2\ket{0}\ket{0}^8& + \alpha_0 \alpha_1\beta_2\ket{0}\ket{4}^8 +\alpha_0 \beta_1\alpha_2\ket{0}\ket{2}^8 + \alpha_0 \beta_1\beta_2\ket{0}\ket{6}^8 \nonumber \\
+&\beta_0 \alpha_1\alpha_2\ket{1}\ket{1}^8& + \beta_0 \alpha_1\beta_2\ket{1}\ket{5}^8 +\beta_0 \beta_1\alpha_2\ket{1}\ket{3}^8 + \beta_0 \beta_1\beta_2\ket{1}\ket{7}^8).
\end{eqnarray}
Now, there are four qudit states that trigger the erasure, one for each state affected by the $\ket{1}$ part of the qubit.
\begin{equation}
\stackrel{C^{\mathcal{S}_0}X^2_1}{\longrightarrow} \ket{0}\ket{0}\ket{0}(\alpha_2\alpha_1\alpha_0\ket{0}^8 +\alpha_2\alpha_1\beta_0\ket{1}^8 +\alpha_2\beta_1\alpha_0\ket{2}^8 +\alpha_2\beta_1\beta_0\ket{3}^8 +\beta_2\alpha_1\alpha_0\ket{4}^8 +\beta_2\alpha_1\beta_0\ket{5}^8 +\beta_2\beta_1\alpha_0\ket{6}^8 +\beta_2\beta_1\beta_0\ket{7}^8).
\end{equation}
The resulting state is equivalent to the original three qubits, $\ket{\psi}^{b}=\ket{\psi_2}\ket{\psi_1}\ket{\psi_0}=(\alpha_2 \ket{0}+\beta_2 \ket{1})\otimes (\alpha_1 \ket{0}+\beta_1 \ket{1})\otimes (\alpha_0 \ket{0}+\beta_0 \ket{1})$:
\begin{center}
\begin{tabular}{l@{=}l@{}r@{+}l@{}r@{+}l@{}r@{+}l@{}r@{+}l@{}r@{+}l@{}r@{+}l@{}r@{+}l@{}r}
$\ket{\psi}^b$&$\alpha_2\alpha_1\alpha_0$&$\ket{000}$&$\alpha_2\alpha_1\beta_0$&$\ket{001}$&$\alpha_2\beta_1\alpha_0$&$\ket{010}$&$\alpha_2\beta_1\beta_0$&$\ket{011}$&$\beta_2\alpha_1\alpha_0$&$\ket{100}$&$\beta_2\alpha_1\beta_0$&$\ket{101}$&$\beta_2\beta_1\alpha_0$&$\ket{110}$&$\beta_2\beta_1\beta_0$&$\ket{111},$\\
$\ket{\psi}^8$&$\alpha_2\alpha_1\alpha_0$&$\ket{0}^8$&$\alpha_2\alpha_1\beta_0$&$\ket{1}^8 $&$\alpha_2\beta_1\alpha_0$&$\ket{2}^8 $&$\alpha_2\beta_1\beta_0$&$\ket{3}^8 $&$\beta_2\alpha_1\alpha_0$&$\ket{4}^8 $&$\beta_2\alpha_1\beta_0$&$\ket{5}^8 $&$\beta_2\beta_1\alpha_0$&$\ket{6}^8 $&$\beta_2\beta_1\beta_0$&$\ket{7}^8.$
\end{tabular}
\end{center}
Once all of the input channels have been set to $\ket{0}$, we can measure them to check that the encoding happened without errors: if a $\ket{1}$ is found, there was an error.
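The whole encoding can be simulated. In this Python/NumPy sketch (an illustrative reconstruction, not code from the paper), three random qubit states are multiplexed into an 8-level qudit and the final amplitudes match the products of the qubit coefficients listed in the table:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 3, 8
D = 2 ** n * d                         # basis |q2 q1 q0>|c>, index q*d + c

def mux_block(i):
    """Partial transfer pair of channel i: CX^8_{2^i} controlled by qubit i,
    then C^{S_i}X^2_1 erasing the correlation with qubit i."""
    U = np.zeros((D, D))
    for q in range(2 ** n):
        for c in range(d):
            c2 = (c + ((q >> i) & 1) * 2 ** i) % d   # add 2^i if qubit i is |1>
            q2 = q ^ (((c2 >> i) & 1) << i)          # clear qubit i via qudit bit i
            U[q2 * d + c2, q * d + c] = 1
    return U

qubits = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(n)]
qubits = [v / np.linalg.norm(v) for v in qubits]

state = np.kron(np.kron(np.kron(qubits[2], qubits[1]), qubits[0]),
                np.eye(d)[0])          # |psi2>|psi1>|psi0>|0>^8
for i in (2, 1, 0):                    # same gate order as the example
    state = mux_block(i) @ state

# Coefficient of |k>^8 is the product selected by k's binary digits.
expected = np.array([np.prod([qubits[i][(k >> i) & 1] for i in range(n)])
                     for k in range(d)])
assert np.allclose(state[:d], expected)
assert np.allclose(state[d:], 0)       # all input qubits left in |0>
```

The two final assertions reproduce the table above and the error check on the emptied input channels.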
Demultiplexing at the receiver is achieved by applying the inverse operations. $\ket{0}\ket{0}\ket{0}\ket{\psi}^8$, after the $C^{\mathcal{S}_0}X^2_1$ gate becomes,
\begin{eqnarray}
\stackrel{C^{\mathcal{S}_0}X^2_1}{\longrightarrow} \ket{0}\ket{0}(&\alpha_2\alpha_1\alpha_0\ket{0}\ket{0}^8& +\alpha_2\alpha_1\beta_0\ket{1}\ket{1}^8 +\alpha_2\beta_1\alpha_0\ket{0}\ket{2}^8 +\alpha_2\beta_1\beta_0\ket{1}\ket{3}^8 \nonumber \\
+&\beta_2\alpha_1\alpha_0\ket{0}\ket{4}^8& +\beta_2\alpha_1\beta_0\ket{1}\ket{5}^8 +\beta_2\beta_1\alpha_0\ket{0}\ket{6}^8 +\beta_2\beta_1\beta_0\ket{1}\ket{7}^8).
\end{eqnarray}
The ${CX^8_1}^\dag$ gate erases the traces of $\ket{\psi_0}$ in the qudit.
\begin{eqnarray}
\stackrel{{CX^8_1}^\dag}{\longrightarrow} \ket{0}\ket{0}(&\alpha_2\alpha_1\alpha_0\ket{0}\ket{0}^8& +\alpha_2\alpha_1\beta_0\ket{1}\ket{0}^8 +\alpha_2\beta_1\alpha_0\ket{0}\ket{2}^8 +\alpha_2\beta_1\beta_0\ket{1}\ket{2}^8 \nonumber \\
+&\beta_2\alpha_1\alpha_0\ket{0}\ket{4}^8& +\beta_2\alpha_1\beta_0\ket{1}\ket{4}^8 +\beta_2\beta_1\alpha_0\ket{0}\ket{6}^8 +\beta_2\beta_1\beta_0\ket{1}\ket{6}^8).
\end{eqnarray}
The four qudit states that remain can be extracted as a common factor and we have $\ket{0}\ket{0}(\alpha_0\ket{0}+\beta_0\ket{1})(\alpha_2\alpha_1\ket{0}^8+ \alpha_2\beta_1\ket{2}^8 + \beta_2\alpha_1\ket{4}^8 + \beta_2\beta_1\ket{6}^8)$. Notice that, at this point, channel 0 is again independent from the rest of the channels and from the qudit. After erasing the correlations, measuring the state of the qudit would destroy the information of the rest of the users but not channel 0. It is evident from the example that the destinations need not be at the same point. We can think of a network where the qudit delivers each channel at a different point of its path to the last receiver. We can repeat the process until all of the original channels are recovered.
To get the second qubit, we apply the CNOT gate controlled by the second binary position,
\begin{equation}
\stackrel{C^{\mathcal{S}_1}X^2_1}{\longrightarrow} \ket{0}(\alpha_2\alpha_1\ket{0}\ket{\psi_0}\ket{0}^8 + \alpha_2\beta_1\ket{1}\ket{\psi_0}\ket{2}^8 +\beta_2\alpha_1\ket{0}\ket{\psi_0}\ket{4}^8 + \beta_2\beta_1\ket{1}\ket{\psi_0}\ket{6}^8).
\end{equation}
If we were to measure at this moment, the state of $\ket{\psi_1}$ would be affected by the qudit state, as they are still entangled. In order to avoid that, we erase the correlation.
\begin{equation}
\stackrel{{CX^8_2}^\dag}{\longrightarrow} \ket{0}\ket{\psi_1}\ket{\psi_0}(\alpha_2\ket{0}^8 +\beta_2\ket{4}^8).
\end{equation}
We already have the last qubit encoded in the qudit state, with $\ket{0}^8$ corresponding to logical $\ket{0}$ and $\ket{4}^8$ corresponding to logical $\ket{1}$. The last two gates will transfer this state to a two-dimensional system, better suited to the quantum gates at the destination quantum computer. Although it is not strictly necessary to measure the qudit state, it can serve as an additional check for the correct operation of the system. If we find any state other than $\ket{0}^8$, there was some problem, like an eavesdropper or decoherence, during the transmission.
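The demultiplexing can be simulated in the same fashion. This hypothetical Python/NumPy sketch builds the multiplexed qudit directly from the coefficient products and recovers the three original qubit states, extracting the channels in an arbitrary order:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 3, 8
D = 2 ** n * d                          # basis |q2 q1 q0>|c>, index q*d + c

# Random qubit states and the multiplexed qudit they encode
# (coefficient of |k>^8 is the product selected by k's binary digits).
qubits = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(n)]
qubits = [v / np.linalg.norm(v) for v in qubits]
qudit = np.array([np.prod([qubits[i][(k >> i) & 1] for i in range(n)])
                  for k in range(d)])

state = np.zeros(D, complex)
state[:d] = qudit                       # qubit register in |000>

def demux_block(i):
    """Demux pair of channel i: C^{S_i}X^2_1, then the subtraction
    (CX^8_{2^i})^dagger controlled by the recovered qubit."""
    U = np.zeros((D, D))
    for q in range(2 ** n):
        for c in range(d):
            q2 = q ^ (((c >> i) & 1) << i)            # NOT on qubit i via qudit bit i
            c2 = (c - ((q2 >> i) & 1) * 2 ** i) % d   # subtract 2^i if qubit i is |1>
            U[q2 * d + c2, q * d + c] = 1
    return U

for i in (1, 0, 2):                     # channels can be recovered in any order
    state = demux_block(i) @ state

expected = np.kron(np.kron(np.kron(qubits[2], qubits[1]), qubits[0]),
                   np.eye(d)[0])        # |psi2>|psi1>|psi0>|0>^8
assert np.allclose(state, expected)     # product state recovered, qudit in |0>^8
```

Extracting the channels in the order 1, 0, 2 illustrates the earlier remark that each destination can pick up its qubit at a different point of the qudit's path.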
\section{Discussion.}
\label{discussion}
Starting from quantum state transfer, a quantum multiplexing scheme for multiple access quantum communications can be derived. HDMA broadens the options in quantum multiple access and gives a circuit for $n$ qubits to qudit and qudit to $n$ qubits conversion. Quantum multiplexers can be useful whenever such a conversion is needed. The two clearest examples are the communication of qubits between distant nodes in a quantum network and quantum buses inside a quantum computer. They are related communication problems, each at a different distance scale.
In quantum networks we need a compact means to take full advantage of the usually scarce communication resources. Remote quantum computers can be connected by intermediate qudit channels. The second problem is one of quantum computer architecture. In a complete quantum computer, quantum buses are needed to connect the different quantum registers, or to take the qubits from the quantum memory to the quantum processor. In both cases we can take advantage of the fact that during the pure communication step no information processing is needed.
Quantum information must be encoded into a physical system. Among the desiderata such a system must satisfy \cite{Div00}, there are two somewhat contradictory criteria. Quantum information units, be they qubits or qudits, should be resilient to decoherence and, at the same time, allow for the construction of efficient quantum gates. The loss of quantum superposition that decoherence brings about destroys one of the key features from which quantum computers derive their exceptional computing properties. Without efficient, controlled interaction between qubits, no computation is possible. But, as a rule, the easier it is to interact with a quantum system, the greater the effect of decoherence is.
A good example is that of photonic qubits. Photons are especially well suited for long distance quantum communication. The success of existing quantum optical networks, such as those used in quantum cryptography, owes much to the large coherence time of photons. Photons' resistance to decoherence stems from their particularly weak interaction with the environment and with other photons. This very same feature makes it difficult to build efficient photonic gates. This tradeoff between coherence time and gate construction is a constant in the search for physical systems to implement quantum computers.
The existence of a quantum multiplexer permits one kind of information unit for the quantum operations and another for the transmission. Although some interaction is needed for the state transfer, the total number of gates is only four for each qubit, two at the multiplexer and two at the demultiplexer. The information can thus travel encoded in a system less sensitive to decoherence and, later, a more strongly interacting form of information can be fed into the quantum gates for the processing.
\subsection{Hilbert space distribution of the channels' information.}
The whole process can be seen as a redistribution of the probability amplitude in the qudit. In the first step $\ket{0}^d$ has a probability of 1. As the different qubits are encoded, the state is diffused through the Hilbert space, occupying $2^m$ states at the $m$-th step. Figure \ref{tree} shows the branching process that distributes the original information into the $d$-dimensional Hilbert space. Each column corresponds to a new partial transfer/erasure pair. The coefficients of each of the qudit states can be reconstructed by multiplying the qubit coefficients that are picked up on the branches on the way to the state.
\begin{figure}[ht!]
\centering
\includegraphics{treecoeff.eps}
\caption{Qudit state branching in the $2^n$-dimensional Hilbert space.\label{tree}}
\end{figure}
The final state will be
\begin{equation}
\ket{\psi}^d=\sum_{k=0}^{d-1} \left( \prod_{i=0}^{n-1} \alpha_i^{\bar{b_i}}\beta_i^{b_i} \right)\ket{k}^d,
\end{equation}
where $b_i$ is the $i$-th binary digit of $k$. The coefficients follow the binary expansion of $k$, with $\alpha$s in the place of 0s and $\beta$s in the place of 1s.
This multiplexing can also be described in terms of subspaces of the whole qudit space. Each of the $n$ qubits is encoded in one half of the $d$-dimensional Hilbert space of the qudit. The state of the $i$-th qubit, $\ket{\psi_i}$, will be mapped into two orthogonal subspaces that divide the qudit space into two: states in $S_i^0$ represent $\ket{0}$ and states in $S_i^1$ indicate a qubit in $\ket{1}$, with $S_i^0 \perp S_i^1$ and $S_i^0 \oplus S_i^1 = \mathcal{H}^d$, for $\mathcal{H}^d$ the global Hilbert space of dimension $d$. The subspaces are spanned by the states with a 0 or with a 1 in the $i$-th digit of the binary representation of the number of the state, so that $S_i^{0}=\mathrm{span}\{ \ket{x} : x$ div $2^i$ is even$\}$ and $S_i^{1}=\mathrm{span}\{ \ket{x} : x$ div $2^i$ is odd$\}$.
At each multiplexing stage all the states are divided into two components, one for the new qubit's $\ket{0}$ and another one for the $\ket{1}$. The number of occupied states is doubled with each user. At the end of the multiplexing process, the information of the $n$ qubits is distributed among the $2^n$ states of the qudit. The demultiplexing combines the states again, transferring the probability amplitudes that gave the separation into the recovered qubit. Each state of the qudit target space can be seen as the intersection of the subspaces that have at their corresponding indices the binary representation of the state number,
\begin{equation}
\ket{x}= \displaystyle \bigcap_{i=0}^{n-1} S_i^{b_i},
\end{equation}
for $b_{n-1}b_{n-2}\ldots b_0$ the binary string that represents $x$.
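The subspace picture can be illustrated on the basis-state index sets. A short Python sketch (hypothetical, not from the paper) checks, for $n=3$, that every basis state is the intersection of the subsets selected by its binary digits:

```python
n = 3
d = 2 ** n

# S[(i, b)]: numbers of the basis states whose i-th binary digit equals b.
S = {(i, b): {x for x in range(d) if (x >> i) & 1 == b}
     for i in range(n) for b in (0, 1)}

for x in range(d):
    picked = [S[(i, (x >> i) & 1)] for i in range(n)]
    # Intersecting one half-space per digit singles out exactly |x>.
    assert set.intersection(*picked) == {x}
```

The sets stand in for the spanning vectors of the subspaces $S_i^{b}$; intersecting one half per digit leaves a single basis state, as in the equation above.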
The probability amplitude associated with a channel's $\ket{0}$ and $\ket{1}$ states will be distributed through a different number of states depending on how many other channels there are. The mapping occurs in two steps. First, entanglement between the qubit and the target qudit subspace is created in the $2^{n+1}$-dimensional joint space of the qubit and the qudit. Later, the spaces are separated by erasing the correlations, so that the reduced density matrix of the qudit system carries the information formerly in the qubit.
A similar spreading of one qubit state through a larger multidimensional subspace is found in multilevel encoding \cite{GBR06}. In that case, the spreading is a means of fighting decoherence. We can imagine certain forms of decoherence as a multiplexer whose inputs are a qubit channel and $n-1$ other channels representing the degrees of freedom of the environment that can couple with the qubit, along lines similar to those of the `caricature' model of decoherence as CNOT gates with the environment of \cite{Zur03}. If the subspaces are properly chosen, the unknown effect of the environment won't prevent us from recovering the qubit, in the same way that adding or dropping other channels at arbitrary points does not impede extracting the desired qubit from the qudit just by applying the corresponding partial state transfer and erasure pair of gates. Multilevel encoding can also give efficient gates to operate on the individual qubits while they are still in the qudit.
\subsection{Multiplexers in quantum communication.}
Up to this point, we have considered the $n$ qubit channels to be independent, and an intuitive product state interpretation arises. If the qubits are entangled, everything still holds, since, from the beginning, we have studied multiplexing as a qudit to qudit state transfer. The given circuits already take the $n$-qubit system as a whole. The proposed quantum multiplexer scheme is thus still valid in quantum networks with applications where different channels can be entangled. From the basic quantum communication perspective, though, it will usually be easier to think of each channel as a separate entity, as entanglement does not usually appear between different channels. When multipartite entanglement is needed, it is usually preferred to have an earlier stage of entanglement distribution, or flexible entanglement distribution networks \cite{HB04,BHM96}. These proposed networks usually have a central node that distributes correlations among the users, and entanglement swapping schemes \cite{ZHW97,BVK98} can extend the shared entanglement.
Many existing quantum information applications can be seen under a new light as a case of quantum multiplexing. In classical information transmission through a quantum network the erasure of the original qubit is not necessary. In that case, only the partial state transfer gates are needed at the transmitter. The final state will be only one of the $2^n$ possible values. Thus, at the receiver, we could skip the demultiplexing gates and proceed with a measurement, or include some $\mathcal{S}_i$ controlled NOT gates if we want to generate multiple qubits carrying the classical information.
If we have two channels sending classical information, we only need a +2 and a +1 gate controlled by the first and second bit respectively. One possible embodiment of the qudit is a Bell state. Then, the contents will be sent as $\ket{0}^4=\frac{\ket{00}+\ket{11}}{\sqrt{2}}, \ket{1}^4=\frac{\ket{00}-\ket{11}}{\sqrt{2}},\ket{2}^4=\frac{\ket{10}+\ket{01}}{\sqrt{2}}$ or $\ket{3}^4 =\frac{\ket{10}-\ket{01}}{\sqrt{2}}$ for $\ket{0}\ket{0},\ket{0}\ket{1},\ket{1}\ket{0}$, and $\ket{1}\ket{1}$ respectively. The $CX^4_2$ gate corresponds to a CNOT on the second qubit of the Bell pair, and the $CX^4_1$ to a controlled sign shift, CZ, on the same qubit. The sign shift acts on the target as $Z\ket{x}=(-1)^x\ket{x}$, which changes the sign of $\ket{1}$ states. Both operations can be done without the presence of the first qubit. So, it is possible to send the first qubit in advance and include both channels in one qubit. This is exactly what happens in superdense coding.
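The correspondence with superdense coding can be verified numerically. In this Python/NumPy sketch (an illustration under the Bell-state assignment given above), applying $X$ for the $+2$ gate and $Z$ for the $+1$ gate on the sender's qubit maps $\ket{0}^4$ to the Bell state indexed by the two classical bits:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Bell states in the ordering of the text, two-qubit basis |ab>, index 2a+b.
bell = np.array([[s, 0, 0,  s],    # |0>^4 = (|00>+|11>)/sqrt(2)
                 [s, 0, 0, -s],    # |1>^4 = (|00>-|11>)/sqrt(2)
                 [0, s, s,  0],    # |2>^4 = (|10>+|01>)/sqrt(2)
                 [0, -s, s, 0]])   # |3>^4 = (|10>-|01>)/sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
X2 = np.kron(np.eye(2), X)         # X on the qubit the sender holds
Z2 = np.kron(np.eye(2), Z)         # Z on the same qubit

# Bits (b1, b0): the +2 gate acts as X, then the +1 gate as Z,
# yielding the Bell state |2*b1 + b0>^4.
for b1 in (0, 1):
    for b0 in (0, 1):
        out = bell[0].copy()
        if b1:
            out = X2 @ out
        if b0:
            out = Z2 @ out
        assert np.allclose(out, bell[2 * b1 + b0])
```

Both operators act only on the sender's half of the pair, which is the point exploited by superdense coding.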
This economy of qubits cannot be extended to quantum information transmission. If a superposition exists, we must erase the correlations to the original qubit channels. Instead of the whole qudit, only one half of the system is available, and the state cannot be separated using the present qubits alone. Measuring the $n$ channels would destroy the superposition. Keeping data in a quantum memory would separate the states in a larger Hilbert space, with effects akin to those of decoherence. As each qudit state would be entangled to different states of the ``environment'', here the quantum memory, interference in the recovered qubits would be prevented.
The two qubits must be interpreted as a four-dimensional system. For multiple systems, when no entanglement is present, each system can be studied separately, and the $n$ bits of classical information that $n$ qubits carry can be assigned one bit to each qubit. If there is entanglement, the $n$ bits of information are associated with the joint state \cite{BZ99}. In some cases, the information can even be distributed through the Hilbert space in such a way that no single system, on its own, can be said to carry one bit of information. In maximally entangled states, like the Bell pairs, the information is in the joint system. In superdense coding the information is encoded on the whole qudit. We can act on the encoding even if half of the system is not present, but at the price of losing the capacity to send superpositions.
\subsection{HDMA, orthogonality and multiple access.}
The motivation behind multiplexing is sharing a scarce communication resource, usually at the cost of an increase of complexity in another domain. To avoid interference between users, the signals that transmit the information must be orthogonal.
In quantum information, qubits are given by systems with two orthogonal states that represent $\ket{0}$ and $\ket{1}$. The orthogonality, here, guarantees that there is no interference between the states. The schemes of classical multiple access can be used to achieve this orthogonality. Photonic qubits are quite illustrative in this respect. One example is time-bin qubits \cite{BGT99,TTT02,MRT03} and qudits \cite{SZG05}, where a pulse is divided into a superposition of two or $d$ pulses with a long enough temporal separation so that they can no longer interfere. This can be interpreted as a form of TDMA. A spectral separation similar to FDMA/WDMA has also been proposed for quantum information units \cite{Mol98,BGE99,LWE00} and there are experiments where qubits have been encoded in the sidebands of phase modulated light \cite{SMF95,MMG99,GMS03}. In dual rail and polarization encoded optical qubits \cite{Ral06}, the orthogonal modes are separated paths \cite{CY96,KLM00}, a form of SDMA, and different polarizations \cite{PJF02b,DRM03}, like in PDMA. In all those cases orthogonality is used as a separator for logical values. The same degrees of freedom that can be used for this separation can be employed to combine channels.
There is also a plethora of photonic nonclassical states, each with its own special characteristics \cite{Lou00,Dod02}. Some examples are photon number states, or coherent and squeezed states. We also have at our disposal a great variety of operators that define the observables of photon modes, which present diverse uncertainty relations between non-commuting observables. Quadrature operators, for instance, give analogues to position and momentum observables, and we can measure, among others, the number of photons and the phase of the field. Each set of states has its own orthogonality properties that can serve as the basis of a new multiple access scheme.
There exist many connections between qubit and qudit encodings in orthogonal states and classical multiplexing, modulation and symbol encoding. Different photon number states can encode qubits, like in single-rail encoding \cite{LR02,LLC03}, in a way not unlike the classical assignment of different voltage levels to represent different symbols. It is also possible to create qudits where the number of photons gives the state number, with photon number state $\ket{k}$ corresponding to the $\ket{k}^d$ qudit state, or with more complex correspondences. This representation could be the basis of a HDMA system. In fact, a similar energy-based separation has been proposed for classical information as Power Division Multiple Access \cite{Maz98}.
Continuous signals can be decomposed into in-phase and quadrature components. This decomposition is at the origin of classical quadrature modulation schemes, such as the modulation with multilevel symbols of QAM (quadrature amplitude modulation), and gives related quadrature multiplexing strategies \cite{Hay01}. Quantum information with continuous variables is usually based on the quadratures of a mode. With these canonical operators the methods of classical communications can be employed \cite{BV05,FSB98,SKO06}. Qubits and qudits can be encoded into the infinite-dimensional Hilbert space of the canonical coordinates of photon modes \cite{PMV04}, opening the door for usage in multiple access. Other classical modulation-related strategies, like multicarrier modulation multiple access based on OFDM (orthogonal frequency division multiplexing) \cite{Bin90, ADL98}, are much used in wideband systems and modern wireless networks. A quantum extension could be derived relating different subspaces to the various subcarriers.
There are also schemes that encode qubits in two nearly orthogonal coherent states \cite{CMM99,RGM03,GVR04}. As there are infinitely many coherent states with an almost null overlap, they seem fit for quantum multiplexing scenarios where the same channel can be used for any number of users, choosing a new coherent state when a new user joins the channel. This multiple access philosophy can mimic classical CDMA networks with channel overloading. In those networks, almost orthogonal codes make it possible to accommodate an increasing number of users, at the cost of a degradation in the transmission quality that is proportional to the number of users at a given time \cite{YKK00,SVM00}. The greater the number of users, the more difficult it becomes to find states with a small overlap. It would also be interesting to reproduce the same behaviour with an imperfect state transfer circuit that maps $n$ qubits into a $d< 2^n$ qudit. The given quantum multiplexer has some common features with certain forms of channel coding and quantum data compression \cite{CM00,Lan02,BHL06}. Methods for efficient channel coding could be adapted for multiplexing to give multiple access scenarios with a flexible upper bound on the number of users.
Optical angular momentum, OAM, is also a promising incarnation for qudits \cite{TDT03,VPJ03,MVR04} and a good candidate for optical multiplexing, carrying the information of many channels in a single photon. Optical angular momentum in optical vortices has already been used for multilevel classical communication \cite{BC04,GCP04,RHB06}.
More generally, any observable with many incompatible outcomes can give rise to new multiple access schemes. The principle is the same as the one of qubit design, finding orthogonal states. Other systems will have their own observables, equivalent or not to photons' operators.
\subsection{Overview on HDMA}
HDMA can be seen as a generalization that encompasses all other multiple access schemes. HDMA has a strong connection with CDMA. CDMA is the prevailing multiple access model in third generation mobile radio networks, and has established itself as one of the most important classical multiple access technologies. In CDMA, users exploit the orthogonality, or near orthogonality, of certain codes to share the same band of the spectrum at the same time \cite{PSM82,PO98,Sta01}. Among other advantages, CDMA offers an improved behaviour against noise and a dynamical limit on the number of users, with a graceful degradation when the number of channels that can be perfectly separated has been exceeded. TDMA and FDMA can be seen as special cases of CDMA. The formalism of CDMA makes use of the orthogonality of signals in the Hilbert space. The main difference between CDMA and HDMA is the point of view. HDMA stresses some aspects that are often overlooked in the standard formulation of CDMA, like the complex nature of the elements of the Hilbert space, and the interpretation of coding and decoding in terms of subspace projections. In our multiplexer there are also considerations that do not arise in classical systems. The first point of divergence is the presence of entanglement, which makes it necessary to erase correlations to be able to dispose of the information in the qubits. The second difference is the no-cloning prohibition in quantum information, which complicates recovery at the receiver. Instead of having one copy for each output channel, we need a sequence of gates to extract the information for each destination and erase the correlations with the common channel (the qudit).
The treatment under the HDMA framework unifies the study of multiple access techniques and can be a useful abstraction when comparing systems. It is common to find qudits that combine orthogonality in more than one domain. Some examples are transverse momentum and position entangled qudits \cite{OKB05}, single photon implementations of two qubits using spatial and polarization modes \cite{EKW01} or hyperentangled photons, with polarization, spatial mode, and time energy orthogonality \cite{BLP05}. This also happens in classical systems, where hybrid multiple access methods appear and different domains are used in the separation of uplink and downlink channels, or \emph{duplexing}, and the separation of channels in multiplexing \cite{Rap01}. HDMA emphasizes the equivalence of these systems with others with qubits implemented only in the temporal (TDMA) \cite{TAZ04}, spatial (SDMA) \cite{NPS04,NLA05} or optical angular momentum \cite{MVW01} degree of freedom.
Multiplexing is a process of re-encoding, and can be understood as a state transfer. The same information is put together in a channel that exhausts the scarcer resource, at the cost of another resource that is not so limited. This tradeoff appears in all multiple access systems. For instance, in optical networks, where fibre links are a precious resource, multiplexers take many channels into one link avoiding the costly deployment of optical fibre. There are multiple equivalent systems for qudits. The most trivial embodiment of a $d=2^n$ qudit is $n$ qubits in $n$ channels. For information transmission we prefer more economical encodings, like OAM qudits that can send $n$ qubits in a single photon without increasing the time span or the number of spatial modes. A new important resource in quantum communications is the coherence time. Inside each quantum computer, qubits that interact easily are desirable to perform the quantum algorithms, but, during transmission, decoherence resilient qudits are better, as the only interactions occur at the multiplexer and demultiplexer, and maybe inside quantum repeaters. Other issues that should be taken into account include error correction and fault tolerance. A single qudit is more vulnerable to photon loss problems, but with error correction codes \cite{Got96,CS96} this and other problems can be overcome.
We have presented a quantum version of the multiplexer and demultiplexer gates that appear in communication networks and in the inner architecture of quantum computers. These circuits can be better understood in terms of a qudit state transfer, where the particularities of quantum information are taken into account. The explicit circuit is formed by a group of $n$ subunits that define the inclusion and extraction of each user into the channel and erase the contents of the qubit to avoid undesired entanglement. The HDMA scheme gives a framework to compare different multiple access proposals and to derive new multiple access methods. With it, it is possible to gain some insight into qubit state separation and coding issues. Hopefully, HDMA will improve the potential of new quantum networks where the quantum information of many users will be transmitted sharing the existing network resources for a more efficient distribution.
\section{Introduction}
Understanding neural networks is crucial in applications like autonomous vehicles, health care and robotics, for validation and debugging, as well as for building the trust of users \cite{kim2018textual, uzunova2019interpretable}. This paper strives to understand and explain the decisions of deep neural networks by studying the behavior of predicted attributes when adversarial examples are introduced. We argue that even if no adversary is present in a real world application, adversarial examples can be exploited for understanding neural networks in their failure modes. Most state of the art approaches for interpreting neural networks produce saliency maps from class specific gradient information \cite{selvaraju2017grad, simonyan2013deep, sundararajan2017axiomatic}, or find the part of the image which influences classification the most and remove it by adding perturbations \cite{zeiler2014visualizing, fong2017interpretable}. These approaches reveal the parts of the image that support the classification and visualize the performance on known good examples. This tells little about the boundaries of a class where dubious examples reside.
However, humans motivate their decisions through semantically meaningful observations. For example, this type of bird has a blue head and red belly, so this must be a painted bunting. Hence, we study changes in the predicted attribute values of samples under mild modification of the image through adversarial perturbations. We believe this alternative dimension of study can provide a better understanding of how misclassification in a deep network can best be communicated to humans. Note that we consider adversarial examples that are generated to fool only the classifier and not the interpretation (attributes) mechanism.
Interpreting deep neural network decisions for adversarial examples helps in understanding their internal functioning \cite{tao2018attacks,du2018towards}. Therefore, we explore
\begin{figure}[t]
\includegraphics[width=\linewidth, trim=0 0 0 0, clip]{WACV_Motivation.pdf}
\caption{
Our study with the interpretable attribute prediction-grounding framework shows that, for a clean image, the predicted attributes ``red belly'' and ``blue head'' are coherent with the ground truth class (painted bunting), while for an adversarial image ``white belly'' and ``white head'' are coherent with the wrong class (herring gull).}
\label{fig:Motivation}
\vspace{-3mm}
\end{figure}
\textit{How do the attribute values change under an adversarial attack on the standard classification network?}
However, while describing misclassifications due to adversarial examples with attributes helps in understanding neural networks, it is equally important to assess whether the attribute values retain their discriminative power after the network is made robust to adversarial noise. Hence, we also ask
\textit{How do the attribute values change under an adversarial attack on a robust classification network?}
To answer these questions, we design experiments to investigate which attribute values change when an image is misclassified with increasing adversarial perturbations, and further when the classifier is made robust against an adversarial attack. Through these experiments we intend to demonstrate what attributes are important to distinguish between the right and the wrong class. For instance, as shown in Figure~\ref{fig:Motivation}, ``blue head'' and ''red belly'' associated with the class ``painted bunting'' are predicted correctly for the clean image. On the other hand, due to predicting attributes incorrectly as ``white belly'' and ``white head'', the adversarial image gets classified into ``herring gull'' incorrectly. After analysing the changes in attributes with a standard and with a robust network we propose a metric to quantify the robustness of the network against adversarial attacks. Therefore, we ask
\textit{Can we quantify the robustness of an adversarially robust network? }
In order to answer the third question, we design a robustness quantification metric for both standard as well as attribute based classifiers.
To the best of our knowledge we are the first to exploit adversarial examples with attributes to perform a systematic investigation on neural networks, both \textit{quantitatively} and \textit{qualitatively}, for not only \textit{standard}, but also for \textit{adversarially robust} networks. We explain the decisions of deep computer vision systems by identifying what attributes change when an image is perturbed in order for a classification system to produce a specific output. Our results on three benchmark attribute datasets with varying size and granularity elucidate why adversarial images get misclassified, and why the same images are correctly classified with the adversarially robust framework. Finally we introduce a new metric to quantify the robustness of a network for both general as well as attribute based classifiers.
\vspace{-2.5mm}
\section{Related Work}
In this section, we discuss related work on interpretability and adversarial examples.
\myparagraph{Interpretability.}
Explaining the output of a decision maker is motivated by the need to build user trust before deploying them into the real world environment. Previous work is broadly grouped into two: 1) \textit{rationalization}, that is, justifying the network's behavior and 2) \textit{introspective explanation}, that is, showing the causal relationship between input and the specific output \cite{du2018techniques}. Text-based class discriminative explanations~\cite{hendricks2016generating,park2016attentive}, text-based interpretation with semantic information~\cite{dong2017improving} and counter factual visual explanations \cite{goyal2019counterfactual} fall in the first category. On the other hand activation maximization \cite{simonyan2013deep, zintgraf2017visualizing}, learning the perturbation mask \cite{fong2017interpretable}, learning a model locally around its prediction and finding important features by propagating activation differences \cite{ribeiro2016should,shrikumar2017learning} fall in the second group. The first group has the benefit of being human understandable, but it lacks the causal relationship between input and output. The second group incorporates internal behavior of the network, but lacks human understandability. In this work, we incorporate human understandable justifications through attributes and causal relationship between input and output through adversarial attacks.
\myparagraph{Interpretability of Adversarial Examples.} After analyzing neuronal activations of the networks for adversarial examples in \cite{dong2017towards} it was concluded that the networks learn recurrent discriminative parts of objects instead of semantic meaning. In \cite{jiang2018recent}, the authors proposed a datapath visualization module consisting of the layer level, feature level, and the neuronal level visualizations of the network for clean as well as adversarial images.
In \cite{zhang2019interpreting}, the authors investigated adversarially trained convolutional neural networks by constructing images with different textural transformations while preserving the shape information to verify the shape bias in adversarially trained networks compared with standard networks. Finally, in \cite{tsipras2018robustness}, the authors showed that the saliency maps from adversarially trained networks align well with human perception.
These approaches use saliency maps for interpreting the adversarial examples, but
saliency maps~\cite{selvaraju2017grad} are often weak in justifying classification decisions, especially for fine-grained adversarial images. For instance, in Figure~\ref{fig:saliency} the saliency map of a clean image classified into the ground truth class, ``red winged blackbird'', and the saliency map of a misclassified adversarial image, look quite similar. Instead, we propose to predict and ground attributes for both clean and adversarial images to provide visual as well as attribute-based interpretations. In fact, our predicted attributes for clean and adversarial images look quite different. By grounding the predicted attributes one can infer that the ``orange wing'' is important for ``red winged blackbird'' while the ``red head'' is important for ``red faced cormorant''. Indeed, when the attribute value for orange wing decreases and red head increases the image gets misclassified.
\myparagraph{Adversarial Examples.} Small carefully crafted perturbations, called \textit{adversarial perturbations}, when added to the inputs of deep neural networks, result in \textit{adversarial examples}. These adversarial examples can easily drive the classifiers to the wrong classification~\cite{szegedy2013intriguing}. Such attacks involve iterative fast gradient sign method (IFGSM) \cite{kurakin2016adversarial}, Jacobian-based saliency map attacks \cite{papernot2016limitations}, one pixel attacks \cite{su2019one}, Carlini and Wagner attacks \cite{carlini2017towards} and universal attacks \cite{moosavi2016deepfool}. We select IFGSM for our experiments, but our method can also be used with other types of adversarial attacks.
Adversarial examples can also be used for understanding neural networks. \cite{anonymous2020evaluations} aims at utilizing adversarial examples for understanding deep neural networks by extracting the features that provide the support for classification into the target class. The most salient features in the images provide a way to interpret the decision of a classifier, but they lack human understandability. Additionally, finding the most salient features is computationally rather expensive. The crucial point, however, is that if humans explain classification by attributes, attributes are also natural candidates to study misclassification and robustness. Hence, in this work, in order to understand neural networks, we utilize adversarial examples with attributes, which explain the misclassification caused by adversarial attacks.
\vspace{-1.5mm}
\section{Method}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth, trim=0 0 0 0, clip]{WACV_Saliency_vs_ours.pdf}
\caption{\textbf{Adversarial images are difficult to explain:} when the answer is wrong, often saliency based methods (left) fail to detect what went wrong. Instead, attributes (right) provide intuitive and effective visual and textual explanations.}
\vspace{-2.5mm}
\vspace{-1mm}
\label{fig:saliency}
\end{figure}
In this section, in order to explain what attributes change when an adversarial attack is performed on the classification mechanism of the network, we detail a two-step framework. First, we perturb the images using adversarial attack methods and robustify the classifiers via adversarial training. Second, we predict the class specific attributes and visually ground them on the image to provide an intuitive justification of why an image is classified as a certain class. Finally, we introduce our metric for quantifying the robustness of an adversarially robust network against adversarial attacks.
\subsection{Adversarial Attacks and Robustness}
Given a clean $n\text{-th}$ input $x_n$ and its respective ground truth class $y_n$ predicted by a model $f(x_n)$, an adversarial attack model generates an image $\hat{x}_n$ for which the predicted class is $y$, where $y \neq y_n$. In the following, we detail an adversarial attack method for fooling a general classifier and an adversarial training technique that robustifies it.
\myparagraph{Adversarial Attacks.} The iterative fast gradient sign method (IFGSM)~\cite{kurakin2016adversarial} is leveraged to fool only the classifier network. IFGSM solves the following equation to produce adversarial examples:
\vspace{-1.5mm}
\begin{align}
& \hat{x}_n^0 = x_n \nonumber \\
& \hat{x}_n^{i+1}=\text{Clip}_{\epsilon}\{\hat{x}_n^{i}+\alpha\,\text{Sign}(\bigtriangledown_{\hat{x}_n^i}\mathcal{L}(\hat{x}_n^i,y_{n}))\}
\end{align}
where $\bigtriangledown_{\hat{x}_n^i}\mathcal{L}$ represents the gradient of the cost function w.r.t. the perturbed image $\hat{x}_n^i$ at step $i$, $\alpha$ determines the step size taken in the direction of the sign of the gradient, and the result is clipped to the $\epsilon$-ball around the clean input by $\text{Clip}_{\epsilon}$.
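As an illustration, the iterative update above can be sketched in numpy against a toy linear softmax classifier (an assumption made purely for a runnable example; the paper attacks a fine-tuned Resnet-152):

```python
import numpy as np

def cross_entropy(logits, y):
    """Numerically stable cross-entropy for a single example."""
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def ifgsm(x, y, W, eps=0.06, alpha=0.01, steps=10):
    """IFGSM sketch: ascend the loss w.r.t. the input, one signed step
    at a time, clipping back into the l_inf eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        logits = x_adv @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()
        dlogits = p.copy()
        dlogits[y] -= 1.0          # d(cross-entropy)/d(logits)
        grad = W @ dlogits         # chain rule back to the input
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # Clip_eps
    return x_adv
```

The signed step drives the loss up while the clipping keeps the perturbation imperceptibly small in the $\ell_\infty$ sense.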
\myparagraph{Adversarial Robustness.} We use \textit{adversarial training} as a defense against adversarial attacks which minimizes the following objective \cite{43405}:
\begin{align}
\mathcal{L}_{adv}(x_n,y_n) & = \alpha \mathcal{L}(x_n,y_n)
+ (1-\alpha)\mathcal{L}(\hat{x}_n,y)
\end{align}
where $\mathcal{L}(x_n,y_n)$ is the classification loss for clean images, $\mathcal{L}(\hat{x}_n,y)$ is the loss for adversarial images and $\alpha$ regulates the balance between the two terms. The model finds the worst case perturbations and fine tunes the network parameters to reduce the loss on perturbed inputs. This results in a robust network $f^r(\hat{x})$, which improves the classification accuracy on adversarial images.
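The weighted objective reduces to a convex combination of the two losses; a minimal sketch (assuming cross-entropy as $\mathcal{L}$, which the paper does not state explicitly):

```python
import numpy as np

def cross_entropy(logits, y):
    """Numerically stable cross-entropy for a single example."""
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def adv_training_loss(logits_clean, logits_adv, y_clean, y_adv, alpha=0.5):
    """L_adv = alpha * L(x_n, y_n) + (1 - alpha) * L(x_hat_n, y):
    a convex combination of the clean and adversarial losses."""
    return (alpha * cross_entropy(logits_clean, y_clean)
            + (1 - alpha) * cross_entropy(logits_adv, y_adv))
```

Setting $\alpha=1$ recovers standard training on clean images; $\alpha=0$ trains only on the perturbed inputs.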
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{WACV_Model_Figure}
\caption{\textbf{Interpretable attribute prediction-grounding model.} After an adversarial attack or adversarial training step, image features of both clean $\theta(x_n)$ and adversarial images $\theta(\hat{x})$ are extracted using Resnet and mapped into attribute space $\phi(y)$ by learning the compatibility function $F(x_n,y_n;W)$ between image features and class attributes. Finally, attributes predicted by attribute based classifier $\bold{A}_{x_n,y_n}^q$ are grounded by matching them with attributes predicted by Faster RCNN $\mathbb{A}_{x_n}^j$ for clean and adversarial images.}
\vspace{-2.5mm}
\vspace{-1.5mm}
\label{fig:ADV_SJE}
\end{figure}
\vspace{-2mm}
\subsection{Attribute Prediction and Grounding}
Our attribute prediction and grounding model uses attributes to define a joint embedding space that the images are mapped to.
\myparagraph{Attribute prediction.} The model is shown in Figure~\ref{fig:ADV_SJE}. During training our model maps clean training images close to their respective class attributes, e.g. ``painted bunting'' with attributes ``red belly, blue head'', whereas adversarial images get mapped close to a wrong class, e.g. ``herring gull'' with attributes ``white belly, white head''.
We employ structured joint embeddings (SJE)~\cite{akata2015evaluation} to predict attributes in an image. Given the input image features $\theta(x_n) \in \mathcal{X}$ and output class attributes $\phi(y_n) \in \mathcal{Y}$ from the sample set $\mathcal{S}=\{(\theta(x_n),\phi(y_n)),\, n=1,\dots,N\}$, SJE learns a mapping $\mathbb{f}:\mathcal{X} \to \mathcal{Y}$ by minimizing the empirical risk of the form $\frac{1}{N}\sum_{n=1}^N \Delta(y_n,\mathbb{f}(x_n))$ where $\Delta: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ estimates the cost of predicting $\mathbb{f}(x_n)$ when the ground truth label is $y_n$.
A compatibility function $F:\mathcal{X}\times\mathcal{Y}\to \mathbb{R}$ is defined between the input space $\mathcal{X}$ and the output space $\mathcal{Y}$:
\begin{equation}
F(x_n,y_n;W)=\theta(x_n)^TW\phi(y_n)
\end{equation}
The pairwise ranking loss $\mathbb{L}(x_n,y_n,y)$ is used to learn the parameters $W$:
\begin{equation}
\Delta(y_n,y)+\theta(x_n)^TW\phi(y_n)-\theta(x_n)^TW\phi(y)
\end{equation}
Attributes are predicted for both clean and adversarial images by:
\vspace{-3mm}
\begin{equation}
\vspace{-3mm}
\bold{A}_{n,y_n}=\theta(x_n)W \, , \bold{\hat{A}}_{n,y}=\theta(\hat{x}_n)W
\end{equation}
The image is assigned to the label of the nearest output class attributes $\phi(y_n)$.
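The prediction and assignment steps can be sketched as follows (a minimal numpy illustration of Eq. 5 and the nearest-attribute assignment; in the actual model $\theta(x_n)$ are Resnet features and $W$ is the learned SJE matrix):

```python
import numpy as np

def predict_attributes(theta_x, W):
    """Eq. 5 sketch: predicted attribute vector A = theta(x) W."""
    return theta_x @ W

def classify_by_attributes(theta_x, W, class_attrs):
    """Assign the label of the class whose ground truth attribute
    vector phi(y) is nearest to the predicted attributes."""
    A = predict_attributes(theta_x, W)
    dists = np.linalg.norm(class_attrs - A, axis=1)
    return int(np.argmin(dists))
```

The same routine classifies clean and adversarial features alike, which is what allows the attribute based classifier to be compared against the general one under attack.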
\myparagraph{Attribute grounding.} In our final step, we ground the predicted attributes on to the input images using a pre-trained Faster RCNN network and visualize them as in~\cite{anne2018grounding}. The pre-trained Faster RCNN model $\mathcal{F}(x_n)$ predicts the bounding boxes denoted by $b^j$. For each object bounding box it predicts the class $\mathbb{Y}^j$ as well as the attribute $\mathbb{A}^j$ ~\cite{anderson2018bottom}.
\begin{equation}
\vspace{-1.5mm}
b^j,\mathbb{A}^j,\mathbb{Y}^j=\mathcal{F}(x_n)
\end{equation}
where $j$ is the bounding box index. The most discriminative attributes predicted by SJE are selected based on the criterion that they change the most when the image is perturbed with noise. For clean images we use:
\begin{equation}
q=\underset{i}{\mathrm{argmax}}(\bold{A}_{n,y_n}^i-\phi(y^i))
\label{eq:att_sel1}
\vspace{-1mm}
\end{equation}
and for adversarial images we use:
\begin{equation}
p=\underset{i}{\mathrm{argmax}}(\bold{\hat{A}}_{n,y}^i-\phi(y_n^i)).
\label{eq:att_sel2}
\vspace{-1mm}
\end{equation}
where $i$ is the attribute index, $q$ and $p$ are the indexes of the most discriminative attributes predicted by SJE and $\phi(y^i)$, $\phi(y_n^i)$ are wrong class and ground truth class attributes respectively. Then we search for selected attributes $\bold{A}_{x_n,y_n}^q, \bold{A}_{\hat{x}_n,y}^p$ in attributes predicted by Faster RCNN for each bounding box $\mathbb{A}_{x_n}^j, \mathbb{A}_{\hat{x}_n}^j$, and when the attributes predicted by SJE and Faster RCNN are found, that is $\bold{A}_{x_n,y_n}^q = \mathbb{A}_{x_n}^j$, $\bold{A}_{\hat{x}_n,y}^p = \mathbb{A}_{\hat{x}_n}^j$ we ground them on their respective clean and adversarial images. Note that the adversarial images being used here are generated to fool only the general classifier \textit{and not the attribute predictor nor the Faster RCNN}.
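The selection in Eqs. 7 and 8 reduces to an argmax over per-attribute margins; a sketch (the top-$k$ generalization is our addition for illustration, matching the later experiments that keep the $50$ or $100$ most changed attributes):

```python
import numpy as np

def most_discriminative(A_pred, phi_ref, k=1):
    """Eqs. 7-8 sketch: indices of the attributes whose predicted value
    exceeds the reference class attribute by the largest margin.
    For clean images phi_ref is the wrong-class attribute vector phi(y);
    for adversarial images it is the ground-truth vector phi(y_n)."""
    margins = A_pred - phi_ref
    return np.argsort(-margins)[:k]
```

The returned indices are then matched against the Faster RCNN attribute predictions per bounding box to decide which attributes can be grounded.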
\subsection{Robustness Quantification}
To quantify a network's capacity for robustification independently of its standard classification performance, we introduce a metric called the \textit{robust ratio}. We calculate the loss of accuracy $L_R$ on a robust classifier, by comparing a standard classifier $f(x_n)$ on clean images with the robust classifier $f^r(\hat{x}_n)$ on the adversarially perturbed images as given below:
\begin{equation}
L_R=f(x_n)-f^r(\hat{x}_n)
\end{equation}
And then we calculate the loss of accuracy $L_S$ on a standard classifier, by comparing its accuracy on the clean and adversarially perturbed images:
\vspace{-2mm}
\begin{equation}
L_S=f(x_n)-f(\hat{x}_n)
\end{equation}
The ability to robustify is then defined as:
\vspace{-2mm}
\begin{equation}
R=\frac{L_R}{L_S}
\end{equation}
$R$ is the robust ratio. It indicates the fraction of the standard classifier's accuracy loss under attack that the robust classifier still incurs; the smaller $R$, the more of the lost accuracy the robust classifier recovers.
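The metric is straightforward to compute from three accuracies; a sketch, using the CUB numbers reported later (clean $81\%$, adversarial $31\%$) together with a purely hypothetical robust-classifier accuracy of $60\%$ on adversarial images:

```python
def robust_ratio(acc_clean_std, acc_adv_std, acc_adv_robust):
    """R = L_R / L_S with L_R = f(x_n) - f^r(x_hat_n) and
    L_S = f(x_n) - f(x_hat_n): the fraction of the standard
    classifier's accuracy drop that the robust classifier still loses."""
    L_R = acc_clean_std - acc_adv_robust
    L_S = acc_clean_std - acc_adv_std
    return L_R / L_S
```

A perfectly robust classifier (matching clean accuracy on adversarial inputs) gives $R=0$, while $R=1$ means adversarial training recovered nothing.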
\vspace{-1.5mm}
\section{Experiments}
\begin{figure*}[t]
\centering
\includegraphics[width=0.324\linewidth, trim=15 0 45 20, clip]{Acc_AWA_1.png}
\includegraphics[width=0.324\linewidth, trim=15 0 45 20, clip]{Acc_CUB_1.png}
\includegraphics[width=0.324\linewidth, trim =10 115 0 110, clip]{Legend}
\caption{\textbf{Comparing the accuracy of the general and the attribute based classifiers for adversarial examples to investigate change in attributes.} We evaluate both classifiers by extracting features from a standard network and the adversarially robust network.}
\vspace{-2.5mm}
\label{fig:acc_plots}
\end{figure*}
In this section, we perform experiments on three different datasets and analyse the change in attributes for clean as well as adversarial images. We additionally analyse results for our proposed robustness quantification metric on both general and attribute based classifiers.
\myparagraph{Datasets.} We experiment on three datasets, Animals with Attributes 2 (AwA) \cite{lampert2009learning}, Large attribute (LAD) \cite{zhao2018large} and Caltech UCSD Birds (CUB) \cite{wah2011caltech}. AwA contains 37322 images (22206 train / 5599 val / 9517 test) with 50 classes and 85 attributes per class. LAD has 78017 images (40957 train / 13653 val / 23407 test) with 230 classes and 359 attributes per class. CUB consists of 11,788 images (5395 train / 599 val / 5794 test) belonging to 200 fine-grained categories of birds with 312 attributes per class. All the three datasets contain real valued class attributes representing the presence of a certain attribute in a class.
Visual Genome Dataset \cite{krishna2017visual} is used to train the Faster-RCNN model which extracts the bounding boxes using 1600 object and 400 attribute annotations. Each bounding box is associated with an attribute followed by the object, e.g. a brown bird.
\myparagraph{Image Features and Adversarial Examples.} We extract image features and generate adversarial images using the fine-tuned Resnet-152. Adversarial attacks are performed using IFGSM method with epsilon $\epsilon$ values $0.01$, $0.06$ and $0.12$. The $\l_\infty $ norm is used as a similarity measure between clean input and the generated adversarial example.
\myparagraph{Adversarial Training.}
As for adversarial training, we repeatedly compute the adversarial examples while training the fine-tuned Resnet-152 to minimize the loss on these examples. We generate adversarial examples using the projected gradient descent method, a multi-step variant of FGSM, with $\epsilon$ values $0.01$, $0.06$ and $0.12$ for adversarial training as in~\cite{madry2017towards}.
Note that we are not attacking the attribute based network directly but we are attacking the general classifier and extracting features from it for training the attribute based classifier. Similarly, the adversarial training is also performed on the general classifier and the features extracted from this model are used for training the attribute based classifier.
\myparagraph{Attribute Prediction and Grounding.}
At test time the image features are projected onto the attribute space. The image is assigned with the label of the nearest ground truth attribute vector. The predicted attributes are grounded by using Faster-RCNN pre-trained on Visual Genome Dataset since we do not have ground truth part bounding boxes for any of attribute datasets.
\vspace{-2.5mm}
\section{Results}
We investigate the change in attributes quantitatively (i) by performing classification based on attributes and (ii) by computing distances between attributes in embedding space. We additionally investigate changes qualitatively by grounding the attributes on images for both standard and adversarially robust networks.
At first, we compare the general classifier $f(x_n)$ and the attribute based classifier $\mathbb{f}(x_n)$ in terms of classification accuracy on clean images. The attribute based model is the more explainable classifier, since it predicts attributes rather than the class label directly. We therefore first verify whether the attribute based classifier performs as well as the general classifier. We find that the attribute based and general classifier accuracies are comparable for AWA (general: 93.53, attribute based: 93.83). The attribute based classifier accuracy is slightly higher for LAD (general: 80.00, attribute based: 82.77), and slightly lower for CUB (general: 81.00, attribute based: 76.90).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth, trim=0 0 0 30, clip]{standard_quantitative.pdf}
\vspace{-3mm}
\caption{ \textbf{Attribute distance plots for standard learning frameworks.} Standard learning framework plots are shown for the clean and the adversarial image attributes.}
\label{fig:standardattr}
\vspace{-3mm}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth, trim=0 5 0 30, clip]{Standard_qualitative.pdf}
\vspace{-6mm}
\caption{\textbf{Qualitative analysis for adversarial attacks on standard network.} The attributes ranked by importance for the classification decision are shown below the images. The grounded attributes are color coded for visibility (the ones in gray could not be grounded). The attributes for clean images are related to the ground truth classes whereas the ones predicted for adversarial images are related to the wrong classes. }
\label{fig:Qualitative-1}
\vspace{-3mm}
\end{figure*}
To qualitatively analyse the predicted attributes, we ground them on clean and adversarial images. We select our images among the ones that are correctly classified when clean and incorrectly classified when adversarially perturbed. Further we select the most discriminative attributes based on equation \ref{eq:att_sel1} and \ref{eq:att_sel2}. We evaluate $50$ attributes that change their value the most for the CUB, $50$ attributes for the AWA, and $100$ attributes for the LAD dataset.
\subsection{Adversarial Attacks on Standard Network}
\subsubsection{Quantitative Analysis}
\vspace{-2.5mm}
\myparagraph{By Performing Classification based on Attributes.} With adversarial attacks, the accuracy of both the general and attribute based classifiers drops with the increase in perturbations see Figure~\ref{fig:acc_plots} (blue curves). The drop in accuracy of the general classifier for the fine grained CUB dataset is higher as compared to the coarse AWA dataset which confirms our hypothesis. For example, at $\epsilon=0.01$ for the CUB dataset the general classifier's accuracy drops from $81\%$ to $31\%$ ($\approx 50\%$ drop), while for the AWA dataset it drops from $93.53\%$ to $70.54\%$ ($\approx 20\%$ drop). However, the drop in accuracy with the attribute based classifier is almost equal for both, $\approx 20\%$ . We propose one of the reasons behind the smaller drop of accuracy for the CUB dataset with the attribute based classifier compared to the general classifier is that for fine grained datasets there are many common attributes among classes. Therefore, in order to misclassify an image a significant number of attributes need to be changed. For a coarse grained dataset, changing a few attributes is sufficient for misclassification. Another reason is that there are $9\%$ more attributes per class in the CUB dataset as compared to the AWA dataset.
\vspace{-0.5mm}
For the coarse dataset, the attribute based classifier shows performance comparable to the general classifier, while for the fine grained dataset it performs better, so a larger change in attribute values is required to cause misclassification. Overall, the drop in accuracy under adversarial attacks demonstrates that, with adversarial perturbations, the attribute values change towards those of the wrong class and cause the misclassification.
\myparagraph{By Computing Distances in Embedding Space.} In order to perform analysis on attributes in embedding space, we consider the images which are correctly classified without perturbations and misclassified with perturbations. Further, we select the top $20\%$ of the most discriminative attributes using equation \ref{eq:att_sel1} and \ref{eq:att_sel2}. Our aim is to analyse the change in attributes in embedding space.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth, trim=0 0 0 27, clip]{robust_quantitative.pdf}
\vspace{-3mm}
\caption{ \textbf{Attribute distance plots for robust learning frameworks.} Robust learning framework plots are shown only for the adversarial image attributes but for adversarial images misclassified with the standard features and correctly classified with the robust features.}
\label{fig:robustattr}
\vspace{-3mm}
\vspace{-2.5mm}
\end{figure}
We contrast the Euclidean distance between predicted attributes of clean and adversarial samples:
\begin{equation}
d_1 = d\{\bold{A}_{n,y_n},\bold{\hat{A}}_{n,y}\} =\parallel \bold{A}_{n,y_n}-\bold{\hat{A}}_{n,y} \parallel_2
\label{eq:d1_1}
\end{equation}
with the Euclidean distance between the ground truth attribute vector of the correct and wrong classes:
\begin{equation}
d_2 = d\{\phi(y_n),\phi(y)\}=\parallel\phi(y_n)-\phi(y)) \parallel_2
\label{eq:d2_1}
\end{equation}
and show the results in Figure~\ref{fig:standardattr}. Here, $\bold{A}_{n,y_n}$ denotes the predicted attributes for the clean images classified correctly, and $\bold{\hat{A}}_{n,y}$ denotes the predicted attributes for the adversarial images misclassified with a standard network. The ground truth class attributes are referred to as $\phi(y_n)$ and the wrong class attributes as $\phi(y)$.
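Eqs. 9 and 10 are plain Euclidean norms over attribute vectors; a minimal sketch:

```python
import numpy as np

def attribute_distances(A_clean, A_adv, phi_true, phi_wrong):
    """Eq. 9: d1, distance between predicted attributes of the clean
    and the adversarial image. Eq. 10: d2, distance between the
    ground-truth attribute vectors of the correct and wrong classes."""
    d1 = np.linalg.norm(A_clean - A_adv)
    d2 = np.linalg.norm(phi_true - phi_wrong)
    return d1, d2
```

Comparing the distributions of $d_1$ and $d_2$ over the test set is what distinguishes the coarse and fine-grained behaviors discussed below.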
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth, trim=0 0 0 30, clip]{robust_qualitative.pdf}
\vspace{-6mm}
\caption{\textbf{Qualitative analysis for adversarial attacks on robust network.} The attributes are ranked by importance for the classification decision, the grounded attributes are color coded for visibility (the ones in gray could not be grounded). The attributes for adversarial images with robust network are related to ground truth classes whereas the ones predicted for adversarial images change towards wrong classes }
\label{fig:Qualitative-2}
\vspace{-3mm}
\end{figure*}
We observe that for the AWA dataset the distances between the predicted attributes for adversarial and clean images $d_1$ are smaller than the distances between the ground truth attributes of the respective classes $d_2$. The closeness of the predicted attributes for clean and adversarial images, compared to their ground truths, shows that attributes change towards the wrong class, but not completely. This is because for coarse classes only a small change in attribute values is sufficient to change the class.
The fine-grained CUB dataset behaves differently. The overlap between the $d_1$ and $d_2$ distributions demonstrates that attributes of images belonging to fine-grained classes change significantly more than those of images from coarse categories. Since fine-grained classes are close to each other and share many common attributes, the attributes need to change substantially to cause misclassification. Hence, for the coarse dataset the attributes change minimally, while for the fine-grained dataset they change significantly.
\vspace{-5mm}
\subsubsection{Qualitative Analysis}
We observe in Figure~\ref{fig:Qualitative-1} that the most discriminative attributes for the clean images are coherent with the ground truth class and are localized accurately; however, for adversarial images they are coherent with the wrong class. Those attributes which are common to both the clean and the adversarial class are localized correctly on the adversarial images; however, the attributes which are not related to the ground truth class but only to the wrong class cannot be grounded, as there is no visual evidence supporting their presence. For example, the ``brown wing, long wing, long tail'' attributes are common to both classes; hence, they are present in both the clean image and the adversarial image. On the other hand, ``has a brown color'' and ``a multicolored breast'' are related to the wrong class and are not present in the adversarial image; hence, they cannot be grounded. Similarly, in the second example none of the attributes are grounded, because the attributes changed completely towards the wrong class and the evidence for those attributes is not present in the image. This indicates that the attributes for clean images correspond to the ground truth class and those for adversarial images correspond to the wrong class. Additionally, only those attributes common to both the wrong and the ground truth classes get grounded on adversarial images.
Similarly, our results on the LAD and AWA datasets in the second row of Figure~\ref{fig:Qualitative-1} show that the grounded attributes on clean images confirm the classification into the ground truth class while the attributes grounded on adversarial images are common among clean and adversarial images. For instance, in the first example of AWA, the ``is black'' attribute is common in both classes so it is grounded on both images, but ``has claws'' is an important attribute for the adversarial class. As it is not present in the ground truth class, it is not grounded.
\begin{figure*}[t]
\centering
\includegraphics[width=0.35\linewidth, trim=0 0 10 20, clip]{Robustifiability.png}
\includegraphics[width=0.35\linewidth, trim=0 0 10 20, clip]{Robustifiability_SJE.png}
\vspace{-1mm}
\caption{\textbf{Ability to robustify a network.} Ability to robustify a network with increasing adversarial perturbations is shown for three different datasets for both general and attribute based classifiers.}
\label{fig:Robustifiability}
\vspace{-4mm}
\end{figure*}
In contrast to CUB, adversarially perturbed images from the AWA and LAD datasets do not necessarily get misclassified into the most similar class, as these are coarse-grained datasets. Therefore, there is less overlap of attributes between the ground truth and adversarial classes, which is in accordance with our quantitative results. Furthermore, the attributes of both datasets are not highly structured, as different objects can be distinguished from each other with only a small number of attributes.
\subsection{Adversarial Attacks on Robust Network}
\subsubsection{Quantitative Analysis}
\vspace{-2.5mm}
\myparagraph{By Performing Classification based on Attributes.} Our evaluation on the standard and adversarially robust networks shows that the classification accuracy on adversarial images improves when adversarial training is used to robustify the network, see Figure~\ref{fig:acc_plots} (purple curves). For example, for AWA the accuracy of the general classifier improves from $70.54\%$ to $92.15\%$ ($\approx 21\%$ improvement) under an adversarial attack with $\epsilon=0.01$. As expected, for the fine-grained CUB dataset the improvement is $\approx 31\%$, higher than for AWA. However, for the attribute based classifier, the improvement in accuracy for AWA ($\approx 18.06\%$) is almost double that of the CUB dataset ($\approx 7\%$). We attribute this to the coarseness of AWA: to classify an adversarial image correctly into its ground truth class, a small change in attributes is sufficient, whereas the fine-grained CUB dataset requires a large change in attribute values. Additionally, CUB contains $9\%$ more attributes per class.
This shows that with a robust network the change in attribute values for adversarial images points towards the ground truth class, resulting in better performance. Overall, analysing the accuracy of the attribute based classifier shows that under adversarial attacks the change in attribute values indicates which wrong class the image is assigned to, while with the robust network it points towards the ground truth class.
\myparagraph{By Computing Distances in Embedding Space.}
We compare the distances between the predicted attributes of only adversarial images that are classified correctly with the help of an adversarially robust network $\bold{\hat{A}}^{{r}}_{n,y_n}$ and classified incorrectly with a standard network $\bold{\hat{A}}_{n,y}$:
\vspace{-2.5mm}
\begin{equation}\label{eq:d1_3}
d_1 = d\{\bold{\hat{A}}^{{r}}_{n,y_n},\bold{\hat{A}}_{n,y}\}=\parallel \bold{\hat{A}}^{{r}}_{n,y_n}-\bold{\hat{A}}_{n,y} \parallel_2
\vspace{-2.5mm}
\end{equation}
with the distances between the ground truth target class attributes $\phi(y_n)$ and ground truth wrong class attributes $\phi(y)$:
\vspace{-2.5mm}
\begin{equation}\label{eq:d2_3}
d_2 = d\{\phi(y_n),\phi(y)\}=\parallel\phi(y_n)-\phi(y) \parallel_2
\end{equation}
The results are shown in Figure~\ref{fig:robustattr}. Comparing Figure~\ref{fig:robustattr} with Figure~\ref{fig:standardattr}, we observe a similar behavior. The plots in Figure~\ref{fig:standardattr} compare clean and adversarial image attributes, while the plots in Figure~\ref{fig:robustattr} compare only adversarial images, classified correctly with an adversarially robust network versus misclassified with a standard network. This shows that adversarial images classified correctly with a robust network behave like clean images, i.e. a robust network predicts attributes for adversarial images which are close to their ground truth class.
\vspace{-2.5mm}
\subsubsection{Qualitative Analysis}
Finally, our analysis of images correctly classified by the adversarially robust network shows that, with the robust network, adversarial images also behave like clean images visually. In Figure~\ref{fig:Qualitative-2}, we observe that the attributes of an adversarial image with a standard network are closer to the adversarial class attributes, whereas the grounded attributes of the adversarial image with a robust network are closer to its ground truth class. For instance, the first example contains a ``blue head'' and a ``black wing'', whereas one of the most discriminating properties of the ground truth class, the ``blue head'', is not relevant to the adversarial class. Hence this attribute is not predicted as most relevant by our model, and thus our attribute grounder does not ground it. This shows that the attributes for adversarial images classified correctly with the robust network are in accordance with the ground truth class and hence get grounded on the adversarial images.
\subsection{Analysis for Robustness Quantification}
The results for our proposed robustness quantification metric are shown in Figure~\ref{fig:Robustifiability}. We observe that the ability to robustify a network against adversarial attacks varies across datasets. A network trained on the fine-grained CUB dataset is easier to robustify than one trained on the coarse AWA and LAD datasets. For the general classifier, as expected, the ability to robustify the network increases with the noise. For the attribute based classifier, the ability to robustify the network is high for small noise, drops as the noise increases (at $\epsilon=0.06$), and then increases again at a high noise value (at $\epsilon=0.12$).
\vspace{-2.5mm}
\section{Conclusion}
In this work we conducted a systematic study on understanding neural networks by exploiting adversarial examples with attributes. We showed that if a noisy sample gets misclassified, then its most discriminative attribute values indicate which wrong class it is assigned to. On the other hand, if a noisy sample is correctly classified by the robust network, then the most discriminative attribute values point towards the ground truth class. Finally, we proposed a metric for quantifying the robustness of a network and showed that the ability to robustify a network varies across datasets. Overall, the ability to robustify a network increases with the adversarial perturbation.
\section{Introduction}
A classical theorem of S. Mazur asserts that the convex hull of a compact set in a Banach space is again
relatively compact. In a similar way, Krein-\v{S}mulian's Theorem says that the same property holds for
weakly compact sets, that is, these sets have relatively weakly compact convex hull. There is a third
property, lying between these two main kinds of compactness, which is defined in terms of Ces\`aro
convergence. Namely, a subset $A$ of a Banach space $X$ is called Banach-Saks if every sequence in $A$ has a
Ces{\`a}ro convergent subsequence (i.e. every sequence $ (x_n)_n$ in $A$ has a subsequence $ (y_n)_n$ such
that the sequence of arithmetic means $((1/n)\sum_{i=1}^n y_{i})_n$ is norm-convergent in $X$). In modern
terminology, as it was pointed out by H. P. Rosenthal \cite{Rosenthal}, this is equivalent to saying that no
difference sequence in $A$ generates an $\ell_1$-spreading model.
The Banach-Saks property has its origins in the work of S. Banach and S. Saks \cite{BS}, after whom the property is
named. In that paper it was proved that the unit ball of $L_p$ ($1<p<\infty$) is a Banach-Saks set.
Recall that a Banach space is said to have the Banach-Saks property when its unit ball is a Banach-Saks set.
This property has been widely studied in the literature (see for instance \cite{Baernstein}, \cite{Beauzamy},
\cite{Farnum}) and more recently in \cite{ASS} and \cite{DSS}. Observe that since a
Banach space with the Banach-Saks property must be reflexive \cite{NW}, it is clear that neither $L_1$ nor
$L_\infty$ have this property. However, weakly compact sets in $L_1$ are Banach-Saks \cite{Szlenk}, and every
sequence of disjoint elements in $L_\infty$ is also a Banach-Saks set.
Since every compact set is Banach-Saks, and these sets are in turn weakly compact, taking into account both Mazur's and Krein-\v{S}mulian's results, it may seem reasonable to expect that the convex hull of a Banach-Saks set is also Banach-Saks. We will show in Section \ref{counterexample} that this is not the case in general. We present a canonical example consisting of the weakly-null unit basis
$(u_n)_n$ of a Schreier-like space $X_\mc F$ for a certain family of finite subsets $\mc F$ on $\N$ that we
call a $T$-family (see Definitions \ref{def-2} and \ref{ij4tijrigrf}). The role of the Schreier-like spaces and such families
is not incidental. There are several equivalent conditions to the
Banach-Saks property in terms of properties of certain families of finite subsets of $\N$ (see Theorem \ref{char1}), and in fact we
prove in Theorem \ref{erioeiofjioedf} that a possible counterexample must be of the form $X_\mc F$ for a
$T$-family $\mc F$. Therefore, an analysis of the families of finite subsets of integers is needed to understand the Banach-Saks property.
The example of a $T$-family we present is influenced by a classical construction of P. Erd\H{o}s and A. Hajnal
\cite{ErHa} of a sequence of measurable subsets of the unit interval indexed by pairs of integers.
These sequences of events behave in general in a completely different way than those indexed by integers, as
it can be seen, for example, in the work of D. Fremlin and M. Talagrand \cite{FrTa}. Coming back to our
space, every subsequence of the basis $(u_n)_n$ has a further subsequence which is equivalent to the unit
basis of $c_0$, yet there is a block sequence of averages of $(u_n)_n$ generating an $\ell_1$-spreading
model. There is also the reflexive counterpart, either by considering a Baernstein space associated to $\mc
F$, or from a more general approach
considering a Davis-Figiel-Johnson-Pelczynski interpolation space of $X_\mc F$.
As far as we know, the main question considered in this paper appeared explicitly in \cite{GG}, where the
authors also proved that every Banach-Saks set in the Schreier space has Banach-Saks convex hull. We will
see in Theorem \ref{ioo34iji4jtr} that this fact can be further extended to Banach-Saks sets contained
in generalized Schreier spaces.
The paper is organized as follows: In Section 2 we introduce some notation, basic definitions and facts
concerning the Banach-Saks property, with a special interest on its combinatorial nature. In Section 3 several sufficient conditions are given for the stability of the Banach-Saks property under taking convex hulls. This includes the study of Banach-Saks sets in Schreier-like spaces $X_{\mc S_\al}$ defined from any generalized Schreier family $\mc S_\al$. Finally, in Section \ref{counterexample} we present a canonical example of a Banach-Saks set whose convex hull is not, as well as the corresponding reflexive version.
\section{Notation, basic definitions and facts}\label{Preliminaries}
We use standard terminology in Banach space theory from the monographs \cite{A-K} and \cite{Li-Tz}. Let us introduce now some basic concepts in infinite Ramsey theory, that will be used
throughout this paper. Unless specified otherwise, by a family $\mathcal{F}$ on a set $I$ we mean a
collection of finite subsets of $I$. We denote infinite subsets by capital letters $M,N,P,\dots$, and finite
ones with $s,t,u,\dots$. Given a family $\mc F$ on $\N$, and $M\con \N$, we define the \emph{trace} $\mc
F[M]$ of $\mc F$ in $M$ and the \emph{restriction} $\mc F\rest M$ of $\mc F$ in $M$ as
\begin{align*}
\mc F[M]:=&\conj{s\cap M}{s\in \mc F}, \\
\mc F\rest M:=&\conj{s\in \mc F}{s\con M},
\end{align*}
respectively. A family $\mc F$ on $I$ is called \emph{compact} when it is compact with respect to the topology
induced by the product topology on $2^I$. The family $\mc F$ is \emph{pre-compact}, or relatively compact, when the
topological closure of $\mc F$ consists only of finite subsets of $I$. The family $\mc F$ is
\emph{hereditary} when for every $s\con t\in \mc F$ one has that $s\in \mc F$. The $\con$-closure of $\mc F$
is the minimal hereditary family $\widehat{\mc F}$ containing $\mc F$, i.e. $\widehat{\mc F}:=\conj{t\con
s}{s\in \mc F}$. It is easy to see that $\mc F$ is pre-compact if and only if $\widehat{\mc F}$ is compact.
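These set operations are elementary and can be implemented directly for finite families. The following Python sketch, an illustration only and not part of the formal development, computes the trace $\mc F[M]$, the restriction $\mc F\rest M$, and the $\con$-closure $\widehat{\mc F}$:

```python
from itertools import combinations

def trace(F, M):
    """The trace F[M] = { s ∩ M : s in F }."""
    return {s & M for s in F}

def restriction(F, M):
    """The restriction F|M = { s in F : s ⊆ M }."""
    return {s for s in F if s <= M}

def subset_closure(F):
    """The ⊆-closure: the minimal hereditary family containing F."""
    return {frozenset(t) for s in F for r in range(len(s) + 1)
            for t in combinations(s, r)}

F = {frozenset({1, 2}), frozenset({2, 3}), frozenset({4})}
M = frozenset({2, 3, 4})
print(sorted(map(sorted, trace(F, M))))        # [[2], [2, 3], [4]]
print(sorted(map(sorted, restriction(F, M))))  # [[2, 3], [4]]
```

Note that, as in the text, the trace may contain sets that are not members of $\mc F$, while the restriction is always a subfamily of $\mc F$.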
Typical examples of pre-compact families are
\begin{align*}
{[I]}^n\,\,\, :=& \{s\con I : \#s =n\},\\
{[I]}^{\leq n} :=& \{s\con I : \#s \leq n\}, \\
{[I]}^{<\omega} :=& \{s\con I : \#s < \infty\}.
\end{align*}
A natural procedure to obtain pre-compact families is to consider, given a relatively weakly-compact subset
$\mc K$ of $c_0$ and $\vep,\de>0$, the sets
\begin{align*}
\supp_{\vep}(\mc K):= &\conj{\supp_{\vep}x}{x\in \mc K},\\
\supp_{\vep,+}(\mc K):= &\conj{\supp_{\vep,+}x}{x\in \mc K},\\
\supp_{\vep}^{\de}(\mc K):=&\conj{\supp_{\vep,+}x}{x\in (\mc K)_\vep^\de},
\end{align*}
where $\supp_{\vep}x:=\conj{n\in \N}{|(x)_n|\ge \vep}$, $\supp_{\vep,+}x:=\conj{n\in \N}{(x)_n\ge \vep}$,
$(\mc K)_\vep^\de:=\conj{x\in \mc K}{\sum_{n\notin \supp_\vep x}|(x)_n|\le \de}$, and $(x)_n$ denotes the
$n^\text{th}$ coordinate of $x$ in the canonical unit basis of $c_{00}$.
In particular, when $(x_n)_n$ is a weakly-convergent sequence to $x$ in some Banach space $X$, and $\mc M$ is
an arbitrary subset of $B_{X^*}$ the family $\mc K:=\conj{(x^*(x_n-x))_n}{x^*\in \mc M}\con c_0$ is
relatively weakly-compact. Given $\vep,\de>0$ and $\mc M\con B_{X^*}$, we define
\begin{align*}
\mc F_\vep((x_n)_n,\mc M):=&\supp_\vep(\mc K),\\
\mc F_\vep^\de((x_n)_n,\mc M):=&\supp_\vep^\de(\mc K).
\end{align*}
When $\mc M=B_{X^*}$ we will simply omit $\mc M$ in the terminology above.
Given $n\in \N$, a family $\mc F$ on $I$ is called \emph{$n$-large} in some $J\con I$ when for every infinite
$K\con J$ there is
$s\in \mc F$ such that $\#(s\cap K)\ge n$; equivalently, when $\mc F[K]\not\con [K]^{\le n-1}$
for any infinite $K\con J$. The family $\mc F$ is \emph{large} on $J$ when it is $n$-large on $J$ for every $n\in \N$.
Perhaps the first known example of a compact, hereditary and large family is the Schreier family
$$\mc S:=\conj{s\con \N}{\# s \le \min s}.$$
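Membership in the Schreier family, and explicit witnesses for its largeness, can be checked mechanically. The following Python sketch (an illustration only) tests membership and, given any $n$ and any sample of an infinite set $K$, produces a Schreier set meeting $K$ in $n$ points, namely the first $n$ elements of $K$ that are $\ge n$:

```python
def in_schreier(s):
    """Membership in the Schreier family S = {s ⊆ N : #s <= min s}
    (with N starting at 1); the empty set is admitted by convention."""
    return len(s) <= min(s) if s else True

# {2,3} and {3,4,5} are Schreier sets; {1,2} is not, since #s = 2 > min s = 1.
print(in_schreier({2, 3}), in_schreier({3, 4, 5}), in_schreier({1, 2}))

def schreier_witness(K_sorted, n):
    """Return a Schreier set meeting K in n points: the first n elements of K
    that are >= n.  Such elements always exist when K is infinite, which is
    exactly why the Schreier family is large on N."""
    s = [k for k in K_sorted if k >= n][:n]
    assert len(s) == n and in_schreier(set(s))
    return set(s)

print(sorted(schreier_witness([2, 4, 8, 16, 32, 64], 3)))  # [4, 8, 16]
```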
Generalizing ideas used for families of sets, given $\mc K\con c_0$ and $M\con \N$, we define $\mc
K[M]:=\conj{\mathbbm 1_M \cdot x}{x\in \mc K}$ as the image of $\mc K$ under the natural restriction to the
coordinates in $M$.
The following is a list of well-known results on compact families, commonly used by specialists, which are necessary to understand most of the properties of Banach-Saks sets.
\begin{thm}\label{classif1}
Let $\mc K$ be a relatively weakly-compact subset of $c_0$, $\vep,\de>0$. Then there is an infinite subset
$M\con \N$ such that
\begin{enumerate}
\item[(a)] $\supp_\vep(\mc K [M])=\supp_\vep^\de(\mc K[M])$ and $\supp_\vep(\mc K [M])$ is hereditary, and
\item[(b.1)] either there is some $k\in \N$ such that $\supp_\vep(\mc K [M])=[M]^{\le k}$,
\item[(b.2)] or else ${}_*(\mc S\rest M):=\conj{s\setminus \{\min s\}}{s\in \mc S\rest M}\con \supp_\vep(\mc
K[M])$, and consequently $ \supp_\vep(\mc K[M])$ is large in $M$.
\end{enumerate}
\end{thm}
The proofs of these facts are mostly based on the Ramsey property of a particularly relevant type of
pre-compact families called \emph{barriers} on some set $M$, that were introduced by C. ST. J. A.
Nash-Williams \cite{Nash}. These are families $\mc B$ on $M$ such that every further subset $N\con M$ has an
initial segment in $\mc B$, and such that there do not exist two different elements of $\mc B$ which are subsets one of the other.
Examples of barriers are $[\N]^{n}$, $n\in \N$, and the \emph{Schreier barrier} $\mk S:=\conj{s\in \mc
S}{\#s=\min s}$. As it was proved by Nash-Williams, barriers have the Ramsey property, and in fact provide a
characterization of it. The final ingredient is the fact that if $\mc F$ is pre-compact, then there is a
trace $\mc F[M]$ of $\mc F$ which is the closure of a barrier on $M$ (we refer the reader to
\cite{AGR},\cite{Lo-To}).
\begin{defn}\label{def-1}
A subset $A$ of a Banach space $ X $ is a Banach-Saks set (or has the Banach-Saks property) if every
sequence $ (x_n)_n$ in $A$ has a \emph{Ces\`{a}ro}-convergent subsequence $(y_n)_n $, i.e. the sequence of
averages $ ((1/n) \sum_{k = 1}^n y_k)_n$ is norm-convergent in $X$.
\end{defn}
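Definition \ref{def-1} is easy to test numerically. The following Python sketch (an illustration only) computes the sequence of arithmetic means of a sequence of vectors and shows, with the alternating scalar sequence $(+1,-1,+1,\dots)$ viewed as vectors in $\R^1$, that Ces\`{a}ro convergence is strictly weaker than norm convergence:

```python
def cesaro_means(xs):
    """Sequence of arithmetic means (1/n) * (x_1 + ... + x_n) of vectors in R^d."""
    means, running = [], None
    for n, x in enumerate(xs, start=1):
        running = x if running is None else [a + b for a, b in zip(running, x)]
        means.append([a / n for a in running])
    return means

# The alternating sequence is not norm convergent, but its averages tend to 0.
xs = [[(-1.0) ** n] for n in range(1000)]
means = cesaro_means(xs)
print(abs(means[-1][0]))  # 0.0
```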
It is easy to see that compact sets are Banach-Saks, that the Banach-Saks property is hereditary (every
subset of a Banach-Saks set is again Banach-Saks), it is closed under sums, and that it is preserved under
the action of a bounded operator. It is natural to ask the following.
\begin{que-intro}\label{qu1}
Is the convex hull of a Banach-Saks set again a Banach-Saks set?
\end{que-intro}
Using the localized notion of the Banach-Saks property, a space has the
Banach-Saks property precisely when its unit ball is a Banach-Saks set. A classical work by T. Nishiura and D.
Waterman \cite{NW} states that a Banach space with the Banach-Saks property is reflexive. Here is the local version of this fact.
\begin{prop}\label{BS->rwc}
Every Banach-Saks set is relatively weakly-compact.
\end{prop}
\prue
Let $A$ be a Banach-Saks subset of a Banach space $X$, and fix a sequence $(x_n)_n$ in $A$. By Rosenthal's $\ell_1$ Theorem, there is a subsequence
$(y_n)_{n}$ of $(x_n)_n$ which is either equivalent to the unit basis of $\ell_1$ or weakly-Cauchy. The
first alternative cannot occur, since the unit basis of $\ell_1$ is not a Banach-Saks set. Let now $x^{**}\in
X^{**}$ be the $\mathrm{weak}^*$-limit of $(y_n)_{n}$. Since $A$ is a Banach-Saks subset of $X$, there is a
further subsequence $(z_n)_{n}$ of $(y_n)_{n}$ which is Ces\`{a}ro-convergent to some $x\in X$. It follows that
$x^{**}=x$, and consequently $(z_n)_{n}$ converges weakly to $x\in X$.
\fprue
As the previous proof suggests, the unit basis of $\ell_1$ plays a very special role for the Banach-Saks
property. This is fully explained by the following characterization, due to H. P. Rosenthal \cite{Rosenthal}
and S. Mercourakis \cite{Mercourakis} in terms of the asymptotic notions of \emph{Spreading models} and
\emph{uniform weakly-convergence}.
\begin{dfn-intro}
Let $X$ be a Banach space and let $(x_n)_n$ be a sequence in $X$ converging weakly to $x\in X$. Recall that
$(x_n)_n$ \emph{generates an $\ell_1$-spreading model} when there is $\de>0$ such that
\begin{equation}
\Big\|\sum_{n\in s} a_n (x_{n}-x)\Big\|\ge \de\sum_{n\in s} |a_n|
\end{equation}
for every $s\con \N$ with $\# s\le \min s$ and every sequence $(a_n)_{n\in s}$ of scalars.
The sequence $(x_n)_n$ \emph{uniformly weakly-converges} to $x$ when for every $\vep>0$ there is an integer
$n(\vep)>0$ such that for every functional $x^*\in B_{X^*}$
\begin{equation}\label{iijijdf}
\#(\conj{n\in \N}{|x^*(x_n-x)|\ge \vep})\le n(\vep).
\end{equation}
\end{dfn-intro}
The notion of $\ell_1$-spreading model is orthogonal to the Banach-Saks property: Suppose that
$(x_n)_n$ weakly-converges to $x$ and generates an $\ell_1$-spreading model. Let $\de>0$ be witnessing that.
Set $y_n=x_n-x$ for each $n$. Since $\nrm{y_n}\ge \de$ for all $n$, it follows by Mazur's Lemma that there is
a subsequence $(z_n)_{n}$ of $(y_n)_n$ which is a 2-basic sequence. We claim that no further subsequence of
$(z_n)_{n}$ is Ces\`{a}ro-convergent: Fix an arbitrary subset $s\con \N$ with even cardinality. Then the upper
half part $t$ of $s$ satisfies that $\#t\le \min t$. So, using also that $(z_n)_{n}$ is $2$-basic,
\begin{equation}
\left\|\frac1{\#s}\sum_{n\in s}z_n\right\|\ge \frac{1}{2}\left\|\frac{1}{\#s}\sum_{n\in t}z_n\right\|\ge \frac{\de}{2}\frac{\#t}{\#s}=\frac{\de}{4}.
\end{equation}
This immediately gives that no subsequence of $(z_n)_n$ is Ces\`{a}ro-convergent to 0.
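The extreme case of this obstruction is the unit basis of $\ell_1$ itself (for which the spreading-model estimate holds with $\de=1$): every average of $n$ distinct unit vectors has $\ell_1$ norm exactly one, so no subsequence of the basis has Ces\`{a}ro-convergent averages. A quick numerical sanity check, for illustration only:

```python
def l1_norm_of_average(n):
    """l1 norm of (1/n)(e_1 + ... + e_n): n coordinates, each of modulus 1/n."""
    avg = [1.0 / n] * n
    return sum(abs(c) for c in avg)

# Each average has l1 norm 1 (up to floating-point rounding), for every n.
print([round(l1_norm_of_average(n), 12) for n in (1, 10, 100)])
```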
On the other hand if $(x_n)_n$ is uniformly weakly-convergent to some $x$, then every subsequence of
$(x_n)_n$ is Ces\`{a}ro-convergent (indeed these conditions are equivalent \cite{Mercourakis}): Suppose that
$(y_n)_n$ is a subsequence of $(x_n)_n$. Now for each $\vep>0$ let
$n(\vep)$ be witnessing that \eqref{iijijdf} holds. Set $z_n=y_n-x$ for each $n$. Now suppose that $s$ is an
arbitrary finite subset of $\N$ with cardinality $\ge n(\vep)$. Then, given $x^*\in B_{X^*}$, and setting $t:=\conj{n\in s}{|x^*(z_n)|\ge \vep}$,
we have that
\begin{equation}
\left| x^*(\frac{1}{\#s}\sum_{n\in s}z_n)\right| \le \frac{1}{\#s}\sum_{n\in t}|x^*(z_n)|+ \frac{1}{\#s}\sum_{n\in s\setminus t}|x^*(z_n)|\le
\frac{n(\vep)}{\#s}C+\vep,
\end{equation}
where $C:=\sup_n \nrm{z_n}$. Hence,
\begin{equation}
\left\|\frac{1}{\#s}\sum_{n\in s}z_n\right\| \le \frac{n(\vep)}{\#s}C+\vep.
\end{equation}
This readily implies that $(z_n)_n$ is Ces\`{a}ro-convergent to 0, or, in other words, $(y_n)_n$ is
Ces\`{a}ro-convergent to $x$. The next result summarizes the relationship between these three notions.
\begin{thm}\label{char1}
Let $A$ be an arbitrary subset of a Banach space $X$. The following are equivalent:
\begin{enumerate}
\item[(a)] $A$ is a Banach-Saks subset of $X$.
\item[(b)] $A$ is relatively weakly-compact and for every weakly-convergent sequence in $A$ it never generates an $\ell_1$-spreading model.
\item[(c)] $A$ is relatively weakly-compact and for every weakly-convergent sequence $(x_n)_n$ in $A$ and every $\vep>0$ the family
$\mc F_\vep((x_n)_n)$ is not large in $\N$.
\item[(d)] $A$ is relatively weakly-compact and for every weakly convergent sequence $(x_n)_n$ in $A$ there is some norming set $\mc
N$ such that for every $\vep>0$ the family $\mc F_\vep((x_n)_n,\mc N)$ is not large.
\item[(e)] For every sequence $(a_n)_n$ in $A$ there is a subsequence $(b_n)_n$ and some norming set $\mc N$ such
that for every $\vep>0$ there is $m\in \N$ such that $\mc F_\vep((b_n)_n,\mc N)\con [\N]^{\le m}$.
\item[(f)] Every sequence in $A$ has a uniformly weakly-convergent subsequence.
\end{enumerate}
\end{thm}
Recall that a $\la$-norming set, $0<\la\le 1$, is a subset $\mc N\con B_{X^*}$ such that
$$\la\nrm{x}\le \sup_{f\in \mc N}{|f(x)|} \text{ for every $x\in X$}.$$
The subset $\mc N\con B_{X^*}$ is norming when it is $\la$-norming for some $0<\la\le 1$. Note that we could rephrase (e) as saying that the sequence $(b_n)_n$ is uniformly weakly-convergent with respect to $\mc N$.
The equivalences between (a) and (b), and between (a) and (f) are due to Rosenthal \cite{Rosenthal} and
Mercourakis \cite{Mercourakis}, respectively. For the sake of completeness, we now sketch a proof of
Theorem \ref{char1} using, mainly, Theorem \ref{classif1}:
(a) implies (b) by Proposition \ref{BS->rwc} together with the fact, shown above, that if a sequence $(x_n)_n$ converges weakly to $x$, generates
an $\ell_1$-spreading model and is such that $(x_n-x)_n$ is basic, then it has no Ces\`{a}ro-convergent
subsequences. We prove that (b) implies (c) by using Theorem \ref{classif1}. Let $(x_n)_n$ be a weakly
convergent sequence in $A$ with limit $x$, and let us see that $\mc F_\vep((x_n)_n)$ is not large for any
$\vep>0$. Otherwise, by Theorem \ref{classif1}, there is some $M$ such that
$${}_*(\mc S\rest M)\con \mc F_\vep^\de((x_n)_n)[M]=\mc F_\vep((x_n)_{n})[M].$$
Set $y_n:=x_n-x$ for each $n\in M$. It follows that $(y_n)_{n\in M}$ is a non-trivial weakly-null sequence,
hence by Mazur's Lemma, there is $N\con M$ such that $(y_n)_{n\in N}$ is a 2-basic sequence. We claim that
then $(y_n)_{n\in N}$ generates an $\ell_1$-spreading model, which is impossible: Let $s\in \mc S\rest N$,
and let $(\la_k)_{k\in s}$ be a sequence of scalars. Let $t\con s$ be such that $\la_k \cdot \la_l\ge 0$ for
all $k,l\in t$, $|\sum_{k\in t} \la_k|\ge 1/4\sum_{k\in s}|\la_k|$ and $t\in {}_*(\mc S\rest N)$. Then let
$x^*\in B_{X^*}$ be such that
$$\text{$x^*(y_n)\ge \vep$ for $n\in t$, and $\sum_{n\in M\setminus t}|x^*(y_n)|\le
\frac\vep4$.}$$
It follows that
\begin{align*}
\left\|\sum_{k \in s} \la_k y_k\right\|\ge & \left|x^*(\sum_{k\in s}\la_k y_k)\right|\ge \left|\sum_{k\in t}\la_k x^*( y_k)\right|
-\frac{\vep}4\max_{k\in s}|\la_k|\ge \vep
\left|\sum_{k\in t}\la_k\right| -\frac{\vep}4\max_{k\in s}|\la_k|\\
\ge & \frac\vep4 \sum_{k\in s}|\la_k|- {\vep}\left\|\sum_{k\in s}\la_k y_k\right\|,
\end{align*}
and consequently,
\begin{equation}
\left\|\sum_{k \in s} \la_k y_k\right\|\ge\frac{\vep}{4(1+\vep)}\sum_{k\in s}|\la_k|.
\end{equation}
Now, we have that (c) implies (d) and (d) implies (e) trivially. For the implication (e) implies (f) we use
the following classical result by J. Gillis \cite{Gi}.
\begin{lemma}\label{gillis}
For any $\vep,\de>0$ and $m\in \N$ there is $n:=\mathbf{n}(\vep,\de,m)$ such that whenever
$(\Omega,\Sigma,\mu)$ is a probability space and $(A_i)_{i=1}^n$ is a sequence of $\mu$-measurable sets with
$\mu(A_i)\ge \vep $ for every $1\le i\le n$, there is $s\con \{1,\dots,n\}$ of cardinality $m$ such that
$$\mu(\bigcap_{i\in s}A_i)\ge (1-\de)\vep^m.$$
\end{lemma}
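Gillis' Lemma can be explored on finite uniform probability spaces. The following Python sketch is a brute-force illustration only: it searches one concrete configuration for an $m$-tuple of sets witnessing the conclusion, and does not reproduce the quantitative bound $\mathbf{n}(\vep,\de,m)$.

```python
from itertools import combinations

def gillis_witness(omega_size, sets, eps, delta, m):
    """On the uniform space {0,...,omega_size-1}, search for m of the given sets
    whose intersection has measure at least (1-delta)*eps**m; return their
    indices, or None if no such m-tuple exists in this configuration."""
    mu = lambda A: len(A) / omega_size
    assert all(mu(A) >= eps for A in sets)  # hypothesis of the lemma
    for idx in combinations(range(len(sets)), m):
        common = set.intersection(*(sets[i] for i in idx))
        if mu(common) >= (1 - delta) * eps ** m:
            return idx
    return None

# Four half-measure sets in a 10-point space; a substantially overlapping
# pair is found (sets[0] and sets[1] share {3, 4}, of measure 0.2 >= 0.125).
sets = [set(range(0, 5)), set(range(3, 8)), set(range(5, 10)), {0, 2, 4, 6, 8}]
print(gillis_witness(10, sets, eps=0.5, delta=0.5, m=2))  # (0, 1)
```

The lemma guarantees a witness once the number of sets reaches $\mathbf{n}(\vep,\de,m)$; with too few sets (e.g. two disjoint half-measure sets) the search may fail, consistently with the statement.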
Incidentally, the counterexample by P. Erd{\H{o}}s and A. Hajnal of the natural generalization of Gillis'
result concerning double-indexed sequences will be crucial for our solution to Question \ref{qu1} (see Section
\ref{counterexample}).
We pass now to see that (e) implies (f): Fix a sequence $(x_n)_n$ in $A$ converging weakly to $x$. By (e),
we can find a subsequence $(y_n)_n$ of $(x_n)_n$ and a $\la$-norming set $\mc N$, $0<\la\le 1$, such that
$(y_n)_n$ uniformly weakly-converges with respect to $\mc N$. Going towards a contradiction, suppose
$(y_n)_n$ does not uniformly weakly-converge to $x$. Fix then $\vep>0$ such that there are arbitrarily large
sets in $\mc F_\vep((y_n)_n)$, and fix $0<\de<1$. We show that then $\mc F_{\la\vep(1-\de)}((y_n)_n, \mc N)$
also has arbitrarily large sets, contradicting our hypothesis. Set $z_n:=y_n-x$ for every $n\in \N$. Now
given $m\in \N$, let $x^*\in B_{X^*}$ be such that
$$s:=\conj{n\in \N}{|x^*(z_n)|\ge \vep} \text{ has cardinality $\ge \mathbf{n}(\frac{\vep\de\la}{2K},\frac12,m)$},$$
where $K:=\sup_n \nrm{z_n}$. By a standard separation result, there are $f_1,\dots,f_l\in \mc N$ and
$\nu_1,\dots,\nu_l$ such that $\sum_{i=1}^l |\nu_i|\le \la^{-1}$ and
\begin{equation}
\Big|\sum_{i=1}^l \nu_i f_i(z_n)\Big|\ge \vep(1-\frac{\de}2) \text{ for every $n\in s$}.
\end{equation}
Now on $\{1,2,\dots,l\}$ define the probability measure induced by the convex combination
$$\Big(\frac{1}{\sum_{j=1}^l |\nu_j| }|\nu_i|\Big)_{i=1}^l.$$
For each $n\in s$, let
$$A_n:=\conj{j\in \{1,\dots,l\}}{|f_j(z_n)|\ge \vep(1-\de)}.$$
Then, for every $n\in s$ one has that
\begin{align*}
\vep(1-\frac{\de}{2}) \le \Big|\sum_{j=1}^l \nu_j f_j(z_n)\Big|\le \sum_{j\in A_n}|\nu_j|K +\vep(1-\de).
\end{align*}
Hence,
$$ \mu(A_n)\ge \frac{\de \vep \la}{2K}.$$
By Gillis' Lemma, it follows in particular that there is some $t\con s$ of cardinality $m$ such
that $\bigcap_{n\in t}A_n \neq \buit $, so let $j$ be in that intersection. It follows then that
$|f_j(z_n)|\ge \vep(1-\de)\ge \la \vep(1-\de)$ for every $n\in t$, hence $t\in \mc F_{\la\vep(1-\de)}((y_n)_n,\mc N)$.
(f) implies (a) because uniformly weakly-convergent sequences are Ces\`{a}ro-convergent. This finishes the proof.
Hence, Question \ref{qu1} for weakly-null sequences can be reformulated as follows:
\begin{que-intro}\label{qu2}
Suppose that $(x_n)_n$ is a weakly-null sequence such that some sequence in $\conv(\{x_n\}_n)$ generates an
$\ell_1$-spreading model. Does there exist a subsequence of $(x_n)_n$ generating an $\ell_1$-spreading model?
\end{que-intro}
As a consequence of Theorem \ref{char1} we obtain the following well-known 0-1 law by P. Erd\H{o}s and M.
Magidor \cite{EM}.
\begin{cor}
Every bounded sequence in a Banach space has a subsequence such that either all its further subsequences are
Ces\`{a}ro-convergent, or none of them.
\end{cor}
To see this, let $(x_n)_n$ be a sequence in a Banach space. If $A:=\{x_n\}_n$ is Banach-Saks, then, by (f)
above, there is a uniformly weakly-convergent subsequence $(y_n)_n$ of $(x_n)_n$, and as we have mentioned
above, every further subsequence of $(y_n)_n$ is Ces\`{a}ro-convergent. Now, if $A$ is not Banach-Saks, then by (b)
there is a weakly-convergent sequence $(y_n)_n$ in $A$ with limit $y$ generating an $\ell_1$-spreading model.
We have already seen that if $(z_n)_n$ is a basic subsequence of $(y_n-y)_n$, then no further subsequence of
it is Ces\`{a}ro-convergent.
We introduce now the Schreier-like spaces, which play an important role for the Banach-Saks property.
\begin{defn} \label{def-2}
Given a family $\mc F$ on $\N$, we define the Schreier-like norm $\nrm{\cdot}_\mc F$ on $c_{00}(\N)$ as follows.
For each $x\in c_{00}$ let
\begin{equation}\label{def-21}
\|x\|_\mc F=\max\{\|x\|_\infty,\sup_{s\in \mc F}\sum_{n\in s}|(x)_n|\},
\end{equation}
where $(x)_n$ denotes the $n^{\mr{th}}$-coordinate of $x$ in the usual Hamel basis of $c_{00}(\N)$. We define
the Schreier-like space $X_\mc F$ as the completion of $c_{00}$ under the ${\mc F}$-norm.
\end{defn} Note that $X_{\mc F}=X_{\widehat{\mc F}}$ for every family $\mc F$, so the hereditary property of $\mc F$ plays no
role for the corresponding space. It is clear that the unit vector basis $(u_n)_n$ is a 1-unconditional
Schauder basis of $X_\mc F$, and it is weakly-null if and only if $\mc F$ is pre-compact. In fact,
otherwise there would be a subsequence of $(u_n)_n$ 1-equivalent to the unit basis of $\ell_1$. So,
Schreier-like spaces will be assumed to be constructed from pre-compact families. It follows then that for
pre-compact families $\mc F$, the space $X_\mc F$ is $c_0$-saturated. This can be seen, for example, by using
Pt\'{a}k's Lemma, or by the fact that $X_\mc F=X_{\widehat{\mc F}} \hookrightarrow C(\widehat{\mc F})$ isometrically,
and the fact that the function spaces $C(K)$ for $K$ countable are $c_0$-saturated, by a classical result of
A. Pe\l czy\'{n}ski and Z. Semadeni \cite{Pelczynski-Semadeni}.
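To make Definition \ref{def-2} concrete, the following small Python sketch (our own illustration, not part of the text) evaluates the Schreier-like norm for the classical Schreier family $\mc S=\conj{s}{\# s\le \min s}$; the function name and the list encoding of a finitely supported vector are choices made for the example.

```python
def schreier_norm(x):
    """Schreier-like norm ||x||_F for the Schreier family
    S = { s : #s <= min s } (indices are 1-based).
    x is a finitely supported vector given as a list:
    x[0] is the coordinate at n = 1, x[1] at n = 2, and so on."""
    sup_norm = max((abs(c) for c in x), default=0.0)
    best = 0.0
    for m in range(1, len(x) + 1):
        # any admissible set contained in [m, oo) has at most m elements,
        # so the best choice is the m largest coordinates located there
        tail = sorted((abs(c) for c in x[m - 1:]), reverse=True)
        best = max(best, sum(tail[:m]))
    return max(sup_norm, best)
```

For instance, the vector $u_2+u_3$ has norm $2$ because $\{2,3\}\in\mc S$, while $u_1+u_2$ has norm $1+1=2$ only through the singleton-free bound, i.e.\ its norm is attained on $\{2\}\cup\{1\}$? No: $\{1,2\}\notin \mc S$, so its norm is $1$ from singletons plus the sup-norm term; the sketch returns exactly these values.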
Observe that the unit basis of the \emph{Schreier space} $X_\mc S$ generates an $\ell_1$-spreading model, so
no subsequence of it can be Ces\`{a}ro-convergent. In fact, the same holds for the Schreier-like space $X_\mc F$ of
an arbitrary large family $\mc F$. However, it was proved by M. Gonz\'alez and J. Guti\'errez in \cite{GG} that
the convex hull of a Banach-Saks subset of the Schreier space $X_\mc S$ is again Banach-Saks. In fact, we
will see in Subsection \ref{generalized-Schreier} that the same holds for the spaces $X_\mc F$ where $\mc F$
is a generalized Schreier family. Still, a possible counterexample for Question \ref{qu1} has to be a
Schreier-like space, as we see from the following characterization.
\begin{thm}\label{erioeiofjioedf}
The following are equivalent:
\begin{enumerate}
\item[(a)] There is a normalized weakly-null sequence having the Banach-Saks property and whose convex hull is not a Banach-Saks set.
\item[(b)] There is a Schreier-like space $X_\mc F$ such that its unit basis $(u_n)_n$ is Banach-Saks and its convex hull is not.
\item[(c)] There is a compact and hereditary family $\mc F$ on $\N$ such that:
\begin{enumerate}
\item[(c.1)] $\mc F$ is not large in any $M\con \N$.
\item[(c.2)] There is a partition $\bigcup_n I_n=\N$ into finite sets $I_n$, a probability measure $\mu_n$
on each $I_n$, and $\de>0$ such that the set
\begin{equation}
\label{j4ijirjtf}\mc G_\de^{\bar \mu}(\mc F):=\conj{t\con \N}{\text{there is $s\in \mc F$ such that
$\min_{n\in t}\mu_n(s\cap I_n)\ge \de$}}
\end{equation}
is large.
\end{enumerate}
\end{enumerate}
\end{thm}
For the proof we need the following useful result.
\begin{lemma}\label{lem-3}
Let $(x_n)_n$ and $(y_n)_n$ be two bounded sequences in a Banach space $X$.
\begin{enumerate}
\item[(a)] If $\sum_n\nrm{x_n-y_n}<\infty$, then $\{x_n\}_n$ is Banach-Saks if and only if $\{y_n\}_n$ is
Banach-Saks.
\item[(b)] $\conv(\{x_n\}_n)$ is a Banach-Saks set if and only if every block sequence in $\conv(\{x_n\}_n)$ has
the Banach-Saks property.
\end{enumerate}
\end{lemma}
\begin{proof} The proof of (a) is straightforward. Let us concentrate on (b): Suppose that $\conv(\{x_n\}_n)$ is not Banach-Saks, and let $(y_n)_n$ be a sequence in $\conv(\{x_n\}_n)$
without Ces\`{a}ro-convergent subsequences. Write $y_n:=\sum_{k\in F_n}\la_k^{(n)}x_k$, $(\la_k^{(n)})_{k\in
F_n}$ a convex combination, for each $n$. By a Cantor diagonalization process we find $M$ such that
$((\la_{k}^{(n)})_{k\in \N})_{n\in M}$ converges pointwise to a (possibly infinite) convex sequence
$(\la_k)_k \in B_{\ell_1}$. Set $\mu_k^{(n)}:=\la_k^{(n)}-\la_k$ for each $n\in M$. Then there is an
infinite subset $N\con M$ and a block sequence $((\eta_{k}^{(n)})_{k\in s_n})_{n\in N}$, $\sum_{k\in
s_n}|\eta_k^{(n)}|\le 2$, such that
\begin{equation}\label{dkfmkdfmskd}
\sum_{n\in N}\sum_{k\in \N}|{\mu_k^{(n)}-\eta_k^{(n)}}|<\infty.
\end{equation}
Setting $z_n:=\sum_{k\in s_n} \eta_k^{(n)}x_k$ for each $n$, it follows from \eqref{dkfmkdfmskd} that
\begin{equation}
\sum_{n\in N}\nrm{y_n-z_n}<\infty.
\end{equation}
By (a), no subsequence of $(z_n)_{n\in N}$ is Ces\`{a}ro-convergent. Now set $t_n:=\conj{k\in
s_n}{\eta_k^{(n)}\ge 0}$, $u_n=s_n\setminus t_n$, $z_n^{(0)}:=\sum_{k\in t_n}{\eta_k^{(n)}}x_k$ and
$z_n^{(1)}:=z_n-z_n^{(0)}$. Then, either $\{z_n^{(0)}\}_{n\in N}$ or $\{z_n^{(1)}\}_{n\in N}$ is not
Banach-Saks. So, without loss of generality, let us assume that $\{z_n^{(0)}\}_{n\in N}$ is not Banach-Saks.
Then, using again (a), and by going to a subsequence if needed, we may assume that $\sum_{k\in
t_n}\eta_k^{(n)}=\eta$ for every $n\in N$. It follows that the block sequence $((1/\eta)\sum_{k\in
t_n}\eta_k^{(n)}x_k)_{n\in N}$ in $\conv(\{x_n\}_n)$ does not have the Banach-Saks property.
\end{proof}
\prue[\textsc{Proof of Theorem \ref{erioeiofjioedf}}]
It is clear that (b) implies (a). Let us prove that (c) implies (b). We fix a family $\mc F$ as in (c). We
claim that $X_\mc F$ is the desired Schreier-like space: Let $(u_n)_n$ be the unit basis of $X_\mc F$, and let
$$\mc N:=\{\pm u_n^*\}_n\cup \conj{\sum_{n\in s}\pm u_n^*}{s\in \mc F},$$
where $(u_n^*)$ is the biorthogonal sequence to $(u_n)_n$. Then
$$\mc F_\vep((u_n),\mc N)=\mc F\cup [\N]^1$$
for every $\vep>0$, so it follows from our hypothesis (c.1) and Theorem \ref{char1} (d) that $\{u_n\}_n$ is
Banach-Saks. Define now for each $n\in \N$, $x_n:=\sum_{k\in I_n} (\mu_n)_k u_k$. Then
$$\mc F_\de((x_n)_n,\mc N)=\mc G_\de^{\bar \mu}(\mc F),$$
so $\mc F_\de((x_n)_n)$ is large, and hence $\{x_n\}_n\con \conv(\{u_n\}_n)$ is not Banach-Saks.
Finally, suppose that (a) holds; let us
see that (c) also holds. Let $(x_n)_n$ be a weakly-null sequence in some space $X$ with the Banach-Saks
property but such that $\conv(\{x_n\}_n)$ is not Banach-Saks. By the previous Lemma \ref{lem-3} (b), we may
assume that there is a block sequence $(y_n)_n$ with respect to $(x_n)_n$ in $\conv(\{x_n\}_n)$ without the
Banach-Saks property. By Theorem \ref{char1} there is some subsequence $(z_n)_n$ of $(y_n)_n$ and $\vep>0$
such that
\begin{equation}
\label{opj4t4rjt44}\mc F_\vep((z_n)_n)\text{ is large}.
\end{equation}
By re-enumeration if needed, we may assume that $\bigcup_n \supp z_n=\N$, where the support is taken with
respect to $(x_n)$. Let
$$\mc F:=\mc F_{\frac{\vep}2}((x_n)_n).$$ Since
$(x_n)_n$ is weakly-null, it follows that $\mc F$ is pre-compact, and, since it is hereditary by definition,
it is compact. Again by invoking Theorem \ref{char1} we know that $\mc F$ is not large in any $M\con \N$. Now
let $I_n:=\supp z_n$ and let $\mu_n$ be the convex combination with support $I_n$ such that $z_n=\sum_{k\in
I_n}(\mu_n)_k x_k$ for each $n\in \N$. Then $(I_n)_n$ is a partition of $\N$ and $\mu_n$ is a probability
measure on $I_n$. We see now that \eqref{j4ijirjtf} holds for $\de:=\vep/2$: Fix an infinite subset $M\con
\N$, and fix $m\in \N$. By \eqref{opj4t4rjt44}, we can find $x^*\in B_{X^*}$ such that
\begin{equation}
\label{j4o3rp4jr4}
s:=\conj{n\in M}{|x^*(z_n)|\ge \vep }\text{ has cardinality $\ge m$.}
\end{equation}
We claim that $s\in \mc G_{\vep/2}^{\bar \mu}(\mc F)$: Fix $n\in s$, and let $s_n:=\conj{k\in
I_n}{|x^*(x_k)|\ge \vep/2}$ and $t_n:=I_n\setminus s_n$. Then
\begin{align*}
\vep \le |x^*(z_n)|\le \sum_{k\in s_n}(\mu_n)_k+\sum_{k\in t_n} (\mu_n)_k \frac{\vep}2 \le \sum_{k\in s_n}(\mu_n)_k+\frac{\vep}2
\end{align*}
hence $\mu_n(s_n)\ge \vep/2$, and so $s\in \mc G_{\vep/2}^{\bar \mu}(\mc F)$.
\fprue
\section{Stability under convex hull: positive results}\label{positive results}
Recall that a Banach space $X$ is said to have the weak Banach-Saks property if every weakly convergent
sequence in $X$ has a Ces\`aro convergent subsequence. Equivalently, every weakly compact set in $X$ has
\pbs. Examples of Banach spaces with the weak Banach-Saks property but without the Banach-Saks property are
$L^1$ and $ c_0 $ (see \cite{Szlenk}).
The following simple observation provides our first positive result concerning the stability of Banach-Saks sets under convex hulls.
\begin{prop} \label{prop-1}
Let $X$ be a Banach space with the weak Banach-Saks property. Then the convex hull of a Banach-Saks subset of
$X$ is also Banach-Saks.
\end{prop}
\begin{proof}
If $A \subseteq X$ has \pbs, then $A$ is relatively weakly compact. Therefore, by Krein-\v{S}mulian's
Theorem, $\conv(A)$ is also relatively weakly compact. Since $X$ has the weak Banach-Saks property, it
follows that $\conv(A)$ has \pbs.
\end{proof}
However, the weak Banach-Saks property is far from being a necessary condition. For instance, the Schreier
space $X_{{\mc S}}$ does not have the weak Banach-Saks property \cite{Szlenk}, but the convex hull of any
Banach-Saks set is again a Banach-Saks set (see \cite[Corollary 2.1]{GG}). In Section
\ref{generalized-Schreier}, we will see that this result can be extended to generalized Schreier spaces.
Another partial result is the following.
\begin{prop}\label{Teor-1}
Let $(x_n)_n$ be a sequence in a Banach space $X$ such that every subsequence is Ces{\`a}ro convergent. Then
$\conv(\{x_n\})$ is a Banach-Saks set.
\end{prop}
\begin{proof}
As we mentioned in Section 2, the hypothesis is equivalent to saying that $(x_n)_n$ is uniformly
weakly-convergent to some $x\in X$ \cite[Theorem 1.8]{Mercourakis}. Now, by Lemma \ref{lem-3} (b), it
suffices to prove that every block sequence $(y_n)_n$ with respect to $(x_n)_n$ in $\conv(\{x_n\}_n)$ is
Banach-Saks. Indeed, we are going to see that such a sequence $(y_n)_n$ is uniformly weakly-convergent to $x$. Fix
$\vep>0$, and let $m$ be such that
\begin{equation}
\label{i4ijti4jtr}
\mc F_\vep((x_n)_n)\con [\N]^{\le m}.
\end{equation}
We claim that $\mc F_\vep((y_n)_n)\con [\N]^{\le m}$ as well: So, let $x^*\in B_{X^*}$ and define
$s:=\conj{n\in \N}{|x^*(y_n-x)|\ge \vep}$. Using that $\{y_n\}_n\con \conv(\{x_n\}_n)$ we can find, for each
$n\in s$, an integer $l(n)\in \N$ such that $|x^*(x_{l(n)}-x)|\ge \vep$. Since $(y_n)_n$ is a block
sequence with respect to $(x_n)_n$, it follows that $(l(n))_{n\in s}$ is a 1-1 sequence. Finally, since
$\{l(n)\}_{n\in s}\in \mc F_\vep((x_n)_n)$, it follows from \eqref{i4ijti4jtr} that $\#s\le m$.
\end{proof}
It is worth pointing out that the hypothesis and conclusion in the previous proposition are not equivalent:
The unit basis of the space $(\bigoplus_n \ell_1^n)_{c_0}$ is not uniformly weakly-convergent (to 0) but its
convex hull is a Banach-Saks set.
Recall that for a $\sigma$-field $\Sigma$ over a set $\Omega$ and a Banach space $X$, a function $\mu:\Sigma\rightarrow X$ is called a (countably additive) vector measure if it satisfies
\begin{enumerate}
\item $\mu(E_1\cup E_2)=\mu(E_1)+\mu(E_2)$, whenever $E_1,E_2\in \Sigma$ are disjoint, and
\item for every pairwise disjoint sequence $(E_n)_n$ in $\Sigma$ we have that $\mu(\cup_{n=1}^\infty E_n)=\sum_{n=1}^\infty \mu(E_n)$ in the norm of
$X$.
\end{enumerate}
\begin{prop}\label{vectormeasure}
If a Banach-Saks set $A$ is contained in the range of some vector measure, then $\conv(A)$ is also Banach-Saks.
\end{prop}
\begin{proof}
J. Diestel and C. Seifert proved in \cite{Diestel-Seifert76} that every set contained in the range of a vector measure is
Banach-Saks. Although the range of a vector measure $\mu(\Sigma)$ need not be a convex set, by a classical result of I. Kluvanek and G. Knowles
\cite[Theorems IV.3.1 and V.5.1]{KK}, there is always a (possibly different) vector measure $\mu'$ whose range contains the convex hull
of $\mu(\Sigma)$. Thus if a set $A$ is contained in the range of a vector measure, then $\conv(A)$ is also a Banach-Saks set.
\end{proof}
However, there are Banach-Saks sets which are not the range of a vector measure: consider for instance the unit ball of $\ell_p$ for $1<p<2$ \cite{Diestel-Seifert76}.
\subsection{A result for generalized Schreier spaces}\label{generalized-Schreier}
We present here a positive answer to Question 1 for a large class of Schreier-like spaces, the spaces
$X_\al:=X_{\mc S_\al}$ constructed from the generalized Schreier families $\mc S_\al$ for a countable ordinal
number $\al$.
Recall that given two families $\mc F$ and $\mc G$ on $\N$, we define
\begin{align*}
\mc F \oplus \mc G:=&\conj{s\cup t}{s\in \mc G,\, t\in \mc F \text{ and $s<t$}} \\
\mc F\otimes \mc G :=&\conj{s_0\cup \dots \cup s_n}{(s_i) \text{ is a block sequence in $\mc F$ and $\{\min s_i\}_{i\le n}\in \mc G$}},
\end{align*}
where $s<t$ means that $\max s<\min t$.
\begin{defn}
For each countable limit ordinal number $\al$ we fix a strictly increasing sequence $(\be^{(\al)}_n)_n$ such
that $\sup_n \be_n^{(\al)}=\al$. We define now
\begin{enumerate}
\item[(a)] $\mc S_0:= [\N]^{\le 1}$.
\item[(b)] $\mc S_{\al+1}=\mc S_\al \otimes \mc S$.
\item[(c)] $\mc S_\al:= \bigcup_{n\in \N} \mc S_{\be_n^{(\al)}}\rest [n+1,\infty[$ for $\al$ a limit ordinal.
\end{enumerate}
\end{defn}
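For finite ordinals the recursion $\mc S_{k+1}=\mc S_k\otimes \mc S$ can be tested mechanically. The following Python sketch (our own illustration, not from the text) decides membership of a finite set in $\mc S_k$, $k<\om$, by greedily splitting the set into maximal blocks of the previous level; the greedy strategy suffices here because the families $\mc S_k$ are hereditary and spreading.

```python
def in_schreier(s, k):
    """Decide whether the finite set s of positive integers belongs to
    the generalized Schreier family S_k (k < omega), where
    S_0 = sets of size <= 1 and S_{k+1} = S_k (x) S."""
    s = sorted(s)
    if not s:
        return True            # the empty set lies in every (hereditary) S_k
    if k == 0:
        return len(s) <= 1
    # greedily split s into maximal consecutive blocks lying in S_{k-1}
    blocks, i = 0, 0
    while i < len(s):
        j = i + 1
        while j < len(s) and in_schreier(s[i:j + 1], k - 1):
            j += 1
        blocks += 1
        i = j
    # the minima of the blocks must form a Schreier-admissible set,
    # which here amounts to having at most min(s) blocks
    return blocks <= s[0]
```

For example, $\{2,3,4,5,6,7\}=\{2,3\}\cup\{4,5,6,7\}$ splits into two $\mc S_1$-blocks with minima $\{2,4\}\in\mc S$, so it belongs to $\mc S_2$, while adjoining $8$ forces a third block and membership fails.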
Then each $\mc S_\al$ is a compact, hereditary and spreading family with Cantor-Bendixson rank equal to
$\om^\al$. These families have been widely used in Banach space theory. As an example of their important role
we just mention that given a pre-compact family $\mc F$ there exist an infinite set $M$, a countable ordinal
number $\al$ and $n\in \N$ such that $\mc S_\al \otimes [M]^{\le n}\con \mc F[M]\con \mc S_{\al}\otimes
[M]^{\le n+1} $. It readily follows that every subsequence of the unit basis of $X_\mc F$ has a subsequence
equivalent to a subsequence of the unit basis of $X_\al$. The main result of this part is the following.
\begin{thm}\label{ioo34iji4jtr}
Let $\alpha$ be a countable ordinal number. $A\con X_\al$ has the Banach-Saks property if and only if $\conv(A)$ has the Banach-Saks property.
\end{thm}
The particular case $\al=0$ is a consequence of the weak Banach-Saks property of $c_0$ and Proposition
\ref{prop-1}. For $\al\ge 1$ the spaces $X_\al$ do not have the weak Banach-Saks property. Still, Gonz\'alez and Guti\'errez
proved the case $\al=1$ in \cite{GG}. Implicitly, the case $\al<\om$ was proved by I. Gasparis and D. Leung
\cite{GL} since it follows from their result stating that every seminormalized weakly-null sequence in
$X_{\al}$, $\al<\om$, has a subsequence equivalent to a subsequence of the unit basis of $X_\be$, $\be\le
\al$. We conjecture that the same should be true for an arbitrary countable ordinal number $\al$.
The next proposition can be proved by transfinite induction.
\begin{prop}\label{njkrjggff} Let $\be<\ou$.
\begin{enumerate}
\item[(1)]For every $\al<\be$ there is some $n\in \N$ such that $(\mc S_\al \otimes \mc S)\rest (\N / n)\con \mc S_\be$.
\item[(2)] For every $n\in \N$ there are $\al_0,\dots,\al_n<\be$ such that
$$(\mc S_\be)_{\le n}:=\conj{s\in \mc S_\be}{\min s\le n}\con \mc S_{\al_0}\oplus \cdots \oplus \mc S_{\al_n}.$$
\end{enumerate}\qed
\end{prop}
Fix a countable ordinal number $\al$. We introduce now a property in $X_\al$ that will be used to
characterize the Banach-Saks property for subsets of $X_\al$.
\begin{defn}
We say that a weakly null sequence $(x_n)_n$ in $X_\al$ is \emph{$<\al$-null} when
$$\text{for every $\be<\al$ and every $\vep>0$ the set
$\conj{n\in \N}{\nrm{x_n}_{\be}\ge \vep}\text{ is finite}.$}$$
\end{defn}
\begin{prop}\label{iurhtiurt}
Suppose that $(x_n)_n$ is a bounded sequence in $X_\al$ such that there are $\vep>0$, $\be<\al$ and a block
sequence $(s_n)_n$ in $\mc S_\be$ such that $\sum_{k\in s_n}|(x_n)_k|\ge \vep$. Then $\{x_n\}_{n}$ is not
Banach-Saks.
\end{prop}
\prue
Let $K=\sup_{n}\nrm{x_n}$. Let $\bar{n}\in \N$ be such that $(\mc S_\be \otimes \mc S)\rest [\bar n,\infty[\con \mc S_\al$. Fix a subsequence $(x_n)_{n\in
M}$.
\begin{claim}\label{ijirjirjgr}
For every $\de>0$ there is a subsequence $(x_n)_{n\in N}$ such that for every $n\in N$ one has that
\begin{equation}\label{iorijir}
\sum_{m\in N, \, m<n} \max\left\{\sum_{k\in s_n} |(x_m)_k|, \sum_{k\in s_m} |(x_n)_k|\right\}\le \de.
\end{equation}
\end{claim}
The proof of this claim is the following. Using that $(u_n)_n$ is a Schauder basis of $X_\al$ and that
$(s_n)_n$ is a block sequence, we can find a
subsequence $(x_n)_{n\in N}$ such that for every $n\in N$ one has that
\begin{equation}
\sum_{m\in N, \, m<n} \sum_{k\in s_n} |(x_m)_k|\le \de.
\end{equation}
We color each pair $\{m_0<m_1\}\in [\N]^2$ by
$$c(\{m_0,m_1\})=\left\{\begin{array}{ll}
0 & \text{ if $\sum_{k\in s_{m_0}}|(x_{m_1})_k|\ge \de$}\\
1 & \text{ otherwise}.
\end{array}\right.$$
By the Ramsey Theorem, there is some infinite subset $P\con N$ such that $c$ is constant on $[P]^2$ with
value $i= 0,1$. We claim that $i=1$. Otherwise, suppose that $i=0$. Let $m_0\in P$, $m_0>\bar n$ be such
that $ m_0 \cdot \de
> K $, and let $m_1\in P$ be such that $t=[m_0,m_1[\cap P$ has cardinality $m_0$. Then $ \bar n< m_0\le \min
s_{m_0}$, and hence $s=\bigcup_{m\in t}s_m\in \mc S_\al$. But then,
$$K\ge \nrm{x_{m_1}}\ge \sum_{k\in s}|(x_{m_1})_k|=\sum_{m\in t}\sum_{k\in s_m} |(x_{m_1})_k|\ge \# t \cdot \de >K,$$
a contradiction. Hence $i=1$, and now it is easy to find an infinite $Q\con P$ such that for every $n\in Q$,
\begin{equation}
\sum_{m\in Q, \, m<n} \sum_{k\in s_m} |(x_n)_k|\le \de.
\end{equation}
Using the Claim \ref{ijirjirjgr} repeatedly, we can find $N\con M$ such that
$$\sum_{n\in N}\sum_{m\neq n\in N }\sum_{k\in s_m}|(x_n)_k|\le \frac{\vep}2.$$
In other words, $(x_n, \sum_{k\in s_n }\theta_k^{(n)}u_k^*)_{n\in N}$ behaves almost like a biorthogonal
sequence for every sequence of signs $((\theta_k^{(n)})_{k\in s_n})_{n\in N}$. We see now that $(x_n)_{n\in
N}$ generates an $\ell_1$-spreading model with constant $\ge \vep/2$. We assume without loss of generality
that $\bar n<\min N$. Let $t\in \mc S\rest N$, and let $(a_n)_{n\in t}$ be a sequence of scalars such that
$\sum_{n\in t}|a_n|=1$. Then $s=\bigcup_{n\in t} s_n \in \mc S_\al $, and hence,
\begin{align*}
\nrm{\sum_{n\in t}a_n x_n}\ge &\sum_{k\in s}|(\sum_{n\in t}a_n x_n)_k| =\sum_{n\in t}\sum_{k\in s_n}|(\sum_{m\in t}a_m x_m)_k|\ge\\
\ge & \sum_{n\in t}|a_n|\sum_{k\in s_n} |(x_n)_k| -
\sum_{n\in t}\sum_{k\in s_n} \sum_{m\in t\setminus \{n\}}|(x_m)_k|\ge \vep \sum_{n\in t}|a_n|- \frac{\vep}{2}\ge \frac\vep2 \sum_{n\in t}|a_n|.
\end{align*}
\fprue
The following characterizes the Banach-Saks property of subsets of $X_\al$.
\begin{prop}\label{odjfiojdijdsfsd}
Let $(x_n)_n$ be a weakly null sequence in $X_\al$. The following are equivalent:
\begin{enumerate}
\item[(1)] Every subsequence of $(x_n)_n$ has a further subsequence dominated by the unit basis of $c_0$.
\item[(2)] Every subsequence of $(x_n)_n$ has a further norm-null subsequence or a subsequence
equivalent to the unit basis of $c_0$.
\item[(3)] $\{x_n\}_n$ is a Banach-Saks set.
\item[(4)] $(x_n)_n$ is $<\al$-null.
\end{enumerate}
\end{prop}
\prue
$(1)\Rightarrow (2) \Rightarrow (3)$ trivially. (3) implies (4): Suppose otherwise that $(x_n)_n$ is not
$<\al$-null. Fix $\vep>0$ and $\be<\al$ such that
$$M:=\conj{n\in \N}{\nrm{x_n}_{\be}\ge \vep} \text{ is infinite}.$$
For each $n\in M$, let $s_n\in \mc S_{\be}$ such that $\sum_{k\in s_n} |(x_n)_k|\ge \vep.$ Since $(x_n)_{n\in
\N}$ is weakly-null, we can find $N\con \N$ and $t_n\con s_n$ for each $n\in N$ such that $(t_n)_{n\in N}$ is
a block sequence and $\sum_{k\in t_n} |(x_n)_k|\ge \vep/2.$ Then by Proposition \ref{iurhtiurt},
$\{x_n\}_{n\in N}$ is not Banach-Saks, and we are done.
(4) implies (1). Let $K:=\sup_{n\in \N}\nrm{x_n}$. Let $(x_n)_{n\in M}$ be a subsequence of $(x_n)_{n\in
\N}$. If $\al=0$, then $X_\al$ is isometric to $c_0$, and so we are done. Let us suppose that $\al>0$.
Fix $\vep>0$.
\begin{claim}\label{njnjvnfd}
There is $N=\{n_k\}_k\con M$, $n_k<n_{k+1}$, such that for every $i<j$ and every $s\in \mc S_{\al}$
$$\text{ if $\sum_{k\in s}|(x_{n_{i}})_k|> \vep /2^{i+1}$, then }\sum_{k\in s}|(x_{n_{j}})_k|\le \frac{\vep}{2^{j}}.$$
\end{claim}
Its proof is the following: Let $n_0=\min M$. Let $m_0\in \N$ be such that
\begin{equation}
\sum_{k>m_0}|(x_{n_0})_k|\le \frac{\vep}{2}.
\end{equation}
In other words,
\begin{equation}
\conj{s\in \mc S_\al}{\sum_{k\in s}|(x_{n_0})_k|> \frac{\vep}2}\con (\mc S_\al)_{\le m_0}.
\end{equation}
By Proposition \ref{njkrjggff} (2) there are $\al_0^{(0)},\dots,\al_{l_0}^{(0)}<\al$ such that
\begin{equation}
(\mc S_\al)_{\le m_0}\con \mc S_{\al_0^{(0)}}\oplus \cdots \oplus \mc S_{\al_{l_0}^{(0)}}.
\end{equation}
We use that $(x_n)_n$ is $<\al$-null to find $n_1\in M$, $n_1>n_0$, such that for every $n\ge n_1$ one has
that
\begin{equation}
\nrm{x_n}_{(\mc S_\al)_{\le m_0}}\le \frac{\vep}{2 }.
\end{equation}
Let now $m_1>\max\{n_1,m_0\}$ be such that
\begin{equation}
\sum_{k>m_1}|(x_{n_1})_k|\le \frac{\vep}{4}.
\end{equation}
Then there are $\al_0^{(1)},\dots,\al_{l_1}^{(1)}<\al$ such that
\begin{equation}
\conj{s\in \mc S_\al}{\sum_{k\in s}|(x_{n_1})_k|> \frac{\vep}4}\con (\mc S_\al)_{\le m_1}\con \mc S_{\al_0^{(1)}}\oplus \cdots \oplus \mc S_{\al_{l_1}^{(1)}}.
\end{equation}
Let now $n_2\in M$, $n_2>n_1$ be such that for every $n\ge n_2$ one has that
\begin{equation}
\nrm{x_n}_{(\mc S_\al)_{\le m_1}} \le \frac{\vep}{4}.
\end{equation}
In general, once $n_i$ has been defined, let $m_i>\max\{n_i,m_{i-1}\}$ be such that
\begin{equation}
\sum_{k>m_i}|(x_{n_i})_k|\le \frac{\vep}{2^{i+1}}.
\end{equation}
Then,
\begin{equation}
\conj{s\in \mc S_\al}{\sum_{k\in s}|(x_{n_i})_k|> \frac{\vep}{2^{i+1}}}\con (\mc S_\al)_{\le m_i}\con \mc S_{\al_0^{(i)}}\oplus \cdots \oplus \mc S_{\al_{l_i}^{(i)}},
\end{equation}
for some $\al_0^{(i)},\dots,\al_{l_i}^{(i)}<\al$. Let $n_{i+1}\in M$, $n_{i+1}>n_i$ be such that for all
$n\ge n_{i+1}$ one has that
\begin{equation}\label{jkreirjng}
\nrm{x_n}_{(\mc S_\al)_{\le m_i}}\le \frac{\vep}{2^{i+1}}.
\end{equation}
We have therefore established the properties we wanted for $N$.
Now fix $N$ as in Claim \ref{njnjvnfd}. Then $(x_n)_{n\in N}$ is dominated by the unit basis of $c_0$. To see
this, fix a finite sequence of scalars $(a_i)_{i\in t}$, and $s\in \mc S_\al$. If $\sum_{k\in
s}|(x_{n_i})_k|\le \vep/2^{i+1} $ for every $i\in t$, then,
\begin{align*}
\sum_{k\in s} |(\sum_{i\in t}a_i x_{n_i})_k|\le \max_{i\in t} |a_i| \cdot \sum_{k\in s}\sum_{i\in t}|(x_{n_i})_k|\le \max_{i\in t} |a_i| \sum_{i\in t} \frac\vep{2^{i+1}}\le \vep \max_{i\in t}|a_i|.
\end{align*}
Otherwise, let $i_0$ be the first $i\in t$ such that $\sum_{k\in s}|(x_{n_i})_k|> \vep/2^{i+1}$. It follows
from the claim that
\begin{equation}
\sum_{k\in s}|(x_{n_{j}})_k|\le \frac\vep{2^{j}} \text{ for every $j>i_0$.}
\end{equation}
Hence,
\begin{align*}
\sum_{k\in s} |(\sum_{i\in t}a_i x_{n_i})_k| \le & \sum_{k\in s}|(\sum_{i\in t, \, i<i_0} a_i x_{n_i})_k|+ |a_{i_0}|\cdot \sum_{k\in s} |(x_{n_{i_0}})|_k + \sum_{k\in s}|(\sum_{i>i_0} a_i x_{n_i})_k|\le \\
\le & \max_{i\in t}|a_i|\sum_{i<i_0}\frac{\vep}{2^{i+1}}+ |a_{i_0}|\nrm{x_{n_{i_0}}} + \max_{i\in t}|a_i| \sum_{i>i_0} \frac{\vep}{2^i} \le
(\vep +K)\max_i |a_i| .
\end{align*}
\fprue
\prue[Proof of Theorem \ref{ioo34iji4jtr}]
Suppose that $A$ is Banach-Saks, and suppose that $(x_n)_n$ is a sequence in $\conv(A)$ without
Ces\`{a}ro-convergent subsequences. Since $\conv(A)$ is relatively weakly-compact, we may assume that $x_n
\to_n x\in X_\al$ weakly. Let $y_n:=x_n-x$ for each $n\in \N$. Then $(y_n)_n$ is a weakly-null sequence
without
Ces\`{a}ro-convergent subsequences. Hence, by Proposition \ref{odjfiojdijdsfsd}, there is some $\vep>0$ and some $\be<\al$ such that
$$M=\conj{n\in \N}{\nrm{y_n}_\be\ge \vep}\text{ is infinite}.$$
For each $n\in M$, let $s_n\in \mc S_\be$ such that
$$\sum_{k\in s_n}|(y_n)_k|\ge \vep.$$
For each $n\in M$, write as convex combination, $x_n=\sum_{a\in F_n}\la_a \cdot a$, where $F_n\con A$ is
finite. Since $(y_n)_n$ is weakly-null, by going to a subsequence if needed, we may
assume that $(s_n)_n$ is a block sequence. Let $n_0$ be such that for all $n\ge n_0$ one has that
$\sum_{k\in s_n}|(x)_k|\le \vep/2$. Hence for every $n\ge n_0$ one has that
\begin{align*}
\vep \le & \sum_{k\in s_n} |(y_n)_k| \le \sum_{k\in s_n}|(x)_k| +\sum_{k\in s_n}\sum_{a\in F_n} \la_a |(a)_k| = \sum_{k\in s_n}|(x)_k| +\sum_{a\in F_n}\la_a\sum_{k\in s_n} |(a)_k|\le \\
\le & \sum_{k\in s_n}|(x)_k| + \max_{a\in F_n} \sum_{k\in s_n} |(a)_k| \le \frac{\vep}2 + \max_{a\in F_n} \sum_{k\in s_n}|(a)_k|.
\end{align*}
So for each $n\ge n_0$ we can find $a_n\in F_n$ such that $\sum_{k\in s_n}|(a_n)_k|\ge \vep/2$. Then, by
Proposition \ref{iurhtiurt}, $\{a_n\}_{n\ge n_0}\con A$ is not Banach-Saks, a contradiction.
\fprue
\begin{conje} Let $\mc F$ be a compact, hereditary and spreading family on $\N$. Then
the convex hull of any Banach-Saks subset $A\con X_\mc F$ is again Banach-Saks.
\end{conje}
\section{A Banach-Saks set whose convex hull is not Banach-Saks}\label{counterexample}
The purpose of this section is to present an example of a Banach-Saks set whose convex hull is not Banach-Saks. To do
this, using our characterization in Theorem \ref{erioeiofjioedf}, it suffices to find a special pre-compact
family $\mc F$ as in (c) of that theorem. The requirement that $\mc F$ be hereditary is not essential
here because $X_{\mc F}=X_{\widehat{\mc F}}$.
We introduce now some notions of special interest. In what follows, $I=\bigcup_{n\in \N}I_n$ is a partition
of $I$ into finite pieces $I_n$. A \emph{transversal} (relative to $(I_n)_n$) is an infinite subset $T$ of
$I$ such that $\#(T\cap I_n)\le 1$ for all $n$. By naturally reformulating Theorem \ref{classif1} we obtain
the following.
\begin{lemma} \label{lem-4} Let $T\con I$ be a transversal and $n\in \N$.
\begin{enumerate}
\item[(a)] If ${\mc F}$ is not $n$-large in $T$, then there exist a transversal $T_0 \subseteq T$ and
$m\leq n$ such that $\mc F[T_0]=[T_0]^{\leq m}$.
\item[(b)] If $\mc F$ is not large in $T$ then there is some transversal $T_0\con T$ and $n\in \N$ such
that $\mc F[T_0]=[T_0]^{\le n}$.
\item[(c)]If $\mc F$ is $n$-large in $T$, then there exists a transversal $T_0 \subseteq
T$ such that $[T_0]^{\le n} \subseteq {\mc F}[T_0]$.
\end{enumerate}
\end{lemma}
\begin{defn} \label{def-4} For every $0<\lambda<1$ and $s \in{\mc F}$ let us define
\begin{enumerate}
\item[(a)] $s[\lambda]:=\{n\in \mathbb{N}: \#(s\cap I_n) \geq \lambda \# I_n \}$,
\item[(b)] $s[+]:=\{n\in \mathbb{N}: s\cap I_n \neq\emptyset \}$,
\end{enumerate}
and the families of finite sets of $\mathbb{N}$
\begin{enumerate}
\item[(c)] $\mc G_\lambda({\mc F}) := \{ s[\lambda] : s\in{\mc F}\}$,
\item[(d)] $\mc G_+({\mc F}) := \{ s[+] : s\in{\mc F}\}$.
\end{enumerate}
\end{defn}
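The operations of Definition \ref{def-4} are elementary to compute. The following Python sketch (our own illustration; the dictionary encoding of the partition $(I_n)_n$ is a choice made for the example) implements $s[\lambda]$ and $s[+]$.

```python
def s_lambda(s, partition, lam):
    """s[lam] = { n : #(s intersect I_n) >= lam * #I_n }, for a
    partition given as a dict mapping n to the finite set I_n."""
    s = set(s)
    return {n for n, block in partition.items()
            if len(s & block) >= lam * len(block)}

def s_plus(s, partition):
    """s[+] = { n : s intersect I_n is nonempty }."""
    s = set(s)
    return {n for n, block in partition.items() if s & block}
```

For the partition $I_1=\{1,2\}$, $I_2=\{3,4,5,6\}$ and $s=\{1,3,4,5\}$ one gets $s[+]=\{1,2\}$, $s[1/2]=\{1,2\}$ and $s[3/4]=\{2\}$, illustrating how $s[\lambda]$ shrinks as $\lambda$ grows while $s[+]$ records every piece that $s$ merely touches.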
\begin{prop}\label{prop-5} Suppose that $\mc F$ is a $T$-family on $I$. For every $0<\lambda<1$ and every sequence of scalars $(a_n)_n$, we have that
\begin{equation}\label{nvvnbkvv}
{\lambda}\Big\|\sum_{n }a_n u_n\Big\|_{\mc G_\la(\mc F)}\le
\max\left\{\Big\|\sum_{n}a_n \Big(\frac{1}{\#I_n}\sum_{j\in I_n}
u_j\Big)\Big\|_{\mc F}, \sup_n |a_n|\right\}\le \Big\|\sum_{n}a_n u_n\Big\|_{\mc G_+(\mc F)}.
\end{equation}
\end{prop}
\begin{proof} For each $n$, set
$$
x_n:=\frac{1}{\# I_n}\sum_{j\in I_n} u_j.
$$
Given $(a_n)_n$, by Definition \ref{def-2}, for every $s\in \mc F$, we have that
\begin{eqnarray*}
\sum_{k\in s} \Big|\Big(\sum_n a_n x_n\Big)_k\Big| &=& \sum_{n\in s[+]}\sum_{k\in s\cap I_n} \frac{|a_n|}{\#I_n}=\sum_{n\in s[+]}|a_n|\frac{\#(s\cap I_n)}{\#I_n}\\
&\leq& \sum_{n\in s[+]}|a_n| \leq\Big\|\sum_{n}a_n u_n\Big\|_{\mc G_+(\mc F)},
\end{eqnarray*}
and
\begin{align*}
\sup_k \left|\left(\sum_n a_n x_n \right)_k \right|\le \sup_n \frac{|a_n|}{\#I_n} \le \sup_n|a_n|\le \nrm{\sum_n a_nu_n}_{\mc G_+(\mc F)}.
\end{align*}
This proves the second inequality in \eqref{nvvnbkvv}.
Now, given $t\in \mc G_\lambda(\mc F)$, let $s\in \mc F$ be such that $s[\lambda]=t \subseteq s[+]$. Then
$$
\sum_{k\in s}\Big|\Big(\sum_{n}a_n x_n\Big)_k\Big| \ge \sum_{n\in s[\lambda]}\sum_{k\in s\cap I_n}|a_n|\frac{1}{\#I_n}=\sum_{n\in s[\lambda]}|a_n|\frac{\#(s\cap
I_n)}{\#I_n}\ge \lambda \sum_{n\in t}|a_n|.
$$
This proves the first inequality in \eqref{nvvnbkvv}.
\end{proof}
Observe that the use of the $\sup$-norm of $(a_n)_n$ in the middle term of \eqref{nvvnbkvv} can be explained
by the fact that the sequence of averages $(x_n)_n$ is not always seminormalized, independently of the
family $\mc F$. However, for the families we will consider $(x_n)_n$ will be normalized and 1-dominating the
unit basis of $c_0$, so the term $\sup_n |a_n|$ will disappear in \eqref{nvvnbkvv}.
\begin{defn}\label{ij4tijrigrf}
A pre-compact family $\mc F$ on $I$ is called a \emph{$T$-family} when there is a partition $(I_n)_n$ of $I$ into finite pieces $I_n$ such that
\begin{enumerate}
\item[(a)] $\mc F$ is not large in any $J\con I$.
\item[(b)] There is $0<\la\le 1$ such that $\mc G_{\la}(\mc F)$ is large in $\N$.
\end{enumerate}
\end{defn}
Observe that the pre-compactness of $\mc F$ follows from (a) above.
\begin{prop} \label{prop-6} Let ${\mc F}$ be a $T$-family on $I=\bigcup_n I_n$. Then
\begin{enumerate}
\item[(a)] the block sequence of averages
$\left({1}/{\#I_n}\sum_{i\in I_n} u_i\right)_n$ is not Banach-Saks in $X_{{\mc F}}$.
\item[(b)] Every subsequence $(u_i)_{i\in T}$ of $(u_i)_{i\in I}$ has a further subsequence $(u_i)_{i\in
T_0}$ equivalent to the unit basis of $c_0$. Moreover its equivalence constant is at most the integer
$n$ such that $\mc F[T_0]=[T_0]^{\le n}$.
\end{enumerate}
\end{prop}
\begin{proof} Set $x_n:= {1}/{\#I_n}\sum_{i\in I_n} u_i$ for each $n\in \N$.
(a): From Theorem \ref{classif1} there is $M\con \N$ such that $[M]^1\con \mc G_\la(\mc F)[M]$. This readily
implies that $\nrm{x_n}_\mc F\ge \la$ for every $n\in M$. Therefore, $(x_n)_{n\in M}$ is a seminormalized
block subsequence of the unit basis $(u_n)_{n}$, and it follows that $(x_n)_{n\in M}$ dominates the unit basis of
$c_0$. From the left inequality in \eqref{nvvnbkvv} in Proposition \ref{prop-5} we have that
$(x_n)_{n\in M}$ also dominates the subsequence $(u_n)_{n\in M}$ of the unit basis of $X_{\mc G_\la(\mc F)}$.
Since $\mc G_\la(\mc F)$ is large, no subsequence of its unit basis is Banach-Saks, and therefore $(x_n)_{n\in M}$
is not Banach-Saks.
(b) Let $(u_i)_{i\in T}$ be a subsequence of the unit basis of $X_{\mc F}$. Without loss of generality, we
assume that $T$ is a transversal of $I$. Using condition (a) in Definition \ref{ij4tijrigrf}, Lemma \ref{lem-4} (b) gives us
another transversal $T_0 \subseteq T$ and $n\in \N$ such that ${\mc F}[T_0]=[T_0]^{\leq n}$. Then the
subsequence $(u_i)_{i\in T_0}$ is equivalent to the unit basis of $c_0$ and therefore Ces{\`a}ro convergent
to 0. In fact, for every $s\in {\mc F}$ and for every scalar sequence $(a_j)_{j\in T_0}$
$$
\sum_{i\in s}|a_i| = \sum_{i\in s \cap T_0}|a_i|\leq \max\{ |a_i| : i\in s\cap T_0\} \#(s\cap T_0) \leq n \|(a_i)\|_{\infty}.
$$
On the other hand it is clear that $\|(a_i)\|_{\infty}\leq \|\sum_{i\in T_0}a_iu_i\|_{{\mc F}}$.
\end{proof}
This is the main result.
\begin{thm} \label{ioo4ui4}There is a $T$-family on $\N$. More precisely, for every $0<\vep<1$ there is a partition $\bigcup_n I_n$ of $\N$ into
finite pieces $I_n$ and a pre-compact family $\mc F$ on $\N$ such that
\begin{enumerate}
\item[(a)] $\mc F$ is not 4-large in any $M\con \N$.
\item[(b)] $\mc G_{1-\vep}(\mc F)=\mc G_+(\mc F)=\mk S$, the Schreier barrier.
\item[(c)] For every $s\in \mc G_+(\mc F)$ one has that $s\cap I_n=I_n$, where $n$ is the minimal $m$ such that $s\cap I_m\neq
\buit$.
\end{enumerate}
\end{thm}
\begin{cor}
For every $\vep>0$ there is a Schreier-like space $X_{\mc F}$ such that every subsequence of its unit basis
has a further subsequence 4-equivalent to the unit basis of $c_0$, yet there is a block sequence of
averages $((1/\#I_n)\sum_{i\in I_n}u_i)_n$ which is $(1+\vep)$-equivalent to the unit basis of the Schreier
space $X_\mc S$.
\end{cor}
\begin{proof}
From Proposition \ref{prop-5}, it only remains to see that $\nrm{\sum_n a_n x_n}_\mc F\ge \sup_n |a_n|$, where
$x_n=1/\#I_n \sum_{i\in I_n}u_i$ for every $n\in \N$. To see this, fix a finite sequence of scalars
$(a_n)_{n\in t}$, and fix $m\in t$. Let $u\in \mk S$ be such that $\min u=m$ and $u\cap t=\{m\}$, and let
$s\in \mc F$ such that $s[+]=u$. Then, by the properties of $\mc F$, it follows that $s\cap I_m=I_m$, while
$s\cap I_n=\buit$ for $n\in t\setminus\{m\}$. Consequently,
\begin{align*}
\nrm{\sum_{n\in t}a_n x_n}_\mc F \ge & \sum_{k\in s}\left|\left(\sum_{n\in t}a_n \frac{1}{\#I_n}\sum_{i\in I_n} u_i \right)_k\right|=|a_m|.
\end{align*}
\end{proof}
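For illustration only (this is not part of the argument), the Schreier norm appearing in the corollary can be computed by brute force on finitely supported vectors. The Python sketch below uses the admissibility condition $\#s\le \min s$; on finitely supported vectors this yields the same supremum as the barrier condition $\#s=\min s$.

```python
from itertools import combinations

def schreier_norm(x):
    """Brute-force ||x||_S = sup over admissible s (#s <= min s, 1-based
    coordinates) of sum_{i in s} |x_i|; feasible for small supports only."""
    supp = [i for i, v in enumerate(x, start=1) if v != 0]
    best = 0.0
    for k in range(1, len(supp) + 1):
        for s in combinations(supp, k):
            if len(s) <= min(s):  # Schreier admissibility
                best = max(best, sum(abs(x[i - 1]) for i in s))
    return best

# {2,3} is admissible but {1,2,3} is not, so the norm of (1,1,1) is 2
assert schreier_norm([1.0, 1.0, 1.0]) == 2.0
```

For instance, the vector $(1,1,1)$ has norm $2$, attained on the admissible set $\{2,3\}$.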
The construction of our family as in Theorem \ref{ioo4ui4} is strongly influenced by the following
counterexample of Erd\H{o}s and Hajnal \cite{ErHa} to the natural generalization of Gillis' Lemma
\ref{gillis} to double-indexed sequences of large measurable sets.
\begin{lemma} \label{lem-5}
For every $0<\vep<1$ there is $m\in \N$ such that for every $n\in \N$ there are a probability space $(\Om,\Sig,\mu)$ and a sequence $(A_{i,j})_{1\le
i<j\le n}$ with $\mu(A_{i,j})\ge \vep$ for every $1\le i<j\le n$ such that for every $s\con \{1,\dots,n\}$ of
cardinality $m$ one has that
$$\bigcap_{\{i,j\}\in [s]^2}A_{i,j}=\buit.$$
\end{lemma}
\begin{proof}
Given $0<\vep<1$, choose $r\in \N$ with $1-1/r\ge \vep$ and set $m:=r+1$. Given $n\in \N$, let $\Om:=\{1,\dots,r\}^n$, and let $\mu$ be the normalized counting measure on $\Om$.
Given $1\le i<j\le n$ we define the subset of $n$-tuples
\begin{equation}
\label{oi4oi3u4iu5t4} A_{i,j}^{(n,r)}:=\{(a_l)_{l=1}^{n}\in \{1,\dots,r\}^n \, : \, a_{i}\neq a_{j}\}.
\end{equation}
This is the desired
counterexample. In fact,
\begin{enumerate}
\item[(a)] $\# A_{i,j}^{(n,r)}= r^n(1-1/r)$ for every $1\le i<j\le n$, and
\item[(b)] $\bigcap_{\{i,j\}\in [s]^2}A^{(n,r)}_{i,j} =\emptyset $ for every $s\in [\{1,\dots,n\}]^{r+1}$.
\end{enumerate}
To see (a), given $1\le i< j \leq n$,
$$ \{1,2,\ldots,r\}^n \setminus A_{i,j}^{(n,r)}= \bigcup_{\theta =1}^r \{(a_l)_{l=1}^{n} \in \{1,2,\ldots,r\}^n \, : \, a_{i}= a_{j}=\theta \},
$$
the last union being disjoint. Since
$$\# \{(a_l)_{l=1}^n\in \{1,2,\ldots,r\}^n : a_{i}=a_{j}=\theta \}=r^{n-2},
$$
it follows that $\# A_{i,j}^{(n,r)}= r^n(1-1/r)$. To see (b), note that any point of the intersection would have
pairwise distinct entries at the $r+1$ coordinates in $s$, yielding a subset of $\{1,\dots,r\}$ of cardinality $r+1$.
\end{proof}
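The construction in the proof can be checked by exhaustive enumeration for small parameters. The Python sketch below is illustrative only; it builds the sets \eqref{oi4oi3u4iu5t4} literally and verifies (a) and (b) for $n=4$, $r=2$.

```python
from itertools import product, combinations

def erdos_hajnal_sets(n, r):
    """Omega = {1,...,r}^n with the sets A_{i,j} = {a : a_i != a_j}."""
    omega = list(product(range(1, r + 1), repeat=n))
    A = {(i, j): {a for a in omega if a[i - 1] != a[j - 1]}
         for i, j in combinations(range(1, n + 1), 2)}
    return omega, A

omega, A = erdos_hajnal_sets(n=4, r=2)
# (a): each A_{i,j} has r^n (1 - 1/r) elements
assert all(len(s) == 16 * (1 - 1 / 2) for s in A.values())
# (b): for any s of cardinality r + 1 = 3 the pairwise intersection is empty,
# since a common point would need 3 distinct values in {1, 2}
for s in combinations(range(1, 5), 3):
    common = set(omega)
    for pair in combinations(s, 2):
        common &= A[pair]
    assert common == set()
```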
\begin{proof}[Proof of Theorem \ref{ioo4ui4}] For practical reasons
we will define such a family not on $\N$ but on a more appropriate countable set $I$. Fix $0< \lambda <1 $. We define first the disjoint sequence $(I_n)_n$.
For each $m \in \mathbb{N}, m
\geq 4$, let $r_m$ be such that
\begin{equation} \label{equac-1}
\left(1-\frac1{r_m}\right)^{\binom{m-2}{2}} \geq \lambda .
\end{equation}
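The smallest $r_m$ satisfying \eqref{equac-1} is immediate to compute; the Python sketch below is purely illustrative.

```python
from math import comb

def smallest_r(m, lam):
    """Least r with (1 - 1/r)^C(m-2,2) >= lam, as required by (equac-1);
    assumes 0 < lam < 1 and m >= 4, so the exponent is at least 1."""
    r = 2
    while (1 - 1 / r) ** comb(m - 2, 2) < lam:
        r += 1
    return r

# m = 5, lam = 1/2: the exponent is C(3,2) = 3 and (4/5)^3 = 0.512 >= 1/2
assert smallest_r(5, 0.5) == 5
```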
Let $4\leq m\leq n$ be fixed. Let
$$
I_{m,n}:= \{1,\dots,r_m\}^{n \times [\{2,\ldots,m-1\}]^2}.
$$
Let $I_n=\{n\}$ for $n=1,2,3$. For $n\geq 4$ let
$$I_n:=\prod_{4\le m \le n}I_{m,n}=\prod_{4\le m \le n}\{1,\dots,r_m\}^{n \times [\{2,\ldots,m-1\}]^2}.$$
Observe that for $n\neq n'$ one has that $I_n \cap I_{n'}=
\emptyset$. Let $I:=\bigcup_n I_n$. Now, given $4\leq m_0 \leq n$ and $2\le i_0<j_0\le m_0-1$, let
$$ \pi_{i_0,j_0}^{(n,m_0)} : I_n\rightarrow \{1,2,\ldots,r_{m_0}\}^n
$$
be the natural projection,
$$ \pi_{i_0,j_0}^{(n,m_0)}(\,\big((b_{i,j}^{(l,m)})\big)_{4\le m\le n,\,1\le l\le n ,\, 2\le i<j\le m-1 } \,):=(b^{(l,m_0)}_{i_0,j_0})_{l=1}^n\in \{1,2,\ldots,r_{m_0}\}^n.$$
We start with the definition of the family ${\mc F}$ on $I$. Recall that $\mk S:=\conj{s\con \N}{\#s=\min
s}$ is the Schreier barrier. We define $F:\mk S\to [I]^{<\infty}$ such that $F(u)\con \bigcup_{n\in u}I_n$
and then we will define $\mc F$ as the image of $F$. Fix $u=\{n_1 <\cdots< n_{n_1}\}\in {\mk S}$:
\begin{enumerate}
\item[(i)] For $u=\{1\}$, let $F(u):= I_1$.
\item[(ii)] For $u:=\{2,n\}$, $2<n$, let $F(u):= I_2\cup I_n$.
\item[(iii)] For $u:=\{3,n_1,n_2\}$, $3<n_1<n_2$, let $F(u):= I_3\cup I_{n_1}\cup I_{n_2}$.
\item[(iv)] For $u=\{n_1,\dots,n_{n_1}\}$ with $3<n_1<n_2<\cdots< n_{n_1}$, then let
$$F(u)\cap I_{n_{k}}:= I_{n_{k}} \text{ for } k=1,2,3,$$
\end{enumerate}
and for $3<k\leq n_1$, let
\begin{equation} \label{equac-2} F(u)\cap I_{n_k}:=
\bigcap_{1<i<j<k}\big(\pi_{i,j}^{(n_k,n_1)}\big)^{-1}\big(A^{(n_k,r_{n_1})}_{n_i ,n_j} \big),
\end{equation}
where the $A$'s are as in \eqref{oi4oi3u4iu5t4}. Explicitly,
$$ F(u)\cap I_{n_k}= \conj{\big((b_{i,j}^{(l,m)})_{}\big)_{4\le m\le n_k,\,1\le l\le n_k ,\, 2\le i<j\le m-1 } \in I_{n_k} }
{ b_{i,j}^{(n_i,n_1)}\neq b_{i,j}^{(n_j,n_1)} \text{, $1<i<j<k$}
}.
$$
Observe that it follows from \eqref{equac-2} that
\begin{equation}
\pi_{i,j}^{(n_k,n_1)}\big(F(u)\cap I_{n_k} \big) = A_{
n_i ,n_j}^{(n_k,r_{n_1})} \subset \{1,2,\ldots,r_{n_1}\}^{n_k}
\end{equation}
for every $1<i<j<k$.
From the definition of ${\mc F}$ it follows that $u=F(u)[+]$ for every $u \in {\mk S}$. Now, we claim that
given $u \in {\mk S}$, we have that $u=F(u)[\lambda]$, or, in other words, $\#(F(u)\cap I_n)\ge \la \#I_n$
for every $n\in u$. The only non-trivial case is when $u=\{n_1<\cdots <n_{n_1}\}$ with $n_1>3$, and $n=n_{k}$
is such that $3<k\leq n_1$. It follows from the equality in \eqref{equac-2}, (a) in the proof of Lemma
\ref{lem-5}, and the choice of $r_{n_1}$ in \eqref{equac-1} that
$$
\frac{\#(F(u)\cap I_{n_{k}})}{\#(I_{n_k})} =\prod_{1<i<j<k}
\frac{\#(A^{(n_k, r_{n_1})}_{ n_{i},n_{j}})}{(r_{n_1})^{n_{k}}}=
\prod_{1<i<j<k} \left(1-\frac{1}{r_{n_1}}\right) \ge
\left(1-\frac{1}{r_{n_1}}\right)^{\binom{n_1-2}{2}}\ge \lambda
$$
Summarizing, $\mc G_{\lambda}(\mc F)= \mc G_{+}(\mc F)= {\mk S}$. Thus, ${\mc F}$ satisfies property
(b) in Theorem \ref{ioo4ui4}. For property (a) we use the following fact.
\begin{lemma} \label{lem-6}Suppose that $\mc A \subseteq \mk S$ is a subset such that
\begin{enumerate}
\item[(a)] $\min u=\min v= n_1 > 3$ for all $u,v\in \mc A$.
\item[(b)] there are $1<i<j< n_1$ and a set $w\subset \mathbb{N}$ such that
\begin{enumerate}
\item[(b.1)] $\#w \ge r_{n_1}+2$ and $ n_1<\min w$.
\item[(b.2)] For every $l_1<l_2<\max w$ in $w$ there is $u\in \mc A$
such that $\{n_1,l_1,l_2,\max w\}\subset u$, $\#(u\cap \{1,2,\ldots,l_1\})=i$ and $\#(u\cap
\{1,2,\ldots,l_2\})=j$.
\end{enumerate}
\end{enumerate}
Then
$$
I_{\max w}\cap \bigcap_{u\in \mc A} F(u)=\emptyset.$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem-6}] Observe that for $l\in u$,
$\#(u\cap\{1,2,\ldots,l\})=i$ just means that $l$ is the
$i^{\text{th}}$ element of $u$. For every pair $\{l_1 <l_2\}\in [w\setminus \{\max w\}]^2$, take $u_{l_1 ,l_2}\in {\mc A}$ satisfying the
condition of (b.2). Since $u_{l_1,l_2}=\{n_1<\cdots<n_i=l_1<\cdots<n_j=l_2<\cdots<\max w<\cdots\leq
n_{n_1}\}$, it follows from the equality in \eqref{equac-2} that
$$
\pi_{i,j}^{(\max w,n_1)}\big(F(u_{l_1 ,l_2})\cap I_{\max w} \big) = A_{l_1,l_2}^{(\max w,r_{n_1})}.
$$
Hence
\begin{align*}
\pi_{i,j}^{(\max w,n_1)}(I_{\max w} \cap \bigcap_{u\in {\mc A}}F(u)) \subseteq &
\bigcap_{\{l_1,l_2\}
\in [w\setminus \{\max w\}]^2} \pi_{i,j}^{(\max w,n_1)}( I_{\max w} \cap F(u_{l_1,l_2})) \\
=& \bigcap_{\{l_1,l_2\}
\in [w\setminus \{\max w\}]^2} A_{l_1,l_2}^{(\max w,r_{n_1})}
= \emptyset
\end{align*}
where the last equality follows from (b) in the proof of Lemma \ref{lem-5}, since $\#w\geq r_{n_1} +2$.
\end{proof}
We continue with the proof of property (a) of $\mc F$ in Theorem \ref{ioo4ui4}. Suppose, towards a contradiction, that there
exists a transversal $T$ of $I$ such that ${\mc F}$ is $4$-large in $T$. By Lemma
\ref{lem-4} (c), there exists $T_0 \subseteq T$ such that $[T_0]^4 \subseteq {\mc
F}[T_0]$. For every $k \in T_0$, $n(k)$ denotes the unique integer $m$ for which $k\in I_{m}$. It is easy to see that if
$k_1,k_2 \in T_0$ with $k_1 <k_2$, then $n(k_1)<n(k_2)$. Now, for each $t=\{k_0<k_1<k_2<k_3\}$ in $[T_0]^4$,
let us choose $U(t)\in \mk S$ such that
$${t}\subset F(U({t})).$$
Observe that $\{n(k_0),n(k_1),n(k_2), n(k_3)\}\subset U({t})$, and hence $\#U({t})\le n(k_0)$. Now, let
$$\text{$\bar
k :=\min T_0$ and $\bar n:=n(\bar k)$.}$$
Define the coloring $\Theta:[T_0\setminus \{\bar k\} ]^3\to [\{1,2,\ldots,n(\bar k)\}]^3$ for each
$t=\{k_1<k_2<k_3\}\in [T_0\setminus \{\bar k\}]^3$ as
$$
\Theta(t)=\big(\#(U(\{\bar k\}\cup t)\cap \{1,\ldots, n(k_1)\}),\#(U(\{\bar k\}\cup t)\cap \{1,\ldots,n(k_2)\}), \min U(\{\bar k\}\cup t)\big).
$$
By Ramsey's theorem, there exist $1<i<j<n_1\leq \bar n$ and $T_1 \subseteq T_0 \setminus \{\bar k\}$
such that $\Theta$ is constant on $[T_1]^3$ with value $(i,j,n_1)$. Choose $k_1< \dots <k_{r_{\bar n}+2}$ in
$T_1$, and set
$$\mc A:=\{ U(\{\bar k,k_{l_1},k_{l_2},k_{r_{\bar n}+2}\}) : 1\le l_1 <l_2< r_{\bar n}+2 \}.
$$
Notice that $\mc A$ fulfills the hypothesis of Lemma \ref{lem-6}
with respect to the set $w= \{n(k_{l}): 1\le l \le
r_{\bar n}+2\} $, and therefore
\begin{equation}
I_{n(k_{r_{\bar n}+2})}\cap \bigcap_{u\in \mc A}
F(u)=\emptyset,
\end{equation}
which contradicts the fact that
$$k_{r_{\bar n}+2} \in I_{n(k_{r_{\bar n}+2})}\cap \bigcap_{u\in \mc A} F(u).$$
The family $\mc F$ clearly has property (c) from the statement of Theorem \ref{ioo4ui4} by construction. This
finishes the proof of the desired properties of $\mc F$.
\end{proof}
A similar analysis will be used now to prove that the closed linear span of the sequence $$x_n=\frac{1}{\#
I_n}\sum_{j\in I_n}u_j$$ is not a complemented subspace of $X_\mc F$. Let $(x_n^*)_n$ denote the sequence of
biorthogonal functionals to $(x_n)_n$ on $[x_n]^*$.
\begin{prop}
If $T:[u_k]_k\rightarrow [x_n]_n$ is a linear mapping such that
$$
\lim_{k\rightarrow\infty}\langle x_{n(k)}^*,Tu_k\rangle\neq0,
$$
then $T$ cannot be bounded. In particular, there does not exist a projection $P:X_\mc F\rightarrow [x_n]_n$.
\end{prop}
\prue
Let us suppose that $T$ is bounded. Since $\lim_{k\rightarrow\infty}\langle x_{n(k)}^*,Tu_k\rangle\neq0$, let $\alpha>0$ be such that $|\langle x_{n(k_j)}^*,Tu_{k_j}\rangle|\geq\alpha$ for every $j\in \mathbb{N}$. Moreover, since $(u_k)_k$ is weakly null, up to equivalence we can assume that $(Tu_{k_j})_j$ are disjoint blocks with respect to $(x_n)_n$.
By Proposition \ref{prop-6}(b), passing to a further subsequence it holds that $(u_{k_j})_j$ is 3-equivalent to the unit basis of $c_0$. Now, let $0<\lambda\leq1$ such that $\mc G_\lambda(\mc F)=\mk S$, and take $n_0>\frac{3\|T\|}{\alpha\lambda}$. Let $u\in \mk S$ with $\min u=n_0$. We have
$$
3\geq\Big\|\sum_j u_{k_j}\Big\|\geq\frac{1}{\|T\|}\Big\|\sum_j Tu_{k_j}\Big\|_{X_\mc F}\geq\sum_{i\in F(u)}|\langle u^*_i,\sum_jTu_{k_j}\rangle|\geq\frac{n_0\alpha\lambda}{\|T\|}.
$$
This is a contradiction with the choice of $n_0$.
\fprue
\begin{rem} The Cantor-Bendixson rank of a $T$-family must be infinite. To see this, observe that if $f:I\to
J$ is finite-to-one \footnote{ $f:I\to J$ is finite-to-one when $f^{-1}\{j\}$ is finite for every $j\in J$.}
then $f$ preserves the rank $\ro(\mc F)$ of pre-compact families $\mc F$ on $I$. Since the map $n(\cdot):I\to \N$,
defined by $n(i)=n$ if and only if $i\in I_n$, is finite-to-one, and since $n(\mc F)=\conj{\{n(i)\}_{i\in s}}{s\in \mc
F}=\mc G_+(\mc F)\supseteq \mc G_\la(\mc F)$ is large, it follows that $\ro(n(\mc F))=\ro(\mc F)$ is
infinite. In this sense our $T$-family $\mc F$ in Theorem \ref{ioo4ui4} is minimal, because $\ro(\mc
F)=\ro(n(\mc F))=\ro(\mk S)=\om$.
\end{rem}
\subsection{A reflexive counterexample}
There is a reflexive counterpart of our example $X_\mc F$. Indeed, we are going to see that the Baernstein
space $X_{\mc F,2}$ for our family $\mc F$ is such a space. It is interesting to note that the corresponding
construction $X_{\mc S,2}$ for the Schreier family $\mc S$ was used by A. Baernstein II in \cite{Baernstein}
to provide the first example of a reflexive space without the Banach-Saks property. This construction was
later generalized by C. J. Seifert in \cite{Sei} to obtain $X_{\mc S,p}$.
\begin{defn}
Given a pre-compact family $\mc F$, and given $1\le p\le \infty$, one defines on $c_{00}(\N)$ the norm
$\nrm{x}_{\mc F,p}$ for a vector $x\in c_{00}(\N)$ as follows:
\begin{equation}
\nrm{x}_{\mc F,p}:= \sup\conj{\nrm{(\nrm{E_i x}_{\mc F})_{i=1}^n}_p}{E_1<\dots<E_n, n\in \N}
\end{equation}
where $E_1<\dots<E_n$ are finite sets and $E x$ is the natural projection on $E$ defined by $Ex:= \mathbbm
1_E \cdot x$. Let $X_{\mc F,p}$ be the corresponding completion of $(c_{00},\nrm{\cdot}_{\mc F,p})$.
\end{defn}
Again, the unit Hamel basis of $c_{00}$ is a 1-unconditional Schauder basis of $X_{\mc F,p}$. Notice also
that this construction generalizes the Schreier-like spaces, since $X_{\mc F,\infty}=X_\mc F$.
\begin{prop}\label{ij4i5otjirjtr}
The space $X_{\mc F,p}$ is $\ell_p$-saturated. Consequently, if $1<p<\infty$, the space $X_{\mc F,p}$ is
reflexive.
\end{prop}
\begin{proof}
The case $p=\infty$ was already treated when we introduced the Schreier-like spaces after Definition
\ref{def-2}. So, suppose that $1\le p<\infty$.
\begin{claim}\label{ioo4rji4j}
Suppose that $(x_n)_n$ is a normalized block sequence of $(u_n)_n$. Then
\begin{equation}
\nrm{\sum_n a_n x_n}_{\mc F,p}\ge \nrm{(a_n)_n}_p.
\end{equation}
\end{claim}
To see this, for each $n$, let $(E_i^{(n)})_{i=1}^{k_n}$ be a block sequence of finite sets such that
\begin{equation}
1=\sum_{i=1}^{k_n} \nrm{E_i^{(n)}x_n}_{\mc F}^p.
\end{equation}
Without loss of generality we may assume that $\bigcup_{i=1}^{k_n}E_i^{(n)}\con \supp x_n$, hence
$E_{k_n}^{(n)}<E_{1}^{(n+1)}$ for every $n$. Set $x=\sum_n a_n x_n$. It follows that
\begin{align*}
(\nrm{\sum_n a_n x_n}_{\mc F,p})^p \ge \sum_n \sum_{i=1}^{k_n} \nrm{E_i^{(n)} x}_\mc F^p=\sum_n |a_n|^p.
\end{align*}
This finishes the proof of Claim \ref{ioo4rji4j}. It follows from this claim that $c_0\not\hookrightarrow
X_{\mc F,p}$. Fix now a normalized block sequence $(x_n)_n$ of $(u_n)_n$ and $\vep>0$. Let $(\vep_n)_n$ be
such that $ \sum_n \vep_n^p\le \vep/2$, $\vep_n>0$ for each $n$. Since $c_0\not\hookrightarrow X_{\mc F,p}$
and since $X_\mc F$ is $c_0$-saturated, we can find a $\nrm{\cdot}_{\mc F,p}$-normalized block sequence $(y_n)_n$
of $(x_n)_n$ such that
\begin{equation}
\nrm{y_n}_\mc F\le \vep_n.
\end{equation}
\begin{claim}
For every sequence of scalars $(a_n)_n$ we have that
\begin{equation} \label{kjhjohoiuhiu}
\nrm{(a_n)_n}_p\le \nrm{\sum_n a_n y_n}_{\mc F,p}\le (1+\vep)\nrm{(a_n)_n}_p.
\end{equation}
\end{claim}
Once this is established, we have finished the proof of this proposition. The first inequality in
\eqref{kjhjohoiuhiu} is a consequence of Claim \ref{ioo4rji4j}. To see the second one, fix a block sequence
$(E_i)_{i=1}^l$ of finite subsets of $\N$. For each $n$, let $B_n:=\conj{j\in \{1,\dots,l\}}{E_j y_n\neq
0}$, and for $n$ such that $B_n\neq \buit$, let $i_n:=\min B_n$, $j_n:=\max B_n$. Observe that
$i_n,j_n\in B_m$ for at most one $m\neq n$. Then, setting $y=\sum_n a_n y_n$,
\begin{align*}
\sum_{i=1}^l \nrm{E_i y}_\mc F^p = & \sum_{i\in \bigcup_n B_n}\nrm{E_i y}_\mc F^p \le \sum_n \sum_{i\in B_n}
\nrm{E_i y}_\mc F^p\le |a_1|^p\sum_{i\in B_1}\nrm{E_i y_1}_\mc F^p +\nrm{E_{j_1}y}_\mc F^p + \\
+& \sum_{n\ge 2} \left(|a_n|^p\sum_{i\in B_n}\nrm{E_i y_n}_\mc F^p+\nrm{E_{i_n} y}_\mc F^p+ \nrm{E_{j_n} y}_\mc F^p \right)\le \\
\le & \sum_n |a_n|^p\nrm{y_n}_{\mc F,p}^p + 2\max_n|a_n|^p \sum_n \vep_n^p \le (1+\vep)\sum_n |a_n|^p.
\end{align*}
\end{proof}
\begin{prop}
Given $0<\la< 1$, let $\mc F$ be a $T$-family for $\la$ as in Theorem \ref{ioo4ui4} with respect to some $\bigcup_n I_n$.
Then
\begin{enumerate}
\item[(a)] Every subsequence of the unit basis of $X_{\mc F,p}$ has a further subsequence 6-equivalent to the
unit basis of $\ell_p$.
\item[(b)] The sequence of averages
$$\left( \frac{1}{\#I_n}\sum_{i\in I_n}u_i \right)_n$$
is $\la$-equivalent to the unit basis of the Seifert space $X_{\mc S,p}$.
\end{enumerate}
\end{prop}
\begin{proof}
(a): Fix a subsequence $(u_n)_{n\in M}$ of $(u_n)_n$ and let $(u_n)_{n\in N}$ be a further subsequence of it
such that $\mc F[N]\con [N]^{\le 3}$. Fix also a sequence of scalars $(a_n)_{n\in N}$ such that $x=\sum_{n\in
N} a_n u_n\in X_{\mc F,p}$. Given a finite subset $E\con \N$ we obtain that
\begin{equation}
\nrm{E x}_\mc F\le 3\max_{n\in N,\, E u_n\neq 0}|a_n|.
\end{equation}
Now given a block sequence $(E_i)_{i=1}^l$ of finite subsets of $\N$, and given $i=1,\dots,l$, let
$A_i=\conj{n\in N}{E_i u_n\neq 0}$ and let $B:=\conj{i\in \{1,\dots l\}}{A_i\neq \buit}$. Then we obtain that
\begin{align*}
\sum_{i=1}^l \nrm{E_i x}_\mc F^p \le 3^p\sum_{i\in B} (\max_{n\in A_i} |a_n|)^p \le 2\cdot 3^p \sum_n |a_n|^p\le 6^p \sum_n |a_n|^p,
\end{align*}
the second inequality because $A_i\cap A_j=\buit$ if $i<j$ are not consecutive in $B$, and if $i<j$ are
consecutive, then $\#(A_i\cap A_j)\le 1$, so each $n$ lies in at most two of the sets $A_i$. The other inequality is proved in Claim \ref{ioo4rji4j} of
Proposition \ref{ij4i5otjirjtr}.
Let us prove (b): First of all, observe that by definition we have that $X_{\mc S,p}=X_{\mk S,p}$. Set
$x_n:=(1/\#I_n)\sum_{i\in I_n}u_i$ for each $n\in \N$, and fix a sequence of scalars $(a_n)_n$. Set also
$$x=\sum_n a_n x_n \text{ and } u=\sum_{n}a_n u_n.$$
Let $(E_i)_{i=1}^l$ be a block sequence of finite subsets of $\N$ such that
\begin{equation}
\nrm{\sum_{n}a_n u_n}_{\mk S,p}^p=\sum_{i=1}^l \nrm{E_i u}_{\mk S}^p.
\end{equation}
For each $i=1,\dots,l$, let $t_i\in \mk S$ be such that $\nrm{E_i u}_{\mk S}=\sum_{n\in t_i\cap E_i}|a_n|$.
For each $i=1,\dots,l$ let $s_i\in \mc F$ be such that $s_i[\la]=t_i$, and set $F_i:=\bigcup_{n\in E_i} I_n$.
Notice that $(F_i)_{i=1}^l$ is a block sequence of finite subsets of $\bigcup_n I_n=\N$. Then
\begin{align*}
\nrm{\sum_n a_n x_n}_{\mc F,p}^p \ge & \sum_{i=1}^l \nrm{F_i(\sum_n a_n x_n)}_\mc F^p= \sum_{i=1}^l \nrm{\sum_{n\in E_i} a_n x_n}_\mc F^p \ge \\
\ge & \sum_{i=1}^l \left(\sum_{k\in s_i}|(\sum_{n\in E_i} a_n x_n)_k|\right)^p \ge \sum_{i=1}^l (\la\sum_{n\in E_i\cap t_i}|a_n|)^p=\la^p\nrm{\sum_{n}a_n u_n}_{\mk S,p}^p.
\end{align*}
For the other inequality, let $(F_i)_{i=1}^l$ be a block sequence such that
\begin{equation}
\nrm{\sum_{n}a_n x_n}_{\mc F,p}^p=\sum_{i=1}^l \nrm{F_i x}_{\mc F}^p.
\end{equation}
For each $i=1,\dots,l$, let $s_i\in \mc F$ be such that $\nrm{F_i x}_{\mc F}=\sum_{k\in s_i} |(F_i x)_k|$,
and $E_i:=\conj{n\in \N}{F_i\cap I_n\neq \buit}$. Then, setting $t_i:=s_i[+]\in \mk S$, we have that
\begin{equation}
\nrm{F_i x}_{\mc F} = \sum_{n\in s_i[+]\cap E_i}|a_n|\frac{\#((s_i \cap F_i)\cap I_n)}{\#I_n}\le \sum_{n\in s_i[+]}|(E_i u)_n|\le \nrm{E_i u}_\mk S.
\end{equation}
Since $(E_i)_{i=1}^l$ is a block sequence it follows that
\begin{align*}
\nrm{\sum_n a_n u_n}_{\mk S,p}^p\ge &\sum_{i=1}^l \nrm{E_i u}_{\mk S}^p \ge \sum_{i=1}^l \nrm{F_i x}_{\mc F}^p=\nrm{\sum_{n}a_n x_n}_{\mc F,p}^p.
\end{align*}
\end{proof}
There is another, more general, approach to finding a reflexive counterexample to Question 1. This can be done
by considering the interpolation space $\De_p(W,X)$, $1<p<\infty$, where $W$ is the closed absolute convex
hull of a Banach-Saks subset of $X$ which is not itself Banach-Saks.
Recall that given a convex, symmetric and bounded subset $W$ of a Banach space $X$, and $1<p<\infty$, one defines the
Davis--Figiel--Johnson--Pe{\l}czy\'nski \cite{Da-Fi-Jo-Pel} interpolation space $Y:=\De_{p}(W,X)$ as the space
$$\conj{x\in X}{\nrm{x}_Y<\infty},$$
where
$$\nrm{x}_Y:=\nrm{(|x|_n)_n}_p$$ and where for each $n$,
$$|x|_n:=\inf\conj{\la>0}{\frac{x}{\la}\in 2^{n} W +\frac1{2^n}B_X}.$$
The key is the following.
\begin{lemma}
A subset $A$ of $W$ is a Banach-Saks subset of $X$ if
and only if $A$ is a Banach-Saks subset of $Y:=\De_{p}(W,X)$.
\end{lemma}
\prue
Fix $A\con W$, and set $Y:=\De_{p}(W,X)$. Since the identity $j: Y\to X$ is a bounded operator, it follows
that if $A$ is a Banach-Saks subset of $Y$ then $A=j(A)$ is also a Banach-Saks subset of $X$.
Now suppose that $A$ is a Banach-Saks subset of $X$. Going towards a contradiction, we fix a weakly
convergent sequence $(x_n)_n$ in $A$ with limit $x$ generating an $\ell_1$-spreading model in $Y$. Let $\de>0$
witness this, and set $y_n:=x_n-x\in 2 W$ for each $n$. Observe that it follows from the definition that
\begin{enumerate}
\item[(a)] For every $\la>0$ and every $\vep>0$ there is $n_0$ such that for every $x\in \la W$ we have that
$\sum_{n>n_0}|x|_n^p\le \vep$.
\end{enumerate}
Since $A$ is Banach-Saks in $X$, we assume without loss of generality that the sequence $(y_n)_n$ is
uniformly weakly-convergent (to 0). Observe that then
\begin{enumerate}
\item[(b)] For every $\vep>0$ there is $n_0$ such that if $\#s= n_0$, then $\nrm{\sum_{k\in s}y_k}_X\le \vep \#s$.
\end{enumerate}
Consequently,
\begin{enumerate}
\item[(c)] For every $\vep>0$ and $r$ there is $m$ such that if $\#s=m$, then $\sum_{n\le r}\big|\frac1{\#s}\sum_{k\in s}y_k\big|_n^p \le \vep$.
\end{enumerate}
Now let $k\in \N$ be such that $k^{1/p}<\de k$, and $\vep>0$ such that $k^{1/p}+\vep<\de k$. Using (a) and
(c) above we can find finite sets $s_1<\dots<s_k$ such that
\begin{enumerate}
\item[(d)] $s=\bigcup_{i=1}^k s_i\in \mc S$.
\item[(e)] Setting $z_i:=(1/\#s_i)\sum_{j\in s_i}y_j$ for each $i=1,\dots,k$, there is a block sequence $(v_i)_{i=1}^k$ in $\ell_p$ such that
$\nrm{v_i}_p\le 1$, $i=1,\dots,k$, and such that
\begin{equation}
\nrm{(|z_1+\dots+z_k|_n)_n-(v_1+\dots+v_k)}_p\le \vep.
\end{equation}
\end{enumerate}
It follows then from (d), (e) and the fact that $(y_n)_n$ generates an $\ell_1$-spreading model with
constant $\de$ that
\begin{align*}
\de k \le \nrm{\sum_{i=1}^k z_i}_Y \le \nrm{v_1+\dots+v_k}_p+\vep \le k^{\frac1p}+\vep <\de k,
\end{align*}
a contradiction.
\fprue
Let now $X:=X_\mc F$, where $\mc F$ is a $T$-family, and let $W$ be the closed absolute convex hull of the unit basis $\{u_n\}_n$ of $X_\mc F$.
\begin{prop}
The interpolation space $Y:=\De_{p}(W,X_\mc F)$, $1<p<\infty$, is a reflexive space with a weakly-null
sequence which is a Banach-Saks subset of $Y$, but its convex hull is not. \qed
\end{prop}
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox
to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
} \providecommand{\href}[2]{#2}
\section{Introduction}
The equity default swap contract has recently been introduced in the
financial markets. The contract pays a recovery proportion of a notional
amount at the time of the occurrence of a specified equity event, up to a
maturity that is typically $5$ years. The equity event is defined as the
first time the stock price drops below $30\%$ of the price prevailing at
contract initiation. The distribution of this first passage time is then of
critical importance in the valuation of the contract. These distributions
are known for the geometric Brownian motion model and results have also
recently been obtained (Davydov and Linetsky (2001), Campi and Sbuelz
(2005)) for the constant elasticity of variance (CEV) model. The prices of
European options under these models, however, are not consistent with
observed market option prices and such observations call into question the
resulting swap prices.
Observed market option prices are more closely matched by a variety of pure
jump L\'{e}vy process models for the evolution of the logarithm of the stock
price. Here we shall focus attention on the four parameter $CGMY$ L\'{e}vy
model introduced by Carr, Geman, Madan and Yor (2002). First passage times
for such L\'{e}vy processes require knowledge of the distribution of the
supremum and infimum of the process over a fixed time interval. The product
of the Laplace transforms of these distributions is known by the famed
Wiener-Hopf factorization. Identification of the law of the infimum and
supremum then flounders on the inability to analytically perform the
factorization. Again, it is known that if the L\'{e}vy process has only
one-sided jumps, say downwards, then the law of the supremum is known and
one may then use the Wiener-Hopf identity to deduce the law of the infimum.
Such strategies have been effectively pursued in Rogers (2000), Novikov,
Melchers, Shinjikashvili and Kordzakhia (2003), Khanna and Madan (2004) and
Chiu and Yin (2005). Processes with one sided jumps are dominated by those
with two sided jumps in their ability to explain option prices.
For L\'{e}vy processes with two sided jumps the double Laplace transform in
time and the level of the infimum and supremum have been derived by Nguyen
and Yor (2002) for some L\'{e}vy processes using bivariate integral
representations. However, the numerical inversion of these transforms
involves four integrals and is computationally quite involved. Here we
consider an alternative strategy for obtaining the first passage
distribution of two sided jump L\'{e}vy processes. To this end, we use
phase-type distributions, defined as the time until absorption of a finite
state continuous-time Markov process (see Asmussen (1992)). It is a
well-known fact that this class comes close to being complete for obtaining first
passage probabilities of the type we consider.
We employ results from Asmussen, Avram and Pistorius (2004) to obtain closed
forms for the Laplace transform of first passage distributions for L\'{e}vy
processes when the L\'{e}vy measure has a phase type distribution. In
addition to providing tractable computational schemes, phase-type
distributions have the advantage of forming a dense class so that by
increasing the size of the state space, one can in principle get arbitrarily
close to a target density and thereby get arbitrarily good approximations of
the first passage probabilities.
A particularly important subclass of phase-type distributions is formed by
the hyperexponential distributions, i.e. finite mixtures of exponentials.
The underlying Markov process then chooses some initial state, stays there
an exponential time with rate depending on the state and goes directly to
the absorbing state. This subclass is appealing for the $CGMY$ L\'{e}vy case
as the target density is completely monotonic, i.e. a possibly continuous
mixture of exponentials, so that by a discretization procedure it is a limit
of (possibly scaled) hyperexponential densities. In practice, the fitting
can be done either by maximum likelihood (Asmussen, Nerman and Olsson
(1996)) as then one wants to capture the shape of the target density, or by
minimizing a distance measure paying specific attention to specific features
of interest. In particular, in first passage problems a good fit in the tail
is crucial, and hyperexponential approximations to Pareto tails have been
obtained in this way (Feldman and Whitt (1998)). This is the path we follow
in this paper. We approximate the L\'{e}vy density of the $CGMY$ process by
a hyperexponential distribution and we then use the first passage time of
the approximating process to determine the Laplace transform of the first
passage time distribution for the approximating L\'{e}vy process. One
dimensional Laplace transform inversions using the methods of Abate and
Whitt (1995) then give us both the survival probability and the first
passage density for this approximation. One may wish to evaluate the quality
of this approximation to an exact result for the $CGMY$ process. The latter
is however not computationally available at this time. One may obtain
alternative approximations by either solving partial integro differential
equations or by Monte Carlo simulation. The simulation would require the
development of appropriate importance sampling methodologies. We therefore
leave these questions for future research. The same considerations apply
with respect to constructing a priori bounds on the error with respect to
the exact result.
From the first passage density and survival probability one may determine
the quote on an equity default swap contract. We estimate the $CGMY$
parameters from the prices of options on Ford and GM over a three year
period and extract daily estimates of quotes on equity default swap
contracts using the above hyperexponential approximation. We compare these
prices with market quotes on the $CDS$ rates and we observe that these two
separate sets of prices for a credit event, indeed, correlate well.
The outline of the rest of the paper is as follows. Section 2 presents the
details for the construction of the Laplace transform of the first passage
time for an approximation to the $CGMY$ L\'{e}vy process. Section 3
describes how this transform is used in quoting on equity default swap
contracts. Section 4 presents the results of daily calibration of the $CGMY$
model to market option prices. Section 5 compares the resulting equity
default swap prices with their credit default swap counterparts. Section 6
concludes.
\section{The CGMY\ First Passage Time}
We wish to determine the first passage time of the $CGMY$ L\'{e}vy process
to various levels using a phase type distribution to approximate the
process. For the Laplace transform of the first passage time of the
approximating process, i.e. a process with a phase type L\'{e}vy measure, we
follow Asmussen, Avram and Pistorius (2004). Given that the L\'{e}vy measure
is completely monotone we may employ a very special phase type distribution
that approximates the L\'{e}vy measure as a mixture of exponentials. The
phase type distribution we employ therefore has a simple form. Starting from
an initial state we have an entrance into a number of states where we stay
until absorption. There are no transitions between states. Suppose the L\'{e}%
vy measure has the form
\begin{eqnarray}
k(x) &=&\sum_{i=1}^{n}a_{i}e^{-\alpha _{i}x}\mathbf{1}_{x>0} \notag \\
&&+\sum_{j=1}^{m}b_{j}e^{-\beta _{j}|x|}\mathbf{1}_{x<0}. \label{LM}
\end{eqnarray}
Our phase type distribution for the positive jumps has $n+1$ states with the
generator matrix%
\begin{equation}
T^{(+)}=\left(
\begin{array}{cccc}
-\alpha _{1} & & & 0 \\
& -\alpha _{2} & & \\
& & . & 0 \\
& & & -\alpha _{n}%
\end{array}%
\right) \label{PTD}
\end{equation}%
with unnormalized initial state probabilities proportional to $\frac{a_{i}}{%
\alpha _{i}}$ for the states $i=1,\cdots ,n$ and the vector of absorption
rates into the final state is
\begin{equation}
t=(\alpha _{1},\alpha _{2},\cdots ,\alpha _{n}). \label{AR}
\end{equation}
The characteristic exponent for the approximation now takes the special form%
\begin{eqnarray}
\kappa (s) &=&\log E\left[ \exp \left( sX_{1}\right) \right] \notag \\
&=&\mu s+\lambda _{+}\sum_{i=1}^{n}\pi _{i}^{+}\left( \frac{\alpha _{i}}{%
\alpha _{i}-s}-1\right) +\lambda _{\_}\sum_{j=1}^{m}\pi _{j}^{-}\left( \frac{%
\beta _{j}}{\beta _{j}+s}-1\right) +\frac{\sigma ^{2}}{2}s^{2} \notag \\
\lambda ^{+} &=&\sum_{i=1}^{n}\frac{a_{i}}{\alpha _{i}};\text{ }\lambda
_{\_}=\sum_{j=1}^{m}\frac{b_{j}}{\beta _{j}} \label{ce} \\
\pi _{i}^{+} &=&\frac{a_{i}}{\lambda _{+}\alpha _{i}};\text{ }\pi _{j}^{-}=%
\frac{b_{j}}{\lambda _{\_}\beta _{j}}. \notag
\end{eqnarray}%
Here the value of $\sigma ^{2}$ depends on the diffusion approximation
we incorporate for the small jumps; this is described in greater detail
later. The drift $\mu $ is chosen to make the overall drift of the stock
price equal to the interest rate less the dividend yield.
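A direct implementation of \eqref{ce} is straightforward. The Python sketch below is illustrative only (it is not the code used for the computations reported later); the parameter values in the check are arbitrary.

```python
import numpy as np

def kappa(s, mu, sigma, a, alpha, b, beta):
    """Characteristic exponent (ce) of the hyperexponential approximation:
    a, alpha are the weights/rates of the positive-jump terms in (LM),
    b, beta those of the negative-jump terms."""
    a, alpha = np.asarray(a, float), np.asarray(alpha, float)
    b, beta = np.asarray(b, float), np.asarray(beta, float)
    lam_p, lam_m = np.sum(a / alpha), np.sum(b / beta)
    pi_p, pi_m = a / (lam_p * alpha), b / (lam_m * beta)
    jumps = lam_p * np.sum(pi_p * (alpha / (alpha - s) - 1.0)) \
        + lam_m * np.sum(pi_m * (beta / (beta + s) - 1.0))
    return mu * s + jumps + 0.5 * sigma ** 2 * s ** 2

# kappa(0) = 0, as it must be for a characteristic exponent
assert abs(kappa(0.0, 0.1, 0.2, [1.0], [3.0], [1.0], [2.0])) < 1e-12
```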
Let $T_{x}$ be the first passage time over a level $x>0$. The Laplace
transform of $T_{x}$ at the transform argument $a$,%
\begin{equation*}
E\left[ e^{-aT_{x}}\right] ,
\end{equation*}%
is obtained in terms of the roots, with positive real part, of the equation
\begin{equation}
\kappa (s)=a. \label{roots}
\end{equation}
We observe that one may write
\begin{equation}
\kappa (s)=\frac{p(s)}{q(s)} \label{polyroot}
\end{equation}%
as a ratio of two polynomials and so we seek the roots of the polynomial
equation
\begin{equation}
p(s)=aq(s). \label{polyroot2}
\end{equation}
The polynomial $q(s)$ is
\begin{equation}
q(s)=\prod_{i=1}^{n}(\alpha _{i}-s)\prod_{j=1}^{m}\left( \beta _{j}+s\right)
\label{polyq}
\end{equation}%
and
\begin{equation}
p(s)=q(s)\left( \frac{\sigma ^{2}}{2}s^{2}+\mu s-\left( \lambda _{+}+\lambda
_{-}\right) -\lambda _{+}\sum_{i=1}^{n}\pi _{i}^{+}\alpha _{i}\frac{1}{%
s-\alpha _{i}}+\lambda _{-}\sum_{j=1}^{m}\pi _{j}^{-}\beta _{j}\frac{1}{%
\beta _{j}+s}\right) . \label{polyp}
\end{equation}
We observe that $\kappa (0)=0$ and the poles of $\kappa (s)$ are exactly
equal to the eigenvalues of $-T^{(+)}$ and those of $T^{(-)}$ (and also the
roots of $q(s)=0$). We note that $p-aq$ is of degree $n+m+2$ if $\sigma >0$
and of degree $n+m+1$ if $\sigma =0,\mu >0.$ In the case $\mu >0,$ $\sigma
=0,$ $\kappa (s)=a$ has $n+1$ distinct positive roots and $m$ distinct
negative roots. If $\sigma >0$, $\kappa (s)=a$ has $n+1$ distinct positive
roots and $m+1$ distinct negative roots. Finally, for $\mu =0,\sigma =0$
there are $n$ positive and $m$ negative distinct roots.
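To make the root structure concrete, the following minimal Python sketch (with purely hypothetical parameter values for the $n=m=1$, $\sigma>0$ case, so that $\pi^{+}=\pi^{-}=1$) brackets the roots of $\kappa(s)=a$ between the poles of $\kappa$ at $s=\alpha$ and $s=-\beta$ and refines them by bisection; it recovers the count of $n+1$ positive and $m+1$ negative roots stated above.

```python
# Hypothetical one-phase-per-side example (n = m = 1, so pi^+ = pi^- = 1).
alpha, beta = 2.0, 3.0      # positive/negative jump decay rates
lam_p, lam_m = 0.5, 0.4     # arrival rates lambda_+, lambda_-
mu, sigma, a = 0.05, 0.3, 0.1

def kappa(s):
    """Characteristic exponent (ce) specialised to the n = m = 1 model."""
    return (mu * s
            + lam_p * (alpha / (alpha - s) - 1.0)
            + lam_m * (beta / (beta + s) - 1.0)
            + 0.5 * sigma ** 2 * s ** 2)

def bisect(f, lo, hi, iters=200):
    """Bisection for a root of f on (lo, hi), assuming a sign change."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0.0) == (flo > 0.0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

g = lambda s: kappa(s) - a
eps = 1e-9
# sigma > 0: n + 1 = 2 positive and m + 1 = 2 negative roots, separated by
# the poles of kappa at s = alpha and s = -beta.
roots = [bisect(g, eps, alpha - eps),
         bisect(g, alpha + eps, 50.0),
         bisect(g, -beta + eps, -eps),
         bisect(g, -50.0, -beta - eps)]
```

In the general case one would instead compute all roots of the polynomial $p(s)-aq(s)$; the bisection above merely illustrates how the poles of $\kappa$ separate the roots.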
Let $k_{+}$ be the number of positive roots. Suppose the positive roots are $%
\rho _{i},$ $i=1,\cdots ,k_{+}$ then letting
\begin{equation}
M_{t}=\sup_{s\leq t}\left( X_{s}\vee 0\right) \label{sup}
\end{equation}%
and writing%
\begin{equation}
P\left( M_{e(a)}\in dx\right) =\int_{0}^{\infty }ae^{-at}P(M_{t}\in dx)\,dt
\label{supdistr}
\end{equation}%
($e(a)$ is an independent exponential random variable with mean $a^{-1}$),
we have that (for $s$ with $\mathcal{R}(s)\leq 0$)
\begin{eqnarray}
\phi _{a}^{+}(s) &=&\int_{0}^{\infty }ae^{-at}E\left[ e^{sM_{t}}\right] dt
\notag \\
&=&P\left( M_{e(a)}=0\right) +\int_{0^{+}}^{\infty }e^{sx}P\left(
M_{e(a)}\in dx\right) \notag \\
&=&\phi _{a}^{+}(-\infty )+\int_{0^{+}}^{\infty }e^{sx}P\left( M_{e(a)}\in
dx\right) . \label{phiplus}
\end{eqnarray}%
This function may be expressed in terms of the positive roots $\rho _{i}$ by
\begin{eqnarray}
\phi _{a}^{+}(s) &=&\frac{\det \left( -sI-T\right) }{\det \left( -T\right) }.%
\frac{\prod_{i=1}^{k_{+}}\left( -\rho _{i}\right) }{\prod_{i=1}^{k_{+}}%
\left( s-\rho _{i}\right) } \notag \\
&=&\frac{\prod_{i=1}^{n}\left( 1-s\alpha _{i}^{-1}\right) }{%
\prod_{i=1}^{k_{+}}\left( 1-s\rho _{i}^{-1}\right) } \label{phiplus2}
\end{eqnarray}%
where $\rho _{1},\cdots ,\rho _{k_{+}}$ are the positive roots of $%
\kappa (s)=a$ and where $T=\mathrm{diag}(-\alpha _{i})$ is the restriction of $T^{(+)}$
to the nonabsorbing states. If $\sigma =0$, the number of positive roots $%
k_{+}$ is equal to the number of positive phases $n$, $k_{+}=n$, and we note
that
\begin{equation}
\phi _{a}^{+}(-\infty )=\prod_{i=1}^{k_{+}}\frac{\rho _{i}}{\alpha _{i}}
\label{phitail}
\end{equation}%
and $0<\phi _{a}^{+}(-\infty )<1.$
If $\sigma >0$ then the number of positive roots is $k_{+}=n+1$, one more
than the number of positive phases $n$: $\rho _{1},\cdots ,\rho _{n+1}$, and
we have the formula
\begin{equation}
\phi _{a}^{+}(s)=\frac{\prod_{i=1}^{n}\left( 1-s\alpha _{i}^{-1}\right) }{%
\prod_{i=1}^{n+1}\left( 1-s\rho _{i}^{-1}\right) } \label{phiplus3}
\end{equation}%
furthermore, $\phi _{a}^{+}(-\infty )=\lim_{s\rightarrow -\infty }\phi
_{a}^{+}(s)$ is now zero.
Hence in the case that there is a Brownian component present $(\sigma >0)$
\begin{equation}
\phi _{a}^{+}(-\infty )=0. \label{phitail2}
\end{equation}
By performing a partial fraction decomposition we can carry out the Laplace
inversion in $s$ analytically: note that we can write
\begin{equation}
\phi _{a}^{+}(s)-\phi _{a}^{+}(-\infty )=\sum_{i=1}^{k_{+}}A_{i}^{+}\frac{%
-\rho _{i}}{s-\rho _{i}} \label{phiplus4}
\end{equation}%
where
\begin{equation}
A_{i}^{+}=\frac{\prod_{j=1}^{n}\left( 1-\rho _{i}\alpha _{j}^{-1}\right) }{%
\prod_{j=1,j\neq i}^{k_{+}}\left( 1-\rho _{i}\rho _{j}^{-1}\right) }.
\label{Ai}
\end{equation}%
The Laplace transform can be inverted explicitly to find for $x>0$%
\begin{equation}
P\left( M_{e(a)}<x\right) =P\left( M_{e(a)}=0\right)
+\sum_{i=1}^{k_{+}}A_{i}^{+}(1-e^{-\rho _{i}x}). \label{Mcdf}
\end{equation}%
It follows from the fact that
\begin{equation}
P(M_{e(a)}<\infty )=1 \label{cdf1}
\end{equation}%
that
\begin{equation}
1-\sum_{i=1}^{k_{+}}A_{i}^{+}=P\left( M_{e(a)}=0\right) =\phi
_{a}^{+}(-\infty ). \label{phitail3}
\end{equation}%
In particular we get that
\begin{eqnarray}
P\left( M_{e(a)}>x\right) &=&1-P\left( M_{e(a)}=0\right)
-\sum_{i=1}^{k_{+}}A_{i}^{+}(1-e^{-\rho _{i}x}) \notag \\
&=&1-\sum_{i=1}^{k_{+}}A_{i}^{+}-P\left( M_{e(a)}=0\right)
+\sum_{i=1}^{k_{+}}A_{i}^{+}e^{-\rho _{i}x} \notag \\
&=&\sum_{i=1}^{k_{+}}A_{i}^{+}e^{-\rho _{i}x}. \label{Mccdf}
\end{eqnarray}
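The identities (\ref{phiplus3}), (\ref{Ai}) and (\ref{phitail3}) are easy to check numerically. The sketch below uses hypothetical values for the phase rates $\alpha_i$ and the roots $\rho_i$ (in practice the $\rho_i$ come from solving $\kappa(s)=a$) and verifies the partial fraction expansion together with $\sum_i A_i^{+}=\phi_a^{+}(0)=1$ in the $\sigma>0$ case.

```python
# Hypothetical inputs: alphas are the positive phase rates alpha_i; rhos stand
# in for the k_+ = n + 1 positive roots of kappa(s) = a (arbitrary distinct
# values here, purely to exercise the algebra).
alphas = [2.0, 5.0]
rhos = [1.2, 3.0, 7.0]

def phi_plus(s):
    """phi_a^+(s) as in (phiplus3)."""
    num = den = 1.0
    for al in alphas:
        num *= 1.0 - s / al
    for rh in rhos:
        den *= 1.0 - s / rh
    return num / den

def partial_fraction_coeffs():
    """The coefficients A_i^+ of (Ai)."""
    coeffs = []
    for i, ri in enumerate(rhos):
        num = den = 1.0
        for al in alphas:
            num *= 1.0 - ri / al
        for j, rj in enumerate(rhos):
            if j != i:
                den *= 1.0 - ri / rj
        coeffs.append(num / den)
    return coeffs

A = partial_fraction_coeffs()
s_test = -0.7
lhs = phi_plus(s_test)
rhs = sum(Ai / (1.0 - s_test / ri) for Ai, ri in zip(A, rhos))
```

Evaluating $\phi_a^{+}$ at $s=0$ shows directly why the $A_i^{+}$ sum to one whenever $k_{+}=n+1$.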
Now since
\begin{eqnarray}
P\left( M_{e(a)}>x\right) &=&\int_{0}^{\infty }ae^{-at}P\left(
M_{t}>x\right) dt \notag \\
&=&a\int_{0}^{\infty }e^{-at}P\left( T_{x}<t\right) dt \label{fpt}
\end{eqnarray}%
we find the Laplace transform of $P\left( T_{x}<t\right) $%
\begin{equation}
\int_{0}^{\infty }e^{-at}P\left( T_{x}<t\right) dt=\frac{1}{a}%
\sum_{i=1}^{k_{+}}A_{i}^{+}e^{-\rho _{i}x} \label{fptlt}
\end{equation}%
where $\rho _{i}=\rho _{i}(a)$ and $A_{i}^{+}=A_{i}^{+}(a)$ depend on $a.$
We employ the methods of Abate and Whitt (1995) to obtain first passage
probabilities and the first passage density. We next consider the details
for the diffusion approximation and the phase-type approximation of the $%
CGMY$ process.
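For completeness, a minimal sketch of the Euler-summation variant of the Abate--Whitt inversion is given below; the tuning constants $A=18.4$, $n=15$, $m=11$ are commonly quoted defaults, and the routine is checked here on a transform with a known inverse rather than on (\ref{fptlt}) itself.

```python
import math

def abate_whitt_euler(F, t, m=11, n=15, A=18.4):
    """Euler algorithm of Abate and Whitt (1995): trapezoidal discretisation
    of the Bromwich integral plus Euler (binomial) summation of the resulting
    alternating series."""
    def term(k):
        s = complex(A, 2.0 * math.pi * k) / (2.0 * t)
        return (-1.0) ** k * F(s).real
    acc = 0.5 * F(complex(A, 0.0) / (2.0 * t)).real
    partial = []
    for k in range(1, n + m + 1):
        acc += term(k)
        partial.append(acc)
    # binomial averaging of the partial sums s_n, ..., s_{n+m}
    euler = sum(math.comb(m, j) * partial[n - 1 + j] for j in range(m + 1))
    return math.exp(0.5 * A) / t * euler / 2.0 ** m

# check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = abate_whitt_euler(lambda s: 1.0 / (s + 1.0), 1.0)
```

The discretisation error of the trapezoidal rule is of order $e^{-A}\approx 10^{-8}$ with these defaults, which is ample for the passage-time probabilities needed here.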
\subsection{CGMY details}
The $CGMY$ process was introduced by Carr, Geman, Madan and Yor (2002) (see
also Koponen (1995), Boyarchenko and Levendorskii (1999, 2000)) and is a
pure jump L\'{e}vy process with the L\'{e}vy density%
\begin{equation}
k(x)=C\frac{e^{-Mx}}{x^{1+Y}}\mathbf{1}_{x>0}+C\frac{e^{-G|x|}}{|x|^{1+Y}}%
\mathbf{1}_{x<0}. \label{cgmylm}
\end{equation}%
For this completely monotone L\'{e}vy density of the $CGMY$ process we
recognize that
\begin{equation}
\frac{1}{x^{1+Y}}=\int_{0}^{\infty }\frac{u^{Y}e^{-ux}}{\Gamma (1+Y)}du
\label{gamma}
\end{equation}%
and consider the approximation scheme
\begin{equation}
\frac{1}{x^{1+Y}}\approx \sum_{i=1}^{N-1}\frac{%
u_{i}^{Y}e^{-u_{i}x}(u_{i+1}-u_{i})}{\Gamma (1+Y)} \label{fitfn}
\end{equation}%
for a prespecified sequence of exponential decay coefficients $u_{i}$ that
correspond to reasonable levels of mean jump sizes under the single
exponential model. The specific values for $u_{i}$ were obtained using a
least squares algorithm that minimizes the sum of squared errors between the
left hand side of (\ref{fitfn}) and the right hand side of (\ref{fitfn})
evaluated at the $x$ points starting at $x=.25$ and increasing to $x=5$ in
steps of $.025.$ As starting values we employed $.5,2,5,10,20,40$ and $100$.
Our application works with the value of $Y$ fixed at $.5$ and hence the
approximation of equation (\ref{fitfn}) developed and subsequently used is
independent of all parameter variations. The resulting values for $u_{i}$
were $.1940,$ $.5982,$ $.8434,$ $1.1399,$ $1.5308,$ $2.1211,$ and $3.4055.$
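As a sanity check on the representation (\ref{gamma}) that underlies this fitting scheme, one can reproduce $x^{-(1+Y)}$ for $Y=.5$ by straightforward quadrature of the exponential mixture; the quadrature grid below is illustrative and unrelated to the fitted $u_i$.

```python
import math

Y = 0.5

def power_law_via_mixture(x, u_max=60.0, steps=120000):
    """Trapezoidal quadrature of (gamma):
    1/x^{1+Y} = int_0^infty u^Y exp(-u x) du / Gamma(1+Y)."""
    h = u_max / steps
    total = 0.0
    for k in range(1, steps + 1):          # the integrand vanishes at u = 0
        w = 0.5 if k == steps else 1.0
        u = k * h
        total += w * u ** Y * math.exp(-u * x)
    return h * total / math.gamma(1.0 + Y)

approx_1 = power_law_via_mixture(1.0)      # target 1 / 1^{1.5} = 1
approx_2 = power_law_via_mixture(2.0)      # target 2^{-1.5}
```

The discrete scheme (\ref{fitfn}) is exactly such a quadrature with a short, optimised grid of nodes $u_i$.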
We then approximate the $CGMY$ L\'{e}vy density on the two sides by
\begin{eqnarray}
k_{+}(x) &=&\sum_{i=1}^{N-1}c_{i}e^{-(M+u_{i})x} \label{cgmylmap} \\
k_{-}(x) &=&\sum_{i=1}^{N-1}d_{i}e^{-(G+u_{i})x} \label{cgmylman} \\
c_{i} &=&d_{i}=\frac{Cu_{i}^{Y}(u_{i+1}-u_{i})}{\Gamma (1+Y)}. \notag
\end{eqnarray}
We then have in terms of our general discussion above%
\begin{equation}
\alpha _{i}=M+u_{i};\text{ }\beta _{i}=G+u_{i}. \label{coefs}
\end{equation}
Alternative but somewhat related procedures for fitting hyperexponentials to
general densities (rather than L\'{e}vy measures as here) have been
considered in Feldman and Whitt (1998), Andersen and Nielsen (1998) and
Asmussen, Jobmann and Schwefel (2002).
\subsection{First Passage to a low level}
For the first passage to a low level we may work with the first passage of $%
-X(t)$ to a high level. In this case we reverse the roles of $\alpha ,\beta $
and write
\begin{eqnarray}
\alpha _{i} &=&G+u_{i} \label{coefs2} \\
\beta _{i} &=&M+u_{i}. \notag
\end{eqnarray}
\subsection{The small jump diffusion approximation}
Above we discussed a procedure to approximate a $CGMY$ process $X$ by a L%
\'{e}vy process $X_{1}$ with density (\ref{LM}); to improve the
approximation, we would now like to approximate the difference by a Brownian
motion.
A way to refine the above approximation of $X$ by $X_{1}$ is to approximate
the process of small jumps by a Brownian motion. This process of small jumps
denoted $Z_{1}^{\varepsilon }$ is obtained using the L\'{e}vy density of $X$
restricted to $(-\varepsilon ,\varepsilon ).$ The process $%
Z_{1}^{\varepsilon }$ is approximated by a Brownian motion with variance%
\begin{equation}
\sigma ^{2}=\sigma ^{2}(\varepsilon )=\int_{-\varepsilon }^{\varepsilon
}x^{2}\nu (dx). \label{sigma}
\end{equation}
In Asmussen and Rosinski (2001) it is shown that, if $\nu $ has no atoms, $%
Z_{1}^{\varepsilon }$ weakly converges to a Brownian motion if and only if
\begin{equation}
\lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon ^{2}}\int_{-\varepsilon
}^{\varepsilon }x^{2}\nu (dx)=\infty , \label{int1}
\end{equation}%
that is, if and only if $\sigma (\varepsilon )/\varepsilon \rightarrow
\infty $.
For a Variance Gamma process this diffusion approximation fails as
\begin{equation}
\lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon ^{2}}\int_{0}^{\varepsilon
}xe^{-ax}dx<\infty . \label{int2}
\end{equation}
For a $CGMY$ process with $Y>0$ this diffusion approximation is valid, since%
\begin{equation}
\lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon ^{2}}\int_{0}^{\varepsilon
}x^{1-Y}e^{-ax}dx=\infty \label{int3}
\end{equation}
(as $\nu (dx)/dx\geq const/|x|^{1+Y}$ in a neighbourhood of the origin, see
Example 2.3 in Asmussen and Rosinski (2001)).
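The dichotomy between the two cases can also be seen numerically. With hypothetical parameter values, the ratio $\sigma(\varepsilon)/\varepsilon$ built from (\ref{sigma}) grows without bound for the $CGMY$ density with $Y=.5$ but stays bounded for the Variance Gamma density; a symmetric density is assumed in the sketch purely for simplicity.

```python
import math

def sigma_over_eps(eps, x2_nu, steps=20000):
    """sigma(eps)/eps with sigma^2(eps) = int_{-eps}^{eps} x^2 nu(dx) as in
    (sigma); for a symmetric density the integral is 2 * int_0^eps."""
    h = eps / steps
    total = 0.0
    for k in range(1, steps + 1):          # both integrands vanish at x = 0
        w = 0.5 if k == steps else 1.0
        total += w * x2_nu(k * h)
    sigma2 = 2.0 * h * total
    return math.sqrt(sigma2) / eps

C, M_, Y = 1.0, 10.0, 0.5                                 # hypothetical values
cgmy = lambda x: C * x ** (1.0 - Y) * math.exp(-M_ * x)   # x^2 * C e^{-Mx}/x^{1+Y}
vg = lambda x: x * math.exp(-M_ * x)                      # x^2 * e^{-Mx}/x

eps_grid = (1e-2, 1e-4, 1e-6)
r_cgmy = [sigma_over_eps(e, cgmy) for e in eps_grid]
r_vg = [sigma_over_eps(e, vg) for e in eps_grid]
```

For the $CGMY$ density the ratio scales like $\varepsilon^{-Y/2}$, whereas for Variance Gamma it converges to a constant.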
In our $CGMY$ process approximation we truncated the $CGMY$ L\'{e}vy density
at $u_{1}=.5$; the approximating densities
$k_{+}(x),k_{-}(x)$, however, are taken to start at $x=0.$ Therefore we
apply the diffusion approximation to
\begin{equation}
\widetilde{k}(x)=\left( Ce^{-Mx}/x^{1+Y}-k_{+}(x)\right) \mathbf{1}%
_{0<x<u_{1}}+\left( Ce^{-G|x|}/|x|^{1+Y}-k_{-}(x)\right) \mathbf{1}%
_{-u_{1}<x<0}. \label{adjk}
\end{equation}
\section{Equity Default Swap Pricing}
We essentially follow the logic for the pricing of credit default swaps,
replacing the required survival probabilities and default time densities
with the first passage complementary distributions $\overline{F}%
_{n}(C,G,M,Y) $ and the first passage time density $f_{n}(C,G,M,Y)$ as
computed by Laplace transform inversion using the approximation methods
described above in section 2 for $n$ days or time $360t=n.$
Like the credit default swap, the equity default swap contract is viewed in
two parts, one describing the receipt side of the cash flows and the other
the payment side. There is a notional amount $M$ associated with the receipt
side and the actual level of cash flows received is $M$ times one minus the
recovery rate $R$ on the occurence of the equity event. Our calculations
employ a recovery rate of $50\%.$ These funds are received on the time $\tau
$ of first passage of the equity to a level below the specified barrier. We
employ two barriers in our calculations, these are $50\%$ and $30\%$ of the
price at initiation.
Against this stream of receipts the equity default swap holder makes
periodic coupon payments, typically until the equity event or the maturity,
whichever comes first. At the first passage event date we subtract from
the receipts the accrued coupons from the last coupon date before $\tau $ to
this date. We denote time measured in days by $n$ and in years by $t.$ For
each day $n$ we define the function $\zeta (n)$ that gives the day number of
the last day, on or before day $n$, on which a coupon was paid, so that
$n-\zeta (n)$ is the number of days of coupon accrual.
We also denote by $N$ the maturity of the contract in days while $T$ is the
maturity in years. By convention we take $(N,n)=360(T,t)$.
Let $\Delta (n)$ be the first passage event indicator function that takes
the value $1$ if the first passage of equity has occurred on or before day $%
n $ and is zero otherwise. Further, let $k_{T}$ denote the annual coupon
rate or the equity default swap rate quoted on the contract for maturity $T.$%
The cash flow receipts on day $n,$ $R_{n}$ are then written as
\begin{equation}
R_{n}=\left( \Delta (n)-\Delta (n-1)\right) \left( M(1-R)-\frac{%
Mk_{T}(n-\zeta (n))}{360}\right) . \label{Receipt}
\end{equation}
Let $NP$ denote the number of coupon payment dates and let $np_{j}$ be the
day number of the $j^{th}$ payment date. The $j^{th}$ payment $P_{j}$
occurring on day $np_{j}$ is then given by
\begin{equation}
P_{j}=\frac{Mk_{T}\left( np_{j}-np_{j-1}\right) }{360}\left( 1-\Delta
(np_{j})\right) , \label{Payment}
\end{equation}%
where the multiplication by the complementary first passage event indicator
function recognizes that no coupon payments are made after the first passage
event.
For the valuation of claims we employ a discount function $B_{n}$ that gives
the present value of a dollar promised on day $n.$ There are a variety of
approaches to constructing such discount functions.
The random present value of cash flows to the equity default swap contract, $%
V(T)$ is then given by
\begin{equation}
V(T)=\sum_{n=1}^{N}R_{n}B_{n}-\sum_{j=1}^{NP}P_{j}B_{np_{j}}.
\label{SwapValue}
\end{equation}%
We are supposing here that conditional on the outcome of the first passage
event the expected values of future dollars are the same. This is equivalent
to supposing that the first passage events of single names in the economy do
not contain any information about macro movements in interest rates or that
interest rate evolutions are independent of the first passage process. We
leave for future research the modeling of joint evolutions of interest rates
and first passage times.
The equity default swap quotes in markets are set at levels consistent with
a zero price at the initiation of the swap contract. Hence we have that
under the risk neutral measure%
\begin{equation}
E^{Q}\left[ V(T)\right] =0. \label{ZeroVal}
\end{equation}%
From our risk neutral distribution for the first passage time we employ
\begin{equation}
E^{Q}\left[ (1-\Delta (n))\right] \underset{Def}{=}\overline{F}_{n}(C,G,M,Y).
\label{Survival}
\end{equation}%
The probability of first passage on a particular day $n$ may be approximated
by the density of default times the length of the day in years.
\begin{eqnarray}
&&E^{Q}\left[ (\Delta (n)-\Delta (n-1))\right] \underset{Def}{=}\pi
_{n}(C,G,M,Y) \notag \\
&\approx &f_{n}(C,G,M,Y)\frac{1}{360}. \label{prob}
\end{eqnarray}
We may then compute the value of the equity default swap quote as
\begin{eqnarray}
E^{Q}\left[ V(T)\right] &=&\sum_{n=1}^{N}\left[ M(1-R)-\frac{Mk_{T}(n-\zeta
(n))}{360}\right] B_{n}\pi _{n}(C,G,M,Y)- \notag \\
&&\sum_{j=1}^{NP}\frac{Mk_{T}(np_{j}-np_{j-1})}{360}B_{np_{j}}\overline{F}%
_{np_{j}}(C,G,M,Y). \label{SwapVal}
\end{eqnarray}%
Setting this value to zero and solving for the $k_{T}$ we obtain the $CGMY$
equity default swap pricing model
\begin{equation}
k_{T}(C,G,M,Y)=\frac{(1-R)\sum_{n=1}^{N}B_{n}\pi _{n}(C,G,M,Y)}{%
\sum_{j=1}^{NP}\frac{(np_{j}-np_{j-1})}{360}B_{np_{j}}\overline{F}%
_{np_{j}}(C,G,M,Y)+\sum_{n=1}^{N}\frac{n-\zeta (n)}{360}B_{n}\pi
_{n}(C,G,M,Y)}. \label{edsw}
\end{equation}%
Equation (\ref{edsw}) provides us with a four parameter equity default swap
pricing model and the parameters may be estimated from market prices of
options on the underlying name. In this way we obtain option implied quotes
for the equity default swap rates.
By way of a stylized example we set the interest rate yield curve at a flat $%
5\%$ level and used the values $C=.5,$ $G=2,$ $M=10,$ and $Y=.5$ with
recovery at $50\%$ and obtained the $EDS$ prices for maturities of $1,$ $3,$
and $5$ years at $161.97,$ $336.65$ and $439.54.$
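Formula (\ref{edsw}) is straightforward to implement. The sketch below prices the swap under a hypothetical constant first-passage hazard $\lambda$ (not the $CGMY$-implied distributions) with a flat $5\%$ curve, quarterly coupons and $R=50\%$; in this degenerate case the fair rate should be close to $(1-R)\lambda$, which provides a check of the mechanics.

```python
import math

# Hypothetical inputs: constant first-passage hazard lam, flat rate r,
# 360-day years, quarterly coupons, recovery R, maturity T years.
r, lam, R, T = 0.05, 0.02, 0.5, 5.0
N = int(360 * T)
coupon_days = range(90, N + 1, 90)

B = [math.exp(-r * n / 360.0) for n in range(N + 1)]       # discount factors B_n
Fbar = [math.exp(-lam * n / 360.0) for n in range(N + 1)]  # survival Fbar_n
pi = [0.0] + [Fbar[n - 1] - Fbar[n] for n in range(1, N + 1)]  # day-n passage probs

def zeta(n):
    """Day number of the last coupon date on or before day n (0 if none)."""
    return (n // 90) * 90

numerator = (1.0 - R) * sum(B[n] * pi[n] for n in range(1, N + 1))
denominator = (sum((90.0 / 360.0) * B[j] * Fbar[j] for j in coupon_days)
               + sum(((n - zeta(n)) / 360.0) * B[n] * pi[n]
                     for n in range(1, N + 1)))
k_T = numerator / denominator          # the EDS rate of (edsw)
```

With these inputs $k_T$ comes out very close to $(1-R)\lambda = 100$ basis points, as expected for a constant hazard.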
\section{CGMY Calibration to Option Data}
We obtained daily data on $5$ year $CDS$ rates on Ford and GM over the
period 25 February 2002 to 25 February 2005. There were $696$ trading days
in this time interval. The credit default swap rates on these names saw a
sharp increase over this period before they finally came down again. The
mean CDS rates over this period were $308$ for Ford and $249$ for GM. The
standard deviations were $132$ and $78$ respectively while the minimum rates
were $155$ and $115$ respectively. The corresponding maximal rates were $720$
and $480.$
We employ the above algorithm to compute option implied equity default swap
rates with barriers for the equity event set at the typical value of $30\%$
of initiation price and a five year maturity. For this purpose we calibrate
the $CGMY$ model to European option prices for each day on each name. Equity
options on Ford and GM trade on the New York Stock Exchange and are
typically American. We mitigate the American feature
by first employing out-of-the-money options. Second, we infer implied
volatilities from American option prices using the Black-Scholes model and
then construct the corresponding European prices by the Black-Scholes
formula. The data is obtained from OptionMetrics and is available at $WRDS$,
the Wharton Research Data Service.
We note that the case $Y=0$ is the variance gamma model, which successfully
calibrates any of the maturities. We are also aware that we may calibrate
equally well with any specific value of the parameter $Y.$ For our
approximations to have a diffusion component we require $Y>0$, and the
structure of the approximation is determined for any fixed value of $Y.$ We
therefore froze the value of the $Y$ parameter at $Y=.5$ and calibrated the
parameters $C,G,M$ for this frozen value of $Y.$ The calibrations were done
using maturities between $1$ and $2$ years, by minimizing the root mean
square error between market and model European option prices. We report in
Table 1 summary statistics of the $C,G,M$ parameters for both
companies.
\begin{equation*}
\begin{tabular}{lllllll}
\multicolumn{7}{l}{TABLE 1} \\
& \multicolumn{3}{l}{FORD} & \multicolumn{3}{l}{GM} \\
& C & G & M & C & G & M \\
Median & .6506 & 1.9458 & 11.0187 & .2171 & 1.0084 & 5.8031 \\
25\% & .3661 & 1.3066 & 9.9309 & .1664 & .6690 & 4.7486 \\
75\% & 1.0895 & 4.0969 & 11.3522 & .5582 & 2.7802 & 11.5872%
\end{tabular}%
\end{equation*}
\section{Equity Default Swap Rates and the CDS Rates}
We employed the calibrated $C,G,M$ parameters with $Y=.5$ to determine the equity
default swap rates for the typical barrier of $30\%$ for the payout on the
equity event. We present graphs of the implied EDS and CDS rates for both
companies over the 2002-2005 period, evaluated every five days. The data on
$CDS$ rates for the five year maturity are readily available from Bloomberg.
\begin{figure}[tbp]
\centering
\includegraphics[width=5.02in]{edsvscdsfordbe.eps}
\caption{Ford CDS in Blue, EDS in Red}
\end{figure}

\begin{figure}[tbp]
\centering
\includegraphics[width=5.02in]{edsvscdsgmbe.eps}
\caption{GM CDS in Blue, EDS in Red}
\end{figure}
The results of regressing the CDS rates on the EDS rates are as follows. We
observe that there is a strong link between the prices obtained from
these separate markets.%
\begin{equation*}
\begin{tabular}{lllllll}
& \multicolumn{3}{l}{$FORD$} & \multicolumn{3}{l}{$GM$} \\
& $Const$ & $Slope$ & $R^{2}$ & $Const$ & $Slope$ & $R^{2}$ \\
$value$ & $99.1820$ & $0.2913$ & $0.8950$ & $146.2816$ & $0.1442$ & $0.6292$
\\
$t-value$ & $13.62$ & $33.37$ & & $18.10$ & $14.89$ &
\end{tabular}%
\end{equation*}
\section{Conclusion}
Approximation of the $CGMY$ L\'{e}vy measure by hyperexponentials leads to
an exact Wiener-Hopf factorization of the approximating process and hence to
exact expressions for the first passage time of the approximating process to
a level. We then employ these results to obtain closed form formulas for the
prices of equity default swap contracts that pay out on the equity event of a
loss of $70\%$ of the initial stock value, against a regular
coupon paid until the equity event or maturity, whichever comes first.
The methods are illustrated on $CGMY$ processes calibrated to the vanilla
options market for $FORD$ and $GM$ over the period $25$ February $2002$ to $%
25$ February $2005.$ For the same period we also observe the daily values of
the credit default swap contracts, and these compare favorably with the
option-imputed equity default swap prices computed by the $CGMY$
approximation method proposed here.
\bigskip \pagebreak
\section{Introduction}
\label{sec:introduction}
A resource allocation problem can be defined as the problem of choosing an allocation from a set of feasible allocations to a finite set of stakeholders to minimize some objective function.
Generally, this objective function represents the efficiency of an allocation by considering, for example, the total cost incurred by all the stakeholders~\citep{resalloc}.
However, in some cases, such an objective may be inappropriate due to ethical or moral considerations. For instance, to whom should limited medical supplies go in a time of crisis~\citep{HeierStamm2017}?
These situations call for a different paradigm of decision-making that addresses the need of {\em fairness} in resource allocation.
{Resource allocation problems trace their roots to the 1950s when the division of effort between two tasks was studied~\citep{koopman53}. Fairness in resource allocation problems has been studied since the 1960s. A mathematical model for the fair apportionment of the U.S. House of Representatives considered minimizing the disparity of representation between any pair of states~\citep{burt63}. This problem was revisited about two decades later~\citep{ze81, katoh85}. The so-called \emph{minimax} and \emph{maximin} objectives in resource allocation, which achieve some fairness by either minimizing the maximum~(or maximizing the minimum) number of resources allocated to entities, have been studied at least since the 1970s~\citep{jacobsen71, py72}. A recent book on equitable resource allocation presents various models and advocates a lexicographic minimax/maximin approach for fair and efficient solutions~\citep{luss2012equitable}. This work further discusses multiperiod equitable resource allocation. Fair resource allocation over time is also considered in the dynamic case where resource availability changes over time and is solved using approximation algorithms~\citep{bampis2018}. A combination of efficiency and fairness in resource allocation is common in communication networks~\citep{Ogryczak2014,Ogryczak2014-b}, and can be found in various other fields, such as in some pickup and delivery problems~\citep{Eisenhandler2019}. }
Further, theoretical computer science has also been interested in a range of fair allocation problems.
\citet{procaccia2015cake} provides a summary of some of the principles of fair allocation of a divisible good.
Fair allocation of indivisible good is an active area of research since the seminal paper of \citet{alkan1991fair}.
This literature considers multiple notions of fairness, such as (i) {\em proportionality}, where each of the $n$ stakeholders receives at least $1/n$-th of the total value of all goods, and (ii) {\em envy-freeness}, where each stakeholder weakly prefers their own bundle over anybody else's bundle, along with other weaker notions.
Variations of the problem where goods appear in an online fashion have also been studied \citep{aleksandrov2020online}.
A common underlying assumption in this area is that each stakeholder gets value only based on the set of items appropriated to them.
In our manuscript, however, with the motivation being ambulance allocation, we relax this assumption.
Even if an ambulance is not placed at the base closest to a stakeholder but, say, at the base that is second closest to her, she could still derive limited utility from the ambulance, as opposed to the case where it is placed much farther away.
\paragraph{Our Contributions.} {Our first contribution is to formally set up an abstract framework for repeatedly solving a fair allocation problem such that enhanced fairness is achieved over time, as opposed to a single round of allocation. Typically, a fair allocation could turn out to be an inefficient allocation. The framework addresses this trade-off by explicitly providing adequate control to the decision-maker on the level of the desired efficiency of the solutions. }
{Then, we prove that the concept of achieving fairness over time is not useful should the set of feasible allocations be a convex set, as long as the value each stakeholder obtained is a linear function of the allocation. We prove this result irrespective of the measure of fairness one uses, as long as the measure has crucial technical properties mentioned formally later. Next, we show closed-form and tight solutions for the number of rounds required for perfectly fair solutions, for special choices of the set of allocations that could be of practical interest.}
{However, there might also be other cases of practical interest where such closed-form solutions are harder, if not impossible, to obtain. We provide an integer programming formulation to solve these problems. The formulation hence provided can be combined directly with delayed column generation techniques in case of large instances. }
With the above results and ideas, we consider the problem of allocating ambulances to multiple areas of a city, where the residents of each area are the stakeholders. This family of problems is easier than the general case as there exists an efficient greedy algorithm that provides reasonable solutions fast. However, such solutions could be far from optimal, and we provide examples where the greedy algorithm fails.
Besides that, in this family of problems, we also impose restrictions on how much allocations may change between successive rounds. This is reminiscent of the real-life restriction that one does not want to relocate too many ambulance vehicles each day. This added constraint makes identifying good solutions much harder, as the proposed column generation technique is no longer valid. To solve this problem, we identify a graph induced by any candidate solution and show that deciding the feasibility of a candidate solution provided by column generation is equivalent to identifying Hamiltonian paths in this graph. Hence, we provide multiple subroutines to retain the validity of the column generation-based approach. We also provide experimental results on the computational efficiency of our method.
The paper is organized as follows. \cref{sec:prelim} introduces some basic definitions and concepts. \cref{sec:simple} presents some theoretical proofs for simple cases, and \cref{sec:finitex} establishes a general framework which allows to tackle more complicated cases. This framework is used to solve a practical problem in \cref{sec:alp}, whose results are presented in \cref{sec:numerical}. Finally, some conclusions are drawn in \cref{sec:conclusion}.
\section{Preliminaries}\label{sec:prelim}
In this section, we formally define the terms used throughout the manuscript.
We study resource allocation in the context of fairness over time, meaning that resources are allocated to stakeholders over some time horizon.
We use $\mathcal{X}$ to denote the set of all feasible allocations, with $n\in\mathbb{Z}_{\ge 0},\,n\geq 2$ being the number of stakeholders among whom the allocations should be done in a fair way.
\begin{Def}[Benefit function]
The benefit function $\operatorname{\tau}:\mathcal{X}\to\mathbb{R}^n$ is a function such that the $i$-th component $[\operatorname{\tau}(x)]_i$ refers to the benefit or utility enjoyed by the $i$-th stakeholder due to the allocation $x \in\mathcal{X}$.
\end{Def}
Assume $\phi$ to be a function that measures the unfairness associated with a given benefit pattern for the $n$ stakeholders---a concept formalized in \cref{Def:Unfair}---which is to be minimized. In this context, a basic fair resource allocation problem can be defined simply as
\begin{align}
\min_{x\in\mathcal{X}} \qquad \phi(\operatorname{\tau}(x)). \label{eq:intro_spfa}
\end{align}
{Many decision-makers are public servants who are either elected or nominated, and who serve in their position for a predetermined amount of time. In the context of serving the public, these officials must often juggle with limited means to ensure that everyone they serve is catered for. Public satisfaction naturally plays a role in the evaluation of their performance. Herein lies the motivation to solve resource allocation problems that consider fairness over time.}
Assuming that there are $T$ rounds of decision making, model~\cref{eq:intro_spfa} can be expanded to
\begin{subequations}
\begin{alignat}{3}
\min_{x(t), y} \quad & \phi(y) \\
\textrm{s.t.} \quad & y = \frac{1}{T} \sum_{t=1}^T \operatorname{\tau}(x(t)) && \\
& x(t) \in \mathcal{X}, &&\forall\,t = 1,\ldots,T.
\end{alignat}
\end{subequations}
Here, we implicitly assume that it is sensible to add the benefits obtained at different rounds of decision making, and use the average benefit over time to compute fairness.
Now, {in this section, we first define the properties that a general measure of unfairness should have for it to be considered valid in our context. To start with, we state that (i) if all the stakeholders of the problem get the same benefit or utility, then the unfairness associated with such an allocation of benefits should be 0, and (ii) the unfairness associated with benefits should be invariant to permutations of the ordering of stakeholders.
}
\begin{Def}[Unfairness function]\label{Def:Unfair}
Given $y\in\mathbb{R}^n$, $\phi:\mathbb{R}^n\to\mathbb{R}_{\ge 0}$ determines the extent of unfairness in the allocations, if the $i$-th stakeholder gets a benefit of $y_i$. Such a function $\phi$ satisfies the following:
\begin{enumerate}
\item $\phi(y_1,\ldots,y_n) = 0 \iff y_1 = y_2 = \ldots = y_n$,
\item $\phi(y_1,\ldots,y_n) = \phi(\pi_1(y),\ldots,\pi_n(y))$ for any permutation $\pi$.
\end{enumerate}
In addition, if $\phi$ is a convex function, we call $\phi$ a {\em convex unfairness function}.
\end{Def}
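As a simple illustration of \cref{Def:Unfair}, the range $\phi(y)=\max_i y_i-\min_i y_i$ is a convex unfairness function; the sketch below checks the two defining properties on a small hypothetical example.

```python
from itertools import permutations

def phi_range(y):
    """Range unfairness phi(y) = max(y) - min(y): zero iff all benefits are
    equal, permutation invariant, and convex (the sum of two maxima of
    linear maps, since -min(y) = max(-y))."""
    return max(y) - min(y)

y = (3.0, 1.0, 2.0)
perm_invariant = all(phi_range(p) == phi_range(y) for p in permutations(y))
```

Other standard choices, such as the variance or the Gini-type mean absolute difference of the $y_i$, satisfy the same properties.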
In general, trivial solutions where no benefit is obtained by any of the stakeholders, i.e., when $\operatorname{\tau}(x)$ is a zero vector, are perfectly fair solutions! However, this results in a gross loss of efficiency.
\begin{Def}[Inefficiency function]
Given $\mathcal{X}$ and the benefit function $\operatorname{\tau}$, the inefficiency function $\ineff:\mathcal{X}\to[0,1]$ is defined as
\begin{equation}
\ineff(x) \quad=\quad \left \{
\begin{array}{cl}
0& \text{ if } \bar f = \underline f, \\
\frac{\bar{f}-\sum_{i=1}^n [\operatorname{\tau}(x)]_i}{\bar{f} - \underline{f}}& \text{ otherwise,}
\end{array}
\right .
\end{equation}
where {$\bar{f} = \sup_{x\in\mathcal{X}}\sum_{i=1}^n [\operatorname{\tau}(x)]_i$, $\underline{f} = \inf_{x\in\mathcal{X}}\sum_{i=1}^n [\operatorname{\tau}(x)]_i$}, and $\bar f$ and $\underline f$ are assumed to be finite.
\end{Def}
\begin{remark}
Note that for all feasible $x\in\mathcal{X}$, we indeed have $\ineff(x) \in [0,1]$. For the most efficient $x$, i.e., the $x$ (or allocation) that maximizes the sum of benefits, $\ineff(x) = 0$, while for the least efficient $x$, we have $\ineff(x) = 1$. Thus, $\ineff$ serves as a method of normalization of the objective values.
\end{remark}
We now define a single-period fair allocation problem, subject to efficiency constraints. One might always choose $\bar\ineff=1$ to retrieve the problem in model \cref{eq:intro_spfa} without efficiency constraints.
\begin{Def}[Single-period fair allocation (SPFA) problem]
Given $\bar \ineff \in [0,1]$, the single-period fair allocation problem is to solve
\begin{subequations}
\label{spfa}
\begin{alignat}{3}
\min_{x} \quad & \phi(\operatorname{\tau}(x)) \\
\textrm{s.t.} \quad &
\ineff (x) \le \bar \ineff &&.
\end{alignat}
\end{subequations}
\end{Def}
\cref{ex:motiv} motivates the SPFA and provides a means of validating it: with reasonable choices, we retrieve the intuitively fair solution.
\begin{Ex}
\label{ex:motiv}
Consider the case where $\mathcal{X} = \{x \in \mathbb{R}^n_+ : \sum_{i=1}^nx_i = 1\}$, i.e., $\mathcal{X}$ is a simplex. Further, assume that $[\operatorname{\tau}(x)]_i = \operatorname{\tau}_ix_i$ for some scalars $\operatorname{\tau}_i > 0$, and that, w.l.o.g., $\operatorname{\tau}_1 \ge \operatorname{\tau}_2 \ge \ldots \ge \operatorname{\tau}_n$.
For the choice of $\bar\ineff = 1$, i.e., with no efficiency constraints, the solution for the SPFA problem is given by
\begin{subequations}
\begin{align}
x^\star_i \quad&=\quad \frac{g}{\operatorname{\tau}_i}, \quad\forall i\\
\text{where }\quad g \quad&=\quad \frac{1}{\sum_{i=1}^n \frac{1}{\operatorname{\tau}_i}}.
\end{align}
\end{subequations}
Clearly, each stakeholder enjoys a benefit of $g$, and hence the unfairness associated with this allocation is $0$. One can verify that $\ineff(x^\star) = \frac{\operatorname{\tau}_1 - ng}{\operatorname{\tau}_1 - \operatorname{\tau}_n}$.
Note that for the case where $n=2$, the above simplifies to
\begin{align*}
x_1^\star \quad&=\quad \frac{\operatorname{\tau}_2}{\operatorname{\tau}_1+\operatorname{\tau}_2}\\
x_2^\star \quad&=\quad \frac{\operatorname{\tau}_1}{\operatorname{\tau}_1+\operatorname{\tau}_2}\\
\ineff(x^\star) \quad&=\quad \frac{\operatorname{\tau}_1^2 - \operatorname{\tau}_1\operatorname{\tau}_2}{\operatorname{\tau}_1^2 - \operatorname{\tau}_2^2}.
\end{align*}
\end{Ex}
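The closed form above can be checked numerically. The following sketch is illustrative only; the rates $\operatorname{\tau}=(4,2,1)$ are arbitrary choices, not taken from the text, and $\bar f=\operatorname{\tau}_1$, $\underline f=\operatorname{\tau}_n$ are the values these suprema take on the simplex:

```python
# Numerical check of the closed-form SPFA solution on the simplex.
tau = [4.0, 2.0, 1.0]                      # tau_1 >= tau_2 >= tau_3 > 0 (arbitrary)
g = 1.0 / sum(1.0 / t for t in tau)        # common benefit level
x = [g / t for t in tau]                   # x_i* = g / tau_i

assert abs(sum(x) - 1.0) < 1e-12           # feasible: x* lies on the simplex
assert all(abs(t * xi - g) < 1e-12 for t, xi in zip(tau, x))  # equal benefits

# Inefficiency: on the simplex, f_bar = tau_1 and f_under = tau_n.
n = len(tau)
ineff_formula = (tau[0] - n * g) / (tau[0] - tau[-1])
ineff_direct = (tau[0] - sum(t * xi for t, xi in zip(tau, x))) / (tau[0] - tau[-1])
assert abs(ineff_formula - ineff_direct) < 1e-12
```

The two ways of computing the inefficiency (the stated formula and the direct definition) agree, as expected.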
\noindent We now define the fairness-over-time problem.
\begin{Def}[$T$-period fair allocation ($T$-PFA) problem]\label{Def:TPFA}
Given $\bar \ineff \in [0,1]$ and an integer $T\geq 2$, the $T$-period fair allocation problem is to solve
\begin{subequations}
\begin{alignat}{3}
\min_{x(t), y} \quad & \phi(y) \\
\textrm{s.t.} \quad & y = \frac{1}{T} \sum_{t=1}^T \operatorname{\tau}(x(t)) && \\
&
\ineff(x(t)) \le \bar\ineff
, &&\quad\forall\,{t} = 1,\ldots,T \label{eq:FOT:eff} \\
& x(t) \in \mathcal{X}
, &&\quad\forall\,{t} = 1,\ldots,T.
\end{alignat}\label{eq:FOT}
\end{subequations}
We say that a fair-allocation problem (SPFA or $T$-PFA) has {\em perfect fairness} if the optimal objective value of the corresponding optimization problems is 0.
\end{Def}
\begin{Ex}[Usefulness of the $T$-PFA]\label{ex:tpfa_useful}
Let $\mathcal{X} = \{x \in \{0,1\}^2:x_1 + x_2 = 1\}$. Further, let $\operatorname{\tau}(x) = (2x_1, x_2)$. Let $\bar \ineff = 1$. Note that, in the case of the SPFA, the optimal objective is necessarily nonzero, since the benefits of the two stakeholders are unequal for every feasible solution. However, consider the 3-PFA: if $x(1) = (1,0)$, $x(2) = (0,1)$, and $x(3) = (0,1)$, then $y = \left( \frac{2}{3}, \frac{2}{3}\right)$. So, for any choice of $\phi$, the optimal objective value is $0$, which is strictly better than the SPFA solution.
\end{Ex}
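A brute-force enumeration makes the gap concrete; a small sketch, assuming the unfairness function $\phi(y)=\max_i y_i - \min_i y_i$ (one valid choice among many):

```python
from itertools import product

tau = lambda x: (2 * x[0], x[1])      # benefit function from the example
X = [(1, 0), (0, 1)]                  # feasible allocations
phi = lambda y: max(y) - min(y)       # an assumed, valid unfairness function

def best(T):
    """Smallest attainable unfairness over all T-period plans."""
    return min(
        phi([sum(tau(x)[i] for x in plan) / T for i in range(2)])
        for plan in product(X, repeat=T)
    )

assert best(1) > 0.0     # no single-period plan is perfectly fair
assert best(3) == 0.0    # three periods achieve perfect fairness
```

The optimal 3-period plan found by the enumeration is exactly the one given in the example.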
\section{Simple Cases}
\label{sec:simple}
\subsection{Convex $\mathcal{X}$}
We have motivated in \cref{ex:tpfa_useful} that ensuring fairness over multiple rounds can yield better results than seeking fairness in a single round. We now show that such an improvement is possible only when $\mathcal{X}$ is nonconvex. In other words, if $\mathcal{X}$ is convex and if $\operatorname{\tau}$ is a linear function, the $T$-PFA necessarily offers no improvement over the SPFA.
\begin{theorem}\label{thm:TPFAconvex}
Let $f^\star$ and $f^\star_T$ be the optimal values of the SPFA and the $T$-PFA, respectively, for some integer $T \ge 2$. If $\mathcal{X}$ is convex and if $\operatorname{\tau}$ is linear, then $f^\star = f^\star_T$.
\end{theorem}
\begin{proof}
Given that $\operatorname{\tau}$ is a linear function, we can write $[\operatorname{\tau}(x)]_i = \operatorname{\tau}_i^{\mathsf T}x$ for an appropriately chosen $\operatorname{\tau}_i$, for $i=1,\ldots,n$.
Suppose that $x^\star(t)$, for $t=1,\ldots, T$, and $y^\star$ solve the $T$-PFA problem. By our notation, this has an objective value of $\phi(y^\star) = f^\star_T$. Now, we construct a solution for the SPFA problem with an objective value equal to $f^\star_T$.
For this, consider the point $\bar x = \frac{1}{T} \sum_{t=1}^T x^\star(t)
$.
First we claim that $\bar x\in\mathcal{X}$.
We have that $x^\star(t) \in \hat{\mathcal{X}}$, where $\hat{\mathcal{X}} = \{x \in \mathcal{X} : \ineff(x) \leq \bar\ineff\}$.
Now, we observe that $\hat{\mathcal{X}}$ is a convex set, as it can be written as $\{x \in \mathcal{X} : (\sum_{i=1}^n\operatorname{\tau}_i)^{\mathsf T} x \geq \bar f - (\bar f - \underline f)\bar \ineff \}$, where $\bar f$ and $\underline f$ are constants. Since $\bar x$ is obtained as a convex combination of the points $x^\star(t) \in \hat{\mathcal{X}}$, it follows that $\bar x \in \hat{\mathcal{X}}$.
Finally, from the linearity of $\operatorname{\tau}$, we can set $\bar y = y^\star$. This shows that $(\bar x, \bar y)$ is feasible. Since $\bar y = y^\star$, it follows that their objective function values are equal.\hfill $\blacksquare$
\end{proof}
Having proven that multiple rounds of allocation cannot improve the fairness when $\mathcal{X}$ is convex and $\operatorname{\tau}$ is linear, we show that it is not necessarily due to the fact that perfect fairness is obtainable in SPFA. \cref{ex:lb} shows an instance where the unfairness is strictly positive with a single round of allocation, irrespective of the choice of $\phi$. Naturally, due to \cref{thm:TPFAconvex}, perfect fairness is not possible with multiple rounds of allocation either.
\begin{Ex} \label{ex:lb}
Consider a fair allocation problem where $\mathcal{X} = \{x\in\mathbb{R}^3_{\ge 0}:x_1+x_2+x_3 = 1\}$. Clearly $\mathcal{X}$ is convex. Consider linear $\operatorname{\tau}$ defined as
\begin{align*}
[\operatorname{\tau}(x)]_1 \quad&=\quad x_1 + \frac{3}{4} x_2 + \frac{3}{4} x_3\\
[\operatorname{\tau}(x)]_2 \quad&=\quad x_2\\
[\operatorname{\tau}(x)]_3 \quad&=\quad x_3.
\end{align*}
Let $\bar x$ be a fair allocation. In such a case, we need $[\operatorname{\tau}(\bar x)]_1 = [\operatorname{\tau}(\bar x)]_2 = [\operatorname{\tau}(\bar x)]_3$. Thus, we need
\begin{align*}
g \quad&=\quad x_1 + \frac{3}{4} x_2 + \frac{3}{4} x_3\\
g \quad&=\quad x_2\\
g \quad&=\quad x_3
\end{align*} for some $g \in \mathbb{R}_+$. Together with the constraint $x_1+x_2+x_3=1$, solving this linear system gives the {\em unique} candidate fair allocation $(x_1, x_2, x_3) = (-0.5g, g, g)$ with $g=\frac{2}{3}$, which violates the non-negativity constraints in the definition of $\mathcal{X}$. Any feasible allocation therefore has $\phi(\operatorname{\tau}(x)) > 0$.
\end{Ex}
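The computation in \cref{ex:lb} reduces to one scalar equation; a minimal numerical check (the substitution $x_2=x_3=g$, $x_1=1-2g$ comes from the example's fairness conditions and the simplex constraint):

```python
# Fairness forces x2 = x3 = g; the simplex constraint forces x1 = 1 - 2g;
# equating [tau(x)]_1 = g then gives 1 - 0.5*g = g, i.e. g = 2/3.
g = 2.0 / 3.0
x1 = 1.0 - 2.0 * g

assert abs((x1 + 0.75 * g + 0.75 * g) - g) < 1e-12  # benefits are all equal...
assert x1 < 0.0                                     # ...but x1 is infeasible
```

So the unique candidate for perfect fairness lies outside $\mathcal{X}$, as claimed.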
\subsection{Simplicial $\mathcal{X}$}
We now show that perfect fairness is attainable under certain circumstances.
In the theorem below, $LCM (\ldots)$ refers to the least common multiple of the set of integers in its arguments.
\begin{theorem}\label{thm:TPFAsimpl}
Let $\mathcal{X} = \{x \in \mathbb{Z}^n_{\ge 0}: \sum_{i=1}^nx_i \leq a\}$ for some positive integer $a$.
Let the benefit function be $[\operatorname{\tau}(x)]_i = \operatorname{\tau}_ix_i$, where each $\operatorname{\tau}_i$ is a positive integer. Assume, w.l.o.g., that $\operatorname{\tau}_1 \ge \operatorname{\tau}_2 \ge \ldots \ge \operatorname{\tau}_n > 0$.
Let $L = LCM(\operatorname{\tau}_1,\ldots,\operatorname{\tau}_n)$. Then,
\begin{enumerate}
\item The only perfectly fair solution within a number of periods strictly less than $\bar T= \ceil{\frac{1}{a}\sum_{i=1}^n \frac{L}{\operatorname{\tau}_i}}$ is the trivial solution, i.e., $x(t)=0$ for all $t$.
\item Perfect fairness is achieved with $\bar T$ periods if $ \frac{\operatorname{\tau}_1- \min(\operatorname{\tau}_n,\operatorname{\tau}_1/a)}{\operatorname{\tau}_1 } \le \bar\ineff < 1
$ and $\bar T > 1$.
\item Perfect fairness can be unattainable within $\bar T$ periods if $\bar\ineff < \frac{\operatorname{\tau}_1- \min(\operatorname{\tau}_n,\operatorname{\tau}_1/a)}{\operatorname{\tau}_1 }$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{Part 1.} Observe that an unfairness of $0$ is achieved exactly when, for some $g\in \mathbb{R}_{\ge 0}$, we have $\sum_{t=1}^Tx_i(t) = g/\operatorname{\tau}_i$ for every $i\in\{1,\ldots,n\}$. By the integrality of each $x_i(t)$, $g$ must be a multiple of $L$, giving
\begin{align*}
\sum_{t=1}^T x_i(t) \quad&=\quad \frac{\alpha L}{\operatorname{\tau}_i} \label{eq:TPFAsimplA}
\end{align*}
for some integer $\alpha \geq 0$. For a nontrivial solution, $\alpha$ must be nonzero. Summing the above equation over $i=1,\ldots,n$, we obtain
\begin{align*}
\sum_{t=1}^T\sum_{i=1}^n x_i(t) \quad&=\quad \sum_{i=1}^n\frac{\alpha L}{\operatorname{\tau}_i}.
\end{align*}
Using the fact that $\sum_{i=1}^n x_i(t) \leq a$, we obtain
\begin{align*}
\alpha\sum_{i=1}^n\frac{ L}{\operatorname{\tau}_i} \quad\le\quad \sum_{t=1}^Ta \quad=\quad aT \quad\iff\quad \frac{\alpha}{a} \sum_{i=1}^n \frac{L}{\operatorname{\tau}_i} \quad \le\quad T.
\end{align*}
Since $\alpha \ge 1$, the left-hand side is minimized when $\alpha=1$; given that $T$ is an integer, any nontrivial perfectly fair solution thus requires $T \ge \bar T$, where $$\bar T:=\ceil{\frac{1}{a}\sum_{i=1}^n\frac{L}{\operatorname{\tau}_i}}.$$
\textbf{Part 2. } We prove this part by exhibiting a solution feasible for $\bar T$-PFA that achieves perfect fairness over time and is hence optimal for it.
Consider a reverse-lexicographic allocation method where all allocations are made to stakeholder $n$ until the total allocation towards $n$ sums to $L/\operatorname{\tau}_n$. i.e., if $L/\operatorname{\tau}_n \le a$, then let $x_n(1) = L/\operatorname{\tau}_n$. Otherwise let $x_n(1) = a$ and allow $x_n(2) = \min \{a, L/\operatorname{\tau}_n - a\}$.
Repeat this process until the total allocation towards $n$ adds up to $L/\operatorname{\tau}_n$.
Next, repeat the same process for stakeholder $n-1$ so that the total allocation towards $n-1$ adds up to $L/\operatorname{\tau}_{n-1}$, and continue in this manner for stakeholders $n-2$, $n-3$, and so on, down to stakeholder $1$.
Observe that each of these allocations at any time $t$ is in $\mathcal{X}$ by construction.
Again by construction, each player $i$ has a value given by $\operatorname{\tau}_i \times \frac{L}{\operatorname{\tau}_i} = L$.
If we show that the allocation in each time period has $\ineff \le \bar\ineff$, we are done.
Consider a period $\hat t\in \{1,\ldots,\bar T\}$ where the allocation is the most inefficient.
This would either be in the first or the last period, i.e., $\hat t=1$ or $\hat t=\bar T$.
This is because $\hat t=1$ corresponds to allocating the most possible value to stakeholder $n$, i.e., the one who values the allocation the least. Alternatively, the most inefficient allocation could occur at $\hat t=\bar T$, because in the last period the constraint $\sum_{i=1}^nx_i \le a$ might hold with strict inequality, leading to inefficiencies.
We will show in either case the inefficiency is at most $\bar\ineff$.
In the first period, $\ineff$ could be large, as all allocations could go to $n$, i.e., $x(1) = (0,0,\ldots,0,a)$. The inefficiency corresponding to this allocation is $\frac{a\operatorname{\tau}_1- a\operatorname{\tau}_n}{a\operatorname{\tau}_1 - 0} = \frac{\operatorname{\tau}_1- \operatorname{\tau}_n}{\operatorname{\tau}_1} \leq \bar\ineff$. Alternatively, in the last period, there could be a minimal allocation of $\sum_{i=1}^n x_i(\bar T) = 1$, but in that case it is necessarily made to the first stakeholder, i.e., $x(\bar T) = (1,0,0,\ldots,0)$.
The inefficiency corresponding to this allocation is $\frac{a\operatorname{\tau}_1- \operatorname{\tau}_1}{a\operatorname{\tau}_1 - 0} = \frac{\operatorname{\tau}_1- \operatorname{\tau}_1/a}{\operatorname{\tau}_1} \leq \bar\ineff$.
\textbf{Part 3.} We show that perfect fairness might be unattainable with stronger efficiency requirements, by providing a family of counterexamples, each with $n=2$ stakeholders.
Consider $\hat T \geq 2$ and
choose $\operatorname{\tau}_1 = (\hat T-1)a$ and $\operatorname{\tau}_2 = 1$. From the first part, perfect fairness is not obtainable for $T < \bar T = \ceil{\frac{1}{a} \left(
\frac{(\hat T-1)a}{(\hat T-1)a} + \frac{(\hat T-1)a}{1}
\right)} =
\hat T
$ periods. We now show that perfect fairness is not possible with $\hat T$ periods either.
Consider the set of solutions where perfect fairness is achieved in $\hat T$ time steps.
Any such solution must award equal total utility to both stakeholders $1$ and $2$, and this common utility must be a positive multiple of $L=(\hat T-1)a$.
The only attainable multiple is $L$ itself: a common utility of $\alpha L$ with $\alpha \ge 2$ would require $\alpha(\hat T-1)a$ units for stakeholder $2$ and $\alpha$ units for stakeholder $1$, exceeding the total capacity $\hat Ta$ of $\hat T$ periods.
Thus stakeholder $1$ receives exactly $1$ unit, worth $(\hat T-1)a$, and
$(\hat T-1)a$ units are required for stakeholder $2$.
Consider the ways in which such an allocation can be made.
Up to permutations across periods, they are all of the form
\begin{align*}
x(1)\quad&=\quad (0, a - \delta_1) \\
x(2)\quad&=\quad (0,a - \delta_2) \\
&\vdots \\
x(\hat T-1)\quad&=\quad (0, a - \delta_{\hat T -1}) \\
x(\hat T)\quad&=\quad \left (\sum_{i=1}^{\hat T -1}\delta_i , 1 \right )
\end{align*}
for some integers $\delta_i \in \{0,1,\ldots, a\}$ satisfying $\sum_{i=1}^{\hat T -1}\delta_i = 1$, so that both stakeholders receive a total utility of $(\hat T-1)a$.
From the hypothesis, we need $\bar\ineff < \frac{\operatorname{\tau}_1- \min(\operatorname{\tau}_n,\operatorname{\tau}_1/a)}{\operatorname{\tau}_1 } = \frac{(\hat T-1)a - 1}{(\hat T-1)a}$.
Now consider the inefficiency of the allocation $x(1)$. Since $\bar f = a\operatorname{\tau}_1 = (\hat T-1)a^2$ and $\underline f = 0$, we have $\ineff(x(1)) = \frac{(\hat T -1)a - 1 + \delta_1/a}{(\hat T - 1)a}$, which is minimized when $\delta_1=0$. Even then, $\ineff(x(1)) = \frac{(\hat T -1)a - 1}{(\hat T - 1)a} > \bar \ineff$, so $x(1)$ violates the efficiency constraint. Thus, any perfectly fair allocation for the $\hat T$-PFA problem is infeasible, providing the necessary counterexample. \hfill $\blacksquare$
\end{proof}
The first two parts of \cref{thm:TPFAsimpl} state that, for any $n$ and an $\mathcal{X}$ of the specified form, perfect fairness is first attainable after a specific, finite number of periods $\bar T$, provided the efficiency requirements are as stated.
The third part shows the tightness of this result: for any stricter efficiency requirement, there exists a counterexample with just two stakeholders for which perfect fairness is impossible in $\bar T$ steps.
\begin{Ex}
Consider \cref{thm:TPFAsimpl} applied to \cref{ex:tpfa_useful}. We have $\mathcal{X} = \{(1,0), (0,1)\}$, and thus $a=1$. Then, $\operatorname{\tau}(x) = (2x_1, x_2)$, so $\operatorname{\tau}_1=2$ and $\operatorname{\tau}_2=1$, and $\operatorname{\tau}_1 \geq \operatorname{\tau}_2 > 0$ holds. In addition, $L = LCM(2, 1) = 2$. Then, $\bar T = \ceil{\frac{1}{a}\sum_{i=1}^n \frac{L}{\operatorname{\tau}_i}} = \ceil{\frac{1}{1}(\frac{2}{2} + \frac{2}{1})} = 3.$
\end{Ex}
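The Part-2 construction can be sketched algorithmically. The following is an illustrative implementation of the reverse-lexicographic schedule (the function name and packing details are our own), checked on the example above:

```python
from math import ceil, lcm

def reverse_lex_schedule(tau, a):
    """Sketch of the Part-2 construction: serve stakeholders n, n-1, ..., 1,
    giving each a total of L/tau_i units, packing at most `a` units per period."""
    L = lcm(*tau)
    need = [L // t for t in tau]            # units owed to each stakeholder
    periods, current, room = [], [0] * len(tau), a
    for i in reversed(range(len(tau))):     # reverse-lexicographic order
        while need[i] > 0:
            give = min(room, need[i])
            current[i] += give
            need[i] -= give
            room -= give
            if room == 0:                   # period is full: start a new one
                periods.append(current)
                current, room = [0] * len(tau), a
    if any(current):
        periods.append(current)             # possibly underfilled last period
    return periods

tau, a = [2, 1], 1
plan = reverse_lex_schedule(tau, a)
assert len(plan) == ceil(sum(lcm(*tau) // t for t in tau) / a)  # = T_bar = 3
totals = [t * sum(x[i] for x in plan) for i, t in enumerate(tau)]
assert totals[0] == totals[1] == lcm(*tau)  # everyone ends with benefit L
```

On the example, the schedule is $(0,1), (0,1), (1,0)$, matching \cref{ex:tpfa_useful}.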
\subsection{Sparse Simplicial $\mathcal{X}$}
The sparse simplicial form of $\mathcal{X}$ is another interesting case from a practitioner's perspective. For example, a government might want to allot money to $n$ possible projects, but if the money is divided among all of them, it might be insufficient for any single one. There could thus be a restriction that the government allots it to at most $r$ projects.
\begin{theorem}\label{thm:TPFAcard}
Let $\mathcal{X} =\{x \in \mathbb{R}^n_{\ge 0}: \sum_{i=1}^n x_i \leq a, \norm{x}_0 \leq r \}$ where $\norm{\cdot}_0$ is the sparsity pseudo-norm, which counts the number of non-zero entries in its argument.
Assume that the benefit function is $[\operatorname{\tau}(x)]_i = \operatorname{\tau}_ix_i$, where each $\operatorname{\tau}_i$ is a positive integer. Assume, w.l.o.g., that $\operatorname{\tau}_1 \ge \operatorname{\tau}_2 \ge \ldots \ge \operatorname{\tau}_n>0$.
We denote by $p$ and $q$ the quotient and the remainder of $\frac{n}{r}$, respectively.
Let $\bar T= \ceil{\frac{n}{r}}$ and $1>\bar \ineff \ge \frac{a\operatorname{\tau}_1 - qv}{a\operatorname{\tau}_1}$, where $v = a/\max \left \{\sum_{i= (\bar T-1)r+1 }^{n} \frac{1}{\operatorname{\tau}_{i}}, \sum_{i=(p-1)r+1}^{pr} \frac{1}{\operatorname{\tau}_{i}}\right \}$. No nontrivial perfectly fair allocation exists if $T < \bar T$, whereas perfect fairness is attainable in $\bar T$ periods.
\end{theorem}
\begin{proof}
First observe that at most $r$ stakeholders can receive value in any given period. Thus, except in the trivial case where every stakeholder receives a value of $0$, perfect fairness is impossible in fewer than $\bar T = \ceil{\frac{n}{r}}$ periods.
Now we show that perfect fairness is possible in $\bar T$ periods, if the conditions in the theorem statement are satisfied. Consider the following allocation: In period
$t$ for $1\leq t\leq \bar T-1$, allocate $\frac{v}{\operatorname{\tau}_i}$ for $i = (t-1)r+1,(t-1)r+2,\ldots,tr$ and $0$ to the rest.
For $t=\bar T$, allocate $\frac{v}{\operatorname{\tau}_i}$ for $i=(\bar T-1)r+1,\ldots,n$.
Note that in each period $t< \bar T$ we allocate to exactly $r$ stakeholders, and for $t=\bar T$ we allocate to $r$ stakeholders if $q=0$ and to $q$ stakeholders otherwise, thus always satisfying $\Vert x \Vert_0 \le r$.
In any period $t<\bar T$, note that $\sum_{i=1}^n x_i(t) = \sum_{i=(t-1)r+1}^{tr} \frac{v}{\operatorname{\tau}_i} \le v\sum_{i=(p-1)r + 1}^{pr} \frac{1}{\operatorname{\tau}_i} \le v \max \left\{
\sum_{i=(\bar T-1)r+1}^{n} \frac{1}{\operatorname{\tau}_{i}}, \sum_{i=(p-1)r + 1}^{pr} \frac{1}{\operatorname{\tau}_i}
\right\} = a
$ satisfying the inequality constraint. For $t=\bar T$, we have $
\sum_{i=1}^n x_i(t) = \sum_{i=(\bar T-1)r+1}^n \frac{v}{\operatorname{\tau}_i} \le v \max \left\{
\sum_{i=(\bar T-1)r+1}^{n} \frac{1}{\operatorname{\tau}_{i}}, \sum_{i=(p-1)r + 1}^{pr} \frac{1}{\operatorname{\tau}_i}
\right\} = a
$, again satisfying the inequality constraint.
In each of the first $\bar T -1$ periods, every stakeholder that receives an allocation obtains a value of $v$, so the total benefit of the allocation is $rv$, giving $\ineff(x(t)) = \frac{a\operatorname{\tau}_1-rv}{a\operatorname{\tau}_1} \leq \frac{a\operatorname{\tau}_1-qv}{a\operatorname{\tau}_1} \leq \bar\ineff$, since $q<r$.
In the last period, we allocate a utility of $v$ to $q$ players if $q>0$ else to $r$ players. The total benefit obtained in this round is at least $qv$. Feasibility follows as before and $\ineff(x(\bar T)) = \frac{a\operatorname{\tau}_1-qv}{a\operatorname{\tau}_1} \leq \bar\ineff$, which is feasible. Perfect fairness follows since the benefit to each player is $v$.
\hfill $\blacksquare$
\end{proof}
\begin{remark}
Unlike the setting in \cref{thm:TPFAsimpl}, \cref{thm:TPFAcard} is not tight with respect to $\bar \ineff$. In other words, it remains unknown whether decreasing the allowed value of inefficiency $\bar\ineff$ still allows one to achieve perfect fairness in $\bar T$ periods.
\end{remark}
\begin{remark}
We note that the above results, \cref{thm:TPFAconvex,thm:TPFAsimpl,thm:TPFAcard}, are agnostic to the choice of $\phi$, and only use the fact that $\phi(y_1,\ldots,y_n) = 0 \iff y_1 = \ldots = y_n$. Any result concerning solutions with $\phi(\cdot) \neq 0$ must necessarily depend on the choice of $\phi$.
\end{remark}
\begin{Ex}
Consider \cref{thm:TPFAcard} applied to the following setting: $\mathcal{X} = \{x\in\mathbb{R}_{\ge 0}^3: x_1+x_2+x_3 \le 1,\ \norm{x}_0 \le 2 \}$. Let $\operatorname{\tau}_1 = 5$, $\operatorname{\tau}_2 = 4$, and $\operatorname{\tau}_3 = 3$.
Now $\bar T = \ceil {\frac{n}{r}} = \ceil {\frac{3}{2}} = 2$, with quotient $p=1$ and remainder $q=1$.
One can calculate that $v=\frac{20}{9}$, so any $\bar\ineff \ge \frac{5}{9}$ satisfies the condition of the theorem.
The allocation $\left(\frac{4}{9},\frac{5}{9},0\right)$ in period 1 and $\left( 0, 0, \frac{20}{27} \right)$ in period 2 gives every stakeholder a utility of $\frac{20}{9}$, achieving perfect fairness.
\end{Ex}
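The quantities in this example can be reproduced with exact rational arithmetic; a quick check of $v$, the bound on $\bar\ineff$, and the two-period allocation, using the rates $\operatorname{\tau}=(5,4,3)$ consistent with the stated values:

```python
from math import ceil
from fractions import Fraction as F

tau, a, r = [F(5), F(4), F(3)], F(1), 2
n = len(tau)
T_bar = ceil(n / r)                          # = 2
p, q = divmod(n, r)                          # quotient and remainder of n/r
v = a / max(sum(1 / t for t in tau[(T_bar - 1) * r:]),
            sum(1 / t for t in tau[(p - 1) * r: p * r]))
assert v == F(20, 9)
ineff_bound = (a * tau[0] - q * v) / (a * tau[0])
assert ineff_bound == F(5, 9)                # smallest admissible bar-ineff

x1 = [v / tau[0], v / tau[1], 0]             # period 1: first r stakeholders
x2 = [0, 0, v / tau[2]]                      # period 2: remaining q stakeholders
assert sum(x1) <= a and sum(x2) <= a         # capacity constraints hold
assert [t * (y + z) for t, y, z in zip(tau, x1, x2)] == [v, v, v]  # fair
```

The computed allocations are exactly $(4/9, 5/9, 0)$ and $(0, 0, 20/27)$.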
\section{General Combinatorial $\mathcal{X}$}
\label{sec:finitex}
In many practical areas of interest, $\mathcal{X}$ could be more complicated than the sets presented in \cref{sec:simple}. Let $\mathcal{X} = \{x^1,x^2,\ldots,x^k\}$. We assume that $\mathcal{X}$ is finite, but typically has a large number of elements, given in the form of a solution to a combinatorial problem.
We consider a benefit function where $\operatorname{\tau}(x) = \Gamma x$, for some matrix $\Gamma$.
We thus have $[\operatorname{\tau}(x) ]_i = \operatorname{\tau}_i^{\mathsf T}x $ for $i=1,\ldots,n$.
The efficiency constraint here is trivial, as the inefficient allocations $x^j$ can be removed from $\mathcal{X}$ to retain another (smaller) finite $\mathcal{X}$. In fact, with linear $\operatorname{\tau}$, the efficiency constraint is a linear inequality of the form $\sum_{i=1}^n \operatorname{\tau}_i^{\top}x \geq \overline{f} - \bar\ineff (\overline{f}-\underline{f}) $.
Let $q_j$ be the {\em number of times} the allocation $x^j\in\mathcal{X}$ is chosen over $T$ rounds of decision. In this setting, the $T$-PFA problem can be restated as\footnote{The reader may have observed that an approach based on column generation would lend itself well for such a model. This is discussed in \cref{sec:alp}.}
\begin{subequations}
\begin{alignat}{3}
\min_{q, y} \quad & \phi(y) \\
\textrm{s.t.} \quad &y_i = \frac{1}{T}\operatorname{\tau}_i^{\mathsf T}\left(\sum_{j=1}^kq_jx^j\right), &&\quad\forall\,i = 1,\ldots,n\\
&\sum_{j=1}^kq_j = T&&\label{eq:fbs:t}\\
&q_j \in \mathbb{Z}_{\ge 0}, &&\quad\forall\,j = 1,\ldots,k.
\end{alignat}\label{eq:finiteBaseSet1}
\end{subequations}
Note that since the theorems of \cref{sec:simple} do not extend to this case, and since \cref{ex:lb} shows that a perfectly fair solution may not even exist, we know neither the value of the fairest feasible solution nor the smallest value of $T$ that accommodates such a solution. We now present a two-phase integer program that finds the smallest value of $T$ accommodating the fairest feasible solution. In the first phase, we are interested in finding the fairest feasible solution. This is achieved by solving model \cref{eq:finiteBaseSet1}, replacing \cref{eq:fbs:t} with $ \sum_{j=1}^kq_j \ge 1$
so that the fairest solution is returned and this solution is not the trivial $q = (0, 0, \ldots, 0)$. Let $\phi^\star$ be the value of the solution found in the first phase. In the second phase, we are interested in finding the smallest $T$ which can accommodate a solution of value $\phi^\star$. This is achieved by solving model \cref{eq:finiteBaseSet1}, again replacing \cref{eq:fbs:t} with
$
\sum_{j=1}^k{q_j} \ge 1,
$
adding
$
\phi(y) = \phi^\star,
$
and changing the objective to
$
\min_{q,y} \sum_{j=1}^kq_j.
$
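For a toy base set, the two phases can be emulated by brute force rather than by an integer-programming solver. A sketch using the base set of \cref{ex:tpfa_useful}, with $\phi$ taken as the max-min difference and an arbitrary enumeration cap `max_T` (both illustrative assumptions):

```python
from itertools import product

# Finite base set from the earlier example: tau(x) = (2*x1, x2).
base = [(2, 0), (0, 1)]            # benefit vectors tau(x^j), x^j in X
phi = lambda y: max(y) - min(y)    # an assumed, valid unfairness function

def two_phase(max_T):
    cands = []
    for q in product(range(max_T + 1), repeat=len(base)):
        T = sum(q)                 # number of rounds used by this q
        if T == 0:
            continue               # exclude the trivial q = (0, ..., 0)
        y = [sum(qj * bj[i] for qj, bj in zip(q, base)) / T for i in range(2)]
        cands.append((phi(y), T))
    phi_star = min(p for p, T in cands)                    # phase 1
    T_star = min(T for p, T in cands if p == phi_star)     # phase 2
    return phi_star, T_star

assert two_phase(6) == (0.0, 3)    # fairest value 0, first reached at T = 3
```

The brute force recovers $\phi^\star=0$ and $T^\star=3$, matching \cref{ex:tpfa_useful}.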
{We should note that the value of~$T^{\star}$ found by the second phase could be quite high, even for simple choices of $\mathcal{X}$. If perfect fairness can only be attained in a time horizon of size, say, 10000, and a discrete time point represents a day, then this period of 27-odd years would probably not be suitable for most practical applications, since it is only by reaching the end of the time horizon that optimal fairness is guaranteed. In practice, then, large time horizons can be problematic. This motivates the need for consistent fairness on a smaller scale.}
{Some compromises can be made which would mitigate this issue. Manually fixing $T$ to a value not higher than the expected duration of the problem would ensure a fair distribution of resources, since reaching the end of the time horizon would be assured. Accepting some degree of unfairness would also decrease the length of the time horizon and achieve similar results. \cref{ex:fairness_delayed} illustrates these two options.}
\begin{Ex}\label{ex:fairness_delayed}
{Consider a fair allocation problem where $\mathcal{X} = \{x\in\mathbb{Z}^2_{\ge 0}: x_1+x_2 = 1\}$. Further consider $\operatorname{\tau}$ defined as
\begin{align*}
[\operatorname{\tau}(x)]_1 \quad&=\quad x_1 + \frac{15}{37} x_2\\
[\operatorname{\tau}(x)]_2 \quad&=\quad x_2 + \frac{15}{47} x_1.
\end{align*}
Perfect fairness can be achieved when $T=1109$, but if we fix $T=5$, the difference between the benefits of both stakeholders will be $\sim13\%$, and if we allow a small difference of $0.1\%$ between their benefits, this could be achieved when $T=15$.}
\end{Ex}
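The figures in \cref{ex:fairness_delayed} follow from the fairness condition $k/T = 517/1109$, where $k$ is the number of periods in which $x=(1,0)$ is chosen; a quick rational-arithmetic check:

```python
from fractions import Fraction as F

t12, t21 = F(15, 37), F(15, 47)    # cross-benefit rates from the example
def benefits(k, T):
    """Total benefits if x = (1,0) is chosen in k of the T periods."""
    return (k + t12 * (T - k), (T - k) + t21 * k)

b1, b2 = benefits(517, 1109)
assert b1 == b2 == 757             # perfect fairness at T = 1109
assert all(benefits(k, 5)[0] != benefits(k, 5)[1] for k in range(6))
b1, b2 = benefits(7, 15)           # T = 15: benefits differ by less than 0.1%
assert abs(b1 - b2) / max(b1, b2) < F(1, 1000)
```

No split of $T=5$ is perfectly fair, while the best split of $T=15$ is already within the stated $0.1\%$.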
{Another compromise would be to choose a fair way of reaching the solution. Given that we know the value of~$T^{\star}$ which grants optimal fairness, we could then identify a \emph{best path} leading to the optimal solution. A best path is the one which ensures that unfairness is kept as low as possible in the intermediate time points, without sacrificing optimality at the end of the time horizon. This would, however, require solving another linear program, which may prove burdensome if the time horizon were too large.}
{
One may think that greedily aiming for the fairest solution at each time point---while considering any unfairness incurred in previous time points, and enforcing minimal efficiency constraints by ensuring that all available resources are used at each time point---could lead to optimality at the end of the time horizon. \cref{thm:greedy} shows that this is not the case.}
\begin{theorem}\label{thm:greedy}
{A greedy approach does not provide any guarantee of optimality for the $T$-PFA problem.}
\begin{proof}
{Consider a fair allocation problem where $\mathcal{X} = \{x\in\mathbb{Z}^3_{\ge 0}: x_1+x_2+x_3 = 1\}$. Further consider $\operatorname{\tau}$ defined as
\begin{align*}
[\operatorname{\tau}(x)]_1 \quad&=\quad \epsilon x_1 + \frac{1}{2}x_2 + \frac{1}{2}x_3 \\
[\operatorname{\tau}(x)]_2 \quad&=\quad x_2 \\
[\operatorname{\tau}(x)]_3 \quad&=\quad x_3.
\end{align*}}
\noindent where $0 < \epsilon < \frac{1}{2}$.
{If $T=2$, the only perfectly fair solution~(and, as it happens, the most efficient too) is achieved by allocating the resource in turn to stakeholders $2$ and $3$ over the two periods. Yet, for any reasonable measure of unfairness, the greedy choice is to allocate the resource twice to stakeholder $1$. For instance, suppose that $\phi$ is defined as the difference between the largest and smallest benefits, i.e., $\phi(x) = \max_{i \in \{1, \ldots,n\}}[\operatorname{\tau}(x)]_i-\min_{i \in \{1, \ldots,n\}}[\operatorname{\tau}(x)]_i$. In such a case, the greedy choice never deviates from a repeated allocation to stakeholder $1$, as any other allocation would always appear less fair---here, the cumulative unfairness would always be $T\epsilon-0=T\epsilon$.} {Thus, a greedy approach does not necessarily lead to an optimal solution.}\hfill $\blacksquare$
\end{proof}
\end{theorem}
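The greedy failure can be simulated directly; a sketch with the illustrative choices $\epsilon=0.25$, $T=2$, and $\phi$ equal to the max-min difference used in the proof:

```python
eps = 0.25                                   # any value in (0, 1/2) works
X = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
tau = lambda x: (eps * x[0] + 0.5 * x[1] + 0.5 * x[2], x[1], x[2])
phi = lambda y: max(y) - min(y)              # max-min unfairness from the proof

total = (0.0, 0.0, 0.0)
for _ in range(2):   # greedily pick the allocation minimizing cumulative phi
    x = min(X, key=lambda x: phi([a + b for a, b in zip(total, tau(x))]))
    total = tuple(a + b for a, b in zip(total, tau(x)))

assert phi(total) == 2 * eps     # greedy ends at (2*eps, 0, 0): unfair
# ...whereas alternating between stakeholders 2 and 3 is perfectly fair:
fair = [a + b for a, b in zip(tau((0, 1, 0)), tau((0, 0, 1)))]
assert phi(fair) == 0.0
```

The greedy trajectory allocates to stakeholder 1 in both rounds, exactly as argued in the proof.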
{We should note that there is a trade-off between fairness and efficiency: As illustrated in \cref{ex:fair_eff}, enforcing constraints to increase fairness will generally decrease efficiency, and vice-versa.}
\begin{Ex}\label{ex:fair_eff}
{Consider an SPFA problem where $\mathcal{X} = \{x\in\mathbb{Z}^3_{\ge 0}: 1 \le x_1+x_2+x_3 \le 3\}$. Further consider $\operatorname{\tau}$ defined as
\begin{align*}
[\operatorname{\tau}(x)]_1 \quad&=\quad x_1 + \frac{1}{2} x_2 + \frac{1}{2} x_3\\
[\operatorname{\tau}(x)]_2 \quad&=\quad x_2 + \frac{1}{2} x_1\\
[\operatorname{\tau}(x)]_3 \quad&=\quad x_3 + \frac{1}{2} x_1.
\end{align*}
By enforcing $\ineff(x) = 0$, the only feasible solution is $x=(3, 0, 0)$ which gives $\operatorname{\tau}(x)=(3, \frac{3}{2}, \frac{3}{2})$ which is not perfectly fair. In contrast, by necessitating perfect fairness, the only feasible solution is $x=(0, 1, 1)$ which gives $\operatorname{\tau}(x)=(1,1,1)$. This corresponds to $\ineff(x) = \frac{6-3}{6-(3/2)}=\frac{2}{3}$.}
\end{Ex}
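The trade-off in \cref{ex:fair_eff} can be confirmed by enumerating $\mathcal{X}$:

```python
from itertools import product

tau = lambda x: (x[0] + 0.5*x[1] + 0.5*x[2], x[1] + 0.5*x[0], x[2] + 0.5*x[0])
X = [x for x in product(range(4), repeat=3) if 1 <= sum(x) <= 3]

totals = [sum(tau(x)) for x in X]
f_hi, f_lo = max(totals), min(totals)                 # = 6 and 1.5
ineff = lambda x: (f_hi - sum(tau(x))) / (f_hi - f_lo)

perfectly_efficient = [x for x in X if ineff(x) == 0]
assert perfectly_efficient == [(3, 0, 0)]             # unique, but unfair
perfectly_fair = [x for x in X if len(set(tau(x))) == 1]
assert perfectly_fair == [(0, 1, 1)]                  # unique, ineff = 2/3
assert abs(ineff((0, 1, 1)) - 2 / 3) < 1e-12
```

Both extreme solutions are unique, so no allocation in this instance is simultaneously perfectly fair and perfectly efficient.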
{We should also note that the value of $T$ may have a noticeable impact on both fairness and efficiency, as illustrated in \cref{ex:t}.}
\begin{Ex}\label{ex:t}
{Consider a fair allocation problem where $\mathcal{X} = \{x\in\mathbb{Z}^4_{\ge 0}: 1 \le x_1+x_2+x_3+x_4 \le 3\}$. Further consider $\operatorname{\tau}$ defined as
\begin{align*}
[\operatorname{\tau}(x)]_1 \quad&=\quad x_1 + x_2\\
[\operatorname{\tau}(x)]_2 \quad&=\quad x_2 + x_1\\
[\operatorname{\tau}(x)]_3 \quad&=\quad x_3 + x_4\\
[\operatorname{\tau}(x)]_4 \quad&=\quad x_4 + x_3.
\end{align*}
The smallest $T$ accommodating perfect fairness is $T=1$ with $x=(1,0,1,0)$\footnote{Or any other equivalent solution.} for $\operatorname{\tau}(x)=(1,1,1,1)$ and an inefficiency of $\ineff(x)=\frac{6-4}{6-2}=0.5$~(with one wasted resource).
However, $T=2$ also accommodates solutions with perfect fairness, and with increased efficiency. Take, for instance, $x(1)=(3,0,0,0)$ and $x(2)=(0,0,0,3)$ over two time points, for a combined $\operatorname{\tau}(x(1)) + \operatorname{\tau}(x(2))=(3,3,3,3)$ and an inefficiency of $\ineff(x(t))=0$ in {\em each} period $t$. This represents a noticeable increase in efficiency over choosing the initial solution twice to cover the same time horizon.}
\end{Ex}
In other words, reaching perfect fairness in the shortest possible horizon might not lead to the most efficient solutions.
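The numbers in \cref{ex:t} can be verified directly (here $\bar f = 6$ and $\underline f = 2$, the extreme total benefits over $\mathcal{X}$ as computed in the example):

```python
# tau from the example; each allocated unit is counted in exactly two benefits.
tau = lambda x: (x[0] + x[1], x[1] + x[0], x[2] + x[3], x[3] + x[2])
f_hi, f_lo = 6, 2                         # max/min total benefit over X
ineff = lambda x: (f_hi - sum(tau(x))) / (f_hi - f_lo)

assert tau((1, 0, 1, 0)) == (1, 1, 1, 1)
assert ineff((1, 0, 1, 0)) == 0.5         # perfectly fair, one wasted resource
y = [a + b for a, b in zip(tau((3, 0, 0, 0)), tau((0, 0, 0, 3)))]
assert y == [3, 3, 3, 3]                  # T = 2: perfectly fair...
assert ineff((3, 0, 0, 0)) == 0.0 == ineff((0, 0, 0, 3))  # ...and efficient
```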
\section{Ambulance Location and Relocation}
\label{sec:alp}
The task of allocating ambulances to a set of bases in a region is a well-known problem in operations research \citep{Brotcorne2003}. The objective of this problem is generally one of efficiency: Ambulances should be allocated to prime spots such that they can quickly reach the maximum number of people.
{The definition of this problem is not unified across the literature~\citep{Gendreau2001, Brotcorne2003}.\footnote{This is due to the fact that research projects often work with datasets associated with specific regions, each of which must follow local guidelines and regulations.} Most definitions, however, agree that the population should receive an efficient service, which is generally translated into some form of coverage. In this paper, we are not solely concerned by efficiency, but we are further interested in fairness: Ambulances should be allocated such that the same set of people is not always at a disadvantage with respect to access to a quick service.}
\subsection{{Preliminary Problem Formulation}}
In this manuscript, we adopt the following definition. The region to which ambulances are allocated is divided into~$n$ demand zones, the travel time between each pair of zones being a known quantity. Ambulances can only be allocated to bases, which are located within a subset of the zones. Each zone (equivalently, the population living in each zone) is an individual stakeholder in this model. A pre-chosen number $T$ of decision rounds is modeled, over which the ambulances should be allocated in a fair and efficient manner. In each round, a configuration~$x$ describing how the $m$ ambulances are placed is chosen.
Next, we define the benefit function for the zones.
In this model, each zone $i$, based on its population, has a demand of $\zeta_i$ ambulance vehicles.
A zone $i$ is said to be \emph{covered} in configuration $x$ if there are at least $\zeta_i$ ambulances located in bases from which zone $i$ could be reached in a time less than a chosen time limit called the {\em response threshold time}.
Now, $[\operatorname{\tau}(x)]_i = 1$ if zone $i$ is covered by the configuration $x$, and $[\operatorname{\tau}(x)]_i = 0$ otherwise. We note that, in contrast to the simpler cases of \cref{sec:simple}, we use a nonlinear benefit function here, both to reflect the fact that a larger number of ambulances is essential to sufficiently serve certain regions and to demonstrate the robustness of our methods to certain classes of nonlinear (piecewise-linear) benefit functions. The nonlinearity, as shown below, can be modeled using auxiliary variables and linear functions, so as to conform to the form of \cref{eq:finiteBaseSet1}.
Furthermore, efficiency constraints analogous to \cref{eq:FOT:eff} are enforced by requiring that at least a fraction $f$ of the zones be covered in each allocation. The unfairness metric $\phi$ used is the difference between the coverage frequencies, over the $T$ periods, of the zones that are most often and least often covered, i.e., $\phi(\tau):=\min_{g,h} \{g-h: g\ge \tau_i \ge h, \forall\,i\in \{1,\ldots,n\} \}$.
With the above, the problem is a standard T-PFA problem in \cref{Def:TPFA}. However, the ambulance allocation problem involves additional constraints on allocations between two consecutive periods. Decision-makers prefer policies that ensure that not too many ambulances have to shift bases on a daily basis. Thus, between two consecutive allocations, we would ideally like to have not more than a fixed number $r$ of ambulances to shift bases. We refer to this constraint as the {\em transition constraint}.
\begin{table}[h!]
\centering
{\begin{tabular}{rl}
\hline
\multicolumn{2}{c}{Variables} \\
\hline
$\phi(y)$ & Unfairness associated with benefits $y$ \\
$y_i$ & Average benefit of zone $i$ \\
$x_i(t)$ & Number of ambulances at base $i$ at time $t$ \\
$v_i(t)$ & Number of ambulances that can reach zone $i$ at time $t$\\
$\operatorname{\tau}_{i}(t)$ & Benefit of zone $i$ at time $t$ \\
\hline
\hline
\multicolumn{2}{c}{Parameters} \\
\hline
$\mathcal{B}$ & Set of bases \\
$\mathscr{T}$ & Set of time points \\
$n$ & Number of zones \\
$m$ & Number of ambulances \\
$a_{ji}$ & 1 if an ambulance can travel from zone $j$ to zone $i$ within the response threshold time, 0 otherwise \\
$\zeta_i$ & Demand of ambulances for zone $i$ \\
$T$ & Time horizon \\
$f$ & Fraction of zones that must be covered \\
$r$ & Number of ambulances allowed to shift bases \\
\hline
\end{tabular}}
\caption{Variables and parameters of the AWT.}\label{tbl:Amb} \end{table}
Now, the problem can be formally cast as follows:
\begin{subequations}
\begin{alignat}{3}
\min_{y,v,x,\operatorname{\tau},\phi}\quad& \phi(y) && \label{eq:Amb:begin} \\
\textrm{s.t.}\quad
&x_i(t) = 0 &&\forall\, i \not \in \mathcal{B};\, \forall\,t\in\mathscr{T}\label{eq:Amb:Base}\\
&\sum_{i=1}^n x_i(t) \le m &&\forall\,t\in\mathscr{T} \label{eq:Amb:m}\\
&v_i(t) = \sum_{j=1}^na_{ji}x_j(t)&&\forall\,i=1,\ldots,n;\,t\in\mathscr{T}\label{eq:Amb:v}\\
&v_i(t) \leq (\zeta_i-1) + m\operatorname{\tau}_i(t)&&\forall\,i=1,\ldots,n;\,t\in\mathscr{T} \label{eq:Amb:vzeta1}\\
&v_i(t) \geq \zeta_i - m(1-\operatorname{\tau}_i(t))&&\forall\,i=1,\ldots,n;\,t\in\mathscr{T}\label{eq:Amb:vzeta2}\\
&y_i =\frac{1}{T} \sum_{t=1}^T \operatorname{\tau}_i(t) &&\forall\,i=1,\ldots,n\\
&\sum_{i=1}^{n}\operatorname{\tau}_i(t) \ge fn &&\forall\,t\in\mathscr{T} \label{eq:Amb:eff}\\
&x_i(t) \in \mathbb{Z}_{\ge 0}&&\forall\,i=1,\ldots,n;\,t\in\mathscr{T} \label{eq:Amb:integer} \\
&\operatorname{\tau}_i(t) \in \{0,1\}&&\forall\,i=1,\ldots,n;\,t\in\mathscr{T} \label{eq:Amb:end} \\
&\sum_{i=1}^n\left\vert x_i(t+1) - x_i(t)\right \vert \le 2r&&\forall\,t=1,\ldots,T-1 \label{eq:Amb:trans}
\end{alignat}\label{eq:Amb}
\noindent Here, $\mathscr {T} = \{1,\ldots,T\}$, and $x_i(t)$ is the number of ambulances allotted to zone $i$ at time $t$.
Constraints \cref{eq:Amb:Base} ensure that allocation occurs only if $i$ is an ambulance base. Here, $\mathcal B$ is the set of ambulance bases. Constraints \cref{eq:Amb:m} limit the number of ambulances available for allocation in each round. Moreover, $a_{ji}$ is a binary parameter which is $1$ if an ambulance can go from $j$ to $i$ within the response threshold time and $0$ otherwise.
Through constraints \cref{eq:Amb:v}, $v_i(t)$ counts the number of ambulances that can reach zone $i$ within the response threshold time.
Here,
$\operatorname{\tau}_i(t)$ is $1$ if $v_i(t)$ is at least $\zeta_i$, i.e., if the demand of ambulances in zone $i$ is satisfied, and $0$ otherwise. This is accomplished by constraints \cref{eq:Amb:vzeta1,eq:Amb:vzeta2}.
Constraints \cref{eq:Amb:eff} take care of efficiency and ensure that only allocations covering at least a fraction $f$ of all zones are considered.
Finally, constraints \cref{eq:Amb:trans}, which can easily be reformulated through linear inequalities, are the transition constraints.
They ensure that not too many ambulances shift bases between consecutive time periods.
\end{subequations}
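To see how the big-$M$ pair \cref{eq:Amb:vzeta1,eq:Amb:vzeta2} forces $\operatorname{\tau}_i(t)$ to behave as the indicator $\mathbf{1}[v_i(t) \ge \zeta_i]$, the following sketch (with hypothetical values of $m$ and $\zeta_i$; it is an illustration, not part of the formulation) enumerates both candidate values of $\tau$:

```python
def feasible_taus(v, zeta, m):
    """Values of tau in {0, 1} satisfying the two big-M constraints
    v <= (zeta - 1) + m*tau   and   v >= zeta - m*(1 - tau)."""
    return [tau for tau in (0, 1)
            if v <= (zeta - 1) + m * tau and v >= zeta - m * (1 - tau)]

# With m = 5 ambulances and a demand of zeta = 2, tau is forced to the
# indicator 1[v >= zeta] for every achievable coverage level v.
for v in range(6):
    assert feasible_taus(v, zeta=2, m=5) == [1 if v >= 2 else 0]
```

For every achievable coverage level exactly one value of $\tau$ survives, which is precisely what the MILP relies on.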
The problem in \cref{eq:Amb} is a mixed-integer linear program (MILP). However, the problem is symmetric with respect to certain permutations of the variables, and symmetry makes problems particularly hard for modern branch-and-bound-based solvers \citep{margot2010symmetry}. The symmetry persists even if one relaxes the transition constraints \cref{eq:Amb:trans}.
One can observe that, relaxing the transition constraints \cref{eq:Amb:trans} in \cref{eq:Amb}, we have a T-PFA problem.
We call this relaxed problem, defined by \cref{eq:Amb:begin} to \cref{eq:Amb:end}, the Ambulance-without-transition-constraints (AWT) problem.
\subsection{{A Branch-and-Price Reformulation of the AWT}}
Since the AWT in \cref{eq:Amb:begin} to \cref{eq:Amb:end} is a T-PFA problem, it can be readily written in the form shown in \cref{eq:finiteBaseSet1} and hence can be solved using branch-and-price.
First, we note that without the constraint in \cref{eq:Amb:trans}, any feasible or optimal solution remains feasible or optimal after permuting the time indices $t$. Thus, the following version of the problem can be used to solve the relaxed problem without the symmetry: we only {\em count} the number of times each configuration is used over the $T$ periods.
\begin{table}[h!]
\centering
{\begin{tabular}{rlrl}
\hline
\multicolumn{2}{c}{Variables} & \multicolumn{2}{c}{Parameters} \\
\hline
$g$ & Largest average benefit & $\operatorname{\tau}_{ij}$ & Benefit of zone $i$ in configuration $j$ \\
$h$ & Smallest average benefit & $T$ & Time horizon \\
$y_i$ & Average benefit of zone $i$ & $n$ & Number of zones \\
$q_j$ & Number of times configuration $j$ is used & $k$ & Number of configurations \\
\hline
\end{tabular}}
\caption{Variables and parameters of the MP.}\label{tbl:mp} \end{table}
\paragraph{{The Master Problem} (MP). }
\begin{subequations}
\begin{alignat}{30}
\min \quad& g-h &&\label{eq:BAPprimal_obj}\\
\textrm{s.t.} \quad
&g \ge y_i&&\quad(\alpha_i)&&\quad \forall \,i = 1, \dots, n\label{eq:BAPprimal_g}\\
&y_i \ge h&&\quad(\beta_i)&&\quad \forall \,i = 1, \dots, n\label{eq:BAPprimal_h}\\
&y_i = \frac{1}{T} \sum_{j=1}^k\operatorname{\tau}_{ij}q_j&&\quad(\lambda_i)&&\quad \forall \,i = 1, \dots, n\\
&\sum_{j=1}^k q_j = T&&\quad(\mu) &&\label{eq:BAPprimal_cstr2}\\
&q_j \ge 0&&&&\quad \forall \,j = 1, \dots, k\\
&q_j \in \mathbb{Z}&&&&\quad \forall \,j = 1, \dots, k.
\end{alignat}
\label{eq:BAPprimal}
\end{subequations}
\noindent where $q_j$ counts the number of times the configuration defined by $x^j$ is used. The benefit obtained by stakeholder $i$ due to the configuration $x^j$ is $\operatorname{\tau}_{ij}$. $y_i$ is the average benefit that stakeholder $i$ enjoys through the time horizon of planning. The objective \cref{eq:BAPprimal_obj} is to minimize the difference between the largest ($g$) and the smallest ($h$) average benefits.
Note that, once we solve the MP, an equivalent solution to the AWT could be obtained by arbitrarily considering the allocations $x^j$ for $q_j$ times in $x(1),\ldots,x(T)$ of the AWT. Similarly, given a solution to the AWT, one could immediately identify a corresponding solution to the MP.
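As a concrete check of the counting reformulation, the sketch below (a hypothetical instance with $n = 3$ zones and $k = 2$ configurations, not taken from our data) recovers the average benefits $y$ and the objective $g - h$ from a counting vector $q$:

```python
# Hypothetical instance: n = 3 zones, k = 2 configurations, horizon T = 4.
# tau[i][j] is the benefit of zone i under configuration j.
T = 4
tau = [[1, 0],
       [1, 1],
       [0, 1]]
q = [3, 1]          # configuration 0 used 3 times, configuration 1 once
assert sum(q) == T  # the convexity constraint on the counts

# Average benefit of each zone over the horizon.
y = [sum(tau[i][j] * q[j] for j in range(len(q))) / T for i in range(3)]
g, h = max(y), min(y)          # largest and smallest average benefit
assert y == [0.75, 1.0, 0.25]
assert g - h == 0.75           # the unfairness the MP minimizes
```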
However, since the number of configurations are typically exponentially large in $n$ and $m$, and we only might use a handful of them in a solution, we could resort to a branch-and-price approach where the MP only contains a subset of the configurations.
Referring to the continuous relaxation of the MP as the CMP, the dual of the CMP can be found in Appendix A.
Considering only a subset of columns in the CMP is equivalent to considering only a subset of constraints in \cref{eq:BAPdual:many}.
Given some dual optimal solution $(\alpha^\star,\beta^\star,\lambda^\star,\mu^\star)$ to the dual of the restricted CMP, one can find the most violated constraint in \cref{eq:BAPdual:many} and include the corresponding column in the restricted CMP.
\paragraph{{The Pricing Problem}. }
\begin{subequations}
\begin{alignat}{3}
\min \quad & \sum\limits_{i=1}^{n} \operatorname{\tau}_i \lambda_i^\star && \label{eq:pp:obj} \\
\textrm{s.t.} \quad
& (\operatorname{\tau}_{i} = 1) \Longleftrightarrow (a_{i}^{\mathsf T} x \geq \zeta_{i}), && \quad \forall \,i = 1, \dots, n \label{eq:pp:c1} \\
& (\operatorname{\tau}_{i} = 0) \Longleftrightarrow (a_{i}^{\mathsf T} x \leq \zeta_{i}-1), && \quad \forall \,i = 1, \dots, n \label{eq:pp:c0} \\
& \sum\limits_{i=1}^n x_i \leq m, && \label{eq:pp:m} \\
& x_i = 0, && \quad \forall \,i \not\in \mathcal{B} \\
& \sum\limits_{i=1}^n \operatorname{\tau}_i \geq fn && && \label{eq:pp:95} \\
& \operatorname{\tau} \in \{0,1\}^n && && \\
& x \in \mathbb{Z}_{\geq 0}^n. && &&
\end{alignat}
\label{eq:BAPpricing}
\end{subequations}
The minimal efficiency constraints are embedded in the pricing problem.
The time horizon is fixed in \cref{eq:BAPprimal_cstr2}.
Constraints \cref{eq:pp:c1,eq:pp:c0} compute the benefit $\tau_i$ of each zone and can clearly be rewritten with integer variables and linear constraints. No more than $m$ ambulances may be used \cref{eq:pp:m}, and the configuration must cover at least a fraction $f$ of the zones \cref{eq:pp:95}.
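For intuition, on a tiny instance one can solve the pricing problem by brute force instead of as a MILP: enumerate every placement of at most $m$ ambulances on the bases and keep the configuration minimizing $\sum_i \tau_i \lambda_i^\star$. The sketch below does exactly that; the instance data and the dual values $\lambda^\star$ are hypothetical.

```python
from itertools import product

def price_by_enumeration(lmbda, a, zeta, bases, m, f):
    """Brute-force pricing: enumerate every way of placing up to m
    ambulances on the bases, keep configurations covering at least f*n
    zones, and return the one minimising sum_i tau_i * lambda_i."""
    n = len(zeta)
    best = None
    for counts in product(range(m + 1), repeat=len(bases)):
        if sum(counts) > m:
            continue
        x = [0] * n
        for b, c in zip(bases, counts):
            x[b] = c
        # Coverage v_i and the induced benefit indicator tau_i.
        v = [sum(a[j][i] * x[j] for j in range(n)) for i in range(n)]
        tau = [1 if v[i] >= zeta[i] else 0 for i in range(n)]
        if sum(tau) < f * n:
            continue
        cost = sum(t_i * l_i for t_i, l_i in zip(tau, lmbda))
        if best is None or cost < best[0]:
            best = (cost, x, tau)
    return best

# Hypothetical 3-zone instance: zones 0 and 2 are bases, zone 1 is
# reachable from both; one ambulance, unit demands, no efficiency floor.
a = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
best = price_by_enumeration(lmbda=[-1.0, -2.0, -1.0], a=a, zeta=[1, 1, 1],
                            bases=[0, 2], m=1, f=0.0)
cost, x, tau = best
assert cost == -3.0 and tau in ([1, 1, 0], [0, 1, 1])
```

The actual pricing step solves the MILP above instead, since enumeration is hopeless at realistic sizes.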
The MP reformulation of the AWT in \cref{eq:Amb:begin} to \cref{eq:Amb:end} can now be solved efficiently using branch-and-price. However, a solution thus obtained might violate \cref{eq:Amb:trans}. Further, given a solution in the space of the MP variables, it is not immediate how to verify whether constraint \cref{eq:Amb:trans} is satisfied. So, given a feasible point for the MP, we define the configuration graph as follows, and then show that the point satisfies \cref{eq:Amb:trans} if and only if the configuration graph has a Hamiltonian path.
\subsection{Checking \cref{eq:Amb:trans}}
In the previous section, we proposed a branch-and-price algorithm to solve the relaxed version of the AWT, i.e., without constraints \cref{eq:Amb:trans}. First, using \cref{thm:HamPath}, we provide an algorithm to check the feasibility of a solution provided by branch-and-price. If the solution is feasible, we are done. If it is infeasible, we provide routines to ``cut off'' the infeasible solution and continue the branch-and-price algorithm. We also show how cutting planes can be impractical both in the space of solutions considered in the MP and in a binarized version of the problem. We then describe a three-way branching scheme that could work; however, due to the limitations of commercial solvers in implementing three-way branching, we finally resort to constraint programming.
\begin{Def}[Configuration graph]
Given a solution $\bar q$ to the MP, define $\mathcal I = \{j:\bar q_j \geq 1\}$.
The configuration graph (CG) is defined as $G = (V, E)$, where $V = \{ (j,j'): j \in \mathcal I; j' \in \{1,2,\ldots,\bar q_j\}\}$ and $E = \{( (j_1, j_1'), (j_2, j_2') ): \left \Vert \bar x^{j_1} - \bar x^{j_2} \right \Vert_1 \leq 2r\}$ where $\bar x^{j_1}$ and $\bar x^{j_2}$ are the configurations corresponding to the variables $\bar q_{j_1}$ and $\bar q_{j_2}$.
\end{Def}
\begin{theorem}\label{thm:HamPath}
Given a {point} $\bar q$ that is feasible to the MP, there exists a corresponding $\bar x$ that is feasible to the AWT if the CG defined by $\bar q$ contains a Hamiltonian path.
Conversely, if there is a feasible solution to the AWT which uses only the configurations with indices in $\mathcal I = \{j:\bar q_j \geq 1\}$, then the corresponding CG has a Hamiltonian path.
\end{theorem}
\begin{proof}
Observe that $G$ has exactly $T$ vertices, since if a configuration is found more than once in $\bar q$, it is split into as many distinct vertices in $V$, which are distinguished by the second element of the tuple. An edge $((u_1,u_2), (v_1,v_2))$ in $E$ indicates that movement between configurations indexed by $u_1$ and $v_1$ does not violate the transition constraints. A Hamiltonian path is a path that visits each vertex exactly once. As such, any Hamiltonian path in $G$ proves that a feasible sequence of transitions which does not violate the transition constraints exists between the configurations in $\bar q$. Given a Hamiltonian path $(v_1, w_1), (v_2, w_2), \dots, (v_T, w_T)$ in $G$, a feasible sequence $x(1), x(2), \dots, x(T)$ for the AWT would be $\bar x^{v_1},\bar x^{v_2}, \dots,\bar x^{v_T}$.
{Conversely, given a feasible sequence $x^{j_1}, x^{j_2}, \dots, x^{j_T}$ for the AWT, a Hamiltonian path can be constructed for $G$ as follows. The $t$-th vertex of the path is ${(j_t, \beta_t)}$, where $\beta_t$ is one plus the number of times the configuration denoted by $x^{j_t}$ appears among the first $t-1$ vertices of the path. }
\hfill $\blacksquare$
\end{proof}
Following \cref{thm:HamPath}, one can construct the CG with exactly $T$ vertices and check whether the solution to the MP is feasible for the AWT. If it is, we are done. If not, the ways we can proceed are detailed in the rest of this section.
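The feasibility check of \cref{thm:HamPath} can be sketched directly: build the CG from $\bar q$ and search for a Hamiltonian path. The sketch below (with hypothetical four-zone configurations) uses brute force over all vertex orderings, which is viable only because $T$ is small; in practice one would use a dedicated routine.

```python
from itertools import permutations

def configuration_graph(q, configs, r):
    """Build the CG: one vertex (j, s) per copy s of each used
    configuration j, with an adjacency test from the transition rule."""
    vertices = [(j, s) for j, qj in enumerate(q) for s in range(1, qj + 1)]
    def adjacent(u, v):
        cu, cv = configs[u[0]], configs[v[0]]
        return sum(abs(a - b) for a, b in zip(cu, cv)) <= 2 * r
    return vertices, adjacent

def has_hamiltonian_path(vertices, adjacent):
    """Brute force over all orderings; only viable for small T."""
    return any(all(adjacent(p[i], p[i + 1]) for i in range(len(p) - 1))
               for p in permutations(vertices))

configs = [(3, 3, 3, 3), (4, 2, 3, 3), (2, 4, 3, 3), (3, 3, 2, 4)]
V, adj = configuration_graph([1, 1, 1, 1], configs, r=1)
assert len(V) == 4 and not has_hamiltonian_path(V, adj)  # star: infeasible

V2, adj2 = configuration_graph([4, 0, 0, 0], configs, r=1)
assert has_hamiltonian_path(V2, adj2)                    # trivially feasible
```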
\subsubsection{{Cutting Planes and Binarization}}
The most natural way to eliminate a solution that does not satisfy a constraint is by adding a cutting plane. This is a common practice in the MILP literature.
In cases where a more sophisticated cutting plane is not available, but every feasible solution is determined by a binary vector, no-good cuts could be used to eliminate infeasible solutions one by one \citep{dAmbrosio2010interval}.
However, in our problem of interest, the variables $x$ in the MP are general integer variables.
It is possible that a point $x$ that violates the transition constraint could lie strictly in the convex hull of solutions that satisfy the transition constraint. Hence it could be impossible to separate $x$ using a cutting plane. \cref{ex:confGr} demonstrates the above phenomenon.
\begin{Ex}\label{ex:confGr}
\begin{figure}[t]
\centering
\begin{tikzpicture}[shorten >=1pt,node distance=2cm,auto]
\tikzstyle{state}=[shape=circle,draw,minimum size=.5cm]
\node[state] (C) {$(1, 1)$};
\node[state,above of=C] (A) {$(2, 1)$};
\node[state,left of=C] (D) {$(3, 1)$};
\node[state,right of=C] (E) {$(4,1)$};
\path[draw]
(A) edge node {} (C)
(D) edge node {} (C)
(E) edge node {} (C);
\end{tikzpicture}
\caption{Configuration graph for the solution in \cref{ex:confGr}.}
\label{fig:confGr}
\end{figure}
Consider the case where the MP has an optimal solution given by $q = (q_1, q_2, q_3, q_4) = (1,1,1,1)$ and the allocations $x^1,x^2,x^3,x^4$ (in the context of the AWT) corresponding to $q_1, q_2, q_3, q_4$ are $(3,3,3,3)$, $(4,2,3,3)$, $(2,4,3,3)$, $(3,3,2,4)$ respectively. Let $r = 1$. Then, the CG corresponding to this solution is a tree as shown in \cref{fig:confGr} and hence does not have a Hamiltonian path. On the other hand, for the choices $q^i$, $q^{ii}$, $q^{iii}$ and $q^{iv}$ being $(4,0,0,0)$, $(0,4,0,0)$, $(0,0,4,0)$, $(0,0,0,4)$, the CGs are all $K_4$, i.e., complete graphs and hence trivially have a Hamiltonian path.
Now, it is easy to see that $q$ lies in the convex hull of $q^i$, $q^{ii}$, $q^{iii}$ and $q^{iv}$; while each of the latter solutions satisfies the transition constraint, $q$ does not. Thus no valid cutting plane can separate the infeasible point $q$ without also cutting off feasible points.
\end{Ex}
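The convex-combination claim in the example can be verified numerically: $q$ is the average of the four feasible counting vectors, so it lies in their convex hull even though it violates the transition constraints itself.

```python
# The four feasible counting vectors from the example.
extreme = [(4, 0, 0, 0), (0, 4, 0, 0), (0, 0, 4, 0), (0, 0, 0, 4)]
# Their average is exactly the infeasible point q = (1, 1, 1, 1).
q = tuple(sum(e[j] for e in extreme) / len(extreme) for j in range(4))
assert q == (1.0, 1.0, 1.0, 1.0)
```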
While \cref{ex:confGr} shows that an infeasible solution cannot always be separated using cutting planes, it could still be possible that there is an extended formulation where the infeasible point could be separated.
\paragraph{{Naive Binarization}.} A natural choice of an extended formulation comes from binarizing each integer variable $q_j$ in the MP. This is possible because each $q_j$ is bounded above by $T$ and below by $0$. Thus one can write constraints of the form $q_j = b_j^1 + 2b_j^2 + 4b_j^4 + 8b_j^{8} + \ldots$, where the summation extends up to the smallest $\ell$ such that $2^\ell > T$, i.e., $\ell = \lfloor\log_2 T\rfloor + 1$. If each $b_j^\ell$ is binary, any integer between $0$ and $T$ can be represented as above.
Having written the above binarization scheme, one can separate any solution $q$ by adding a no-good cut on the corresponding binary variables.
A no-good cut is a linear inequality that separates a single vertex of the $0-1$ hypercube without cutting off the rest \citep{balas1972canonical,dAmbrosio2010interval}.
An example is shown in \cref{ex:confGrBin}.
\begin{Ex}\label{ex:confGrBin}
Consider the problem in \cref{ex:confGr}. Binarization to separate the solution $q = (1,1,1,1)$ can be done by adding the following constraints to the MP.
\begin{subequations}
\begin{align}
q_j \quad&=\quad b_j^1 + 2b_j^2 + 4b_j^4 &\forall\,j=1,\ldots,4\\
\sum_{j=1}^4\left (b_j^1 + (1-b_j^2) + (1-b_j^4) \right) \quad&\le\quad {11}
\end{align}
\end{subequations}
Note that the second constraint above (the no-good constraint) is violated {\em only} by the binarization corresponding to the solution $q = (1,1,1,1)$ and no other feasible solution is cut off.
\end{Ex}
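The mechanics of the no-good cut can be checked exhaustively: for a binary point $b^\star$, the cut sums the literals that agree with $b^\star$ and bounds the sum by the number of literals minus one, so that $b^\star$ is the unique violating vertex of the hypercube. A sketch for the binarization of $q = (1,1,1,1)$ with $T = 4$:

```python
from itertools import product

def no_good_cut(b_star):
    """No-good cut excluding the single binary point b_star: the number of
    literals agreeing with b_star must be at most len(b_star) - 1."""
    def lhs(b):
        return sum(bi if si == 1 else 1 - bi for bi, si in zip(b, b_star))
    return lhs, len(b_star) - 1

# Binarization of q = (1,1,1,1): (b_j^1, b_j^2, b_j^4) = (1, 0, 0) per j.
b_star = (1, 0, 0) * 4
lhs, bound = no_good_cut(b_star)
violating = [b for b in product((0, 1), repeat=12) if lhs(b) > bound]
assert violating == [b_star]    # exactly one hypercube vertex is cut off
```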
The potential downside with the above scheme is that one might have to cut off a prohibitively large number of solutions before reaching the optimal solution. And with the column generation introducing new $q$-variables, this could lead to an explosion in the number of new variables as well as the number of new constraints.
\paragraph{{Strengthened Binarization}.} When the CG corresponding to $q$ is not just lacking a Hamiltonian path but is a disconnected graph (a stronger property), one could add a stronger cut, which could potentially cut off multiple infeasible solutions. In this procedure, we add a binary variable $b_j$ for each $q_j$ such that $b_j = 1$ if and only if $q_j \geq 1$. Now, observe that if the CG corresponding to the solution $\bar q$ is disconnected, then it will be disconnected for all $q$ whose nonzero components coincide with those of $\bar q$; i.e., no solution with the same support as $\bar q$ can satisfy the transition constraints. Thus one could add a no-good cut on these $b_j$ binary variables, which cuts off all the solutions with the same support as $\bar q$.
Unlike naive binarization, while this cuts off multiple solutions simultaneously, it could happen that the CG is connected, but just does not have a Hamiltonian path. In such a case, no cut could be added by the strengthened binarization scheme, and one might have to resort to the naive version.
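A sketch of the support test behind the strengthened cut: restrict the transition relation to the support of $\bar q$ and check connectivity by graph search. The configurations and $r$ below are hypothetical.

```python
def same_support_infeasible(q, configs, r):
    """True if the transition graph restricted to the support of q is
    disconnected, in which case no solution with that support can
    satisfy the transition constraints (sketch)."""
    support = [j for j, qj in enumerate(q) if qj >= 1]
    if not support:
        return False
    def adjacent(u, v):
        return sum(abs(a - b) for a, b in zip(configs[u], configs[v])) <= 2 * r
    # Depth-first search over the support using the transition relation.
    seen, stack = {support[0]}, [support[0]]
    while stack:
        u = stack.pop()
        for v in support:
            if v not in seen and adjacent(u, v):
                seen.add(v)
                stack.append(v)
    return seen != set(support)   # disconnected support => cut the support

configs = [(3, 3, 3, 3), (4, 2, 3, 3), (2, 4, 3, 3), (3, 3, 2, 4)]
assert same_support_infeasible((0, 1, 0, 1), configs, r=1)      # cut it
assert not same_support_infeasible((1, 1, 0, 0), configs, r=1)  # no cut
```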
\subsubsection{{Three-Way Branching}}
An alternative to the naive binarization scheme is three-way branching. While this could work as a stand-alone method, it could also go hand in hand with the strengthened binarization mentioned earlier. This method takes advantage of the three-way branching feature that some solvers, for example, SCIP \citep{GamrathEtal2020ZR,GamrathEtal2020OO}, have.
In this method, as soon as a solution $\bar q$ satisfying all the integrality constraints but violating the transition constraint is found, the following actions are performed. First, if the lower bound and the upper bound for every component of $\bar q$ match, then we are at a leaf that can be discarded as infeasible. If not, find a component $j$ (in our case, a configuration $j$) such that $\bar q_j$ is strictly different from at least one of the bounds. Now, we do a three-way branching on the variable $q_j$ where the new constraints in each of the three branches are (i) $q_j \leq \bar q_j - 1$ (ii) $q_j = \bar q_j$ (iii) $q_j \geq \bar q_j + 1$.
Finite termination follows from the fact that $\bar q_j$ is infeasible for branches (i) and (iii) and hence will never be visited again. For branch (ii), we have $q_j$ such that its lower and upper bounds are equal to $\bar q_j$. Thus, we have one more fixed variable and this variable will never be branched on again.
Three-way branching is used as opposed to regular two-way branching but on integer variables because of the following reasons. First, branching with (i) $q_j \leq \bar q_j - 1$ (ii) $q_j \geq \bar q_j + 1$ is invalid, as it could potentially cut off other feasible solutions with $q_j = \bar q_j$ but the solution differing in components other than $j$. Branching with (i) $q_j \leq \bar q_j - 1$ (ii) $q_j \geq \bar q_j$ could cycle, as we have not fixed any additional variable in the second branch, nor have we eliminated the infeasible solution. Thus, the LP optimum in the second branch is going to be $\bar q$ again, and cycling ensues.
\subsubsection{Constraint Programming}
The final alternative we use to enforce the transition constraints \cref{eq:Amb:trans} is constraint programming~(CP). CP is a programming paradigm for solving combinatorial problems. A CP model is defined by a set of variables, each of which is allowed values from a finite set, its \emph{domain}. The relationship between these variables is determined by the constraints of the problem. These succinct constraints can encapsulate complex combinatorial structures, such as the packing of items into bins, for example. A solver then solves the problem by enforcing consistency between the variables and the constraints, and using branching and backtracking techniques to explore the solution space. Before defining the CP model to enforce constraints \cref{eq:Amb:trans}, we define the compact configuration graph and explain its relationship with the CG.
\begin{Def}[Compact configuration graph]
\label{def:CCG} Given a feasible solution $ q^\star$ to the continuous relaxation of the MP (CMP), the compact configuration graph (CCG) is defined as $G = (V, E)$, where $V = \mathscr{A}:=\{j: q^\star_j > 0\}$ and {$E = \{( v, w ): v, w \in \mathscr{A},\ \left \Vert x^{v} - x^{w} \right \Vert_1 \leq 2r\}$}.
\end{Def}
\begin{Def}[Walk]
A \emph{walk} in an undirected graph $G=(V,E)$ is a finite sequence of vertices $v_1, \ldots, v_k$, not necessarily distinct, such that $(v_i, v_{i+1}) \in E$ for each $i \in \{1,\ldots,k-1\}$.
\end{Def}
\begin{theorem}\label{thm:walk}
Every walk of length $T$ in a CCG constructed from a solution $q^\star$ to CMP corresponds to a feasible solution of the AWT.
\end{theorem}
\begin{proof}
Let $G=(V,E)$ be the CCG given a solution $q^\star$. Let $W=v_1, v_2, \dots,v_T$ be a walk of length $T$ in $G$. Since we allow {revisiting vertices}, it is possible that {$v_j = v_{j'}$ for some $j\neq j'$}.
Now define $\tilde q$ component-wise, where $\tilde q_j$ is the number of times vertex $j$ is visited in the walk $W$. Since the walk has length $T$, trivially $\sum_j \tilde q_j = T$, satisfying \cref{eq:BAPprimal_cstr2}. Defining $\tilde y$ accordingly yields a feasible solution $(\tilde y, \tilde q)$ to the MP. Hence, if we now show that the CG defined by the nonzero components of $\tilde q$ has a Hamiltonian path, then the corresponding $\tilde x$ will be feasible for the AWT due to \cref{thm:HamPath}.
Now, in the CG, construct the path $P = (v_1, n(v_1)+1), (v_2, n(v_2)+1), \ldots, (v_T, n(v_T)+1)$, where $n(v_j)$ is the number of times $v_j$ has appeared earlier in the path $P$. We note that each term in the path $P$ is indeed a vertex of the CG (which has $T$ vertices) and that they are all visited exactly once, implying that $P$ is the required Hamiltonian path.\hfill $\blacksquare$
\end{proof}
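The constructive step in this proof is easy to sketch: a walk in the CCG is turned into a Hamiltonian path in the CG by tagging each vertex with the number of times it has been visited so far (vertex labels below are hypothetical).

```python
def walk_to_hamiltonian_path(walk):
    """Turn a walk v_1..v_T in the CCG into the corresponding Hamiltonian
    path in the CG: the t-th path vertex is (v_t, copies seen so far)."""
    count, path = {}, []
    for v in walk:
        count[v] = count.get(v, 0) + 1
        path.append((v, count[v]))
    return path

path = walk_to_hamiltonian_path([2, 0, 2, 0])
assert path == [(2, 1), (0, 1), (2, 2), (0, 2)]
assert len(set(path)) == len(path)   # every CG vertex visited exactly once
```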
The general mechanism involving the CP component is provided in \cref{alg:cp}, which is the entire algorithm we test in \cref{sec:numerical}.
The CP component checks whether the solution returned by the CMP can be made valid, i.e., whether there exist integer solutions using only the configurations in $\mathscr{A}$ (the configurations that appear in the optimal CMP solution, see Definition \ref{def:CCG}) that do not violate the transition constraints. By finding such feasible solutions, it provides an upper bound for the AWT. As soon as a set of configurations is given to CP, a cut is added to the CMP that eliminates all CMP solutions consisting only of the configurations provided to CP. Namely, if $\mathscr{A}$ indexes the configurations given to CP, we add the cut $\sum_{j\in\mathscr{A}} q_j \leq T-1$, indicating that at least one of the configurations must be outside the set indexed by $\mathscr{A}$.
Let $k'$ be the cardinality of $\mathscr{A} $. A CCG is associated with solution $q $ --- this CCG forms the basis for a deterministic finite automaton. This automaton $A$ is defined by
a tuple $(Q, \Sigma, \delta, q_0, F)$ of states $Q$, alphabet $\Sigma$, transition function $\delta: Q \times \Sigma \rightarrow Q$, initial state $q_0 \in Q$, and final states $F \subseteq Q$, with $Q = \{0, \dots, k'\}$, $\Sigma = \{1, \dots, k'\}$, $\delta = \{(u,v) \rightarrow v : (u,v) \in E\} \cup \{(0, u) \rightarrow u : u \in F\}$, $q_0 = 0$, and $F = \{1, \dots, k'\}$. In other words, this automaton accepts any sequence of configurations from $\mathscr{A}$ that respects the transition constraints, with the dummy state $q_0 = 0$ as the initial state.
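A minimal sketch of this automaton, with a hypothetical path-shaped CCG on three configurations, illustrates how acceptance encodes the transition constraints:

```python
def build_automaton(edges, kprime):
    """DFA from the CCG: states 0..k', alphabet 1..k'. From the dummy
    state 0 any configuration may start; afterwards only CCG-adjacent
    moves are allowed."""
    delta = {(0, u): u for u in range(1, kprime + 1)}
    for (u, v) in edges:
        delta[(u, v)] = v
        delta[(v, u)] = u          # the CCG is undirected
    return delta

def accepts(delta, word, kprime):
    state = 0
    for symbol in word:
        if (state, symbol) not in delta:
            return False
        state = delta[(state, symbol)]
    return 1 <= state <= kprime    # all non-dummy states are accepting

# Hypothetical path CCG on three configurations: 1 - 2 - 3.
delta = build_automaton([(1, 2), (2, 3)], kprime=3)
assert accepts(delta, [1, 2, 3, 2, 1], 3)   # a valid walk of length 5
assert not accepts(delta, [1, 3], 3)        # 1 and 3 are not adjacent
```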
Let $LB$ be the lower bound given by the CMP, and $UB$ be an upper bound.
Finally, let $\Omega$ be the collection of all index sets $\mathscr{A}'$ for which the CP model has been previously solved.
There are $T$ decision variables $z$ with domains
$\{1, \dots, k'\}$, with $z_t$ indicating which CMP configuration from $\mathscr{A} $ is used at time point $t$. The CP model is
\begin{subequations}
\begin{alignat}{3}
& \min \phi \label{eq:cp:obj}\\
&\phi = \texttt{max}(c) - \texttt{min}(c) \label{eq:cp:phi} \\
& LB \le \phi \le UB -1 \label{eq:cp:bounds} \\
& \texttt{cost\_regular}(z, A,\tau_{i\star},c_i) &&\qquad i = 1, \dots, n \label{eq:cp:regular} \\
& \texttt{at\_most}(T-1,z,\omega ) &&\qquad\forall \omega \subset \mathscr{A} :\exists \mathscr{A}' \in \Omega,\; \omega \subseteq \mathscr{A}' \label{eq:cp:cuts} \\
&z_t \in \mathscr{A} &&\qquad t=1,\ldots,T \label{eq:cp:zdef} \\
&c_i \in \mathbb{Z}_{\ge 0}&&\qquad i=1,\ldots,n \label{eq:cp:cdef}
\end{alignat}\label{eq:CP}
\end{subequations}
The objective~\cref{eq:cp:obj} is to minimize the unfairness, i.e., the difference between the zone which is covered the most, and that which is covered the least~\cref{eq:cp:phi}.
Any feasible solution should be strictly better than the upper bound, and search by the CP solver can be interrupted as soon as a feasible solution is found whose objective value coincides with $LB$~\cref{eq:cp:bounds}.
By \cref{thm:walk}, any walk of length $T$ in the CCG, i.e., any length-$T$ sequence of configurations indexed by $\mathscr{A}$ that respects the transition constraints, corresponds to a feasible solution of the AWT.
This requirement is enforced by the $\texttt{cost\_regular}$~\citep{costregular} constraints~\cref{eq:cp:regular}: A \texttt{cost\_regular} constraint holds for zone $i$ if $z$ forms a word recognized by automaton $A$ and if variable $c_i$ is equal to the coverage of zone $i$ over $T$:
$c_i = \sum_{t=1}^T \tau_{i, z_t}$.
Note that the CP model does not require that all configurations indexed by $\mathscr{A} $ be used at least once: Since it checks all possible subsets of $\mathscr{A}$, the cut added to CMP
does not remove any unexplored part of the search space. Finally, all subsets $\omega \subset \mathscr{A}$ previously explored, i.e. being as well a subset of some index set $\mathscr{A}'$ in $\Omega$, are considered
by the model using the \texttt{at\_most} constraint \cref{eq:cp:cuts}: At most $T-1$ variables in $z$ can take on values in $\omega$. The motivation for this is illustrated in \cref{ex:cp_cuts2}.
\begin{Ex}\label{ex:cp_cuts2}
Assume that $\mathscr{A}=\{j', j'', j'''\}$. If \cref{eq:CP} had previously solved $\mathscr{A}'=\{j', j''\}$, this solution space would be explored again since the configurations
in $\mathscr{A}$ are not constrained to be used at least once.
Adding constraint $\texttt{at\_most}(T-1,z,\{j', j''\})$ in the current iteration avoids this.
\end{Ex}
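The effect of the \texttt{at\_most} cut can be sketched as a simple counting check on a candidate sequence $z$ (the configuration indices below are hypothetical):

```python
def at_most_ok(z, omega, T):
    """The at_most(T-1, z, omega) constraint: at most T-1 of the T
    variables in z may take a value from the explored subset omega."""
    return sum(1 for zt in z if zt in omega) <= T - 1

omega = {1, 2}                     # previously solved subset of configurations
assert not at_most_ok([1, 2, 1, 2], omega, T=4)  # re-explores omega only: cut
assert at_most_ok([1, 2, 3, 2], omega, T=4)      # uses a configuration outside
```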
The CP solver we use is OR-Tools, which currently does not implement the \texttt{cost\_regular} constraint. We thus replace \cref{eq:cp:phi,eq:cp:regular} with the equivalent but less efficient\footnote{In practice, the CP component of \cref{alg:cp} remains very fast, so this loss in efficiency is negligible.} reformulation
\begin{alignat}{2}
&\phi = \max\limits_{i=1}^{n} \sum\limits_{t=1}^T \tau_{i, z_t} - \min\limits_{i=1}^{n} \sum\limits_{t=1}^T \tau_{i, z_t} \nonumber \\
&\texttt{regular}(z, A). \nonumber
\end{alignat}
\subsection{The Final Algorithm}
The general working of the final algorithm is presented formally in \cref{alg:cp}.
In each iteration of the final algorithm, we solve a linear program and make a call to the constraint programming solver. The linear program is essentially the continuous relaxation of the MP together with any cutting planes added so far (Step \ref{st:cuts} in \cref{alg:cp}), and it is solved using column generation. The configurations which receive nonzero weights (indexed by $\mathscr{A}$) in the optimal LP solution are passed to the constraint programming solver to detect whether there exists a solution to the AWT that uses only configurations indexed by $\mathscr{A}$. Meanwhile, a cut is added to the linear program that eliminates all solutions using only the configurations indexed by $\mathscr{A}$.
With this, the large set of possible configurations is managed by the linear programming solver, which is powerful and efficient for large problems, while the CP solver only handles instances with a handful of configurations at a time.
\begin{algorithm}[t]
\caption{The Final {Algorithm}}\label{alg:cp}
\begin{algorithmic}[1]
\Require The ambulance allocation problem with number of zones $n$, the bases $\mathcal B$, $\zeta_i$, for $i=1,\ldots,n$, $f$ and $a_{ih}$ for $i,h = 1,\ldots,n$, $r\in\mathbb{Z}_{\ge 0}$ and $T \in \mathbb{Z}_+$.
\Ensure $x(1),\ldots, x(T)$ that is optimal to the AWT.
\State $LB\gets -\infty$, $UB\gets +\infty$, $\mathscr{C} \gets\emptyset$, $(x^\star(1),\ldots,x^\star(T))\gets NULL$.
\While {$UB > LB$}
\State Solve CMP, i.e., the continuous relaxation of the MP after adding each constraint in $\mathscr{C}$. Let $q^\star$ be the optimal solution and $o^\star$ be the optimal objective value.
\State $LB \gets o^\star $.
\State $\mathscr{A} \gets \{j:q^\star_j > 0\}$.
\State $\mathscr{C} \gets \mathscr{C}\cup \left\{ \sum\limits_{j\in\mathscr{A}}q_j \leq T-1. \right\}$. \label{st:cuts}
\State $(x^\dagger(1),\ldots,x^\dagger(T)), o^\dagger \gets \textsc{ConstraintProgramming}(q^\star, n, a, T, LB, UB, \mathscr{C}) $.
\If {$o^\dagger < UB$}
\State $UB\gets o^\dagger$ and $(x^\star(1),\ldots,x^\star(T)) \gets (x^\dagger(1),\ldots,x^\dagger(T))$.
\EndIf
\EndWhile
\State \Return $x^\star(1),\ldots,x^\star(T)$
\State
\Function {ConstraintProgramming}{$q, n, a, T, LB, UB, \mathscr{C}$}
\State $G\gets$ the CCG defined by $q$ (\cref{def:CCG}).
\State Solve \cref{eq:CP} on the graph $G$ with appropriate values of $\operatorname{\tau}, c$.
\If {\cref{eq:CP} is infeasible}
\State \Return $NULL$, $+\infty$
\Else
\State \Return $(x^\dagger(1),\ldots,x^\dagger(T)), o^\dagger$.
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\section{{Computational Experiments}}\label{sec:numerical}
In this section, we introduce the setting that we have used to computationally evaluate \cref{alg:cp}, and we discuss the results of this evaluation in the context of ambulance location and relocation. In particular, we use real data from the city of Utrecht, Netherlands, as well as synthetically generated instances of varying sizes. Moreover, we consider a predetermined time horizon of $T = 30$, with the understanding that a decision maker could reasonably make plans on a monthly basis.
\paragraph{Utrecht instance.} We use the model defined in \cref{sec:alp} to determine ambulance allocation in the city of Utrecht, Netherlands, using a dataset provided by the RIVM.\footnote{National Institute for Public Health and the Environment of the Netherlands.} The Utrecht instance contains 217 zones, 18 of which are bases. Since calls should be reached within 15 minutes, we consider a zone connected to another if an ambulance can travel between the two zones in under 15 minutes. The resulting graph has a density of about 28.4\%.
Moreover, the fleet consists of 19 ambulance vehicles.
The efficiency measure we impose is that at least 95\% of all the zones should be covered at all times.
We consider that a zone is covered if sufficiently many ambulances are in the vicinity of that zone. In turn, we define the sufficient number of ambulances for a zone to be 1, 2, 3, or 4, based on the population density of the zone.
\paragraph{Synthetic instances.} Reproducing the ratio of bases and edges to zones, we also generated synthetic instances\footnote{These instances can be found at https://github.com/PhilippeOlivier/ambulances.} of sizes 50, 100, 200, and 400 zones, using the Mat\'ern cluster point process \citep{matern}, which helps in generating a distribution of zones mimicking a realistic urban setting. The number of ambulances for the instances is chosen to be just enough to ensure feasibility. The results are averaged over five instances of each size.
\paragraph{Testing environment and software. } Testing was performed on an Intel 2.8 GHz processor with 8 GB of RAM running Arch Linux, and the models were solved with Gurobi 9.0.1 and OR-Tools 8.0. A time limit of 1,200 seconds was imposed \add{to solve each instance}.
\paragraph{Results.}
It is easy to see that there are two parameters that are likely to influence the difficulty of the solving process: the size of the instance (number of zones) and the transition constraints, i.e., the flexibility we allow for relocating ambulances between zones on a daily basis. We would typically expect the size of the instance to be a significant factor, but this is likely to be mitigated by the column generation approach, which tends to scale well. The effects that the transition constraints will have on the solving process, however, are less clear.
In order to assess such an effect, for each instance, we vary the number of ambulances that can shift bases on consecutive days ($r$ in constraints \cref{eq:Amb:trans}) from $0.1m$ to $m$ in increments of $0.1m$, where $m$ is the total number of ambulances in the instance. We call this ratio maximum transition, $MT$.
For each of these cases, we record the average of the time taken to solve these instances to optimality (capped at 1,200 CPU seconds).
We record the final relative gap left for the instance, which is defined by $\frac{|UB_f-LB_f|}{UB_f}$, where $LB_f$ and $UB_f$ represent the values of the best lower and upper bounds at the time limit, respectively. We also record the initial relative gap for the instance, which is defined by $\frac{|UB_i-LB_i|}{UB_i}$, where $LB_i$ represents the value of the initial lower bound (without any cuts), and $UB_i$ represents the value of the initial, trivial upper bound.
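Both gap measures instantiate the same formula; a small hypothetical helper makes this concrete:

```python
def relative_gap(lb, ub):
    """Relative gap |UB - LB| / UB, as used for both the initial and
    the final gaps reported in the experiments."""
    return abs(ub - lb) / ub

# E.g. a trivial upper bound of 30 with a lower bound of 22 gives a
# gap of roughly the magnitude seen for mid-sized instances.
print(round(relative_gap(22, 30), 2))  # -> 0.27
```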
We present the final relative gaps associated with different classes of instances in \cref{fig:results}.
\begin{figure}[ht!]
\centering
\begin{tikzpicture}
\begin{axis}[xtick={0.1,0.2,...,1},xlabel={Maximum transition $MT$},ylabel={Relative gap},]
\addplot table [x=mt, y=gap, col sep=comma] {50zones.csv};
\addplot table [x=mt, y=gap, col sep=comma] {100zones.csv};
\addplot table [x=mt, y=gap, col sep=comma] {200zones.csv};
\addplot table [x=mt, y=gap, col sep=comma] {400zones.csv};
\addplot[dashed, mark=triangle*, color=orange] table [x=mt, y=gap, col sep=comma] {utrecht.csv};
\legend{50 zones,100 zones,200 zones,400 zones,Utrecht}
\end{axis}
\end{tikzpicture}
\caption{Final relative gaps with respect to the maximum transition $MT$. Gaps in the synthetic instances are averaged over 5 instances. Time limit of 1,200 CPU seconds.}
\label{fig:results}
\end{figure}
One can immediately observe from \cref{fig:results} that the transition constraints affect the difficulty of the instances for $MT < 0.5$, while everything can be solved to optimality within the time limit for $MT \geq 0.5$, independently of the number of zones. The larger $MT$, the smaller the final gap: when there is more freedom in moving the ambulances around on a daily basis, \cref{alg:cp} becomes very effective in computing optimal solutions.
We observe that for varying values of $MT$, the Utrecht instance has a higher relative gap than synthetic instances of comparable and even larger size. This discrepancy is the result of the distribution of the population densities across the zones.
In the synthetic instances, the population densities are randomly distributed among the zones.
The Utrecht instance, in contrast, exhibits several distinct clusters of varying sizes and population density distributions.\footnote{Constructing these sophisticated clusters is not a trivial task, which is why we settled on a random distribution of population densities for the synthetic instances. By randomizing the placement of the population for the Utrecht instance, its gap becomes similar to that of the synthetic instances.}
Unsurprisingly, \cref{fig:results} also suggests that larger instances are more difficult, as they typically exhibit large relative gaps when the time limit is hit. This is confirmed in more detail by the results in \cref{tab:times}, where we report, for every group of instances and every $MT$ value, the initial gap (``i.gap"), the final gap at the time limit (``f.gap"),
the number of generated columns (``\#cols"), the number of solved instances (``\#solved"),\footnote{The entry ``\#solved" takes an integer value between 0 and 5 for synthetic instances and 0/1 for the Utrecht instance.} and the average time for the instances solved to optimality (``time").
\begin{table}[h!]
\tiny
\tabcolsep=2.5pt
{\begin{tabular}{r|rrrrr|rrrrr|rrrrr}
\multicolumn{1}{c|}{} & \multicolumn{5}{c|}{\textbf{50 zones}} & \multicolumn{5}{c|}{\textbf{100 zones}} & \multicolumn{5}{c}{\textbf{200 zones}} \\
\hline
$MT$ & i.gap & f.gap & \#cols & \#solved & time & i.gap & f.gap & \#cols & \#solved & time & i.gap & f.gap & \#cols & \#solved & time\\
\hline
0.1 & 0.13 & 0.13 & 738 & 4 & 0.3 & 0.55 & 0.47 & 1120 & 1 & 0.2 & 0.23 & 0.23 & 714 & 3 & 2.5 \\
0.2 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.25 & 674 & 2 & 1.2 & 0.23 & 0.23 & 710 & 3 & 2.5 \\
0.3 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.10 & 11 & 4 & 2.5 & 0.23 & 0.17 & 537 & 3 & 2.6 \\
0.4 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.7 & 0.23 & 0.02 & 108 & 4 & 82.7 \\
0.5 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.3 & 0.23 & 0.00 & 19 & 5 & 7.0 \\
0.6 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.1 & 0.23 & 0.00 & 15 & 5 & 7.2 \\
0.7 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.1 & 0.23 & 0.00 & 15 & 5 & 6.7 \\
0.8 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.2 & 0.23 & 0.00 & 15 & 5 & 6.0 \\
0.9 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.2 & 0.23 & 0.00 & 15 & 5 & 6.0 \\
1 & 0.13 & 0.00 & 3 & 5 & 0.4 & 0.55 & 0.00 & 9 & 5 & 2.1 & 0.23 & 0.00 & 15 & 5 & 6.0 \\
\hline
\end{tabular}}
{\begin{tabular}{r|rrrrr|rrrrr}
\multicolumn{11}{c}{} \\
\multicolumn{1}{c|}{} & \multicolumn{5}{c|}{\textbf{400 zones}} & \multicolumn{5}{c}{\textbf{Utrecht}} \\
\hline
$MT$ & i.gap & f.gap & \#cols & \#solved & time & i.gap & f.gap & \#cols & \#solved & time\\
\hline
0.1 & 0.57 & 0.57 & 248 & 1 & 2.7 & 0.73 & 0.73 & 786 & 0 & - \\
0.2 & 0.57 & 0.47 & 246 & 1 & 2.7 & 0.73 & 0.73 & 803 & 0 & - \\
0.3 & 0.57 & 0.43 & 238 & 1 & 2.7 & 0.73 & 0.47 & 757 & 0 & - \\
0.4 & 0.57 & 0.17 & 174 & 3 & 147.4 & 0.73 & 0.00 & 353 & 1 & 315.6 \\
0.5 & 0.57 & 0.00 & 93 & 5 & 273.6 & 0.73 & 0.00 & 252 & 1 & 122.3 \\
0.6 & 0.57 & 0.00 & 65 & 5 & 63.6 & 0.73 & 0.00 & 96 & 1 & 49.6 \\
0.7 & 0.57 & 0.00 & 60 & 5 & 49.6 & 0.73 & 0.00 & 96 & 1 & 50.3 \\
0.8 & 0.57 & 0.00 & 60 & 5 & 47.1 & 0.73 & 0.00 & 96 & 1 & 48.4 \\
0.9 & 0.57 & 0.00 & 60 & 5 & 47.4 & 0.73 & 0.00 & 96 & 1 & 49.8 \\
1 & 0.57 & 0.00 & 60 & 5 & 49.1 & 0.73 & 0.00 & 96 & 1 & 54.7 \\
\hline
\end{tabular}}
\caption{Detailed results for \cref{alg:cp} on both synthetic and real-world Utrecht instances. Time limit of 1,200 CPU seconds. The results of the synthetic instances are averaged over 5 instances.}
\label{tab:times}
\end{table}
The results in \cref{tab:times} show that for $MT > 0.7$ constraints \cref{eq:Amb:trans} are not binding, i.e., they do not cut off the optimal solution of the rest of the model, and \cref{alg:cp} behaves exactly as for $MT = 0.7$.
The average time for the instances solved to optimality is often low, thus indicating that if we manage to prove optimality by completely closing the gap, then this is achieved quickly, generally with a few calls to \textsc{ConstraintProgramming}. In contrast, if we do not succeed in closing the gap early, then it is unlikely to be closed at all in the end. In the cases where optimality is not proven, improvements are still made during the solving process mostly because of improvements on the upper bound value as \textsc{ConstraintProgramming} tests more and more combinations of configurations. This suggests a few things. First, with a high enough $MT$, the initial lower bound is generally optimal, and feasible solutions whose objective values coincide with this bound can readily be found. Second, when the $MT$ value is low, solutions close to the initial LB are harder to find, and the gap is harder to close. Third, the computational power of \cref{alg:cp} is associated with the effectiveness in searching for primal solutions (upper bound improvement) versus the progress on the dual side (lower bound improvement), which appears to be very difficult.
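The interplay described above (a quickly found lower bound, followed by \textsc{ConstraintProgramming} calls that either certify it or trigger further cuts and columns) can be sketched schematically. The two subproblem functions below are toy stand-ins for the real column-generation master and the CP feasibility check, invented purely for illustration:

```python
# Schematic cut loop in the spirit of the algorithm discussed above.
# solve_relaxation and cp_feasible are toy stand-ins, not the paper's
# master problem or ConstraintProgramming routine.

def solve_relaxation(cuts):
    # Stand-in master: the lower bound grows as cuts accumulate.
    lb = 20 + len(cuts)
    solution = ("config-set", len(cuts))
    return lb, solution

def cp_feasible(solution, max_transition):
    # Stand-in CP check: pretend feasibility is reached after a few
    # cuts, sooner when the transition limit is loose.
    return solution[1] >= max(0, 3 - max_transition)

def cut_loop(max_transition, max_iters=50):
    cuts = []
    for _ in range(max_iters):
        lb, sol = solve_relaxation(cuts)
        if cp_feasible(sol, max_transition):
            return lb, len(cuts)      # optimality certified
        cuts.append(sol)              # forbid this combination
    return lb, len(cuts)

print(cut_loop(max_transition=1))   # loose limit: few CP calls
print(cut_loop(max_transition=0))   # tight limit: more CP calls
```

The toy behaviour mirrors the tables: a looser transition limit means fewer CP calls before the bound is certified.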
\begin{table}[h!]
\tiny
\tabcolsep=2.5pt
{\begin{tabular}{r|rrrr|rrrr|rrrr|rrrr|rrrr}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\textbf{50 zones}} & \multicolumn{4}{c|}{\textbf{100 zones}} & \multicolumn{4}{c}{\textbf{200 zones}} & \multicolumn{4}{c}{\textbf{400 zones}} & \multicolumn{4}{c}{\textbf{Utrecht}} \\
\hline
$MT$ & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov \\
\hline
0.1 & 30 & 24 & 6 & 99.20\% & 30 & 10 & 20 & 96.20\% & 30 & 6 & 24 & 97.33\% & 30 & 6 & 24 & 98.45\% & - & - & - & - \\
0.2 & 30 & 28 & 2 & 99.47\% & 30 & 16 & 14 & 96.20\% & 30 & 6 & 24 & 97.33\% & 30 & 12 & 18 & 98.45\% & - & - & - & - \\
0.3 & 30 & 28 & 2 & 99.47\% & 30 & 20 & 10 & 96.53\% & 30 & 9 & 21 & 97.33\% & 30 & 16 & 14 & 98.45\% & 30 & 15 & 15 & 97.23\% \\
0.4 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.33\% & 30 & 22 & 8 & 98.19\% & 30 & 22 & 8 & 95.39\% \\
0.5 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.15\% & 30 & 23 & 7 & 97.23\% & 30 & 22 & 8 & 95.48\% \\
0.6 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.35\% & 30 & 23 & 7 & 97.21\% & 30 & 22 & 8 & 95.39\% \\
0.7 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.00\% & 30 & 23 & 7 & 97.23\% & 30 & 22 & 8 & 95.39\% \\
0.8 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.22\% & 30 & 23 & 7 & 97.13\% & 30 & 22 & 8 & 95.39\% \\
0.9 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.22\% & 30 & 23 & 7 & 97.23\% & 30 & 22 & 8 & 95.39\% \\
1 & 30 & 28 & 2 & 99.47\% & 30 & 23 & 7 & 96.43\% & 30 & 13 & 17 & 97.22\% & 30 & 23 & 7 & 97.17\% & 30 & 22 & 8 & 95.39\% \\
\hline
\end{tabular}}
\caption{Enforcing coverage constraints at 95\%. Solution values for both synthetic and real-world Utrecht instances: $g$, $h$, $g-h$ (defined in \cref{eq:BAPprimal}), and ``cov'' (the \remove{sum of all}\add{average} coverage, i.e., the efficiency). The results of the synthetic instances are averaged over 5 instances.}
\label{tab:gh95}
\end{table}
\begin{table}[h!]
\tiny
\tabcolsep=2.5pt
{\begin{tabular}{r|rrrr|rrrr|rrrr|rrrr|rrrr}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\textbf{50 zones}} & \multicolumn{4}{c|}{\textbf{100 zones}} & \multicolumn{4}{c}{\textbf{200 zones}} & \multicolumn{4}{c}{\textbf{400 zones}} & \multicolumn{4}{c}{\textbf{Utrecht}} \\
\hline
$MT$ & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov \\
\hline
0.1 & 30 & 24 & 6 & 98.80\% & 30 & 15 & 15 & 95.27\% & 30 & 6 & 24 & 94.40\% & 30 & 9 & 21 & 98.45\% & - & - & - & - \\
0.2 & 30 & 29 & 1 & 98.67\% & 30 & 23 & 7 & 95.10\% & 30 & 6 & 24 & 94.40\% & 30 & 13 & 17 & 98.45\% & - & - & - & - \\
0.3 & 30 & 29 & 1 & 98.67\% & 30 & 23 & 7 & 94.17\% & 30 & 12 & 18 & 94.40\% & 30 & 15 & 15 & 98.45\% & 30 & 19 & 11 & 90.32\% \\
0.4 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 20 & 10 & 94.30\% & 30 & 23 & 7 & 97.70\% & 30 & 22 & 8 & 90.32\% \\
0.5 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 21 & 9 & 93.33\% & 30 & 25 & 5 & 95.31\% & 30 & 23 & 7 & 91.90\% \\
0.6 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 21 & 9 & 93.65\% & 30 & 25 & 5 & 95.42\% & 30 & 23 & 7 & 91.84\% \\
0.7 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 21 & 9 & 93.30\% & 30 & 25 & 5 & 95.19\% & 30 & 23 & 7 & 91.53\% \\
0.8 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 21 & 9 & 93.30\% & 30 & 25 & 5 & 95.05\% & 30 & 23 & 7 & 91.97\% \\
0.9 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 21 & 9 & 93.28\% & 30 & 25 & 5 & 95.10\% & 30 & 23 & 7 & 92.00\% \\
1 & 30 & 29 & 1 & 98.67\% & 30 & 24 & 6 & 94.17\% & 30 & 21 & 9 & 93.30\% & 30 & 25 & 5 & 95.07\% & 30 & 23 & 7 & 92.00\% \\
\hline
\end{tabular}}
\caption{Enforcing coverage constraints at 90\%. Solution values for both synthetic and real-world Utrecht instances: $g$, $h$, $g-h$ (defined in \cref{eq:BAPprimal}), and ``cov'' (the \remove{sum of all}\add{average} coverage, i.e., the efficiency). The results of the synthetic instances are averaged over 5 instances.}
\label{tab:gh90}
\end{table}
\begin{table}[h!]
\tiny
\tabcolsep=2.5pt
{\begin{tabular}{r|rrrr|rrrr|rrrr|rrrr|rrrr}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\textbf{50 zones}} & \multicolumn{4}{c|}{\textbf{100 zones}} & \multicolumn{4}{c}{\textbf{200 zones}} & \multicolumn{4}{c}{\textbf{400 zones}} & \multicolumn{4}{c}{\textbf{Utrecht}} \\
\hline
$MT$ & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov & $g$ & $h$ & $g-h$ & cov \\
\hline
0.1 & 30 & 24 & 6 & 97.60\% & 30 & 10 & 20 & 93.60\% & 30 & 6 & 24 & 5616\% & 30 & 10 & 20 & 96.45\% & - & - & - & - \\
0.2 & 30 & 29 & 1 & 98.40\% & 30 & 24 & 6 & 92.57\% & 30 & 6 & 24 & 5616\% & 30 & 13 & 17 & 96.45\% & - & - & - & - \\
0.3 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.90\% & 30 & 14 & 16 & 5616\% & 30 & 17 & 13 & 96.45\% & 30 & 22 & 8 & 94.47\% \\
0.4 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 20 & 10 & 5610\% & 30 & 22 & 8 & 96.45\% & 30 & 25 & 5 & 90.48\% \\
0.5 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 22 & 8 & 5477\% & 30 & 24 & 6 & 92.53\% & 30 & 25 & 5 & 90.48\% \\
0.6 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 22 & 8 & 5464\% & 30 & 25 & 5 & 92.52\% & 30 & 25 & 5 & 90.48\% \\
0.7 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 22 & 8 & 5448\% & 30 & 25 & 5 & 93.19\% & 30 & 25 & 5 & 90.48\% \\
0.8 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 22 & 8 & 5454\% & 30 & 25 & 5 & 93.14\% & 30 & 25 & 5 & 90.48\% \\
0.9 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 22 & 8 & 5453\% & 30 & 25 & 5 & 93.19\% & 30 & 25 & 5 & 90.48\% \\
1 & 30 & 29 & 1 & 98.40\% & 30 & 25 & 5 & 92.57\% & 30 & 22 & 8 & 5453\% & 30 & 25 & 5 & 93.14\% & 30 & 25 & 5 & 90.48\% \\
\hline
\end{tabular}}
\caption{Enforcing coverage constraints at 85\%. Solution values for both synthetic and real-world Utrecht instances: $g$, $h$, $g-h$ (defined in \cref{eq:BAPprimal}), and ``cov'' (the \remove{sum of all}\add{average} coverage, i.e., the efficiency). The results of the synthetic instances are averaged over 5 instances.}
\label{tab:gh85}
\end{table}
\cref{tab:gh95,tab:gh90,tab:gh85} show the actual values of the solutions, as well as the associated coverages (the \remove{sum of all the zones' coverages}\add{average coverage of the zones} over the time horizon, i.e., the efficiency). In \cref{tab:gh95} the coverage constraints are enforced at 95\%, i.e., all the configurations must cover at least 95\% of the zones. In \cref{tab:gh90} those constraints are enforced at 90\%, and in \cref{tab:gh85} at 85\%. Comparing the three tables, we can see a clear trade-off between fairness and efficiency in virtually all instances: enforcing a higher coverage decreases the fairness of the solutions.
Finally, we are also interested in identifying the time spent in the column generation part as opposed to the \textsc{ConstraintProgramming} routine. To this end, we track the number of calls made to the function \textsc{ConstraintProgramming} and also measure the total time spent in calls to the function.
The number of generated columns and the number of calls to \textsc{ConstraintProgramming} are detailed in \cref{tbl:results}.
\setlength{\tabcolsep}{4.8pt}
\begin{table}[h!]
\centering
{\begin{tabular}{crrrrrrrrrr}
\hline
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{50 zones}} & \multicolumn{2}{c}{\textbf{100 zones}} & \multicolumn{2}{c}{\textbf{200 zones}} & \multicolumn{2}{c}{\textbf{400 zones}} & \multicolumn{2}{c}{\textbf{Utrecht}} \\
\hline
$MT$ & \#cols & \#calls & \#cols & \#calls & \#cols & \#calls & \#cols & \#calls & \#cols & \#calls \\
\hline
0.1 & 738 & 736 & 1120 & 1108 & 714 & 608 & 248 & 72& 786 & 210 \\
0.2 & 3 & 1 & 674 & 662 & 710 & 605 & 246 & 72& 803 & 217 \\
0.3 & 3 & 1 & 11 & 3 & 537 & 435 & 238 & 66& 757 & 202 \\
0.4 & 3 & 1 & 9 & 1 & 108 & 58 & 174 & 22& 353 & 54 \\
0.5 & 3 & 1 & 9 & 1 & 19 & 3 & 93 & 6 & 252 & 21 \\
0.6 & 3 & 1 & 9 & 1 & 15 & 1 & 65 & 2 & 96 & 1 \\
0.7 & 3 & 1 & 9 & 1 & 15 & 1 & 60 & 1 & 96 & 1 \\
\hline
\end{tabular}}
\caption{Number of columns and calls to \textsc{ConstraintProgramming} associated with the maximum transition ($MT$) constraints for the various instance sizes.
}\label{tbl:results} \end{table}
The results in \cref{tbl:results} show that when enough freedom in the day-to-day movement of ambulances is allowed, optimality can generally be proven immediately, with just one call to the \textsc{ConstraintProgramming} routine. We also notice that the ratio of columns to the number of calls to the routine increases with the instance size. Such a ratio for the Utrecht instance is disproportionately high, due again to the distribution of the population densities.
Finally, \cref{tbl:cg_cp} shows the average amount of time spent generating columns (``CG"), that spent enforcing the transition constraints through \textsc{ConstraintProgramming} (``CP") and the ratio between these two CPU times (``ratio") for the most difficult case, i.e., $MT=0.1$.
\begin{table}[h!]
\centering
{\begin{tabular}{r|rr|r}
\hline
& \multicolumn{2}{c|}{time} & \\
Instance & CG & CP & ratio \\
\hline
50 zones & 86 & 154 & 0.56 \\
100 zones & 411 & 550 & 0.75 \\
200 zones & 331 & 151 & 2.19 \\
400 zones & 711 & 253 & 2.81 \\
Utrecht & 1071 & 129 & 7.96 \\
\hline
\end{tabular}}
\caption{CPU times in seconds spent in column generation and \textsc{ConstraintProgramming}, with $MT=0.1$.}
\label{tbl:cg_cp}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
We introduced an abstract framework for solving a sequence of fair allocation problems, such that fairness is achieved over time. For some relevant special cases, we have been able to give theoretical proofs for the time horizon required for perfect fairness. We described a general integer programming formulation for this problem, as well as a formulation based on column generation and constraint programming. This latter formulation can be used in a practical context, as shown by the ambulance location problem applied to the city of Utrecht. The largest synthetic instances suggest that this formulation would scale well to regions twice the size of Utrecht, provided that the freedom of movement of the ambulances is not overly restricted.
\section*{Acknowledgements}
The authors would like to thank the National Institute for Public Health and the Environment of the Netherlands (RIVM) for access to the data of their ambulance service. We are grateful to the anonymous referees for useful comments and remarks.
\bibliographystyle{plainnat}
\section{Conclusion}
\label{sec:Conclusion}
We address the problem of learning from data sensed from networked sensors in IoT environments. Such data exists in a correlated multi-way form and is considered non-stationary due to the ongoing variation that often arises from environmental changes over a long period of time. Existing learning models such as OCSVM and traditional matrix analysis methods do not capture these aspects. Thus, accuracy and performance are significantly affected by such a non-stationary nature. We addressed these problems by proposing a new online CP decomposition method named NeSGD and a tensor-based online-OCSVM method which employs the online learning technique. The essence of our proposed approach is that it triggers incremental updates to the online OCSVM model based on data received from the location component matrix, which maintains important information about each sensor's behaviour. We achieved this by incorporating a new criterion, received regularly from each sensor, which is captured by decomposing the tensor using NeSGD.
We applied our approach to real-life datasets collected from networks of sensors attached to bridges to detect damage (anomalies) in their structure that might result from environmental variations over long time periods. The various experimental results showed that our tensor-based online-OCSVM was able to accurately and efficiently differentiate between environmental changes and actual damage behaviours. Specifically, our tensor-based online-OCSVM method significantly outperformed the self-advised and threshold-based OCSVM models, as it scored the lowest false alarm rates and carried out more accurate updates to the learning model.
It would be interesting to investigate other factors (other than temperature) that may influence anomaly detection in structure health monitoring and other areas. One interesting area is to extend and apply our Tensor-based Online OCSVM model to other related IoT fields such as smart homes.
\section{Introduction}
Almost all major cities around the world have developed complex physical infrastructure that encompasses bridges, towers, and iconic buildings. One of the most pressing challenges with such infrastructure is to continuously monitor its health to ensure the highest levels of safety. Most existing structural monitoring and maintenance approaches rely on time-based visual inspections and manual instrumentation methods, which are neither efficient nor effective.
Internet of Things (IoT) has created a new paradigm for connecting things (e.g., computing devices, sensors and objects) and enabling data collection over the Internet. Nowadays, various types of IoT devices are widely used in smart cities to continuously collect data that can be used to manage resources efficiently. For example, many sensors are connected to bridges to collect various types of data about their health status. This data can then be used to monitor the health of the bridges and decide when maintenance should be carried out in case of potential damage \cite{Li2015SHMBridges}. With this advancement, the concept of smart infrastructure maintenance has emerged as a continuous automated process known as Structural Health Monitoring (SHM) \cite{Jinping2010SHMReview}.
SHM provides an economic monitoring approach, as the inspection process is mainly based on a low-cost IoT data collection system. It also improves effectiveness due to the automation and continuity of the monitoring process. SHM enhances our understanding of the behaviour of structures, as it continuously provides large amounts of data that can be utilized to gain insight into the health of a structure and to make timely and economic decisions about its maintenance.
One of the critical challenges in SHM is the non-stationary nature of the data collected from the several networked sensors attached to a bridge \cite{Xin2018Nonstationary}. This collected data is the foundation for training a model to detect potential damage in a bridge. In SHM, the data used in training the model comes from healthy samples (it does not include damaged data) and hence represents one-class data. Also, the data is collected within a fixed period of time and processed simultaneously. This influences the performance of model training as it does not consider data variations over a more extended period of time. Structures always experience severe events over long and continuous periods of time, such as heavy loads and high and low seasonal temperatures. Such variations dramatically affect the status of the data, as they change the characteristics of the structure, such as the data frequencies. Consequently, the training data fed into the model does not represent such variations but only healthy or undamaged data samples. Thus it is critical to develop a method that mitigates these variations and increases the specificity rate.
Another challenge in SHM is that the generated sensing data exists in a correlated multi-way form, which makes standard two-way matrix analysis unable to capture all of these correlations and relationships \cite{Cheema2016twoway}. Instead, the SHM data can be arranged into a three-way data structure where each data point is a triple of a feature value extracted from a particular sensor at a specific time. Here, the information extracted from the raw signals in the time domain represents the features, the sensors represent the data locations, and the time snapshots represent the timestamps when the data was extracted (as shown in Figure \ref{tensor}).
\begin{figure*}
\centering
\includegraphics[scale=0.33]{tensor_png}
\caption{Tensor data with three modes in SHM}
\label{tensor}
\end{figure*}
In order to address the problems mentioned above, an online one-class learning approach should be employed along with an online multi-way data analysis tool to capture the underlying structure inherent in multi-way sensing data. In this setting, the one-class support vector machine (OCSVM) \cite{anaissi2017adaptive} and tensor analysis are well-suited to this kind of problem, where only observations from the multi-way positive (healthy) samples are required.
Tensor is a multi-way extension of the matrix for representing multidimensional data structures such as SHM data. Tensor analysis requires extensions to the traditional two-way data analysis methods such as Non-negative
Matrix Factorization (NMF), Principal Component Analysis (PCA) and Singular Value Decomposition (SVD). In this sense, \textbf{CANDECOMP/PARAFAC} (CP) decomposition \cite{Phan2013Candecomp} has recently become a standard approach for analyzing multi-way tensor data. Several algorithms have been proposed to solve the CP decomposition \cite{symeonidis2008tag} \cite{lebedev2014speeding} \cite{rendle2009learning}. Among these algorithms, alternating least squares (ALS) has been heavily employed; it repeatedly solves for each component matrix by locking all other components until it converges \cite{papalexakis2017tensors}. It is usually applied in offline mode to decompose the positive training tensor data, which is then fed into an OCSVM model to construct the decision boundary. However, this offline process is not suitable for such dynamically changing SHM data. Therefore, we are also interested here in incrementally updating the resultant CP decomposition in an online manner.
Similarly, the OCSVM has become a standard approach in solving anomaly detection problems. OCSVM is usually trained with a set of positive (healthy) training data, which are collected within a fixed time period and are processed together at once. As we mentioned before, this fixed batch model generally performs poorly if the distribution of the training data varies over a time span. One simple approach would be to retrain the OCSVM model from scratch when additional positive data arrive. However, this would lead to ever-increasing training sets, and would eventually become impractical. Moreover, it also seems computationally wasteful to retrain the model for each incoming datum, which will likely have a minimal effect on the previous decision boundary. Another approach for dealing with large non-stationary data would be to develop an online-OCSVM model that incrementally updates the decision boundary, i.e., incorporating additional healthy data when they are available without retraining the model from scratch. The question now is how to distinguish real damage from the environmental changes which require model updates. Current research (such as \cite{wang2013online,davy2006online}) proposes a threshold value to measure the closeness of a new negative datum to the decision boundary for online OCSVM. More specifically, if this new negative datum is not far from the decision boundary, then they consider it as a healthy sample (environmental changes) and update the model accordingly. However, this predefined threshold is very sensitive to the distribution of the training data and it may include or exclude anomalies and healthy samples. Recently, Anaissi \etal \cite{anaissi2017self} propose another approach which measures the similarity between a new event and error support vectors to generate a self-advised decision value rather than using a fixed threshold. 
That was an effective solution, but it is susceptible to including damage samples if the model keeps being updated in the same direction as the real damage samples. The resultant updated model would then start to encounter damage samples in its training data, and the approach would thus begin to miss real damage events.
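For concreteness, the OCSVM decision value that drives all of these update strategies has the usual kernel-expansion form. The following sketch evaluates it with an RBF kernel on invented support vectors and coefficients (the actual decision function used in this paper is the one given by Equation \ref{dv2}):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def decision_value(x, support_vectors, alphas, rho, gamma=0.5):
    """f(x) = sum_i alpha_i k(sv_i, x) - rho: positive means the new
    event lies inside the learned healthy region; negative means it
    violates the boundary, and the update criterion must then decide
    between damage and environmental change. Values are invented."""
    return sum(a * rbf(sv, x, gamma)
               for a, sv in zip(alphas, support_vectors)) - rho

svs = [(0.0, 0.0), (1.0, 0.0)]   # hypothetical support vectors
alphas = [0.6, 0.4]              # hypothetical dual coefficients
rho = 0.3                        # hypothetical offset
print(decision_value((0.1, 0.1), svs, alphas, rho) > 0)   # near training data
print(decision_value((5.0, 5.0), svs, alphas, rho) > 0)   # far outlier
```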
To address the aforementioned problems, we propose a new method that uses the online learning technique to solve the problems of online OCSVM and CP decomposition. We employ the stochastic gradient descent (SGD) algorithm for online CP decomposition, and we introduce a new criterion to trigger the update process of the online-OCSVM. This criterion utilizes the information derived from the location component matrix, which we obtain when we decompose the three-way tensor data $\mathcal{X}$. This matrix stores meaningful information about the behaviour of each sensor location on the bridge. Intuitively, environmental changes such as temperature will affect all the instrumented sensors on the bridge similarly. However, real damage would affect a particular sensor location and the ones close by. The contributions of our proposed method are as follows:
\begin{itemize}
\item \textbf{Online CP decomposition.} We incorporate Nesterov's Accelerated Gradient (NAG) method into the SGD algorithm to solve the CP decomposition, which gives it the capability to update $\mathcal{X}^{(t+1)}$ in a single step. We also follow the perturbation approach, which adds a small amount of noise to the gradient update step to encourage the next update step to move away from a saddle point in the correct direction.
\item \textbf{ Online anomaly detection.} We propose a tensor-based online-OCSVM which is able to distinguish between environmental and damage behaviours to adequately update the model coefficients.
\item \textbf{Empirical analysis on structural datasets.} We conduct experimental analysis using laboratory-based and real-life datasets in the field of SHM. The experimental analysis shows that our method can achieve lower false alarm rates compared to other known existing online and offline methods.
\end{itemize}
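The first contribution's update step can be illustrated on a toy scalar loss. The momentum, step size, and noise level below are illustrative choices for the sketch, not the values used in NeSGD:

```python
import random

def nesterov_perturbed_sgd(grad, w0, eta=0.1, mu=0.9, noise=1e-4,
                           iters=200, seed=0):
    """Minimise a loss via Nesterov's accelerated gradient with a small
    random perturbation added to each gradient evaluation (to help
    escape saddle points). Hyper-parameters are illustrative."""
    rng = random.Random(seed)
    w, v = w0, 0.0
    for _ in range(iters):
        lookahead = w + mu * v                      # Nesterov look-ahead
        g = grad(lookahead) + rng.gauss(0, noise)   # perturbed gradient
        v = mu * v - eta * g
        w = w + v
    return w

# Toy loss L(w) = (w - 3)^2 with gradient 2(w - 3); the minimiser is w = 3.
w_star = nesterov_perturbed_sgd(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_star, 2))
```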
This paper is organized as follows. Section~\ref{sec:RelatedWork} presents preliminary work and discusses research related to this work. In Section~\ref{sec:Method} we introduce the details of our method, online OCSVM-OCPD. Section~\ref{sec:Experiement} presents the experimental evaluation of the proposed method and discusses the results. Conclusions and future work are discussed in Section~\ref{sec:Conclusion}.
\section{Online Tensor-Based Learning for Multi-Way Data}
\label{sec:Method}
The incremental learning of online-OCSVM has been well studied and proved to produce the same solution as the batch (offline) learning process. In fact, the main difficulty of online-OCSVM is not the incremental learning process itself, but the criterion needed to trigger this update process successfully. Given an OCSVM model constructed from the healthy training data, the decision value calculated using Equation \ref{dv2} decides whether a new event is healthy or not. When this decision value is positive, i.e., healthy, the KKT conditions remain satisfied when this new datum is added to the training data, and thus no model update is required. On the other hand, when this decision value is negative, we need to know whether this event is related to damage or is only due to environmental changes such as temperature. If this event is real damage, then we report it without any model update. However, if it is due to environmental changes, we need to add this datum to the training data and update the model coefficients accordingly, since this negative-decision datum violates the KKT conditions. The challenge now is how to separate environmental changes from real damage.
In this paper, we propose a new criterion to trigger the update process of the online-OCSVM, based on information derived from the location component matrix obtained when we decompose the three-way tensor data $\mathcal{X}$. This matrix stores meaningful information about the behavior of each sensor location on the bridge. Intuitively, environmental changes such as temperature affect all the instrumented sensors on the bridge similarly, whereas real damage affects a particular sensor location and the ones close by. To implement this approach, we need an efficient solution for online CP decomposition, which is discussed in the following section.
\subsection{Nesterov SGD (NeSGD) for Online-CP Decomposition}
We employ the stochastic gradient descent (SGD) algorithm to perform CP decomposition in an online manner. SGD can deal with big data and online learning models. The key optimization problem in SGD is defined as:
\begin{eqnarray}\label{eq:sgd}
w = \underset{w}{argmin} \; \mathbb{E}[L(x,w)]
\end{eqnarray}
where $L$ is the loss function to be optimized, $x$ is a data point and $w$ is a variable.
\noindent The SGD method solves the problem defined in Equation \ref{eq:sgd} by repeatedly updating $w$ to minimize $L(x,w)$. It starts with some initial value $w^{(t)}$ and then repeatedly performs the update:
\begin{eqnarray}\label{eq:sgdu}
w^{(t+1)} := w^{(t)} + \eta \frac{\partial L}{\partial w } (x^{(t)} ,w^{(t)} )
\end{eqnarray}
where $\eta$ is the learning rate and $\frac{\partial L}{\partial w }$ is the partial derivative of the loss function with respect to the parameter we need to minimize, i.e. $w$.
\noindent In the setting of tensor decomposition, we need to calculate the partial derivative of the loss function $L$ defined in Equation \ref{eq:als} with respect to the three modes $A, B$ and $C$ alternately, as follows:
\begin{eqnarray}\label{eq:partial}
\frac{\partial L}{\partial A }(X^{(1)}; A) = (X^{(1)} - A (C \odot B)^T) (C \odot B) \nonumber\\
\frac{\partial L}{\partial B }(X^{(2)}; B) = (X^{(2)} - B (C \odot A)^T) (C \odot A)\\
\frac{\partial L}{\partial C }(X^{(3)}; C) = (X^{(3)} - C (B \odot A)^T) (B \odot A)\nonumber
\end{eqnarray}
where $X^{(i)}$ is the mode-$i$ unfolding of tensor $\mathcal{X}$ and $\odot$ denotes the Khatri-Rao product. The gradient update steps for $A, B$ and $C$ are as follows:
\begin{eqnarray}\label{eq:update}
A^{(t+1)} := A^{(t)} + \eta^{(t)} \frac{\partial L}{\partial A } (X^{(1, t)} ;A^{(t)} ) \nonumber\\
B^{(t+1)} := B^{(t)} + \eta^{(t)} \frac{\partial L}{\partial B } (X^{(2, t)} ;B^{(t)} ) \\
C^{(t+1)} := C^{(t)} + \eta^{(t)} \frac{\partial L}{\partial C } (X^{(3, t)} ;C^{(t)} ) \nonumber
\end{eqnarray}
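As a concrete illustration of the update in Equation \ref{eq:update} for the factor $A$, a minimal Python/NumPy sketch is given below (our implementation is in R; the function names here are our own, and the sketch follows the additive sign convention used above, where the "gradient" is the residual correlation):

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product: (I*J) x R from I x R and J x R."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def grad_step_A(X1, A, B, C, eta):
    """One gradient step on factor A from the mode-1 unfolding X1 = A (C kr B)^T.

    Follows the additive sign convention of Equation (update): the 'partial
    derivative' is the residual correlation (X1 - A (C kr B)^T)(C kr B).
    """
    KR = khatri_rao(C, B)           # (K*J) x R
    grad = (X1 - A @ KR.T) @ KR     # I x R, zero when A fits X1 exactly
    return A + eta * grad
```

The analogous steps for $B$ and $C$ use the mode-2 and mode-3 unfoldings with the corresponding Khatri-Rao products.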
The rationale of the SGD algorithm depends only on the gradient information $\frac{\partial L}{\partial w }$. In such a non-convex setting, the optimization may encounter points with $\frac{\partial L}{\partial w } = 0$ even though they are not global minima. These points are known as saddle points, and they may prevent the optimization process from reaching the desired local minimum if not escaped \cite{ge2015escaping}. Saddle points can be identified by studying the second-order derivative (aka Hessian) $\frac{\partial^2 L}{\partial w^2 }$. Theoretically, if $\frac{\partial^2 L}{\partial w^2 }(x;w)\succ 0$, then $w$ is a local minimum; if $\frac{\partial^2 L}{\partial w^2 }(x;w) \prec 0$, then $w$ is a local maximum; if $\frac{\partial^2 L}{\partial w^2 }(x;w)$ has both positive and negative eigenvalues, the point is a saddle point. Second-order methods guarantee convergence, but the cost of computing the Hessian matrix $H^{(t)}$ is high, which makes them infeasible for high-dimensional data and online learning. Ge \etal \cite{ge2015escaping} show that saddle points are very unstable and can be escaped if we slightly perturb them with some noise. Based on this, we use the perturbation approach, which adds Gaussian noise to the gradient. This forces the next update step to move away from the saddle point toward the correct direction. After a random perturbation, it is highly unlikely that the point remains in the same band, and hence the saddle point can be efficiently escaped \cite{jin2017escape}. Since we are interested in a fast optimization process due to the online setting, we further incorporate Nesterov's method into the perturbed SGD algorithm to accelerate the convergence rate. Recently, Nesterov's Accelerated Gradient (NAG) \cite{nesterov2013introductory} has received much attention for solving convex optimization problems \cite{guan2012nenmf,nitanda2014stochastic,ghadimi2016accelerated}.
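Before turning to Nesterov's method, the saddle-point perturbation can be sketched as follows (an illustrative Python fragment using the conventional descent sign $w - \eta \frac{\partial L}{\partial w}$; the function name, noise level and tolerance are our own choices):

```python
import numpy as np

def perturbed_step(w, grad, eta, rng, noise_std=1e-2, tol=1e-6):
    """One perturbed-gradient step: when the gradient (nearly) vanishes,
    Gaussian noise is added so a saddle point can be escaped."""
    if np.linalg.norm(grad) < tol:
        grad = grad + rng.normal(0.0, noise_std, size=np.shape(grad))
    return w - eta * grad
```

For example, for $L(w) = w_1^2 - w_2^2$ the origin is a saddle with zero gradient; a plain gradient step stays put, while the perturbed step moves away almost surely.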
NAG introduces a smart variation of momentum that works slightly better than standard momentum. The technique modifies traditional SGD by introducing a velocity $\nu$ and friction $\gamma$, which control the velocity and prevent overshooting the valley while allowing faster descent. The idea behind Nesterov's method is to calculate the gradient at the position that the momentum is about to reach, instead of at the current position. In practice, it performs a simple gradient descent step to go from $w^{(t)}$ to $w^{(t+1)}$, and then shifts slightly further than $w^{(t+1)}$ in the direction given by $\nu^{(t-1)}$. In this setting, we model the gradient update step with NAG as follows:
\begin{eqnarray}\label{eq:nagNe}
A^{(t+1)} := A^{(t)} + \eta^{(t)} \nu^{(A, t)} + \epsilon - \beta ||A||_{L_1}
\end{eqnarray}
where
\begin{eqnarray}\label{eq:velNe}
\nu^{(A, t)} := \gamma \nu^{(A, t-1)} + (1-\gamma) \frac{\partial L}{\partial A } (X^{(1, t)} ,A^{(t)} )
\end{eqnarray}
where $\epsilon$ is Gaussian noise, $\eta^{(t)}$ is the step size, and $\beta$ is a regularization parameter that penalizes the $L_1$ norm $||A||_{L_1}$ to achieve smooth representations of the outcome, thus bypassing the perturbation surrounding the local-minimum problem. The updates for $(B^{(t+1)}, \nu^{(B, t)})$ and $(C^{(t+1)}, \nu^{(C, t)})$ are analogous.
With NAG, our method achieves a global convergence rate of $O(\frac{1}{T^2})$, compared to $O(\frac{1}{T})$ for traditional gradient descent. Based on the above models, we present our NeSGD method in Algorithm \ref{NeCPD}.
\begin{algorithm}
\caption{ NeSGD algorithm}
\label{NeCPD}
\textbf{Input}: Tensor $\mathcal{X} \in \Re^{I \times J \times K} $, number of components $R$\\
\textbf{Output}: Matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$
\begin{enumerate}
\item[1:] Initialize $A,B,C$
\item[2:] Repeat
{\setlength\itemindent{6pt}
\item[3:] Compute the partial derivatives of $A, B$ and $C$ using Equation \ref{eq:partial}
\item[4:] Compute $\nu$ for $A, B$ and $C$ using Equation \ref{eq:velNe}
\item[5:] Update the matrices $A, B$ and $C$ using Equation \ref{eq:nagNe}
}
\item[6:] until convergence
\end{enumerate}
\end{algorithm}
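A minimal Python sketch of one NeSGD update for a single factor matrix follows (our implementation is in R; the scalar $L_1$ penalty in Equation \ref{eq:nagNe} is applied here through its subgradient $\beta\,\mathrm{sign}(A)$, which is an interpretive choice, and the default values are illustrative):

```python
import numpy as np

def nesgd_update(A, velocity, grad, eta, gamma=0.9, noise_std=1e-4,
                 beta=1e-3, rng=None):
    """One NeSGD update for a factor matrix.

    velocity : running momentum term v^(A, t-1)
    grad     : partial derivative of the loss w.r.t. A (residual form)
    Combines NAG momentum, Gaussian saddle-point noise, and an L1 penalty
    (applied via the subgradient sign(A))."""
    rng = rng or np.random.default_rng()
    v = gamma * velocity + (1.0 - gamma) * grad       # Equation (velNe)
    noise = rng.normal(0.0, noise_std, size=A.shape)  # perturbation epsilon
    A_new = A + eta * v + noise - beta * np.sign(A)   # Equation (nagNe)
    return A_new, v
```

The same routine is called in turn for $A$, $B$ and $C$ inside the loop of Algorithm \ref{NeCPD}.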
\subsection{Tensor-based Advised Decision Values}
In this paper, we introduce a new criterion to trigger the update process when the online OCSVM model generates a negative decision value for a new sample $c^{t+1}$. Based on the information derived from the location component matrix $B^{(t+1)}$, which we obtain when we decompose the three-way tensor data $\mathcal{X}^{t+1}$, we generate an advised decision score for a new negative datum from the average distance between a sensing location (a row in \textbf{\textit{B}}) and its $k$ nearest neighbouring (knn) locations. A large change in this score for a single sensing location indicates a change in sensor behaviour, which might be due to damage or environmental changes. If the scores of all the knn locations change together, this indicates that the negative decision value is due to environmental changes; we then add this new datum to the training data and update the model coefficients accordingly. Otherwise, we report $c_i^{t+1}$ as an anomalous data point.
The algorithm proceeds as follows: given the location matrix $B^{t+1}$, the unit vector between each point $b_j$ $(j = 1, \ldots, n)$ and each of its $k$ closest points $b_k$ is computed as follows:
\begin{eqnarray}\label{norm}
v_{j}^{k} = \dfrac{b_j-b_k}{ \lVert b_j-b_k \rVert}
\end{eqnarray}
Then we estimate the change in sensor behaviour based on the absolute difference between $B^{t+1}$ and $B^{t}$ using the following equation:
\begin{eqnarray}\label{rate}
\mathcal{P}_i(C^{t+1}) = \frac{1}{n}\sum_{j=1}^{n}\lvert b_j^{t+1}-b_j^{t} \rvert > \gamma
\end{eqnarray}
where $\mathcal{P}_i$ represents the probability that the negative decision value of $c_i^{t+1}$ is related to environmental changes, and $\gamma$ is a small value representing the acceptable change. In this work, we set a 90\% confidence level for $\mathcal{P}_i(C^{t+1})$ to judge whether a new datum $c_i^{t+1}$ belongs to the healthy class or not. In fact, this confidence level is based on the physical layout of the sensor instrumentation. Algorithm \ref{TDV} illustrates the process of generating the tensor-based advised decision values:
\begin{algorithm}[!h]
\caption{ Tensor-based advised decision values method.}
\label{TDV}
\textbf{Input}: A set of $n$ sensors $B^{(t+1)}=\{b_j\}_{j=1}^n$\\
For each sample ${b_j}$ in $B^{(t+1)}$
\begin{enumerate}
\item[(a)] Find the $k$ closest points $b_k$ to $b_j$.
\item[(b)] Calculate the unit vectors $v_j^k$ of $b_{j}$ according to (\ref{norm}).
\item[(c)] Calculate $\mathcal{P}_i$ according to (\ref{rate}).
\item[(d)] Adjust the decision value as follows:
\begin{eqnarray*}
g(c^{t+1}) = \Bigg\{\begin{tabular}{lp{2em}l}
$\vert g(c^{t+1}) \vert$ & & $ \mathcal{P}_i(c^{t+1}) \geq 0.9$\\
$ g(c^{t+1}) $ & & $\mathcal{P}_i(c^{t+1}) < 0.9$
\end{tabular}
\end{eqnarray*}
\item[(e)] Update the model coefficients accordingly
\end{enumerate}
\end{algorithm}
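The decision-adjustment logic of Algorithm \ref{TDV} can be sketched as follows (a simplified Python illustration that replaces the knn step with the per-location mean absolute change between $B^{t}$ and $B^{t+1}$; the function name and threshold defaults are our own):

```python
import numpy as np

def advised_decision(g_value, B_prev, B_new, gamma=0.05, level=0.9):
    """Tensor-based advised decision value (simplified sketch).

    If most sensor locations change together (fraction >= level), the
    negative score is attributed to environmental change: the value is
    flipped to positive and a model update is signalled. Otherwise the
    original (negative) value is kept and reported as possible damage."""
    row_change = np.abs(B_new - B_prev).mean(axis=1)  # change per location row
    p = np.mean(row_change > gamma)                   # fraction of affected rows
    if g_value < 0 and p >= level:
        return abs(g_value), True    # environmental: trigger model update
    return g_value, False
```

A global shift of all rows of $B$ thus yields an update, while a shift of a single row (local damage) leaves the negative score untouched.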
\subsection{Model Coefficients Update}
The model solution of OCSVM is composed of two coefficients, denoted $\alpha_i$ and $\rho$. When a new datum $c_i^{t+1}$ arrives with a positive advised decision value, the model coefficients must be updated to obtain the new solution $\alpha_{i+c}$ and $\rho^*$, which must preserve the KKT conditions (see Equation \ref{kkt}). We initially assign a zero value to $\alpha_c$ and then look for the largest possible increment of $\alpha_c$ until $g(x_c)$ becomes zero, in which case $x_c$ is added to the support vector set $S$. If $\alpha_c$ reaches $1$, $x_c$ is added to the set of error vectors $E$.
The difference between the two states of OCSVM is shown in the following
equation:
\begin{eqnarray}\label{pd}
\Delta g_i = \phi(x_i,x_c) \Delta \alpha_c + \sum_{j \in S} \phi(x_i, x_j) \Delta \alpha_j + \Delta \rho.
\end{eqnarray}
Since $g_i = 0$ $ \forall i \in S$, we can write Equation \ref{pd} in a matrix form as follows:
\begin{eqnarray}\label{pdm}
\underbrace{ \Bigg[
\begin{tabular}{cc}
0 & $1^T$ \\
1 & $\phi_{s,s}$
\end{tabular}
\Bigg]}_Q
\underbrace{ \Bigg [
\begin{tabular}{c}
$\Delta \rho$\\
$\Delta \alpha_s$
\end{tabular}
\Bigg ] }_{\Delta \tilde{\alpha_s}}
=-
\underbrace{ \Bigg [
\begin{tabular}{c}
1\\
$\phi_{s,c}$
\end{tabular}
\Bigg ] }_{\eta_c} \Delta \alpha_c
\end{eqnarray}
\vspace{-6mm}
\begin{eqnarray}\label{le}
\Delta \alpha_s = \underbrace{- Q^{-1} \eta_c}_\beta \Delta \alpha_c
\end{eqnarray}
By substituting $\Delta \alpha_s$ in the partial derivative Equation \ref{pd}, we obtain:
\vspace{-2mm}
\begin{eqnarray}\label{pdb}
\Delta g_i = \phi(x_i,x_c) \Delta \alpha_c + \sum_{j \in S} \phi(x_i, x_j) \beta_j\Delta \alpha_c + \beta_0 \Delta \alpha_c.
\end{eqnarray}
\vspace{-5mm}
\begin{eqnarray*}
\Delta g_i = \gamma_i \Delta \alpha_c
\end{eqnarray*}
\vspace{-3mm}
where
\vspace{-3mm}
\begin{eqnarray*}\label{gamma}
\gamma_i = \phi(x_i,x_c) + \sum_{j \in S} \phi(x_i, x_j) \beta_j + \beta_0.
\end{eqnarray*}
The goal now is to determine the index of the sample $i$ that leads to the minimum
$\Delta \alpha_c$. As in \cite{cauwenberghs2000incremental}, five cases must be
considered to manage the migration of samples between the three sets $S$, $E$ and
$R$, and to calculate $\Delta \alpha_c$.
\begin{enumerate}
\item $\Delta \alpha_c^{1} = \min_i \frac{1-\alpha_i}{\beta_i}$, $\forall i \in S$ and $\beta_i > 0 $. \\
$ \hspace*{1cm} \Delta \alpha_c^{1}$ leads to the minimum $ \Delta \alpha_c $
$\rightarrow$ Move $i$ from $S$ to $E$.
\item $\Delta \alpha_c^{2} = \min_i \frac{-\alpha_i}{\beta_i}$, $\forall i \in S$ and $\beta_i < 0 $\\
$\hspace*{1cm} \Delta \alpha_c^{2}$ leads to the minimum $ \Delta \alpha_c $
$\rightarrow$ Move $i$ from $S$ to $R$.
\item $\Delta \alpha_c^3$ = $\min_i \frac{-g_i}{\gamma_i}$,
$ \forall i \in E$ and $\gamma_i > 0$ or $\forall i \in R$ and $\gamma_i < 0$.\\
$\hspace*{1cm} \Delta \alpha_c^{3}$ leads to the minimum $ \Delta \alpha_c $
$\rightarrow$ Move $i$ from $E$ or $R$ to $S$.
\item $\Delta \alpha_c^4 = \frac{-g_c}{\gamma_c}$, $i$ is the index of $x_c$. \\
$\hspace*{1cm}\Delta \alpha_c^{4}$ leads to the minimum $ \Delta \alpha_c $
$\rightarrow$ Move $x_c$ to $S$, terminate.
\item $\Delta \alpha_c^5 = 1-\alpha_c$, $i$ is the index of $x_c$.\\
$\hspace*{1cm}\Delta \alpha_c^{5}$ leads to the minimum $ \Delta \alpha_c $
$\rightarrow$ Move $x_c$ to $E$, terminate.
\end{enumerate}
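Selecting the smallest feasible increment among the five cases can be sketched as follows (illustrative Python; the bookkeeping of which index produced the minimum, and the resulting set migration, is omitted for brevity):

```python
def min_increment(alpha_s, beta, g_e, gamma_e, g_r, gamma_r,
                  g_c, gamma_c, alpha_c):
    """Smallest positive candidate increment of alpha_c among the five cases.

    alpha_s, beta     : alphas and sensitivities of the support vectors S
    g_e, gamma_e      : margins and sensitivities of the error vectors E
    g_r, gamma_r      : margins and sensitivities of the reserve vectors R
    g_c, gamma_c      : margin and sensitivity of the new example x_c
    """
    candidates = []
    # Cases 1-2: support vectors hitting the upper (1) or lower (0) bound
    for a, b in zip(alpha_s, beta):
        if b > 0:
            candidates.append((1.0 - a) / b)
        elif b < 0:
            candidates.append(-a / b)
    # Case 3: error/reserve vectors whose g_i reaches zero
    for g, gm in list(zip(g_e, gamma_e)) + list(zip(g_r, gamma_r)):
        if (g < 0 < gm) or (g > 0 > gm):
            candidates.append(-g / gm)
    # Case 4: the new point's own g_c reaching zero
    if gamma_c > 0:
        candidates.append(-g_c / gamma_c)
    # Case 5: alpha_c reaching its upper bound
    candidates.append(1.0 - alpha_c)
    return min(c for c in candidates if c > 0)
```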
The next step after the migration is to update the inverse matrix $Q^{-1}$. Two
cases must be considered during the update: extending $Q^{-1}$ when the determined
index $i$ joins $S$, or compressing it when index $i$ leaves $S$. Similar to
\cite{laskov2006incremental}, we apply the Sherman-Morrison-Woodbury formula to
obtain the new matrix $\tilde{Q}$. We repeat this procedure until the index $i$
is related to the new example $x_c$.
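The bordered-inverse expansion of $Q^{-1}$ when a vector joins $S$ can be sketched as follows (a Python illustration of the block-inverse formula for a symmetric $Q$; variable names are our own):

```python
import numpy as np

def expand_inverse(Q_inv, eta_new, q_new):
    """Extend Q^{-1} by one row/column without refactoring Q.

    Q grows to [[Q, eta_new], [eta_new^T, q_new]]; the block-inverse
    (Sherman-Morrison-Woodbury style) update costs O(n^2).
    eta_new : new off-diagonal column of Q
    q_new   : new diagonal entry of Q
    """
    beta = -Q_inv @ eta_new
    kappa = q_new + eta_new @ beta          # Schur complement of Q
    n = Q_inv.shape[0]
    out = np.zeros((n + 1, n + 1))
    out[:n, :n] = Q_inv + np.outer(beta, beta) / kappa
    out[:n, n] = beta / kappa
    out[n, :n] = beta / kappa
    out[n, n] = 1.0 / kappa
    return out
```

The compression step when an index leaves $S$ is the inverse operation and follows the same block identities.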
\section{Preliminaries and Related Work}
\label{sec:RelatedWork}
Our research builds upon and extends essential methods and algorithms, including tensor (CP) decomposition, stochastic gradient descent, and online one-class support vector machines. We first discuss the key elements of these methods and algorithms, and then follow with an analysis of related studies and their contribution to addressing the challenges discussed in the introduction. We conclude with a discussion of the weaknesses of existing work and how our proposed work attempts to address them.
\subsection{CP Decomposition}
Given a three-way tensor $\mathcal{X} \in \Re^{I \times J \times K} $, CP decomposes $\mathcal{X}$ into three matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$, where $R$ is the number of latent factors. Entrywise, it can be written as follows:
\begin{eqnarray}\label{eq:decomp}
\mathcal{X}_{(ijk)} \approx \sum_{r=1}^{R}A_{ir} B_{jr} C_{kr}
\end{eqnarray}
or equivalently $\mathcal{X} \approx \sum_{r=1}^{R} A_{:r} \circ B_{:r} \circ C_{:r}$, where "$\circ$" is the vector outer product and $A_{:r}, B_{:r}$ and $C_{:r}$ are the $r$-th columns of the component
matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$.
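For concreteness, the CP reconstruction from the three factor matrices can be written in one line of NumPy (an illustrative sketch; our implementation is in R):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Reconstruct a three-way tensor from CP factors:
    X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```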
The main goal of CP decomposition is to minimize the sum of squared errors between the model and a given tensor $\mathcal{X}$. Equation \ref{eq:als} shows the loss function $L$ to be optimized:
\begin{eqnarray}\label{eq:als}
L (\mathcal{X}, A, B, C) = \min_{A,B,C} \| \mathcal{X} - \sum_{r=1}^R \ A_{:r} \circ B_{:r} \circ C_{:r} \|^2_f,
\end{eqnarray}
where $\|\cdot\|^2_f$ is the squared Frobenius norm. The loss function $L$ presented in Equation \ref{eq:als} is a non-convex problem with many local minima, since it jointly optimizes over three matrices. CP decomposition often uses the Alternating Least Squares (ALS) method to find a solution for a given tensor. The ALS method follows the offline training process: it iteratively solves each component matrix by fixing all the other components, then repeats the procedure until it converges \cite{khoa2017smart}. The rationale of the least squares algorithm is to set the partial derivative of the loss function to zero with respect to the parameter being minimized. Algorithm \ref{ALS} presents the detailed steps of ALS.
\begin{algorithm}
\caption{ Alternating Least Squares for CP}
\label{ALS}
\textbf{Input}: Tensor $\mathcal{X} \in \Re^{I \times J \times K} $, number of components $R$\\
\textbf{Output}: Matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$
\begin{enumerate}
\item[1:] Initialize $A,B,C$
\item[2:] Repeat
{\setlength\itemindent{6pt}
\item[3:] $A = \underset{A}{\arg\min} \frac{1}{2} \| X_{(1)} - A ( C \odot B)^T\|^2 $
\item[4:] $B = \underset{B}{\arg\min} \frac{1}{2} \| X_{(2)} - B ( C \odot A)^T\|^2 $
\item[5:] $C = \underset{C}{\arg\min} \frac{1}{2} \| X_{(3)} - C ( B \odot A)^T\|^2 $
\item[]($X_{(i)} $ is the unfolded matrix of $X$ in a current mode)
}
\item[6:] until convergence
\end{enumerate}
\end{algorithm}
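One ALS sweep (steps 3-5 of Algorithm \ref{ALS}) can be sketched in Python/NumPy as follows (an illustration under the unfolding convention $X_{(1)} = A (C \odot B)^T$; helper names are our own, and our implementation is in R):

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def als_sweep(X, A, B, C):
    """One ALS sweep: solve each factor by least squares, others fixed."""
    I, J, K = X.shape
    X1 = X.transpose(0, 2, 1).reshape(I, K * J)   # mode-1 unfolding
    X2 = X.transpose(1, 2, 0).reshape(J, K * I)   # mode-2 unfolding
    X3 = X.transpose(2, 1, 0).reshape(K, J * I)   # mode-3 unfolding
    A = np.linalg.lstsq(khatri_rao(C, B), X1.T, rcond=None)[0].T
    B = np.linalg.lstsq(khatri_rao(C, A), X2.T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(B, A), X3.T, rcond=None)[0].T
    return A, B, C
```

Because each least-squares subproblem exactly minimizes the loss over one factor, the loss in Equation \ref{eq:als} is non-increasing across sweeps.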
In an online setting, a naive approach would be to recompute the CP decomposition from scratch for each new incoming $X^{(t+1)}$. This is impractical and computationally expensive, since each new incoming datum has only a minimal effect on the current tensor. Zhou et al. \cite{zhou2016accelerating} proposed a method called onlineCP to address the problem of online CP decomposition using the ALS algorithm. The method is able to incrementally update the temporal mode in multi-way data but fails for non-temporal modes \cite{khoa2017smart}. In recent years, several studies have been proposed to solve the CP decomposition using the stochastic gradient descent (SGD) algorithm, which is discussed in the following section.
\subsection{Stochastic Gradient Descent}
Stochastic gradient descent is a key tool for optimization problems. Assume that our aim is to optimize a loss function $L(x,w)$, where $x$ is a data point drawn from a distribution $\mathcal{D}$ and $w$ is a variable. The stochastic optimization problem can be defined as follows:
\begin{eqnarray}\label{eq:sgd}
w = \underset{w}{argmin} \; \mathbb{E}[L(x,w)]
\end{eqnarray}
The stochastic gradient descent method solves the problem defined in Equation \ref{eq:sgd} by repeatedly updating $w$ to minimize $L(x,w)$. It starts with some initial value $w^{(t)}$ and then repeatedly performs the update:
\begin{eqnarray}\label{eq:sgdu}
w^{(t+1)} := w^{(t)} + \eta \frac{\partial L}{\partial w } (x^{(t)} ,w^{(t)} )
\end{eqnarray}
where $\eta$ is the learning rate and $x^{(t)}$ is a random sample drawn from the given distribution $\mathcal{D}$.
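A minimal illustration of this update on the convex loss $L(x,w) = (w-x)^2/2$, whose minimizer is $\mathbb{E}[x]$ (this sketch uses the conventional descent sign $w - \eta \frac{\partial L}{\partial w}$):

```python
import numpy as np

def sgd(samples, eta=0.05, w0=0.0):
    """Plain SGD for L(x, w) = (w - x)^2 / 2: each step uses the gradient
    at one sample, so w drifts toward the sample mean E[x]."""
    w = w0
    for x in samples:
        grad = w - x          # dL/dw evaluated at the current sample
        w = w - eta * grad
    return w
```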
This method guarantees convergence of the loss function $L$ to the global minimum when $L$ is convex. However, it can be susceptible to many local minima and saddle points when the loss function is non-convex, in which case the problem becomes NP-hard. The main bottleneck here is the existence of many saddle points, not the local minima \cite{ge2015escaping}. This is because the gradient algorithm depends only on the gradient information, which may satisfy $\frac{\partial L}{\partial w } = 0$ even at points that are not minima.
Recently, SGD has attracted several researchers working on tensor decomposition. For instance, Ge \etal \cite{ge2015escaping} proposed a perturbed SGD (PSGD) algorithm for orthogonal tensor optimization. They presented several theoretical analyses that ensure convergence; however, the method does not apply to non-orthogonal tensors, and it does not address the problem of slow convergence. Similarly, Maehara \etal \cite{maehara2016expected} proposed a new algorithm for CP decomposition based on a combination of the SGD and ALS methods (SALS). The authors claimed the algorithm works well in terms of accuracy, yet its theoretical properties have not been completely proven and the saddle point problem was not addressed. Rendle and Thieme \cite{rendle2010pairwise} proposed a pairwise interaction tensor factorization method based on Bayesian personalized ranking. The algorithm was designed to work only on three-way tensor data. To the best of our knowledge, this is the first work that applies the SGD algorithm augmented with Nesterov's optimal gradient and perturbation methods for optimal CP decomposition of multi-way tensor data.
\subsection{Online One-Class Support Vector Machine}
\label{sec:online-OCSVM}
Given a set of training data $X=\{{x_i}\}_{i=1}^n$, with $n$ being the number of samples, OCSVM maps these samples into a high dimensional feature space using a function $\phi$ through the kernel $K(x_i,x_j) = \phi(x_i)^T \phi(x_j)$. Then OCSVM learns a decision boundary that maximally separates the training samples from the origin \cite{scholkopf2001estimating}. The primary objective of OCSVM is to optimize the following equation:
\begin{eqnarray}\label{ocsvm}
\max_{w,\xi,\rho} -\frac{1}{2} \lVert w \rVert^{2} - \frac{1}{\nu n} \sum_{i=1}^n \xi_i + \rho
\end{eqnarray}
\begin{eqnarray*}
s.t \hspace{2em} w. \phi(x_i) \geq \rho - \xi_i,\hspace{1em} \xi_i \geq 0, \hspace{1em} i = 1, \ldots, n.
\end{eqnarray*}
where $\nu$ $(0<\nu<1)$ is a user-defined parameter that controls the rate of anomalies in the training data, $\xi_i$ are the slack variables, $\phi(x_i)$ is the kernel mapping and $ w. \phi(x_i) - \rho$ is the separating hyperplane in the feature space. The problem turns into a dual objective by introducing Lagrange multipliers $\alpha = \{\alpha_1,\cdots,\alpha_n\}$. This dual optimization problem is solved using the following quadratic programming formula \cite{scholkopf2002learning}:
\begin{eqnarray}\label{quad}
W=\min_{W(\alpha,\rho)} \frac{1}{2}\sum_{i}^n \sum_{j}^n\alpha_i\alpha_j \phi(x_i, x_j) + \rho(1 - \sum_{i}^n\alpha_i)
\end{eqnarray}
\begin{eqnarray*}
s.t \hspace{2em} 0 \leq \alpha_i \leq 1,\hspace{1em} \sum_{i=1}^n \alpha_i =\frac{1}{\nu n}.
\end{eqnarray*}
where $\phi(x_i, x_j)$ is the kernel matrix, $\alpha$ are the Lagrange multipliers and $\rho$ is known as the bias term.
The partial derivative of the quadratic optimization problem (defined in Equation \ref{quad}) with respect to $\alpha_i$, $\forall i \in S$, is then used as a decision function to calculate the score of a new incoming sample:
\begin{eqnarray}\label{dv2}
g(x_i) = \frac{\partial W}{\partial \alpha_i} = \sum_{j}\alpha_j \phi(x_i, x_{j})-\rho.
\end{eqnarray}
The OCSVM uses Equation \ref{df} to identify whether a new incoming point belongs to the positive class when returning a positive value, and vice versa if it generates a negative value.
\begin{eqnarray}\label{df}
f(x_i) = sgn(g(x_i))
\end{eqnarray}
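Equations \ref{dv2} and \ref{df} can be illustrated with a small Python sketch (hypothetical support vectors and coefficients; a Gaussian kernel is assumed for $\phi(x_i, x_j)$):

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def decision_value(x_new, support_vectors, alphas, rho, sigma=1.0):
    """Equation (dv2): g(x) = sum_j alpha_j k(x, x_j) - rho.
    sign(g) classifies x as healthy (positive) or anomalous (negative)."""
    return sum(a * rbf(x_new, xj, sigma)
               for a, xj in zip(alphas, support_vectors)) - rho
```

A point near the support vectors scores positive; a point far from them scores close to $-\rho$, i.e. negative.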
The achieved OCSVM solution must always satisfy the constraints from the Karush-Khun-Tucker (KKT) conditions, which are described in Equation~\ref{kkt}.
\begin{eqnarray}\label{kkt}
g(x) = \Bigg\{\begin{tabular}{ccc}
$\geq$ 0 & & $\alpha_i=0$ \\
= 0 & & $ 0 < \alpha_i < 1$\\%\frac{1}{\nu n}$\\
$<$ 0 & &$ \alpha_i = 1
\end{tabular}
\end{eqnarray}
where $\alpha_i = 0$ refers to the non-support or reserve training vectors, denoted \textit{R}; $\alpha_i=1$ represents the non-margin support or error vectors, denoted \textit{E}; and $ 0 < \alpha_i < 1$ represents the support vectors, denoted \textit{S}.
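This partition of the training indices by their $\alpha$ values can be sketched as follows (illustrative Python; the numerical tolerance is our own choice):

```python
def partition_vectors(alphas, tol=1e-8):
    """Split training indices into reserve (R), error (E) and support (S)
    sets from their alpha values, per the KKT conditions."""
    R_set = [i for i, a in enumerate(alphas) if a <= tol]
    E_set = [i for i, a in enumerate(alphas) if a >= 1.0 - tol]
    S_set = [i for i, a in enumerate(alphas) if tol < a < 1.0 - tol]
    return R_set, E_set, S_set
```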
In an online setting, we need to ensure the KKT conditions are maintained while learning and adding a new data point to the previous solution. Several researchers address the problem of online SVM \cite{cauwenberghs2000incremental,diehl2003svm,laskov2006incremental} where all are based on the original method known as bookkeeping which proposed by Cauwenberghs \etal. The method computes the new coefficients of the SVM model while preserving the KKT conditions. This method made useful contributions to the incremental learning of two classes SVM (TCSVM), but it cannot be used for the one-class problem. Davy \textit{et al.} \cite{davy2006online} concluded that such incremental SVM methods cannot be directly applied to the one-class problem as they are dependent on the TCSVM margin areas which do not exist in OCSVM. Therefore, Davy \etal proposed an online OCSVM-based threshold value. In this method, the anomaly score is computed for each incoming datum and evaluated against a predefined threshold value. The new datum is added to the training data and the model coefficients are updated accordingly when its value is greater than the threshold. Similarly, Wang \etal \cite{wang2013online} presented an online OCSVM algorithm for detecting abnormal events in real-time video surveillance. Their algorithm combines online least-squares OCSVM (LS-OCSVM) and sparse online LS-OCSVM. The basic model is initially constructed through a learning training set with a limited number of samples. Like \cite{davy2006online} algorithm, the model is then updated through each incoming datum using threshold-based evaluation. Recently, Anaissi \etal ~\cite{anaissi2017self} propose another approach which measures the similarity between a new event and error support vectors to generate a self-advised decision value rather than using a fixed threshold. 
The latter is an effective solution, but it is susceptible to including damage samples if the model keeps being updated in the same direction as real damage samples: the updated model then starts absorbing damage samples into its training data and, as a result, begins to miss real damage events.
\section{Experimental Results}
\label{sec:Experiement}
We conducted all our experiments on an Intel(R) Core(TM) i7 CPU at 3.60GHz with 16GB of memory. We implemented our algorithms in R with the help of two packages: \textbf{rTensor} for tensor tools and \textbf{e1071} for the one-class model.
\subsection{Experiments on Synthetic Data}
\subsubsection{NeSGD convergence}
Our initial experiment was to verify the convergence of our NeSGD algorithm and compare it to state-of-the-art methods such as the SGD, PSGD, and SALS algorithms in terms of robustness and convergence. We generated a synthetic tensor $ \mathcal{X} \in \Re^{60 \times 12 \times 10000}$ with a long time dimension from 12 random loading matrices $\{M_i\}_{i=1}^{12} \in \Re^{10000 \times 60}$ whose entries were drawn from the uniform distribution $\mathcal{D} [0,1]$. We evaluated the performance of each method by plotting the number of samples $t$ versus the root mean square error (RMSE). For all experiments we used the learning rate $\eta^{(t)} = \frac{1}{1 + t}$. It can be clearly seen from Figure \ref{conv} that NeSGD outperformed the SGD and PSGD algorithms in terms of convergence and robustness. Both SALS and NeSGD converged to a small RMSE, but NeSGD's error was lower and decreased faster.
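To make the update concrete, the following Python sketch performs an entry-wise stochastic-gradient CP fit of a small synthetic rank-2 tensor and reports the final RMSE. It is a simplified stand-in for NeSGD: the Nesterov acceleration and the exact learning-rate schedule are omitted, and all dimensions and hyperparameters are illustrative rather than those of the experiment above.

```python
import numpy as np

def cp_sgd(X, R=2, lr=0.1, n_iter=40000, seed=0):
    """Entry-wise SGD for a rank-R CP model of a 3-way tensor X.
    A simplified stand-in for NeSGD (no Nesterov momentum)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.uniform(0, 1, (I, R))
    B = rng.uniform(0, 1, (J, R))
    C = rng.uniform(0, 1, (K, R))
    for t in range(n_iter):
        i, j, k = rng.integers(I), rng.integers(J), rng.integers(K)
        # residual of one randomly chosen entry
        e = np.sum(A[i] * B[j] * C[k]) - X[i, j, k]
        gA, gB, gC = e * B[j] * C[k], e * A[i] * C[k], e * A[i] * B[j]
        A[i] -= lr * gA
        B[j] -= lr * gB
        C[k] -= lr * gC
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
    return A, B, C, float(np.sqrt(np.mean((X - Xhat) ** 2)))

# illustrative dimensions (far smaller than the 60 x 12 x 10000 tensor above)
rng = np.random.default_rng(1)
U = rng.uniform(0, 1, (10, 2))
V = rng.uniform(0, 1, (8, 2))
W = rng.uniform(0, 1, (6, 2))
X = np.einsum('ir,jr,kr->ijk', U, V, W)   # exact rank-2 target tensor
A, B, C, rmse = cp_sgd(X)
```

Because the target tensor is exactly rank 2, the stochastic updates drive the RMSE well below the scale of the data.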
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{conv.png}
\caption{ Comparison of algorithms in terms of robustness and convergence. }
\label{conv}
\end{figure}
\subsubsection{Tensor-based analysis}
The same synthetic data are used here, with some modifications, to evaluate the performance of the proposed tensor-based advice decision value. We emulated environmental changes on the last 300 samples by modifying the random generator setting $[\mu,\sigma ]$, where $\mu$ is the mean and $\sigma$ is the standard deviation, in all matrices $\{M_i\}_{i=1}^{12} \in \Re^{1000 \times 60}$ (each matrix represents one source, i.e., one location), since environmental changes naturally affect all sensors. The first 700 samples were generated under the same environmental conditions, of which 500 were used to form the training tensor $ \mathcal{X} \in \Re^{60 \times 12 \times 500}$. NeSGD is initially applied to decompose the tensor $ \mathcal{X}$ into three matrices $ A, B,$ and $C$ with $R=2$, and the $C$ matrix is then used to train the OCSVM. The remaining 200 samples were fed sequentially to the online NeSGD and OCSVM, followed by the 300 environmentally affected samples. For each incoming datum, we compute $A^{t+1}, B^{t+1}$ and $C^{t+1}$, which are presented to the online OCSVM algorithm to calculate the datum's health score. If the datum is predicted as healthy, the model is not updated. If it is predicted as unhealthy, we compute the tensor-based decision value $\mathcal{P}_i(C^{t+1})$ using Equation \ref{rate}; if this advice decision value is greater than $\gamma$, we incorporate the new datum into the training data and update the model's coefficients accordingly. Figure \ref{offline_syn} shows the initial 500 training samples, where the blue dots represent the support vectors of the offline model. Figure \ref{online_syn} shows the resulting decision boundary after the OCSVM model was incrementally updated. As can be seen, all the healthy samples were successfully incorporated into the model, since these samples differed only slightly from the training samples due to the environmental changes.
We can also observe that the KNN scores for all sensors increased significantly, which indicates that the negative decision values were due to environmental changes. Further, the figure shows how the decision boundary grew over time and incorporated new healthy samples.
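The advised update loop described above can be sketched as follows. This is a hedged illustration only: the advice score below is a simple $k$-nearest-neighbour proximity measure standing in for the tensor-based decision value $\mathcal{P}_i(C^{t+1})$ of Equation \ref{rate}, two-dimensional points stand in for the CP factors, and the incremental coefficient update is emulated by refitting, since scikit-learn's OneClassSVM offers no true online update.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def advised_update_loop(train, stream, gamma_thr=0.3, k=5):
    """Advised online OCSVM loop.  advice(x) = 1 / (1 + mean distance to the
    k nearest accepted samples) is a stand-in for the paper's tensor-based
    decision value.  Flags: 'healthy' (accepted, no update), 'updated'
    (rejected by the model but accepted by the advice, model refitted),
    'damage' (rejected by both)."""
    train = [np.asarray(x, float) for x in train]
    model = OneClassSVM(nu=0.05, gamma='scale').fit(np.array(train))
    flags = []
    for x in stream:
        x = np.asarray(x, float)
        if model.decision_function([x])[0] >= 0:
            flags.append('healthy')
            continue
        d = np.sort(np.linalg.norm(np.array(train) - x, axis=1))[:k].mean()
        if 1.0 / (1.0 + d) > gamma_thr:      # advice: environmental change
            train.append(x)
            model = OneClassSVM(nu=0.05, gamma='scale').fit(np.array(train))
            flags.append('updated')
        else:                                 # advice: likely real damage
            flags.append('damage')
    return flags
```

A slightly shifted healthy cluster is thus absorbed into the model, while far-away damage-like points remain flagged.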
\begin{figure}[!t]
\centering
\captionsetup[subfloat]{}
\subfloat[The resultant decision boundary of the OCSVM trained on the 500 healthy samples.]{{\includegraphics[height=1.6in,width=1.8in]{offline_model} }}%
\qquad
\subfloat[An example result of the obtained KNN score generated from $B^{(t)}=\{{b_j}\}_{j=1}^{12}$.]{{\includegraphics[height=1.6in,width=2.6in]{Syn_healthy} }}%
\caption{ Experimental results using the Synthetic data.}%
\label{offline_syn}%
\end{figure}
\begin{figure}[!t]
\centering
\captionsetup[subfloat]{}
\subfloat[The resultant decision boundary of the incrementally updated OCSVM after processing the 500 test samples.]{{\includegraphics[height=1.6in,width=1.8in]{updated_model_env} }}%
\qquad
\subfloat[An example result of the obtained KNN score generated from $B^{(t+201)}=\{{b_j}\}_{j=1}^{12}$.]{{\includegraphics[height=1.6in,width=2.6in]{syn_envi} }}%
\caption{ Experimental results using the Synthetic data.}%
\label{online_syn}%
\end{figure}
\subsection{Case Studies in Structural Health Monitoring}
To further validate our model, we evaluate the performance of our tensor-based advice decision values for incremental OCSVM when applied to SHM. The evaluation is based on real SHM data collected from two case studies: (a) the Infante D. Henrique Bridge in Portugal and (b) the AusBridge, a major bridge in Australia (the actual bridge name cannot be published due to a data confidentiality agreement).
\subsubsection {Infante D. Henrique Bridge Data}
The SHM data were collected continuously from the Infante D. Henrique Bridge (shown in Figure \ref{porto} (a)) over a period of two years (from September 2007 to September 2009) \cite{comanducci2016vibration,magalhaes2012vibration}. The data collection was carried out by instrumenting the bridge with 12 highly sensitive force-balance accelerometers to measure acceleration. Every 30 minutes, the collected acceleration measurements were retrieved from these sensors and input into an operational modal analysis process to determine the natural frequencies (the modal parameters) of the bridge. This process resulted in 120 natural frequencies, which were used as the characteristic features in our study. Thus, this dataset consists of $ 2 \times 365 \times 48 $ (35,040) samples, each with 120 features. The matrices from the 12 sensors were fused into a tensor $X \in \Re^{35,040 \times 120 \times 12}$. Besides acceleration measurements, temperatures were also recorded every 30 minutes, because the natural frequencies of the bridge can be influenced by environmental conditions \cite{comanducci2016vibration,magalhaes2012vibration}.
Using this dataset, we ran a two-phase experimental analysis. In the first phase, we used the first two months of data (i.e., 2,230 samples from September-October 2007), fused into a tensor $X \in \Re^{2,230 \times 120 \times 12}$, which was decomposed using the NeSGD method (see Algorithm \ref{NeCPD}) into three matrices $A \in \Re^{12 \times 2}$, $B \in \Re^{120 \times 2}$, and $C \in \Re^{2,230 \times 2}$. The $C$ matrix was then used to construct an offline OCSVM model. We used the remaining 32,810 data samples to evaluate the offline OCSVM model without applying the tensor-based advice decision value method. This analysis resulted in a very high false alarm rate of 44.8\%, which demonstrates the significant effect of environmental factors, particularly temperature, on the natural frequencies of the data collected from the bridge.
In the second phase of this experiment, we ran our proposed tensor-based advised incremental OCSVM algorithm on the same test dataset. Here, we calculated the health score for each incoming sample using Equation \ref{dv2}. The algorithm continues if the sample is classified as healthy; otherwise, the tensor-based advice decision value is calculated to determine, based on Equation \ref{rate}, whether the model coefficients have to be updated.
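The failure mode found in the first phase can be illustrated with a toy stand-in: an OCSVM trained on features from one environmental regime rejects drifted but still healthy features. The two-dimensional Gaussians below are illustrative placeholders for the natural-frequency features; the numbers are ours, not the bridge data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def false_alarm_rate(model, healthy):
    """Fraction of known-healthy samples the model rejects (predict == -1)."""
    return float(np.mean(model.predict(healthy) == -1))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (500, 2))      # features from the training regime
model = OneClassSVM(nu=0.05, gamma='scale').fit(train)

same_cond = rng.normal(0.0, 1.0, (500, 2))  # same environmental regime
drifted   = rng.normal(2.5, 1.0, (500, 2))  # temperature-driven drift

fa_same = false_alarm_rate(model, same_cond)
fa_drift = false_alarm_rate(model, drifted)
```

The offline model's false alarm rate on the drifted regime is far higher than on the regime it was trained in, which is exactly the behaviour the advised incremental update is meant to correct.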
The results of these experiments are shown in Figure \ref{porto} (b), mainly as the monthly false alarm rates of the (tensor-based, threshold-based, and self-advised) online OCSVM models and the offline OCSVM. This figure also shows the monthly average temperature, including that of September-October 2007, the period used to construct the offline OCSVM, in order to illustrate the environmental conditions under which the model was built. As depicted in this figure, although the false alarm rate of our tensor-based online incremental model was high at the start of the experiment, it decreased gradually until it reached close to zero as newly arriving data were incorporated into the model.
During the period June 2008 - December 2008, our tensor-based online model recorded a false alarm rate above 10\%. This period contains extreme temperature conditions that had not previously been encountered at that point in the dataset. The self-advised method produced results comparable to the tensor-based ones but with lower accuracy. In contrast, the threshold-based online OCSVM and the offline OCSVM showed continuous fluctuation in the false alarm rates, in correlation with the monthly recorded temperature. Specifically, our tensor-based online model generated a very low false alarm rate (close to 0\%) even during the months whose temperature values differed significantly from those recorded during the training period, whereas the offline OCSVM model generated very high false alarm rates (close to 100\%) during those same months; the offline model's false alarm rates were low only for months with temperatures similar to the training period (e.g., September - October 2007 vs. September - October 2008).
From this case study, we conclude that environmental changes, which are captured by the natural frequency features, can significantly influence OCSVM models. Our experiments with the real Infante D. Henrique Bridge dataset demonstrate that our tensor-based online model is able to capture such environmental changes in the features. In this regard, the proposed method makes more accurate updates to the learning models than the (self-advised and threshold-based) online and offline models.
\begin{figure}[!t]
\centering
\captionsetup[subfloat]{}
\subfloat[Infante D. Henrique Bridge \cite{magalhaes2012vibration}.]{{\includegraphics[height=1.6in,width=1.8in]{portoB1.png} }}%
\qquad
\subfloat[Error rates of the online models versus offline models in relation to the changes in the temperature.]{{\includegraphics[height=1.6in,width=3.2in]{portoPlot.pdf} }}%
\caption{ Experimental results using data from the Infante D. Henrique Bridge.}%
\label{porto}%
\end{figure}
\subsubsection{The AusBridge}
In this case study, we collected acceleration data from the AusBridge using tri-axial accelerometers connected to a small computer under each jack arch. Every vehicle passing over a given jack arch triggers an event on it. The sensors attached to that jack arch record the resulting vibrations for a duration of 1.6 seconds at a sampling rate of 375 Hz. As a result, each sensor captures 600 samples per event. The collected samples were further normalized and transformed into 300 features in the frequency domain.
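As a concrete reading of the 600-samples-to-300-features step, the sketch below normalises one event and takes its magnitude spectrum: with $N=600$ samples at 375 Hz, the real FFT yields 301 bins, and dropping the DC bin leaves exactly 300 frequency-domain features. The deployed normalisation and transform are not specified in the text, so this is one plausible interpretation, not the actual pipeline.

```python
import numpy as np

def event_features(acc):
    """One 1.6 s event (600 samples at 375 Hz) -> 300 frequency features.
    Per-event normalisation + |rfft| is an assumed pipeline."""
    x = np.asarray(acc, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)   # per-event normalisation
    spec = np.abs(np.fft.rfft(x, n=600))     # 301 bins covering 0..187.5 Hz
    return spec[1:]                          # drop the DC bin -> 300 features

# a synthetic event: a 30 Hz vibration plus noise (bin width 375/600 = 0.625 Hz)
t = np.arange(600) / 375.0
rng = np.random.default_rng(0)
feat = event_features(np.sin(2 * np.pi * 30.0 * t) + 0.1 * rng.standard_normal(600))
```

The dominant 30 Hz component lands in FFT bin 48, i.e., at index 47 of the feature vector once the DC bin is dropped.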
\begin{figure}[!t]
\centering
\captionsetup[subfloat]{}
\subfloat[Comparison of average error rates on the eleven nodes for the batch and online OCSVM.]{{\includegraphics[height=1.5in,width=2.2in]{shbNodes.pdf} }}%
\qquad
\subfloat[Comparison of average detection accuracy between the offline and online OCSVM methods applied to a damaged jack arch.]{{\includegraphics[height=1.7in,width=2.3in]{joint44} }}%
\caption{ Experimental results using data from the AusBridge.}%
\label{nodes}
\end{figure}
We conducted two sets of experiments. The first used a total of 22,526 samples collected from 11 jack arches (nodes) between October 2015 and November 2016\footnote{The data for February 2016 had a known instrumentation problem, which appeared and was fixed during that period, and were thus excluded from this experiment.}. The data from the 11 nodes were fused into a tensor $X \in \Re^{22,526 \times 300 \times 11}$. We trained the offline OCSVM model using the data collected in October 2015 (i.e., $X \in \Re^{1,689 \times 300 \times 11}$). The remaining data, $X \in \Re^{20,837 \times 300 \times 11}$ (November 2015 - November 2016), were sequentially fed to the same model. Using the same dataset, we ran these experiments with the (self-advised and threshold-based) online OCSVM, the offline OCSVM, and our tensor-based OCSVM model.
The false alarm error rates resulting from the experiments with the four OCSVM models are illustrated in Figure \ref{nodes}(a). The values are shown as averages over the 11 nodes for each month of the collected data. As shown, the offline OCSVM model generated an average false alarm rate of about 10\%, meaning it classified many events as structural damage. Furthermore, its false alarm rates fluctuated with high standard deviations during most of the experiment period. This is inaccurate and can lead to inappropriate decisions, since none of the evaluated nodes were damaged during the study period. Unlike the offline OCSVM model, our tensor-based online model yielded a much lower false alarm rate, averaging about 0.1\% with narrow standard deviations. The poor performance of the offline OCSVM model can be attributed to environmental and operational variations, which were not captured in the initial frequency features of the one-month training data (October 2015). The influence of these variations on the frequency features was captured by the threshold-based online OCSVM method, which improved its false alarm rates; as shown in the figure, our tensor-based online OCSVM model nevertheless achieved lower false alarm rates than the other online models. Although setting a fixed threshold for a single OCSVM might be an easy task, it is not practical: such bridges comprise hundreds of jack arches, each of which is linked to an OCSVM model, so fixed thresholds would require manually tuning hundreds of OCSVM models.
The goal of the second set of experiments in the AusBridge case study was to confirm that, even after a long running period (i.e., one year), the model updated by our online OCSVM method still has the ability to detect structural damage. We used the data of an identified damaged jack arch (node): 12 months of data during which the jack arch was healthy and 3 months of data while it was damaged. Using this dataset, we ran the offline, self-advised, threshold-based, and our proposed tensor-based online OCSVM models. Figure \ref{nodes} (b) shows the resulting accuracy of the four models over the 12 months of healthy data and the 3 months of damaged data. As shown, the damage events were detected successfully by all four models. In addition, our tensor-based model significantly outperformed the offline OCSVM and threshold-based incremental models by achieving lower false alarm rates.
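The reliability property tested here, namely that a model which has absorbed a year of drifted healthy data still rejects genuine damage, can be checked on a toy stand-in (two-dimensional Gaussians in place of the 300 spectral features, with the advised incremental update emulated by retraining on all accepted data):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
healthy = rng.normal(0.0, 1.0, (400, 2))   # first healthy months
drifted = rng.normal(0.8, 1.0, (400, 2))   # later healthy months (drift)

original = OneClassSVM(nu=0.05, gamma='scale').fit(healthy)
# emulate the advised updates: the drifted-but-healthy data were accepted,
# so the updated model is trained on everything accepted so far
updated = OneClassSVM(nu=0.05, gamma='scale').fit(np.vstack([healthy, drifted]))

fa_orig = float(np.mean(original.predict(drifted) == -1))
fa_upd = float(np.mean(updated.predict(drifted) == -1))
# a damage-like sample far from all accepted healthy data
damage_detected = bool(updated.predict([[6.0, -6.0]])[0] == -1)
```

The updated model lowers the false alarm rate on drifted healthy data while still rejecting the damage-like sample.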
In the Clifford geometric algebra \cl{p}{q} the exponential
functions of multivector (MV) are frequently
encountered~\cite{Gurlebeck1997,Lounesto1997,Marchuk2020}. It is
enough to mention a famous half-angle rotor in the geometric
algebra (GA) that finds a wide application from simple rotations
of animated drawings up to applications in the 4-dimensional
relativity theory. Two kinds of exponential rotors exist which
are related to trigonometric and hyperbolic functions. In higher
dimensional GAs a mixing of trigonometric and hyperbolic functions
was found~\cite{Dargys2022a,AcusDargys2022b}. In this paper the
method to calculate the exponential functions of a general
multivector {MV} in an arbitrary GA is presented applying for this
purpose the characteristic polynomial in a form of MV. In
Sec.~\ref{sec:charpoly} the methods to generate characteristic
polynomials in \cl{p}{q} algebras characterized by arbitrary
signature $\{p,q\}$ and dimension $n=p+q$ are discussed. The
method for calculating the exponential is presented in
Sec.~\ref{sec:CLpq}. In Sec.~\ref{sec:otherfunctions} we
demonstrate that the obtained GA exponentials may be used to find
GA elementary and special functions. Below, the notation used in
the paper is described briefly.
In the orthonormalized basis used here the geometric product of
basis vectors $\e{i}$ and $\e{j}$ satisfies the anti-commutation
relation, $\e{i}\e{j}+\e{j}\e{i}=\pm 2\delta_{ij}$. The number of
subscripts indicates the grade. The scalar has no index and is a
grade-0 element, the vectors $\e{i}$ are the grade-1 elements, the
bivectors $\e{ij}$ are grade-2 elements, etc. For mixed signature
\cl{p}{q} algebra the squares of basis vectors, correspondingly,
are $\e{i}^2=+1$ and $\e{j}^2=-1$, where $i=1,2,\ldots, p$ and
$j=p+1,\ldots, p+q$. The sum
$n=p+q$ is the dimension of the vector space. The general MV
is expressed as
\begin{equation}\label{mvA}
\m{A}=a_0+\sum_{i}a_i\e{i}+\sum_{i<j}a_{ij}\e{ij}+\sum_{i<j<k}a_{ijk}\e{ijk}+\cdots+\sum_{i<j<\cdots < n=p+q}a_{\underbrace{ij\cdots n}_{p+q=n}} \e{ij\cdots n}
=a_0+\sum_J^{2^n-1}a_J\e{J},
\end{equation}
where $a_i$, $a_{ij\cdots}$ are the real coefficients. Since the
calculations have been done with a GA package written in the
\textit{Mathematica} language, in the following it appears
convenient to write the basis elements $\e{ij\cdots}$ and indices
in the reverse degree lexicographic ordering, accordingly. For
example, when $p+q=3$ then the basis elements and indices are
listed in the order $\{1,\e{1},\e{2},\e{3},\e{12},\e{13},
\e{23},\e{123}\equiv I\}$, i.e., the basis element numbering
always increases from left to right. During computation this
ordering is generated by \textit{Mathematica} program
automatically. This is important because swapping of two adjacent
indices changes basis element sign. The ordered set of indices
will be denoted by single capital letter $J$ referred to as
multi-index the values of which run over already mentioned list
(also see the last expression in~Eq.~\eqref{mvA}). Note that in
the multi-index representation the scalar is deliberately excluded
from summation as indicated by upper range $2^n-1$ in the sum in
the last expression. The convention is useful since in the
function formulas below the scalar term always has a simpler form.
The highest degree element (pseudoscalar) will be denoted as $I$,
and the corresponding coefficient as $a_I$.
We shall need three grade involutions: the reversion (e.g.,
$\reverse{\e{12}}=\e{21}=-\e{12}$ ), the inverse (e.g.,
$\gradeinverse{\e{123}}=(-\e{1})(-\e{2})(-\e{3})=-\e{123}$) and
their combination called Clifford conjugation
$\widetilde{\widehat{\e{123}}}=\widehat{\e{321}}=-\e{321}=\e{123}$).
Also we shall need the Hermitian conjugate MV $\m{A}^\dagger$ and
non-zero grade negation operation denoted by overline
$\overline{\m{A}}$. The MV Hermitian conjugation expressed for
basis elements $\e{J}$ in both real and complex GAs can be
written as~\cite{Marchuk2020,Shirokov2018a}
\begin{equation}\label{hermitian}
\m{A}^{\dagger}=a_0^*+a_1^*\e{1}^{-1}+\dots+a_{12}^*\e{12}^{-1}
+\dots+a_{123}^*\e{123}^{-1}\dots=\sum_Ja_J^*\e{J}^{-1},
\end{equation}
where $a_J^*$ is the complex conjugate of the $J$-th
coefficient.\footnote{There is a simple trick to find
$\e{J}^{-1}$. Formally raise all indices and then lower them down
but now taking into account the considered algebra signature
$\{p,q\}$. Finally, apply the reversion. For example, in \cl{0}{3}
we have
$\e{123}\to\e{}^{123}\to-\e{123}\to-\widetilde{\e{123}}\to\e{123}$.
Thus, $\e{123}^{\dagger}=\e{123}$.} For each multi-index $J$, the
Hermitian conjugation does nothing if $\e{J}^2=+1$, but it changes
the sign if $\e{J}^2=-1$.
Therefore, the basis elements $\e{J}$ and $\e{J}^\dagger$ can
differ by sign only. The non-zero grade negation is an operation
that flips the signs of all grades except the scalar,
i.e., $\overline{\m{A}}= a_0-\sum_J^{2^n-1}a_J\e{J}$.
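The grade-dependent signs of these involutions are easy to tabulate: reversion multiplies a grade-$k$ blade by $(-1)^{k(k-1)/2}$, grade inversion by $(-1)^k$, and the Clifford conjugation by their product $(-1)^{k(k+1)/2}$. The short Python sketch below encodes exactly these signs, representing an MV simply as a map from index tuples to coefficients, and reproduces the worked examples above.

```python
def reversion_sign(k):
    """Sign a grade-k blade picks up under reversion, e.g. e12 -> e21 = -e12."""
    return (-1) ** (k * (k - 1) // 2)

def inversion_sign(k):
    """Sign under grade inversion, e.g. e123 -> (-e1)(-e2)(-e3) = -e123."""
    return (-1) ** k

def clifford_conj_sign(k):
    """Clifford conjugation = reversion composed with grade inversion."""
    return reversion_sign(k) * inversion_sign(k)

def grade_negate(mv):
    """Non-zero-grade negation: keep the scalar (empty index tuple),
    flip the sign of every other coefficient."""
    return {J: (c if J == () else -c) for J, c in mv.items()}

A = {(): 1.0, (1,): 2.0, (1, 2): 3.0}   # A = 1 + 2 e1 + 3 e12
```

Note that the Clifford conjugate of $\e{123}$ indeed comes out as $+\e{123}$, matching the example in the text.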
\section{MV characteristic polynomial and equation}
\label{sec:charpoly}
The algorithm to calculate the exponential and associated
functions presented below is based on a characteristic polynomial.
There is a number of methods adapted to MVs, for example, based on
MV determinant, recursive Faddeev-LeVerrier method adapted to GA
and the method related to Bell
polynomials~\cite{Helmstetter2019,Shirokov2021,Abdulkhaev2021}. In
this section the methods are briefly summarized.
Every MV $\m{A}\in\cl{p}{q}$ has a characteristic polynomial
$\chi_{\m{A}}(\lambda)$ of degree $d$ over $\ensuremath{\mathbb{R}}$, where
$d=2^{\lceil\tfrac{n}{2}\rceil}$ and $n=p+q$. In
particular, $d=2^{n/2}$ if $n$ is even and $d=2^{(n+1)/2}$ if $n$
is odd. The integer $d$ may also be interpreted as the dimension of the
real or complex matrix representation of Clifford algebra in the
8-fold periodicity table~\cite{Lounesto1997}. The characteristic
polynomial that follows from determinant of a MV $\m{A}$ is
defined by
\begin{equation}\label{CharPolyDef}
\chi_{\m{A}}(\lambda)=-\Det(\lambda -\m{A})=\sum_{k=0}^{d} C_{(d-k)}(\m{A})\, \lambda ^k=
C_{(0)}(\m{A})\lambda^d+C_{(1)}(\m{A})\lambda^{d-1}+\cdots+C_{(d-1)}(\m{A})\lambda+C_{(d)}(\m{A})\,.
\end{equation}
The variable in the characteristic polynomial will be always
denoted by $\lambda$ and the roots of $\chi_{\m{A}}(\lambda)=0$
(called the characteristic equation) by $\lambda_i$, respectively.
$C_{(k)}\equiv C_{(k)}(\m{A})$ are the real coefficients defined for the MV $\m{A}$. The coefficient at the highest power
of variable $\lambda$ is always assumed $C_{(0)}=-1$. The
coefficient $C_{(1)}(\m{A})$ is connected with MV trace,
$C_{(1)}(\m{A})={\mathop{\textrm{Tr}}}(\m{A})=d \left\langle\m{A}\right\rangle_{0}$.
The coefficient $C_{(d)}(\m{A})$ is related to MV determinant
$\Det\m{A}=-C_{(d)}(\m{A})$.
Table~\ref{tableDet} shows how the MV determinant (which is a
real number) can be calculated in low dimension GAs, $n\le6$. This
table may also be used to find the coefficients $C_{(k)}(\m{A})$
in the characteristic polynomial~\eqref{CharPolyDef}. For a
concrete algebra it is enough to replace $\m{A}$'s in the
Table~\ref{tableDet} by $(\lambda-\m{A})$. For
example, in case of Hamilton quaternion algebra \cl{0}{2} we have
$\m{A}=a_0+a_1\e{1}+a_2\e{2}+a_{12}\e{12}$ and
$\overline{\m{A}}=a_0-a_1\e{1}-a_2\e{2}-a_{12}\e{12}$, then
\begin{equation}
\chi_{\m{A}}(\lambda)=-\Det(\lambda
-\m{A})=-(\lambda-\m{A})(\lambda-\overline{\m{A}})=-(a_0^2+a_1^2+a_2^2+a_{12}^2)+2a_0\lambda-\lambda^2.
\end{equation}
Thus, $C_{(0)}=-1$, $C_{(1)}=2 a_0={\mathop{\textrm{Tr}}}\m{A}$ and
$C_{(2)}=-(a_0^2+a_1^2+a_2^2+a_{12}^2)=-\Det\m{A}$ which is in
accord with that calculated from the quaternion matrix representation
\begin{math}\m{A}=\bigl(\begin{smallmatrix}a_0+\mathrm{i} a_1&a_2+\mathrm{i}
a_{12}\\-a_2+\mathrm{i} a_{12}& a_0-\mathrm{i}
a_1\end{smallmatrix}\bigr)\end{math}. The table can also be used
to find the values of the other coefficients $C_{(k)}(\m{A})$ of
polynomial~\eqref{CharPolyDef}. To this end it is enough to
replace $\Det(\m{A})$ in Table~\ref{tableDet} by
$\Det(\lambda-\m{A})$ and then recursively differentiate
with respect to $\lambda$ a proper number of times,
\begin{equation}
C_{(k-1)}(\lambda-\m{A})= -\frac{1}{d - (k - 1)} \frac{\partial C_{(k)}(\lambda-\m{A})}{\partial \lambda}\,,\qquad k=d,\ldots, 1\,,
\end{equation}
which is a straightforward method to obtain the coefficient at
$\lambda^{d-k}$ for any polynomial; the coefficients of $\m{A}$ itself
are then recovered as $C_{(k)}(\m{A})=(-1)^k\,C_{(k)}(\lambda-\m{A})\big|_{\lambda=0}$.
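The quaternionic case above is easy to check numerically. The sketch below is an illustrative check only (it works with the $2\times 2$ complex matrix representation quoted above rather than with MVs, and the coefficient values are arbitrary sample numbers); it verifies that the matrix trace and determinant reproduce $C_{(1)}=2a_0$ and $\Det\m{A}=a_0^2+a_1^2+a_2^2+a_{12}^2$:

```python
import numpy as np

# Hypothetical sample coefficients of A = a0 + a1 e1 + a2 e2 + a12 e12 in Cl(0,2)
a0, a1, a2, a12 = 1.0, 2.0, -3.0, 0.5
M = np.array([[ a0 + 1j*a1,  a2 + 1j*a12],
              [-a2 + 1j*a12, a0 - 1j*a1]])
tr = np.trace(M).real            # C_(1)(A) = Tr(A) = 2 a0
det = np.linalg.det(M).real      # Det(A) = -C_(2)(A)
assert np.isclose(tr, 2*a0)
assert np.isclose(det, a0**2 + a1**2 + a2**2 + a12**2)
```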
In the Faddeev-Leverrier method and its
modifications~\cite{Helmstetter2019,Householder1975,Hou1998,Shirokov2020a}
the coefficients $C_{(k)}(\m{A})$ in
polynomial~\eqref{CharPolyDef} are calculated recursively,
beginning from $C_{(1)}(\m{A})$ and ending with $C_{(d)}(\m{A})$.
We start by setting
$\m{A}_{(1)}=\m{A}$. Then we compute the coefficient
$C_{(k)}(\m{A})=\frac{d}{k}\langle \m{A}_{(k)} \rangle_{0}$ and, in
the next step, the new MV $\m{A}_{(k+1)}=\m{A}
\bigl(\m{A}_{(k)}-C_{(k)}\bigr)$:
\begin{equation}\label{FLAlg}
\begin{array}{rcl}
\m{A}_{(1)}=\m{A}&\rightarrow&C_{(1)}(\m{A})=\frac{d}{1}\langle \m{A}_{(1)}\rangle_0,\\
\m{A}_{(2)}=\m{A}\bigl(\m{A}_{(1)}-C_{(1)}\bigr)
&\rightarrow&C_{(2)}(\m{A})=\frac{d}{2}\langle \m{A}_{(2)}\rangle_0,\\
&\vdots&\\
\m{A}_{(d)}=\m{A}\bigl(\m{A}_{(d-1)}-C_{(d-1)}\bigr)
&\rightarrow&C_{(d)}(\m{A})=\frac{d}{d}\langle
\m{A}_{(d)}\rangle_0.
\end{array}
\end{equation}
The determinant of the MV is then
$\Det(\m{A})=-\m{A}_{(d)}=-C_{(d)}=-\m{A}
\bigl(\m{A}_{(d-1)}-C_{(d-1)}\bigr)$. This algorithm, adapted to
GA, allows one to compute the characteristic polynomial of an MV of an
arbitrary algebra $\cl{p}{q}$. In an alternative recursive
method~\cite{Helmstetter2019} one starts from $C_{(0)}^{\prime}=1$
rather\footnote{This means that all characteristic coefficients
computed with this formula are of opposite sign to those obtained by
\eqref{FLAlg}, i.e. $C_{(k)}^{\prime}=-C_{(k)}$. } than
$C_{(0)}(\m{A})=-1$ and the initial MV $\m{B}_0=1$, and uses the
following iterative procedure,
\begin{equation} C_{(k)}^{\prime}=-\text{Tr}(\m{A}\m{B}_{k-1})/k,\qquad
\m{B}_k=\m{A}\m{B}_{k-1}+ C_{(k)}^{\prime},\quad k=1,2,\ldots, d.
\end{equation}
The trace may be calculated by multiplying the MVs and taking
the scalar part of the result, ${\mathop{\textrm{Tr}}}(\m{A}\m{B}_{k-1})=d \left\langle\m{A}\m{B}_{k-1}\right\rangle_{0}$, or by using the trace formula for
products of MVs~\cite{Shirokov2021,Abdulkhaev2021}.
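Both recursions are easy to prototype on a matrix representation, where the matrix product stands in for the geometric product and $\langle M\rangle_{0}=\mathrm{Tr}(M)/d$, so that $C_{(k)}=\frac{d}{k}\langle\m{A}_{(k)}\rangle_0$ becomes $\mathrm{Tr}(A_{(k)})/k$. The sketch below is an illustration under the assumption of a faithful $d\times d$ representation (the function name `char_coeffs` is ours); it implements algorithm~\eqref{FLAlg} and checks the result against the characteristic polynomial computed by NumPy:

```python
import numpy as np

# Faddeev-LeVerrier recursion (FLAlg) on a d x d matrix stand-in for the MV.
def char_coeffs(A):
    d = A.shape[0]
    C = [-1.0]                         # C_(0) = -1 by convention
    Ak = A.copy()                      # A_(1) = A
    for k in range(1, d + 1):
        c = np.trace(Ak) / k           # C_(k) = (d/k) <A_(k)>_0 = Tr(A_(k))/k
        C.append(c)
        Ak = A @ (Ak - c * np.eye(d))  # A_(k+1) = A (A_(k) - C_(k))
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = char_coeffs(A)
p = np.poly(A)                         # coefficients of det(lambda I - A)
assert np.allclose(C, -p)              # chi(lambda) = -det(lambda I - A)
assert np.isclose(-C[-1], np.linalg.det(A))   # Det(A) = -C_(d)
```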
\begin{table}
\begin{center}
\begin{tabular}{cc}
$\cl{p}{q}$& $\Det(\m{A})$ \\\hline
\rule{0pt}{3ex}
$p+q=1,2$&$\m{A}\bar{\m{A}}$\\
$p+q=3,4$ &$
\frac{1}{3}\bigl(\m{A}\m{A} \overline{\m{A}\m{A}}+2 \m{A}\overline{\bar{\m{A}}\overline{\bar{\m{A}}\bar{\m{A}}}} \bigr)$
\\
$p+q=5,6$ &$
\frac{1}{3}\bigl(\m{H}\m{H} \overline{\m{H}\m{H}}+2 \m{H}\overline{\bar{\m{H}}\overline{\bar{\m{H}}\bar{\m{H}}}} \bigr)\qquad\textrm{with}\quad \m{H}=\m{A}\reverse{\m{A}}$
\end{tabular}
\end{center}
\caption{Optimized expressions for determinant of MV $\m{A}$ in
low dimensional GAs $n=p+q\le 6$. The overbar denotes a negation
of all grades except the scalar,
$\overline{\m{A}}:=\langle\m{A}\rangle_{0}-\sum_{k=1}^n
\langle\m{A}\rangle_{k}=2\langle\m{A}\rangle_{0}-\m{A}$.\label{tableDet}}
\end{table}
The coefficients of the characteristic polynomial can also be deduced
from complete Bell polynomials. In this approach one uses a set of
scalars~\cite{Shirokov2021,Abdulkhaev2021},
\begin{eqnarray}
S_{(k)}(\m{A}):= (-1)^{k-1}d (k-1)! \langle \m{A}^k \rangle_0,\qquad k=1, \ldots, d,
\end{eqnarray}
where $\langle \m{A}^k \rangle_0$ is the scalar part of the MV raised
to the $k$-th power. The needed coefficients are given by
\begin{equation}
C_{(0)}(\m{A})=-1;\qquad
C_{(k)}(\m{A})=\frac{(-1)^{k+1}}{k!}B_k(S_{(1)}(\m{A}),
S_{(2)}(\m{A}), S_{(3)}(\m{A}), \ldots, S_{(k)}(\m{A})),\qquad
k=1, \ldots, d,
\end{equation}
where $B_k(x_1, \ldots, x_k)$ are the complete Bell polynomials.
The first Bell polynomials\footnote{\textit{Mathematica} v.10 already has a function for partial Bell
polynomials BellY[\,]. The complete Bell polynomial then can be
computed as BellCP[x\_\text{List}]:= Sum[BellY[Length[x], k, x],
\{k,1,Length[x]\}], where x\_\text{List} is a list of variables
$x_i$.} are defined by the relations
\begin{equation}\begin{split}
B_0&=1,\quad B_1(x_1)=B_0x_1=x_1,\\
B_2(x_1,x_2)&=B_1x_1+B_0x_2=x_1^2+x_2,\\
B_3(x_1,x_2,x_3)&=B_2x_1+\tbinom{2}{1}B_1x_2+B_0x_3=x_1^3+3x_1x_2+x_3,\\
B_4(x_1,x_2,x_3,x_4)&=B_3x_1+\tbinom{3}{1}B_2x_2+\tbinom{3}{2}B_1x_3+B_0x_4=x_1^4+6x_1^2x_2+4
x_1x_3+3x_2^2+x_4,\\
B_5(x_1,x_2,x_3,x_4,x_5)&=B_4x_1+\tbinom{4}{1}B_3x_2+\tbinom{4}{2}B_2x_3+\tbinom{4}{3}B_1x_4+B_0x_5
=x_1^5+10x_1^3x_2+10x_1^2x_3+15x_1x_2^2+5x_1x_4+10x_2x_3+x_5,
\end{split}\end{equation}
where $\tbinom{n}{r}$ is the binomial coefficient. This sequence
can easily be extended to higher orders. The complete Bell
polynomials can also be represented in the form of a matrix
determinant~\cite{Wikipedia2022}.
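On a matrix representation, where $\langle\m{A}^k\rangle_0=\mathrm{Tr}(A^k)/d$, the Bell-polynomial route can be sketched as follows. This is illustrative code under that representation assumption (function names are ours); the complete Bell polynomials are generated by the standard recurrence $B_k=\sum_{i=0}^{k-1}\binom{k-1}{i}B_{k-1-i}\,x_{i+1}$:

```python
import numpy as np
from math import comb, factorial

def complete_bell(xs):
    # B_0..B_n via the recurrence B_k = sum_i binom(k-1, i) B_{k-1-i} x_{i+1}
    B = [1.0]
    for k in range(1, len(xs) + 1):
        B.append(sum(comb(k - 1, i) * B[k - 1 - i] * xs[i] for i in range(k)))
    return B

def char_coeffs_bell(A):
    d = A.shape[0]
    S, Ak = [], np.eye(d)
    for k in range(1, d + 1):
        Ak = Ak @ A
        # S_(k) = (-1)^(k-1) d (k-1)! <A^k>_0 = (-1)^(k-1) (k-1)! Tr(A^k)
        S.append((-1) ** (k - 1) * factorial(k - 1) * np.trace(Ak))
    B = complete_bell(S)
    # C_(0) = -1 and C_(k) = (-1)^(k+1) B_k(S_1, ..., S_k) / k!
    return [-1.0] + [(-1) ** (k + 1) / factorial(k) * B[k] for k in range(1, d + 1)]

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
C = char_coeffs_bell(A)
p = np.poly(A)                 # coefficients of det(lambda I - A)
assert np.allclose(C, -p)      # chi(lambda) = -det(lambda I - A)
```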
The coefficients of the characteristic equation satisfy the following
homogeneity properties
\begin{align}\label{propertyDC}
\frac{\partial C_{(k)}(t \m{A})}{\partial t}=k t^{k-1} C_{(k)}(\m{A}),\qquad
\frac{\partial C_{(1)}\bigl((t \m{A})^k\bigr)}{\partial t}&=k t^{k-1} C_{(1)}(\m{A}^k),
\end{align}
which follow from the homogeneity $C_{(k)}(t\m{A})=t^k C_{(k)}(\m{A})$;
here $t$ is a scalar parameter. We will utilize these properties
when proving the exponential formula.
Since the formulas provided below contain sums over the roots of the
characteristic polynomial, it is worth recalling the generalized
Vi\`{e}te formulas that relate the coefficients of the characteristic
polynomial to specific sums over the roots $r_i$:
\begin{align}
&r_{1}+r_{2}+\cdots+r_{d-1}+r_{d}=(-1)^1\frac{C_{(1)}}{C_{(0)}} \\
& \left(r_{1} r_{2}+r_{1} r_{3}+\cdots+r_{1} r_{d}\right)+\left(r_{2} r_{3}+r_{2} r_{4}+\cdots+r_{2} r_{d}\right)+\cdots+r_{d-1} r_{d}=(-1)^2\frac{C_{(2)}}{C_{(0)}} \\
&\quad \vdots\notag \\
& r_{1} r_{2} \ldots r_{d}=(-1)^{d} \frac{C_{(d)}}{C_{(0)}} .
\end{align}
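These formulas are straightforward to check numerically. The sketch below does so for the quartic $\chi_{\m{A}}(\lambda)=-\lambda^4+32\lambda^3-758\lambda^2+10432\lambda-72693$, whose coefficients reappear in Example~1 below; the elementary symmetric sums of the numerically computed roots are compared with $(-1)^k C_{(k)}/C_{(0)}$:

```python
import numpy as np
from itertools import combinations

# [C_(0), C_(1), C_(2), C_(3), C_(4)] of the quartic from Example 1
C = [-1.0, 32.0, -758.0, 10432.0, -72693.0]
roots = np.roots(C)            # numpy expects highest-degree coefficient first
d = len(C) - 1
for k in range(1, d + 1):
    # k-th elementary symmetric sum of the roots
    e_k = sum(np.prod(c) for c in combinations(roots, k))
    assert np.isclose(e_k, (-1) ** k * C[k] / C[0])   # generalized Viete formula
```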
Another interesting identity~\cite{Hou1998}, which is important for the integral
representation of functions, is
\begin{equation}
{\mathop{\textrm{Tr}}}{\mathcal{L}\bigl(\ee^{t\m{A}}\bigr)}=\frac{\chi_{\m{A}}^\prime(\lambda)}{\chi_{\m{A}}(\lambda)},
\end{equation}
where $\mathcal{L}$ denotes the Laplace transform
$\mathcal{L}\bigl(\ee^{t\m{A}}\bigr)=\bigl(\lambda -\m{A}
\bigr)^{-1}$ of the MV $\m{A}$, and $\chi_{\m{A}}^\prime(\lambda)$ is the
derivative of the characteristic polynomial
$\chi_{\m{A}}(\lambda)$~(see \eqref{CharPolyDef}) with respect to
the polynomial variable $\lambda$.
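A quick numerical confirmation of this identity on a matrix stand-in (illustrative only; note that the ratio $\chi_{\m{A}}^\prime/\chi_{\m{A}}$ is insensitive to the overall sign convention of $\chi_{\m{A}}$, so NumPy's $\det(\lambda-A)$ convention can be used directly):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.standard_normal((d, d))
lam = 7.3                                        # any lambda away from the spectrum
resolvent = np.linalg.inv(lam * np.eye(d) - A)   # Laplace transform of exp(tA)
p = np.poly(A)                                   # det(lambda I - A) coefficients
chi = np.polyval(p, lam)
chi_prime = np.polyval(np.polyder(p), lam)
assert np.isclose(np.trace(resolvent), chi_prime / chi)
```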
In matrix theory a very important polynomial is the minimal polynomial
$\mu_A(\lambda)$, which establishes the conditions for
diagonalizability of a matrix $A$. A similar polynomial
$\mu_{\m{A}}(\lambda)$ may be defined for an MV. In particular, it is well known that a
matrix is diagonalizable (aka nondefective) if and only if the
minimal polynomial of the matrix does not have multiple roots, i.e.,
when the minimal polynomial is a product of distinct linear
factors. It is also well known that the minimal polynomial divides
the characteristic polynomial. If the roots of the
characteristic equation are all different, then the matrix/MV is
diagonalizable. The converse, unfortunately, is not true: an
MV whose characteristic polynomial has multiple roots may still
be diagonalizable. It is also
established~\cite{Wikipedia2022Diagonalizabitiy} that in the case of
matrices over the complex numbers ${\mathbb C}$ almost every
matrix is diagonalizable, i.e., the set of complex $d\times d$
matrices that are not diagonalizable over ${\mathbb C}$ --
considered as a subset of ${\mathbb C}^{d\times d}$ -- has
Lebesgue measure zero. An algorithm to compute the minimal
polynomial of an MV without reference to the matrix representation
of the MV is given in Appendix~\ref{sec:appendix}.
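For readers who prefer a matrix stand-in, the brute-force sketch below illustrates the definition numerically: among the monic divisors of the characteristic polynomial built from the distinct eigenvalues, it selects the lowest-degree one that annihilates the matrix. It is only an illustration under floating-point tolerances (function names are ours), not the GA algorithm of the Appendix:

```python
import numpy as np
from itertools import product

def poly_of_matrix(coeffs, A):
    # Horner evaluation of p(A) with matrix products (highest-degree coeff first)
    P = np.zeros_like(A, dtype=complex)
    I = np.eye(A.shape[0])
    for c in coeffs:
        P = P @ A + c * I
    return P

def minimal_poly(A, tol=1e-8):
    # Group numerically equal eigenvalues and record their multiplicities
    distinct, mult = [], []
    for lam in np.linalg.eigvals(A):
        for j, mu in enumerate(distinct):
            if abs(lam - mu) < 1e-6:
                mult[j] += 1
                break
        else:
            distinct.append(lam)
            mult.append(1)
    best = None
    # Scan monic divisors of the characteristic polynomial; keep the
    # lowest-degree one with p(A) = 0
    for ms in product(*[range(1, m + 1) for m in mult]):
        roots = [mu for mu, m in zip(distinct, ms) for _ in range(m)]
        p = np.poly(roots)
        if np.linalg.norm(poly_of_matrix(p, A)) < tol and \
                (best is None or len(p) < len(best)):
            best = p
    return best

# Identity matrix: characteristic polynomial (l-1)^2 but minimal polynomial l-1
mu = minimal_poly(np.eye(2))
assert np.allclose(mu, [1, -1])
```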
\section{MV exponentials in \cl{\lowercase{p}}{\lowercase{q}} algebra}
\label{sec:CLpq}
\subsection{Exponential of MV in coordinate (orthogonal basis) form}
\begin{theorem}[Exponential in coordinate form]
The exponential of a general $\cl{p}{q}$ MV $\m{A}$ given by Eq.~\eqref{mvA}
is the multivector
\begin{align}\label{expNcomplexCoord}
\exp(\m{A})=
&\frac{1}{d}
\sum_{i=1}^{d} \exp(\lambda_i)\Biggl(1+
\sum_{J}^{2^n-1} \e{J} \,
\frac{\sum_{m=0}^{d-2} \lambda_i^m \sum_{k=0}^{d-m-2}C_{(k)}(\m{A})\, C_{(1)}(\e{J}^\dagger\m{A}^{d-k-m-1})}{\sum_{k=0}^{d-1}(k+1)\,C_{(d - k-1)}(\m{A})\, \lambda_i^{k}}
\Biggr)\\=
&\frac{1}{d}
\sum_{i=1}^{d} \exp\bigl(\lambda_i\bigr)\Bigl(1+\sum_{J}^{2^n-1} \e{J}
\,b_J(\lambda_i)\Bigr),
\qquad b_J(\lambda_i)\in\ensuremath{\mathbb{R}},\ensuremath{\mathbb{C}}\,.
\end{align}
Here $\lambda_i$ and $\lambda_i^j$ denote, respectively, a
root of the characteristic equation and that root raised to the power
$j$. The sum runs over all roots $\lambda_i$ of the characteristic
equation $\chi_{\m{A}}(\lambda)=0$, where $\chi_{\m{A}}(\lambda)$
is the characteristic polynomial of the MV $\m{A}$ expressed as
$\chi_{\m{A}}(\lambda) =\sum_{i=0}^{d} C_{(d-i)}(\m{A})\, \lambda
^i$, and where $C_{(d-i)}(\m{A})$ is the coefficient at the variable
$\lambda$ raised to the power~$i$. The symbol
$C_{(1)}(\e{J}^\dagger\m{A}^{k})=d\, \langle\e{J}^\dagger\m{A}^{k}\rangle_0 $
denotes the first coefficient (the coefficient at $\lambda^{d-1}$)
in the characteristic polynomial that consists of geometric
product of the hermitian conjugate basis element $\e{J}^\dagger$
and $k$-th power of initial MV:
$\e{J}^\dagger\m{A}^{k}=\e{J}^\dagger\underbrace{\m{A}\m{A}\cdots\m{A}}_{k\
\text{terms}}$.
Note that because the roots of the characteristic equation are in
general complex numbers, the individual terms in the sums are complex.
However, the result $\exp(\m{A})$ always simplifies to an MV with real
coefficients (see subsection~\ref{realanswer}).
\end{theorem}
\begin{proof}
The proof is based on checking the following defining equation
below for MV exponential presented in
Eq.~\eqref{expNcomplexCoord}
\begin{equation}
\left.\frac{\partial\exp(\m{A}t)}{\partial t}\right|_{t=1} = \m{A}
\exp(\m{A})=\exp(\m{A}) \m{A}, \end{equation} where the MV $\m{A}$
is assumed to be independent of scalar parameter $t$. After
differentiation with respect to $t$ and then setting $t=1$, one
can verify that the result indeed is $\m{A} \exp(\m{A})$. At this
moment we have explicitly checked the
formula~\eqref{expNcomplexCoord} symbolically when $n=p+q\le 5$
and numerically up to $n\le 10$. To establish non contradicting
nature of~\eqref{expNcomplexCoord} for a general $n$ it seems more
appropriate to resort to coordinate-free or base-free formula
(see Eq.~\eqref{expNcomplexCoordFree}).
\end{proof}
\vspace{3mm}
\textbf{Example 1.} {\it Exponential of generic MV in \cl{0}{3} with all different roots.}
Let's compute the exponential of $\m{A}=8-6 \e{2}-9 \e{3}+5
\e{12}-5 \e{13}+6 \e{23}-4 \e{123}$ with
Eq.~\eqref{expNcomplexCoord}. We find $d=4$. Computation of
coefficients of characteristic polynomial
$\chi_{\m{A}}(\lambda)=C_{(4)}(\m{A})+C_{(3)}(\m{A})
\lambda+C_{(2)}(\m{A})\lambda^2+C_{(1)}(\m{A})
\lambda^3+C_{(0)}(\m{A})\lambda^4$ for MV $\m{A}$ yields
$C_{(0)}(\m{A})=-1$, $C_{(1)}(\m{A})=32$, $C_{(2)}(\m{A})=-758$,
$C_{(3)}(\m{A})=10432$, $C_{(4)}(\m{A})=-72693$. The
characteristic equation $\chi_{\m{A}}(\lambda)=0$ then becomes
$-72693+10432 \lambda-758 \lambda^2+32 \lambda^3-\lambda^4=0$,
that has four different roots $\lambda_1=12-\mathrm{i} \sqrt{53}$,
$\lambda_2=12+\mathrm{i} \sqrt{53}$, $\lambda_3=4-\mathrm{i} \sqrt{353},
\lambda_4=4+\mathrm{i} \sqrt{353}$. For every multi-index $J$ and each
root $\lambda_i$ we have to compute coefficients
\begin{equation*}
\begin{split}
b_J(\lambda_i)&=\frac{-\lambda_i^2 C_{(1)}(\e{J}^\dagger\m{A})+\lambda_i \bigl(32 C_{(1)}(\e{J}^\dagger\m{A})-C_{(1)}(\e{J}^\dagger\m{A}^2)\bigr)-758 C_{(1)}(\e{J}^\dagger\m{A})+32 C_{(1)}(\e{J}^\dagger\m{A}^2)-C_{(1)}(\e{J}^\dagger\m{A}^3)
}{-4 \lambda_i^3+96 \lambda_i^2-1516 \lambda_i+10432},
\end{split}
\end{equation*}
where we still have to substitute the coefficients
$C_{(1)}(\e{J}^\dagger\m{A}^k)$
\begin{equation*}
\begin{array}{l|rrrrrrrr}
C_{(1)}(\e{J}^\dagger\m{A}^k) &\e{1}^\dagger&\e{2}^{\dagger}&\e{3}^{\dagger}&\e{12}^{\dagger}&\e{13}^{\dagger}& \e{23}^{\dagger}&\e{123}^{\dagger}\\[2pt]
\hline
k=1& 0&-24&-36&20&-20&24&-16\\
k=2&192&-224&-416&32&-128&384&-856\\
k=3&8208&5952&5508&-11572&7468&888&-7984
\end{array}\,,
\end{equation*}
different for each multi-index $J$. The Hermitian-conjugate
elements are $\e{J}^\dagger=\{-\e{1},-\e{2},-\e{3},
-\e{12},-\e{13}, -\e{23},\e{123}\}$. After substituting all the
computed quantities into \eqref{expNcomplexCoord} we finally get,
with $\alpha=\sqrt{53}$ and $\beta=\sqrt{353}$,
\begin{align}
\exp(\m{A})= &\frac{1}{2} \ee^4 \left(\ee^8 \cos \bigl(\alpha\bigr)+\cos
\bigl(\beta\bigr)\right) +\left(\frac{3}{\alpha} \ee^{12} \sin \bigl(\alpha\bigr)-\frac{3}{\beta} \ee^4 \sin \bigl(\beta\bigr)\right) \e{1}\notag \\
&+ \left(\frac{-1}{2 \alpha}\ee^{12} \sin \bigl(\alpha\bigr)-\frac{11}{2 \beta} \ee^4 \sin \bigl(\beta\bigr)\right)
\e{2}
+\left(-\frac{2}{\alpha} \ee^{12} \sin
\bigl(\alpha\bigr)-\frac{7}{\beta} \ee^4 \sin \bigl(\beta\bigr)\right) \e{3}
\notag\\
&+ \left(-\frac{2}{\alpha} \ee^{12} \sin \bigl(\alpha\bigr)+\frac{7}{\beta} \ee^4 \sin
\bigl(\beta\bigr)\right) \e{12}
+\left(\frac{1}{2 \alpha}\ee^{12} \sin \bigl(\alpha\bigr)-\frac{11}{2
\beta} \ee^4 \sin \bigl(\beta\bigr)\right) \e{13}
\\
&+\left(\frac{3}{\alpha} \ee^{12} \sin \bigl(\alpha\bigr)+\frac{3}{\beta} \ee^4 \sin \bigl(\beta\bigr)\right) \e{23} +\frac{1}{2} \ee^4 \left(\cos \bigl(\beta\bigr)-\ee^8
\cos \bigl(\alpha\bigr)\right) \e{123}
.\notag
\end{align}
This coincides (after simplification) with our earlier
result~\cite{AcusDargysPreprint2021}.
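The structure of Eq.~\eqref{expNcomplexCoord}, a sum of $\exp(\lambda_i)$ weighted by rational functions of $\lambda_i$, mirrors the classical Lagrange-Sylvester spectral form of the matrix exponential. The sketch below is a matrix analogue (not the GA implementation) that checks this form for a generic matrix with distinct eigenvalues against a plain Taylor series:

```python
import numpy as np

rng = np.random.default_rng(3)
A = 0.5 * rng.standard_normal((4, 4))
lam = np.linalg.eigvals(A)            # assumed distinct for a generic matrix
d = len(lam)

# Lagrange-Sylvester form: exp(A) = sum_i exp(l_i) prod_{j!=i} (A - l_j)/(l_i - l_j)
E = np.zeros((d, d), dtype=complex)
for i in range(d):
    P = np.eye(d, dtype=complex)      # Frobenius covariant (spectral projector)
    for j in range(d):
        if j != i:
            P = P @ (A - lam[j] * np.eye(d)) / (lam[i] - lam[j])
    E += np.exp(lam[i]) * P

# Reference value from the Taylor series exp(A) = sum_k A^k / k!
T = np.eye(d)
term = np.eye(d)
for k in range(1, 30):
    term = term @ A / k
    T = T + term

assert np.allclose(E, T)              # the complex terms sum to a real result
```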
\vspace{3mm}
\textbf{Example 2.} {\it Exponential of generic MV in \cl{4}{2} with different roots.}
Let's compute the exponential of $\m{A}=2+3 \e{4}+3
\e{26}+\e{1345}-2 \e{12456}+3 \e{123456}$ using
formula~\eqref{expNcomplexCoord}. In this case $d=8$ and
$\chi_{\m{A}}(\lambda)=C_{(8)}(\m{A})+C_{(7)}(\m{A})
\lambda+C_{(6)}(\m{A}) \lambda^2+C_{(5)}(\m{A})
\lambda^3+C_{(4)}(\m{A}) \lambda^4+C_{(3)}(\m{A})
\lambda^5+C_{(2)}(\m{A})\lambda^6+C_{(1)}(\m{A})
\lambda^7+C_{(0)}(\m{A})\lambda^8$. The coefficients of
characteristic polynomial $\chi_{\m{A}}(\lambda)$ are
$C_{(0)}(\m{A})=-1$, $C_{(1)}(\m{A})=16$, $C_{(2)}(\m{A})=-64$,
$C_{(3)}(\m{A})=16$, $C_{(4)}(\m{A})=32$, $C_{(5)}(\m{A})=-1280$,
$C_{(6)}(\m{A})=20672$, $C_{(7)}(\m{A})=-42752$,
$C_{(8)}(\m{A})=14336$. The characteristic equation
$\chi_{\m{A}}(\lambda)=0$ is $14336-42752 \lambda+20672
\lambda^2-1280 \lambda^3+32 \lambda^4+16 \lambda^5-64 \lambda^6+16
\lambda^7-\lambda^8=0$. It has eight different roots
$\lambda_1=-4, \lambda_2=2, \lambda_3=5-\mathrm{i} \sqrt{3},
\lambda_4=5+\mathrm{i} \sqrt{3},\lambda_5=-1-\mathrm{i} \sqrt{15},
\lambda_6=-1+\mathrm{i} \sqrt{15}, \lambda_7=5- \sqrt{21}, \lambda_8=5+
\sqrt{21}$. Then for every multi-index $J$ and each root
$\lambda_i$ we have to compute the coefficients
\begin{equation*}
\begin{split}
&b_J(\lambda_i)=\\\Bigl(
&C_{0}(A) C_{1}(\e{J}^{\dagger} A^{1}) \lambda _i^6+\bigl(C_{1}(A) C_{1}(\e{J}^{\dagger} A^{1})+C_{0}(A) C_{1}(\e{J}^{\dagger} A^{2})\bigr) \lambda _i^5+\bigl(C_{2}(A) C_{1}(\e{J}^{\dagger} A^{1})+C_{1}(A) C_{1}(\e{J}^{\dagger} A^{2})+C_{0}(A) C_{1}(\e{J}^{\dagger} A^{3})\bigr) \lambda _i^4\\
&+\bigl(C_{3}(A) C_{1}(\e{J}^{\dagger} A^{1})+C_{2}(A) C_{1}(\e{J}^{\dagger} A^{2})+C_{1}(A) C_{1}(\e{J}^{\dagger} A^{3})+C_{0}(A) C_{1}(\e{J}^{\dagger} A^{4})\bigr) \lambda _i^3
\\
&+\bigl(C_{4}(A) C_{1}(\e{J}^{\dagger} A^{1})+C_{3}(A) C_{1}(\e{J}^{\dagger} A^{2})+C_{2}(A) C_{1}(\e{J}^{\dagger} A^{3})+C_{1}(A) C_{1}(\e{J}^{\dagger} A^{4})+C_{0}(A) C_{1}(\e{J}^{\dagger} A^{5})\bigr) \lambda _i^2
\\
&+\bigl(C_{5}(A) C_{1}(\e{J}^{\dagger} A^{1})+C_{4}(A) C_{1}(\e{J}^{\dagger} A^{2})+C_{3}(A) C_{1}(\e{J}^{\dagger} A^{3})+C_{2}(A) C_{1}(\e{J}^{\dagger} A^{4})+C_{1}(A) C_{1}(\e{J}^{\dagger} A^{5})+C_{0}(A) C_{1}(\e{J}^{\dagger} A^{6})\bigr) \lambda _i
\\
&
+C_{6}(A) C_{1}(\e{J}^{\dagger} A^{1})+C_{5}(A) C_{1}(\e{J}^{\dagger} A^{2})+C_{4}(A) C_{1}(\e{J}^{\dagger} A^{3})+C_{3}(A) C_{1}(\e{J}^{\dagger} A^{4})+C_{2}(A) C_{1}(\e{J}^{\dagger} A^{5})+C_{1}(A) C_{1}(\e{J}^{\dagger} A^{6})\\
&\qquad +C_{0}(A) C_{1}(\e{J}^{\dagger} A^{7})
\Bigr) \Big/\Bigl(
8 \lambda_i^7 C_{0}(A)+7 \lambda_i^6 C_{1}(A)+6 \lambda_i^5 C_{2}(A)+5 \lambda_i^4 C_{3}(A)+4 \lambda_i^3 C_{4}(A)+3 \lambda_i^2 C_{5}(A)+2 \lambda_i C_{6}(A)+C_{7}(A)
\Bigr)\,.
\end{split}
\end{equation*}
The coefficients $C_{(1)}(\e{J}^\dagger\m{A}^k)$ have values
\begin{equation}
\arraycolsep=3.0pt
\begin{array}{l|rrrrrrrrrrr}
k&\e{4}^\dagger&\e{15}^{\dagger}&\e{26}^{\dagger}&\e{34}^{\dagger}&\e{145}^{\dagger}& \e{246}^{\dagger}&\e{1256}^{\dagger}&\e{1345}^{\dagger}&\e{2346}^{\dagger}&\e{12456}^{\dagger}&\e{123456}^{\dagger}\\[2pt]
\hline
1& 24 & 0 & 24 & 0 & 0 & 0 & 0 & 8 & 0 & -16 & 24 \\
2& 96 & 0 & 144 & 0 & -96 & -144 & -96 & -112 & 0 & -64 & 48 \\
3& 1200 & 864 & 1008 & -288 & -672 & -1008 & -576 & -672 & 96 & -960 & 672 \\
4& 9792 & 8064 & 8256 & -1152 & -8832 & -10368 & -8064 & -5312 & -2688 & -7808 & 5568 \\
5& 94848 & 80640 & 82944 & -26496 & -81792 & -91008 & -82560 & -42752 & -24960 & -84992 & 46848 \\
6& 859008 & 787968 & 752256 & -294912 & -826368 & -876672 & -797184 & -397824 & -288768 & -817152 & 370176 \\
7& 8221440 & 7628544 & 7243008 & -3059712 & -7972608 & -8163072 & -7531776 & -3403264 & -3028992 & -8024320 &
3460608
\end{array}\notag
\end{equation}
Coefficients not listed in the table are zero. The Hermitian-conjugate
basis elements in the inverse degree lexicographical
ordering are
\begin{equation*}
\begin{split}
&\{\e{1},\e{2},\e{3},\e{4},-\e{5},-\e{6},-\e{12},-\e{13},-\e{14},\e{15},\e{16},-\e{23},-\e{24},\e{25},\e{26},-\e{34},\e{35},\e{36},\e{45},\e{46},-\e{56},-\e{123},-\e{124},\e{125},\e{126},-\e{134},\\
&\e{135},\e{136},\e{145},\e{146},-\e{156},-\e{234},\e{235},\e{236},\e{245},\e{246},-\e{256},\e{345},\e{346},-\e{356},-\e{456},\e{1234},-\e{1235},-\e{1236},-\e{1245},-\e{1246},\e{1256},\\
&-\e{1345},-\e{1346},\e{1356},\e{1456},-\e{2345},-\e{2346},\e{2356},\e{2456},\e{3456},-\e{12345},-\e{12346},\e{12356},\e{12456},\e{13456},\e{23456},-\e{123456}\}.
\end{split}
\end{equation*}
Substituting all quantities into \eqref{expNcomplexCoord} after
simplification we get
\begin{align}
\exp(\m{A})&=\textstyle
\frac{1+\ee^6+2 \ee^3 \cos \sqrt{15}+2 \ee^9
\left(\cos\sqrt{3}+\cosh \sqrt{21}\right)}{8 \ee^4}
+\frac{-175+175 \ee^6+14 \sqrt{15} \ee^3 \sin \sqrt{15}+10
\sqrt{3} \ee^9 \left(7 \sin \sqrt{3}+5 \sqrt{7} \sinh \sqrt{21}\right)}{840 \ee^4}\e{4}
\notag \\
&\textstyle
-\frac{1+\ee^6-2 \ee^3 \cos\sqrt{15}+2 \ee^9 \left(\cos \sqrt{3}-\cosh\sqrt{21}\right)}{8 \ee^4}\e{15}
-\frac{1+\ee^6+2 \ee^3 \cos \sqrt{15}-2 \ee^9 \left(\cos \sqrt{3}+\cosh \sqrt{21}\right)}{8 \ee^4}\e{26}
\notag \\
&\textstyle
+\frac{35-35 \ee^6+14 \sqrt{15} \ee^3 \sin \sqrt{15}+5 \sqrt{3} \ee^9 \left(7 \sin
\sqrt{3}-\sqrt{7} \sinh \sqrt{21}\right)}{210 \ee^4} \e{34}
+\frac{-175+175 \ee^6-14 \sqrt{15} \ee^3
\sin \sqrt{15}+10 \sqrt{3} \ee^9 \left(7 \sin \sqrt{3}-5 \sqrt{7} \sinh \sqrt{21}\right)}{840 \ee^4} \e{145}
\notag \\
&\textstyle
+\frac{-175+175 \ee^6+14 \sqrt{15} \ee^3 \sin \sqrt{15}-10 \sqrt{3} \ee^9 \left(7 \sin \sqrt{3}+5 \sqrt{7} \sinh
\sqrt{21}\right)}{840 \ee^4}\e{246}
-\frac{1+\ee^6-2 \ee^3 \cos \sqrt{15}+2 \ee^9
\left(\cosh\sqrt{21}-\cos \sqrt{3}\right)}{8 \ee^4}\e{1256}
\notag \\
&\textstyle
+\frac{-35+35 \ee^6+14 \sqrt{15} \ee^3 \sin \sqrt{15}-5
\sqrt{3} \ee^9 \left(7 \sin \sqrt{3}+\sqrt{7} \sinh \sqrt{21}\right)}{210 \ee^4}\e{1345}
+\frac{-35+35 \ee^6-14 \sqrt{15} \ee^3 \sin\sqrt{15}+5 \sqrt{3} \ee^9 \left(7 \sin \sqrt{3}-\sqrt{7} \sinh\sqrt{21}\right)}{210
\ee^4} \e{2346}
\notag \\
&\textstyle
+\frac{175-175 \ee^6+14 \sqrt{15} \ee^3 \sin \sqrt{15}+10 \sqrt{3} \ee^9 \left(7 \sin
\sqrt{3}-5 \sqrt{7} \sinh\sqrt{21}\right)}{840 \ee^4} \e{12456}
+\frac{-35+35 \ee^6+14 \sqrt{15} \ee^3 \sin \sqrt{15}+5 \sqrt{3} \ee^9 \bigl(7 \sin\sqrt{3}+\sqrt{7} \sinh
\sqrt{21}\bigr)}{210 \ee^4} \e{123456}
\, .\notag
\end{align}
Coefficients at basis elements include both trigonometric and hyperbolic functions.
\subsection{Exponential in basis-free form}
The basis-free exponential follows from
Eq.~\eqref{expNcomplexCoord} after summation over the
multi-index~$J$.
\begin{theorem}[MV exponential in basis-free form]\label{theorem2}
In $\cl{p}{q}$ algebra the exponential of a general MV $\m{A}$
(see Eq.~\eqref{mvA}) can be computed by the following formulas
\begin{align}
\exp(\m{A})=
& \sum_{i=1}^{d} \exp(\lambda_i)\,\beta(\lambda_i)
\sum_{m=0}^{d-1} \biggl(\sum_{k=0}^{d-m-1}\lambda_i^k C_{(d-k-m-1)}(\m{A})\biggr)\, \m{A}^{m}\label{expNcomplexCoordFree}
\\=
&
\sum_{i=1}^{d} \exp(\lambda_i)\Biggl(\frac{1}{d}+\beta(\lambda_i)\,
\sum_{m=0}^{d-2} \biggl(\sum_{k=0}^{d-m-2}\lambda_i^k C_{(d-k-m-2)}(\m{A})\biggr)\, \frac{\bigl(\m{A}^{m+1}-\overline{\m{A}^{m+1}}\bigr)}{2}
\Biggr)\label{expNcomplexCoordFreeWithDim}\\=
& \sum_{i=1}^{d} \exp\bigl(\lambda_i\bigr)\Bigl(\frac{1}{d}+\beta(\lambda_i)\m{B}(\lambda_i)\Bigr),\qquad\mathrm{with}\qquad
\beta(\lambda_i)=\frac{1}{\sum_{j=0}^{d-1}(j+1)\,C_{(d -j-1)}(\m{A})\, \lambda_i^{j}}\label{beta} \,.
\end{align}
The expression
$\frac{1}{2}\bigl(\m{A}^{m+1}-\overline{\m{A}^{m+1}}\bigr)\equiv
\langle\m{A}^{m+1}\rangle_{-0}$ indicates that all grades of the
multivector $\m{A}^{m+1}$ are included except grade $0$, since
the scalar part is simply the sum of exponentials of the eigenvalues divided by $d$.
\end{theorem}
The form of the MV exponential in Theorem~\ref{theorem2} has some
similarity to the exponential of a square matrix~\cite{Fujii2012},
where the characteristic polynomial was used for this purpose too.
\begin{proof}
We will prove the basis-free formula (Theorem~\ref{theorem2}) by
checking the defining equation for the exponential~\eqref{expNcomplexCoordFree},
\begin{equation}\label{definingProperty}
\left.\frac{\partial\exp(\m{A}t)}{\partial t}\right|_{t=1} = \m{A}
\exp(\m{A})=\exp(\m{A}) \m{A}, \end{equation} where $\m{A}$ is
independent of the scalar parameter $t$. Since the MV commutes with
itself, multiplication of the exponential from the left and from the
right by $\m{A}$ gives the same result. Below, after differentiation
with respect to $t$ and then setting $t=1$, we verify that the
result indeed is $\m{A} \exp(\m{A})$.
First, using the properties of the characteristic coefficients \eqref{propertyDC},
noting that the replacement $\m{A}\to\m{A}t$ implies $\lambda_i\to
\lambda_i t$, and performing the differentiation
$\left.\frac{\partial\exp(\m{A}t)}{\partial t}\right|_{t=1}$, we
find that $\exp(\lambda_i)$ on the right hand side of
\eqref{expNcomplexCoordFree} (and also of \eqref{expNcomplexCoord})
is replaced by $\lambda_i\exp(\lambda_i)$,
\begin{align}
\left.\frac{\partial\exp(\m{A}t)}{\partial t}\right|_{t=1}=&\sum_{i=1}^{d} \lambda_i\exp(\lambda_i)\beta(\lambda_i)\,
\sum_{m=0}^{d-1} \biggl(\sum_{k=0}^{d-m-1}\lambda_i^k C_{(d-k-m-1)}(\m{A})\biggr)\, \m{A}^{m},
\end{align}
where the weight factor $\beta(\lambda_i)$ does not play any role in the proof.
Next, we multiply basis-free formula \eqref{expNcomplexCoordFree}
by $\m{A}$
\begin{align}
\m{A}\exp(\m{A})=\sum_{i=1}^{d} \exp(\lambda_i)\,\beta(\lambda_i)
\sum_{m=0}^{d-1} \biggl(\sum_{k=0}^{d-m-1}\lambda_i^k C_{(d-k-m-1)}(\m{A})\biggr)\, \m{A}^{m+1}\,,
\end{align}
and subtract the second equation from the first for each fixed
root $\lambda_i$, i.e., temporarily ignoring the summation over roots,
\begin{align}\label{difFormula}
& \left.\biggl(\left.\frac{\partial\exp(\m{A}t)}{\partial t}\right|_{t=1}- \m{A}\exp(\m{A})\biggr)\right|_{\lambda_i}=
\exp(\lambda_i)\, \beta(\lambda_i)
\Bigl(
\sum_{k=1}^{d} \bigl(\lambda_i^k -\m{A}^{k}\bigr) C_{(d-k)}(\m{A})
\Bigr)\notag\\
&\qquad =\exp(\lambda_i) \,\beta(\lambda_i)\Bigl(
\bigl(\lambda_i^d -\m{A}^{d}\bigr) C_{(0)}(\m{A})+\bigl(\lambda_i^{d-1} -\m{A}^{d-1}\bigr) C_{(1)}(\m{A})+\cdots + \bigl(\lambda_i -\m{A}\bigr) C_{(d-1)}(\m{A})
\Bigr)\,.
\end{align}
From Cayley-Hamilton relations (which follow from algorithm~\eqref{FLAlg})
\begin{align*}
\sum_{k=0}^{d} \m{A}^k C_{(d-k)}(\m{A})=\m{A}^{d}C_{(0)}(\m{A})+\m{A}^{d-1}C_{(1)}(\m{A})+\cdots + C_{(d)}(\m{A})= &0\notag,\\
\sum_{k=0}^{d} \lambda_i^k C_{(d-k)}(\m{A})= \lambda_i^d C_{(0)}(\m{A})+\lambda_i^{d-1}C_{(1)}(\m{A})+\cdots +
C_{(d)}(\m{A})=&0,
\end{align*}
we solve for the highest powers $\m{A}^{d}$ and $\lambda_i^d$ and,
after substituting them into the difference
formula~\eqref{difFormula} and expanding, we obtain zero. Since
the identity holds for each root $\lambda_i$, it is true for the
sum over roots as well.
\end{proof}
\textbf{Example 3.} {\it Exponential of MV in \cl{4}{0} with (multiple) zero eigenvalues.}
Let's compute the exponential of
$\m{A}=-4-\e{1}-\e{2}-\e{3}-\e{4}-2 \sqrt{3} \e{1234}$ with
basis-free formula~\eqref{expNcomplexCoordFreeWithDim}. Using
Table~\ref{tableDet} one can easily verify that $\Det(\m{A})=0$.
For algebra $\cl{4}{0}$ we find $d=4$. The characteristic
polynomial is $\chi_{\m{A}}(\lambda)=C_{(4)}(\m{A})+C_{(3)}(\m{A})
\lambda+C_{(2)}(\m{A})\lambda^2+C_{(1)}(\m{A})
\lambda^3+C_{(0)}(\m{A})\lambda^4=
-64 \lambda^2-16 \lambda^3-\lambda^4 = -\lambda^2 (8+\lambda)^2$.
The roots are $\lambda_1=0,
\lambda_2=0, \lambda_3=-8, \lambda_4=-8$. Since multiple roots appear, we have to compute the
minimal polynomial (see Appendix~\ref{sec:appendix}) of $\m{A}$,
which is $\mu_{\m{A}}(\lambda)=\lambda (8 + \lambda)$. Since the
minimal polynomial has only linear factors, i.e. it is square
free, the MV is diagonalizable, and the formula \eqref{expNcomplexCoordFreeWithDim} can be applied without modification. It is
also easy to verify that the minimal polynomial divides the
characteristic polynomial,
$\chi_{\m{A}}(\lambda)/\mu_{\m{A}}(\lambda)=\frac{-\lambda^2
(8+\lambda)^2}{\lambda (8 + \lambda)}=-\lambda (8 + \lambda)$.
This confirms that non-repeating roots of the characteristic
polynomial provide a sufficient but not necessary criterion of
diagonalizability.
Then we have
\begin{equation}\label{example3Bi}
\begin{split}
\beta(\lambda_i)\m{B}(\lambda_i)=& \frac{1}{\sum_{j=0}^{d-1}(j+1)\,C_{(d -j-1)}(\m{A})\, \lambda_i^{j}}\,
\sum_{m=0}^{d-2} \sum_{k=0}^{d-m-2}\lambda_i^k C_{(d-k-m-2)}(\m{A})\, \langle\m{A}^{m+1}\rangle_{-0}\\
=&
\frac{8+\lambda_i}{4 \lambda_i (4+\lambda_i)}\langle\m{A}\rangle_{-0}+ \frac{16+\lambda_i}{4 \lambda_i (4+\lambda_i) (8+\lambda_i) }\langle\m{A}^{2}\rangle_{-0}+\frac{1}{4 \lambda_i (4+\lambda_i) (8+\lambda_i) }\langle\m{A}^{3}\rangle_{-0}
\\
=&-\frac{1}{\lambda_i +4}-\frac{1}{4 (\lambda_i +4)}\e{1}-\frac{1}{4 (\lambda_i +4)}\e{2}-\frac{1}{4 (\lambda_i +4)}\e{3}-\frac{1}{4 (\lambda_i +4)}\e{4}-\frac{\sqrt{3}}{2 \lambda_i +8}\e{1234}\/.
\end{split}
\end{equation}
From the middle line one might suppose that the sum over roots would
yield division by zero due to the vanishing denominators. The last
line, however, demonstrates that this is not the case: after
collecting terms at the basis elements we see that all potential
zeroes in the denominators have cancelled. Unfortunately, the
cancellation does not occur for non-diagonalizable MVs (see the next
example). Lastly, after performing the summation $\sum_{i=1}^{d}
\exp(\lambda_i)\bigl(\frac{1}{d}+\beta(\lambda_i)\m{B}(\lambda_i)\bigr)$
over the complete set of roots
$\{\lambda_1,\lambda_2,\lambda_3,\lambda_4\}=\{0,0,-8,-8\}$ with the
exponential weight factor $\exp(\lambda_i)$ (which can be replaced by
any other function or transformation, see
Sec.~\ref{sec:otherfunctions}) we obtain
\begin{align*}
\exp(\m{A})= & \frac{1+\ee^8}{2 \ee^8}+
\frac{1-\ee^8}{8 \ee^8}\big(\e{1}+ \e{2}+\e{3}+
\e{4}-2\sqrt{3}\e{1234}\,\big).
\end{align*}
\textbf{Example 4.} {\it Exponential of a non-diagonalizable
MV in \cl{3}{0}.} Let's find the exponential of $\m{A}=-1+2
\e{1}+\e{2}+2 \e{3}-2 \e{12}-2 \e{13}+\e{23}-\e{123}$ with the
help of the basis-free formula~\eqref{expNcomplexCoordFreeWithDim}.
For algebra $\cl{3}{0}$ we have $d=4$. The minimal polynomial is
$\mu_{\m{A}}(\lambda)=-(2+2 \lambda+\lambda^2)^2$ which coincides
with characteristic polynomial
$\chi_{\m{A}}(\lambda)=-\Det(\lambda-\m{A})$ and has multiple
roots $\{-(1+\mathrm{i}),-(1+\mathrm{i}),-(1-\mathrm{i}),-(1-\mathrm{i})\}$. Now, if we
proceed as in Example~3, then for some basis elements we will get
division by zero. To avoid this, we add a small element to the
exponent, $\m{A} + \varepsilon \e{1}=\m{A}^\prime$, and after
exponentiation and simplification compute the limiting
value as $\varepsilon\to 0$. The infinitesimal element
$\varepsilon \e{1}$ may be replaced by any other element that does
not belong to the center of the algebra. We find that
$\chi_{\m{A}^\prime}(\lambda)= -\lambda ^4-4 \lambda ^3+(2
(\varepsilon -4) \varepsilon -8) \lambda ^2+(4 (\varepsilon -6)
\varepsilon -8) \lambda -\varepsilon (\varepsilon ((\varepsilon
-8) \varepsilon +20)+8)-4$, the limit of which is
$\lim_{\varepsilon\to 0 }
\chi_{\m{A}^\prime}(\lambda)=\chi_{\m{A}}(\lambda)$. When
$\varepsilon$ is included the polynomial has four (now different) roots
$\lambda_1=-(1+\mathrm{i})-\sqrt{\varepsilon ^2-(4+2 \mathrm{i}) \varepsilon }$,
$\lambda_2=-(1+\mathrm{i})+\sqrt{\varepsilon ^2-(4+2 \mathrm{i}) \varepsilon }$,
$\lambda_3=-(1-\mathrm{i})-\sqrt{\varepsilon ^2-(4-2 \mathrm{i}) \varepsilon }$,
$\lambda_4=-(1-\mathrm{i})+\sqrt{\varepsilon ^2-(4-2 \mathrm{i}) \varepsilon }$,
which in the limit $\varepsilon\to 0$ return to the multiple
roots. Since the roots with $\varepsilon$ included are
distinct, the division by zero in the calculation of
$\beta(\lambda_i)\m{B}(\lambda_i)$ disappears,
\begin{equation*}
\begin{split}
\beta(\lambda_i)\m{B}(\lambda_i)= &\frac{ \left(-2
\varepsilon ^2-8 \varepsilon +\lambda_i ^2+4 \lambda_i +8\right)\langle\m{A}^{\prime}\rangle_{-0}}{4 \left(-\varepsilon ^2 \lambda_i -\varepsilon
^2-4 \varepsilon \lambda_i -6 \varepsilon +\lambda_i ^3+3 \lambda_i ^2+4 \lambda_i +2\right)}
+\frac{ (\lambda_i +4)\langle\m{A}^{\prime 2}\rangle_{-0}}{4 \left(-\varepsilon ^2 \lambda_i -\varepsilon ^2-4
\varepsilon \lambda_i -6 \varepsilon +\lambda_i ^3+3 \lambda_i ^2+4 \lambda_i +2\right)}\\
&\qquad+\frac{\langle\m{A}^{\prime 3}\rangle_{-0}}{4 \left(-\varepsilon ^2 \lambda_i -\varepsilon ^2-4 \varepsilon \lambda_i -6 \varepsilon +\lambda_i ^3+3
\lambda_i ^2+4 \lambda_i +2\right)}\\
=&\frac{1}{4}\biggl(1+\frac{1}{\lambda_i ^3+3 \lambda_i ^2+\bigl(4-\varepsilon (\varepsilon +4)\bigr) \lambda_i + 2 -\varepsilon (\varepsilon +6)}\biggl(
\bigl(
(\varepsilon +2) \lambda_i ^2+2 (\varepsilon +3) \lambda_i-\varepsilon (10+\varepsilon
(\varepsilon +6))+2\bigr) \e{1}\\
&\qquad\qquad +\bigl(\lambda_i^2+6\lambda_i-\varepsilon (\varepsilon +8)+4\bigr) \e{2} + 2\bigl(\lambda_i ^2-\varepsilon (\varepsilon +2)-2\bigr) \e{3}
+ 2\bigl(-\lambda_i^2-4\lambda_i+\varepsilon (\varepsilon +6)-2\bigr) \e{12}\\
&\qquad\qquad +2\bigl(-\lambda_i^2 -\lambda_i +\varepsilon (\varepsilon +3)+1\bigr) \e{13}
+ \bigl(\lambda_i ^2-2 (\varepsilon +1) \lambda_i +(\varepsilon -2) \varepsilon -4\bigr) \e{23}\\
&\qquad\qquad- \bigl(\lambda_i ^2-2 (\varepsilon -1)\lambda_i +\varepsilon (\varepsilon +2)+2\bigr) \e{123}
\biggr)\biggr)\,.
\end{split}
\end{equation*}
After summation over all the roots $\{\lambda_1,\lambda_2,
\lambda_3,\lambda_4\}$ in $\sum_{i=1}^{4}
\exp\bigl(\lambda_i\bigr)\Bigl(\frac{1}{4}+\beta(\lambda_i)\m{B}(\lambda_i)\Bigr)$,
we collect terms at the basis elements and finally compute the limit
$\varepsilon\to 0$ for each of the coefficients. Then, after
simplification, we get the following answer,
\begin{equation*}
\begin{split}
\exp(\m{A})= & \frac{1}{\ee}\bigl(\cos (1) +(\sin (1)+2 \cos (1))\e{1} + (2 \sin (1)+\cos (1))\e{2}+2 (\cos (1)-\sin (1))\e{3}
-2 (\sin (1)+\cos (1))\e{12}\\
&\qquad+ (\sin (1)-2 \cos (1))\e{13}+ (\cos (1)-2 \sin (1))\e{23}-\sin (1) \e{123}\bigr)\,.
\end{split}
\end{equation*}
It should be noted that computation of the limit is highly
nontrivial task, especially when dealing with the roots of high
degree polynomial equations. The primary purpose of Example~4 was
to show that non-diagonalizable matrices/MVs represent some
limiting case and the (symbolic) formula is able to take into
account this case. To illustrate how complicated computation of
exponential of non-diagonalizable matrix for higher dimensional
Clifford algebras could be we have tested internal
\textit{Mathematica} command MatrixExp[\,] using $\cl{4}{2}$
algebra and non-diagonalizable MV $\m{A}^{\prime\prime}=
-1-\e{3}+\e{6}-\e{12}-\e{13}+\e{15}-\e{24}-\e{25}+\e{26}-\e{34}-\e{35}+\e{36}-\e{45}+\e{56}+\e{123}+\e{124}+\e{126}+\e{134}+\e{135}+\e{136}+\e{146}+\e{234}-\e{235}-\e{236}-\e{245}-\e{246}-\e{256}+\e{456}
-\e{1236}+\e{1245}-\e{1246}+\e{1256}-\e{1345}-\e{1346}-\e{1356}+\e{1456}-\e{2346}-\e{2356}+\e{2456}+\e{3456}+\e{12345}-\e{12346}+\e{12356}
$ that was converted to its $8\times 8$ real matrix
representation. The respective MV has minimal polynomial
$(\lambda -1)^2 \left(\lambda ^6+10 \lambda ^5+39 \lambda ^4+124
\lambda ^3+543 \lambda ^2-198 \lambda -4743\right)$.
\textit{Mathematica} (version 13.0) crashed after almost $48$
hours of computation, once all 96~GB of RAM had been exhausted. This
contrasts sharply with the exponentiation of a diagonalizable
matrix of the same \cl{4}{2} algebra: it took only a fraction of a
second to complete the task.
\subsection{Making the answer real}
\label{realanswer}
Formulas~\eqref{expNcomplexCoord} and~\eqref{expNcomplexCoordFree}
include summation over (in general complex-valued) roots of the
characteristic polynomial; therefore, formally the result is a
complex number. Here we are dealing with real Clifford algebras;
consequently, any purely imaginary part in the final result
must vanish. Because the characteristic polynomial has real
coefficients, its roots always come in complex conjugate pairs.
Thus, after summation over each complex conjugate root pair in the
exponentials~\eqref{expNcomplexCoord}
and~\eqref{expNcomplexCoordFree} (and other real-valued functions)
one must get a real final answer. Indeed, assuming that the symbols
$a,b,c,d,g,h$ are real and computing the sum over a single complex
conjugate root pair, we arrive at the relation
\begin{align*}
\exp(a + \mathrm{i} b) \frac{c + \mathrm{i} d}{g + \mathrm{i} h} + \exp(a - \mathrm{i} b) \frac{c - \mathrm{i} d}{g - \mathrm{i} h}=
\frac{2 \ee^{a} \bigl((c g+d h) \cos b+(c h-d g) \sin
b\bigr)}{g^2+h^2},
\end{align*}
the right-hand side of which is manifestly real, as expected. The
left-hand side is exactly the expression which appears in
formulas~\eqref{expNcomplexCoord} and~\eqref{expNcomplexCoordFree}
after substitution of a pair of complex conjugate roots.
However, from a symbolic computation point of view the issue is not
so simple. In general, the roots of high-degree ($d\ge 5$)
polynomial equations cannot be expressed in radicals and, therefore,
in symbolic packages they are represented as enumerated formal
functions/algorithms of some irreducible polynomials. In
\textit{Mathematica} the formal solution is represented as
Root[poly,\,k]. In order to obtain a real answer, we have to know
how to manipulate these formal objects algebraically. To
that end there exist algorithms which allow one to rewrite the
coefficients of the irreducible polynomials `poly' after they have
been algebraically manipulated. The operation, however, appears to
be nontrivial and time consuming. In \textit{Mathematica} it is
implemented by the RootReduce[\,] command, which produces another
Root[poly$^\prime$,\,k$^\prime$] object. Such a root reduction
typically raises the order of the polynomial. From a purely numerical
point of view, of course, we may safely remove the spurious complex
part in the final answer to get a real numerical value.
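As a quick numerical sanity check (a standalone sketch, not part of the symbolic package), one can verify that the sum over a single conjugate root pair agrees with the real closed form above:

```python
import cmath
import math

def pair_sum(a, b, c, d, g, h):
    # Sum of exp(lambda)*(c + i d)/(g + i h) over the conjugate pair
    # lambda = a + i b and lambda = a - i b; imaginary parts cancel.
    z = cmath.exp(complex(a, b)) * complex(c, d) / complex(g, h)
    zbar = cmath.exp(complex(a, -b)) * complex(c, -d) / complex(g, -h)
    return z + zbar

def real_form(a, b, c, d, g, h):
    # The real-valued right-hand side derived in the text.
    return 2 * math.exp(a) * ((c * g + d * h) * math.cos(b)
                              + (c * h - d * g) * math.sin(b)) / (g**2 + h**2)
```

For arbitrary real inputs, `pair_sum` has vanishing imaginary part and matches `real_form` to machine precision.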
\section{Elementary functions of MV}
\label{sec:otherfunctions} The exponential
formulas~\eqref{expNcomplexCoord} and~\eqref{expNcomplexCoordFree}
are more universal than we had expected. In fact they allow one to
compute any function and transformation of an MV (at least for a
diagonalizable MV) if one replaces the exponential weight
$\exp(\lambda_i)$ by any other function (and allows the use of complex
numbers). Here we demonstrate how to compute the $\log (\m{A}),
\sinh (\m{A})$, $\arcsinh (\m{A})$ and Bessel $J_0(\m{A})$ GA
functions of the MV argument $\m{A}$ in $\cl{4}{0}$
(Example~3). This example, with zero and negative
eigenvalues, was chosen to demonstrate that no problems arise if the
symbolic manipulations are handled with care.
After replacement of $\exp(\lambda_i)$ by $\log(\lambda_i)$
in~\eqref{expNcomplexCoordFree} and summing up over all roots one
obtains
\begin{equation}\label{logfromexp}
\begin{split}
\log{\m{A}}=\frac{1}{2}(\log (0_{+})
+\log (-8))+\frac{1}{8} (\log (-8)-\log (0_{+})) \left(\e{1}+\e{2}+\e{3}+\e{4}+2 \sqrt{3}\,\e{1234}\right).
\end{split}
\end{equation}
We shall not attempt to explain what $\log (-8)$ means in
\cl{4}{0} since we want to avoid the presence of complex numbers in
the real algebra $\cl{4}{0}$. If we assume, however, that $\exp\bigl(\log
(-8)\bigr)=-8$ and $\exp\bigl(\log (0_{+})\bigr)=\lim_{x\to
0_{+}}\exp\bigl(\log (x)\bigr)=0$, then it is easy to check that
under these assumptions the exponentiation of $\log{\m{A}}$
yields $\exp(\log(\m{A}))=\m{A}$, i.e., the $\log$ function in
Eq.~\eqref{logfromexp} is a formal inverse of $\exp$.
There are no problems when computing hyperbolic and trigonometric
functions and their inverses\footnote{It appears that complex
numbers are inevitable when computing trigonometric functions in
most real Clifford algebras, except $\cl{3}{0}$ and a few
others~\cite{Chappell2014}.}. Indeed, after replacing
$\exp(\lambda_i)$ by $\sinh(\lambda_i)$ and $\arcsinh(\lambda_i)$
in~\eqref{expNcomplexCoordFree} one finds, respectively,
\begin{equation}
\begin{split}
\sinh{\m{A}}=&\frac{1}{8}\sinh(8)\bigl(-4-\e{1}- \e{2}-\e{3}-\e{4}-2 \sqrt{3}\,\e{1234}\bigr),\\
\arcsinh{\m{A}}=&\frac{1}{8}\arcsinh(8)\bigl(-4 -\e{1} - \e{2}-\e{3}-\e{4} -2 \sqrt{3}\, \e{1234} \bigr),\\
J_0(\m{A})=& \frac{1}{2} (1+J_0(8))+
\frac{1}{8} (J_0(8)-1) \bigl(\e{1}+\e{2}+\e{3}+\e{4}+2\sqrt{3}\,\e{1234} \bigr) \,.
\end{split}
\end{equation}
In the last line $\exp(\lambda_i)$ was replaced by the Bessel
function $J_0(\lambda_i)$. It is easy to check that
$\sinh\bigl(\arcsinh(\m{A})\bigr)=\m{A}$ is indeed satisfied. We
do not speculate where special functions of an MV/matrix
argument may find application. The purpose of the last
computation was to demonstrate that the
formulas~\eqref{expNcomplexCoord} and~\eqref{expNcomplexCoordFree}
are more universal: they allow one to compute a much larger class
of functions and transformations of MVs, because the sum operator
in the formulas satisfies the properties
\begin{equation}
\sum_{i=1}^{d}\beta(\lambda_i)\m{B}(\lambda_i)=0,\qquad \sum_{i=1}^{d} \lambda_i\Bigl(\frac{1}{d}+\beta(\lambda_i)\m{B}(\lambda_i)\Bigr)=\m{A}\,,
\end{equation}
where the scalar $\beta(\lambda_i)$ is given in formula
\eqref{beta}. These expressions provide an interesting spectral
decomposition of the MV.
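The same spectral recipe can be checked numerically on a matrix representation. The sketch below is numpy-based and assumes a diagonalizable matrix; it is an illustration of the idea, not the symbolic GA formula itself. It replaces the weight $\exp(\lambda_i)$ by an arbitrary function $f$:

```python
import numpy as np

def mat_func(A, f):
    # f(A) = sum_i f(lambda_i) P_i, assembled from the eigendecomposition;
    # valid for diagonalizable A.
    lam, V = np.linalg.eig(A)
    return (V @ np.diag(f(lam)) @ np.linalg.inv(V)).real
```

For example, for $A=\bigl(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\bigr)$ with $A^2=I$ one gets $\sinh A=\sinh(1)\,A$, and applying `np.sinh` after `np.arcsinh` recovers $A$, mirroring $\sinh\bigl(\arcsinh(\m{A})\bigr)=\m{A}$ above.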
\section{Conclusion}
\label{sec:conlusion} The paper shows that in Clifford geometric
algebras the exponential of a general multivector is associated
with the characteristic polynomial of the multivector (the
exponent) and may be expressed in terms of the roots of the
respective characteristic equation. In higher-dimensional algebras
the coefficients at the basis elements, in agreement
with~\cite{AcusDargys2022b}, include a mixture of trigonometric
and hyperbolic functions. The presented GA exponential
formulas~\eqref{expNcomplexCoord} and~\eqref{expNcomplexCoordFree}
can be generalized to trigonometric, hyperbolic and other
elementary functions as well. Besides the explicit examples of
functions provided in the article, we were able to compute
fractional powers of MVs and many special functions available in
\textit{Mathematica}, in particular HermiteH[\,] and LaguerreL[\,]
(also with rational parameters), as well as some hypergeometric
functions.
\section{Appendix: Minimal polynomial of MV}
\label{sec:appendix}
A simple algorithm for the computation of the minimal polynomial of
a matrix is given in \cite{Mathworld2022}. It starts by constructing
a $d\times d$ matrix $M$ and its powers $\{1, M, M^2,\ldots\}$ and
then converts each matrix into a vector of length $d\times d$. The
algorithm then checks consecutively the sublists $\{1\}$, $\{1,
M\}$, $\{1, M, M^2\}$, etc., until the vectors in a particular
sublist are detected to be linearly dependent. Once linear
dependence is established, the algorithm returns the polynomial
equation in which the coefficients of the linear combination are
multiplied by the corresponding powers $\lambda^i$.
In GA, the orthonormal basis elements $\e{J}$ are linearly
independent; therefore it is enough to construct vectors made from
the coefficients of the MV. The algorithm then searches for the
point at which these coefficient vectors become linearly dependent.
A vector constructed from the matrix representation of an MV has
$d^2=\bigl(2^{\lceil\tfrac{n}{2}\rceil}\bigr)^2$ components. For
even $n$ this is exactly the number of coefficients ($2^n$) of an
MV, while for odd $n$ the MV has half as many coefficients as there
are matrix elements in the $d\times d$ representation. The latter
is easily understood if one remembers that for odd $n$ the matrix
representation of a Clifford algebra has block diagonal form;
therefore a single block suffices for the matrix algorithm. Below,
Algorithm~\ref{minimalPoly} describes how to compute the minimal
polynomial of an MV without employing a matrix representation.
\begin{algorithm}
\SetAlgoLined \SetNoFillComment \LinesNotNumbered
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwProg{minimalPoly}{minimalPoly}{}{}$\MinimalPoly{(\m{A})}${\\
\KwInput{multivector $\m{A}=a_0+\sum_J^{2^n-1}a_J\e{J}$ and polynomial variable $x$}
\KwOutput{minimal polynomial $c_1+c_2 x+c_3 x^2+\cdots$}
{\scriptsize \tcc{Initialization}}
nullSpace=\{\};\quad lastProduct=1;\quad vectorList=\{\}\;
{\scriptsize\tcc{keep adding new MV coefficient vectors to vectorList until null space becomes nontrivial}}
While[nullSpace===\{\},\\
lastProduct=$\m{A}\circ$lastProduct\;
AppendTo[vectorList,\, ToCoefficientList[lastProduct]]\;
nullSpace=NullSpace[Transpose[vectorList]];
]\\
{\scriptsize\tcc{use null space weights to construct the polynomial $c_1+c_2 \m{A}+c_3 \m{A}^2+\cdots$, with $\m{A}$ replaced by given variable $x$}}\vskip 5pt
\Return{$\mathrm{First[nullSpace]}\cdot \{x^0,x^1,x^2,\ldots,x^{\mathrm{Length[nullSpace]-1}}\}$}.
} \label{minimalPoly}\caption{Algorithm for finding minimal polynomial of
MV in $\cl{p}{q}$}
\end{algorithm}
All functions in the above code are internal {\it Mathematica}
functions, except the symbol $\circ$ (geometric product) and
ToCoefficientList[\,], which is rather simple: it takes the MV
$\m{A}$ and outputs the vector of its coefficients, i.e.,
ToCoefficientList[$a_0+a_1\e{1}+a_2\e{2}+\cdots+a_I I$]$\to
\{a_0,a_1,a_2,\ldots,a_I \}$. The real work is done by the {\it
Mathematica} function NullSpace[\,], which searches for a linear
dependency in the inserted vector list. This is a standard function
of the linear algebra library. If the list of vectors is found to
be linearly dependent, it outputs the weight factors of the linear
combination for which the sum of vectors becomes zero, or an empty
list otherwise. AppendTo[vectorList, newVector] appends newVector
to the list of already checked vectors in vectorList.
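For readers without \textit{Mathematica}, the same null-space loop can be mimicked as follows (a hypothetical Python/SymPy translation; here the coefficient vectors are obtained by flattening matrix powers rather than by ToCoefficientList[\,], and the function name is illustrative):

```python
import sympy as sp

def minimal_poly(M, x):
    # Keep appending flattened powers I, M, M^2, ... until the stacked
    # vectors become linearly dependent (NullSpace analogue).
    d = M.shape[0]
    rows = []
    power = sp.eye(d)
    while True:
        rows.append(list(power.reshape(1, d * d)))
        ns = sp.Matrix(rows).T.nullspace()
        if ns:
            c = ns[0]
            # Null-space weights give the polynomial c1 + c2*x + ...
            return sp.expand(sum(c[i] * x**i for i in range(len(c))))
        power = power * M
```

For instance, `minimal_poly(sp.Matrix([[2, 0], [0, 2]]), x)` returns $x-2$, and the nilpotent matrix $\bigl(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\bigr)$ yields $x^2$. (The result is normalized only up to a scalar factor, as in Algorithm~\ref{minimalPoly}.)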
Supporting information available as part of the online article:
\url{https://github.com/ArturasAcus/GeometricAlgebra}.
\bibliographystyle{REPORT}
\section{Introduction}
Let $B$ be a commutative ring with identity and let $A$ be a
subring of $B$ having the same identity. The ring $B$ is called a
\textit{rational extension} or a \textit{ring of quotients} of $A$
if for every $b\in B$ the subring $b^{-1}A=\{a\in A:ba\in A\}$ is
rationally dense in $B$, that is, $b^{-1}A$ does not have nonzero
annihilators. Any ring $A$ has a maximal rational extension
$\mathcal{Q}(A)$. The ring $\mathcal{Q}(A)$ is also called a
complete or total ring of quotients of $A$. The classical ring of
quotients
\[\mathcal{Q}_{cl}(A)=\left\{\frac{p}{q}:p,q\in A,\ q
\mbox{ is not a zero divisor}\right\}
\]
is, in general, a subring of $\mathcal{Q}(A)$. A ring without
proper rational extension is called \textit{rationally complete}.
It is well known that the ring $C(X)$ of all continuous real
functions on a topological space $X$ is not rationally complete.
Our goal is to represent the rational extensions of $C(X)$ as
rings of functions defined on the same domain, namely as rings of
Hausdorff continuous (H-continuous) functions on $X$. The
H-continuous functions are a special class of extended interval
valued functions, that is, their range, or target set, is
$\mathbb{I\overline{R}}=\{[\underline{a},\overline{a}]:\underline{a},
\overline{a}\in\overline{\mathbb{R}}=\mathbb{R}\cup\{\pm\infty\},\
\underline{a}\leq\overline{a}\}$. Due to a certain minimality
condition, they are not unlike the usual continuous functions. For
instance, they are completely determined by their values on any
dense subset of the domain. More precisely, for H-continuous
functions $f$, $g$ and a dense subset $D$ of $X$ we have
\begin{equation}\label{ident}
f(x)=g(x),\ x\in D\ \Longrightarrow f(x)=g(x),\ x\in X.
\end{equation}
Within the realm of Set-Valued Analysis, the H-continuous
functions can be identified with the minimal upper semi-continuous
compact set-valued (usco) maps from $X$ into
$\overline{\mathbb{R}}$, \cite{musco}.
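As an illustration (a standard example from the literature on interval-valued functions, not taken from this paper), consider the interval-valued step function on $X=\mathbb{R}$:

```latex
f(x)=
\begin{cases}
-1, & x<0,\\
[-1,1], & x=0,\\
1, & x>0.
\end{cases}
```

Its graph is closed, and any S-continuous function contained in $f$ must keep the whole interval $[-1,1]$ at $0$ (a smaller value there would destroy the closedness of the graph), so $f$ is H-continuous. Here $W_f=\{0\}$, and $f$ is indeed determined by its values on the dense set $\mathbb{R}\setminus\{0\}$, in the spirit of (\ref{ident}).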
We extend the ring structure on $C(X)$ to the set
$\mathbb{H}_{nf}(X)$ of all nearly finite H-continuous functions
following the approach in \cite{NMA2007}. We show further that the
ring $\mathbb{H}_{nf}(X)$ is rationally complete. Hence, it
contains all rational extensions of $C(X)$. The maximal and
classical rings of quotients are represented as subrings of
$\mathbb{H}_{nf}(X)$.
Let $||\cdot||$ denote the supremum norm on the set $C_{bd}(X)$ of
the bounded continuous functions. Then
\begin{equation}\label{rho}
\rho(f,g)=\left|\!\left|\frac{f-g}{1+|f-g|}\right|\!\right|
\end{equation}
is a metric on $C(X)$, which can be extended further to
$\mathcal{Q}(C(X))$. We prove that its completion
$\overline{\mathcal{Q}}(C(X))$ with respect to this metric is
exactly $\mathbb{H}_{nf}(X)$. The Dedekind completions $C(X)^\#$
and $C_{bd}(X)^\#$ of $C(X)$ and $C_{bd}(X)$ with respect to the
pointwise partial order, being subrings of $\mathcal{Q}(C(X))$,
also admit a convenient representation as rings of H-continuous
functions. These results significantly improve earlier results of
Fine, Gillman and Lambek \cite{FGL}, where the considered rings
are represented as direct limits of rings of continuous functions
on dense subsets of $X$ and $\beta X$.\vspace{3pt}
\section{The ring of nearly finite Hausdorff continuous functions}
We recall \cite{Sendov} that an interval function
$f:X\to\mathbb{I\overline{R}}$ is called \textit{S-continuous} if
its graph is a closed subset of $X\times\mathbb{\overline{R}}$. An
interval function $f:X\to\mathbb{I\overline{R}}$ is Hausdorff
continuous (H-continuous) if it is an S-continuous function which
is minimal with respect to inclusion, that is, if
$\varphi:X\to\mathbb{I\overline{R}}$ is an S-continuous
function then $\varphi\subseteq f$ implies $\varphi=f$. Here the
inclusion is considered in a point-wise sense. We denote by
$\mathbb{H}(X)$ the set of H-continuous functions on $X$.
Given an interval
$a=[\underline{a},\overline{a}]\in\mathbb{I\overline{R}}$,
\[
w(a)=\left\{
\begin{tabular}
[c]{lll}%
$\overline{a}-\underline{a}$ & if & $\underline{a},\overline{a}$ finite,\\
$+\infty$ & if & $\underline{a}<\overline{a}=+\infty$ or $\underline{a}%
=-\infty<\overline{a}$,\\
0 & if & $\underline{a}=\overline{a}=\pm\infty,$%
\end{tabular}
\right.
\]
is the width of $a$, while
$|a|=\max\{|\underline{a}|,|\overline{a}|\}$ is the modulus of
$a$. An interval $a$ is called a proper interval if $w(a)>0$ and
a point interval if $w(a)=0$. Identifying $a\in
\mathbb{\overline{R}}$ with the point interval $[a,a]\in
\mathbb{I\overline{R}}$, we consider $\mathbb{\overline{R}}$ as a
subset of $\mathbb{I\overline{R}}$. H-continuous functions are
similar to the usual real-valued continuous functions in that
they assume proper interval values only on a set of first Baire
category, that is, for every $f\in\mathbb{H}(X)$ the set
$W_f=\{x\in X : w(f(x))>0\}$ is a countable union of closed
nowhere dense sets, \cite{QM2004}. Furthermore, $f$ is continuous
on $X\setminus W_f$. If $X$ is a Baire space, $X\setminus W_f$ is
also dense in $X$. Thus, in this case, $f$ is completely
determined by its point values. This approach is used for defining
the linear space operations \cite{RC} and ring operations
\cite{NMA2007} for H-continuous functions. Here we do not make any
such assumption on $X$; hence the approach is different.
For every S-continuous function $g$ we denote by $\langle
g\rangle$ the set of H-continuous functions contained in $g$, that
is,
\[ \langle g\rangle=\{f\in\mathbb{H}(X):f\subseteq g\}.
\]
Identifying $\{f\}$ with $f$ we have $\langle f\rangle =f$
whenever $f$ is H-continuous. The S-continuous functions $g$ such
that the set $\langle g\rangle$ is a singleton, that is, contains
only one function, play an important role in the sequel.
In analogy with the H-continuous functions, which are minimal
S-continuous functions, we call these functions
\textit{quasi-minimal S-continuous functions} \cite{musco}. The
following characterization of the quasi-minimal S-continuous
functions is useful.
\begin{theorem}\label{tqminchar}
Let $f$ be an S-continuous function on $X$. Then $f$ is a
quasi-minimal S-continuous function if and only if for every
$\varepsilon>0$ the set
\[
W_{f,\varepsilon}=\{x\in X:w(f(x))\geq\varepsilon\}
\]
is closed and nowhere dense in $X$.
\end{theorem}
\begin{proof}
Let us assume that an S-continuous function $f$ is not
quasi-minimal. Then there exist H-continuous functions
$\phi=[\underline{\phi},\overline{\phi}]$ and
$\psi=[\underline{\psi},\overline{\psi}]$, $\phi\neq \psi$, such
that $\phi\subseteq f$ and $\psi\subseteq f$. Due to the
minimality property of H-continuous functions the set $\{x\in
X:\phi(x)\cap\psi(x)=\emptyset\}$ is an open and dense subset of $X$.
Let $a\in X$ be such that $\phi(a)\cap\psi(a)=\emptyset$. Without
loss of generality we may assume that
$\overline{\psi}(a)<\underline{\phi}(a)$. Let
$\varepsilon=\frac{1}{3}(\underline{\phi}(a)-\overline{\psi}(a))$.
Using that $\overline{\psi}$ and $\underline{\phi}$ are
respectively upper semi-continuous and lower semi-continuous
functions, there exists an open neighborhood $V$ of $a$ such that
$\overline{\psi}(x)<\overline{\psi}(a)+\varepsilon<\underline{\phi}(a)-\varepsilon<\underline{\phi}(x)$,
$x\in V$. Then
\[
w(f(x))\geq
\underline{\phi}(x)-\overline{\psi}(x)>\underline{\phi}(a)-\overline{\psi}(a)-2\varepsilon=\varepsilon,\
\ x\in V.
\]
Hence $V\subset W_{f,\varepsilon}$, which implies that
$W_{f,\varepsilon}$ is not nowhere dense. Therefore, if
$W_{f,\varepsilon}$ is nowhere dense for every $\varepsilon>0$
then $f$ is quasi-minimal.
Now we prove the converse implication, that is, that for any
quasi-minimal S-continuous function $f$ and $\varepsilon>0$ the
set $W_{f,\varepsilon}$ is closed and nowhere dense. Assume the
opposite. Since for an S-continuous function $f$ the set
$W_{f,\varepsilon}$ is always closed, this means that there exist
an S-continuous function $f=[\underline{f},\overline{f}]$ and
$\varepsilon>0$ such that $W_{f,\varepsilon}$ is not nowhere
dense. Hence there exists an open set $V$ such that $V\subseteq
W_{f,\varepsilon}$. Then there exists an H-continuous function
$\phi$ on $V$ such that $\phi(x)\subseteq
[\underline{f}(x),\overline{f}(x)-\varepsilon]$, $x\in V$. Then we
have $\phi(x)+\varepsilon \subseteq
[\underline{f}(x)+\varepsilon,\overline{f}(x)]$, $x\in V$. It is
easy to see that the functions $\phi$ and $\phi+\varepsilon$ can
both be extended from $V$ to the whole space $X$ so that they
belong to $\langle f\rangle$. Hence $f$ is not quasi-minimal.
\end{proof}
The familiar operations of addition and multiplication on the set
of real intervals are defined for $[\underline{a},\overline{a}],
[\underline{b},\overline{b}]\in\mathbb{I\,}{\mathbb{\overline{R}}}$
as follows:
\begin{eqnarray*}
&&\hspace{-4mm}[\underline{a},\overline{a}] +
[\underline{b},\overline{b}]\! =\! \{a+b:a\!\in\!
[\underline{a},\overline{a}], b\!\in\!
[\underline{b},\overline{b}]\}\!=\![\underline{a} +
\underline{b},\overline{a} + \overline{b}],
\\
&&\hspace{-4mm}[\underline{a}, \overline{a}]\!\times\!
[\underline{b},\overline{b}]\!=\! \{ab\!:\!a\!\in\!
[\underline{a},\overline{a}], b\!\in\!
[\underline{b},\overline{b}]\}\!=\!
[\min\{\underline{a}\underline{b},\underline{a}\overline{b},\overline{a}\underline{b},\overline{a}\overline{b}\},
\max\{\underline{a}\underline{b},\underline{a}\overline{b},\overline{a}\underline{b},\overline{a}\overline{b}\}].
\end{eqnarray*}
Ambiguities related to $\pm\infty$ are resolved in a way which
guarantees inclusion: \[ -\infty+(+\infty)=[-\infty,+\infty], \
0\times (+\infty)=[0,+\infty],\ 0\times(-\infty)=[-\infty,0]
\]
Point-wise operations for interval functions are defined in the
usual way:
\begin{equation}\label{f+g,fxg}
(f+g)(x)=f(x)+g(x)\ ,\ \ (f\times g)(x)=f(x)\times g(x)\ ,\ x\in
X.
\end{equation}
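For finite endpoints the two interval operations can be sketched directly (a toy illustration; the $\pm\infty$ conventions above would require extra case handling):

```python
def iadd(a, b):
    # [a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    # All four endpoint products; min/max pick out the product interval.
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))
```

For example, `imul((-1.0, 2.0), (3.0, 4.0))` gives `(-4.0, 8.0)`, in accordance with the min/max formula above.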
It is easy to see that the set of S-continuous functions is
closed under the above point-wise operations while the set of
H-continuous functions is not. In earlier works by Markov, Sendov
and the author, see \cite{NMA2007}--\cite{Varna2005}, it was shown
that the algebraic operations on the set $\mathbb{H}_{ft}(X)$ of
all finite H-continuous functions can be defined in such a way that
it is a linear space (the largest linear space of finite interval
functions) and a ring. As mentioned above, these results were
derived in the case when $X$ is a Baire space. Here we extend
these results in two ways:
(i) We assume that the domain $X$ is an arbitrary completely
regular topological space.
(ii) We consider the wider set $\mathbb{H}_{nf}$ of nearly finite
H-continuous functions.
These generalizations are motivated by the aim of the paper,
namely, constructing the rational extensions of $C(X)$ as rings of
functions defined on the same domain. More precisely, the problem
is considered in the same setting as in \cite{FGL}, that is, $X$ is
a completely regular topological space. Furthermore, it is shown in
the sequel that the rational extensions of $C(X)$ and their metric
completions considered in \cite{FGL} cannot all be represented
within the realm of finite H-continuous functions. Hence we need to
consider the larger set $\mathbb{H}_{nf}(X)$.
Let us recall that an H-continuous function $f$ is called {\it
nearly finite} if the set
\[
\Gamma_f=\{x\in X:-\infty\in f(x)\mbox{ or }+\infty\in f(x)\}
\]
is closed and nowhere dense. The set $\mathbb{H}_{nf}(X)$ has
important applications in the Analysis of PDEs within the context
of the Order Completion Method, \cite{RosingerOber}. It turns out
that the solutions of large classes of systems of nonlinear PDEs
can be assimilated with nearly finite H-continuous functions, see
\cite{AngRos}, \cite{AngRos2}. The definition of the operations on
$\mathbb{H}_{nf}(X)$ are based on the following theorem.
\begin{theorem}\label{toperqcont}
For any $f,g\in\mathbb{H}_{nf}(X)$ the functions $f+g$ and
$f\times g$ are quasi-minimal S-continuous functions.
\end{theorem}
\begin{proof}
The proofs for $f+g$ and $f\times g$ use similar ideas based on
Theorem \ref{tqminchar}. We present only the proof for
$f\times g$, which is slightly more technical. Assume the opposite.
In view of Theorem \ref{tqminchar}, this means that there exist
$\varepsilon>0$ and an open set $V$ such that $V\subset W_{f\times
g,\varepsilon}$. Furthermore, since $f$ and $g$ are nearly
finite, the set $V\setminus(\Gamma_f\cup\Gamma_g)$ is also open
and nonempty. Let $a\in V\setminus(\Gamma_f\cup\Gamma_g)$. It is
easy to see that the function $|f|$ defined by $|f|(x)=|f(x)|$,
$x\in X$, is an upper semi-continuous function. Therefore, there
exists an open set $D_1$ such that $a\in D_1\subset V
\setminus(\Gamma_f\cup\Gamma_g)$ and $|f(x)|<|f(a)|+1$, $x\in
D_1$. Similarly, there exists an open set $D_2$ such that $a\in
D_2\subset V \setminus(\Gamma_f\cup\Gamma_g)$ and
$|g(x)|<|g(a)|+1$, $x\in D_2$. Denote $D=D_1\cap D_2$. Then, using
a well known inequality for the width of a product of intervals,
see \cite{Alefeld}, for every $x\in D$ we obtain
\begin{eqnarray*}
\varepsilon&\leq& w((f\times g)(x))=w(f(x)\times g(x))\ \leq\
w(f(x))|g(x)|+w(g(x))|f(x)|\\
&\leq& w(f(x))(|g(a)|+1)+w(g(x))(|f(a)|+1)\ \leq \
\bigl(w(f(x))+w(g(x))\bigr)m,
\end{eqnarray*}
where $m=\max\{|f(a)|+1~,~|g(a)|+1\}$. This implies
\[
D \subset W_{f,\frac{\varepsilon}{2m}}\cup
W_{g,\frac{\varepsilon}{2m}}.
\]
Therefore, at least one of the sets $W_{f,\frac{\varepsilon}{2m}}$
and $W_{g,\frac{\varepsilon}{2m}}$ has a nonempty open subset.
However, by Theorem \ref{tqminchar} this is impossible. The
obtained contradiction completes the proof.
\end{proof}
Theorem \ref{toperqcont} implies that for every
$f,g\in\mathbb{H}_{nf}(X)$ the sets $\langle f+g\rangle$ and
$\langle f\times g\rangle$ contain one element each. Then we define
addition and multiplication on $\mathbb{H}_{nf}(X)$ by
\begin{eqnarray}
f\oplus g = \langle f+g\rangle\label{defadd}\\
f\otimes g= \langle f\times g\rangle\label{defmult}
\end{eqnarray}
Here and in the sequel we denote the operations by $\oplus$ and
$\otimes$ to distinguish them from the point-wise operations
denoted earlier by $+$ and $\times$. Equivalently to
(\ref{defadd})--(\ref{defmult}), one can say that $f\oplus g$ is
the unique H-continuous function contained in $f+g$ and that
$f\otimes g$ is the unique H-continuous function contained in
$f\times g$. It should be noted that the operations $\oplus$ and
$\otimes$ may coincide with $+$ and $\times$ respectively for some
values of the arguments. In particular,
\begin{equation}\label{opercoincide}
f\oplus g=f+g,\ f\otimes g=f\times g\ \mbox{ for }\ f\in C(X),\
g\in\mathbb{H}_{nf}(X).
\end{equation}
\begin{theorem}
The set $\mathbb{H}_{nf}(X)$ is a commutative ring with respect to
the operations $\oplus$ and $\otimes$.
\end{theorem}
The proof will be omitted. It involves standard techniques and is
partially discussed in \cite{RC} for the case of finite functions.
The zero and the identity in $\mathbb{H}_{nf}(X)$ are the constant
functions with values 0 and 1 respectively. We will denote them by
0 and 1 with the context showing whether we mean a constant
function or the respective real number. The multiplicative inverse
of $f\in \mathbb{H}_{nf}(X)$, whenever it exists, is denoted by
$\displaystyle\frac{1}{f}$. The non-zero-divisors in the ring
$\mathbb{H}_{nf}(X)$ can be characterized similarly to the ring
$C(X)$. However, unlike $C(X)$ all non-zero-divisors are
invertible. More precisely,
\begin{equation}\label{nonzerodivizor}
f \mbox{ is a non-zero-divisor}\Longleftrightarrow Z(f) \mbox{ is
nowhere dense in } X\Longleftrightarrow \exists g\in
\mathbb{H}_{nf}(X):f\otimes g=1
\end{equation}
where $Z(f)$ is the zero set of the function $f$ given by
$Z(f)=\{x\in X:0\in f(x)\}$. We denote the inverse of $f$, that
is, the function $g$ in (\ref{nonzerodivizor}) above, by
$\displaystyle\frac{1}{f}$. If
$f(x)=[\underline{f}(x),\overline{f}(x)]$, $x\in X$, then we have
\begin{equation}\label{inverse}
\frac{1}{f}(x)=\left[\frac{1}{\overline{f}(x)},\frac{1}{\underline{f}(x)}\right]\
,\ \ x\in coz(f)=X\setminus Z(f).
\end{equation}
Note that in view of the property (\ref{ident}), the equality
(\ref{inverse}) determines $\displaystyle\frac{1}{f}$ in a unique
way because $coz(f)$ is dense in $X$.
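On $coz(f)$ the reciprocal in (\ref{inverse}) is just an endpoint swap, since $1/x$ is decreasing. A one-line sketch (finite intervals not containing $0$ assumed):

```python
def iinv(a):
    # 1/[u, v] = [1/v, 1/u] when 0 lies outside [u, v]:
    # the endpoints swap because 1/x is a decreasing function.
    u, v = a
    if u <= 0.0 <= v:
        raise ValueError("interval contains 0")
    return (1.0 / v, 1.0 / u)
```

For instance, `iinv((2.0, 4.0))` returns `(0.25, 0.5)`, the value interval of $1/f$ at a point where $f$ takes the value $[2,4]$.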
Let $D$ be an open subset of $X$. The restriction $f|_D$ of $f$ to
$D$ is an H-continuous function on $D$, see \cite{Sozopol}. More
precisely,
\begin{equation}\label{restriction}
f\in\mathbb{H}_{nf}(X)\ \Longrightarrow \
f|_D\in\mathbb{H}_{nf}(D).
\end{equation}
Since $\mathbb{H}_{nf}(D)$ is also a ring it is useful to remark
that for any $f,g\in\mathbb{H}_{nf}(X)$ we have
\begin{equation}\label{restoperations}
(f\oplus g)|_D=f|_D\oplus g|_D\ , \ (f\otimes g)|_D=f|_D\otimes
g|_D.
\end{equation}
We will also use the following property, \cite{Sozopol}:
\begin{eqnarray}
&&\mbox{for any dense subset $D$ of $X$ and
$g\in\mathbb{H}_{nf}(D)$ there exists a unique}\nonumber\\
&&\mbox{function $f\in\mathbb{H}_{nf}(X)$ such that
$f|_D=g$.}\label{extension}
\end{eqnarray}
\section{Representing the Rational Extensions of $C(X)$}
The zero set $Z(f)$ and the cozero set $coz(f)$ of
$f\in\mathbb{H}_{nf}(X)$ generalize the respective concepts for
continuous function and play an important role in the ring
$\mathbb{H}_{nf}(X)$ as suggested by (\ref{nonzerodivizor}) and
(\ref{inverse}). This is further demonstrated in the following
useful lemma which extends the respective result in \cite[Section
2.2]{FGL}.
\begin{lemma}\label{theoIdealDense} Let $H_a(X)$ be a ring of
H-continuous functions such that $C(X)\subseteq H_a(X)\subseteq
\mathbb{H}_{nf}(X)$. An ideal $P$ of $H_a(X)$ is rationally dense
in $H_a(X)$ if and only if $coz(P)$ is a dense subset of $X$.
\end{lemma}
Any ideal $P$ of a ring $A$ is also an A-module. The rational
completeness of a ring can be characterized in terms of the
A-homomorphisms from the rationally dense ideals of $A$ to $A$ as
shown in the next theorem \cite{Lambek1966}.
\begin{theorem}\label{theoRComplCond}
A ring $A$ is rationally complete if for every rationally dense
ideal $P$ of $A$ and every $A$-homomorphism $\varphi:P\to A$ there
exists $s\in A$ such that $\varphi(p)=sp$, $p\in P$.
\end{theorem}
In the sequel we refer to $A$-homomorphisms simply as
homomorphisms.
\begin{theorem}\label{theoHRcompl}
The ring $\mathbb{H}_{nf}(X)$ is rationally complete.
\end{theorem}
\begin{proof}
We use Theorem \ref{theoRComplCond}. Let $P$ be a rationally dense
ideal of $\mathbb{H}_{nf}(X)$ and $\varphi:P\to \mathbb{H}_{nf}(X)$
a homomorphism. Let $p\in P$. Consider the ring
$\mathbb{H}_{nf}(coz(p))$. By (\ref{inverse}), $p$ is an
invertible element of $\mathbb{H}_{nf}(coz(p))$. Since
$\varphi(p)|_{coz(p)}\in\mathbb{H}_{nf}(coz(p))$, see
(\ref{restriction}), we can consider the function
$\psi_p=\frac{1}{p}\otimes\varphi(p)|_{coz(p)}\in\mathbb{H}_{nf}(coz(p))$.
Now we define the function $\psi\in\mathbb{H}_{nf}(X)$ in the
following way. For any $x\in coz(P)$ select $p\in P$ such that
$0\notin p(x)$. Then
\[
\psi(x)=\psi_p(x).
\]
It is easy to see that the definition does not depend on the
function $p$. Indeed, let $q\in P$ be such that $0\notin q(x)$.
Since $\varphi$ is a homomorphism we have
\begin{equation}\label{theoHRcompl1}
\varphi(p)\otimes q=p\otimes\varphi(q).
\end{equation}
Denote $D=coz(p)\cap coz(q)$. Clearly $D$ is an open neighborhood
of $x$. Using (\ref{restoperations}) we have
\[
\varphi(p)|_D\otimes q|_D=p|_D\otimes\varphi(q)|_D,
\]
which implies
\[
\frac{1}{p}|_D\otimes\varphi(p)|_D=\frac{1}{q}|_D\otimes\varphi(q)|_D.
\]
Therefore $\psi_p(y)=\psi_q(y)$, $y\in D$. In particular,
$\psi_p(x)=\psi_q(x)$.
Now $\psi$ is defined on $coz(P)$ and it is easy to see that
$\psi\in\mathbb{H}_{nf}(coz(P))$. Since $coz(P)$ is dense in $X$,
see Lemma \ref{theoIdealDense}, using (\ref{extension}) the
function $\psi$ can be extended to the rest of the set $X$ in a
unique way so that $\psi\in\mathbb{H}_{nf}(X)$.
We will show that $\varphi(p)=\psi\otimes p$, $p\in P$. Let $p\in
P$. We have
\[
(\psi\otimes p)|_{coz(p)}=\psi_p\otimes
p|_{coz(p)}=\left(\frac{1}{p}\right)\Big|_{coz(p)}\otimes\varphi(p)|_{coz(p)}\otimes
p|_{coz(p)}=\varphi(p)|_{coz(p)}.
\]
Then, using also (\ref{extension}), we obtain
\begin{equation}\label{theoHRcompl2}
(\psi\otimes p)(x)=\varphi(p)(x)\ ,\ \ x\in\overline{coz(p)},
\end{equation}
where $\overline{coz(p)}$ denotes the topological closure of the
set $coz(p)$. Applying standard techniques based on the minimality
property of H-continuous functions one can obtain that
$\varphi(p)(x)=0$ for $x\in X\setminus\overline{coz(p)}\subset
Z(p)$. Then we have
\begin{equation}\label{theoHRcompl3}
\varphi(p)(x)=0\in\psi(x)\times p(x)\ , \ \ x\in
X\setminus\overline{coz(p)}.
\end{equation}
From (\ref{theoHRcompl2}) and (\ref{theoHRcompl3}) it follows that
\begin{equation}\label{theoHRcompl4}
\varphi(p)(x)\subseteq\psi(x)\times p(x)\ ,\ \ x\in X.
\end{equation}
By the definition of the operation $\otimes$ the inclusion
(\ref{theoHRcompl4}) implies $\varphi(p)=\psi\otimes p$, which
completes the proof.
\end{proof}
\end{proof}
The rational completeness of $\mathbb{H}_{nf}(X)$ implies that any
rational extension of any subring of $\mathbb{H}_{nf}(X)$ is a
subring of $\mathbb{H}_{nf}(X)$. In particular this applies to
$C(X)$, whose maximal ring of quotients and classical ring of
quotients are characterized in the next theorem. As in the
classical theory we call a subset $V$ of $X$ a zero set if there
exists $f\in C(X)$ such that $V=Z(f)$.
\begin{theorem}[Representation Theorem]\label{theoRatExt}
The ring of quotients and the classical ring of quotients of
$C(X)$ are the following subrings of $\mathbb{H}_{nf}(X)$:
\begin{eqnarray*}
&a)&\mathcal{Q}(C(X))=\mathbb{H}_{nd}(X)=\{f\in\mathbb{H}(X):\overline{W_f}\mbox{
is nowhere dense}\}\\
&b)&\mathcal{Q}_{cl}(C(X))=\mathbb{H}_{sz}(X)=\{f\in\mathbb{H}(X):W_f\mbox{
is a subset of a nowhere dense zero set}\}
\end{eqnarray*}
\end{theorem}
\begin{proof}
a) First we need to show that $\mathbb{H}_{nd}(X)$ is a ring of
quotients of $C(X)$. In terms of the definition we have to prove
that for any $\phi,\psi\in\mathbb{H}_{nd}(X)$, $\psi\neq 0$,
there exists $f\in C(X)$ such that $\phi\otimes f\in C(X)$ and
$\psi\otimes f\neq 0$. Since $\psi\neq 0$ the open set $coz(\psi)$
is not empty. Using that $W_\phi$ and $\Gamma_\phi$ are closed
nowhere dense sets we have $coz(\psi)\setminus(W_\phi\cup
\Gamma_\phi)\neq\emptyset$. Let $a\in
coz(\psi)\setminus(W_\phi\cup \Gamma_\phi)$. By the complete
regularity of $X$: (i) there exists a neighborhood $V$ of $a$ such
that $\overline{V}\subset coz(\psi)\setminus(W_\phi\cup
\Gamma_\phi)$; (ii) there exists a function $f\in C(X)$ such that
$f(a)=1$ and $f(x)=0$ for $x\in X\setminus V$. We have that
$\psi\times f$ does not have zeros in a neighborhood of $a$.
Therefore $\psi\otimes f\neq 0$. We prove next that $\phi\otimes
f\in C(X)$. Indeed, $\phi(x)\times f(x)=0$ for $x$ in the open set
$ (X\setminus\overline{V})\setminus (W_\phi\cup\Gamma_\phi)$ which
is dense in the open set $X\setminus\overline{V}$. Therefore
$(\phi\otimes f)|_{X\setminus\overline{V}}=0$, which implies
$(\phi\otimes f)|_{X\setminus\overline{V}}\in
C(X\setminus\overline{V})$. Obviously, $(\phi\otimes
f)|_{X\setminus(W_\phi\cup\Gamma_\phi)}\in
C(X\setminus(W_\phi\cup\Gamma_\phi))$. Hence, $\phi\otimes f\in
C(X)$. Therefore, $\mathbb{H}_{nd}(X)$ is a ring of quotients of
$C(X)$. It is the maximal ring of quotients of $C(X)$ if and only
if it is rationally complete. The proof of the rational
completeness of $\mathbb{H}_{nd}(X)$ is done in a similar way as
for $\mathbb{H}_{nf}(X)$ and will be omitted.

b) Let $f,g\in C(X)$ with $Z(g)$ nowhere dense in $X$. Then using
(\ref{nonzerodivizor}) we obtain
$\displaystyle\frac{f}{g}=f\otimes \frac{1}{g}\in
\mathbb{H}_{nf}(X)$. Moreover, by (\ref{inverse}) we have
$W_\frac{f}{g}\subseteq Z(g)$, which implies
$\displaystyle\frac{f}{g}\in\mathbb{H}_{sz}(X)$. Therefore
$\mathcal{Q}_{cl}(C(X))\subseteq\mathbb{H}_{sz}(X)$. Now we will
prove the inverse inclusion. Let $f\in\mathbb{H}_{sz}(X)$. Then
there exists $g\in C(X)$ such that $Z(g)$ is nowhere dense in $X$
and $W_f\subseteq Z(g)$. Consider the functions
\begin{eqnarray*}
\phi&=&\frac{f\otimes g}{1+f\otimes f}\\
\psi&=&\frac{g}{1+f\otimes f}.
\end{eqnarray*}
It is easy to see that $|\phi|\leq|g|$ and $|\psi|\leq|g|$, which
implies that $\Gamma_\phi=\Gamma_\psi=\emptyset$. Furthermore,
since $\phi(x)=\psi(x)=0$ for all $x\in W_f$ we have
$W_\phi=W_\psi=\emptyset$. Hence, $\phi,\psi\in C(X)$. Since
$Z(\psi)=Z(g)$ is nowhere dense in $X$, the function $\psi$ is an
invertible element of $\mathbb{H}_{sz}(X)$. Then $\displaystyle
f=\frac{\phi}{\psi}\in\mathcal{Q}_{cl}(C(X))$, which completes the
proof.
\end{proof}
\section{Representing the metric completions of $\mathcal{Q}(C(X))$ and
$\mathcal{Q}_{cl}(C(X))$}
The metric $\rho$ on $C(X)$ given in (\ref{rho}) can be extended
to $\mathbb{H}_{nf}(X)$ as follows
\begin{equation}\label{rhoext}
\rho(f,g)=\sup_{x\in
X\setminus(\Gamma_f\cup\Gamma_g)}\frac{|f\ominus g|}{1+|f\ominus
g|},
\end{equation}
where $f\ominus g=f\oplus (-1)g$.
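The map $t\mapsto t/(1+t)$ is increasing and subadditive, which is what makes $\rho$ a bounded metric. The following snippet is a numerical illustration only (random samples stand in for the values of $|f\ominus g|$ on $X\setminus(\Gamma_f\cup\Gamma_g)$); it checks the triangle inequality and the bound $\rho<1$ on such samples.

```python
import random

# Numerical illustration only: random samples stand in for function values
# on a finite set of points of X; rho below mirrors the sup in (rhoext).
def rho(fs, gs):
    return max(abs(f - g) / (1 + abs(f - g)) for f, g in zip(fs, gs))

random.seed(0)
fs = [random.uniform(-5, 5) for _ in range(1000)]
gs = [random.uniform(-5, 5) for _ in range(1000)]
hs = [random.uniform(-5, 5) for _ in range(1000)]

# t -> t/(1+t) is increasing and subadditive, hence the triangle inequality
assert rho(fs, hs) <= rho(fs, gs) + rho(gs, hs) + 1e-12
assert rho(fs, gs) < 1  # the metric is bounded by 1
```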
\begin{theorem}\label{theoMcomplete}
The set $\mathbb{H}_{nf}(X)$ is a complete metric space with
respect to the metric $\rho$ in (\ref{rhoext}).
\end{theorem}
\begin{proof}
Verifying that $\rho$ satisfies the axioms of a metric uses
standard techniques and will be omitted. We will prove the
completeness by using that $\mathbb{H}_{nf}(X)$ is a Dedekind
complete lattice with respect to the usual point-wise order, see
\cite{AngRos,AngRos2}. Furthermore, $\mathbb{H}_{nf}(X)$ is also a
vector lattice with respect to the addition $\oplus$ and the
multiplication by constants. This can be shown similarly to
\cite{Oconv}, where the case of finite H-continuous functions is
considered. The following implication for any $\varepsilon\in
(0,1)$ is easy to obtain and is useful in the proof
\begin{equation}\label{convorder}
\rho(f,g)<\varepsilon \Longrightarrow
\frac{-\varepsilon}{1-\varepsilon}\leq f\ominus g \leq
\frac{\varepsilon}{1-\varepsilon} \Longleftrightarrow
g\ominus\frac{\varepsilon}{1-\varepsilon}\leq f \leq
g\oplus\frac{\varepsilon}{1-\varepsilon}.
\end{equation}
Let $(f_\lambda)_{\lambda\in\Lambda}$ be a Cauchy net in
$\mathbb{H}_{nf}(X)$. There exists $\mu\in\Lambda$ such that
$\rho(f_\lambda,f_\mu)<0.5$ for $\lambda\geq\mu$. Then by (\ref{convorder}) the net
$(f_\lambda)_{\lambda\geq \mu}$ is bounded. Due to the Dedekind
order completeness of $\mathbb{H}_{nf}(X)$ the following infima
and suprema exist
\begin{eqnarray*}
\phi_\lambda&=&\inf\{f_\nu:\nu\geq\lambda\}\ , \ \ \lambda\geq\mu, \\
\psi_\lambda&=&\sup\{f_\nu:\nu\geq\lambda\}\ , \ \
\lambda\geq\mu,\\
\phi&=&\sup\{\phi_\lambda:\lambda\geq\mu\}\\
\psi&=&\inf\{\psi_\lambda:\lambda\geq\mu\}
\end{eqnarray*}
Let $\varepsilon\in(0,1)$. There exists $\lambda_\varepsilon$ such
that $\rho(f_\lambda,f_\nu)<\varepsilon$,
$\lambda,\nu\geq\lambda_\varepsilon$. It follows from
(\ref{convorder}) that
\[
f_{\lambda_\varepsilon}\ominus\frac{\varepsilon}{1-\varepsilon}\leq
f_\nu\leq
f_{\lambda_\varepsilon}\oplus\frac{\varepsilon}{1-\varepsilon},\
\nu\geq\lambda_\varepsilon.
\]
Therefore,
\[
f_{\lambda_\varepsilon}\ominus\frac{\varepsilon}{1-\varepsilon}\leq
\phi_\lambda\leq\psi_\lambda\leq
f_{\lambda_\varepsilon}\oplus\frac{\varepsilon}{1-\varepsilon}, \
\lambda\geq\lambda_\varepsilon,
\]
which implies
\begin{equation}\label{theoMcomplete1}
0\leq\psi_\lambda\ominus\phi_\lambda\leq
\frac{2\varepsilon}{1-\varepsilon},\
\lambda\geq\lambda_\varepsilon.
\end{equation}
Taking the supremum over $\lambda$ and using that $\varepsilon$
is arbitrary, we obtain $\phi=\psi$.
Further, from the inequalities
\begin{eqnarray*}
&&\phi_\lambda\leq f_\lambda\leq \psi_\lambda\\
&&\phi_\lambda\leq \phi\leq \psi_\lambda
\end{eqnarray*}
and (\ref{theoMcomplete1}) we obtain
\[
|f_\lambda\ominus\phi|\leq |\psi_\lambda\ominus\phi_\lambda|\leq
\frac{2\varepsilon}{1-\varepsilon},\
\lambda\geq\lambda_\varepsilon.
\]
Hence $\rho(f_\lambda,\phi)<2\varepsilon$ for
$\lambda\geq\lambda_\varepsilon$. Since $\varepsilon$ is arbitrary,
$\lim\limits_\lambda f_\lambda=\phi$, which completes the proof.
\end{proof}
Since the ring $\mathbb{H}_{nf}(X)$ is rationally complete, see
Theorem \ref{theoHRcompl}, as well as complete with respect to the
metric (\ref{rhoext}), see Theorem \ref{theoMcomplete}, it
contains all rings of quotients of $C(X)$ as well as their metric
completions. In particular, a representation of the metric
completion $\overline{\mathcal{Q}(C(X))}$ of $\mathcal{Q}(C(X))$
is given in the next theorem.
\begin{theorem}\label{theoMcomplC}
The completion of the ring of quotients of $C(X)$ is
$\mathbb{H}_{nf}(X)$, that is,
\[
\overline{\mathcal{Q}(C(X))}=\mathbb{H}_{nf}(X)
\]
\end{theorem}
\begin{proof}
Since the completeness of $\mathbb{H}_{nf}(X)$ has already been
proved, we only need to show that $\mathcal{Q}(C(X))$ is dense in
$\mathbb{H}_{nf}(X)$. Using the representation of
$\mathcal{Q}(C(X))$ given in Theorem \ref{theoRatExt},
equivalently, we need to show that $\mathbb{H}_{nd}(X)$ is dense
in $\mathbb{H}_{nf}(X)$. Let
$f=[\underline{f},\overline{f}]\in\mathbb{H}_{nf}(X)$ and let
$n\in\mathbb{N}$. We have
\begin{equation}\label{theoMcomplC1}
\overline{f}(x)-\frac{1}{n}\leq\underline{f}(x)+\frac{1}{n}\ ,\ \
x\in X\setminus(W_{f,\frac{1}{n}}\cup\Gamma_f).
\end{equation}
Since the function on the left side of the inequality
(\ref{theoMcomplC1}) is upper semi-continuous while the function
on the right side is lower semi-continuous, by the well-known
insertion theorem of Hahn there exists $f_n\in
C(X\setminus(W_{f,\frac{1}{n}}\cup\Gamma_f))$ such that
\begin{equation}\label{theoMcomplC2}
\overline{f}(x)-\frac{1}{n}\leq
f_n(x)\leq\underline{f}(x)+\frac{1}{n}\ ,\ \ x\in
X\setminus(W_{f,\frac{1}{n}}\cup\Gamma_f).
\end{equation}
The set $X\setminus(W_{f,\frac{1}{n}}\cup\Gamma_f)$ is an open and
dense subset of $X$ because $W_{f,\frac{1}{n}}$ and $\Gamma_f$ are
closed nowhere dense sets. Hence $f_n$ can be extended in a unique
way to $X$ so that it is H-continuous on $X$. Since this extended
function may assume interval values or values involving
$\pm\infty$ only on the closed nowhere dense set
$W_{f,\frac{1}{n}}\cup\Gamma_f$ we have
$f_n\in\mathbb{H}_{nd}(X)$. Moreover, it follows from the
inequality (\ref{theoMcomplC2}) that
\[
\rho(f,f_n)\leq\sup_{x\in
X\setminus(W_{f,\frac{1}{n}}\cup\Gamma_f)}|f\ominus
f_n|\leq\frac{1}{n}.
\]
Hence, $\lim\limits_{n\to\infty}f_n=f$. Therefore
$\mathbb{H}_{nd}(X)$ is dense in $\mathbb{H}_{nf}(X)$.
\end{proof}
\section{Conclusion}
This paper gives an application of a class of interval functions,
namely, the Hausdorff continuous functions, to the representation
of the rational extensions of $C(X)$ as well as their metric
completions. Traditionally, Interval Analysis is considered as
part of Numerical Analysis due to its major applications to
scientific computing. However, the study of the order, topological
and algebraic structure of the spaces of interval functions led to
some significant applications to other areas of mathematics, e.g.
Approximation Theory \cite{Sendov}, Analysis of PDEs
\cite{AngRos,AngRos2,JanHarm}, Real Analysis \cite{QM2004,Oconv}.
The results presented here are in the same line of applications.
It is shown that all rings of quotients of $C(X)$ and their metric
completions are subrings of the ring $\mathbb{H}_{nf}(X)$ of
nearly finite Hausdorff continuous functions. Thus,
$\mathbb{H}_{nf}(X)$ is the largest set of functions to which the
ring and metric structure of $C(X)$ can be extended in an
unambiguous way.
\section{Introduction}
\label{intro}
The probabilistic nature of quantum physics is related to a process whose correct interpretation and mathematically sound formulation is still under debate, the so-called collapse of the wave function \cite{miller}.
The quantum mechanical collapse
is of fundamental importance for
our common-sense concept of macroscopic reality. In most cases, collapses are invoked in the context of
measurements of quantum observables.
But extending the applicability range of quantum mechanics beyond the
microscopic realm prompts the question whether the discontinuous change of
the wave function during the measurement is a physical process or not.
Schr\"odinger's famous Gedankenexperiment involving a
cat demonstrates drastically the consequences of the assumption that the
collapse process is merely epistemic and thus concerns only the
knowledge of the observer \cite{schrod}.
Schr\"odinger's goal was to demonstrate that a paradox arises if the
collapse is not considered as a microscopic process taking place independently
from the presence of an observer.
To explain the absence of macroscopic superpositions without the assumption of
a physical collapse,
the ``decoherence'' interpretation
considers not individual systems but the
density matrix of an ensemble which becomes mixed after tracing over unobserved environmental
degrees of freedom \cite{zeh,zurek,joos}. Although this avoids the
need for a ``measurement apparatus'', the tracing operation is not physical
but epistemic. Without a physical mechanism triggering a real collapse event
as assumed \textit{e.g.} in \cite{GRW}, the
decoherence interpretation is equivalent to the many-world hypothesis
\cite{everett}.
Already in Schr\"odinger's cat example, the transition from
unitary and deterministic to probabilistic evolution is tied to a microscopic, unpredictable
\textit{event}, the decay of a radioactive nucleus. There are strong arguments
from a fundamental perspective supporting the occurrence of spatio-temporally
localized events as objective and observer-independent
equivalent of collapse processes \cite{haag}.
These random events, called ``quantum jumps'' in the early debate between
Schr\"o\-din\-ger and Bohr \cite{schr-qj}, are generally thought to underlie
the statistical character of the emission and absorption of light quanta by
atoms. Einstein used arguments from the theory of classical gases to
derive Planck's formula by assuming detailed balance between the atoms and the
radiation in thermal equilibrium. His derivation did not require a microscopic
Hamiltonian \cite{einstein1916}. Nevertheless, it is easy to derive the
corresponding rate equations for the time-dependent occupancy $\langle
n\rangle(t)$ of a light mode with frequency $\Om$ coupled resonantly to $M$
two-level systems (TLS) within quantum mechanics. The transition probabilities follow from the microscopic interaction Hamiltonian by employing Fermi's {\it golden rule} \cite{goldenrule,fermi}, which tacitly incorporates the collapse event by replacing the unitary time evolution by a stochastic process.
The interaction Hamiltonian is well known \cite{c-t,loud}:
\beq
H_{\inter}=g\sum_{l=1}^M\left(a\s^+_l+\ad\s^-_l\right),
\label{hint-bb}
\eeq
where $a,\ad$ denote the annihilation/creation operators of the radiation mode
and $n=\ad a$. The Pauli lowering/raising operators $\s^-_l,\s^+_l$ describe
the $l$-th two-level system with $H_{\tls}=\hba\Om(\sum_l\s_l^+\s_l^--1/2)$. The
probabilities for a transition between the upper and lower state of a TLS
accompanied by the emission or absorption of a photon with frequency $\Om$ are
computed with the golden rule (see section~\ref{blackbody}) to obtain the rate equations
\begin{align}
\frac{\rd \langle n\rangle}{\rd t} &= \g\left[\langle m\rangle(\langle n\rangle+1)- (M-\langle m\rangle)\langle n\rangle\right],
\label{bb-n}\\
\frac{\rd \langle m\rangle}{\rd t} &= \g'\left[(M-\langle m\rangle)\langle n\rangle -\langle m\rangle(\langle n\rangle +1)\right].
\label{bb-m}
\end{align}
Here, $\langle m(t)\rangle$ denotes the time-dependent average number of excited TLS. The rate constants $\g,\g'$ depend on the coupling $g$ and the density of states of the radiation continuum around $\Om$ (see below).
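The occupation factors in \eqref{bb-n} and \eqref{bb-m} can be read off from the standard bosonic matrix elements entering the golden rule; for a single TLS coupled to one mode,
\[
\big|\langle g;n{+}1|\,\ad\s^-|e;n\rangle\big|^2=n+1,\qquad
\big|\langle e;n{-}1|\,a\s^+|g;n\rangle\big|^2=n,
\]
so that emission is enhanced by the factor $n+1$ (stimulated plus spontaneous emission), whereas absorption is proportional to $n$.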
These equations describe the irreversible change of average quantities and thus use the ensemble picture of statistical mechanics \cite{wal}. Nevertheless they account for the temporal behavior of a single system as well, because a {\it typical} trajectory will exhibit a fraction $m(t)/M$ of excited TLS close to $\langle m(t)\rangle/M$ for sufficiently large $M$ \cite{goldstein,baldovin,cerino}.
It is crucial that the rate equations \eqref{bb-n} and \eqref{bb-m} satisfy the \textit{detailed balance condition}, which implies that they lead from arbitrary initial values $\langle n\rangle(0), \langle m(0)\rangle$ to a unique steady state characterized by
\beq
\langle m\rangle(\langle n\rangle +1) = (M-\langle m\rangle)\langle n\rangle.
\label{det-bal}
\eeq
We have the relations
\begin{align}
P^e=\langle m\rangle/M,\quad & P^g=(M-\langle m\rangle)/M,\nn\\
P_{e\ra g}=\g'(\langle n\rangle +1),\quad & P_{g\ra e}=\g'\langle n\rangle,
\end{align}
for the probabilities $P^g$ ($P^e$) for a TLS to be in its ground (excited) state and the probabilities for emission $P_{e\ra g}$ and absorption $P_{g\ra e}$.
Equation \eqref{det-bal} can therefore be written as
\beq
P^eP_{e\ra g} = P^gP_{g\ra e}
\label{det-bal2}
\eeq
which is the definition of detailed balance \cite{wal,reichl}.
Equation \eqref{det-bal}
entails Planck's formula for thermal equilibrium between radiation and matter. If one considers \eqref{bb-m} as an equation of motion for the probability $P^e(t)$, even a microscopic system consisting of a single TLS will thermally equilibrate with the surrounding continuum of radiation.
This surprising result rests on the fact that the TLS interacts
not only with the light mode exactly on resonance but with all modes in a frequency
interval of width $\D$ around $\Om$ with a similar strength $g$
\cite{c-t,loud}. The energy uncertainty $\hba\D$ allows for a radiation event
occurring during a short time span $\tau_c\sim \D^{-1}$, whereas the process
itself is energy conserving (see \cite{c-t}, p. 419). The coupling to a
continuum of modes
leads therefore to real and irreversible {\it microscopic} processes, the emission or absorption of
light quanta, although no macroscopic measurement apparatus is involved. Such a
microscopic collapse process is tacitly assumed whenever the golden rule is
employed. If, however, the TLS is embedded into a cavity and coupled only to a single
radiation mode, the collapse cannot take place; the system shows the
Rabi oscillations of a unitarily evolving state instead, the TLS being entangled with
the bosonic mode. The golden rule cannot be applied to this case.
Therefore, the golden rule cannot be taken as an approximation to the full
unitary time development given by solving the Schr\"odinger equation, although
it corresponds formally to a perturbative computation of the unitary dynamics
for short times \cite{zhang}. The very concept of a \textit{transition rate} implies that
the deterministic evolution of the state vector is replaced with a
probabilistic description of events. The central element in the computation is the overlap $\langle\psi_{\textrm{final}}|H_\inter|\psi_{\textrm{initial}}\rangle$, which, according to the Born rule, determines the probability for the transition from $|\psi_{\textrm{initial}}\rangle$ to $|\psi_{\textrm{final}}\rangle$. The Born rule is invoked here although the process is not a macroscopic measurement. The ``macroscopic'' element is provided by the continuum of radiation modes. The presence of this continuum causes the necessity for a statistical description \cite{c-t}.
Within the decoherence interpretation, one would require
that the continuum of radiation modes acts as an environment for the TLS, the
``system''. The environment is then traced out to yield the irreversible
dynamics of the system alone. But if the system consists of the walls of a
hohlraum, it exercises a strong influence on the ``environment'', the
enclosed radiation, such that they equilibrate together. Therefore, it is not
possible to separate the environment from the system to explain the
microscopic collapse processes driving the compound system towards thermal equilibrium.
The rate equations neglect completely the coherences of the TLS. The
irrelevance of the coherences follows
naturally from the interpretation of the radiation process as a collapse:
each such event projects the
TLS into their energy eigenbasis, just as a macroscopic
measurement projects any
quantum system into the basis entangled with the eigenbasis of
the measurement device \cite{neu}.
A macroscopic measurement device is not needed here because both
subsystems, the TLS and the radiative continuum, contain a macroscopic number
of degrees of freedom. This alone seems to justify a statistical description as
in classical gas theory, although the interaction Hamiltonian \eqref{hint-bb}
has no classical limit. Indeed, in our case the Born rule
replaces the assumption of ``molecular chaos'', needed in Boltzmann's derivation of the H-theorem
\cite{wal,reichl}. Therefore, it seems almost natural that the rate equations
(\ref{bb-n}),(\ref{bb-m}) lead to thermal equilibrium from a
non-equilibrium initial state although the radiation and the collection of
atoms are both treated as ideal gases. If $\langle m(0)\rangle$ and $\langle n\rangle(0)$
correspond to equilibrium ensembles with different temperatures at $t=0$,
\beq
T_{\tls}(0)=\frac{\hba\Om}{k_B\ln\left(\frac{M}{\langle m(0)\rangle}-1\right)}, \quad
T_{\textrm{rad}}(0)=
\frac{\hba\Om}{k_B\ln\left(\frac{1}{\langle n\rangle(0)}+1\right)},
\eeq
the rate equations derived from the quantum mechanical interaction Hamiltonian \eqref{hint-bb} together with the golden rule entail thermal equilibration according to Clausius' formulation of the second law of thermodynamics: the two gases exchange heat which flows from the hotter to the colder subsystem until a uniform temperature and maximum entropy of the compound system is reached \cite{wal,reichl}.
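This equilibration can be made explicit with a minimal numerical sketch (ours, not from the literature cited above): the rate equations \eqref{bb-n} and \eqref{bb-m} are integrated by forward Euler in units with $\hba\Om/k_B=1$; the values of $M$, the rates and the initial temperatures are arbitrary illustrative choices.

```python
import math

# Our numerical sketch: forward-Euler integration of the rate equations
# (bb-n), (bb-m) in units with hbar*Omega/k_B = 1; M, the rates and the
# initial temperatures are arbitrary example values.
M, gamma, gamma_p = 100, 0.01, 0.02

def temperature_tls(m):
    return 1.0 / math.log(M / m - 1.0)

def temperature_rad(n):
    return 1.0 / math.log(1.0 / n + 1.0)

m = M / (1.0 + math.exp(0.5))        # thermal TLS gas at T_tls(0) = 2
n = 1.0 / (math.exp(1.0) - 1.0)      # thermal radiation at T_rad(0) = 1
dt = 1e-3
for _ in range(100_000):
    flow = m * (n + 1) - (M - m) * n  # net emission minus absorption
    n += dt * gamma * flow
    m -= dt * gamma_p * flow

# detailed balance (det-bal) is reached and both subsystems end up at the
# same temperature, lying between the two initial ones
assert abs(m * (n + 1) - (M - m) * n) < 1e-6
assert abs(temperature_tls(m) - temperature_rad(n)) < 1e-6
```

The hotter TLS gas cools while the radiation heats up until both reach the common temperature fixed by \eqref{det-bal}, in accordance with Clausius' formulation.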
Certainly, the rate equations do not correspond to the exact quantum dynamics
of the system, which is non-integrable in the quantum sense if the interaction \eqref{hint-bb} is
generalized to a continuum of bosonic modes, rendering it equivalent to the
spin-boson model \cite{spin-boson}. To obtain the exact evolution equation for
the density matrix of both the TLS and the radiative modes, one would have to
solve the full many-body problem. The golden rule is then seen as a method to
approximate the time-dependent expectation values $\langle n\rangle(t)$,
$\langle m(t)\rangle$, which is justified by a large body of experimental
evidence, but not through an analytical proof of equivalence between both
approaches. It may even be that the golden rule provides a phenomenological
description of microscopic collapse processes whose actual dynamics is not yet
known. In this case, the full quantum mechanical calculation would not yield
(\ref{bb-n}),(\ref{bb-m}), although they account correctly for the
experimentally observed dynamics. The
rate equations are derived from the interaction Hamiltonian \eqref{hint-bb} and the golden rule
(see sections~\ref{blackbody} and \ref{open}), thereby demonstrating its applicability
in the situation considered. This corroboration of the golden rule
based on experimental results is independent of any theoretical justification
of this rule.
The interaction Hamiltonian \eqref{hint-bb} satisfies the detailed balance
condition, which is crucial for the physically expected behavior.
We shall demonstrate in the next section that this condition does not follow
from elementary principles like time reversal invariance or hermiticity as
\eqref{hint-bb} seems to suggest. On the contrary, there are feasible
experimental setups violating the detailed balance condition while satisfying
all other prerequisites for the application of the golden rule. The
consequence of this violation is a macroscopic disagreement with the second law of thermodynamics.
\section{The Gedankenexperiment}
\label{sec:1}
In the Gedankenexperiment, we consider two identical cavities $A$ and $B$
supporting a quasi-continuous mode spectrum described by bosonic annihilation
operators $a_{Aj}$, $a_{Bj}$ and frequencies $\om_j$ (Fig.~\ref{Fig1}). They are
coupled bilinearly to the right- and left-moving modes $a_{1k}$, $a_{2k}$ of
an open-ended waveguide which form a quasi-continuum like the cavity modes
\cite{dayan}.
The loss processes through the open ends of the waveguide are
caused formally by a heat bath at $T=0$, leading to a reduction of the system
entropy through heat transport. This coupling to the outside world is one
argument for the applicability of the golden rule, the second is the already discussed
quasi-continuum of modes. But even if the full continuum of radiation modes in the cavities is treated as part of
the ``system'' which would then be subject to purely unitary
time evolution, the waveguide would still couple to a decohering
``environment'', justifying the statistical description even if one denies that
real collapse events take place in the system itself. Here we study a model which is commonly used in quantum optics to describe unidirectional loss processes \cite{haroche}. In this way, the generally accepted arguments substantiating irreversible evolution equations can be transferred to the present situation.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig1}
\caption{Layout of the Gedankenexperiment. Two reservoirs $A$ and $B$
containing black-body radiation are coupled via a non-reciprocal,
open waveguide to a collection of two-level systems (TLS) and to
each other. The right-, respectively left-moving modes in channels 1
and 2 couple with different parameters $g_1$ and $g_2$ to the two-level systems.}
\label{Fig1}
\end{figure}
The modes $a_{1k}$ and $a_{2k}$ of the waveguide are coupled to a collection of $M$ two-level systems located at the center of the waveguide (see Fig.~\ref{Fig1}).
The total Hamiltonian is given by
\beq
H=H_A + H_B + H_{wg} + H_{\tls} + H^1_{\inter} + H^2_{\inter}.
\label{ham}
\eeq
$H_q$ denotes the Hamiltonian in cavity $q$ for $q=A,B$,
\beq
H_q=\hba\sum_{j}\om_j a_{qj}^\dagger a_{qj}.
\eeq
The Hamiltonian of the waveguide reads
\beq
H_{wg}=\hba\sum_{k}\om_k \left(a_{1k}^\dagger a_{1k} + a_{2k}^\dagger a_{2k}\right),
\eeq
where the modes 1 and 2 belong to waves traveling to the right and to the left, respectively. For the TLS we have $H_{\tls}=(\hba\Omega/2)\sum_{l=1}^M\s^z_l$, with the Pauli matrix $\s^z$. The coupling between the reservoirs and the modes 1 and 2 of the waveguide is bilinear,
\beq
H^1_{\inter}=\sum_{q=A,B}\sum_{j,k} h_{jk}\left(a_{qj}^\dagger [a_{1k} + a_{2k}] +\hc\right).
\label{hint1}
\eeq
Using the rotating wave approximation, the interaction with the TLS has the
standard form \cite{c-t,loud} which is equivalent to \eqref{hint-bb},
\beq
H^2_{\inter}=\sum_{l=1}^M \sum_{k}\left(g_{1k}a_{1k} +g_{2k}a_{2k}\right)\s^+_l + \hc,
\label{hint2}
\eeq
where $\s^+_l$ denotes the raising operator of the $l$th TLS.
We consider in the following the (time-dependent) average occupancy per mode
$j$, $\lana n_q\rana(t)$ for reservoir $q=A,B$ in an energy interval around
the TLS energy, $\Om-\Delta/2 <\om_j<\Om+\Delta/2$, where $\D$ is much larger than the natural linewidth of spontaneous emission from an excited TLS into the waveguide. The occupancy does not depend on $j$ if the couplings $h_{jk}$, $g_{(1,2)k}$, the density of states $\rho_q(\hba\om_j)$ of the reservoirs and $\rho_{1,2}(\hba\om_k)$ of the wa\-ve\-gui\-de are constant in the frequency interval of width $\Delta$ around $\Om$.
It is crucial that $g_{1k}\neq g_{2k}$, which specifies that the TLS couple
with unequal strengths to the right- and left-moving photons in the waveguide.
Such an unequal coupling is
a hallmark of chiral quantum optics \cite{lodahl}. Although the interaction
Hamiltonian \eqref{hint2} appears to break time-reversal invariance, as the
time-reversal operator maps left-moving to right-moving modes, this is
actually not the case because the effective interaction term \eqref{hint2} does
not contain the polarization degree of freedom. The angular momentum selection
rules for light-matter interaction lead naturally to a dependence of the
coupling strength on the propagation direction in engineered geometries
\cite{luxmoore}, especially if the spin-momentum locking of propagating
modes in nanofibers is employed \cite{junge,shomroni}. The unwanted coupling
to non-guided modes can be effectively eliminated, leading to large
coupling differences
$|g_{1k}-g_{2k}|$ \cite{soellner,lefeber}.
We shall now study the temporal behavior of the two cavities, assuming at time
$t=0$ separate thermal equilibria in $A$, $B$ and the TLS system, all at
the same temperature $T$. The probability $\langle m(0)\rangle/M$ for a TLS to be
excited obeys the Boltzmann distribution
\beq
\frac{\langle m(0)\rangle}{M}=\frac{1}{e^{\hba\Om/k_BT}+1}.
\label{boltz}
\eeq
Considering the $M$ TLS as independent classical objects, their Gibbs entropy
is given by
\beq
S_M=-k_BM\big(p_e\ln p_e +(1-p_e)\ln (1-p_e)\big),
\label{entropyTLS}
\eeq
with $p_e(t)=\langle m(t)\rangle/M$. This approach is justified by the quick relaxation of
the two-level systems by non-radiative processes, which decohere them on time
scales much shorter than the time scale of spontaneous emission and quickly quench finite coherences of the TLS \cite{brouri}. This
argument for a classical description of the TLS is independent from the
general justification of the golden rule via the mode continuum
discussed above.
Equation
\eqref{entropyTLS} provides an upper bound for the entropy of the TLS subsystem \cite{lanford}. Analogously, the entropy of the compound system reads
\beq
S^{\sys}=S_A +S_B + S_M,
\label{sys-entropy}
\eeq
where $S_q$ denotes the v.~Neumann entropy of the radiation in cavity $q$ for $q=A,B$.
The average occupation number $\lana n_q(\om)\rana$ per mode at frequency $\om$ follows from the Bose distribution
\beq
\lana n_q(\om)\rana(0)=\frac{1}{e^{\hba\om/k_BT}-1}.
\label{n-T}
\eeq
\par
Because the temperature depends on $\lana n_q\rana$ as described by \eqref{n-T},
we can define effective temperatures $T_q(t)$ by $\lana n_q\rana(t)$ for each reservoir $q$ under
the assumption that the photons in each reservoir quickly thermalize in the usual way as a non-interacting Bose gas.
Furthermore, we consider the case that $\lana n_{(1,2)k}\rana(t)=0$ for the occupancy of the modes in the waveguide, i.e., the waveguide is populated through the sufficiently weak coupling to the reservoirs and its modes appear only as intermediate states (see section~\ref{open}).
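The two quantities used throughout the following discussion, the effective temperature obtained by inverting \eqref{n-T} and the entropy bound \eqref{entropyTLS}, are simple closed-form expressions; the helper sketch below (our illustration, in units with $\hba\om/k_B=1$ and entropy in units of $k_B$) makes them explicit.

```python
import math

# Illustration only; unit convention hbar*omega/k_B = 1, entropy in k_B.
def effective_temperature(n_mean):
    # inversion of the Bose distribution (n-T)
    return 1.0 / math.log(1.0 / n_mean + 1.0)

def tls_entropy(m_mean, M):
    # Gibbs entropy bound (entropyTLS) for M independent two-level systems
    p = m_mean / M
    return -M * (p * math.log(p) + (1 - p) * math.log(1 - p))

# round trip: a Bose gas at T = 1 has occupation n = 1/(e - 1)
assert abs(effective_temperature(1.0 / (math.e - 1.0)) - 1.0) < 1e-9
# the bound is maximal, M*ln 2, when half of the TLS are excited
assert abs(tls_entropy(50, 100) - 100 * math.log(2)) < 1e-9
```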
Neglecting the waveguide, the system is thus composed of three subsystems,
each in thermal equilibrium at any time $t>0$, with locally assigned
time-dependent temperatures. The subsystems interact through random emission
and absorption processes which do not lead to entanglement, because in each
such process the wave function undergoes a collapse towards a product
state. This description is in obvious accord with the derivation of Planck's
law given by Einstein \cite{einstein1916}. Note that the compound system is
not coupled to several thermal baths which have different temperatures. In
such a case, a description with separate
master equations for the subsystems is inconsistent if the interaction between subsystems is
still treated quantum mechanically. Using such a description, a violation of the
second law has been deduced \cite{capek1,levy}, which is only
apparent and caused by the inconsistent computation \cite{levy}.
Our system differs from those models because
the system dynamics is not described by a unitary evolution as in
\cite{capek1,levy}, but by a random process, with all subsystems coupled to
the same bath (the open waveguide).
The golden rule applied to the Hamiltonian \eqref{ham} yields
the rate equations for $\lana n_q\rana(\Om,t)$ and $\langle m(t)\rangle$,
\begin{align}
\frac{\rd \na}{\rd t} &= -2\g_\dec\na + \g_0(-\na+\nb)- \g_1(M-\langle m\rangle)\na\nn\\
&+ \g_2\langle m\rangle(\na+1),
\label{rateA}\\
\frac{\rd \nb}{\rd t} &= -2\g_\dec\nb + \g_0(-\nb+\na)- \g_2(M-\langle m\rangle)\nb\nn\\
&+ \g_1\langle m\rangle(\nb+1) ,
\label{rateB}\\
\frac{\rd \langle m\rangle}{\rd t}&= -[\gt_{11}(\Om) +\gt_{12}(\Om)]\langle m\rangle + \gt_1(\Om)\left[\na(M-\langle m\rangle)-(\nb+1)\langle m\rangle\right] \nn\\
&+\gt_2(\Om)\left[\nb(M-\langle m\rangle)-(\na+1)\langle m\rangle\right].
\label{rateM}
\end{align}
The first terms on the right hand side of \eqref{rateA} -- \eqref{rateM}
describe the loss of photons through the open ends of the waveguide. These terms are of first order in $|h_{jk}|^2$, resp. $|g_{1k}|^2$, $|g_{2k}|^2$. The following terms correspond to coherent processes of second order in the couplings.
The effective rates $\gamma_{\dec,0,1,2}$ and $\gt_{11},\gt_{12},\gt_{1,2}$ used for the numerical solution of \eqref{rateA} -- \eqref{rateM} shown in Figs.~\ref{temp-short} and \ref{occ-long} belong to the strong coupling regime of the TLS and the wa\-ve\-gui\-de with values accessible within a cavity QED framework \cite{rosen}.
The chiral nature of the coupling, $g_{1k}\neq g_{2k}$, entails
$\g_1\neq\g_2$. In our example we have assumed $\gt_2=\g_2=\gt_{12}=0$, i.e.,
channel 2 is not coupled to the TLS. One sees from \eqref{rateA} --
\eqref{rateM} that the chiral coupling leads to a breakdown of the detailed
balance condition in second-order processes because absorption is no longer
balanced by stimulated and spontaneous emission. The radiation processes generate an effective transfer
of photons from reservoir $A$ to $B$ on a time scale $\tau_{\textrm{char}}$ given by the strong
coupling between channel 1 and the TLS, $\tau_{\textrm{char}}\sim\gt^{-1}=0.1\;\mu$s. This corresponds to a difference in the local temperatures calculated via \eqref{n-T}.
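The local temperatures follow from the mean occupations by inverting the Bose--Einstein distribution; assuming \eqref{n-T} has this standard form, a minimal sketch reads:

```python
import math

# Effective temperature from the mean occupation per mode, assuming
# eq. (n-T) is the inverted Bose-Einstein relation n = 1/(e^x - 1)
# with x = hbar*Omega/(k_B T).
HBAR = 1.054571817e-34    # J s
KB = 1.380649e-23         # J / K

def t_eff(n, omega):
    """Temperature reproducing mean occupation n at angular frequency omega."""
    return HBAR*omega/(KB*math.log((n + 1.0)/n))

# round trip at the 10 um transition used in Fig. 2
omega = 2.0*math.pi*2.998e8/10.0e-6
n300 = 1.0/math.expm1(HBAR*omega/(KB*300.0))   # occupation at 300 K
```

The round trip recovers the input temperature, and $T_{\mathrm{eff}}$ increases monotonically with the occupation, so a transfer of photons from $A$ to $B$ directly translates into a temperature difference.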
Figure~\ref{temp-short} shows the temporal behavior of the temperatures of reservoirs $A$, $B$ and the TLS for intermediate times.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig2}
\caption{Solutions of the rate equations \eqref{rateA}-\eqref{rateM} as function of time, starting from initial thermal equilibrium. The effective temperatures of the reservoirs $A$ and $B$ deviate. The temperature drop of the TLS parallels that of $A$. Parameters used are
$\g_\dec=\g_0=10$\;kHz, $\gt_{11}=\gt_1=10$\;MHz, $\g_1=100$\;kHz; $\gt_{12}=\gt_2=\g_2=0$ and $\hba\Om/kT(0)=1$. $\Om$ corresponds to a wavelength of 10\;$\mu$m.}
\label{temp-short}
\end{figure}
Although both reservoirs $A$ and $B$ lose photons through the open waveguide,
the ratio between $\na$ and $\nb$ attains a constant value for $t\rightarrow
\infty$, which is depicted in Fig.~\ref{occ-long}. This figure reveals that the losses of reservoir $B$ set in at a much later time than the spontaneous population of $B$ through reservoir $A$ and the TLS.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig3}
\caption{The occupations of reservoirs $A$ and $B$ for long times, plotted on a logarithmic scale. The ratio $\na/\nb$ is asymptotically time-independent. The losses through the open waveguide are much slower than the breakdown of local equilibrium caused by the non-reciprocal interaction with the TLS.}
\label{occ-long}
\end{figure}
\section{Conflict with the second law of thermodynamics}
\label{conflict}
Due to the steady loss of photons, our system is always out of equilibrium and the only steady state solution of the equations \eqref{rateA} -- \eqref{rateM} corresponds to empty cavities and all TLS in their ground state. It is clear that the system entropy\footnote{For our purposes it plays no role whether the Boltzmann or the Gibbs/v.~Neumann entropy is employed, because all definitions coincide in equilibrium and we assume {\it local} thermal equilibrium for all three subsystems, the two cavities and the TLS, albeit with possibly differing local temperatures.} diminishes at any time due to the outgoing heat flow. The question arises how to apply the second law of thermodynamics to this situation.
The second law has been formulated in several versions (see, {\it e.g.}, \cite{planck1,lieb,uffink}). It is not the purpose of this paper
to discuss these in detail.
Three representative formulations serve as exemplary definitions of the second law \cite{wal,reichl}:\\
\begin{tabular}{lp{0.8\textwidth}}
(1)& The entropy of the universe always increases.\\
(2)& The entropy of a completely isolated system stays either constant or increases.\\
(3)& The entropy of a system thermally coupled to the environment satisfies Clausius' inequality: $\D S \ge \D Q/T$.
\end{tabular}\par
\noindent
Variant (1) is not challenged by our Gedankenexperiment, because the entropy production including the environment is formally infinite, as the external bath has zero temperature. Variant (2) follows from variant (3) because the heat transfer $\D Q$ vanishes. Variant (3) applies to the present system: the heat transfer $\D Q$ is negative and therefore also $\D S$ may be negative. However, the second law in the form of Clausius' inequality forbids the case $\D S < \D Q/T$: The local entropy production $\s$ in the system
must be non-negative \cite{reichl}.
The change of system entropy $S^{\sys}(t)$ can be written as \cite{spohn}
\beq
\frac{\rd S^{\sys}}{\rd t} = -\int\rd{\bm o}\cdot{\bm J}_{\sys} +\s,
\label{entropyrate}
\eeq
where the surface integral of the entropy current ${\bm J}_{\sys}$ accounts for the heat transfer to the environment, characterized by the rates $\g_\dec$ and $\gt_{11},\gt_{12}$. We find for $\s(t)$ (see section \ref{entr-prod})
\beq
\s(t)= \s_{A,B}(t) + \s_{A,\tls}(t) + \s_{B,\tls}(t),
\label{entropyprod}
\eeq
with
\begin{align}
\s_{A,B} &=k_B{\cal N}\g_0(\nb - \na)\big(\ln(\nb[\na+1])-\ln(\na[\nb+1])\big),
\label{sab}\\
\s_{A,\tls}&=k_B\big(\gt_2\langle m\rangle(\na+1)-\gt_1\na(M-\langle m\rangle)\big)\big(\ln(\langle m\rangle[\na+1]) \nn\\
&-\ln([M-\langle m\rangle]\na)\big),\label{sat}\\
\s_{B,\tls}&=k_B\big(\gt_1\langle m\rangle(\nb+1)-\gt_2\nb(M-\langle m\rangle)\big)\big(\ln(\langle m\rangle[\nb+1]) \nn\\
&-\ln([M-\langle m\rangle]\nb)\big),\label{sbt}
\end{align}
where ${\cal N}=\gt_1/\g_1=\gt_2/\g_2$.
The three contributions result from the heat exchange between the three subsystems. Only $\s_{A,B}$ is always non-negative, because it has the form $(x-y)(\ln(x)-\ln(y))$ characteristic for systems satisfying the detailed balance condition. Because $\gt_1\neq\gt_2$, the two other contributions are not necessarily non-negative. Fig.~\ref{sigma} shows $\s(t)$ calculated using the parameters of Fig.~\ref{temp-short}.
The entropy production is negative for $t<0.75\;\mu$s. The time span during which the entropy is reduced beyond the loss to the environment is almost an order of magnitude longer than the characteristic time scale
$\tau_{\textrm{char}}= 0.1\;\mu$s. Especially notable is that the entropy is reduced although the dynamical evolution started from thermal equilibrium between the subsystems.
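The sign structure of \eqref{sab} -- \eqref{sbt} can be probed numerically. The sketch below sets $k_B={\cal N}=M=1$ and samples arbitrary occupations; these normalizations and sample ranges are our choices for illustration only.

```python
import math, random

# Sign structure of the entropy-production terms (sab) and (sat).
# k_B = N = 1 and M = 1 are illustrative normalizations of our own.
g0, gt1, gt2, M = 1.0, 10.0, 0.0, 1.0

def sigma_ab(nA, nB):
    return g0*(nB - nA)*(math.log(nB*(nA + 1.0)) - math.log(nA*(nB + 1.0)))

def sigma_at(nA, m):
    return ((gt2*m*(nA + 1.0) - gt1*nA*(M - m))
            * (math.log(m*(nA + 1.0)) - math.log((M - m)*nA)))

random.seed(1)
samples = [(random.uniform(0.01, 5.0), random.uniform(0.01, 5.0),
            random.uniform(0.01, 0.99)) for _ in range(10000)]
# (x - y)(ln x - ln y) form: non-negative up to rounding
ab_nonneg = all(sigma_ab(nA, nB) >= -1e-12 for nA, nB, _ in samples)
# with chiral coupling (gt2 = 0) the TLS terms can turn negative
at_negative_found = any(sigma_at(nA, m) < 0.0 for nA, _, m in samples)
```

The reservoir--reservoir term never goes negative, whereas the TLS terms readily do once $\gt_1\neq\gt_2$, in line with the breakdown of detailed balance discussed above.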
During the exchange of heat with the other subsystems,
each subsystem remains in local thermal
equilibrium. In principle, the exchange of heat between subsystems gives only a lower bound to the entropy production for irreversible processes and the actual $\s(t)$ could be larger than the value given in \eqref{entropyprod}, which contains the contributions from the mutual heat exchange. However, the entropy change in each subsystem can be computed directly via the formulae \eqref{entropyTLS} and \eqref{entropy2}, \eqref{entropy3} as well. Doing so, one finds that no additional entropy production besides the mutual heat exchange is generated in the process described by the rate equations \eqref{rateA} -- \eqref{rateM}. This, obviously, is due to the fact that during this process each subsystem remains in local thermal equilibrium.
It follows that the entropy change of the full system would
even be negative if the initial entropy of the TLS subsystem were lower
than the upper bound given in \eqref{entropyTLS}, because the entropy change is caused solely by mutual heat transfer between subsystems if the condition of local equilibrium is satisfied.
We conclude that the initial thermal equilibrium between reservoirs and the TLS
is unstable and variant (3) of the second law is violated. This is true although the total
entropy of the system plus the environment always increases. The second law,
applied to the system alone, demands a non-negative local entropy production
for any process driven by the coupling to the bath at $T=0$
\cite{reichl,spohn}. Such processes may generate local temperature gradients
between the reservoirs, but $\s(t)$ must always be larger or equal zero to satisfy Clausius' inequality. The violation of this inequality in our Gedankenexperiment shows clearly that the chiral coupling between TLS and the waveguide generates radiation processes which are in conflict with thermodynamics if they are treated statistically in the same way as black body radiation.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig4a}
\caption{The total entropy production $\s(t)$ within the system calculated for $\N\g_0=1$\;MHz,
$\gt_1=10$\;MHz, $\gt_2=0$. For these parameters, the entropy production is negative until $t'\sim 0.75\;\mu$s, when it turns positive and stays so. This behavior entails that for $t<t'$ the second law of thermodynamics is not satisfied. Note that $t'$ is appreciably larger than $\tau_{\textrm{char}}$.}
\label{sigma}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:2}
In discussing the experiment we first note that it is based on phenomena taking place in the realm
where the quantum world interfaces classical physics.
The three subsystems are clearly macroscopic, but their interaction Hamiltonian (\ref{hint1},\ref{hint2}) is purely
quantum mechanical and cannot be treated in a (semi-)classical approximation.
A unique quantum feature of the interaction is given by the fact that the emission rate of the TLS depends on the occupation of the \textit{final} states. In section~\ref{embed}, we demonstrate that this counter-intuitive effect leads to the restoration of detailed balance in a cavity system without non-reciprocal elements.
The non-unitary, probabilistic state development of the device can neither be achieved in classical Hamiltonian dynamics nor in the unitary pure quantum regime described by the Schrödinger equation. The collapse processes that link the classical world and the quantum regime \cite{Leggett2005}
are the cause of the thermal imbalance between the otherwise equivalent reservoirs $A$ and $B$.
Our Gedankenexperiment therefore reveals a clear conflict between thermodynamics and the
probabilistic description of quantum phenomena on a macroscopic scale.
The corresponding rate
equations \eqref{rateA} -- \eqref{rateM} do not satisfy the condition for detailed balance.
Instead they predict a time-dependent state that violates
the Clausius inequality, {\it i.e.} the original formulation of the second law of thermodynamics.
According to the quantum description of the statistical absorption and emission processes, the chirally coupled cavities are expected to
develop unequal occupation numbers. This imbalance creates a temperature gradient
between them, although no work is done on the system, which is coupled to the
environment through the open waveguide only. This coupling to a heat bath at
temperature zero leads to heat flow out of the system which is usually
accompanied by a positive local entropy production. But in our case the
entropy production within the system is negative during a well defined time
interval. This interval is larger than the time characterizing the coupling between the subsystems. The violation of the second law is temporary. Because this violation is described by the rate equations, it is not caused by a statistical fluctuation. Such a fluctuation may occur in the stochastic evolution of the state vector of a single system, even when the initial state is typical \cite{jarz2}, but cannot appear in the fully deterministic equations for averages.
The rate equations \eqref{rateA} -- \eqref{rateM} have been derived under the
assumption that the two channels of the waveguide are fed by $A$ and $B$
through the emission of wavepackets which in turn interact with the TLS in a
causal fashion. The emission and absorption of single photons by the TLS are
considered thus as probabilistic {\it processes} taking place within a finite
time span, due to the quasi-continuum of modes available in the waveguide and
the reservoirs. They therefore satisfy causality: It is not possible for a
right-moving photon in channel 1 to be emitted by the TLS and be subsequently
absorbed by reservoir $A$. The photons entering $A$ are either generated by a
fluctuation in channel 1 of the open waveguide or arrive through channel
2. In the latter case, they may come from the outside, from a TLS or from reservoir $B$. Pure scattering events at the TLS are neglected in this approximation because they are of higher order in the coupling constants. Their inclusion cannot restore the detailed balance broken by the chiral coupling.
Our reasoning is based on the assumption that the interaction of the TLS with
the radiation continuum leads to \textit{real events} \cite{haag} which must
be described statistically, and are therefore caused by a collapse
process. The physical mechanism of this ``real" collapse plays no role in
these considerations because no hypothesis beyond the golden rule enters the
derivation of the rate equations. Of course, we have also assumed that the
macroscopic nature of the radiation and the collection of TLS removes any detectable
entanglement between the subsystems. It entails the statistical
independence of the radiation processes and therefore Markovian
dynamics\footnote{A Markovian master equation yields transition probabilities in
accord with the golden rule \cite{alicki}.}.
This assumption is corroborated by all available experimental evidence up to
now. If the second law of thermodynamics were correct and the presented
statistical analysis therefore wrong, the validity of the second law would be tantamount to the actual
realization of a macroscopic superposition of states in the cavity system,
although it is coupled to an unobserved environment, the open ends of the waveguide.
Nevertheless, the use of the golden rule can also be
justified even in a completely isolated system. If the system is isolated, it could in principle be
described by the full unitary dynamics, leading to a trivial reconciliation
with the second law in its restricted form: the fine-grained Gibbs/v.~Neumann entropy does not change at all.
However, also in this case an ``environment" is present which
consists of the infinitely many degrees of freedom of the photon gas,
decohering the dynamics of the TLS, at least
according to the opinion of the majority of physicists working in quantum optics \cite{c-t}. The effective coarse-grained description
of the closed system proceeds again via the golden rule (see
section~\ref{closed}). A similar temperature difference between $A$ and $B$
appears and a new steady state develops from initial thermal equilibrium for
$t\rightarrow\infty$, having a lower entropy than the initial state, thus violating variant (2) of the second law.
This is shown in Fig.~\ref{closedS}.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig4}
\caption{Temporal development of the total entropy for the closed
variant of our system shown in the inset. The initial state at $t=0$
(thermal equilibrium between all subsystems, including the waveguide) maximizes the entropy. The stable steady state for long times has a lower entropy.}
\label{closedS}
\end{figure}
The temperature difference corresponds to a ``sorting'' between the
reservoirs $A$ and $B$ in the closed system and resembles the action of a Maxwell demon \cite{rosello}.
No information is processed, stored or erased, neither in the TLS nor
in the waveguide \cite{norton}, therefore the usual arguments for positive
entropy production based on information theory \cite{landauer,ben} cannot be
applied.
It has been argued that it is not possible to discern experimentally an
interpretation of quantum mechanics based on probabilistic dynamics and real
collapse from the decoherence interpretation which replaces the physical
collapse by an epistemic operation: the tracing over environmental degrees of
freedom in the full density matrix at the final observation
time $t_\fin$ \cite{leggett2}.
As mentioned in section~\ref{intro}, it is not known whether the photon densities in the reservoirs at $t_\fin$, calculated with the tracing procedure, would differ from the results of section~\ref{open} based on the golden rule.
If so, our proposed experiment, if performed with an isolated system, would
allow one to decide between interpretations based on real collapses and epistemic interpretations. Only the latter do not contradict the second law of
thermodynamics, provided an actual solution of the full many-body problem
would effectively restore the detailed balance condition in the statistical
description. Such a solution would also be necessary to identify possible
reasons for the breakdown of well-established tools like the golden rule or
the Markov approximation in case the system shows the equilibrium
state predicted by thermodynamics at all times. In any case, a difference between the full solution and the approximation by the golden rule would entail another mystery: why is the approximation valid for black-body radiation in arbitrary cavities (see section~\ref{embed}) but not for chiral waveguides?
As a computation of the full quantum dynamics appears out of reach at present, the question can only be decided experimentally. A direct implementation of our model
appears feasible with current technology \cite{dayan,lodahl,rosen}.
In case that the experiment reported unequal distributions in $A$ and $B$ for
the closed system, one would be forced to conclude that statistical processes
such as spontaneous emission and absorption are able to reduce the total
entropy for arbitrarily large isolated systems and on average, not only for short
times and small systems as expected from fluctuations \cite{jarz2}. Then the state of
classical thermal equilibrium with maximal entropy is unstable and the system
moves to steady states with a lower entropy.
In this case, entropy-reducing processes would be expected to occur actually
in nature, in structures differing greatly from our device\footnote{
Collapse processes can be provided in principle by any inelastic process, be
it scattering or (radioactive) decay. The photon reservoirs could be replaced by any incoherent source of particles or
energy, e.g., resistors with Johnson-Nyquist noise.
Devices closely related to the presented chiral waveguide, but being based on
coherence filters, have been presented in
\cite{Mannhart20182,mannhart_lossless_2019}. A possible solid-state realization
employing asymmetric quantum rings featuring inelastic scattering works
similarly to our quantum-optical implementation and would pump electrons
instead of photons \cite{Mannhart20181,bredol2}.}.
The contradictions presented are rooted in the still unresolved status of the
measurement problem of quantum physics: quantum mechanical probabilities can only
be computed under the assumption that a collapse (either real or epistemic)
takes place. These stochastic probabilities are formally encoded in the Born rule.
To our
knowledge, neither the Born rule nor the golden rule have ever been used
for a derivation of the second law of thermodynamics \cite{goldstein}.
There have been attempts to
deduce the second law from quantum mechanics by employing
several ``coarse-graining'' prescriptions. The proof of the H-theorem given by
v.~Neumann employs assumptions about the density matrix of pure states and
macroscopic distinguishability, but explicitly excludes collapse or measurement
processes from the quantum dynamics. In v.~Neumann's approach the quantum dynamics always stays unitary
\cite{neumann-H,goldstein2}. This may seem surprising,
because the collapse processes underlying the golden rule are inherently
irreversible, as noted first also by v.~Neumann \cite{neu}. In our opinion, the intrinsic probabilistic features of quantum mechanics encoded in the Born rule add an elementary irreversible process to the dynamical laws of nature. This process, the non-unitary collapse of the wavefunction, appears during macroscopic measurements but also as {\it microscopic event}, thereby leading to the correct statistics of a photon gas interacting with matter. Therefore, we disagree with the position put forward in \cite{lebowitz93} that the irreversibility of the measurement is just due to the macroscopic nature of the apparatus and has essentially the same origin as the irreversible behavior of macroscopic variables in classical mechanics, which obeys the second law of thermodynamics. We have shown that the presence of microscopic collapse processes may lead under certain circumstances to a conflict with this law.
In conclusion, transition rates of quantum systems are commonly calculated with
great success by using Fermi's golden rule. This approach is widely accepted,
as the golden rule directly results from the Born rule.
Here, we have introduced practically realizable, open and closed quantum
systems of coupled cavities and determined their behavior by applying the
golden rule. The predicted behavior of both systems violates the second law of thermodynamics.
We therefore conclude that\\
1)\ the statistical description of quantum mechanical
transitions given by the golden rule is incorrect or\\
2)\ the second law of
thermodynamics is not universally valid.
\begin{acknowledgements}
The authors gratefully acknowledge an outstanding interaction with T. Kopp, support by L. Pavka, and also very helpful discussions with and criticism from J. Annett, E. Benkiser, H. Boschker, A. Brataas, P. Bredol, C. Bruder, H.-P. B\"uchler, T. Chakraborty, I.L. Egusquiza, R. Fresard,
A. Golubov, S. Hellberg, K.-H. H\"ock, G. Ingold, V. Kresin, L. Kürten, G. Leuchs, E. Lutz, F. Marquardt, A. Morpurgo, A. Parra-Rodriguez, H. Rogalla, A. Roulet, M. Sanz, P. Schneewei{\ss}, A. Schnyder, C. Schön,
E. Solano, R. Stamps, R. Valenti, J. Volz and R. Wanke.
D.B. thanks Raymond Fresard for the warm hospitality during his stay at CRISMAT, ENSICAEN, which was supported by the French Agence Nationale de la Recherche, Grant No. ANR-10-LABX-09-01 and the Centre National de la Recherche Scientifique through Labex EMC3.
\end{acknowledgements}
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\section{Appendix}
\label{app}
\subsection{Derivation of the rate equations \eqref{bb-n} and \eqref{bb-m}}
\label{blackbody}
The simple, albeit nonlinearly coupled rate equations \eqref{bb-n}, \eqref{bb-m} underlie Einstein's famous derivation of Planck's law for black-body radiation \cite{einstein1916}. In this section, we describe the approximations and statistical assumptions leading to the closed system of equations \eqref{bb-n}, \eqref{bb-m}.
We begin by describing the quantum state of a single system within the statistical ensemble. A collection of $M$ two-level systems with energy splitting $\hba\Om$ interacts with radiation modes $a_j$, characterized by their frequencies in the range $[\Om-\D/2,\Om+\D/2]$ and additional parameters such as momentum and polarization, summarized in the index $j$. The full many-body quantum state $|\Psi(t)\rangle$ at time $t$ is then described as
\beq
|\Psi(t)\rangle = |s_1,\ldots s_M;\{n_j\}\rangle,
\label{fullstate}
\eeq
where the $s_l=0,1$ correspond to the ground and excited state $|g_l\rangle$ and $|e_l\rangle$ of the $l$-th TLS and $n_j$ is the number of photons in mode $j$. While the states $\{\Psi\}$ form a complete basis in the full Hilbert space, we assume that at any time the system can be described by exactly one of these states, {\it i.e.} we neglect the coherence of each TLS and coherent superpositions between several TLS. We also consider the radiation field as thermal, namely each state being diagonal in the eigenbasis of the non-interacting Hamiltonian
\beq
H_0=\frac{\hba\Om}{2}\sum_{l=1}^M\s^z_l +\hba\sum_j\om_j\ad_j a_j.
\eeq
The interaction between the TLS and the radiation via the Hamiltonian \eqref{hint-bb} is incorporated as a stochastic process using the golden rule. It gives the probabilities to transition from state $|\Psi(t)\rangle$ to states
$|\Psi'(t+\del t)\rangle$ (emission) or $|\Psi''(t+\del t)\rangle$ (absorption),
\begin{align}
|s_1,\ldots,1_l,\ldots s_M;\{n_j\}\rangle
&\ra |s_1,\ldots,0_l,\ldots s_M;\{n_{j\neq k}\},n_k+1\rangle,
\label{emiss}\\
|s_1,\ldots,0_{l'},\ldots s_M;\{n_j\}\rangle
&\ra |s_1,\ldots,1_{l'},\ldots s_M;\{n_{j\neq k}\},n_k-1\rangle.
\label{absorp}
\end{align}
The rate $\tau_{\emi}^{-1}$ for the emission process of a single TLS depends on the occupation $n_j$ of the mode $j$ actually involved. However, as the process couples the TLS to a continuum of radiation degrees of freedom, the total probability is given by the average $\bar{n}$ of the occupations of all modes in the interval $[\Om-\D/2,\Om+\D/2]$ belonging to the state $|\Psi(t)\rangle$ as $\tau^{-1}_\emi=\g'(\bar{n}(t)+1)$ (see \cite{c-t} and section~\ref{open}). As we do not know into which mode the emission occurs, we can simplify the description of $|\Psi(t)\rangle$ given in \eqref{fullstate} as
\beq
\Psi(t)= |s_1,\ldots s_M;\bar{n}\rangle,
\label{fullstate1}
\eeq
which already amounts to a coarse-graining procedure applied to the microstate \eqref{fullstate}. The next and crucial approximation is the assumption that the radiation processes of the several TLS are independent, although the coupling to the radiation field introduces statistical correlations between them. The approximation is justified by the fact that we have a {\it continuum} of radiation modes which do not interact among themselves. This implies that each individual radiation process can be considered independent of all others; the chance that two TLS interact with exactly the same microscopic field mode $j$ within the observation time is vanishingly small. Thus we consider the joint probability $P^e_l(t,\bar{n})$ for the $l$-th TLS to be in the excited state and the average mode occupation to be $\bar{n}$. This probability satisfies the master equation \cite{reichl}
\beq
\frac{\rd P_l^e(t,\bar{n})}{\rd t}=
-\g'P_l^e(t,\bar{n})(\bar{n} +1)
+\g'P_l^g(t,\bar{n})\bar{n},
\label{master}
\eeq
where $P_l^g(t,\bar{n})=P(\bar{n},t)-P_l^e(t,\bar{n})$ is the probability that the $l$-th TLS is in its ground state. This relation is a consequence of the factorization property
\beq
P_l^e(t,\bar{n})=P^e_l(t)P(\bar{n},t),
\label{factor}
\eeq
which follows from the same argument as the statistical independence of the TLS, {\it i.e.} the presence of macroscopically many different modes of the radiation field in the frequency interval $[\Om-\D/2,\Om+\D/2]$, rendering the average occupation $\bar{n}$ statistically independent of the state of a single TLS. Summing \eqref{master} over $\bar{n}$, we obtain
\beq
\frac{\rd P_l^e(t)}{\rd t}=
-\g'P_l^e(t)(\langle n\rangle(t) +1)
+\g'(1-P_l^e(t))\langle n\rangle(t).
\label{master2}
\eeq
Noting that $P^e_l(t)=\langle m(t)\rangle/M$, we obtain equation~\eqref{bb-m} for the ensemble average of the number $m(t)$ of excited TLS. The corresponding rate equation \eqref{bb-n} for the average photon number follows directly from the conservation of the total excitation number
\beq
N_{\textrm{exc}}=\sum^M_{l=1} s_l + \sum_j n_j,
\label{exno}
\eeq
but contains a different rate constant $\g$ to account for the continuous density of states around the resonance frequency, $\g=\g'/{\cal N}$ (see equation~\eqref{dos} in section~\ref{open}). The state vector \eqref{fullstate} satisfies a much more complicated master equation than \eqref{master2} and its stochastic evolution shows deviations of the actual value $m(t)$ from its average $\langle m(t)\rangle$ computed via \eqref{bb-n}, \eqref{bb-m}. But one may argue that a {\it typical} state will have a time evolution which deviates only slightly from its mean value for all times, $|m_{\textrm{typical}}(t)-\langle m(t)\rangle|/M\ra 0$ for $M\ra\infty$ \cite{goldstein,goldstein2}. This statement can be shown analytically for simple stochastic models like the Ehrenfest model \cite{baldovin} and also in purely deterministic toy models, e.g. the Kac ring model \cite{bricmont}. Moreover, there is ample numerical evidence for such a behavior of typical states in more realistic models \cite{cerino}.
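The typicality argument can be illustrated by a small Monte Carlo sketch in which $M$ TLS make independent transitions with golden-rule probabilities. Holding $\bar{n}$ fixed is a large-reservoir simplification of our own; the trajectory then concentrates near the fixed point $p^*=\bar{n}/(2\bar{n}+1)$ of \eqref{master2}.

```python
import random

# Monte Carlo sketch of a "typical" trajectory: M two-level systems make
# independent golden-rule transitions. Holding nbar fixed is our own
# large-reservoir simplification; the fixed point of (master2) is then
# p* = nbar/(2*nbar + 1).
random.seed(7)
M, nbar = 2000, 0.5
p_dt = 0.02                     # gamma' * dt, kept small
m = M                           # start with every TLS excited
for _ in range(800):
    # each excited TLS decays with prob gamma'*dt*(nbar+1),
    # each ground-state TLS is excited with prob gamma'*dt*nbar
    down = sum(random.random() < p_dt*(nbar + 1.0) for _ in range(m))
    up = sum(random.random() < p_dt*nbar for _ in range(M - m))
    m += up - down
p_star = nbar/(2.0*nbar + 1.0)  # deterministic fixed point
```

For this $M$ the relative fluctuations around $\langle m\rangle$ are already at the percent level, consistent with the $M\ra\infty$ concentration invoked above.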
In our case, the ultimate justification of \eqref{bb-n}, \eqref{bb-m} rests on the fact that they lead to Planck's law for black-body radiation, one of the cornerstones of modern physics. It would be hard to argue that these equations are incorrect but in perfect agreement with all experiments due to some unknown cancellation of errors.
\subsection{Derivation of the rate equations \eqref{rateA} -- \eqref{rateM} for the open system}
\label{open}
We begin with the computation of the loss rate of photons with frequency $\om_j$ from reservoir $A$ via channel 1 of the open waveguide (see Fig.~\ref{Fig1}). In the coupling Hamiltonian, \eqref{hint1}, the process is of first order.
The initial state reads
\beq
|\p_\ini\rana = |\{n_A\}n_{Aj},\{n_B\},\{0_1\}0_{1k},\{0_2\},\{s\}\rana,
\label{dec-in}
\eeq
for a certain configuration of occupations $\{n_A\}$ for modes $j'$ in $A$ with $j'\neq j$. Here, $n_{Aj}$ is the occupation number of mode $j$. Similarly,
$\{n_B\}$ denotes a configuration in reservoir $B$. Further, $\{s\}$ is the configuration of excited ($s_l=e$) or ground states ($s_l=g$) of the $M$ TLS, $l=1,\ldots M$.
The waveguide channels 1 and 2 are not occupied in $\p_\ini$.
The initial state is connected via the term $h_{jk}^*a_{1k}^\dagger a_{Aj}$ to the final state
\beq
|\p_\fin\rana = |\{n_A\}n_{Aj}-1,\{n_B\},\{0_1\}1_{1k},\{0_2\},\{s\}\rana.
\label{dec-fin}
\eeq
The $S$-matrix element then reads
\beq
S^{t_0}_{fi}=-2\pi i\del^{t_0}(E_\fin - E_\ini)\lan\p_\fin|H_{\inter}^1|\p_\ini\rana
\eeq
with $E_\fin-E_\ini = \hba(\om_k-\om_j)$. The interaction is assumed to take place over a time interval $t_0$. The corresponding regularized $\del$-function is (see \cite{c-t})
\beq
\del^{t_0}(E) = \frac{1}{\pi}\frac{\sin(Et_0/2\hba)}{E}.
\eeq
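As a quick numerical check of this regularization (in units with $\hba=1$, our own choice), $\del^{t_0}$ integrates to approximately one over a window that is wide compared to its width $\sim\hba/t_0$:

```python
import math

# Numerical check (hbar = 1, our choice): the regularized delta function
# delta^{t0}(E) = sin(E*t0/2)/(pi*E) integrates to ~1 over a wide window.
t0 = 200.0
a = t0/2.0                      # t0 / (2 hbar)

def delta_t0(E):
    if E == 0.0:
        return a/math.pi        # limiting value sin(aE)/(pi E) -> a/pi
    return math.sin(a*E)/(math.pi*E)

L, n = 10.0, 200001             # symmetric window [-L, L], fine grid
dE = 2.0*L/(n - 1)
total = sum(delta_t0(-L + i*dE) for i in range(n))*dE
```

The residual deviation from unity decays with the window size, as expected for $\del^{t_0}(E)\ra\del(E)$ in the limit $t_0\ra\infty$.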
We have
\beq
\lan\p_\fin|H_{\inter}^1|\p_\ini\rana = h^*_{jk}\sqrt{n_{Aj}},
\eeq
and, according to standard reasoning \cite{c-t}, the transition rate is given by
\beq
\tau_\dec^{-1}(\om_j)=\frac{1}{t_0}\sum_k 4\pi^2|h_{jk}|^2 n_{Aj}\frac{t_0}{2\pi\hba}\del^{t_0}(E_\fin-E_\ini)
=\frac{2\pi}{\hba}|\hb|^2n_{Aj}\rho_1(\hba\om_j),
\eeq
where $\hb$ is the value of $h_{jk}$ for $\om_j=\om_k$, which is assumed to be constant in the interval $[\Om-\D/2,\Om+\D/2]$. Averaging over all modes $j$ with frequency $\om$, we obtain $\tau_\dec^{-1}(\om)=\g_\dec(\om)\na(\om)$ with
\beq
\g_\dec(\om)=\frac{2\pi}{\hba}|\hb|^2\rho_{wg}(\hba\om),
\eeq
assuming $\rho_1=\rho_2=\rho_{wg}$.
Adding the contribution of the decay into channel 2, the master equation for $\na(\om)$
describing the loss process reads
\beq
\frac{\rd \na(\om)}{\rd t}=-2\g_\dec(\om)\na(\om)+\ldots
\label{decay}
\eeq
A similar expression holds for reservoir $B$ with the same rate constant $\g_\dec$.
The TLS couple to the channels via spontaneous emission. The corresponding first-order process leads to
\beq
\frac{\rd \langle m\rangle}{\rd t}=-(\gt_{11}(\Om)+\gt_{12}(\Om))\langle m\rangle +\ldots,
\label{decay-m}
\eeq
with $\gt_{1r}(\om)=2\pi \gb_r^2(\om)\rho_{wg}(\hba\om)/\hba$ for $r=1,2$ (see below).
We compute now the coherent transfer of a photon from reservoir $A$ to $B$ through the chiral waveguide, which is of second order in the coupling $|\hb|^2$.
As this process concerns a wavepacket of finite width and takes place in a finite time interval, only the right-moving channel 1 is relevant, i.e., the coupling Hamiltonian is
\beq
H^1_{\inter}=\sum_{q=A,B}\sum_{j,k} h_{jk}\left(a_{qj}^\dagger a_{1k} +\hc\right).
\eeq
We denote the initial state as
\beq
|\p_\ini\rana = |\{n_A\}n_{Aj},\{n_B\}n_{Bl},\{0_1\}0_{1k},\{0_2\},\{s\}\rana.
\eeq
The transition from $A$ to $B$ occurs via the intermediate states
\beq
|\p_\im\rana = |\{n_A\}n_{Aj}-1,\{n_B\}n_{Bl},\{0_1\}1_{1k},\{0_2\},\{s\}\rana
\eeq
towards the final state
\beq
|\p_\fin\rana = |\{n_A\}n_{Aj}-1,\{n_B\}n_{Bl}+1,\{0_1\}0_{1k},\{0_2\},\{s\}\rana,
\eeq
involving the operators
\beq
h^*_{jk}\ad_{1k}a_{Aj}, \quad h_{lk}\ad_{Bl}a_{1k}.
\eeq
The $S$-matrix $S_{fi}$ connecting initial and final states reads
\beq
S^{t_0}_{fi}= -2\pi i \del^{t_0}(E_\fin-E_\ini)\lim_{\eta \ra 0^+}
\sum_{k}\frac{V_{fk}V_{ki}}{E_{\ini}-E_{\im} +i\eta},
\eeq
and $E_\ini-E_\im = \hba(\om_j-\om_k)$. The matrix elements $V_{fk}$ and $V_{ki}$ are
\beq
V_{fk}=\lan\p_{\fin}|h_{lk}\ad_{Bl}a_{1k}|\p_{\im}\rana, \quad
V_{ki}=\lan\p_{\im}|h^*_{jk}\ad_{1k}a_{Aj}|\p_{\ini}\rana.
\eeq
We find
\beq
S^{t_0}_{fi} \approx -2\pi^2\del^{t_0}(\hba(\om_l-\om_j)) |\hb|^2\rho_{wg}(\hba\om_j)
\sqrt{n_{Aj}(n_{Bl}+1)},
\eeq
and obtain for the transition rate out of state $|\p_\ini\rana$,
\beq
\tau^{-1}_{jA\ra B}=\frac{1}{t_0}\sum_l|S^{t_0}_{fi}|^2=
\sum_l \frac{2\pi^3}{\hba}\del^{t_0}(\hba(\om_l-\om_j))|\hb|^4\rho^2_{wg}(\hba\om_j)n_{Aj}(n_{Bl}+1).
\eeq
We denote by $\nb(\om)$ the average occupation number per mode in reservoir $B$ at frequency $\om$. It follows that
\beq
\tau^{-1}_{jA\ra B}= \frac{2\pi^3}{\hba}|\hb|^4\rho_{wg}^2(\hba\om_j)
\rho_B(\hba\om_j)n_{Aj}(\nb(\om_j)+1).
\eeq
If we average over all modes $j$ in $A$ with frequency $\om$, the transition rate from $A$ to $B$ at $\om$ reads
\beq
\frac{1}{\tau_{A\ra B}(\om)}= \frac{2\pi^3}{\hba}|\hb|^4\rho_{wg}^2(\hba\om)
\rho_B(\hba\om)\na(\om)(\nb(\om)+1).
\eeq
Similarly, the transition from $B$ to $A$, proceeding via channel 2, is
\beq
\frac{1}{\tau_{B\ra A}(\om)}= \frac{2\pi^3}{\hba}|\hb|^4\rho_{wg}^2(\hba\om)
\rho_A(\hba\om)\nb(\om)(\na(\om)+1).
\eeq
Finally, the master equation for reservoir $A$ characterizing direct transitions between $A$ and $B$ through the waveguide is given by
\beq
\frac{\rd \na(\om)}{\rd t} = -\frac{1}{\tau_{A\ra B}(\om)} +\frac{1}{\tau_{B\ra A}(\om)} = \g_0(\om)(\nb(\om) - \na(\om)),
\label{direct}
\eeq
where we have assumed identical densities of states in $A$ and $B$, $\rho_A=\rho_B=\rho$. The rate $\g_0(\om)$ is defined as
\beq
\g_0(\om)= \frac{2\pi^3}{\hba}|\hb|^4\rho_{wg}^2(\hba\om)
\rho(\hba\om).
\eeq
The ratio of second and first order contributions follows as
\beq
\frac{\g_0(\om)}{\g_\dec(\om)}=\pi^2|\hb|^2\rho_{wg}(\hba\om)\rho(\hba\om).
\label{ratio}
\eeq
Next, we consider the absorption of radiation from reservoir $A$ by the $l$-th TLS. The initial state is
\beq
|\p_\ini\rana = |\{n_A\}n_{Aj},\{n_B\},\{0_1\}0_{1k},\{0_2\},\{s\}g_l\rana.
\label{absorption}
\eeq
The absorption of a wavepacket of finite spatial extension can only proceed via channel 1. The intermediate states then read
\beq
|\p_\im\rana = |\{n_A\}n_{Aj}-1,\{n_B\},\{0_1\}1_{1k},\{0_2\},\{s\}g_l\rana
\eeq
and the final state is obtained by absorption of the photon in channel 1 by the TLS,
\beq
|\p_\fin\rana = |\{n_A\}n_{Aj}-1,\{n_B\},\{0_1\}0_{1k},\{0_2\},\{s\}e_l\rana.
\eeq
For the S-matrix element we have
\beq
S^{t_0}_{fi}=-2\pi^2\hb \gb_1\del^{t_0}(\hba\Om-\hba\om_j)\rho_{wg}(\hba\om_j)
\sqrt{n_{Aj}},
\eeq
where $\gb_1$ is the value of $g_{1k}$ for $\om_k=\om_j$. The corresponding term in the master equation for $\langle m\rangle$, the average number of excited TLS, is obtained by summing over all initial states, which leads to the expression
\beq
\frac{\rd \langle m\rangle}{\rd t} = -\frac{2\pi^3}{\hba}(M-\langle m\rangle)\hb^2\gb^2_1\rho_{wg}^2(\hba\Om)\rho(\hba\Om)\na(\Om) +\ldots
\eeq
In an analogous manner, the radiation from reservoir $B$ is absorbed via channel 2 by the TLS,
\beq
\frac{\rd \langle m\rangle}{\rd t} = -\frac{2\pi^3}{\hba}(M-\langle m\rangle)\hb^2\gb^2_2\rho_{wg}^2(\hba\Om)\rho(\hba\Om)\nb(\Om) +\ldots
\eeq
On the other hand, the emission from the TLS towards reservoir $A$ must proceed via the left-moving channel 2. The initial state is now
\beq
|\p_\ini\rana = |\{n_A\}n_{Aj},\{n_B\},\{0_1\},\{0_2\}0_{2k},\{s\}e_l\rana,
\eeq
the intermediate state
\beq
|\p_\im\rana = |\{n_A\}n_{Aj},\{n_B\},\{0_1\},\{0_2\}1_{2k},\{s\}g_l\rana,
\eeq
and the final state
\beq
|\p_\fin\rana = |\{n_A\}n_{Aj}+1,\{n_B\},\{0_1\},\{0_2\}0_{2k},\{s\}g_l\rana.
\eeq
A calculation completely analogous to the one for absorption above leads to the following term in the master equation for $\langle m\rangle$, this time summing over final modes in reservoir $A$,
\beq
\frac{\rd \langle m\rangle}{\rd t} = \frac{2\pi^3}{\hba}\langle m\rangle\hb^2\gb^2_2\rho_{wg}^2(\hba\Om)\rho(\hba\Om)(\na(\Om)+1) +\ldots
\label{stim-em}
\eeq
In this expression, the term proportional to $\na$ is associated with stimulated emission from the TLS. It is noteworthy that the stimulated emission of the TLS is not caused by photons that have been emitted by $A$ and then impinge on the TLS to create photons in the same mode. Here, in contrast, the stimulated emission of the TLS is induced by photons that are \textit{received} by $A$ after having been emitted by the TLS.
This counterintuitive effect, which is solely due to the Bose statistics and therefore only possible in quantum physics, restores detailed balance for the case of a cavity embedded in another one, in accord with Kirchhoff's law on black body radiation (see section \ref{embed}).
With the definition
\beq
\gt_r(\Om)=\frac{2\pi^3}{\hba}\hb^2\gb^2_r\rho_{wg}^2(\hba\Om)\rho(\hba\Om)
\eeq
for $r=1,2$ we obtain the master equation for $\langle m\rangle$,
\begin{align}
\frac{\rd \langle m\rangle}{\rd t}&= -[\gt_{11}(\Om) +\gt_{12}(\Om)]\langle m\rangle + \gt_1(\Om)\left[\na(M-\langle m\rangle)-(\nb+1)\langle m\rangle\right] \nn\\
&+\gt_2(\Om)\left[\nb(M-\langle m\rangle)-(\na+1)\langle m\rangle\right],
\label{rate-me}
\end{align}
which is Eq.\eqref{rateM} in the main text. To compute the terms corresponding to the absorption and emission processes in the rate equations for $\na$ and $\nb$, we note that the rates $\gt_r$ contain a summation over initial and final modes, respectively, in the reservoirs. The coefficients $\g_r$ describing the temporal change in the average occupation number per mode, $\lana n_q\rana$, are therefore $\gt_r$ divided by the number of modes in the relevant frequency interval. Thus,
\beq
\N = \hba\int_{\Om-\D/2}^{\Om+\D/2}\rd\om \rho(\hba\om), \quad \g_r=\gt_r/\N.
\label{dos}
\eeq
Together with \eqref{decay} and \eqref{direct}, it follows for $\om=\Om$,
\begin{align}
\frac{\rd \na}{\rd t} &= -2\g_\dec\na + \g_0(-\na+\nb)- \g_1(M-\langle m\rangle)\na\nn\\
&+ \g_2\langle m\rangle(\na+1),\label{rateAa}\\
\frac{\rd \nb}{\rd t} &= -2\g_\dec\nb + \g_0(-\nb+\na)- \g_2(M-\langle m\rangle)\nb\nn\\
&+ \g_1\langle m\rangle(\nb+1), \label{rateBa}
\end{align}
which are Eqs. \eqref{rateA} and \eqref{rateB}.
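As a consistency check, the coupled rate equations \eqref{rate-me}, \eqref{rateAa} and \eqref{rateBa} can be integrated numerically. The sketch below uses a simple Euler scheme with the rate values quoted in the caption of Fig.~\ref{occup-short} (which, according to the discussion there, are the same as those used for the open system); the number of two-level systems $M=1$, the thermal initial occupations, and the step size are illustrative assumptions of this sketch, not values fixed by the text.

```python
import math

# Rates in Hz, taken from the Fig. S2 caption (assumed to apply to the
# open system as well); M = 1 TLS is an illustrative assumption.
g_dec, g0 = 1e4, 1e4          # gamma_dec, gamma_0
gt1, gt11 = 1e7, 1e7          # tilde-gamma_1, tilde-gamma_11
g1 = 1e5                      # gamma_1 = tilde-gamma_1 / N, with N = 100
gt2 = gt12 = g2 = 0.0         # chiral limit: channel-2 couplings switched off
M = 1.0

x = 1.0                                # hbar*Omega / (k_B T(0))
nA = nB = 1.0 / (math.exp(x) - 1.0)    # initial thermal Bose occupation
m = M / (math.exp(x) + 1.0)            # initial thermal TLS excitation
nA0, m0 = nA, m

dt, steps = 1e-9, 100_000              # Euler step well below 1/gt1
for _ in range(steps):
    # Eq. (rate-me) for <m>, Eqs. (rateAa)/(rateBa) for the reservoirs
    dm = (-(gt11 + gt12) * m
          + gt1 * (nA * (M - m) - (nB + 1) * m)
          + gt2 * (nB * (M - m) - (nA + 1) * m))
    dnA = (-2 * g_dec * nA + g0 * (nB - nA)
           - g1 * (M - m) * nA + g2 * m * (nA + 1))
    dnB = (-2 * g_dec * nB + g0 * (nA - nB)
           - g2 * (M - m) * nB + g1 * m * (nB + 1))
    m += dt * dm
    nA += dt * dnA
    nB += dt * dnB
# Reservoir A is emptied by the non-reciprocal coupling much faster than B.
```

With these parameters the non-reciprocal coupling ($\gt_2=0$) drains reservoir $A$ on the timescale $\sim 1/\g_1$, while $B$ remains populated much longer, in line with the behaviour discussed for the open system.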
\subsection{Derivation of the entropy production \eqref{sab} -- \eqref{sbt}}
\label{entr-prod}
As no work is done during the irreversible process, we can write for the heat flow out of the system within our approximation
\beq
Q^{\ext}=-\frac{\rd}{\rd t}\big(\lana H_A\rana+\lana H_B\rana+\lana H_{\tls}\rana\big) >0,
\eeq
because the coherences of the TLS are not important and the waveguide contains no photons on average.
We consider in the following only quantities (entropy and energy) corresponding to the interval
$[\hba(\Om-\D/2),\hba(\Om+\D/2)]$ because other photon modes do not interact with the TLS. The heat transferred to the environment per unit time reads for each subsystem (see \eqref{rateA} -- \eqref{rateM})
\beq
Q^{\ext}_q=2\hba\Om{\N}\g_{\dec}\lana n_q\rana, \quad
Q^{\ext}_{\tls}=\hba\Om(\gt_{11}+\gt_{12})\langle m\rangle.
\eeq
The corresponding entropy change of the system is given by
\beq
-\int\rd{\bm o}\cdot{\bm J}_{\sys} = -\left(\sum_{q=A,B}\frac{Q^{\ext}_q}{T_q} + \frac{Q^{\ext}_{\tls}}{T_{\tls}}\right),
\eeq
with
\beq
T_q^{-1}=\frac{k_B}{\hba\Om}\ln\left(\frac{\lana n_q\rana +1}{\lana n_q\rana}\right), \quad
T_{\tls}^{-1}=\frac{k_B}{\hba\Om}\ln\left(\frac{M-\langle m\rangle}{\langle m\rangle}\right).
\eeq
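These relations are simply the inversions of the Bose and two-level thermal occupation functions. A quick numerical round trip makes this explicit; the sketch below works in natural units $k_B=\hba\Om=1$, writing $x=\hba\Om/k_BT$, and the sample values are arbitrary.

```python
import math

def x_from_photon(n):
    """x = hbar*Omega/(k_B T) for a radiation mode with occupation n."""
    return math.log((n + 1.0) / n)

def x_from_tls(m, M):
    """x = hbar*Omega/(k_B T) for M two-level systems with <m> excited."""
    return math.log((M - m) / m)

def bose(x):
    """Thermal Bose occupation at x = hbar*Omega/(k_B T)."""
    return 1.0 / (math.exp(x) - 1.0)

def tls(x, M):
    """Thermal number of excited TLS out of M."""
    return M / (math.exp(x) + 1.0)

# Round trip: occupation -> effective temperature -> occupation.
n = 0.582            # arbitrary photon occupation
m, M = 26.9, 100.0   # arbitrary TLS excitation
```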
The entropy of the system is always reduced due to the vanishing temperature of the bath. Although the entropy production of system plus bath is thus formally positive and infinite, the actual entropy loss of the system stays finite and is of course compatible with the second law which mandates that the local entropy production within the system must be non-negative. In our case, the local entropy production is induced by the heat exchange between the three subsystems. The heat change of reservoir $A$ due to the interaction with reservoir $B$ reads
\beq
Q^A_{A,B}=-Q^B_{A,B}=\hba\Om\N\g_0(\nb-\na).
\eeq
It follows for the entropy production due to this process
\beq
\s_{A,B}=\hba\Om\N\g_0(\nb-\na)(T_A^{-1}-T_B^{-1}),
\eeq
which is equation \eqref{sab}. The equations \eqref{sat}, \eqref{sbt} for the heat exchange between the reservoirs and the TLS are deduced correspondingly.
Alternatively, one may compute the entropy change in each subsystem directly with the expressions \eqref{entropyTLS} for the TLS entropy and \eqref{entropy2} for the entropy of the radiation modes. This shows that the entropy change of each subsystem is not accompanied by additional entropy production but is solely due to heat transfer between subsystems, a consequence of the local thermal equilibrium in $A$, $B$ and the TLS.
\subsection{Derivation of the rate equations for the closed system}
\label{closed}
This section discusses a variant of the open system characterized
in section~\ref{open}. In this variant, which is a closed system, the
waveguide satisfies periodic boundary conditions, corresponding to a loop
of length $L$, where $L$ is much larger than the distance between the
reservoirs $A$ and $B$, see Fig.~\ref{loop}. We show that a description
assuming real absorption and emission processes fulfills the detailed
balance condition to first order in the coupling. Second order processes
analogous to those described in Eqs. \eqref{absorption} -- \eqref{stim-em} break the detailed balance condition when coherent absorption and emission are only possible along the short path between the reservoirs and the TLS, while dephasing occurs for wave packets emitted by a TLS and traveling the long way around the loop before reaching one of the reservoirs. The latter case is already accounted for by the first-order terms describing the equilibration between the reservoirs/TLS and the waveguide. Of course, if the dynamics of the closed system is considered to be unitary, corresponding to completely coherent evolution, the entropy does not change. This situation could be approximated by treating all second order processes as coherent,
including those on the long path between the TLS and the cavities. Then
the detailed balance condition would be satisfied, leading to stabilization of the state with maximal entropy. However, if the coherence is restricted to processes occurring along the short path, the ensuing steady state does not have maximum entropy.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{FigS1}
\caption{The system obtained by closing the open waveguide of the system shown in Fig.~1 of the main text. The occupation numbers of the chiral channels 1 and 2 do not vanish due to equilibration with the reservoirs $A$ and $B$.}
\label{loop}
\end{figure}
In the closed system, the occupancy of the waveguide can no longer be assumed to be zero, as the photons cannot escape towards infinity.
We describe the occupancy in channel $q$ by $\lana n_q\rana$ for $q=1,2$.
The Hamiltonian is given in Eqs. \eqref{ham} -- \eqref{hint2}.
The exchange of photons between cavity $A$ and channel 1 of the waveguide
proceeds via a first-order process analogous to that given in Eqs. \eqref{dec-in} and \eqref{dec-fin}, but the matrix element reads now (suppressing the frequency arguments),
\beq
\lan\p_\fin|H_{\inter}^1|\p_\ini\rana = h^*_{jk}\sqrt{n_{Aj}(n_1+1)}.
\eeq
This gives for the transition rate from mode $j$ in $A$ to channel 1 of the waveguide
\beq
\tau_{A\ra 1}^{-1}=\g_{\dec}n_{Aj}(\nei+1),
\eeq
and for the reverse process,
\beq
\tau_{1\ra A}^{-1}=\g_{\dec}\nei(n_{Aj} +1),
\eeq
where in this case a sum over initial states has to be performed to obtain the rate of emission into the fixed mode $j$ of $A$.
The terms in the master equation for $\na$ are thus
\beq
\frac{\rd \na}{\rd t} = \g_{\dec}(\nei -\na) + \ldots,
\eeq
and for $\nei$
\beq
\frac{\rd \nei}{\rd t} = \g_{\dec}'(\na -\nei) + \ldots,
\label{channel1}
\eeq
with the rate constant
$\g_\dec'=(\rho/\rho_{wg})\g_\dec$. Analogous expressions are obtained for $B$ and channel 2.
Another first-order process couples the TLS and the waveguide modes. We find for this contribution to the rate equation for $\langle m\rangle$
\beq
\frac{\rd \langle m\rangle}{\rd t} = \sum_{r=1,2} \gt_{1r}[(M-\langle m\rangle)\nr -\langle m\rangle(\nr +1)] + \ldots,
\eeq
with $\gt_{1r}=2\pi g_r^2\rho_{wg}/\hba$ (compare Eq.\eqref{decay-m}).
The corresponding terms in the rate equations for $\nr$ are
\beq
\frac{\rd \nr}{\rd t} = \g_{1r}[\langle m\rangle(\nr +1) -(M-\langle m\rangle)\nr] + \ldots,
\eeq
and
\beq
\g_{1r}=\gt_{1r}/\N', \quad \N' = \hba\int_{\Om-\D/2}^{\Om+\D/2}\rd\om \rho_{wg}(\hba\om).
\eeq
All these first-order terms satisfy the detailed balance condition. They lead naturally to thermal equilibration between the reservoirs and the waveguide.
For the second-order terms, we have first the process described by Eq.\eqref{rate-me},
\begin{align}
\frac{\rd \langle m\rangle}{\rd t}& = \gt_1(\nei+1)\left[\na(M-\langle m\rangle)-(\nb+1)\langle m\rangle\right] \nn\\
&+\gt_2(\nz+1)\left[\nb(M-\langle m\rangle)-(\na+1)\langle m\rangle\right] + \ldots,
\label{rate-mec}
\end{align}
which depends also on the occupation numbers $\nr$ of the waveguide. This term is accompanied by corresponding terms in the rate equations for the reservoirs. It does not satisfy detailed balance because we have only considered the short path between the reservoirs and the TLS (the only available one in the open system). Including also the long path around the circular waveguide would again reinstate the detailed balance condition. We assume that this second process is not coherent due to dephasing of the photon while traveling along the loop.
Such a dephasing may, for example, be caused by scattering processes induced in the long section of the loop.
The second order term given in Eq.\eqref{direct} is modified in the closed system as follows,
\beq
\frac{\rd \na}{\rd t}=\g_0\big( (\nz+1)(\na+1)\nb -(\nei+1)(\nb+1)\na\big) +\ldots,
\eeq
together with an equivalent term for $\nb$.
Another term of second order in $|\hb|^2$ couples the two channels of the waveguide via a reservoir.
For channel 1 it reads
\beq
\frac{\rd \nei}{\rd t} = \g_3(\na+\nb+2)(\nz-\nei) +\ldots,
\eeq
with $\g_3=2\pi^3|\hb|^4\rho^2\rho_{wg}/\hba$.
Finally, there is a term connecting the channels via an intermediate excitation of the TLS, proportional to $g_1^2g_2^2$. This term can be neglected if one of the chiral couplings $g_r$ is close to zero, as we have assumed in the numerical evaluation.
Collecting all the terms, we obtain the rate equations for the photon occupation numbers,
\begin{align}
\frac{\rd \na}{\rd t}&=\g_{\dec}(\nei+\nz-2\na) -\g_1(M-\langle m\rangle)\na(\nei+1)\nn\\
& +\g_2\langle m\rangle(\na+1)(\nz+1) +\g_0\big( (\nz+1)(\na+1)\nb \nn\\
&-(\nei+1)(\nb+1)\na\big), \label{rateAc}\\
\frac{\rd \nb}{\rd t}&=\g_{\dec}(\nei+\nz-2\nb) -\g_2(M-\langle m\rangle)\nb(\nz+1)\nn\\
& +\g_1\langle m\rangle(\nb+1)(\nei+1) +\g_0\big( (\nei+1)(\nb+1)\na \nn\\
& -(\nz+1)(\na+1)\nb\big), \label{rateBc}\\
\frac{\rd \nei}{\rd t}&=\g_\dec'(\na+\nb-2\nei)-\g_{11}\big((M-\langle m\rangle)\nei
- \langle m\rangle(\nei+1)\big) \nn\\
&+\g_3(\na+\nb+2)(\nz - \nei),\label{rateE} \\
\frac{\rd \nz}{\rd t}&=\g_\dec'(\na+\nb-2\nz)-\g_{12}\big((M-\langle m\rangle)\nz
- \langle m\rangle(\nz+1)\big) \nn\\
&+\g_3(\na+\nb+2)(\nei - \nz). \label{rateZ}
\end{align}
The average number of excited TLS is determined by
\begin{align}
\frac{\rd \langle m\rangle}{\rd t}& = \sum_{r=1,2} \gt_{1r}[(M-\langle m\rangle)\nr -\langle m\rangle(\nr +1)]\nn\\
&-\langle m\rangle\big(\gt_1(\nb+1)(\nei+1)+\gt_2(\na+1)(\nz+1)\big) \label{rateMc}\\
&+(M-\langle m\rangle)\big(\gt_1\na(\nei+1)+\gt_2\nb(\nz+1)\big).\nn
\end{align}
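The rate equations \eqref{rateAc} -- \eqref{rateMc} conserve the total excitation number $\N(\na+\nb)+\N'(\nei+\nz)+\langle m\rangle$, since with $\g_r=\gt_r/\N$, $\g_{1r}=\gt_{1r}/\N'$ and $\N\g_\dec=\N'\g_\dec'$ every gain term is matched by a loss term elsewhere. The Python sketch below integrates them with the parameters of Fig.~\ref{occup-short} and checks this conservation; $M=1$, the common thermal initial occupation and the Euler step are assumptions made for illustration only.

```python
import math

# Parameters from the caption of Fig. S2 (rates in Hz); M = 1 TLS and the
# Euler step are assumptions of this sketch.
g_dec = g_dec_p = g0 = g3 = 1e4
gt1, gt11 = 1e7, 1e7
g1, g11 = 1e5, 1e5            # g1 = gt1/N, g11 = gt11/N', with N = N' = 100
gt2 = gt12 = g2 = g12 = 0.0   # chiral limit: channel-2 couplings switched off
N = Np = 100.0
M = 1.0

x = 1.0                                       # hbar*Omega / (k_B T(0))
nA = nB = n1 = n2 = 1.0 / (math.exp(x) - 1.0)
m = M / (math.exp(x) + 1.0)
nA0 = nA
E0 = N * (nA + nB) + Np * (n1 + n2) + m       # conserved total excitation

dt, steps = 1e-9, 100_000                     # t_end = 1e-4 s
for _ in range(steps):
    dnA = (g_dec * (n1 + n2 - 2 * nA) - g1 * (M - m) * nA * (n1 + 1)
           + g2 * m * (nA + 1) * (n2 + 1)
           + g0 * ((n2 + 1) * (nA + 1) * nB - (n1 + 1) * (nB + 1) * nA))
    dnB = (g_dec * (n1 + n2 - 2 * nB) - g2 * (M - m) * nB * (n2 + 1)
           + g1 * m * (nB + 1) * (n1 + 1)
           + g0 * ((n1 + 1) * (nB + 1) * nA - (n2 + 1) * (nA + 1) * nB))
    dn1 = (g_dec_p * (nA + nB - 2 * n1) - g11 * ((M - m) * n1 - m * (n1 + 1))
           + g3 * (nA + nB + 2) * (n2 - n1))
    dn2 = (g_dec_p * (nA + nB - 2 * n2) - g12 * ((M - m) * n2 - m * (n2 + 1))
           + g3 * (nA + nB + 2) * (n1 - n2))
    dm = (gt11 * ((M - m) * n1 - m * (n1 + 1))
          + gt12 * ((M - m) * n2 - m * (n2 + 1))
          - m * (gt1 * (nB + 1) * (n1 + 1) + gt2 * (nA + 1) * (n2 + 1))
          + (M - m) * (gt1 * nA * (n1 + 1) + gt2 * nB * (n2 + 1)))
    nA += dt * dnA; nB += dt * dnB
    n1 += dt * dn1; n2 += dt * dn2
    m += dt * dm
```

On the displayed timescale reservoir $A$ is drained while $B$ gains occupation, qualitatively as in Fig.~\ref{occup-short}, and the weighted sum of occupations stays constant up to floating-point rounding.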
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{FigS2}
\caption{Solutions of the rate equations \eqref{rateAc}-\eqref{rateMc} as a function of time, starting from initial thermal equilibrium. Panel (a) displays $\na(t)$, $\nb(t)$ and panel (b) $\nei(t)$, $\nz(t)$.
The average occupations $\lana n_q\rana(t)$ per mode reach a novel steady state with $\na\neq\nb$. The photon densities in reservoir $A$ and channel 1 fall to zero, whereas channel 2 stays occupied and reservoir $B$ is populated. The displayed time interval corresponds to the time scale set by the coupling to the TLS. Parameters used are
$\g_\dec=\g_\dec'=\g_0=\g_3=10$\;kHz, $\gt_1=\gt_{11}=10$\;MHz, $\g_1=\g_{11}=100$\;kHz, $\gt_2=\gt_{12}=\g_2=\g_{12}=0$ and $\N=\N'=100$, $\hba\Om/k_BT(0)=1$.}
\label{occup-short}
\end{figure}
To compute the total entropy of the system, we note first that
\beq
S_M=-k_BM\big(p_e\ln p_e +(1-p_e)\ln (1-p_e)\big),
\label{entropyTLSa}
\eeq
is the entropy of the TLS, as given by Eq.\eqref{entropyTLS}.
The entropy $S_\rad(\om)$ of the radiation per mode in the reservoirs and the waveguide depends only on $\lana n_q(\om)\rana$ and reads
\beq
S^l_{\rad}(\om) =\frac{\hba\om}{T}\langle n_l(\om)\rana +k_B\ln(1+\langle n_l(\om)\rana),
\label{entropy2}
\eeq
for $l=A,B,1,2$. Because the temperature depends on $\lana n_l\rana$ as described by Eq.\eqref{n-T} in the main text, the entropy is only a function of $\lana n_l\rana$. The effective entropies and temperatures in the reservoirs and the waveguide are computed under the assumption that the photons in each reservoir/channel quickly thermalize in the usual way as an ideal Bose gas.
We approximate the total entropy by the sum of the entropies of the subsystems, as is justified for large $\lana n_q\rana$ and $M$.
The total entropy is then given by
\beq
S(t)=S_M(t) + \int_{\Om-\D/2}^{\Om+\D/2}\rd\om [\rho(\om)S^R(\om,t)+\rho_{wg}(\om)S^{wg}(\om,t)],
\label{entropy3}
\eeq
with
\beq
S^R=S^A_\rad+S^B_\rad, \quad S^{wg}=S^1_\rad +S^2_\rad.
\eeq
For constant densities of states $\rho$ and $\rho_{wg}$, we may write
\beq
S(t) =S_M(t) + \N S^R(\Om,t) + \N' S^{wg}(\Om,t).
\eeq
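Since $\hba\om/T = k_B\ln\big((\lana n\rana+1)/\lana n\rana\big)$, the per-mode entropy \eqref{entropy2} is equivalent to the standard closed form $S_\rad = k_B\big[(\lana n\rana+1)\ln(\lana n\rana+1)-\lana n\rana\ln\lana n\rana\big]$ of an ideal Bose mode. The short Python sketch below (units $k_B=1$) checks this equivalence and evaluates the TLS entropy \eqref{entropyTLSa}; the sample occupations are arbitrary.

```python
import math

def S_rad(n):
    """Per-mode radiation entropy, Eq. (entropy2), with hbar*omega/T
    eliminated via hbar*omega/T = kB*ln((n+1)/n); units kB = 1."""
    return n * math.log((n + 1.0) / n) + math.log(1.0 + n)

def S_rad_closed(n):
    """Equivalent closed form for an ideal Bose mode."""
    return (n + 1.0) * math.log(n + 1.0) - n * math.log(n)

def S_M(m, M):
    """TLS entropy, Eq. (entropyTLSa), with p_e = m/M; units kB = 1."""
    p = m / M
    return -M * (p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
```

At half filling the TLS entropy reaches its maximum $Mk_B\ln 2$, as expected.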
The temporal evolution of the closed system, starting with thermal equilibrium, is depicted in Fig.~\ref{occup-short} for the same parameters as in the open system. The non-reciprocal interaction with the TLS empties reservoir $A$ and the active (coupled) channel 1, while the occupation of reservoir $B$ rises. The inert channel 2 is unaffected on these short timescales. It interacts with $B$ via the weak couplings $\g_\dec'$ and $\g_3$, which manifests only on much longer timescales, as depicted in Fig.~\ref{occup-long}. This separation of timescales has the same origin in the closed and the open system.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{FigS3}
\caption{The asymptotic temporal behavior of reservoir $B$ (panel (a)) and the inert channel 2 (panel (b)).
The occupation in $A$ and channel 1 is almost zero. The novel steady state has unequal occupations in all photonic subsystems and therefore lower entropy than the initial state. The parameters used are the ones of Fig.~\ref{occup-short}.}
\label{occup-long}
\end{figure}
\subsection{Detailed balance for a system with an embedded cavity}
\label{embed}
We shall now demonstrate that the term in \eqref{stim-em} corresponding to radiation stimulated by the receiving reservoir leads to the detailed balance condition in the case that a cavity is embedded in another one. Here, detailed balance is also obtained in the second-order terms, in contrast to the chiral system treated in the previous section. We consider a closed cavity $C$ with adiabatic walls. Inside $C$ there is a smaller cavity $A$, which is coupled to $C$ through a small opening. Besides $A$, a collection of $M$ two-level systems is located in $C$ (Fig.~\ref{supp-fig}).
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{FigS4}
\caption{The cavity system. The small cavity $A$ exchanges radiation with the surrounding closed cavity $C$. The collection $M$ of two-level systems interacts with the radiation modes of $C$.}
\label{supp-fig}
\end{figure}
The Hamiltonian of this system is given as
\beq
H=H_A + H_C + H_{\tls} + H^1_{\inter} + H^2_{\inter},
\label{hams}
\eeq
where
\beq
H_q=\hba\sum_{j}\om_{qj} a_{qj}^\dagger a_{qj}, \qquad
H_{\tls}=\frac{\hba\Omega}{2}\sum_{l=1}^M\s^z_l
\eeq
for $q=A,C$.
The interaction between $A$ and $C$ is given by
\beq
H^1_{\inter}=\sum_{j,k}h_{jk}a^\dagger_{Aj}a_{Ck} + \hc,
\label{hint1s}
\eeq
and $C$ interacts with $M$ as
\beq
H^2_{\inter}=\sum_{l=1}^M \sum_{k} g_{k}a_{Ck}\s^+_l + \hc
\label{hint2s}
\eeq
The rate equations are computed as above, but now the exchange between $A$ and $C$ is given by terms of first order in the coupling $h_{jk}$,
\begin{align}
\frac{\rd \na}{\rd t} &= \g_4\left[-\na(\nc+1) + \nc(\na+1)\right] +\ldots,
\label{rateA-ei}\\
\frac{\rd \nc}{\rd t} &= \g_4'\left[-\nc(\na+1) + \na(\nc+1)\right] +\ldots
\label{rateC-ei}
\end{align}
with $\g_4'=(\rho_A/\rho_C)\g_4$ (compare Eq.~\eqref{channel1}).
The interaction between the TLS and $C$ leads to the terms
\beq
\frac{\rd \nc}{\rd t} = -\g_5\nc(M-\langle m\rangle) +\g_5(\nc+1)\langle m\rangle +\ldots,
\eeq
which are of first order in $g_{k}$ and correspond to standard black-body radiation.
Besides these first-order terms, there are also terms of second order in the couplings $h_{jk}$ and $g_k$, describing the interaction of the small cavity $A$ with the TLS via intermediate states belonging to $C$. However, in contrast to the second-order terms discussed in section \ref{closed}, these second-order terms are compatible with detailed balance.
The corresponding terms in the rate equation for $A$ read
\beq
\frac{\rd \na}{\rd t} = \g_6(\nc+1)^2\left[-\na(M-\langle m\rangle) +(\na+1)\langle m\rangle\right]+\ldots
\eeq
As above, the term for stimulated emission, $\g_6(\nc+1)^2\na \langle m\rangle$, is not related to radiation emerging from cavity $A$ into $C$ which would have the wrong direction (see Fig.~\ref{supp-fig}), but comes from the occupation of $A$-modes in the final state. The rate equations for $A$ and $C$ are therefore
\begin{align}
\frac{\rd \na}{\rd t} &= \g_4\left[-\na + \nc\right] +
\g_6(\nc+1)^2\left[-\na(M-\langle m\rangle) +(\na+1)\langle m\rangle\right],
\label{rateA-ei1}\\
\frac{\rd \nc}{\rd t} &= \g_4'\left[-\nc + \na\right]
-\g_5\nc(M-\langle m\rangle) +\g_5(\nc+1)\langle m\rangle.
\label{rateC-ei1}
\end{align}
These equations fulfill the detailed balance condition and lead to thermal equilibrium between $A$, $C$ and $M$. The result is consistent with Kirchhoff's law, which states that the interior of a thermally isolated hohlraum has no influence on the final steady state of the contained radiation. This radiation exhibits the black-body spectrum found by M. Planck.
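The detailed balance property of \eqref{rateA-ei1} and \eqref{rateC-ei1} can be verified directly: at a common temperature, $\na=\nc=\bar n=(e^{x}-1)^{-1}$ and $\langle m\rangle = M(e^{x}+1)^{-1}$ with $x=\hba\Om/k_BT$, so that $\langle m\rangle/(M-\langle m\rangle)=\bar n/(\bar n+1)=e^{-x}$ and every bracket on the right-hand sides vanishes. A minimal Python check (the rate constants are arbitrary placeholders):

```python
import math

x = 1.0                        # hbar*Omega / (k_B T), arbitrary
n = 1.0 / (math.exp(x) - 1.0)  # thermal occupation in A and C
M = 100.0
m = M / (math.exp(x) + 1.0)    # thermal TLS excitation
g4, g4p, g5, g6 = 1.0, 1.0, 1.0, 1.0   # arbitrary rate constants

nA = nC = n
# Right-hand sides of Eqs. (rateA-ei1) and (rateC-ei1)
dnA = g4 * (-nA + nC) + g6 * (nC + 1.0)**2 * (-nA * (M - m) + (nA + 1.0) * m)
dnC = g4p * (-nC + nA) - g5 * nC * (M - m) + g5 * (nC + 1.0) * m
```

Both derivatives vanish (up to rounding), i.e. the common thermal state is stationary, as required by Kirchhoff's law.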
\bibliographystyle{spphys}
\section{Introduction}
X-ray diffraction imaging, known later as X-ray topography, originated
in the late 1920ies and early 1930ies, when researchers revealed the
internal structure of individual Laue spots in diffraction patterns
\cite{Berg31,Barrett31}. To improve the resolution, fine grain
photographic emulsions were exposed and examined under optical
microscopes -- for this reason the technique was sometimes called
X-ray microscopy \cite{Barrett45}. This method was applied for both
mapping of strains in heavily deformed materials such as cold-worked
metals and alloys \cite{Barrett45} and studies of individual
defects in near-perfect single crystals \cite{Ramachandran44}.
It was assumed that each point on the film or detector corresponds to
a small volume in the reflecting crystal. Simple geometrical optics
then requires the incoming X-ray beam to be tightly collimated, and
the film to be placed as closely as possible to the sample. The
achievable resolution is then limited by the detector resolution, at
best $500\un{nm}$ \cite{Martin06} but more typically $1\un{\mu m}$.
However, in the absence of X-ray optics between the sample and the
film diffraction effects progressively blur the image with increasing
sample-to-detector distance. For a typical experimental setup
(wavelength $\lambda=1\,\textrm{\AA}$, sample-to-detector distance
$s=1\un{cm}$, and sample feature size $d=1\un{\mu m}$) the diffraction
limited resolution due to propagation of the perturbed wavefront from
the exit surface of the crystal to the detector can be approximated as
\begin{equation}
\delta
\approx \frac{\lambda}{d}\cdot s
= \frac{10^{-10}\un{m}}{10^{-6}\un{m}}\cdot 10^{-2}\un{m}
= 1\un{\mu m}.
\end{equation}
Note that to image $10\times$ smaller features on the sample
($d=100\un{nm}$ and therefore $\delta=100\un{nm}$) the
sample-to-detector distance would have to be decreased by a factor of
100, $s=100\un{\mu m}$. In most cases this is technically not
feasible. Furthermore, to the best of our knowledge, 2D imaging
detectors with a spatial resolution of $100\un{nm}$ are not yet
available.
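The numerical estimate above, and the quoted scaling to $100\un{nm}$ features, amount to two lines of arithmetic (Python):

```python
wavelength = 1e-10   # lambda = 1 Angstrom, in metres
s = 1e-2             # sample-to-detector distance, 1 cm
d = 1e-6             # sample feature size, 1 micron

delta = wavelength / d * s   # diffraction-limited blur, ~1 micron

# To image 10x smaller features at matching resolution (delta = d = 100 nm),
# the required sample-to-detector distance shrinks by a factor of 100:
d_small = 1e-7
s_required = d_small * d_small / wavelength   # delta = d  =>  s = d^2/lambda
```

This reproduces $\delta = 1\un{\mu m}$ for the typical setup and $s = 100\un{\mu m}$ for $100\un{nm}$ features.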
On the other hand, conventional X-ray microscopy techniques as
proposed by Kirkpatrick and Baez \cite{kirkpatrick48,baez52} have
been implemented in the hard X-ray domain rather late. Here, an
in-line scheme is used where the beam transmitted through the sample
is magnified by X-ray optics such as mirrors \cite{Underwood86},
Fresnel zone plates \cite{lai95}, Bragg-Fresnel lenses
\cite{Snigirev97}, or refractive lenses \cite{lengeler99}. Such
forward scattering techniques are primarily sensitive to spatial
variations of the X-ray index of refraction which depends mostly on
the local density of the sample.
In this paper we propose a compact scheme for diffraction microscopy
using X-ray refractive lenses between the sample and the detector. The
insertion of refractive optics into the diffracted beam allows
significant improvements of the resolution, potentially
down to below 100\,nm (a resolution of 300\,nm has been demonstrated
using a similar lens in transmission X-ray microscopy\cite{Bosak10}).
Furthermore, the progressive blurring due to the
wavefront propagating from the sample to the detector can be overcome
by a lens, thus reestablishing the direct mapping of intensity
variations on the detector to the reflectivity variations on the
sample. In this case the image resolution can, in principle, reach the
limit imposed by dynamical diffraction effects within the crystal.
Recently, Fresnel zone plates have been used in X-ray reflection
microscopy to image monomolecular steps at a solid surface
\cite{Fenter06} and for scanning X-ray topography of strained silicon
oxide structures \cite{Tanuma06}. CRLs have the advantage that
efficient focusing can be achieved at higher photon energies, $E \gg
10\un{keV}$. Note that standard KB mirrors are not suited for imaging setups, as they do not fulfill the Abbe sine condition. More complicated multi-mirror setups are, however, being developed to overcome this limitation in transmission geometry \cite{matsuyamaHard2012}.
\section{Experimental details}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.7\columnwidth, clip=true]{setup_rev.pdf}
\end{center}
\caption{ \label{fig:setup}Experimental setup for Bragg diffraction
microscopy. 11\un{keV} X-rays impinge on the sample. The
diffracted intensity is imaged onto a Sensicam camera via a set of
66 Beryllium compound refractive lenses (CRLs) with an apex-radius of curvature of
$50\un{\mu m}$. The scattering plane is horizontal, and the imaged
features on the sample were aligned parallel to the scattering
plane. }
\end{figure}
Our experiment was carried out at the undulator beamline ID06 of the
European Synchrotron Radiation Facility. A cryogenically cooled
permanent magnet in-vacuum undulator \cite{Chavanne09} with a period
of 18\un{mm} and a conventional in-air undulator with a period of
32\un{mm}, combined with a liquid nitrogen cooled Si (111)
monochromator, delivered photons at an energy of 11\un{keV}. A
transfocator located at 38.7\un{m} from the source point (the electron beam waist, midway between the two undulators) acted as a
condenser, i.e. it focused the photons onto the sample at 67.9\un{m}
distance from the source, using a combination of paraboloid (2D)
compound refractive lenses, CRLs, \cite{lengeler99}: one lens with radius of curvature at the apex $R$= 1.5\un{mm} and two
lenses with $R$= 0.2\un{mm}, all made out of high-purity Beryllium
(Be). The use of the condenser-CRLs improved the optical efficiency
of the system (absorption of X-rays in the condenser-CRLs was only about 6\,\%), as it increased the flux on the imaged sample area. The divergence of the photon beam is not altered significantly, as the condenser CRL works almost in a 1:1 magnification geometry. A flux of approximately $2 \cdot 10^{12} \un{photons/s}$ was
incident on the sample. The sample was mounted on a six-circle
diffractometer. The scattering plane coincided with the horizontal
plane.
The detector consisted of a scintillator screen, magnifying optics,
and a high resolution CCD-camera. The 9.9\un{\mu m} thick LAG:Eu
scintillator on a 170\un{\mu m} YAG substrate converted X-rays into
visible light, which was projected onto the CCD by the objective lens
(Olympus UPLAPO $\times 10$, numerical aperture 0.4). The CCD camera
(pco SensicamQE) had $1376 \times 1040$ pixels (px) of size
$6.45\un{\mu m/px} \times 6.45\un{\mu m/px}$ and 12 bit depth,
yielding a field of view on the scintillator of $887 \times
670\un{\mu m^2}$ with an effective resolution of 1.3\un{\mu m}. Each
CCD pixel imaged an area of $0.645\times 0.645\un{\mu m^2}$ on the
scintillator.
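The detector numbers quoted above follow directly from the pixel size and the optical magnification (quick Python check):

```python
ccd_px = 6.45e-6        # physical CCD pixel size in metres
opt_mag = 10.0          # Olympus UPLAPO x10 objective
nx, ny = 1376, 1040     # CCD pixels

eff_px = ccd_px / opt_mag   # 0.645 um imaged per pixel on the scintillator
fov_x = nx * eff_px         # ~887 um field of view on the scintillator
fov_y = ny * eff_px         # ~670 um
```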
In front of the detector, on the same diffractometer arm, a second set
of paraboloid Be CRLs (66 lenses with apex-radius of curvature $R$=$50\un{\mu m}$) was
mounted as X-ray objective lens, i.e.~to image the diffracted
intensity pattern at the sample exit surface onto the detector. These
lenses were mounted on translation and rotation stages to align the
lens stack, in particular to tune the sample-to-lens distance to
achieve best focusing onto the detector.
The focal length of this lens stack at 11\un{keV} was about
14\un{cm}, so that a $\approx 4$-fold magnified image was achieved
with the lens center placed about 18\un{cm} downstream of the
sample. The effective aperture was about 240\un{\mu m}, giving a
corresponding diffraction limit of 130\un{nm}. The transmission through the lens stack is reduced by the absorption from the thinnest lens part, plus the increased absorption for rays travelling further away from the lens center, resulting in an effective aperture with Gaussian profile \cite{Snigirev97,lengeler99}. The first contribution is easy to calculate and gives an absorption of 18\,\%. Considering the size of the illuminated sample ($\approx$ 200\,nm, see below) and approximating the reflected beam as a parallel beam, the total absorption is closer to 50\,\%.
Scaling the effective detector resolution by the magnification factor
4 to $1.3\un{\mu m}/4 = 0.33\un{\mu m}$, we expected a resolution
limit of $\sqrt{(0.33\un{\mu m})^2+(130\un{nm})^2} \approx 350\un{nm}$
with this set-up.
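These numbers are consistent with thin-lens imaging: with $f\approx 14\un{cm}$ and the lens centre $u\approx 18\un{cm}$ downstream of the sample, the image forms at $v = uf/(u-f)\approx 63\un{cm}$, giving $v/u = 3.5$, i.e.~the quoted $\approx 4$-fold magnification. A quick numerical check (Python; the thin-lens treatment of the CRL stack is an approximation):

```python
import math

f, u = 0.14, 0.18      # focal length and object distance in metres
v = u * f / (u - f)    # thin-lens image distance, ~0.63 m
magnification = v / u  # ~3.5, i.e. the quoted ~4-fold magnification

det = 1.3e-6 / 4       # detector resolution referred back to the sample
diff = 130e-9          # diffraction limit of the 240 um effective aperture
resolution = math.sqrt(det**2 + diff**2)   # combined limit, ~0.35 um
```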
\setlength\fboxsep{0pt}
\setlength\fboxrule{0.2pt}
\begin{figure}[htbp!]
\begin{center}
\newcommand{0.35\columnwidth}{0.25\columnwidth}
\begin{tabular}{ll}
(a) & (b)\\
\includegraphics[width=0.35\columnwidth,clip=true,angle=180]{SiO2_0007scale.pdf}
&
\includegraphics[angle=-90,width=0.35\columnwidth]{SiO2_0013roi.pdf}\\
(c) & (d)\\
\includegraphics[angle=-90,width=0.35\columnwidth]{SiO2_0018.pdf}
&
\includegraphics[angle=-90,width=0.35\columnwidth]{SiO2_0022.pdf}\\
(e) & (f) \\
\imagetop{\includegraphics[angle=0,width=0.35\columnwidth]{SiO2_rockingcurve_rev.pdf}}
&
\imagetop{\includegraphics[angle=90,width=0.35\columnwidth]{SiO2_SEM.pdf}}\\
\end{tabular}
\end{center}
\caption{\label{fig:stripe_results} Diffraction microscopy images
(exposure time 2.5\un{s}) of the \chem{SiO_{2}} stripe structure
at different Bragg angles: a) at $0.006^\circ$ below the maximum
diffracted intensity; b) almost at the maximum; c) $0.005^\circ$
above the maximum; d) $0.009^\circ$ above the maximum. The beam
travels from left to right. e) Rocking
curve as measured by a photo diode, indicating also the angle
positions corresponding to Figs. \ref{fig:stripe_results}a) to \ref{fig:stripe_results}d). f) Scanning electron
microscope image of the same \chem{SiO_{2}} stripe system. Note that b) shows stripe-like intensity in a region where f) shows a homogeneous \chem{SiO_{2}} surface. This indicates strain propagation in the Si beyond the etched areas.}
\end{figure}
Two samples were imaged in Bragg geometry. The first sample was a Si
(111) wafer upon which a regular stripe pattern of amorphous
$\mathrm{SiO_2}$ has been fabricated by thermal oxidation followed by
standard photo-resist etching. The $\mathrm{SiO_2}$ layer was
$z=1.15\un{\mu m}$ thick, and was etched to fabricate 2\un{\mu m} wide
windows with a period of 4\un{\mu m}. The substrate was aligned in the
diffractometer to set the oxide stripes parallel to the diffraction
plane. In order to record magnified diffraction images of the sample,
the Bragg (333) reflection of the Si substrate (Bragg angle
$\theta_B=32.63^\circ$) was used.
The second sample was a linear Bragg-Fresnel lens (BFL) fabricated on
a Si (111) substrate (for the fabrication process see
\cite{aristov87}). The basic geometrical parameters were: an
outermost zone width of 0.5\un{\mu m}, a height of the structure of
4.4\un{\mu m}, and an aperture of 200\un{\mu m}. The sample was again
aligned with the structures parallel to the horizontal scattering
plane, and again the Si (333) ($\theta_B= 32.63^\circ$) reflection was
studied.
\section{Resolution}
The homogeneous periodicity of the $\mathrm{SiO_2}$ line pattern
(Fig.~\ref{fig:stripe_results}) was used to calibrate the effective
magnification of our configuration. The mask used to produce the
pattern had a period of 4\un{\mu m}, in good agreement with the value,
4.1(1)\un{\mu m}, obtained by scanning electron microscopy (SEM), see
Fig.~\ref{fig:stripe_results}f. Our X-ray image of this structure
shows 15 periods over 395(5)\un{px} (Fig.~\ref{fig:stripe_results}b).
The line spacing on the fluorescence screen of the detector was
therefore $0.645\un{\mu m/px}\cdot 395\un{px}/15 = 17.0(2)\un{\mu m}$,
yielding a magnification factor of $17.0\un{\mu m}/4.1\un{\mu
m}=4.2(1)$ for the CRL stack and $4.1\un{\mu m}\cdot 15/395\un{px} =
0.156(4)\un{\mu m/px}$ for the overall experiment. The resulting field
of view on the sample was $\approx 162\un{\mu m}/\sin(\theta_B)$ in
the horizontal (within the scattering plane) and $215\un{\mu m}$ in
the vertical direction (perpendicular to the scattering plane).
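The calibration arithmetic above can be sketched numerically; this is our own illustrative sketch, with the input values taken from the text and all variable names our own:

```python
# Sketch of the magnification calibration described above; input values
# are those quoted in the text.
detector_px = 0.645   # effective pixel size of the detector, micrometres
n_periods = 15        # stripe periods counted in the X-ray image
n_pixels = 395        # pixels spanned by those periods
period_sem = 4.1      # stripe period measured by SEM, micrometres

# Line spacing on the fluorescence screen of the detector:
spacing_screen = detector_px * n_pixels / n_periods   # ~17.0 um
# Magnification factor of the CRL stack (~4.2 within the quoted errors):
magnification = spacing_screen / period_sem
# Effective pixel size referred back to the sample:
pixel_on_sample = period_sem * n_periods / n_pixels   # ~0.156 um/px
```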
\begin{figure}[ht!]
\centerline{
\includegraphics[width=0.6\columnwidth,trim=20mm 2mm 30mm 15.5mm, clip=true]{SiO2_FTplot.pdf}
}
\caption{\label{fig_ft_sio2}Fourier transform of the diffraction
microscopy image shown in Fig.~\ref{fig:stripe_results}(b). The
region of interest was divided into 10 vertical slices. The data
shown here represent the magnitude of the Fourier transform
averaged over these 10 slices. The parameters listed are the
result of a fit to eq.~\ref{eq_model_sio2}.}
\end{figure}
An upper limit for the effective resolution of our imaging system can
be estimated from the Fourier transform (FT) of the image (see
Fig.~\ref{fig_ft_sio2}). Peaks corresponding to the fundamental,
second and third harmonics of the structure are clearly visible above
a two-component background. For quantitative analysis, the FT was
fitted to a model function
\begin{equation}
\tilde{I}(f) = \sum\limits_{n=1}^{3} \left( A_n \cdot g(f-n f_1) \right)
+ a e^{-f/f_\mathrm{BG}} + b,
\label{eq_model_sio2}
\end{equation}
where $\tilde{I}(f)$ is the Fourier transform of the image at spatial
frequency $f$. The background is composed of a constant and an
exponentially decaying term with characteristic frequency
$f_{\mathrm{BG}}$. The harmonics are modelled by Gaussians $g(f) =
(2\pi \tilde{\sigma}^2)^{-1/2} \exp(-f^2/2\tilde{\sigma}^2)$ with
amplitude factors $A_n$ for the $n$-th harmonic. The effective pixel
size determined above (0.156\un{\mu m/px}) was used to scale the
frequency axis. The resulting parameters are shown in
Fig.~\ref{fig_ft_sio2}. The ratio $A_1/A_3$ can be used to estimate
the modulation transfer function (MTF),
$\tilde{I}(f)=\tilde{c}(f)\cdot \mathrm{MTF}(f)$, where $\tilde{c}(f)$
is the FT of the scattering amplitude of the sample. For an ideal
square wave, $\tilde{c}(f_1)/\tilde{c}(3 f_1) = 3$. For a smoother
modulation, e.g.\ resulting from continuous buckling of the Bragg
planes due to strain \cite{Kuznetsov04}, the higher harmonics will be
suppressed, $\tilde{c}(f_1)/\tilde{c}(3 f_1) > 3$, so that
\begin{equation}
\frac{
\tilde{I}(f_1)
}{
\tilde{I}(3 f_1)
}
=
\frac{
\tilde{c}(f_1)
}{
\tilde{c}(3 f_1)
}
\cdot
\frac{
\mathrm{MTF}(f_1)
}{
\mathrm{MTF}(3 f_1)
}
\geq
3
\cdot
\frac{
\mathrm{MTF}(f_1)
}{
\mathrm{MTF}(3 f_1)
}
\end{equation}
The presence of a second harmonic at $2 f_1$ indicates that the
contrast does not follow an ideal square modulation.
In the absence of any further information, we assume the MTF to be a
Gaussian with standard deviation $\tilde{\sigma}_{\mathrm{MTF}}$, so
that $\tilde{\sigma}_{\mathrm{MTF}} = 2 f_1
\left(\log[\mathrm{MTF}(f_1)/\mathrm{MTF}(3f_1)]\right)^{-1/2}$. Using
the values obtained from the fit we find
$\tilde{\sigma}_{\mathrm{MTF}} \geq 0.254 \un{\mu m^{-1}}$,
corresponding to a Gaussian point spread function (PSF) with standard
deviation $\sigma_{\mathrm{PSF}} = {1}/(2\pi
\tilde{\sigma}_{\mathrm{MTF}}) \leq 0.625\un{\mu m}$ and full width at
half maximum (FWHM) $\leq 1.47\un{\mu m}$ on the sample (9.4\un{px} on
the CCD).
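A minimal numerical check of the Gaussian-MTF estimate above (our own sketch; the inputs are the values quoted in the text, and the variable names are ours):

```python
import math

f1 = 1.0 / 4.1      # fundamental spatial frequency of the stripe pattern, 1/um
sigma_mtf = 0.254   # fitted lower bound on the Gaussian MTF width, 1/um

# For a Gaussian MTF(f) = exp(-f^2/(2 sigma_mtf^2)), the ratio between the
# fundamental and the third harmonic fixes sigma_mtf; inverting that ratio
# recovers the formula sigma_mtf = 2 f1 (log[MTF(f1)/MTF(3 f1)])^(-1/2):
ratio = math.exp((9 * f1**2 - f1**2) / (2 * sigma_mtf**2))
sigma_recovered = 2 * f1 / math.sqrt(math.log(ratio))

# Real-space point spread function implied by this MTF width:
sigma_psf = 1.0 / (2 * math.pi * sigma_mtf)            # ~0.63 um
fwhm_psf = 2 * math.sqrt(2 * math.log(2)) * sigma_psf  # ~1.47 um
```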
\begin{figure}[ht!]
\begin{center}
\newcommand{0.35\columnwidth}{0.35\columnwidth}
\begin{tabular}{ll}
(a) & (b)\\
\includegraphics[width=0.35\columnwidth, clip=true]{BFL_1597scale.pdf}
&
\includegraphics[angle=90,width=0.35\columnwidth]{BFL_1601roi.pdf}\\
(c) & (d)\\
\imagetop{\includegraphics[width=0.35\columnwidth]{BFL_SEM.pdf}}
&
\imagetop{\includegraphics[width=0.35\columnwidth,clip=true,trim=50 5 90 90]{BFL_FTplot.pdf}}
\end{tabular}
\end{center}
\caption{\label{fig_zone_plate} a), b) Diffraction microscopy images of a
Bragg-Fresnel lens (Exposure times a) 1\un{s} and b) 2.5\un{s}).
c) Scanning electron microscope image of the same Bragg-Fresnel
lens. d) Fourier analysis of the stripe structures. The region of
interest (ROI) shown in (d) was divided into 10 vertical
slices. Each slice was Fourier transformed. The average of the
resulting magnitudes were fit to eq.~\ref{eq_model_bfl}. The
modulation transfer function intersects the background at
$f_{\mathrm{max}}=1.10\un{\mu m^{-1}}$, indicating a minimum
observable peak-to-valley distance of
$1/(2f_{\mathrm{max}})=0.46\un{\mu m}$. }
\end{figure}
Further experiments were performed on a Bragg-Fresnel lens. In
this sample, the zone width decreases away from the center, thus
yielding a richer Fourier spectrum that should provide more detailed
information on the MTF and the effective resolution of our system.
Fig.~\ref{fig_zone_plate} shows the Bragg microscopy images of the Bragg-Fresnel lens
(panels a and b), a scanning electron microscope image of the same
structure (panel c), and a Fourier analysis (panel d) of the region of
interest shown in panel b.
As above, Fourier analysis was performed to reveal the resolving power
of our experiment. The region of interest shown in panel b) was
divided into 10 vertical slices. Each slice was Fourier transformed,
and the average magnitude of the FT is shown in panel d). A Gaussian MTF
with an exponential background was fitted to the average FT, again
using the effective pixel size of 0.156\un{\mu m/px} to scale the
frequency axis:
\begin{equation}
\tilde{I}(f) = A \cdot g(f) + a \cdot e^{-f/f_{\mathrm{BG}}}
\label{eq_model_bfl}
\end{equation}
From the resulting parameters (listed in Fig.~\ref{fig_zone_plate}d), two estimates of the
resolution were derived: the standard deviation of the MTF, and the
frequency $f_{\mathrm{max}}$ where the MTF falls below the background.
The fit yielded $\tilde{\sigma}_{\mathrm{MTF}}=0.34\un{\mu m^{-1}}$
and $f_{\mathrm{max}}=1.10\un{\mu m^{-1}}$. The period of a structure
at this frequency is
$\lambda_{\mathrm{min}}=1/f_{\mathrm{max}}=0.92\un{\mu m}$, so that
the smallest observable peak-to-valley distance on the sample
is $\lambda_{\mathrm{min}}/2 = 0.46\un{\mu m}$. The standard deviation
of the MTF corresponds to a PSF with standard deviation
$\sigma_{\mathrm{PSF}}=1/(2\pi
\tilde{\sigma}_{\mathrm{MTF}})=0.47\un{\mu m}$ and FWHM 1.1\un{\mu m}
on the sample (7.1\un{px} on the CCD), slightly better than the
estimate obtained above for the \chem{SiO_2} stripe pattern.
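The two resolution estimates for the Bragg-Fresnel lens can likewise be reproduced from the fitted parameters quoted above (our own sketch; variable names are ours):

```python
import math

sigma_mtf = 0.34   # fitted Gaussian MTF width, 1/um
f_max = 1.10       # frequency where the MTF falls below the background, 1/um

# Smallest observable peak-to-valley distance (half of the smallest period):
d_min = 1.0 / (2 * f_max)                              # ~0.46 um

# Point spread function implied by the Gaussian MTF:
sigma_psf = 1.0 / (2 * math.pi * sigma_mtf)            # ~0.47 um
fwhm_psf = 2 * math.sqrt(2 * math.log(2)) * sigma_psf  # ~1.1 um
```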
We have thus shown that our setup reaches sub-micrometer resolution in
the vertical direction (perpendicular to the scattering plane). In the
horizontal direction (parallel to the scattering plane) the sample
does not lie perpendicular to the camera plane. Viewing the sample at
the Bragg angle $\theta_B$ yields a projected in-plane image size
that is smaller by a factor $\sin(\theta_B)$, here $\sin(32.63^\circ)=
0.539$. Assuming that the resolution limit is given by the detector
and the diffraction limit of the imaging lenses, the in-plane
resolution at the sample surface is then degraded by the factor
$1/\sin(\theta_B) = 1.85$ compared to the vertical out-of-plane
direction. Furthermore, within the scattering plane, the beam
partly traverses the scatterer. The beam path length inside the
sample (limited by absorption or extinction) is comparable to the
resolution, so that additional blurring is to be expected. This could
be avoided in backscattering geometry ($\theta_B \approx 90^\circ$),
or when imaging high-Z materials with low penetration depth.
\section{Discussion}
Using the example of the \chem{SiO_2} stripe sample, we now discuss the
information obtainable from diffraction images taken at different Bragg
angles. We recall that the reflected intensity stems from the underlying
single-crystalline Si wafer and not from the amorphous top structure of
\chem{SiO_2} stripes.
Contrast in the diffraction image may arise from several effects: (a)
absorption, leading to different amplitudes of rays that do or do not
travel through the thin \chem{SiO_2} layer; (b) a phase shift between
these beams, as the index of refraction of \chem{SiO_2} is different
from unity; and (c) local variations of the Si(111) reflectivity due
to strain in the Si substrate induced by the overlying \chem{SiO_2}
layer.
For (a) and (b) we can calculate the expected contrast. The index of
refraction of \chem{SiO_2} at $E=11\un{keV}$
is $n=1-\delta+i \beta$ with $\delta=3.8\cdot 10^{-6}$ and $\beta=
2.7\cdot 10^{-8}$ \cite{CXROweb}, assuming a density of
2.2\un{g/cm^3}. The path length of the X-rays through
\chem{SiO_2} is $L=2z/\sin(\theta_B)=4.3\un{\mu m}$. The E-field
amplitude of the beam travelling through the \chem{SiO_2} layer is therefore reduced to
$\exp(-2\pi L \beta/\lambda) = 0.9935$, whereas its phase is
shifted by $L\delta/\lambda\cdot 360^\circ=52^\circ$ as compared to the beam travelling through an adjacent groove via the bare Si surface. Consequently,
the absorption contrast (a) is expected to be $1-0.9935^{2} = 1.3\,\%$. The phase contrast (b) occurs through
interference at the edges. It can be estimated by calculating the intensity resulting from the superposition of two partial beams: one (i) travelling through the SiO$_2$ and acquiring the $52^\circ$ phase shift, and one (ii) travelling not through the SiO$_2$, but through the groove. This intensity is to be compared with the signal from two beams that did not experience a relative phase shift. We obtain a phase contrast of $1-|1+\exp(i\cdot 52^\circ)|^{2}/|1+1|^{2} = 20\,\%$. For the regular \chem{SiO_2} pattern of
Fig.~\ref{fig:stripe_results}b), the measured contrast was 35\,\%, so a part of the contrast must come from strain.
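The contrast estimates above can be reproduced with a short calculation. This is our own sketch; the optical constants are the CXRO values quoted in the text, while the conversion of 11 keV to a wavelength is an assumption of ours:

```python
import cmath, math

delta, beta = 3.8e-6, 2.7e-8        # SiO2 optical constants at 11 keV (CXRO)
wavelength = 12.398 / 11.0 * 1e-4   # keV -> Angstrom -> micrometres
z = 1.15                            # SiO2 layer thickness, um
theta_b = math.radians(32.63)       # Bragg angle of the Si (333) reflection

# Path length through the SiO2 layer (in and out at the Bragg angle):
L = 2 * z / math.sin(theta_b)                           # ~4.3 um

# Amplitude transmission and phase shift relative to the bare groove:
amp = math.exp(-2 * math.pi * L * beta / wavelength)    # ~0.9935
phase_deg = L * delta / wavelength * 360.0              # ~52 degrees

absorption_contrast = 1 - amp**2                        # ~1.3 %
phase_contrast = 1 - abs(1 + cmath.exp(1j * math.radians(phase_deg)))**2 / 4
# phase_contrast ~ 0.19, i.e. ~20 %
```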
The magnitude of the strain contrast (c)
is difficult to estimate. However, for (a) and (b) the contrast at
each point of the rocking curve should be identical, whereas strain
might shift and broaden the rocking curve, so that for (c) the
contrast in the diffraction micrographs at different points of the
rocking curve might differ \cite{Tanuma06}.
This can indeed be seen when comparing images recorded at different
positions on the rocking curve (Fig.~\ref{fig:stripe_results}a--d). Fig.~\ref{fig:stripe_results}a and \ref{fig:stripe_results}b show intensity
on the right-hand side of the \chem{SiO_2} line pattern, caused by
strain propagation beyond the etched areas. Figs.~\ref{fig:stripe_results}c and~\ref{fig:stripe_results}d show that
control structures etched into the \chem{SiO_2} (shown in the right
part of the SEM image, Fig.~\ref{fig:stripe_results}f) cause strong strain in the \chem{Si}
substrate.
Such shifts of the rocking curve can occur via two routes, local
tilting of the lattice planes, or local modifications of the lattice
parameter \cite{Aristov92,Aristov92b}. In the former, a positive tilt
on one side of a straining feature should be accompanied by a negative
tilt on the opposite side. The corresponding areas should be visible
at angles symmetric to the center of the rocking curve. A local
modification of the lattice parameter, on the other hand, would lead
to a unidirectional shift with respect to the unstrained rocking curve
\cite{Tanuma06}.
The sharp features visible in Fig.~\ref{fig:stripe_results}d appear only $\approx
0.009^\circ$ above the maximum of the rocking curve, indicating that the lattice
parameter is compressed by
\begin{equation}
\frac{\Delta d}{d}
\approx \mathrm{cot}(\theta_B) \cdot \Delta\theta
\approx 2.5\cdot 10^{-4}.
\end{equation}
As shown in Fig.~\ref{fig:stripe_results}e, this strain level is clearly resolved in our
experiment. The sensitivity to lattice strain could be further
improved by selecting higher order Bragg reflections with narrower
rocking curves.
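As a quick numerical check of the strain estimate above (our own sketch; variable names are ours):

```python
import math

theta_b = math.radians(32.63)       # Bragg angle of the Si (333) reflection
delta_theta = math.radians(0.009)   # angular shift of the sharp features

# Differentiating Bragg's law gives |Delta d / d| = cot(theta_B) * Delta theta:
strain = delta_theta / math.tan(theta_b)   # ~2.5e-4
```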
\section{Conclusion}
X-ray diffraction microscopy combines the advantages of X-ray
microscopy in forward scattering geometry and conventional diffraction
topography without image magnification:
\begin{itemize}
\item As in transmission X-ray microscopy, the effective resolution
    is better than that achievable with conventional diffraction
    topography, which is limited by the detector resolution and the
    sample-to-detector distance (diffraction effects).
\item As in diffraction topography, the technique is sensitive to
microscopic crystallographic imperfections such as strain,
dislocations, twinning, etc.
\end{itemize}
As we have shown here, data acquisition is fast: a single exposure is
sufficient and contains all information at the maximum resolution. The
technique is therefore robust with respect to instabilities of the
experimental setup, and it has the potential to study transient,
non-equilibrium phenomena where it is impossible to acquire several
images of the same state.
It should be underlined that the proposed diffraction microscopy
technique has great potential for non-destructive studies of highly
deformed metals and alloys. By comparing the images taken at different
angular settings, the contrast contributions due to strain or orientation are easily
distinguished \cite{Afanasev71}. Adding a tomography option
($180^\circ$ rotation) will provide 3D mapping of the orientation and
strain of individual grains in polycrystalline materials.
The use of
CRLs is of particular interest in diffraction topography since these
optics are well adapted to focusing hard X-rays, and are relatively
straightforward to implement on existing diffractometer setups.
The technique, however, can also be used with other imaging
systems such as Fresnel zone plates \cite{Tanuma06,Fenter06} or
mirrors such as Wolter optics \cite{Wolter52,Takano02}. This
flexibility enables the use of X-ray diffraction microscopy over a
very wide range of photon energies from sub-keV soft X-rays, e.g.~for
the study of multilayers, to very hard X-rays with several tens of
keV.
Furthermore, the technique can be combined with other standard
X-ray techniques to access information unobtainable in transmission
geometry. Examples include grazing-incidence diffraction to image
micro- and nanostructures grown on a surface, magnetic scattering to
image ferromagnetic \cite{Kreyssig09} or antiferromagnetic
\cite{Lang04} domain patterns, and the imaging of
ferroelectric domains \cite{Fogarty96}.
Finally, the field of view and the magnification can be adjusted
in-situ simply by changing the number of lenses (e.g.~by using a
Transfocator) and the sample-to-lens or lens-to-detector distance.
{\textbf{\\ Acknowledgments}}
We acknowledge
the European Synchrotron Radiation Facility (ESRF) for the provision
of beam time on ID06. C.~D.~thanks R.~Barrett for stimulating
discussions and critical reading of the manuscript.
\printbibliography
\end{document}
A basic objective in scientific research is to infer causal effects of a
treatment versus non-treatment from empirical data. Randomized studies
provide a gold standard of causal inference [19], because such studies
ensure, through the random assignment of treatments to subjects, that
treatment assignments are independent of all observed and unobserved
confounding variables which characterize experimental subjects. As a
consequence, the causal effect can be estimated by a direct comparison of
treatment outcomes and non-treatment outcomes. However, in many research
settings, it is not feasible to randomly assign treatments to subjects, due
to financial, ethical, or time constraints [19]. Therefore, in such
settings, it becomes necessary to estimate causal effects of treatments from
non-randomized, observational designs.
For observational studies, a popular approach to causal inference is based
on the regression discontinuity (RD)\ design ([21], [7]). In the design,
each subject is assigned to a treatment or non-treatment whenever an
observed value of a continuous-valued assignment variable ($R$) exceeds a
cutoff value. The RD design can provide a ``locally-randomized experiment'',
in the sense that the causal effect of treatment outcomes versus
non-treatment outcomes can be identified and estimated from subjects having
values of the assignment variable that are located in a small neighborhood
around the cutoff. As shown in [11], the RD\ design can empirically produce
causal effect estimates that are similar to those from a standard randomized
study ([1], [5], [3], [2], [20]). Arguably, the fact that RD designs can
provide such locally-randomized experiments is a main reason why, between
1997 and 2013, at least 74 RD-based empirical studies emerged
in the fields of education, psychology, statistics, economics, political
science, criminology, and health sciences ([7], [16], [4], [24], [17]).
Given the local-randomization feature of RD designs, a
conceptually-attractive approach to causal inference is to identify the
cluster
\begin{equation*}
\mathcal{C}_{\epsilon }(r_{0})=\{i:|r_{i}-r_{0}|<\epsilon \}
\end{equation*}
of locally-randomized subjects, who have assignment variable observations
$r_{i}$ that are located in a neighborhood of size $\epsilon >0$ around the
cutoff $r_{0}$; and then, for subjects within the cluster $\mathcal{C}_{\epsilon
}(r_{0})$, to perform statistical tests comparing treatment outcomes
(of subjects with $r_{i}\geq r_{0}$) against non-treatment outcomes (of
subjects with $r_{i}<r_{0}$) [6].
One approach to identifying the locally-randomized cluster of subjects is to
iteratively search for the largest value of $\epsilon >0$ that leads to a
non-rejection of the null hypothesis tests of zero effect of the treatment
variable $T=\mathbf{1}(R\geq r_{0})$. A theoretical motivation for this
approach is that, as a consequence of local randomization, the distribution
of the confounding variables is the same for non-treatment subjects located
just to the left of the cutoff ($r_{0}$), as for subjects located just to
the right of the cutoff [15].
However, this approach is not fully satisfactory.\ This is because it bases
estimates and tests of causal effects (comparisons of treatment outcomes
versus non-treatment outcomes), conditionally on an optimal value of the
neighborhood size parameter $\epsilon $, i.e., the clustering configuration
$\mathcal{C}_{\epsilon }(r_{0})$ found via the null hypothesis tests
mentioned above. Therefore, the approach does not fully account for the
uncertainty that is inherent in the neighborhood size parameter, i.e., the
configuration of the cluster of locally-randomized subjects. As a result,
the approach may lead to null-hypothesis tests of causal effects that have
exaggerated significance levels.
In this manuscript, we propose a Bayesian nonparametric regression approach
to estimating and testing for causal effects, for locally-randomized
subjects in a RD design, while accounting for the uncertainty in the cluster
configuration $\mathcal{C}_{\epsilon }(r_{0})$ of locally-randomized
subjects. In the approach, we consider a scalar variable $X$ that we assume
sufficiently describes all observed and unobserved pre-treatment
``confounding'' variables. Again, as a consequence of local randomization, $X$
has the same distribution for subjects located just to the left and for
subjects located just to the right of the cutoff [15].
Then, we fit a Bayesian nonparametric regression model, based on the
restricted Dirichlet process (rDP)\ [23], in order to provide a flexible
regression of $X$ on the assignment variable $R$. Importantly, the model
provides a random clustering of subjects, with respect to common values of
$X$, in a way that is sensitive to the ordering of the values of $R$. Given
a posterior random draw of clusters of subjects under the rDP\ model, we
identify the single cluster
\begin{equation*}
\mathcal{C}(r_{0})=\{i\,:\,\mbox{individual}\,\,i\,\,\mbox{has an}\,\,r_{i}\,\,\mbox{in the cluster containing}\,\,r_{0}\}
\end{equation*}
of locally-randomized subjects. Within this cluster we then compare the
treatment outcomes (of subjects with $r_{i}\geq r_{0}$) against
non-treatment outcomes (of subjects with $r_{i}<r_{0}$), via statistical
summaries and two-sample statistical tests. Specifically, we may compare
outcomes in terms of the mean, variance, quantiles, and the interquartile
range, and use various two-sample tests, including the $t$-test,
Wilcoxon-Mann-Whitney test of equality of medians, chi-square test of
equality of variances, and the Kolmogorov-Smirnov test. We can even estimate
the probability $\Pr [Y_{1}\geq Y_{0}|\mathcal{C}(r_{0})]$ that the
treatment outcome ($Y_{1}$) exceeds the non-treatment outcome ($Y_{0}$) (see
[14]). We then average the results of such statistical comparisons over many
posterior samples of the clusters $\mathcal{C}(r_{0})$, under the restricted
DP model. Hence, when making such statistical comparisons, we fully account
for the uncertainty that is inherent in the clustering configuration
\mathcal{C}(r_{0})$ of locally-randomized subjects. Moreover, this
statistical procedure represents another application of inference of
posterior functionals in DP-based models [8].
We can extend the Bayesian nonparametric regression procedure to RD design
settings that involve multiple pre-treatment confounding variables
$(X_{1},\ldots ,X_{p})$. In this case, we can construct a scalar confounding
covariate $X$ by the multivariate confounder score [18], which enables the
comparison of treatment outcomes and non-treatment outcomes, conditionally
on subclassified values of the score. Multivariate confounder scores are
constructed by a regression of the outcomes $Y$ on $(T,X_{1},\ldots ,X_{p})$,
and then setting each score to be equal to the part of the predictor that
is free of the linear effect of $T$ on $Y$. Moreover, while in this paper we
base our causal inference procedure on the restricted DP, in fact, it is
possible to base the causal inference procedure on any Bayesian
nonparametric regression model that can cluster subjects as a function of
the assignment variable $R$. See [13], for a recent example.
In the next section, we review the Bayesian nonparametric regression model,
based on the rDP. Then we further describe how estimates and tests of the
causal effects is undertaken based on the clustering method. We also provide
some more details on the multivariate confounder scoring method. Moreover,
we show how this Bayesian nonparametric method can be extended to handle
causal inferences in a context of a fuzzy RD\ design [22], which involves
imperfect treatment compliance among the subjects. In Section 3, we
illustrate our model through the analysis of a data set, to provide causal
inferences in an educational research setting. Section 4 ends with
conclusions.
\section{The Bayesian Nonparametric \textbf{Model}}
Let $\{(x_{i},r_{i},y_{i})\}_{i=1}^{n}$ denote a sample set of data obtained
under an RD\ design. For each subject $i\in \{1,\ldots ,n\}$, the triple
$(x_{i},r_{i},y_{i})$ denotes an observed value of the confounding variable,
the assignment variable, and outcome variable, respectively. For such data,
we consider a Bayesian nonparametric regression of the pre-treatment
confounding variable $X$ on the assignment variable $R$, based on a
restricted DP\ (rDP) mixture of normal linear regressions [23]. Without loss
of generality, we assume
\begin{equation*}
r_{1}\leq r_{2}\leq \cdots \leq r_{n}.
\end{equation*}
The key idea is that clusterings will be based on this order; so the first
cluster would be based on the smallest values of $r$, and so on. Since here
we are interested in clustering subjects on common values of $X$, based on
values of $R$, we represent this model as a random partition model, as
follows:
\begin{subequations}
\label{restDP}
\begin{align}
\left[ (X_{1},\ldots ,X_{n})|r,\rho _{n},\{(\boldsymbol{\beta }_{j}^{\ast },\sigma
_{j}^{2\ast })\}_{j=1}^{k_{n}}\right] \text{ }& \sim
\dprod\limits_{j=1}^{k_{n}}\dprod\limits_{\{i:s_{i}=j\}}\mathrm{Normal}\bigg(x_{i}|\underline{\mathbf{r}}_{i}^{\intercal }\boldsymbol{\beta }_{j}^{\ast },\sigma _{j}^{2\ast }\bigg) \\
\lbrack \rho _{n}]\text{ }& \sim \text{ }\pi (\rho _{n})=\dfrac{\alpha
^{k_{n}}}{\alpha ^{\lbrack n]}}\dfrac{n!}{k_{n}!}\dprod\limits_{j=1}^{k_{n}}\dfrac{1}{n_{j}}\mathbf{1}\left\{ s_{1}\leq \cdots \leq s_{n}\right\} \\
\lbrack \boldsymbol{\beta }_{j}|\sigma _{j}^{2}]& \sim \mathrm{Normal}\big(\boldsymbol{\beta }_{j}|\boldsymbol{\beta }_{0},\sigma _{j}^{2}\mathbf{C}^{-1}\big) \\
\lbrack \sigma _{j}^{2}]& \sim \mathrm{InverseGamma}\big(\sigma _{j}^{2}|a,b\big),
\end{align}
where $\underline{\mathbf{r}}_{i}=(1,r_{i})^{\intercal }$; the collection
$\{(\boldsymbol{\beta }_{j}^{\ast },\sigma _{j}^{2\ast })\}_{j=1}^{k_{n}}$
form the $k_{n}\leq n$ distinct values in the sample of parameters
$((\boldsymbol{\beta }_{1},\sigma _{1}^{2}),\ldots ,(\boldsymbol{\beta }_{n},\sigma _{n}^{2}))$ that are assigned to each of the $n$ subjects. And
$\rho _{n}=(s_{1},\ldots ,s_{n})$ is a random partition of the $n$
observations, where $s_{i}=j$ if $(\boldsymbol{\beta }_{i},\sigma _{i}^{2})=(\boldsymbol{\beta }_{j}^{\ast },\sigma _{j}^{2\ast })$, and with
$n_{j}=\tsum\nolimits_{i=1}^{n}\mathbf{1}\{(\boldsymbol{\beta }_{i},\sigma
_{i}^{2})=(\boldsymbol{\beta }_{j}^{\ast },\sigma _{j}^{2\ast })\}$.
Also, the rDP is parameterized by a precision parameter $\alpha $, and by a
normal-inverse-gamma baseline distribution for $(\boldsymbol{\beta },\sigma
^{2})$ that defines the mean of the process. See [23] for more details.
The posterior distribution of the clustering partition $\rho
_{n}=(s_{1},\ldots ,s_{n})$, with respect to the $k_{n}$ distinct values
$\{(\boldsymbol{\beta }_{j}^{\ast },\sigma _{j}^{2\ast })\}_{j=1}^{k_{n}}$, is
given by:
\end{subequations}
\begin{equation*}
\pi (\rho _{n}|\mathbf{y},\mathbf{x})\propto \frac{\alpha ^{k_{n}}}{k_{n}!}
\dprod\limits_{j=1}^{k_{n}}\frac{1}{n_{j}}\sqrt{\dfrac{|\mathbf{C}|}{|\mathbf{C}+\underline{\mathbf{R}}_{j}^{\intercal }\underline{\mathbf{R}}_{j}|}}\dfrac{b^{a}\Gamma (a+n_{j}/2)}{\Gamma (a)(b+V_{j}^{2}/2)^{a+n_{j}/2}}\mathbf{1}_{s_{1}\leq \cdots \leq s_{n}}.
\end{equation*}
In the above,
\begin{equation*}
V_{j}^{2}=(\mathbf{y}_{j}-\widehat{\mathbf{y}}_{j})^{\intercal }\widehat{\mathbf{W}}_{j}(\mathbf{y}_{j}-\widehat{\mathbf{y}}_{j}),
\end{equation*}
\begin{equation*}
\widehat{\mathbf{W}}_{j}=\mathbf{I}_{j}-\underline{\mathbf{R}}_{j}(\mathbf{C}+\underline{\mathbf{R}}_{j}^{\intercal }\underline{\mathbf{R}}_{j})^{-1}\underline{\mathbf{R}}_{j}^{\intercal },
\end{equation*}
and
\begin{equation*}
\widehat{\mathbf{y}}_{j}=\underline{\mathbf{R}}_{j}\boldsymbol{\beta }_{0},
\end{equation*}
with $\underline{\mathbf{R}}_{j}$ the matrix of row vectors $\underline{\mathbf{r}}_{i}=(1,r_{i})^{\intercal }$ for subjects belonging in cluster
group $j$ [23]. Posterior samples from $\pi (\rho _{n}|\mathbf{y},\mathbf{x})$
can be generated through the use of a reversible-jump Markov Chain Monte
Carlo (RJMCMC) sampling algorithm that is described in Section 4 of [23]. At
each sampling stage of the algorithm, with equal probability, either two
randomly-selected clusters of subjects, that are adjacent with respect to
the ordering of $R$, are merged into a single cluster; or a randomly
selected cluster of subjects is split into two clusters.
Given a random sample of a partition, $\rho _{n}\sim \pi (\rho _{n}|\mathbf{y},\mathbf{x})$ from the posterior, and given the fixed index $i\equiv i_{0}$
of a subject whose assignment variable $r_{i}$ is nearest to the cutoff
$r_{0}$, we then identify the single (posterior random)\ cluster of
locally-randomized subjects, with this cluster of subjects identified by the
subset of indices
\begin{equation*}
\mathcal{C}(r_{0})=\{i:s_{i_{0}}=s_{i}\}.
\end{equation*}
This is a posterior random cluster of subjects with values of the assignment
variables $r_{i}$ located in a neighborhood around the cutoff $r_{0}$.
For the subjects in this random cluster, we compare treatment outcomes
$y_{i}$ for subjects where $r_{i}\geq r_{0}$, versus non-treatment outcomes
$y_{i}$ for subjects where $r_{i}<r_{0}$, based on two-sample statistical
comparisons of various statistical quantities (e.g., means), as mentioned in
Section 1. We then repeat this process over a large number of MCMC\ samples
from the posterior distribution of the partitions, $\pi (\rho _{n}|\mathbf{y},\mathbf{x})$.\ We then summarize the posterior distribution of such
statistical comparisons over these samples, in order to provide estimates
and tests of the causal effect of the treatment versus non-treatment on the
outcome $Y$.
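As an illustration, the post-processing step described above can be sketched as follows. This is a hypothetical implementation of ours (not the authors' code), using the $t$-test as the two-sample comparison:

```python
import numpy as np
from scipy import stats

def causal_summary(partition_samples, r, y, r0):
    """Average a within-cluster two-sample comparison over posterior
    partition samples. Each element of partition_samples is an integer
    label vector s = (s_1, ..., s_n), non-decreasing in the order of r."""
    i0 = int(np.argmin(np.abs(r - r0)))           # subject nearest the cutoff
    diffs, pvals = [], []
    for s in partition_samples:
        members = np.flatnonzero(s == s[i0])      # the cluster C(r0)
        treated = members[r[members] >= r0]
        control = members[r[members] < r0]
        if len(treated) < 2 or len(control) < 2:
            continue                              # skip degenerate draws
        _, p = stats.ttest_ind(y[treated], y[control], equal_var=False)
        diffs.append(y[treated].mean() - y[control].mean())
        pvals.append(p)
    return np.mean(diffs), np.mean(pvals)         # posterior-averaged summaries

# Demo on synthetic data with a unit jump in outcomes at the cutoff:
r = np.linspace(-1.0, 1.0, 40)
y = np.where(r >= 0.0, 1.0, 0.0) + 0.01 * np.sin(np.arange(40))
partitions = [np.zeros(40, dtype=int)]            # one all-in-one-cluster draw
effect, pval = causal_summary(partitions, r, y, 0.0)
```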
This procedure can be extended to a fuzzy RD design, where the assignment
variable $R$ represents eligibility to receive a treatment, and some
subjects who are assigned treatment $T_{i}=\mathbf{1}(r_{i}\geq r_{0})$ opt
to receive the other treatment $(1-T_{i})$. Then, for the subset of subjects
in a given random cluster $\mathcal{C}_{\epsilon }(r_{0})$, we can divide
the difference in statistical quantities (e.g., treatment mean minus
non-treatment mean) by the difference $\overline{T}_{\mathcal{C}}^{(r_{i}\geq r_{0})}-\overline{T}_{\mathcal{C}}^{(r_{i}<r_{0})}$, where
$\overline{T}_{\mathcal{C}}^{(r_{i}\geq r_{0})}$ ($\overline{T}_{\mathcal{C}}^{(r_{i}<r_{0})}$, respectively) is the average $T_{i}$ for subjects in the
locally-randomized cluster $\mathcal{C}(r_{0})$ that have assignment
variables $r_{i}\geq r_{0}$ (with assignment variables $r_{i}<r_{0}$,
respectively). Such a divided difference provides an instrumental-variables
estimate of a causal effect of treatment versus non-treatment.\ This is
true, provided that the local exclusion restriction holds in the sense that
for a given cluster of subjects $\mathcal{C}_{\epsilon }(r_{0})$, any effect
of the assignment variable $\mathbf{1}(R\geq r_{0})$ on $Y$\ must be only
via $T$ [16].
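A hypothetical sketch (ours, not the authors' code) of this local Wald-type adjustment, given one sampled cluster of locally-randomized subjects:

```python
import numpy as np

def fuzzy_rd_effect(r, y, t, members, r0):
    """Within the cluster 'members', divide the outcome difference across
    the cutoff by the difference in average treatment receipt (a local
    instrumental-variables estimate for the fuzzy RD design)."""
    right = members[r[members] >= r0]    # assigned to treatment
    left = members[r[members] < r0]      # assigned to non-treatment
    outcome_diff = y[right].mean() - y[left].mean()
    takeup_diff = t[right].mean() - t[left].mean()
    return outcome_diff / takeup_diff

# Demo: a unit outcome jump with only 50% treatment take-up on the right
# gives an IV effect estimate of 1.0 / 0.5 = 2.0.
r = np.array([-1.0, -0.5, 0.5, 1.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
t = np.array([0.0, 0.0, 1.0, 0.0])
effect = fuzzy_rd_effect(r, y, t, np.arange(4), 0.0)
```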
For an RD setting that involves multiple pre-treatment confounding variables
$\mathbf{x}_{i}=(x_{1i},\ldots ,x_{pi})^{\intercal }$ (for $i=1,\ldots ,n$),
we construct a scalar-valued confounding variable $x_{i}$ ($i=1,\ldots ,n$) by
taking Miettinen's multivariate confounder score [18], with $x_{i}=\widehat{\beta }_{0}+\widehat{\boldsymbol{\beta }}_{\mathbf{x}}g(\mathbf{x}_{i})$,
based on
\begin{equation*}
\widehat{\mathrm{E}}[Y_{i}|\mathbf{x}_{i},r_{i}]=\widehat{\beta }_{0}+\widehat{\boldsymbol{\beta }}_{\mathbf{x}}B(\mathbf{x}_{i})^{\intercal }+\widehat{\beta }_{R}\mathbf{1}(r_{i}\geq r_{0}),
\end{equation*}
with $B(\mathbf{\cdot })$ a chosen (e.g., polynomial)\ basis transformation
of $\mathbf{x}$, and with coefficient estimates
\begin{equation*}
\widehat{\boldsymbol{\beta }}=(\widehat{\beta }_{0},\widehat{\boldsymbol{\beta }}_{\mathbf{x}},\widehat{\beta }_{R})
\end{equation*}
obtained by a linear model fit. We consider the Bayesian estimator
\begin{equation*}
\widehat{\boldsymbol{\beta }}=(v^{-1}\mathbf{I}_{q}+\mathbf{B}^{\intercal }\mathbf{B})^{-1}\mathbf{B}^{\intercal }\mathbf{y},
\end{equation*}
with $\mathbf{y}=(y_{1},\ldots ,y_{n})^{\intercal }$, and with $\mathbf{B}$
the $(n\times q)$-dimensional basis matrix with row vectors $(1,B(\mathbf{x}_{i})^{\intercal },\mathbf{1}(r_{i}\geq r_{0}))$, $i=1,\ldots ,n$.
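For concreteness, the ridge-type estimator $\widehat{\boldsymbol{\beta }}=(v^{-1}\mathbf{I}_{q}+\mathbf{B}^{\intercal }\mathbf{B})^{-1}\mathbf{B}^{\intercal }\mathbf{y}$ and the resulting confounder score can be sketched in a few lines of pure Python. The linear basis, the toy data, and all helper names below are our own illustrative choices, not part of the analysis in this paper.

```python
def ridge_fit(B, y, v):
    """Solve (v^{-1} I + B^T B) beta = B^T y by Gaussian elimination."""
    q = len(B[0])
    # A = v^{-1} I + B^T B,  rhs = B^T y
    A = [[(1.0 / v if i == j else 0.0) + sum(row[i] * row[j] for row in B)
          for j in range(q)] for i in range(q)]
    rhs = [sum(row[i] * yi for row, yi in zip(B, y)) for i in range(q)]
    # forward elimination with partial pivoting
    for col in range(q):
        piv = max(range(col, q), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, q):
            f = A[r][col] / A[col][col]
            for c in range(col, q):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # back substitution
    beta = [0.0] * q
    for r in range(q - 1, -1, -1):
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c]
                                for c in range(r + 1, q))) / A[r][r]
    return beta


def confounder_score(beta, x):
    """Miettinen-style score: intercept plus confounder part, dropping the R term."""
    return beta[0] + beta[1] * x
```

With rows of $\mathbf{B}$ of the form $(1, x_i, \mathbf{1}(r_i \geq r_0))$ and a large prior variance $v$, the fit approaches ordinary least squares, and the score keeps only the confounder-related part of the fit.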
\begin{table}[H] \centering
\begin{tabular}{lcc}
\hline
\textbf{Statistic} & \textbf{Non-Treatment} & \textbf{Treatment} \\ \hline
sample size & 103.1 \ (3, 190) & 6.7\ \ (2, 16) \\
mean & .37 \ ($-$.07, 1.55) & 1.23 \ (.97, 1.59) \\
variance & .76 \ (.01, 1.04) & .47 \ (.01, .85) \\
interquartile range & 1.17\ \ (.18, 1.71) & .90 \ (.24, 1.41) \\
skewness & $-$.11\ \ ($-$1.26, .71) & .03 \ ($-$.63, .82) \\
kurtosis & 2.69\ \ (1.45, 3.41) & 2.06 \ (1.00, 3.06) \\
1\%ile & $-$1.34\ \ ($-$2.20, 1.47) & .27 \ ($-$.65, 1.47) \\
10\%ile & $-$.77 \ ($-$1.35, 1.47) & .42 \ (.01, 1.47) \\
25\%ile & $-$.20 \ ($-$0.65, 1.47) & .74 \ (.30, 1.47) \\
50\%ile & .34 \ ($-$.18, 1.47) & 1.22 \ (1.00, 1.59) \\
75\%ile & .98 \ (.53, 1.65) & 1.65 \ (1.47, 2.06) \\
90\%ile & 1.43 \ (1.24, 1.71) & 2.14 \ (1.71, 2.77) \\
99\%ile & 2.04 \ (1.71, 2.42) & 2.28 \ (1.71, 2.89) \\
t-statistic & \multicolumn{2}{c}{$-$2.02 \ ($-$4.21, .88) \ \ p-value:\ \
.19 \ (.00, .91)} \\
$F$ test,variance & \multicolumn{2}{c}{\ 4.86 \ (.02, 34.45) \ \ p-value:\
.65\ \ (.05, .98)} \\
$\Pr [Y_{1}\geq Y_{0}|\mathcal{C}_{\epsilon }(r_{0})]$ & \multicolumn{2}{c}{
.70 \ (.21, .93)} \\
$\Pr [Y_{1}\leq Y_{0}|\mathcal{C}_{\epsilon }(r_{0})]$ & \multicolumn{2}{c}{
.22 \ (.04, .67)} \\
KS test & \multicolumn{2}{c}{.28\ (.05, .98)} \\ \hline
\end{tabular}
\caption{Statistical comparisons of treatment outcomes versus non-treatment
outcomes.}\label{TableKey}
\end{table}
\section{\textbf{Illustration}}
A data set was obtained under a partnership between four Chicago University
schools of education, which implemented a new curriculum that aims to train
and graduate teachers to improve Chicago public school education. This data
set involves $n=204$ undergraduate teacher education students, each of whom
enrolled into one of the four Chicago schools of education during either the
year of 2010, 2011, or 2012 (90\%\ female; mean age = $22.5$, s.d. = $5.3$,
$n=203$); $47\%$, $21\%$, $10\%$, and $22\%$ attended the four universities;
$49\%$, $41\%$ and $10\%$\ enrolled in 2010, 2011, and 2012). We investigate
the causal effect of basic skills on teacher performance (e.g., [10]),
because most U.S. schools of education based their undergraduate admissions
decisions on the ability of individual applicants to pass basic skills
tests. Here, the assignment variable is a 4-variate random variable, defined
by subtest scores on an Illinois test of basic skills, in reading, language,
math and writing. Each subtest has a minimum passing score of 240. The
dependent variable is the total score on the 50-item Haberman Teacher
Pre-screener assessment, and a score in the 40-50 range indicates a very
effective teacher. This assessment has a test-retest reliability of .93, and
has a 95\% accuracy rate in predicting which teachers will stay and succeed
in the teaching profession, and is used by many schools to assess applicants
of teaching positions [12]. Among all the $205$ students of the RD design,
the average Haberman Pre-screener score is $29.82$ (s.d. = $4.3$).\ The
average basic skills score in reading ($\mathrm{Read}$), language ($\mathrm{Lang}$), math ($\mathrm{Math}$), and writing ($\mathrm{Write}$) was $204.3$
(s.d. = $33.4$), $204.0$ (s.d. = $35.8$), $212.6$ (s.d. = $42.2$), and
$238.3$ (s.d. = $23.7$), respectively.
Using the Bayesian clustering method that was described in the previous
section, based on the rDP regression model, we analyzed the data set to
estimate the causal effect of passing the reading basic skills exam
(treatment), versus not passing (non-treatment), on students' ability to
teach in urban schools, for subjects located around the cutoff $r_{0}$\ of
an assignment variable $R$. We treated the Haberman z-score as the outcome
variable $Y$. Also, using the Bayesian rDP regression model, we regressed
the scalar-valued confounding variable $X$ on $R$. The confounding variable
$X$ is a multivariate confounder score constructed from 113 pre-treatment
variables that describe students' personal background, high school
background, and teaching preferences. The multivariate confounder score is
based on the Bayesian linear model fit with prior parameter $v=1000$,
and with linear basis $B(\mathbf{x})=(1,\mathbf{x})^{\intercal }$, as
described in the previous section. Also, the assignment variable $R$ is
defined by $\mathrm{B240d10}=(\min (\mathrm{Read},\mathrm{Lang},\mathrm{Math},\mathrm{Write})-240)/10$.
This gives the minimum of the four basic subtest scores, minus the
minimum cutoff score (240), rescaled by 10; it provides one standard
method for handling a multivariate assignment variable [24]. Then, the
treatment assignment variable $T$ is defined by \textrm{BasicPass} $=\mathbf{1}(\mathrm{B240d10}\geq 0)$.
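The two definitions above translate directly into code; this small Python sketch (the helper names are ours) simply restates $\mathrm{B240d10}$ and $\mathrm{BasicPass}$:

```python
def b240d10(read, lang, math_score, write):
    # minimum of the four subtest scores minus the 240 cutoff, in units of 10 points
    return (min(read, lang, math_score, write) - 240) / 10.0


def basic_pass(read, lang, math_score, write):
    # treatment indicator 1(B240d10 >= 0): the student passed all four subtests
    return 1 if b240d10(read, lang, math_score, write) >= 0 else 0
```

A student is assigned to treatment exactly when the weakest of the four subtest scores reaches the passing score of 240.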
Using code we wrote in MATLAB (MathWorks, Natick, MA), we
analyzed the data set using the Bayesian rDP model. For the model, we chose
vague prior specifications $\beta _{0}=\mathbf{0}$, $\mathbf{C}=\mathrm{diag}(10^{3},10)$, $\alpha =a=b=1$. All posterior estimates, reported here, are
based on 200,000\ MCMC\ samples of clusters $\mathcal{C}_{\epsilon }(r_{0})$
, which led to accurate posterior estimates of two-group statistical
comparisons, according to standard MCMC\ convergence assessment criteria
[9]. Specifically, univariate trace plots displayed good mixing of model
parameters and posterior predictive samples, while all posterior predictive
estimates obtained 95\%\ MC\ confidence intervals with half-width sizes of
.00.
Table 1 presents the results of the two-group statistical comparisons, in
terms of posterior mean and 95\% posterior credible interval summaries of the
group sample size, mean, variance, interquartile range, skewness, kurtosis,
quantiles, the t-statistic, the F-statistic for equality of variances,
exceedance probabilities $\Pr [Y_{1}\geq Y_{0}|\mathcal{C}_{\epsilon
}(r_{0})]$ and $\Pr [Y_{1}\leq Y_{0}|\mathcal{C}_{\epsilon }(r_{0})]$ of the
treatment outcome ($Y_{1}$) and the non-treatment outcome ($Y_{0}$), and the
Kolmogorov-Smirnov test for the equality of distributions. We find that, in
terms of the posterior mean of these statistics, the treatment group in the
random clusters $\mathcal{C}_{\epsilon }(r_{0})$ (having assignment variable
observations $\mathrm{B240d10}\geq 0$) tended to have higher values of the
Haberman z-score outcome $Y$ compared to the non-treatment group in the
random clusters $\mathcal{C}_{\epsilon }(r_{0})$ (having assignment variable
observations $\mathrm{B240d10}<0$), in terms of the mean and quantiles.
In contrast, the outcomes of the non-treatment tended to have larger
dispersion (variance and interquartile range), and more skewness and
kurtosis, compared to the treatment group. The treatment group had a
significantly higher 90\%ile (.90 quantile) of the z-score outcome $Y$
compared to the non-treatment group, as the 95\% posterior credible
intervals of the outcome for the two groups were (1.71, 2.77) and (1.24,
1.71), respectively.
\section{\textbf{Discussion\label{Section: Discussion}}}
In this paper, we proposed and illustrated a novel, Bayesian nonparametric
regression modeling approach to RD\ designs, which exploits the local
randomization feature of RD designs, and which bases causal inferences on
comparisons of treatment outcomes and non-treatment outcomes within
posterior random clusters of locally-randomized subjects. The approach can
be easily extended to fuzzy RD\ settings, involving treatment
non-compliance. We illustrate the Bayesian nonparametric approach through
the analysis of a real educational data set, to investigate the causal link
between basic skills and teaching ability. Finally, the approach assumes
that the RD\ design provides data on all confounding variables (that are
used to construct $X$), but this assumption of no hidden bias is
questionable. Therefore, in future research, it would be of interest to
extend the procedure, so that it can provide an analysis of the sensitivity
of causal inference, with respect to varying degrees of hidden biases, i.e.,
of effects of hypothetical unobserved confounding variables.\bigskip
\bigskip
\noindent {\LARGE References\smallskip }
\begin{description}
\item \lbrack 1] Aiken, L., S. West, D. Schwalm, J. Carroll, and S. Hsiung
(1998): \textquotedblleft Comparison of a randomized and two
quasi-experimental designs in a single outcome evaluation: Efficacy of a
university-level remedial writing program,\textquotedblright\ \textit{Evaluation Review}, \textit{22}, 207--244.
\item \lbrack 2] Berk, R., G. Barnes, L. Ahlman, and E. Kurtz (2010):
\textquotedblleft When second best is good enough: A comparison between a
true experiment and a regression discontinuity
quasi-experiment,\textquotedblright\ \textit{Journal of Experimental
Criminology}, \textit{6}, 191--208.
\item \lbrack 3] Black, D., J. Galdo, and J. Smith (2005): \textquotedblleft
Evaluating the regression discontinuity design using experimental
data,\textquotedblright\ Unpublished manuscript.
\item \lbrack 4] Bloom, H. (2012): \textquotedblleft Modern regression
discontinuity analysis,\textquotedblright\ \textit{Journal of Research on
Educational Effectiveness}, \textit{5}, 43--82.
\item \lbrack 5] Buddelmeyer, H. and E. Skoufias (2004): \textit{An
evaluation of the performance of regression discontinuity design on PROGRESA}, World Bank Publications.
\item \lbrack 6] Cattaneo, M., B. Frandsen, and R. Titiunik (2013):
\textquotedblleft Randomization inference in the regression discontinuity
design: An application to the study of party advantages in the U.S.
Senate,\textquotedblright\ Technical report, Department of Statistics,
University of Michigan.
\item \lbrack 7] Cook, T. (2008): \textquotedblleft Waiting for life to
arrive: A history of the regression discontinuity design in psychology,
statistics and economics,\textquotedblright\ \textit{Journal of Econometrics}, \textit{142}, 636--654.
\item \lbrack 8] Gelfand, A. and A. Kottas (2002): \textquotedblleft A
computational approach for full nonparametric Bayesian inference under
Dirichlet process mixture models,\textquotedblright\ \textit{Journal of
Computational and Graphical Statistics}, \textit{11}, 289--305.
\item \lbrack 9] Geyer, C. (2011): \textquotedblleft Introduction to
MCMC,\textquotedblright\ in S. Brooks, A. Gelman, G. Jones, and X. Meng,
eds., \textit{Handbook of Markov Chain Monte Carlo}, Boca Raton, FL: CRC,
3--48.
\item \lbrack 10] Gitomer, D., T. Brown, and J. Bonett (2011):
\textquotedblleft Useful signal or unnecessary obstacle? The role of basic
skills tests in teacher preparation,\textquotedblright\ \textit{Journal of
Teacher Education}, \textit{62}, 431--445.
\item \lbrack 11] Goldberger, A. (2008/1972): \textquotedblleft Selection
bias in evaluating treatment effects: Some formal
illustrations,\textquotedblright\ in D. Millimet, J. Smith, and E. Vytlacil,
eds., \textit{Modelling and evaluating treatment effects in economics},
Amsterdam: JAI Press, 1--31.
\item \lbrack 12] Haberman, M. (2008): \textit{The Haberman Star Teacher
Pre-Screener,} Houston: The Haberman Educational Foundation.
\item \lbrack 13] Karabatsos, G. and S. Walker (2012b): \textquotedblleft
Adaptive-modal Bayesian nonparametric regression,\textquotedblright\ \textit{Electronic Journal of Statistics}, \textit{6}, 2038--2068.
\item \lbrack 14] Kotz, S., Y. Lumelskii, and M. Pensky (2003): \textit{The
Stress-Strength Model and its Generalizations}, New Jersey: World Scientific.
\item \lbrack 15] Lee, D. (2008): \textquotedblleft Randomized experiments
from non-random selection in U.S. house elections,\textquotedblright\
\textit{Journal of Econometrics}, \textit{142}, 675--697.
\item \lbrack 16] Lee, D. and T. Lemieux (2010): \textquotedblleft
Regression discontinuity designs in economics,\textquotedblright\ \textit{The Journal of Economic Literature}, \textit{48}, 281--355.
\item \lbrack 17] Li, F., A. Mattei, and F. Mealli (2013): \textquotedblleft
Bayesian inference for regression discontinuity designs with application to
the evaluation of Italian university grants,\textquotedblright\ Technical
report, Department of Statistics, Duke University.
\item \lbrack 18] Miettinen, O. (1976): \textquotedblleft Stratification by
a multivariate confounder score,\textquotedblright\ \textit{American Journal
of Epidemiology}, \textit{104}, 609--620.
\item \lbrack 19] Rubin, D. (2008): \textquotedblleft For objective causal
inference, design trumps analysis,\textquotedblright\ \textit{The Annals of
Applied Statistics}, \textit{2}, 808--840.
\item \lbrack 20] Shadish, W., R. Galindo, V. Wong, P. Steiner, and T. Cook
(2011): \textquotedblleft A randomized experiment comparing random and
cutoff-based assignment.\textquotedblright\ \textit{Psychological Methods},
\textit{16}, 179-191.
\item \lbrack 21] Thistlethwaite, D. and D. Campbell (1960): \textquotedblleft
Regression-discontinuity analysis: An alternative to the ex-post facto
experiment,\textquotedblright\ \textit{Journal of Educational Psychology},
\textit{51}, 309--317.
\item \lbrack 22] Trochim, W. (1984): \textit{Research design for program
evaluation: The regression discontinuity approach}, Newbury Park, CA: Sage.
\item \lbrack 23] Wade, S., S. Walker, and S. Petrone (2013, to appear):
\textquotedblleft A predictive study of Dirichlet process mixture models for
curve fitting,\textquotedblright\ \textit{Scandinavian Journal of Statistics}, n/a--n/a.
\item \lbrack 24] Wong, V., P. Steiner, and T. Cook (2013):
\textquotedblleft Analyzing regression-discontinuity designs with multiple
assignment variables: A comparative study of four estimation
methods,\textquotedblright\ \textit{Journal of Educational and Behavioral
Statistics}, \textit{38}, 107--141.
\end{description}
\end{document}
\section{Introduction}
\paragraph{Delaunay triangulations and Voronoi diagrams.}
Let $P$ be a (finite) set of points in ${\mathbb R}^2$.
Let $\mathop{\mathrm{VD}}(P)$ and $\mathop{\mathrm{DT}}(P)$ denote the Voronoi diagram and Delaunay
triangulation of $P$, respectively. For a point $p \in P$, let
$\mathop{\mathrm{Vor}}(p)$ denote the Voronoi cell of $p$.
The Delaunay triangulation $\mathop{\mathrm{DT}}=\mathop{\mathrm{DT}}(P)$ consists of all
triangles whose circumcircles do not contain points of $P$ in their
interior. Its edges form the {\em Delaunay graph}, which is the
straight-edge dual graph of the Voronoi diagram of $P$. That is,
$pq$ is an edge of the Delaunay graph if and only if
$\mathop{\mathrm{Vor}}(p)$ and $\mathop{\mathrm{Vor}}(q)$ share an edge, which we denote by $e_{pq}$.
This is equivalent to the existence of a circle passing through $p$
and $q$ that does not contain any point of $P$ in its interior---any
circle centered at a point on $e_{pq}$ and passing through $p$ and $q$
is such a circle.
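This empty-circumcircle condition is exactly what the classical incircle determinant tests. A minimal Python sketch follows (floating-point only; a robust implementation would use exact predicates):

```python
def in_circle(a, b, c, d):
    """> 0 iff d lies inside the circumcircle of the ccw triangle (a, b, c),
    = 0 if the four points are cocircular, < 0 if d is outside."""
    rows = []
    for p in (a, b, c):
        px, py = p[0] - d[0], p[1] - d[1]
        rows.append((px, py, px * px + py * py))
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = rows
    # 3x3 determinant, expanded along the first row
    return (a1 * (b2 * c3 - b3 * c2)
            - a2 * (b1 * c3 - b3 * c1)
            + a3 * (b1 * c2 - b2 * c1))


def is_delaunay_triangle(tri, points):
    """Empty-circumcircle test for a ccw triangle against all other points."""
    a, b, c = tri
    return all(in_circle(a, b, c, p) <= 0
               for p in points if p not in (a, b, c))
```

A failure of this certificate during motion corresponds to four points becoming cocircular, at which moment a single edge flip restores Delaunayhood.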
Delaunay triangulations and Voronoi diagrams are fundamental to much
of computational geometry and its applications.
See \cite{AK,Ed2} for a survey and a
textbook on these structures.
In many applications of Delaunay/Voronoi methods (e.g., mesh generation and kinetic collision detection) the points are moving continuously, so
these diagrams need to be efficiently updated as motion occurs.
Even though the motion of the nodes is continuous, the combinatorial and topological structure of the Voronoi and
Delaunay diagrams change only at
discrete times when certain critical events occur. Their evolution
under motion can be studied within the framework of {\em kinetic data
structures} (KDS in short) of Basch {\em et al.}~\cite{bgh-dsmd-99,285869,g-kdssar-98},
a general methodology for designing efficient algorithms for maintaining
such combinatorial attributes of mobile data.
For the purpose of kinetic maintenance, Delaunay triangulations are
nice structures, because, as mentioned above, they admit local
certifications associated with individual triangles. This makes
it simple to maintain $\mathop{\mathrm{DT}}$ under point motion: an update is
necessary only when one of these empty circumcircle conditions
fails---this corresponds to cocircularities of certain subsets of
four points.\footnote{We assume that the motion of the points is sufficiently generic, so that no more than four points can become cocircular at any given time.} Whenever such an event happens,
a single edge flip easily restores Delaunayhood. Estimating the
number of such events, however, has been elusive---the problem
of bounding the number of combinatorial changes in $\mathop{\mathrm{DT}}$ for
points moving along semi-algebraic trajectories of constant description complexity has been in the
computational geometry lore for many years; see \cite{TOPP}.
Let $n$ be the number of moving points in $P$. We
assume that each point moves along an algebraic trajectory of
fixed degree or, more generally, along pseudo-algebraic trajectory of constant description complexity (see Section~\ref{sec:Prelim} for a more formal
definition).
Guibas et al.~\cite{gmr-vdmpp-92} showed a roughly cubic upper bound of
$O(n^2 \lambda_s(n))$ on the number of discrete (also known as \textit{topological}) changes in $\mathop{\mathrm{DT}}$, where $\lambda_s(n)$ is the maximum length
of an $(n,s)$-Davenport-Schinzel sequence~\cite{SA95}, and $s$ is a constant
depending on the motions of the points. A substantial gap exists between this upper bound
and the best known quadratic lower bound~\cite{SA95}.
It is thus desirable to find approaches for maintaining a substantial
portion of $\mathop{\mathrm{DT}}$ that {\em provably} experiences only a nearly
quadratic number of discrete changes, that is reasonably easy to define and
maintain, and that retains useful properties for further applications.
\paragraph{Polygonal distance functions.}
If the ``unit ball" of our
underlying norm is {\em polygonal} then things improve considerably.
In more detail, let $Q$ be a convex polygon with a constant
number, $k$, of edges. It induces a {\em convex distance function}
$$d_Q(x,y) = \min\{\lambda \mid y\in x+\lambda Q\};$$
$d_Q$ is a metric if $Q$ is centrally symmetric with respect to the origin.
We can define the $Q$-Voronoi diagram
of a set $P$ of points in the plane in the usual way, as the
partitioning of the plane into Voronoi cells, so that the cell
$\mathop{\mathrm{Vor}}^\diamond(p)$ of a point $p$ is
$\{ x\in{\mathbb R}^2 \mid d_Q(x,p)=\min_{p'\in P}d_Q(x,p') \}$.
Assuming that the points of $P$ are in general position with respect
to $Q$, these cells are nonempty, have pairwise disjoint interiors,
and cover the plane.
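When $Q$ is represented by halfplane constraints $\{z \mid \langle n_i, z\rangle \le c_i\}$ with each $c_i>0$ (so the origin is interior to $Q$), the value $d_Q(x,y)$ is the gauge of $y-x$ with respect to $Q$, namely $\max_i \langle n_i, y-x\rangle / c_i$. A short Python sketch under this representation choice (which is ours, for illustration):

```python
def d_Q(x, y, halfplanes):
    """Convex distance d_Q(x, y) = min{lam >= 0 : y in x + lam*Q}, where
    Q = {z : <n, z> <= c for (n, c) in halfplanes}, each c > 0 (origin in Q).
    d_Q is a metric when Q is centrally symmetric about the origin."""
    zx, zy = y[0] - x[0], y[1] - x[1]
    # gauge of z = y - x: the clamp to 0 covers the case z = 0
    return max(0.0, max((nx * zx + ny * zy) / c for (nx, ny), c in halfplanes))
```

For example, taking $Q$ to be the square $[-1,1]^2$ makes $d_Q$ the $L_\infty$ distance.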
As in the Euclidean case, the $Q$-Voronoi diagram of $P$ has its
dual representation, which we refer to as the {\em $Q$-Delaunay
triangulation} $\mathop{\mathrm{DT}}^\diamond(P)=\mathop{\mathrm{DT}}^\diamond$. A triple of points in $P$ define a
triangle in $\mathop{\mathrm{DT}}^\diamond$ if and only if they lie on the boundary of some
homothetic copy of $Q$ that does not contain any point of $P$ in its
interior. Assuming that $P$ is in general position, these $Q$-Delaunay
triangles form a triangulation of a certain simply-connected polygonal
region that is contained in the convex hull of $P$.
Unlike the Euclidean case, it does not always coincide with the convex hull (see Figures~\ref{Fig:ConesCertif} and~\ref{Fig:AlmostTriangulation} for examples).
See Chew and Drysdale~\cite{CD} and Leven and Sharir~\cite{LS} for analysis of Voronoi and Delaunay diagrams of this kind.
For kinetic maintenance, polygonal Delaunay triangulations are
``better'' than Euclidean Delaunay triangulations because, as shown by
Chew~\cite{Chew}, when the points of $P$ move (in the algebraic
sense assumed above), the number of topological changes in the
$Q$-Delaunay triangulation is only nearly quadratic in $n$.
One of
the major observations in this paper is that the \textit{stable portions} of the Euclidean Delaunay triangulation and the $Q$-Delaunay triangulation are closely related.
\paragraph{Stable Delaunay edges.}
We introduce the notion of \textit{$\alpha$-stable Delaunay edges},
for a fixed parameter $\alpha>0$, defined as follows. Let $pq$ be
a Delaunay edge under the Euclidean norm, and let $\triangle pqr^+$ and $\triangle pqr^-$
be the two Delaunay triangles incident to $pq$. Then $pq$ is
called {\em $\alpha$-stable} if its opposite angles in these triangles
satisfy $\angle pr^+q + \angle pr^-q < \pi-\alpha$. (The case where
$pq$ lies on the convex hull of $P$ is treated as if one of $r^+,r^-$
lies at infinity, so that the corresponding angle $\angle pr^+q$ or
$\angle pr^-q$ is equal to $0$.) An equivalent and more useful definition, in terms of the dual Voronoi diagram, is that
$pq$ is $\alpha$-stable if the equal angles at which $p$ and $q$
see their common Voronoi edge $e_{pq}$ are at least $\alpha$.
See Figure \ref{Fig:LongDelaunay}.
\begin{figure}[htbp]
\begin{center}
\input{LongDelaunay.pstex_t}
\caption{\small \sf The points $p$ and $q$ see their common Voronoi edge $ab$ at (equal) angles $\beta$. This is equivalent to the angle condition $x+y=\pi-\beta$ for the two adjacent Delaunay triangles.}
\label{Fig:LongDelaunay}
\end{center}
\end{figure}
A justification for calling such edges stable lies in the following
observation: If a Delaunay edge $pq$ is $\alpha$-stable then it
remains in $\mathop{\mathrm{DT}}$ during any continuous motion of $P$ for which
every angle $\angle prq$, for $r\in P\setminus\{p,q\}$, changes
by at most $\alpha/2$. This is clear because at the time $pq$ is
$\alpha$-stable we have
$\angle pr^+q + \angle pr^-q < \pi-\alpha$ for \textit{any} pair of points
$r^+$, $r^-$ lying on opposite sides of the line $\ell$ supporting $pq$, so,
if each of these angles change by at most $\alpha/2$ we still have
$\angle pr^+q + \angle pr^-q \le \pi$, which is easily seen to imply
that $pq$ remains an edge of $\mathop{\mathrm{DT}}$. (This argument also covers the cases when a point $r$ crosses $\ell$ from side to side: Since
each point, on either side of $\ell$, sees $pq$ at an angle of $\leq \pi-\alpha$, it follows that no point can cross
$pq$ itself -- the angle has to increase from $\pi-\alpha$ to $\pi$. Any other crossing of $\ell$ by a point $r$ causes $\angle prq$ to
decrease to $0$, and even if it increases to $\alpha/2$ on the other side of $\ell$, $pq$ is still an edge of $\mathop{\mathrm{DT}}$, as is easily checked.)
Hence, as long as the ``small angle change'' condition
holds, stable Delaunay edges remain a ``long time'' in the
triangulation.
Informally speaking, the non-stable edges $pq$ of $\mathop{\mathrm{DT}}$ are those for which $p$ and $q$
are almost cocircular with their two common Delaunay neighbors
$r^+$, $r^-$, and hence are more likely to get flipped ``soon".
\paragraph{Overview of our results.}
Let $\alpha>0$ be a fixed parameter.
In this paper we show how to maintain a subgraph of the full Delaunay
triangulation $\mathop{\mathrm{DT}}$, which we call a {\em $(c\alpha,\alpha)$-stable Delaunay graph} ($\mathop{\mathrm{SDG}}$ in short), so that (i) every edge of $\mathop{\mathrm{SDG}}$ is $\alpha$-stable,
and (ii) every $c\alpha$-stable edge of $\mathop{\mathrm{DT}}$ belongs to $\mathop{\mathrm{SDG}}$, where $c>1$ is some (small) absolute constant.
Note that $\mathop{\mathrm{SDG}}$ is not uniquely defined, even when $c$ is fixed.
In Section \ref{sec:Prelim}, we introduce several useful definitions and show that the number of discrete changes in the $\mathop{\mathrm{SDG}}$s
that we consider
is nearly quadratic.
What this analysis also implies is that if the true bound for
kinetic changes in a Delaunay triangulation is really close to cubic, then
the overwhelming majority of these changes involve edges which never become stable and just flicker in and out of the diagram by cocircularity with their two Delaunay neighbors.
In Sections \ref{Sec:polygProp} and \ref{Sec:ReduceS} we show that $\mathop{\mathrm{SDG}}$ can be
maintained by a kinetic data structure that uses only near-linear
storage (in the terminology of \cite{bgh-dsmd-99}, it is {\em compact}),
encounters only a nearly quadratic number of critical events
(it is {\em efficient}), and processes each event in polylogarithmic
time (it is {\em responsive}). The second data structure, described in Section \ref{Sec:ReduceS}, can be slightly modified to ensure that each point appears at any time
in only polylogarithmically many places in the structure (it then becomes
{\em local}).
The scheme described in Section \ref{Sec:polygProp} is based on a useful and interesting ``equivalence" connection
between the (Euclidean) $\mathop{\mathrm{SDG}}$ and a suitably defined ``stable" version of the Delaunay triangulation of $P$ under the ``polygonal" norm whose unit ball $Q$ is a regular
$k$-gon, for $k=\Theta(1/\alpha)$. As noted above, Voronoi and Delaunay structures under polygonal norms are particularly
favorable for kinetic maintenance because of Chew's
result~\cite{Chew}, showing that the number of topological changes in
$\mathop{\mathrm{DT}}^\diamond(P)$ is $O^*(n^2k^4)$; here
the $O^*(\cdot)$ notation hides a factor that depends
sub-polynomially on both $n$ and $k$. In other words, the scheme simply maintains the ``polygonal" diagram $\mathop{\mathrm{DT}}^\diamond(P)$ in its entirety, and selects from it those edges that are also stable edges of the Euclidean diagram $\mathop{\mathrm{DT}}$.
The major disadvantage of the solution in Section \ref{Sec:polygProp} is
the rather high (proportional to $\Theta(1/\alpha^4)$) dependence on
$1/\alpha$ ($\approx k$) of the bound on the number of
topological changes. We do not know whether the upper bound $O^*(n^2k^4)$ on the number of topological changes in
$\mathop{\mathrm{DT}}^\diamond(P)$ is nearly tight (in its dependence on $k$).
To remedy this, we present in Section \ref{Sec:ReduceS} an
alternative scheme for maintaining stable
(Euclidean) Delaunay edges. The scheme is reminiscent of the kinetic
schemes used in \cite{KineticNeighbors} for maintaining closest pairs
and nearest neighbors. It extracts $O^*(n)$ pairs of points of $P$
that are candidates for being stable Delaunay edges. Each point
$p\in P$ then runs $O(1/\alpha)$ \textit{kinetic and dynamic tournaments} involving
the other points in its candidate pairs. Roughly, these tournaments
correspond to shooting $O(1/\alpha)$ rays from $P$ in fixed directions and finding along each ray
the nearest point equally distant from $p$ and from some other
candidate point $q$. We show that $pq$ is a stable Delaunay edge if and
only if $q$ wins many (at least some constant number of) consecutive tournaments of $p$ (or $p$ wins many consecutive tournaments of $q$). A careful analysis shows that
the number of events that this scheme processes (and the overall
processing time) is only $O^*(n^2/\alpha^2)$.
Section \ref{Sec:SDGProperties} establishes several useful properties of stable Delaunay graphs. In particular, we show that at
any given time the stable subgraph contains at least $\left[1-\frac{3}{2(\pi/\alpha-2)}\right]n$ Delaunay
edges, i.e., at least about one third of the maximum possible number of edges. In addition, we
show that at any moment the $\mathop{\mathrm{SDG}}$ contains the closest pair, the so-called
\textit{$\beta$-skeleton} of $P$, for $\beta=1+\Omega(\alpha^2)$ (see \cite{Crusts,Skeletons}), and the \textit{crust} of a sufficiently densely sampled point set along a smooth curve (see \cite{Amenta,Crusts}).
We also extend the connection in Section \ref{Sec:polygProp} to arbitrary distance functions $d_Q$ whose unit ball $Q$ is sufficiently close (in the Hausdorff sense) to the Euclidean one (i.e., the unit disk).
\section{Preliminaries}\label{sec:Prelim}
\seclab{sdg}\seclab{ddg}
\paragraph{Stable edges in Voronoi diagrams.}
Let $\{u_0, \ldots, u_{k-1}\} \subset {\mathbb S}^1$ be a set of
$k=\Theta(1/\alpha)$ equally spaced directions in ${\mathbb R}^2$. For
concreteness take $u_i = (\cos
(2\pi i/k), -\sin (2\pi i/k))$, $0\le i < k$ (so our directions $u_i$ go clockwise as $i$ increases).\footnote{%
The index arithmetic is modulo $k$, i.e., $u_i=u_{i+k}$.}
For a point $p\in P$ and a unit vector
$u$ let $u[p]$ denote the ray $\{p+\lambda u\mid \lambda \geq 0\}$ that
emanates from $p$ in direction $u$. For a pair of points $p,q \in P$
let $b_{pq}$ denote the perpendicular bisector of $p$ and $q$.
If $b_{pq}$ intersects $u_i[p]$, then the expression
\begin{equation}\label{Eq:DirectDist}
\varphi_i[p,q]=\frac{\|q-p\|^2}{2\inprod{q-p}{u_i}}
\end{equation}
is the distance
between $p$ and the intersection point of $b_{pq}$ with
$u_i[p]$.
If $b_{pq}$ does not
intersect $u_i[p]$ we define $\varphi_i[p,q] = \infty$.
The point $q$ minimizes $\varphi_i[p,q']$, among all points
$q'$ for which $b_{pq'}$ intersects $u_i[p]$, if and only if the
intersection between $b_{pq}$ and $u_i[p]$ lies on the Voronoi edge $e_{pq}$. We call $q$ the {\em neighbor of $p$ in direction $u_i$},
and denote it by $N_i(p)$; see Figure \ref{Fig:StableVoronoi}.
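In code, the expression $\varphi_i[p,q]$ above and the neighbor $N_i(p)$ look as follows (a Python sketch over a static candidate set; the kinetic maintenance machinery is omitted):

```python
import math


def phi(p, q, u):
    """Distance from p to the bisector b_pq along the ray u[p]
    (Eq. (1)); infinity if the bisector does not cross the ray."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    denom = 2.0 * (dx * u[0] + dy * u[1])
    if denom <= 0.0:
        return math.inf
    return (dx * dx + dy * dy) / denom


def neighbor(p, candidates, i, k):
    """N_i(p): the candidate q minimizing phi_i[p, q], for the clockwise
    direction u_i = (cos(2*pi*i/k), -sin(2*pi*i/k))."""
    u = (math.cos(2 * math.pi * i / k), -math.sin(2 * math.pi * i / k))
    return min(candidates, key=lambda q: phi(p, q, u))
```

For instance, with $p$ at the origin and $u_0=(1,0)$, the bisector of $p$ and $q=(2,0)$ crosses the ray at distance $1$, so the nearer of two collinear candidates wins.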
The {\em (angular) extent} of a Voronoi edge $e_{pq}$ of two points
$p,q\in P$ is
the angle at which it is seen from either $p$ or $q$ (these two
angles are equal). For a given angle $\alpha \le \pi$,
$e_{pq}$ is called {\em $\alpha$-long} (resp., {\em
$\alpha$-short}) if the extent of $e_{pq}$ is at least
(resp., smaller than) $\alpha$. We also say that
$pq \in \mathop{\mathrm{DT}}(P)$ is {\em $\alpha$-long} (resp., {\em
$\alpha$-short}) if $e_{pq}$ is {\em $\alpha$-long} (resp., {\em
$\alpha$-short}). As noted in the introduction, these notions can also be defined (equivalently) in terms of the angles in the Delaunay triangulation: A Delaunay edge $pq$, which is not a hull edge, is $\alpha$-long if and only if $\angle pr^+q+\angle pr^-q\leq \pi-\alpha$,
where $\triangle pr^+q$ and $\triangle pr^-q$ are the two Delaunay triangles incident to $pq$. See Figure \ref{Fig:LongDelaunay}; hull edges are handled similarly, as discussed in the introduction.
Given parameters $\alpha'>\alpha>0$, we seek to construct (and
maintain under motion) an \emph{$(\alpha',\alpha)$-stable Delaunay
graph} (or \emph{stable Delaunay graph}, for brevity, which we further abbreviate as $\mathop{\mathrm{SDG}}$) of $P$, which
is any subgraph ${\sf G}$ of $\mathop{\mathrm{DT}}(P)$ with the following properties:
\begin{itemize}
\item[(S1)]
Every $\alpha'$-long edge of $\mathop{\mathrm{DT}}(P)$ is an edge of
${\sf G}$.
\item[(S2)]
Every edge of ${\sf G}$ is an $\alpha$-long edge of $\mathop{\mathrm{DT}}(P)$.
\end{itemize}
An $(\alpha',\alpha)$-stable Delaunay graph need not be
unique. In what follows, $\alpha'$ will always be some fixed (and reasonably small) multiple of $\alpha$.
\paragraph{Kinetic tournaments.}
Kinetic tournaments were first studied by
Basch \textit{et al.}~\cite{bgh-dsmd-99}, for kinetically maintaining the lowest point in a set $P$ of $n$ points moving on some vertical line, say the $y$-axis, so that their trajectories are algebraic of bounded degree, as above.
These tournaments are a key ingredient in the data structures that we will develop for maintaining stable Delaunay graphs. Such a tournament is
represented and maintained using the following variant of a heap.
Let $T$ be a minimum-height balanced binary tree, with the points stored
at its leaves (in an arbitrary order). For an internal node $v\in T$,
let $P_v$ denote the set of points stored in the subtree rooted at $v$. At any
specific time $t$, each internal node $v$ stores the lowest point
among the points in $P_v$ at time $t$, which is called the {\em winner\/} at $v$.
The winner at the root is the desired overall lowest point of $P$.
To maintain $T$ we associate a certificate with each internal node $v$, which
asserts which of the two winners, at the left child and at the
right child of $v$, is the winner at $v$. This certificate remains
valid as long as (i) the winners at the children of $v$ do not change,
and (ii) the order along the $y$-axis between these two
``sub-winners'' does not change. The actual certificate caters
only to the second
condition; the first will be taken care of recursively.
Each certificate has an associated failure time, which is the next time
when these two winners switch their order along the $y$-axis.
We store all certificates in another heap, using the failure times
as keys.\footnote{Any ``standard'' heap that supports
{\bf insert}, {\bf delete}, and {\bf deletemin} in $O(\log n)$
time is good for our purpose.}
This heap of certificates is called the {\em event queue}.
Processing an event is simple. When the two sub-winners $p,q$ at some node $v$ change their order, we compute the new failure time of the certificate at $v$ (the first future time when $p$ and $q$ meet again), update the event queue accordingly, and propagate the new winner, say $p$, up the tree, revising the certificates at the ancestors of $v$, if needed.
If we assume that the trajectories of each pair of points intersect at most $r$
times then
the overall number of changes of winners, and
therefore also the overall number of events, is at most
$\sum_v |P_v| \beta_r(|P_v|)= O(n \beta_r(n) \log n)$. Here
$\beta_r(n)=\lambda_r(n)/n$, and $\lambda_r(n)$ is the maximum length of a Davenport-Schinzel sequence
of order $r$ on $n$ symbols; see \cite{SA95}.
This is larger by a logarithmic factor than the maximum possible
number of times the lowest point along the $y$-axis can indeed change,
since this latter number is bounded by the complexity of the lower
envelope of the trajectories of the points in $P$ (which, as noted above, records the changes in the winner at the root of $T$).
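To make the mechanism concrete, here is a minimal sketch of a \emph{static} kinetic tournament in Python. It makes two simplifying assumptions not present in the text: trajectories are linear, $y_i(t)=a_i+b_it$ (rather than algebraic of bounded degree), and there are no insertions or deletions. Certificates are kept in a heap keyed by failure time, and stale certificates are invalidated by a per-node version counter.

```python
import heapq

class KineticTournament:
    """Toy static kinetic tournament: tracks the lowest of n points moving
    on the y-axis with linear trajectories y_i(t) = a_i + b_i*t.
    (Bounded-degree algebraic motion and dynamic updates are omitted.)"""

    def __init__(self, trajectories, t0=0.0):
        self.traj = list(trajectories)        # list of (a_i, b_i)
        self.now = t0
        size = 1
        while size < max(1, len(self.traj)):
            size *= 2
        self.size = size
        self.win = [None] * (2 * size)        # winner (point index) at each node
        for i in range(len(self.traj)):
            self.win[size + i] = i
        self.version = [0] * size             # invalidates stale certificates
        self.events = []                      # heap of (failure_time, node, version)
        for v in range(size - 1, 0, -1):      # settle internal nodes bottom-up
            self._settle(v)

    def _key(self, i):
        a, b = self.traj[i]
        return (a + b * self.now, b)          # tie-break by slope: lower-just-after wins

    def _settle(self, v):
        l, r = self.win[2 * v], self.win[2 * v + 1]
        if r is None or (l is not None and self._key(l) <= self._key(r)):
            self.win[v], loser = l, r
        else:
            self.win[v], loser = r, l
        self.version[v] += 1
        if loser is not None:
            t = self._failure_time(self.win[v], loser)
            if t is not None:                 # certificate: winner stays below loser
                heapq.heappush(self.events, (t, v, self.version[v]))

    def _failure_time(self, i, j):
        # first time after self.now at which the loser j drops below the winner i
        (ai, bi), (aj, bj) = self.traj[i], self.traj[j]
        if bj >= bi:
            return None                       # j never overtakes i downward
        t = (ai - aj) / (bj - bi)
        return t if t > self.now else None

    def winner(self):
        return self.win[1]                    # overall lowest point at time self.now

    def advance(self, t):
        """Process all certificate failures up to time t."""
        while self.events and self.events[0][0] <= t:
            ft, v, ver = heapq.heappop(self.events)
            if ver != self.version[v]:
                continue                      # stale certificate
            self.now = ft
            while v >= 1:                     # the new winner propagates up the tree
                self._settle(v)
                v //= 2
        self.now = t
```

For instance, with trajectories $y_0(t)=0$, $y_1(t)=1-t$, $y_2(t)=3-2t$, the winner at the root changes from point $0$ to point $1$ at $t=1$ and to point $2$ at $t=2$, and each change triggers a resettling along a root path, as described above.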
Agarwal {\em et al.}~\cite{KineticNeighbors} show how to make
such a tournament also {\em dynamic\/}, supporting insertions and deletions of points. They replace the balanced binary tree $T$ by
a {\em weight-balanced $(BB(\alpha))$ tree} \cite{NR73}
(and see also \cite{Mehlhorn}). This allows us to insert a new point
anywhere we wish in $T$, and to delete any point from $T$,
in $O(\log n)$ time. Each such insertion or deletion may
change $O(\log n)$ certificates, along the corresponding search path,
and therefore updating the event queue takes $O(\log^2 n)$ time, including the time for the
structural updates of (rotations in) $T$; here $n$ denotes the
actual number of points in $T$, at the step where we perform
the insertion or deletion. The analysis of \cite{KineticNeighbors} is summarized in Theorem \ref{thm:kinetic-tour}.
\begin{theorem}[\textbf{Agarwal \textit{et al.}}~\cite{KineticNeighbors}] \label{thm:kinetic-tour}
A sequence of $m$ insertions and deletions into a kinetic tournament,
whose maximum size at any time is $n$ (assuming $m\ge n$), when
implemented as a weight-balanced tree in the manner described above,
generates at most $O(m\beta_{r+2}(n)\log n)$ events, with a total processing cost
of $O(m\beta_{r+2}(n)\log^2 n)$. Here $r$ is the maximum number of times a pair of points intersect, and $\beta_{r+2}(n)=\lambda_{r+2}(n)/n$.
Processing an update or a tournament event takes
$O(\log ^2 n)$ worst-case time. A dynamic kinetic tournament on $n$
elements can be constructed in $O(n)$ time.
\end{theorem}
\noindent {\it Remarks:} (i) Theorem \ref{thm:kinetic-tour} subsumes the static case too, by inserting all the elements ``at the beginning of time'', and then tracing the kinetic changes. \\
\noindent (ii) Note that the amortized cost of an update or of processing a tournament event is only $O(\log n)$ (as opposed to the $O(\log^2n)$ worst-case cost).
\paragraph{Maintenance of an SDG.}
Let $P=\{p_1,\ldots,p_n\}$ be a set of points
moving in ${\mathbb R}^2$. Let $p_i(t)=(x_i(t),y_i(t))$ denote the position
of $p_i$ at time $t$. We call the motion of $P$
\emph{algebraic} if each $x_i(t),y_i(t)$ is a
polynomial function of $t$, and the \emph{degree} of motion of $P$ is the maximum
degree of these polynomials.
Throughout this paper we assume that the motion of $P$ is
algebraic and that its degree is bounded by a constant.
In this subsection we present a simple technique for maintaining a
$(2\alpha,\alpha)$-stable Delaunay graph. Unfortunately this
algorithm requires quadratic space.
It is based on the following easy observation (see Figure \ref{Fig:StableVoronoi}),
where $k$ is an integer, and the unit vectors (directions) $u_0,\ldots,u_{k-1}$ are as defined earlier.
\begin{lemma} \label{lem:alpha}
Let $\alpha = 2\pi/k$.
(i) If the extent of $e_{pq}$ is larger than $2\alpha$ then there are two consecutive directions
$u_i$, $u_{i+1}$, such that $q$ is the neighbor of $p$ in directions $u_i$ and $u_{i+1}$. \\
(ii) If there are two consecutive directions $u_i,u_{i+1}$, such that $q$ is the neighbor of $p$ in both directions $u_i$ and $u_{i+1}$, then
the extent of $e_{pq}$ is at least $\alpha$.
\end{lemma}
\begin{figure}[htbp]
\begin{center}
\input{StableVoronoi.pstex_t}
\caption{\small \sf $q$ is the neighbor of $p$ in the directions $u_i$ and $u_{i+1}$, so the Voronoi edge $e_{pq}$ is $\alpha$-long.}
\label{Fig:StableVoronoi}
\end{center}
\end{figure}
The algorithm maintains Delaunay edges $pq$ such that there are two consecutive directions
$u_i$ and $u_{i+1}$ along which $q$ is the neighbor of $p$.
For each point $p$ and direction $u_i$ we get a set of at most $n-1$ piecewise
continuous functions of time, $\varphi_i[p,q]$, one for each point $q \not=
p$, as defined in (\ref{Eq:DirectDist}). (Recall that $\varphi_i[p,q]=\infty$ when $u_i[p]$
does not intersect $b_{pq}$.) By assumption on the motion of $P$,
for each $p$ and $q$, the domain in which $\varphi_i[p,q](t)$ is
defined consists of a constant number of intervals.
For each point $p$, and ray $u_i[p]$,
consider each function $\varphi_i[p,q]$ as the
trajectory of a point moving
along the ray and corresponding to $q$. The algorithm maintains
these points in a dynamic and kinetic tournament $K_i(p)$
(see Theorem \ref{thm:kinetic-tour}) that keeps track of the
minimum of $\{\varphi_i[p,q](t)\}_{q\neq p}$ over time.
For each pair of points $p$ and $q$ such that $q$
wins in two consecutive tournaments, $K_i(p)$ and $K_{i+1}(p)$, of $p$,
it keeps the edge $pq$ in
the stable Delaunay graph. It is trivial to update this graph as a by-product of the updates of the various tournaments.
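For intuition, the following Python sketch evaluates this rule at a single time instant: for each point $p$ and direction $u_i$ it finds the minimizer of $\varphi_i[p,q]$ by a naive quadratic scan (which the kinetic structure replaces by the tournament $K_i(p)$), and reports an edge whenever the same $q$ wins in two consecutive directions. The closed-form $\varphi_i[p,q]=\|q-p\|^2/(2\,(q-p)\cdot u_i)$ is the distance from $p$ along $u_i[p]$ to the bisector $b_{pq}$.

```python
import math

def stable_delaunay_edges(points, k):
    """Static snapshot of the (2*alpha, alpha)-stable Delaunay graph rule,
    alpha = 2*pi/k: report pq when q is the neighbor of p in two
    consecutive directions u_i, u_{i+1}.  Naive O(k n^2) scan."""
    def phi(p, q, u):
        # distance from p along the ray u[p] to the bisector b_{pq};
        # infinite when the ray does not cross the bisector
        dx, dy = q[0] - p[0], q[1] - p[1]
        proj = dx * u[0] + dy * u[1]
        if proj <= 0:
            return math.inf
        return (dx * dx + dy * dy) / (2 * proj)

    dirs = [(math.cos(2 * math.pi * i / k), math.sin(2 * math.pi * i / k))
            for i in range(k)]
    edges = set()
    for pi, p in enumerate(points):
        wins = []                          # winner of the "tournament" per direction
        for u in dirs:
            best, best_val = None, math.inf
            for qi, q in enumerate(points):
                if qi == pi:
                    continue
                v = phi(p, q, u)
                if v < best_val:
                    best, best_val = qi, v
            wins.append(best)
        for i in range(k):
            w = wins[i]
            if w is not None and w == wins[(i + 1) % k]:
                edges.add((min(pi, w), max(pi, w)))
    return edges
```

On the four corners of a unit square with $k=8$, the four sides are reported (each dual Voronoi edge has extent $3\pi/4$) while the diagonals, whose dual Voronoi features degenerate to a single vertex, are not.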
The analysis of this
data structure is straightforward using Theorem \ref{thm:kinetic-tour},
and yields the following result.
\begin{theorem} \label{thm:ddj}
Let $P$ be a set of $n$ moving points in ${\mathbb R}^2$ under algebraic
motion of bounded degree, let $k$ be an integer, and let $\alpha = 2\pi/k$.
A $(2\alpha,\alpha)$-stable Delaunay graph
of $P$ can be maintained using
$O(kn^2)$ storage and processing
$O(kn^2\beta_{r+2}(n)\log n)$ events, for a total cost
of $O(kn^2\beta_{r+2}(n)\log^2 n)$ time.
The processing of each event takes
$O(\log ^2 n)$ worst-case time.
Here $r$ is a constant that depends on the degree of motion of $P$.
\end{theorem}
Later on, in Section \ref{Sec:ReduceS}, we will revise this approach and reduce the storage to nearly linear, by letting only
a small number of points participate in each tournament. The filtering procedure for the points makes the improved solution
somewhat more involved.
\section{An SDG Based on Polygonal Voronoi Diagrams}
\label{sec:ViaPolygonal}
\label{Sec:polygProp}
Let $Q=Q_k$ be a regular $k$-gon
for some even $k=2s$, circumscribed by the unit disk, and let $\alpha = \pi/s$ (this is the angle at which the center of $Q$ sees an edge).
Let $\mathop{\mathrm{VD}}^\diamond(P)$ and $\mathop{\mathrm{DT}}^\diamond(P)$ denote the $Q$-Voronoi diagram and
the dual $Q$-Delaunay triangulation of $P$, respectively.
In this section we show that the set of
edges of $\mathop{\mathrm{VD}}^\diamond(P)$ with sufficiently many \textit{breakpoints} (see below for details) form a
$(\beta,\beta')$-stable (Euclidean) Delaunay graph for appropriate multiples
$\beta,\beta'$ of $\alpha$.
Thus, by kinetically maintaining $\mathop{\mathrm{VD}}^\diamond(P)$ (in its entirety),
we shall get ``for free'' a KDS for keeping track of a stable portion
of the Euclidean DT.
\subsection{Properties of $\mathbf{VD^\diamond(P)}$}
\label{Sec:PolygonalBackground}
We first review the properties of the (stationary)
$\mathop{\mathrm{VD}}^\diamond(P)$ and $\mathop{\mathrm{DT}}^\diamond(P)$. Then we consider the
kinetic version of these diagrams, as the points of $P$ move, and
review Chew's proof~\cite{Chew} that the number of topological
changes in these diagrams, over time, is only nearly quadratic
in $n$. Finally, we present a straightforward kinetic data structure
for maintaining $\mathop{\mathrm{DT}}^\diamond(P)$ under motion that uses linear storage,
and that processes a nearly quadratic number of events,
each in $O(\log n)$ time.
Although later on we will take $Q$ to be a regular $k$-gon, the analysis in this subsection is more general, and we only assume here that $Q$ is an arbitrary convex $k$-gon, lying in general position with respect to $P$.
\paragraph{Stationary $Q$-diagrams.}
The {\em bisector} $b_{pq}^\diamond$ between
two points $p$ and $q$, with respect to $d_Q(\cdot,\cdot)$, is the
locus of all
placements of the center of any homothetic copy $Q'$ of $Q$ that
touches $p$ and $q$.
$Q'$ can be classified according to the pair of its edges, $e_1$ and $e_2$,
that touch $p$ and $q$, respectively. If we slide $Q'$ so that its
center moves along $b_{pq}^\diamond$ (and its size expands or shrinks to
keep it touching $p$ and $q$), and the contact edges, $e_1$ and $e_2$,
remain fixed, the center traces a straight segment.
The bisector is a concatenation of $O(k)$ such segments. They
meet at {\em breakpoints}, which are placements of the center of a
copy $Q'$ that touches $p$ and $q$ and one of the contact points
is a vertex of $Q$; see Figure \ref{Fig:CornerContact}. We call such a
placement a {\em corner contact} at the appropriate point.
Note that a corner contact where some vertex $w$ of (a copy $Q'$ of) $Q$ touches $p$
has the property that the center of $Q'$ lies on the fixed ray
emanating from $p$ and parallel to the directed segment from $w$
to the center of $Q$.
\begin{figure}[htbp]
\begin{center}
\input{CornerPlacement.pstex_t}
\caption{\small \sf Each breakpoint on $b_{pq}^\diamond$ corresponds to a corner contact of $Q$ at one of the points $p,q$, so that $\partial Q$ also touches the other point.}\label{Fig:CornerContact}
\end{center}
\end{figure}
A useful property of bisectors and Delaunay edges, in the special case where $Q$ is a regular
$k$-gon, which will be used in the next subsection, is that the breakpoints along a bisector
$b_{pq}^\diamond$ alternate between corner contacts at $p$ and corner contacts at $q$.
Indeed, assuming general position, each point $w\in{\partial} Q$ determines a unique
placement of $Q$ where it touches $p$ at $w$ and also touches $q$, as
is easily checked. A symmetric property holds when we interchange $p$
and $q$. Hence, as we slide the center of $Q$ along the bisector
$b^\diamond_{pq}$, the points of contact of ${\partial} Q$ with $p$ and $q$ vary
continuously and monotonically along ${\partial} Q$. Consider two consecutive
corner contacts, $Q'$, $Q''$, of $Q$ at $p$ along $b^\diamond_{pq}$, and
suppose to the contrary that the portion of $b^\diamond_{pq}$ between
them is a straight segment, meaning that, within this portion,
${\partial} Q$ touches each of $p$, $q$ at a fixed edge. Since the center of
$Q$ moves along the angle bisector of the lines supporting these
edges (a property that is easily seen to hold for regular $k$-gons), it is easy to see that the distance between the two contact
points of $p$, at the beginning and the end of this sliding, and
the distance between the two contact points of $q$ (measured, say,
on the boundary of the standard placement of $Q$) are equal. However,
this distance for $p$ is the length of a full edge of ${\partial} Q$, because
the motion starts and ends with $p$ touching a vertex, and therefore
the same holds for $q$, which is impossible (unless $q$ also starts
and ends at a vertex, which contradicts our general position
assumption).
Another well known property of $Q$-bisectors and Voronoi edges, for arbitrary convex polygons in general position with respect to $P$, is that
two bisectors $b^\diamond_{pq_1}$, $b^\diamond_{pq_2}$, can intersect at
most once (again, assuming general position), so every $Q$-Voronoi edge $e_{pq}^\diamond$ is connected.
Equivalently, this
asserts that there exists at most one homothetic placement of $Q$ at
which it touches $p$, $q_1$, and $q_2$. Indeed, since homothetic
placements of $Q$ behave like pseudo-disks (see, e.g., \cite{KLPS}),
the boundaries of two distinct homothetic placements of $Q$ intersect
in at most two points, or, in degenerate position, in at most two
connected segments. Clearly, in the former case the boundaries
cannot both contain $p$, $q_1$, and $q_2$, and this also holds in the
latter case because of our general position assumption.
Consider next an edge $pq$ of $\mathop{\mathrm{DT}}^\diamond(P)$. Its dual Voronoi edge
$e_{pq}^\diamond$ is a portion of the bisector $b_{pq}^\diamond$, and consists of those
center placements along $b_{pq}^\diamond$ for which the corresponding copy $Q'$
has an {\em empty interior} (i.e., its interior is disjoint from $P$).
Following the notation of Chew~\cite{Chew}, we call $pq$ a
{\em corner edge} if $e_{pq}^\diamond$ contains a breakpoint
(i.e., a placement with a corner contact); otherwise it is a
{\em non-corner edge}, and is therefore a straight segment.
\paragraph{Kinetic $Q$-diagrams.}
Consider next what happens to $\mathop{\mathrm{VD}}^\diamond(P)$ and $\mathop{\mathrm{DT}}^\diamond(P)$
as the points of $P$ move continuously with time.
In this case $\mathop{\mathrm{VD}}^\diamond(P)$ changes
continuously, but undergoes topological
changes at certain critical times, called \emph{events}. There are
two kinds of events:
\smallskip
\noindent (i) \textsc{Flip Event.}
A Voronoi edge $e_{pq}^\diamond$ shrinks to a point, disappears, and is
``flipped'' into a newly emerging Voronoi edge $e_{p'q'}^\diamond$.
\smallskip
\noindent (ii) \textsc{Corner Event.}
An endpoint of some Voronoi edge $e_{pq}^\diamond$ becomes a breakpoint (a
corner placement). Immediately after this time $e_{pq}^\diamond$ either
gains a new straight segment, or loses a segment that it had before the event.
\smallskip
Some comments are in order:
\smallskip
\noindent(a) A flip event
occurs when the four points $p,q,p',q'$ become ``cocircular'':
there is an empty homothetic copy $Q'$ of $Q$ that touches all four points.
\smallskip
\noindent(b) Only non-corner edges can participate in a flip event, as
both the vanishing edge $e_{pq}^\diamond$ and the newly emerging edge
$e_{p'q'}^\diamond$ do not have breakpoints near the event.
\smallskip
\noindent(c) A flip event entails a discrete change in the
Delaunay triangulation, whereas a corner event does not.
Still, for algorithmic purposes, we will keep track of both kinds of events.
\smallskip
We first bound the number of corner events.
\begin{lemma} \label{corners}
Let $P$ be a set of $n$ points in ${\mathbb R}^2$ under algebraic
motion of bounded degree, and let $Q$ be a convex $k$-gon.
The number of corner events in $\mathop{\mathrm{DT}}^\diamond(P)$ is $O(k^2n\lambda_r(n))$,
where $r$ is a constant that depends on the degree of motion
of $P$.
\end{lemma}
\begin{proof}
Fix a point $p$ and a vertex $w$ of $Q$, and consider all the corner
events in which $w$ touches $p$. As noted above, at any such event the
center $c$ of $Q$ lies on a ray $\gamma$ emanating from $p$ at a fixed
direction. (Since $p$ is moving, $\gamma$ is a moving ray, but its orientation remains fixed.) For each other point $q\in P\setminus\{p\}$, let $\varphi_\gamma^\diamond[p,q]$
denote the distance, at time $t$, from $p$ along $\gamma$ to the center
of a copy of $Q$ that touches $p$ (at $w$) and $q$.
The value
$\min_q \varphi_\gamma^\diamond[p,q](t)$ represents the intersection of
${\partial} \mathop{\mathrm{Vor}}^\diamond(p)$ with $\gamma$ at time $t$, where $\mathop{\mathrm{Vor}}^\diamond(p)$ is the Voronoi cell of $p$ in $\mathop{\mathrm{VD}}^\diamond(P)$. The point $q$ that attains the
minimum defines the Voronoi edge $e_{pq}^\diamond$ (or vertex if the
minimum is attained by more than one point $q$) of $\mathop{\mathrm{Vor}}^\diamond(p)$ that $\gamma$ intersects.
In other words, we have a collection of $n-1$ partially defined
functions $\varphi_\gamma^\diamond[p,q]$, and the breakpoints of their lower envelope
represent the corner events that involve the contact
of $w$ with $p$. By our assumption on the motion of $P$, each
function $\varphi_\gamma^\diamond[p,q]$ is piecewise algebraic,
with $O(k)$ pieces. Each piece encodes a continuous contact of $q$
with a specific edge of $Q'$, and has constant description complexity. Hence (see, e.g., \cite[Corollary 1.6]{SA95}) the complexity of
the envelope is at most $O(k\lambda_r(n))$, for an appropriate constant
$r$. Repeating the analysis for each point $p$ and each vertex $w$ of $Q$, the lemma
follows.
\end{proof}
Consider next flip events. As noted, each flip event involves a
placement of an empty homothetic copy $Q'$ of $Q$ that touches
simultaneously four points $p_1,p_2,p_3,p_4$ of $P$, in this
counterclockwise order along $\partial Q'$, so that the
Voronoi edge $e_{p_1p_3}^\diamond$,
which is a non-corner edge before the event, shrinks to a point
and is replaced by the non-corner edge $e_{p_2p_4}^\diamond$. Let $e_i$
denote the edge of $Q'$ that touches $p_i$, for $i=1,2,3,4$.
We fix the quadruple of edges $e_1,e_2,e_3,e_4$, bound the number
of flip events involving a quadruple contact with these edges,
and sum the bound over all $O(k^4)$ choices of four edges of $Q$.
For a fixed quadruple of edges $e_1,e_2,e_3,e_4$, we replace $Q$ by
the convex hull $Q_0$ of these edges, and note that any flip event
involving these edges is also a flip event
for $Q_0$. We therefore restrict our attention to $Q_0$, which is
a convex $k_0$-gon, for some $k_0\le 8$.
We note that if $(p,q)$ is a Delaunay edge
representing a contact of some homothetic copy $Q'_0$ of $Q_0$ where $p$ and $q$
touch two {\em adjacent} edges of $Q'_0$, then $(p,q)$ must be a corner
edge---shrinking $Q'_0$ towards the vertex common to the two edges,
while it continues to touch $p$ and $q$, will keep it empty, and
eventually reach a placement where either $p$ or $q$ touches a corner
of $Q'_0$.
The same (and actually simpler) argument applies to the case when $p$ and $q$ touch the same edge\footnote{In general position this does not occur, but it can happen at discrete time instants during the motion.} of $Q_0$.
\begin{figure}[htbp]
\begin{center}
\input{ChewProof.pstex_t}\hspace{3cm}\input{ChewProof1.pstex_t}
\caption{\small \sf Left: The edge $e_{13}^\diamond$ in the diagram $\mathop{\mathrm{VD}}^\diamond(P)$ before disappearing. The endpoint $c_{123}^\diamond$ (resp., $c_{143}^\diamond$) of $e_{13}^\diamond$ corresponds to the homothetic copy of $Q_0$ whose edges $e_1,e_2,e_3$ (resp., $e_1,e_4,e_3$) are incident to the respective vertices $p_1,p_2,p_3$ (resp., $p_1,p_4,p_3$). Right: The tree of non-corner edges.}\label{Fig:ChewProof}
\end{center}
\end{figure}
Consider the situation just before the critical event takes place,
as depicted in Figure~\ref{Fig:ChewProof} (left).
The Voronoi edge $e_{p_1p_3}^\diamond$ (to simplify the notation, we write this edge as $e_{13}^\diamond$, and similarly for the other edges and vertices in this analysis) is delimited by two Voronoi vertices,
one, $c^\diamond_{123}$, being the center of a copy of $Q_0$ which
touches $p_1,p_2,p_3$ at the respective edges $e_1,e_2,e_3$,
and the other, $c_{143}^\diamond$, being the center of a copy of
$Q_0$ which touches $p_1,p_4,p_3$ at the respective edges
$e_1,e_4,e_3$. Consider the two other Voronoi edges $e_{12}^\diamond$ and
$e_{23}^\diamond$ adjacent to $c_{123}^\diamond$, and
the two Voronoi edges $e_{14}^\diamond$ and
$e_{43}^\diamond$ adjacent to $c_{143}^\diamond$.
Among them, consider only those which are non-corner edges;
assume for simplicity that they all are.
For specificity, consider the edge $e_{12}^\diamond$. As we move
the center of $Q_0$ along that edge away from $c_{123}^\diamond$,
$Q_0$ loses the contact with $p_3$; it shrinks on the side of
$p_1p_2$ which contains $p_3$ (and $p_4$, already away from $Q_0$),
and expands on the other side. Since this is a non-corner edge,
its other endpoint is a placement where the (artificial) edge
$e_{12}$ of $Q_0$ between $e_1$ and $e_2$ touches another point
$p_{5}$. Now, however, since $e_{12}$ is adjacent to both edges
$e_1$, $e_2$, the new Voronoi edges $e^\diamond_{15}$ and
$e_{25}^\diamond$ are both corner edges.
Repeating this analysis for each of the other three Voronoi edges
adjacent to $e_{13}^\diamond$, we get a tree of non-corner Voronoi edges,
consisting of at most five edges, so that all the other Voronoi edges
adjacent to its edges are corner edges. As long as no discrete change occurs at any of the surrounding corner edges, the tree can undergo only $O(1)$ discrete changes, because all its edges are defined by a total of $O(1)$ points of $P$. When a corner edge undergoes a discrete change, this can affect only $O(1)$ adjacent non-corner trees of the above kind. Hence, the number of changes in non-corner edges is proportional to the number
of changes in corner edges, which, by Lemma \ref{corners} (applied to $Q_0$) is $O(n\lambda_r(n))$. Multiplying by the $O(k^4)$ choices of quadruples of edges of $Q$, we thus obtain:
\begin{theorem} \label{Thm:PolygonalVoronoi}
Let $P$ be a set of $n$ moving points in ${\mathbb R}^2$ under algebraic
motion of bounded degree, and let $Q$ be a convex $k$-gon.
The number of topological changes in $\mathop{\mathrm{VD}}^\diamond(P)$ with respect to $Q$
is $O(k^4n\lambda_r(n))$, where $r$ is a
constant that depends on the degree of motion of $P$.
\end{theorem}
\paragraph{Kinetic maintenance of $\mathbf{VD^\diamond(P)}$ and
$\mathbf{DT^\diamond(P)}$.}
As already mentioned, it is a fairly trivial task to maintain
$\mathop{\mathrm{DT}}^\diamond(P)$ and $\mathop{\mathrm{VD}}^\diamond(P)$ kinetically, as the points of $P$
move. All we need to do is to assert the correctness of the present
triangulation by a collection of local certificates, one for each edge
of the diagram, where the certificate of an edge asserts that the two
homothetic placements $Q^-,Q^+$ of $Q$ that circumscribe the two
respective adjacent $Q$-Delaunay triangles
$\triangle pqr^-,\triangle pqr^+$, are such that $Q^-$ does not
contain $r^+$ and $Q^+$ does not contain $r^-$. The failure time of this certificate is the first time (if one exists) at which $p,q,r^-$, and $r^+$ become $Q$-cocircular---they all lie on the boundary of a common homothetic copy of $Q$. Such an event corresponds to a flip event in $\mathop{\mathrm{DT}}^\diamond(P)$. If $pq$ is an edge of the periphery of $\mathop{\mathrm{DT}}^\diamond(P)$, so that $\triangle pqr^+$ exists but $\triangle pqr^-$ does not, then $Q^-$ is a limiting wedge bounded by rays supporting two {\it consecutive} edges of (a copy of) $Q$, one passing through $p$ and one through $q$ (see Figure \ref{Fig:ConesCertif}).
The failure time of the corresponding certificate is the first time (if any) at which $r^+$ also lies on the boundary of that wedge.
We maintain the breakpoints using ``sub-certificates", each of which asserts that $Q^-$, say, touches each of $p,q,r^-$ at respective specific edges (and similarly for $Q^+$). The failure time of this sub-certificate is the first failure time when one of $p,q$ or $r^-$ touches $Q^-$ at a vertex. In this case we have a corner event---two of the adjacent Voronoi edges terminate at a corner placement. Note that the failure time of each sub-certificate can be computed in $O(1)$ time. Moreover, for a fixed collection of valid sub-certificates, the failure time of an initial certificate (asserting non-cocircularity) can also be computed in $O(1)$ time (provided that it fails before the failures of the corresponding sub-certificates), because we know the four edges of $Q^-$ involved in the contacts.
\begin{figure}[htbp]
\begin{center}
\input{CertificateCones.pstex_t}
\caption{\small \sf If $r^-$ does not exist then $Q^-$ is a limiting wedge bounded by rays supporting two consecutive edges of (a copy of) $Q$.}
\label{Fig:ConesCertif}
\end{center}
\end{figure}
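The constant-time computations mentioned above reduce to small linear systems. As a hypothetical illustration (not the paper's implementation): a homothetic placement of $Q$ with center $c$ and scale $s$ puts edge $i$ of $Q$ on the line $\{x : n_i\cdot(x-c)=s\,d_i\}$, where $n_i$ is the outward unit normal and $d_i$ the offset of that edge in the standard placement. Three point-edge contacts give three linear equations in $(c_x,c_y,s)$, and a fourth contact holds exactly when its residual vanishes, which is the $Q$-cocircularity condition of a flip event.

```python
def _det3(M):
    # determinant of a 3x3 matrix
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def q_contact_placement(pts, normals, offsets):
    """Center (cx, cy) and scale s of the homothetic copy of Q whose edge
    lines {x : n_i.(x - c) = s*d_i} pass through the three given points;
    None if the three contacts are degenerate."""
    A = [[n[0], n[1], d] for n, d in zip(normals, offsets)]
    b = [n[0] * p[0] + n[1] * p[1] for n, p in zip(normals, pts)]
    D = _det3(A)
    if abs(D) < 1e-12:
        return None
    sol = []
    for j in range(3):                     # Cramer's rule
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        sol.append(_det3(M) / D)
    return tuple(sol)                      # (cx, cy, s)

def q_cocircular_residual(p4, n4, d4, placement):
    """Zero iff the fourth point also lies on its designated edge line of
    the placement: the Q-cocircularity condition of a flip event."""
    cx, cy, s = placement
    return n4[0] * (p4[0] - cx) + n4[1] * (p4[1] - cy) - s * d4
```

For example, with $Q$ the square $[-1,1]^2$ (normals $(\pm1,0),(0,\pm1)$, offsets $1$), the contacts $(1,0)$, $(-1,0)$, $(0,1)$ on the right, left, and top edges force $c=(0,0)$, $s=1$, and the fourth point $(0,-1)$ on the bottom edge has zero residual, so the four points are $Q$-cocircular.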
We therefore maintain an event queue that stores and updates all the
active failure times (there are only $O(n)$ of them at any given time; the bound is
independent of $k$, because they correspond to actual $\mathop{\mathrm{DT}}$ edges). When a sub-certificate fails we do not change
$\mathop{\mathrm{DT}}^\diamond(P)$, but only update the corresponding Voronoi edge, by
adding or removing a segment and a breakpoint, and by replacing the
sub-certificate by a new one; we also update the cocircularity certificate
associated with the edge, because one of the contact edges has changed.
When a cocircularity certificate fails we update $\mathop{\mathrm{DT}}^\diamond(P)$ and
construct $O(1)$ new sub-certificates and certificates. Altogether, each update of the diagram takes $O(\log n)$ time. We thus have
\begin{theorem}\label{Thm:MaintainPolygDT}
Let $P$ be a set of $n$ moving points in ${\mathbb R}^2$ under algebraic
motion of bounded degree, and let $Q$ be a convex $k$-gon.
$\mathop{\mathrm{DT}}^\diamond(P)$ and $\mathop{\mathrm{VD}}^\diamond(P)$ can be maintained using
$O(n)$ storage and $O(\log n)$ update time, so that
$O(k^4n\lambda_r(n))$ events are processed, where $r$ is a
constant that depends on the degree of motion of $P$.
\end{theorem}
\subsection{Stable Delaunay edges in $\mathbf{DT^\diamond(P)}$}
We now restrict $Q$ to be a regular $k$-gon.
Let $v_0,\ldots,v_{k-1}$ be
the vertices of $Q$ arranged in a clockwise direction, with $v_0$ the leftmost.
We call a homothetic copy of $Q$ whose vertex $v_j$ touches a point $p$, a
{\em $v_j$-placement of $Q$ at $p$}. Let
$u_j$ be the direction of the vector that connects $v_j$ with the
center of $Q$, for each $0\leq j< k$ (as in Section \ref{sec:Prelim}). See Figure \ref{Fig:Placement} (left).
We follow the machinery in the proof of Lemma~\ref{corners}. That is, for any pair $p,q\in
P$ let $\varphi^\diamond_j[p,q]$ denote the distance from $p$ to the point
$u_j[p]\cap b^\diamond_{pq}$; we put $\varphi^\diamond_j[p,q]=\infty$ if
$u_j[p]$ does not intersect $b^\diamond_{pq}$. If
$\varphi^\diamond_j[p,q]<\infty$ then the point $b^\diamond_{pq}\cap u_j[p]$ is
the center of the $v_j$-placement $Q'$ of $Q$ at $p$
that also touches $q$, and it is easy to see that there is a unique such point.
The value $\varphi^\diamond_j[p,q]$ is equal to the circumradius
of $Q'$. See Figure \ref{Fig:Placement} (middle).
\begin{figure}[htbp]
\begin{center}
\input{Regular.pstex_t}\hspace{2cm}\input{PolygonalDist.pstex_t}\hspace{2cm}\input{UndefinedPolygDist.pstex_t}
\caption{\small \sf Left: \sf $u_j$ is the direction of the vector connecting vertex $v_j$ to the center of $Q$. Middle:
The function $\varphi_j^\diamond[p,q]$ is equal to the radius of
the circle that circumscribes the $v_j$-placement of $Q$ at
$p$ that also touches $q$.
Right: The case when $\varphi^\diamond_j[p,q]=\infty$ while
$\varphi_j[p,q]<\infty$. In this case $q$ must lie in one of
the shaded wedges.
}\label{Fig:Placement}
\end{center}
\end{figure}
The \textit{neighbor} $N^\diamond_j[p]$ of $p$ in direction $u_j$ is defined to be the point $q\in
P\setminus\{p\}$ that minimizes $\varphi^\diamond_j[p,q]$. Clearly, for any
$p,q\in P$ we have $N^\diamond_j[p]=q$ if and only if there is an empty
$v_j$-placement $Q'$ of $Q$ at $p$ so that $q$ touches one of
its edges.
\smallskip
\noindent{\bf Remark:}
Note that, in the Euclidean case, we have $\varphi_j[p,q]<\infty$ if
and only if the angle between $\overline{pq}$ and $u_j[p]$ is at most
$\pi/2$. In contrast, $\varphi^\diamond_j[p,q]<\infty$ if and only if
the angle between $\overline{pq}$ and $u_j[p]$ is at most
$\pi/2-\pi/k=\pi/2-\alpha/2$. Moreover, we have $\varphi_j[p,q]\leq
\varphi^\diamond_j[p,q]$. Therefore, $\varphi^\diamond_j[p,q]<\infty$ always
implies $\varphi_j[p,q]<\infty$, but not vice versa; see Figure
\ref{Fig:Placement} (right). Note also that in both the Euclidean and the polygonal cases, the respective quantities $N_j[p]$ and $N_j^\diamond[p]$ may be undefined.
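Concretely, $\varphi^\diamond_j[p,q]$ can be computed from $O(k)$ linear constraints: the $v_j$-placement with circumradius $t$ has its center at $p+t\,u_j$ and apothem $t\cos(\pi/k)$, so $q$ lies in it if and only if $n\cdot(q-p)\le t\,(\cos(\pi/k)+n\cdot u_j)$ for every outward edge normal $n$, and $\varphi^\diamond_j[p,q]$ is the smallest feasible $t$. The following Python sketch (a naive numerical illustration, not part of the data structure) evaluates this:

```python
import math

def phi_diamond(p, q, u, k, eps=1e-9):
    """phi^diamond_j[p,q] for a regular k-gon Q circumscribed by the unit
    disk: the circumradius t of the v_j-placement of Q at p (center at
    p + t*u, where u is the unit direction from v_j to the center) whose
    boundary touches q; math.inf if no such placement exists."""
    ca = math.cos(math.pi / k)            # apothem of Q at circumradius 1
    base = math.atan2(-u[1], -u[0])       # vertex v_j lies in direction -u from the center
    d = (q[0] - p[0], q[1] - p[1])
    lower, upper = 0.0, math.inf
    for m in range(k):
        th = base + (2 * m + 1) * math.pi / k   # outward normal of the m-th edge
        n = (math.cos(th), math.sin(th))
        lhs = n[0] * d[0] + n[1] * d[1]         # need lhs <= t*(ca + n.u) for all edges
        den = ca + n[0] * u[0] + n[1] * u[1]
        if den > eps:
            lower = max(lower, lhs / den)
        elif den < -eps:
            upper = min(upper, lhs / den)
        elif lhs > eps:
            return math.inf                     # this edge can never reach q
    return lower if lower <= upper + eps else math.inf
```

For $k=8$, $p=(0,0)$, $u_j=(1,0)$ and $q=(1,0)$ this gives $t=1/2$ (the placement centered at $(1/2,0)$ has $p$ and $q$ at opposite vertices), while $q=(0,1)$, at angle $\pi/2>\pi/2-\alpha/2$ from $u_j$, gives $\infty$, matching the remark above.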
\begin{lemma}\label{Thm:LongEucPoly}
Let $p,q\in P$ be a pair of points such that $N_j[p]=q$ for
$h\geq 3$ consecutive indices, say $0\leq j\leq h-1$.
Then for each of these indices, except possibly for the first and the last one, we also have $N^\diamond_j[p]=q$.
\end{lemma}
\begin{proof}
Let $w_1$ (resp., $w_2$) be the point at which the ray $u_0[p]$ (resp., $u_{h-1}[p]$) hits the edge $e_{pq}$ in $\mathop{\mathrm{VD}}(P)$. (By assumption, both points exist.)
Let $D_1$ and $D_2$ be the disks centered at $w_1$ and $w_2$, respectively, and touching $p$ and $q$. By definition, neither of these disks contains a point of $P$ in its interior. The angle between the tangents to $D_1$ and $D_2$ at $p$ or at $q$ (these angles are equal) is $\beta=(h-1)\alpha$; see Figure \ref{Fig:ProvePolyg} (left).
\begin{figure}[htbp]
\begin{center}
\input{ProvePolyg1.pstex_t}\hspace{2cm}\input{ProvePolyg2.pstex_t}
\caption{\small \sf Left: The angle between the tangents to $D_1$ and $D_2$ at $p$ (or at $q$) is equal to
$\angle w_1pw_2= \beta=(h-1)\alpha$. Right: The line $\ell'$ crosses $D$ in a chord $qq'$ which is fully
contained in $e'$.}\label{Fig:ProvePolyg}
\end{center}
\end{figure}
Fix an arbitrary index $1\leq j\leq h-2$, so $u_j[p]$ intersects $e_{pq}$
and forms an angle of at least $\alpha$ with each of ${pw}_1,{pw}_2$.
Let $Q'$ be the $v_j$-placement of $Q$
at $p$ that touches $q$. To see that such a placement exists, we note that, by the preceding remark, it suffices to show that the angle between
$\overline{pq}$ and $u_j[p]$ is at most $\pi/2-\alpha/2$; that is, to rule out the case where $q$ lies in one of the shaded wedges in Figure \ref{Fig:Placement} (right). This case is indeed impossible, because then one of $u_{j-1}[p],u_{j+1}[p]$ would form an angle greater than $\pi/2$ with
$\overline{pq}$, contradicting the assumption that both of these rays intersect the (Euclidean) $b_{pq}$.
We claim that $Q' \subset D_1\cup D_2$.
Establishing this property for every $1\leq j\leq h-2$ will complete
the proof of the lemma.
Let $e'$ be the edge of $Q'$ passing through $q$. See Figure \ref{Fig:ProvePolyg} (right). Let $D$ be the disk
whose center lies on $u_j[p]$ and which passes through $p$ and $q$,
and let $D^+$ be the circumscribing disk of $Q'$.
Since $q\in \partial D$ and is interior to $D^+$, and since $D$ and
$D^+$ are centered on the same ray $u_j[p]$ and pass through $p$, it
follows that $D \subset D^+$.
The line $\ell'$ containing $e'$ crosses $D$ in a chord $qq'$ that
is fully contained in $e'$. The angle between the tangent to $D$ at
$q$ and the chord $qq'$ is equal to the angle at which $p$ sees $qq'$.
This is smaller than the angle at which $p$ sees $e'$, which in turn
is equal to $\alpha/2$.
Arguing as in the analysis of $D_1$ and $D_2$, the tangent to $D$ at $q$ forms an angle of at least $\alpha$ with each of the tangents to $D_1,D_2$ at $q$, and hence $e'$ forms an angle of at least $\alpha/2$ with each of these tangents; see Figure \ref{Fig:ProvePolyg2} (left).
The line $\ell'$ marks two chords $q_1q,qq_2$ within the respective disks $D_1,D_2$. We claim that $e'$ is fully contained in their union $q_1q_2$. Indeed, the angle $q_1pq$ is equal to the angle between $\ell'$ and the tangent to $D_1$ at $q$, so $\angle q_1pq\geq \alpha/2$.
On the other hand, the angle at which $p$ sees $e'$ is $\alpha/2$, which is smaller. This, and the symmetric argument involving $D_2$, are easily seen to imply the claim.
\begin{figure}[htbp]
\begin{center}
\input{ProvePolyg3.pstex_t} \hspace{2cm} \input{ProvePolyg4.pstex_t}
\caption{\small \sf Left: The line $\ell'$ forms an angle of at least $\alpha/2$ with each of the tangents to $D_1,D_2$ at $q$. Right: The edge $e'=a_1a_2$ of $Q'$ is fully contained in $D_1\cup D_2$.}\label{Fig:ProvePolyg2}
\end{center}
\end{figure}
Now consider the circumscribing disk $D^+$ of $Q'$. Denote the endpoints of $e'$ as $a_1$ and $a_2$, where $a_1$ lies in $q_1q$ and $a_2$ lies in $qq_2$.
Since the ray $\overline{pa}_1$ hits $\partial D^+$ before hitting $D_1$, and the ray $\overline{pq}$ hits these circles in the reverse order, it follows that the second intersection of $\partial D_1$ and $\partial D^+$ (other than $p$) must lie on a ray from $p$ which lies between the rays $\overline{pa}_1,\overline{pq}$ and thus crosses $e'$. See Figure \ref{Fig:ProvePolyg2} (right).
Symmetrically, the second intersection point of $\partial D_2$ and $\partial D^+$ also lies on a ray which crosses $e'$.
It follows that the arc of $\partial D^+$ delimited by these intersections and containing $p$ is fully contained in $D_1\cup D_2$.
Hence all the vertices of $Q'$ (which lie on this arc) lie in $D_1\cup D_2$. This, combined with the argument in the preceding paragraphs, is easily seen to imply that $Q'\subseteq D_1\cup D_2$, so its interior does not contain points of $P$, which in turn implies that $N_j^\diamond[p]=q$.
As noted, this completes the proof of the lemma.
\end{proof}
Since $Q$-Voronoi edges are connected, Lemma~\ref{Thm:LongEucPoly} implies that $e_{pq}^\diamond$ is ``long'', in the sense that it contains at least $h-2$ breakpoints that represent corner placements at $p$, interleaved (as promised in Section \ref{Sec:PolygonalBackground}) with at least $h-3$ corner placements at $q$.
This property is easily seen to hold also under the weaker assumptions that: (i) for the first and the last indices $j=0,h-1$, the point $N_j[p]$ either is equal to $q$ or is undefined, and (ii) for the rest of the indices $j$ we have $N_j[p]=q$ and $\varphi^\diamond_j[p,q]<\infty$ (i.e., the $v_j$-placement of $Q$ at $p$ that touches $q$ exists).
In this relaxed setting, it is now possible that either of the two points $w_1,w_2$ lies at infinity, in which case the corresponding disk $D_1$ or $D_2$ degenerates into a halfplane. This stronger version of
Lemma~\ref{Thm:LongEucPoly} is used in the proof of the converse
Lemma~\ref{Thm:LongPolygEuc}, asserting that every edge $e^\diamond_{pq}$ in $\mathop{\mathrm{VD}}^\diamond(P)$ with sufficiently many breakpoints has a stable counterpart $e_{pq}$ in $\mathop{\mathrm{VD}}(P)$.
\begin{lemma}\label{Thm:LongPolygEuc}
Let $p,q\in P$ be a pair of points such that $N_j^\diamond[p]=q$ for at least three consecutive indices $j\in \{0,\ldots, k-1\}$.
Then for each of these indices, except possibly for the first and the last one, we have $N_j[p]=q$.
\end{lemma}
\begin{proof}
Again, assume with no
loss of generality that $N_j^\diamond[p]=q$ for $0\leq j\leq h-1$, with
$h\geq 3$. Suppose to the contrary that, for some $1\leq j\leq h-2$, we have $N_j[p]\neq q$. Since $N^\diamond_j[p]=q$
by assumption, we have $\varphi_j[p,q]\leq
\varphi_j^\diamond[p,q]<\infty$, so there exists $r\in P$ for which
$\varphi_j[p,r]<\varphi_j[p,q]$. Assume with no loss of generality
that $r$ lies to the left of the line from $p$ to $q$. In this case $\varphi_{j-1}[p,r]<\varphi_{j-1}[p,q]<\infty$. Indeed, we have (i) $N_{j-1}^\diamond[p]=q$ by assumption,
so $\varphi_{j-1}^\diamond[p,q]<\infty$, and (ii) $\varphi_{j-1}[p,q]\leq
\varphi_{j-1}^\diamond[p,q]$. Moreover,
because $r$ lies to the left of the line from $p$ to $q$, the orientation of $b_{pr}$ lies counterclockwise to that of $b_{pq}$,
implying that $\varphi_{j-1}[p,r]<\infty$.
See Figure \ref{Fig:Converse}. Since $u_j[p]$ hits $b_{pr}$ before hitting $b_{pq}$, any ray emanating from $p$ counterclockwise to $u_j[p]$ must do the same, so we have $\varphi_{j-1}[p,r]<\varphi_{j-1}[p,q]$, as claimed. Similarly, we get that either
$\varphi_{j-2}[p,r]<\varphi_{j-2}[p,q]<\infty$ or
$\varphi_{j-2}[p,r]\leq \varphi_{j-2}[p,q]=\infty$ (where the latter
can occur only for $j=1$). Now applying (the extended version of)
Lemma~\ref{Thm:LongEucPoly} to the point set $\{p,q,r\}$ and to
the index set $\{j-2,j-1,j\}$, we get that
$\varphi^\diamond_{j-1}[p,r]<\varphi^\diamond_{j-1}[p,q]$. But this
contradicts the fact that $N_{j-1}^\diamond[p]=q$.
\end{proof}
\begin{figure}[htbp]
\begin{center}
\input{Converse.pstex_t}
\caption{\small \sf Proof of Lemma~\ref{Thm:LongPolygEuc}. If $N_j[p]\neq q$, then some $r$, lying to the left of the line from $p$ to $q$, satisfies $\varphi_{j}[p,r]<\varphi_{j}[p,q]$. Since $\varphi_{j-1}[p,q]\leq\varphi_{j-1}^\diamond[p,q]<\infty$, we have $\varphi_{j-1}[p,r]<\varphi_{j-1}[p,q]$.}
\label{Fig:Converse}
\end{center}
\end{figure}
\paragraph{Maintaining an SDG using $\mathbf{VD^\diamond(P)}$.}
Lemmas~\ref{Thm:LongEucPoly}
and~\ref{Thm:LongPolygEuc} together imply that an $\mathop{\mathrm{SDG}}$ can be maintained
using the fairly straightforward kinetic algorithm for maintaining
the whole $\mathop{\mathrm{VD}}^\diamond(P)$, provided by Theorem~\ref{Thm:MaintainPolygDT}.
We use
$\mathop{\mathrm{VD}}^\diamond(P)$ to maintain the graph ${\sf G}$ on $P$, whose edges are all
the pairs $(p,q)\in P\times P$ such that $p$ and $q$ define an edge
$e^\diamond_{pq}$ in $\mathop{\mathrm{VD}}^\diamond(P)$ that contains at least seven
breakpoints. As shown in Theorem \ref{Thm:MaintainPolygDT}, this can
be done with $O(n)$ storage, $O(\log n)$ update time, and
$O(k^4n\lambda_r(n))$ updates (for an appropriate $r$). We claim that ${\sf G}$ is a
$(6\alpha,\alpha)$-$\mathop{\mathrm{SDG}}$ in the Euclidean norm.
Indeed, if two points $p,q\in P$ define a $6\alpha$-long edge $e_{pq}$ in $\mathop{\mathrm{VD}}(P)$ then
this edge stabs at least six rays $u_j[p]$ emanating from $p$, and at least six rays $u_j[q]$ emanating from $q$.
Thus, according to Lemma~\ref{Thm:LongEucPoly}, $\mathop{\mathrm{VD}}^\diamond(P)$ contains the edge $e_{pq}^\diamond$ with at least four breakpoints corresponding to corner placements of $Q$ at $p$ that touch $q$, and at least four breakpoints corresponding to corner placements of $Q$ at $q$ that touch $p$. Therefore, $e_{pq}^\diamond$ contains at least $8$ breakpoints, so $(p,q)\in {\sf G}$.
For the second part, if $p,q\in P$ define an edge $e_{pq}^\diamond$ in $\mathop{\mathrm{VD}}^\diamond(P)$ with at least $7$ breakpoints then, by the interleaving property of breakpoints, we may assume, without loss of generality, that at least four of these breakpoints correspond to $P$-empty corner placements of $Q$ at $p$ that touch $q$.
Thus, Lemma~\ref{Thm:LongPolygEuc} implies that $\mathop{\mathrm{VD}}(P)$ contains the edge $e_{pq}$, and that this edge is hit by at least two consecutive rays $u_j[p]$.
But then, as observed in Lemma \ref{lem:alpha}, the edge $e_{pq}$ is $\alpha$-long in $\mathop{\mathrm{VD}}(P)$.
We thus obtain the main result of this section.
\begin{theorem}\label{Thm:MaintainSDGPolyg}
Let $P$ be a set of $n$ moving points in ${\mathbb R}^2$ under algebraic
motion of bounded degree,
and let $\alpha \ge 0$ be a parameter. A
$(6\alpha,\alpha)$-stable Delaunay graph of $P$ can be maintained by a
KDS of linear size that processes
$O(n\lambda_r(n)/\alpha^4)$ events, where $r$ is a
constant that depends on the degree of motion of $P$, and that
updates the SDG at each event in $O(\log n)$ time.
\end{theorem}
\section{An Improved Data Structure}
\label{Sec:ReduceS}
The data structure of Theorem~\ref{Thm:MaintainSDGPolyg} requires
$O(n)$ storage but the best bound we have on the number of
events it may encounter is $O^*(n^2/\alpha^4)$, which is
much larger than the number of events encountered by the data structure
of Theorem~\ref{thm:ddj} (which, in terms of the dependence on $\alpha$, is only $O^*(n^2/\alpha)$). In this
section we present an alternative data structure that requires
$O^*(n/\alpha^2)$ space and $O^*(n^2/\alpha^2)$ overall processing
time. The structure processes each event in $O^*(1/\alpha)$ time and is also {\it local}, in the sense that each point is stored at only $O^*((1/\alpha)^2)$ places in the structure.
\paragraph{Notation.}
We use the directions $u_i$ and the associated quantities $N_i[p]$ and $\varphi_i[p,q]$ defined in
Section \ref{sec:Prelim}. We assume that $k$, the number of canonical directions, is even, and write, as in Section \ref{sec:Prelim}, $k=2s$.
We denote by $C_i$ the cone (or wedge) with apex at the origin that is bounded by
$u_i$ and $u_{i+1}$. Note that $C_i$ and $C_{i\pm s}$ are antipodal.
As before, for a vector $u$, we denote by $u[x]$ the ray emanating from $x$ in direction $u$. Similarly, for a cone $C$ we denote
by $C[x]$ the translation of $C$ that places its apex at $x$.
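For intuition, the cone membership $q\in C_i[p]$ used throughout this section is a simple angular computation. The sketch below (a Python illustration of ours, outside the paper's formalism) assumes the canonical directions are $u_i=(\cos(2\pi i/k),\sin(2\pi i/k))$, with $C_i$ the half-open wedge between $u_i$ and $u_{i+1}$:

```python
import math

def cone_index(p, q, k):
    """Index i with q in C_i[p], assuming u_i = (cos(2*pi*i/k), sin(2*pi*i/k))
    and C_i is the half-open wedge between u_i and u_{i+1}."""
    alpha = 2 * math.pi / k  # angular width of each cone
    theta = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
    return int(theta // alpha) % k
```

Note that `cone_index(q, p, k)` differs from `cone_index(p, q, k)` by $s=k/2$ modulo $k$, reflecting the antipodality of $C_i$ and $C_{i\pm s}$.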
Let $0\leq \beta\leq \pi/2$ be an angle. For a direction $u \in {\mathbb S}^1$
and for two points $p,q \in P$, we say that the edge
$e_{pq}\in \mathop{\mathrm{VD}}(P)$ is \emph{$\beta$-long around the ray $u[q]$}
if $p$ is the Voronoi neighbor of $q$ in all directions in the range
$[u-\beta,u+\beta]$, i.e., for all $v\in[u-\beta,u+\beta]$, the ray
$v[q]$ intersects $e_{pq}$.
The {\em $\beta$-cone around $u[q]$} is the cone whose apex is $q$
and each of its bounding rays makes an angle of $\beta$ with $u[q]$.
\begin{figure}[htbp]
\begin{center}
\input{Jextremal1.pstex_t}\hspace{2cm}\input{StronglyJextremal1.pstex_t}
\caption{\small \sf Left: $q$ is $j$-extremal for $p$. Right: $q$ is \textit{strongly} $j$-extremal for $p$.}
\label{Fig:Jextremal}
\end{center}
\end{figure}
\paragraph{Definition ($j$-extremal points).}
(i) Let $p,q\in P$, let $i$ be the index such that $q\in C_i[p]$,
and let $u_j$ be a direction
such that
$\inprod{u_j}{x}\le 0$ for all $x\in C_i$.
We say that $q$ is \emph{$j$-extremal} for $p$ if
$q = \arg\max \{\inprod{p'}{u_j}\mid p'\in C_i[p]\cap P\setminus\{p\}\}$.
That is, $q$ is the nearest point to $p$ in this cone, in the $(-u_j)$-direction.
Clearly, a point $p$ has at most $s$ $j$-extremal points,
one for every admissible cone $C_i[p]$, for any fixed $j$. See Figure \ref{Fig:Jextremal} (left).
(ii) For $0\leq i< k$, let $C'_i$ denote the extended cone that
is the union of the seven consecutive cones $C_{i-3},\ldots,
C_{i+3}$.
Let $p,q\in P$, let $i$ be the index such that $q\in C_i[p]$,
and let $u_j$ be a direction
such that
$\inprod{u_j}{x}\le 0$ for all $x\in C'_i$ (such $u_j$'s exist if $\alpha$ is smaller than some appropriate constant).
We say that the point $q\in P$ is \textit{strongly $j$-extremal} for $p$
if $q = \arg\max \{\inprod{p'}{u_j}\mid p'\in C'_i[p]\cap P\setminus\{p\}\}$.
(iii) We say that a pair $(p,q)\in P\times P$ is {\it (strongly)} $(j,\ell)$-{\it extremal}, for some $0\leq j,\ell \leq k-1$, if $p$ is (strongly) $\ell$-extremal for
$q$ and $q$ is (strongly) $j$-extremal for $p$.
\begin{figure}[htbp]
\begin{center}
\input{necessaryCond.pstex_t}
\caption{\small \sf Illustration of the setup in Lemma~\ref{Lemma:EmptyCone}: the edge $e_{pq}$ is $\beta$-long around $v[p]$, and the ``tip" $\triangle \sigma^+q\sigma^-$ of the cone $C[q]$ is empty.}
\label{Fig:EmptyCone}
\end{center}
\end{figure}
\begin{lemma}\label{Lemma:EmptyCone}
Let $p,q\in P$, and let $v$ be a direction such that the
edge $e_{pq}$ appears in $\mathop{\mathrm{VD}}(P)$ and is $\beta$-long around the ray
$v[p]$. Let $C[q]$ be the $\beta$-cone around the ray from $q$ through
$p$. Then $\inprod{p}{v}\geq \inprod{p'}{v}$ for all
$p'\in P\cap C[q]\setminus \{q \}$.
\end{lemma}
\begin{proof}
Refer to Figure \ref{Fig:EmptyCone}. Without loss of generality, we assume that $v$ is the $(+x)$-direction
and that $q$ lies above and to the right of $p$. (In this case the slope of the bisector $b_{pq}$ is negative. Note that $q$ has to
lie to the right of $p$, for otherwise $b_{pq}$ would not cross $v[p]$.)
Let $v^+$
(resp., $v^-$) be the direction that makes a counterclockwise (resp.,
clockwise) angle of $\beta$ with $v$. Let $a^+$ (resp., $a^-$) be
the intersection of $e_{pq}$ with $v^+[p]$ (resp., with $v^-[p]$); by assumption, both points exist.
Let $h$ be the vertical line passing through $p$. Let $\sigma^+$
(resp., $\sigma^-$) be the intersection point of $h$ with the ray
emanating from $a^+$ (resp., $a^-$) in the direction opposite to
$v^-$ (resp., $v^+$); see Figure~\ref{Fig:EmptyCone}.
Note that $\angle pa^+\sigma^+=2\beta$, and that
$\|a^+\sigma^+\|=\|pa^+\|=\|qa^+\|$, i.e., $a^+$ is the circumcenter
of $\triangle p\sigma^+q$. Therefore $\angle \sigma^+ q p =
\frac12{\angle \sigma^+ a^+p} = \beta$. That is, $\sigma^+$ is the intersection of the upper ray of $C[q]$ with $h$.
Similarly, $\sigma^-$ is the intersection of the lower ray of $C[q]$ with $h$.
Moreover, if there exists a
point $x\in P$ properly inside the triangle $\triangle pq\sigma^+$
then
$\|a^+x\| < \|a^+p\|$,
contradicting the fact that $a^+$ is on $e_{pq}$. So the interior of $\triangle pq\sigma^+$ (including the relative interiors of edges $pq,\sigma^+q$) is disjoint from $P$.
Similarly, by a symmetric argument, no points of $P$ lie inside
$\triangle pq\sigma^-$ or on the relative interiors of its edges $pq,\sigma^-q$.
Hence, the portion of $C[q]$ to the right of $p$ is openly disjoint from $P$, and therefore $p$ is a rightmost point of $P$ (extreme in the $v$ direction) inside $C[q]$.\end{proof}
\begin{corollary}\label{Corol:ExtremalPair}
Let $p,q\in P$.
\noindent (i) If the edge $e_{pq}$ is $3\alpha$-long in $\mathop{\mathrm{VD}}(P)$
then there are $0\leq j,\ell< k$ for which $(p,q)$
is a $(j,\ell)$-extremal pair.
\noindent(ii) If the edge $e_{pq}$ is $9\alpha$-long in $\mathop{\mathrm{VD}}(P)$
then there are $0\leq j,\ell < k$ for which $(p,q)$
is a strongly $(j,\ell)$-extremal pair.
\end{corollary}
\begin{proof}
To prove part (i), choose $0\leq j, \ell < k$, such that $e_{pq}$ is
$\alpha$-long around each of $u_{\ell}[p]$ and $u_{j}[q]$. By Lemma \ref{Lemma:EmptyCone}, $p$ is $u_\ell$-extremal in the $\alpha$-cone $C[q]$ around the ray from $q$ through $p$. Let $i$ be the index such that $p\in C_i[q]$. Since the opening angle of $C[q]$ is $2\alpha$, it follows that $C_i[q]\subseteq C[q]$, so $p$ is $\ell$-extremal with respect to $q$, and, symmetrically, $q$ is $j$-extremal with respect to $p$. To prove part (ii) choose $0\leq
j,\ell< k$, such that $e_{pq}$ is $4\alpha$-long around each of $u_{\ell}[p]$
and $u_{j}[q]$ and apply
Lemma~\ref{Lemma:EmptyCone} as in the proof of part (i).
\end{proof}
\paragraph{The stable Delaunay graph.}
We kinetically maintain a
$(10\alpha,\alpha)$-stable Delaunay graph, whose precise definition is given below,
using a data structure based on a collection of 2-dimensional orthogonal range trees, similar to the ones used in \cite{KineticNeighbors}.
Fix $0\leq i< s$, and choose a ``sheared'' coordinate frame in
which the rays $u_i$ and $u_{i+1}$ form the $x$- and $y$-axes,
respectively. That is, in this coordinate frame, $q\in C_i[p]$ if and
only if $q$ lies in the upper-right quadrant anchored at $p$.
We define a 2-dimensional range tree $\EuScript{T}_i$ consisting of a \textit{primary} balanced binary search tree with the
points of $P$ stored at its leaves ordered by their $x$-coordinates, and of secondary trees, introduced below.
Each internal node $v$ of the primary tree of $\EuScript{T}_i$ is associated with the
\emph{canonical subset} $P_v$ of all points that are stored at the
leaves of the subtree rooted at $v$. A point $p\in P_v$ is said to be
\emph{red} (resp., \emph{blue}) \emph{in $P_v$} if it is stored at the subtree rooted at
the left (resp., right) child of $v$ in $\EuScript{T}_i$. For each primary node
$v$ we maintain a secondary balanced binary search tree $\EuScript{T}_i^v$, whose
leaves store the points of $P_v$ ordered by their $y$-coordinates.
We refer to a node $w$ in a secondary
tree $\EuScript{T}_i^v$ as a {\em secondary node $w$ of $\EuScript{T}_i$}.
Each node $w$ of a secondary tree $\EuScript{T}_i^v$ is associated with a canonical
subset $P_w\subseteq P_v$ of points stored at the leaves of the
subtree of $\EuScript{T}_i^v$ rooted at $w$. We also associate with $w$ the sets
$R_w\subset P_w$ and $B_w\subset P_w$ of points that reside in the
\emph{left} (resp., \emph{right}) \emph{subtree} of $w$ and are red
(resp., blue) in $P_v$. It is easy to verify that the
sum of the sizes of the sets $R_w$ and $B_w$
over all secondary nodes of $\EuScript{T}_i$
is $O(n\log^2n)$.
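To make this size bound concrete, the following sketch (our own illustration, not the paper's implementation) builds the two-level structure on a static point set and sums $|R_w|+|B_w|$ over all secondary nodes; the total indeed stays within $n(\log_2 n+1)^2$:

```python
import math, random

def sum_canonical_sizes(points):
    """Build the primary tree on x and, for each primary node v, a secondary
    tree on y over P_v; return the sum of |R_w| + |B_w| over all secondary
    nodes w, where R_w (resp., B_w) are the points in w's left (resp., right)
    subtree that are red (resp., blue) in P_v."""
    total = 0

    def secondary(pts_by_y, red):
        nonlocal total
        if len(pts_by_y) <= 1:
            return
        mid = len(pts_by_y) // 2
        left, right = pts_by_y[:mid], pts_by_y[mid:]
        total += sum(1 for p in left if p in red)       # |R_w|
        total += sum(1 for p in right if p not in red)  # |B_w|
        secondary(left, red)
        secondary(right, red)

    def primary(pts_by_x):
        if len(pts_by_x) <= 1:
            return
        mid = len(pts_by_x) // 2
        left, right = pts_by_x[:mid], pts_by_x[mid:]
        red = set(left)  # points stored in v's left child
        secondary(sorted(pts_by_x, key=lambda p: p[1]), red)
        primary(left)
        primary(right)

    primary(sorted(points))
    return total

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(256)]
assert sum_canonical_sizes(pts) <= 256 * (math.log2(256) + 1) ** 2
```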
For each secondary
node $w\in \EuScript{T}_i$ and each $0\leq j< k$ we maintain the points
$$
\xi^R_{i,j}(w)=\arg \max_{p\in
R_w}\inprod{p}{u_j}, \quad \xi^B_{i,j}(w)=\arg \max_{p\in B_w}\inprod{p}{u_j},
$$ provided that both $R_w$ and $B_w$ are nonempty.
See Figure \ref{Fig:JLextremalNode}. It is straightforward to show that if $(p,q)$ is a $(j,\ell)$-extremal pair, so that $q\in C_i[p]$,
then there
is a secondary node $w\in \EuScript{T}_i$ for which $q=\xi^B_{i,j}(w)$
and $p=\xi^R_{i,\ell}(w)$.
\begin{figure}[htbp]
\begin{center}
\input{JLextremal.pstex_t}
\caption{\small \sf The points $\xi_{i,\ell}^R(w),\xi_{i,j}^B(w)$ for a secondary node $w$ of $\EuScript{T}_i$.}
\label{Fig:JLextremalNode}
\end{center}
\end{figure}
For each $p\in P$ we construct a set $\EuScript{N}[p]$ containing all points $q\in
P$ for which $(p,q)$ is a $(j,\ell)$-extremal pair, for some pair of indices $0\leq j,\ell< k$. Specifically, for
each $0\le i < s$, and each secondary node $w\in \EuScript{T}_i$ such that
$p=\xi^R_{i,\ell}(w)$ for some $0 \le \ell < k$, we include in
$\EuScript{N}[p]$ all the points $q$ such that $q=\xi^B_{i,j}(w)$ for some $0 \le
j < k$. Similarly,
for each $0\le i < s$, and each secondary node
$w\in \EuScript{T}_i$ such that $p=\xi^B_{i,\ell}(w)$ for some $0 \le \ell <
k$ we include in $\EuScript{N}[p]$ all the points $q$ such that
$q=\xi^R_{i,j}(w)$ for some $0 \le j < k$.
It is easy to verify that, for each $(j,\ell)$-extremal pair $(p,q)$, for some $0\leq j,\ell<k$,
$q$ is placed in $\EuScript{N}[p]$ by the preceding process. The converse, however, does not always hold, so in general $\{p\}\times\EuScript{N}[p]$ is a superset of
the pairs that we want.
For each $0\leq i< s$, each
point $p\in P$ belongs to $O(\log^2n)$ sets $R_w$ and $B_w$, so the size of
$\EuScript{N}[p]$ is bounded by $O(s^2\log^2n)$. Indeed, $p$ may be coupled with up to $k=2s$ neighbors at each of the $O(s\log^2n)$ nodes containing it.
For each point $p\in P$ and $0\leq \ell < k$ we
maintain all points in $\EuScript{N}[p]$ in a kinetic and dynamic
tournament ${\mathcal D}_\ell[p]$ whose winner $q$ minimizes
the directional distance $\varphi_\ell[p,q]$, as given in (\ref{Eq:DirectDist}). That is,
the winner in ${\mathcal D}_\ell[p]$ is $N_\ell[p]$ in the Voronoi diagram of
$\{p\}\cup \EuScript{N}[p]$.
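For concreteness, here is a hedged sketch of the comparison underlying these tournaments. The directional distance below is the standard closed form for the distance from $p$ along $u_\ell[p]$ to the bisector $b_{pq}$, which we assume agrees with (\ref{Eq:DirectDist}); the function names are ours:

```python
import math

def phi(p, q, u):
    """Distance from p, along the unit ray u[p], to the bisector b_pq:
    solving |t*u| = |t*u - (q-p)| gives t = |q-p|^2 / (2*<q-p, u>);
    the ray misses the bisector when the denominator is <= 0."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    denom = 2 * (dx * u[0] + dy * u[1])
    return (dx * dx + dy * dy) / denom if denom > 0 else math.inf

def tournament_winner(p, candidates, u):
    """Winner of the tournament D_u[p]: the candidate minimizing phi,
    i.e., the neighbor N_u[p] in the Voronoi diagram of {p} plus the candidates."""
    return min(candidates, key=lambda q: phi(p, q, u))
```

For example, with $p$ at the origin and $u=(1,0)$, the point $(2,0)$ beats $(4,0)$, matching the intuition that the nearer point owns the ray for longer.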
We are now ready to define the stable Delaunay graph ${\sf G}$ that we maintain.
For each pair of points $p,q\in
P$ we add the edge $(p,q)$ to ${\sf G}$ if
the following hold.
\begin{itemize}
\item[(G1)] There is an index $0\leq \ell < k$ such that $q$ wins the 8
consecutive tournaments ${\mathcal D}_\ell[p],\ldots,{\mathcal D}_{\ell+7}[p]$.
\item[(G2)] The point $p$ is strongly $(\ell+3)$-extremal and strongly
$(\ell+4)$-extremal for $q$.
\end{itemize}
The $(10\alpha,\alpha)$-stability of ${\sf G}$ is implied by a combination of Theorems \ref{Thm:Completeness} and \ref{Thm:Soundness}.
\begin{theorem}\label{Thm:Completeness}
For every $10\alpha$-long edge $e_{pq}\in \mathop{\mathrm{VD}}(P)$, the graph ${\sf G}$
contains the edge $(p,q)$.
\end{theorem}
\begin{proof}
By Corollary~\ref{Corol:ExtremalPair} (i),
there are $j$ and $\ell$ such that $(p,q)$ is a $(j,\ell)$-extremal pair.
By the preceding discussion this implies that $q$ is in $\EuScript{N}[p]$.
Now since $e_{pq}$ is $10\alpha$-long
there is an $\ell'$ such that
$N_{\ell'}[p]=\cdots=N_{\ell'+7}[p]=q$ in $\mathop{\mathrm{VD}}(P)$, and therefore also
in the Voronoi diagram of $\{p\}\cup \EuScript{N}[p]$.
So it follows that $q$ indeed wins the tournaments
${\mathcal D}_{\ell'}[p],\ldots,{\mathcal D}_{\ell'+7}[p]$.
By the proof of Corollary~\ref{Corol:ExtremalPair} (ii), $p$ is
strongly $(\ell'+3)$-extremal and
strongly $(\ell'+4)$-extremal for $q$.
\end{proof}
\begin{theorem}\label{Thm:Soundness}
For every edge $(p,q)\in {\sf G}$, the edge $e_{pq}$ belongs to $\mathop{\mathrm{VD}}(P)$ and is $\alpha$-long there.
\end{theorem}
\begin{proof}
Since $(p,q)\in {\sf G}$ we know that $q$ is in $\EuScript{N}[p]$ and wins the tournaments
${\mathcal D}_\ell[p],{\mathcal D}_{\ell+1}[p],\ldots,{\mathcal D}_{\ell+7}[p]$, for some $0\leq \ell <k$ and
that the point $p$ is strongly $(\ell+3)$-extremal and strongly $(\ell+4)$-extremal for $q$.
We prove that the rays $u_{\ell+3}[p]$ and $u_{\ell+4}[p]$ stab $e_{pq}$,
from which the theorem follows.
Assume then that
one of the rays $u_{\ell+3}[p], u_{\ell+4}[p]$ does not stab
$e_{pq}$; suppose it is the ray $u_{\ell+4}[p]$. (This includes the case when $e_{pq}$ is not present at
all in $\mathop{\mathrm{VD}}(P)$.) By definition, this means that
$N_{\ell+4}[p]\neq q$. We use Lemma~\ref{Lemma:qExtremal}, given shortly below, to show that
$q$ cannot win in at least one of the tournaments among
${\mathcal D}_{\ell}[p],\ldots,{\mathcal D}_{\ell+7}[p]$ and thereby get a
contradiction.
\begin{figure}[htbp]
\begin{center}
\input{ProvingStable2.pstex_t}
\end{center}
\caption{\sf \small Proof of Theorem~\ref{Thm:Soundness}: the case when $r$
is to the right of the line from $p$ to $q$. The line $h$
orthogonal to $u_j$ through $r$
intersects the circle $D$ at a point $y$ outside
$C_i[p]$, which implies that $r'$
is to the right of the line from $p$ to $r$.
Assuming $r\neq r'$, the point $z=b_{pr}\cap
b_{pr'}$ is inside the cone bounded by $u_{\ell+4}[p]$ and
$u_{\ell+7}[p]$. Hence, $u_{\ell+7}[p]$ hits $b_{pr'}$ before
$b_{pr}$.} \label{Fig:Clockwise}
\end{figure}
According to Lemma~\ref{Lemma:qExtremal},
there exists a point $r$ such that $\varphi_{\ell+4}[p,r] < \varphi_{\ell+4}[p,q]$
and
$p$ is $(\ell+4)$-extremal for $r$.
Let $x=u_{\ell+4}[p]\cap b_{pr}$ and let
$D$ be the circle which is centered at $x$, and passes through $r$ and $p$; see Figure \ref{Fig:Clockwise}.
We consider the case where $r$ is to the right of the line
from $p$ to $q$;
the other case is treated symmetrically. In this case the
intersection of $b_{pr}$ and $b_{pq}$ is to the left of
the directed line from $p$ to $x$. Let $0\leq i\leq k-1$ be the index for
which $r\in C_i[p]$. If $i\le s-1$ then there is a secondary
node $w$ in the tree $\EuScript{T}_i$ for which $p\in R_w$ and $r\in B_w$, and
since $p$ is $(\ell+4)$-extremal for $r$, $\xi^R_{i,\ell+4}(w)$ is equal to
$p$. If $i \ge s$ then, symmetrically, we have a node $w\in \EuScript{T}_{i-s}$
such that $r\in R_w$ and $p\in B_w$ and $\xi^B_{i-s,\ell+4}(w)$ is equal to
$p$. We assume that $i\le s-1$ in the sequel; the other case is treated in a fully symmetric manner.
Let $v[r]$ be the ray from $r$ through $x$, for an appropriate direction $v\in {\mathbb S}^1$, and let $u_j$ be the direction which lies counterclockwise to $v$ and forms with it an angle of at least $\alpha$ and at most $2\alpha$.
Put
$r'=\xi^B_{i,j}(w)$, implying that $r'\in C_i[p]$ and
$\inprod{r'}{u_j}\geq \inprod{r}{u_j}$.
In particular, $r'$ belongs
to $\EuScript{N}[p]$.
If $r'$ is inside $D$ (and in particular if $r'=r$) then $q$ cannot win
the tournament ${\mathcal D}_{\ell+4}[p]$, which is the contradiction we are after.
So we may assume that $r'$ is outside $D$.
Let $h$ be the line through $r$ orthogonal to $u_j$.
Clearly, $h$ intersects $D$ at two points, $r$ and another point
$y$ (lying counterclockwise to $r$ along $\partial D$, by the choice of $u_j$). Since $\angle rpy=\frac{1}{2}\angle rxy$, and $\angle rxy$ is equal to twice the angle between $v$ and $u_j$, the angle $\angle rpy$ is at least $\alpha$,
so $y$ is outside $C_i[p]$. By assumption, $r'$ lies in the halfplane bounded by $h$ and containing $p$. Since we assume that $r'$
is not in $D$ it must be to the right of the line
from $x$ to $r$. It follows that $b_{pr'}$ intersects $b_{pr}$ at
some point $z$ to the right of the line from $p$ to $x$; see
Figure~\ref{Fig:Clockwise}.
We claim that $z$ is inside the cone with apex $p$ bounded by the rays
$u_{\ell+4}[p]$ and $u_{\ell+7}[p]$.
Indeed, suppose to the contrary that the claim is false.
It follows that in the diagram $\mathop{\mathrm{VD}}(\{r,r',p\})$ the edge
$e_{pr}$ is $\alpha$-long around $u_j[r]$. Indeed, denote the intersection point of $u_{\ell+7}[p]$ and $b_{pr}$ as $w$ (see Figure \ref{Fig:Clockwise}). Then $\angle xrw=\angle xpw= 3\alpha$. Since the angle between $v[r]$ and $u_j[r]$ is between $\alpha$ and $2\alpha$, the claim follows. Now, according to
Lemma~\ref{Lemma:EmptyCone}, $\inprod{r}{u_j}\geq\inprod{r'}{u_j}$,
which contradicts the choice of $r'$. It follows that $z$ is in
the cone bounded by $u_{\ell+4}[p]$ and $u_{\ell+7}[p]$ and thus
$u_{\ell+7}[p]$ hits $b_{pr'}$ before $b_{pr}$, and therefore also before $b_{pq}$. Hence, $q$ cannot win ${\mathcal D}_{\ell+7}[p]$, and we get the final contradiction which completes the proof of the theorem.
\end{proof}
\begin{figure}[htbp]
\begin{center}
\input{fix2.pstex_t}
\caption{\sf \small The proof of Lemma \ref{Lemma:qExtremal}: The point $p$ is strongly $\ell$-extremal for $q$ and
$\ell$-extremal for $r$.}\label{Fig:Extremal1}
\end{center}
\end{figure}
\noindent {\bf Remark:} We have not made any serious attempt to reduce the constants $c$ appearing in the definitions of various $(c\alpha,\alpha)$-$\mathop{\mathrm{SDG}}$s that we maintain. We suspect, though, that they can be significantly reduced.
To complete the proof of Theorem \ref{Thm:Soundness}, we provide the missing lemma.
\begin{lemma}\label{Lemma:qExtremal}
Let $p,q\in P$ be a pair of points and $0\leq \ell \leq k-1$ an index,
such that the point $p$ is strongly $\ell$-extremal for $q$ but
$N_\ell[p]\neq q$.
Then there exists a point $r$ such that $\varphi_\ell[p,r] < \varphi_\ell[p,q]$ and
$p$ is $\ell$-extremal
for $r$.
\end{lemma}
\begin{proof}
Let $0\leq i\leq k-1$ be the index for which $q\in C_i[p]$ and let
$h$ be the
line through $p$, orthogonal to $u_\ell$.
Assume without loss of generality that $h$ is vertical and
the ray $u_\ell[p]$ extends to the right of $h$.
\begin{figure}[htb]
\begin{center}
\input{fix3.pstex_t}\hspace{1cm}\input{fix4.pstex_t}
\caption{\sf \small Left: The circular arc $\Gamma(\theta)$ is the locus of all points
which are to the right of $p\sigma^+$ and see it at angle $\theta$. Right: To minimize $\theta$ we increase the radius of
$\Gamma(\theta)$ until one of its intersection points with $D$
coincides with $t^-$.}\label{Fig:Extremal2}
\end{center}
\end{figure}
Let $a$ be the point at which $u_\ell[p]$ intersects the bisector
$b_{pq}$, and let $D$ be the disk centered at $a$
whose boundary contains both $p$ and $q$. Since
$N_\ell[p]\neq q$, the interior of $D$ must contain some other point $r\in
P$; see Figure~\ref{Fig:Extremal1}.
Let $C[q]$ be the cone
emanating from $q$ such that each of its bounding rays makes an angle of
$\beta=3\alpha$ with
the ray from $q$ through $p$;
in particular $C[q]$ contains $p$.
Let $\sigma^+$ (resp., $\sigma^-$) denote the upper (resp., lower) endpoint of the
intersection of $C[q]$ and $h$. Since $p$ is strongly $\ell$-extremal for
$q$, the interior of the triangle $\triangle \sigma^+q\sigma^-$ does not contain any
points of $P$. Hence, $r$ must be outside
the triangle $\triangle \sigma^+q\sigma^-$. So either $r$ is above
$q\sigma^+$ (and inside $D$) or below $q\sigma^-$ (and inside $D$).
Assume, without loss of generality, that $r$ is below $q\sigma^-$, as shown in
Figure~\ref{Fig:Extremal1}. (The case where $r$ is above
$q\sigma^+$ is fully symmetric.)
Let $t^+$ and $t^-$ denote the intersection points
$q\sigma^+\cap{\partial} D$ and $q\sigma^-\cap{\partial} D$, respectively. Let $e$
be the point at which the ray
from $r$ through $t^{-}$
intersects $h$. Then
the intersection of the triangles $\triangle \sigma^+re$ and
$\triangle \sigma^+q\sigma^-$ contains no points of $P$.
Among all the points of $P$ in $D$ we choose $r$ so that its $x$-coordinate is
the smallest. For this choice of $r$, the region $\triangle \sigma^+re \setminus
\triangle \sigma^+q\sigma^-$ also contains no points of $P$ (since it is contained in $D$ and lies to the left of $r$).
In other words, $\triangle \sigma^+re$ contains no points of $P$.
Let $\gamma^+$ (resp., $\gamma^-$) denote the angle $\angle pr\sigma^+$
(resp., $\angle pr t^{-}$). It remains to show that
$\gamma^+\geq \frac{1}{3}\beta$ and $\gamma^-\geq \frac{1}{3}\beta$.
This will imply that the cone $C_{i'}[r]$ that contains $p$ is fully contained in the cone bounded by the rays from $r$ through $\sigma^+$ and $t^-$, so $p$ is extreme in the $u_\ell$-direction within $C_{i'}[r]$, which is what the lemma asserts. Since $r$ is inside $D$, it is clear that
$\gamma^-\ge \angle pqt^{-}=\beta$.
The angle
$\gamma^+$ however may be smaller than $\beta$, but, as we next show,
$\tan\gamma^+ \ge \frac13\tan \beta$.
Indeed, fix an angle $\theta$ and
let $\Gamma(\theta)$ denote the circular arc which is the locus of
all points $z$ that lie to the right of $h$ and satisfy
$\angle pz\sigma^+ = \theta$. The endpoints of $\Gamma(\theta)$ are $p$ and
$\sigma^+$, and its center $a^*$ is on the (horizontal) bisector of
$p\sigma^+$; see Figure~\ref{Fig:Extremal2} (left).
Notice that $\Gamma(\theta)$ intersects ${\partial} D$ at two points, one of which is $p$, which
are symmetric with respect to the line
through $a$ and $a^*$. As $\theta$ decreases
$a^*$ moves to the right, and the intersection of $\Gamma(\theta)$
with ${\partial} D$ rotates clockwise around ${\partial} D$.
Consider the smallest $\theta$ such that $\Gamma(\theta)$ intersects $D$ on or below $qt^{-}$. It follows that this intersection is at $t^-$.
See Figure~\ref{Fig:Extremal2} (right).
This shows that
for fixed $p$ and $q$, the position of $r$
in $D$ below the line $qt^{-}$ which minimizes $\gamma^+$ is at
$t^{-}$.
To complete the analysis, we look for the position of $q$ that minimizes $\gamma^+$ when
$r$ is at $t^{-}$.
Note that, as $q$ moves along $\partial D$, the points $t^+$ and $t^-$ do not change.
As shown in Figure~\ref{Fig:Extremal3} (left),
$\gamma^+$ decreases when
$q$ tends counterclockwise to $t^+$.
When $q$ is at $t^{+}$,
$q\sigma^+$ is tangent to $D$.
A simple calculation,
illustrated in Figure~\ref{Fig:Extremal3} (right), shows that
$\tan\gamma^+ = \frac13 \tan \beta$. By the
inequality $\tan (3x) > 3\tan x$, for $x$ sufficiently small, it follows that $\gamma^+ >
\frac{1}{3}\beta$, implying, as noted above, that the point $p$ is $\ell$-extremal
for $r$. This completes the proof of the lemma.
\end{proof}
\begin{figure}[htbp]
\begin{center}
\input{fix5.pstex_t}\hspace{2cm}\input{fix6.pstex_t}
\caption{\sf \small Left: $\gamma^+$ is minimized as $q$ tends counterclockwise to $t^+$.
Right: Proving that $\tan \gamma^+=\frac{1}{3}\tan \beta$ when $q=t^+$ and
$r=t^-$. The triangles $\triangle q\sigma^+p$ and $\triangle pqr$ are isosceles and similar, and $y=2x\cos\beta$. Thus $\tan
\gamma^+=\frac{x\sin\beta}{x\cos\beta+y}=\frac{1}{3}\tan\beta$. }\label{Fig:Extremal3}
\end{center}
\end{figure}
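The calculation in Figure \ref{Fig:Extremal3} (right) can also be checked numerically. The sketch below (ours) uses only the relations stated in the caption: the two isosceles similar triangles give $y=2x\cos\beta$, whence $\tan\gamma^+=\frac{x\sin\beta}{x\cos\beta+y}=\frac13\tan\beta$; it also checks the inequality $\tan 3t>3\tan t$ invoked at the end of the proof.

```python
import math

def tan_gamma_plus(beta, x=1.0):
    """tan(gamma+) in the extremal configuration q = t+, r = t- of
    Figure Extremal3 (right), with y = 2*x*cos(beta) from the caption."""
    y = 2 * x * math.cos(beta)
    return x * math.sin(beta) / (x * math.cos(beta) + y)

# the closed form equals (1/3)*tan(beta) for any x > 0
for beta in (0.05, 0.2, 0.5):
    assert abs(tan_gamma_plus(beta) - math.tan(beta) / 3) < 1e-12

# tan(3t) > 3*tan(t) for small t > 0, hence gamma+ > beta/3
for t in (0.01, 0.1, 0.4):
    assert math.tan(3 * t) > 3 * math.tan(t)
```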
In Section \ref{Subsec:ReducedMaintenNaive} we describe a naive algorithm for kinetic maintenance of ${\sf G}$, which encounters a total of $O^*(k^4n^2)$ events in the tournaments ${\mathcal D}_\ell[p]$. In Section \ref{Subsec:ReducedMaintenImprove} we consider a slightly more economical definition of the tournaments ${\mathcal D}_\ell[p]$, yielding a solution which processes only $O^*(k^2n^2)$ events in $O^*(k^2n^2)$ overall time.
\subsection{Naive maintenance of ${\sf G}$}\label{Subsec:ReducedMaintenNaive}
As the points of $P$ move, we need to update the $\mathop{\mathrm{SDG}}$ ${\sf G}$, which, as we recall, contains those edges $(p,q)$ such that $q$ wins $8$ consecutive tournaments ${\mathcal D}_{\ell}[p],\ldots,{\mathcal D}_{\ell+7}[p]$ of $p$, and $p$ is strongly $(\ell+3)$-extremal and strongly $(\ell+4)$-extremal for $q$. We thus need to detect and process instances at which one of these conditions changes. There are several events at which such a change can occur:\\
\indent(a) A change in the sets of neighbors $\EuScript{N}[p]$, for $p\in P$.\\
\indent(b) A change in the status of being strongly $\ell$-extremal for some pair $(p,q)$.\\
\indent(c) A change in the winner of some tournament ${\mathcal D}_\ell[p]$ (at which two existing members of $\EuScript{N}[p]$ attain the same minimum distance in the direction $u_\ell$).
Note that each of the events (a)--(b) can arise only during a swap of two points in one of the $s$ directions $u_0,\ldots ,u_{s-1}$ or in one of the directions orthogonal to these vectors.
For each $0\leq i\leq s-1$ we maintain two lists. The first list, $L_i$, stores the
points of $P$ ordered by their projections on a line in the $u_i$-direction, and the second list, $K_i$, stores the points ordered by their projections on a line orthogonal to the $u_i$-direction. We note that, as long as the order in each of the $2s$ lists $K_i,L_i$ remains unchanged, the discrete structure of the range trees $\EuScript{T}_i$, and the auxiliary items $\xi_{i,\ell}^R(w),\xi_{i,j}^B(w)$, does not change either. More precisely, the structure of $\EuScript{T}_i$ changes only when two consecutive elements in $K_i$
or in $K_{i+1}$ swap their order in the respective list; whereas the auxiliary items $\xi_{i,j}^R(w),\xi_{i,j}^B(w)$, stored at secondary nodes of $\EuScript{T}_i$, may also change when two consecutive points swap their order in the list $L_j$.
There are $O(sn^2)$ discrete events ($O(n^2)$ for each fixed $i$) where consecutive points in $K_i$ or $L_i$
swap. We call these events {\em $K_i$-swaps} and \textit{$L_i$-swaps}, respectively. Each such event happens
when the line through a pair of points becomes orthogonal or parallel to $u_i$. We
can maintain each list in linear space for a total of $O(sn)$ space
for all lists. Processing a swap takes $O(\log n)$ time to replace a
constant number of elements in the event queue (and more time to update the various structures, as discussed next).
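For points in linear motion, the time of a $K_i$- or $L_i$-swap is simply the root of a linear equation: the projections of the two points onto the relevant direction coincide. A minimal sketch (the function name and the linear-motion restriction are ours):

```python
def swap_time(p0, vp, q0, vq, u):
    """Time t at which the projections of p(t) = p0 + t*vp and
    q(t) = q0 + t*vq onto direction u coincide, i.e. the unique
    candidate K_i- or L_i-swap time; None if they never coincide."""
    dpos = (p0[0] - q0[0]) * u[0] + (p0[1] - q0[1]) * u[1]
    dvel = (vp[0] - vq[0]) * u[0] + (vp[1] - vq[1]) * u[1]
    if dvel == 0.0:
        return None  # projections keep a fixed offset: no swap
    return -dpos / dvel
```

For motion of bounded algebraic degree $r$, the same event time becomes a root of a degree-$r$ polynomial, and each pair still contributes $O(1)$ candidate events per direction, which is what yields the $O(sn^2)$ bound on the number of swaps.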
\smallskip
\noindent{\bf The range trees $\EuScript{T}_i$.}
As just noted, the structure of $\EuScript{T}_i$ changes either at a $K_i$-swap or at a
$K_{i+1}$-swap. As described in \cite[Section 4]{KineticNeighbors}, we
can update $\EuScript{T}_i$ when such a swap occurs, including the various auxiliary data that it stores, in $O(s\log^2 n)$ time.
(The factor $s$ is due to the fact that we maintain
$O(s)$ extreme points $\xi_{i,\ell}^B(w)$ and $\xi_{i,j}^R(w)$ in each
secondary node $w$ of $\EuScript{T}_i$, whereas in \cite{KineticNeighbors} only
two points are maintained.)
In a similar manner, an $L_j$-swap of two points $p,q$ may affect one of the items $\xi_{i,j}^B(w)$
and $\xi_{i,j}^R(w)$ stored at any secondary node $w$ of any $\EuScript{T}_i$, for $0\leq i\leq s-1$, such that both $p,q$ belong to $R_w$ or to $B_w$. Each $\EuScript{T}_i$ has only $O(\log^2n)$ such nodes, and the data structure of \cite{KineticNeighbors} allows us to
update $\EuScript{T}_i$, when an $L_j$-swap occurs in $O(\log^2 n)$ time. Summing up over all $0\leq i\leq s-1$, we get
that the total update time of the range trees after an $L_j$-swap is
$O(s\log^2 n)$. As follows from the analysis in~\cite[Section 4]{KineticNeighbors}, the
trees $\EuScript{T}_i$, for $0\leq i\leq s-1$, require a total of $O(s^2n\log
n)$ storage (because of the $O(s)$ items
$\xi_{i,\ell}^B(w),\xi_{i,j}^R(w)$ stored at each secondary node of each of the $s$ trees).
\smallskip
\noindent{\bf The tournaments ${\mathcal D}_\ell[p]$.}
The kinetic tournament ${\mathcal D}_\ell[p]$, for $p\in P$ and $0\leq
\ell\leq k-1$, contains the points in the set $\EuScript{N}[p]$. Since $\EuScript{N}[p]$
varies both kinetically and dynamically, the tournaments
${\mathcal D}_\ell[p]$ need to be maintained as kinetic and dynamic tournaments, in the manner reviewed in Section \ref{sec:Prelim}.
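At any fixed time, each tournament ${\mathcal D}_\ell[p]$ simply reports the element of $\EuScript{N}[p]$ extremal in the direction $u_\ell$, under insertions and deletions. The following sketch illustrates only this dynamic (non-kinetic) aspect, using a lazy-deletion heap in place of the tournament tree of Section \ref{sec:Prelim}; the class and method names are ours:

```python
import heapq

class DynamicTournament:
    """Maintains the winner (minimum key) of a changing set of entries.
    Deletions are lazy: stale heap entries are discarded when queried."""
    def __init__(self):
        self._heap = []    # (key, ident) pairs, possibly stale
        self._live = {}    # ident -> current key
    def insert(self, ident, key):
        self._live[ident] = key
        heapq.heappush(self._heap, (key, ident))
    def delete(self, ident):
        self._live.pop(ident, None)
    def winner(self):
        while self._heap:
            key, ident = self._heap[0]
            if self._live.get(ident) == key:
                return ident
            heapq.heappop(self._heap)  # stale: deleted or re-keyed
        return None
```

In the kinetic setting the keys are time-dependent, and the structure must also schedule the certificate failures (the ``tournament events'' of Theorem \ref{thm:kinetic-tour}); that machinery is omitted here.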
For $0\leq i\leq s-1$, we define $\Pi_{i}$ to be
the set of pairs of points $(p,q)$, such that there exists a
secondary node $w$ in $\EuScript{T}_i$, and indices $0\leq j,\ell\leq k-1$, for which
$p=\xi_{i,\ell}^R(w)$ and $q=\xi_{i,j}^B(w)$.
For a fixed $i$, a point $p$ belongs to $O(s\log^2n)$ pairs $(p,q)$ in
$\Pi_{i}$, for a total of $O(s^2\log^2n)$ pairs over all sets $\Pi_{i}$. It follows that the total size of all the
sets $\Pi_i$ is $O(s^2n\log^2n)$. Any secondary node of any tree
$\EuScript{T}_i$, for $0\leq i\leq s-1$, contributes at most $O(s^2)$ pairs to
the respective set $\Pi_{i}$.
The set $\EuScript{N}[p]$ consists of all the points $q$ such that there exists a set
$\Pi_{i}$ that contains the pair $(p,q)$ or the pair $(q,p)$. So the
total size of the sets $\EuScript{N}[p]$, over all points $p$, is $O(s^2 n\log^2
n)$. A set $\EuScript{N}[p]$ changes only when one of the sets $\Pi_{i}$ changes,
which can happen only as the result of a swap.
Specifically, when $\xi_{i,\ell}^R(w)$ changes for some $0\leq i\le
s-1$ and $0\le \ell \leq k-1$, from a point $p$ to a point $p'$, we
make the following updates.
(i) If $p\not= \xi_{i,\ell'}^R(w)$ for all
$\ell' \not= \ell$ then for every $0\le j \le k-1$ we delete the pair
$(p,\xi_{i,j}^B(w))$ from $\Pi_{i}$. (ii) For every $0\le j \le k-1$ we add the
pair $(p',\xi_{i,j}^B(w))$ to $\Pi_{i}$. We make analogous updates
when one of the values $\xi_{i,j}^B(w)$ changes. When a node $w$ is created, deleted,
or involved in a rotation, we update the pairs ($\xi_{i,\ell}^B(w)$,
$\xi_{i,j}^R(w)$) in $\Pi_i$ for every $\ell$ and $j$. In such a case
we say that node $w$ {\em is changed}.
A change of $\xi_{i,\ell}^R(w)$ or $\xi_{i,j}^B(w)$ in an existing
node $w$ generates $O(s)$ changes in $\Pi_{i}$ and thereby $O(s)$
changes to the sets $\EuScript{N}[p]$. Thus, it may generate $O(s^2)$ updates to the
tournaments ${\mathcal D}_\ell[p]$. A change of a secondary node may
generate $O(s^2)$ changes to the sets $\EuScript{N}[p]$ and thereby $O(s^3)$ updates
to the tournaments ${\mathcal D}_\ell[p]$.
A point $\xi_{i,\ell}^R(w)$ or $\xi_{i,\ell}^B(w)$ changes during either
a $K_i$, $K_{i+1}$, or $L_\ell$-swap. Each $L_\ell$-swap, for any $\ell$, causes $O(s\log^2n)$
points $\xi_{i,\ell}^R(w)$ or $\xi_{i,\ell}^B(w)$ to change (over the entire collection of trees), and
therefore each swap causes $O(s^3 \log^2 n)$ updates to the tournaments
${\mathcal D}_\ell[p]$. The number of nodes which
change in $\EuScript{T}_i$ by a $K_i$ or $K_{i+1}$-swap is $O(\log^2n)$. Each
such change causes $O(s^3)$ updates to the tournaments ${\mathcal D}_\ell[p]$.
Therefore the total number of updates to tournaments due to changes of
nodes is also $O(s^3\log^2n)$ per swap.
The number of swaps is $O(sn^2)$, so overall we get $O(s^4n^2\log^2n)$
updates to the tournaments. The size of each individual tournament is
$O(s^2 \log^2 n)$.
By Theorem \ref{thm:kinetic-tour} these updates generate
$$
O(s^4n^2\log^2n \cdot\beta_{r+2}(s^2\log^2 n)\log(s^2\log^2n))
=O(s^4n^2\beta_{r+2}(s\log n)\log^2n\log(s\log n))
$$
tournament
events, which are processed in
$$
O(s^4n^2\log^2 n\cdot\beta_{r+2}(s^2\log^2 n)\log^2(s^2\log^2n))
=O(s^4n^2\cdot \beta_{r+2}(s\log n)\log^2n\log^2(s\log n))
$$
time.
Processing each individual tournament event takes $O(\log^2\log
n+\log^2 s)$ time.
Since the size of each tournament is $O(s^2\log^2n)$ and there are $O(ns)$ tournaments, the
total size of all tournaments is $O( s^3 n\log^2 n)$.
\smallskip
\noindent{\bf Testing whether $p$ is strongly $\ell$-extremal for the
winner of ${\mathcal D}_\ell[p]$.}
For each $0\leq i\leq s-1$, and for each pair $(p,q)\in \Pi_{i}$ we maintain
those indices $0\le \ell\le k-1$ (if there are any) for which
$p$ is strongly $\ell$-extremal for $q$.
Recall that each point $p$ belongs to $O(s^2\log^2 n)$ pairs in the sets
$\Pi_{i}$.
We use the trees $\EuScript{T}_j$ for $i-3 \le j\le i+3$ to find, for a query
$q$, the point $\arg\max_{q'\in P\cap
$C'_{i}[q]}\inprod{q'}{u_\ell}$, for each $0\le \ell\le k-1$. The query time is $O(s\log^2n)$.
Using this information we easily determine,
for a pair $(p,q)$,
for which values of $\ell$
$p$ is strongly $\ell$-extremal for $q$.
As explained above, every swap changes
$O(s^2\log^2n)$ pairs of the sets $\Pi_{i}$.
When a new pair is added to a set $\Pi_{i}$ we query the trees
$\EuScript{T}_j$, $i-3 \le j\le i+3$, to find for which values of $\ell$, $p$ is
strongly $\ell$-extremal for $q$ (and vice versa).
This takes a total of $O(s^3\log^4n)$ time for each
swap.
Furthermore, a point $p$ can cease (or start) being strongly
$\ell$-extremal for $q$ only during a swap which involves either $p$ or $q$.
So when we process a swap
between $p$ and some other point we
recompute, for all pairs $(p,x)$ and $(x,p)$ in the current
sets $\Pi_{i}$ and for every
$0\le \ell \le k-1$,
whether $p$ is strongly $\ell$-extremal for $x$, and whether $x$ remains strongly $\ell$-extremal for $p$.
This adds an overhead of
$O(s^3\log^4n)$ time at each
swap.
The following theorem summarizes the results obtained so far in this section.
\begin{theorem}
The $\mathop{\mathrm{SDG}}$ ${\sf G}$ can be maintained using a data structure which
requires
$O\left(\left(n/\alpha^3\right) \log^2 n\right)$ space
and encounters two types of
events: swaps and tournament events.\\
There are $O(n^2/\alpha)$ swaps,
each processed in $O\left(\log^4n/\alpha^3\right)$ time.
There are
$$
O\left(\left(n^2/\alpha^4\right)\log^2n \beta_{r+2}(\log n/\alpha)\log(\log n/\alpha)\right)
$$ tournament
events which are processed in overall
$$
O\left(\left(n^2/\alpha^4\right)\log^2 n\beta_{r+2}(\log n/\alpha)\log^2(\log n/\alpha)\right)
$$
time.
Processing each individual tournament event takes $O(\log^2\log n+\log^2 (1/\alpha))$ time.
\end{theorem}
\subsection{An even faster data structure}\label{Subsec:ReducedMaintenImprove}
We next reduce the overall time and space required to maintain ${\sf G}$
roughly by factors of $s^2$ and $s$, respectively (bringing the dependence on $s$ of both bounds down to roughly $s^2$). We achieve that by
restricting each tournament ${\mathcal D}_\ell[p]$ to contain a carefully chosen
subset $\EuScript{N}_\ell[p]\subseteq \EuScript{N}[p]$ of size $O(s\log^2n)$ (recall that the size of the entire set $\EuScript{N}[p]$ is $O(s^2\log^2n)$).
The definition of $\EuScript{N}_\ell[p]$ is based on
the following lemma, whose simple proof is given in the caption of Figure \ref{Fig:Compatible}.
\begin{lemma}\label{Lemma:Compatible}
Let $p,q\in P$ and let $i$ be the index for which
$q \in C_i[p]$. Let $0\leq \ell\leq k-1$ be an index, and $v\in \mathbb{S}^1$ a direction such that
the rays $u_\ell[p]$ and $v[q]$ intersect
$b_{pq}$ at the same point.
Then $v$ lies in one of the two consecutive cones $C_{\zeta(i,\ell)},C_{\zeta(i,\ell)+1}$, where $\zeta(i,\ell)=2i+s-\ell$.
\end{lemma}
\begin{figure}[htbp]
\begin{center}
\input{Compatible.pstex_t}
\caption{\sf \small Proof of Lemma \ref{Lemma:Compatible}: We assume that $q\in C_i[p]$, and that the rays $u_\ell[p]$ and $v[q]$ hit $b_{pq}$ at the \textit{same point} $w$. Then the angle $x=\angle wpq=(i+1-\ell)\alpha-t$, for some $0\leq t\leq \alpha$. The orientation of $\overline{qp}$ is $(i+1)\alpha-t+\pi=(i+s+1)\alpha-t$. Hence, the orientation of $v$ is $(i+s+1)\alpha-t+x=(2i+s-\ell+2)\alpha-2t$. Thus, the direction $v$ lies in the union of the two consecutive cones $C_{\zeta(i,\ell)},C_{\zeta(i,\ell)+1}$, for $\zeta(i,\ell)=2i+s-\ell$.}\label{Fig:Compatible}
\end{center}
\end{figure}
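The angle computation in the caption of Figure \ref{Fig:Compatible} can be verified numerically. The sketch below adopts the conventions suggested by the caption (these are our assumptions): $\alpha=\pi/s$, the direction $u_j$ has orientation $j\alpha$, and the cone $C_i[p]$ spans orientations $[i\alpha,(i+1)\alpha)$ around $p$; the function name is ours.

```python
import math

def bisector_hit_direction(s, i, ell, t, d=1.0):
    """Place p at the origin and q in cone C_i[p], at orientation
    (i+1)*alpha - t and distance d; shoot the ray u_ell[p], intersect
    it with the bisector b_pq, and return the orientation of the
    direction v in which q sees the intersection point."""
    alpha = math.pi / s                      # assumed: pi = s*alpha
    theta_q = (i + 1) * alpha - t
    q = (d * math.cos(theta_q), d * math.sin(theta_q))
    u = (math.cos(ell * alpha), math.sin(ell * alpha))
    # a point r*u lies on b_pq iff (r*u).q = |q|^2/2 (p at the origin)
    r = (d * d / 2.0) / (u[0] * q[0] + u[1] * q[1])
    w = (r * u[0], r * u[1])
    return math.atan2(w[1] - q[1], w[0] - q[0]) % (2.0 * math.pi)

s, i, ell = 8, 2, 1
alpha = math.pi / s
t = alpha / 3.0                              # some t in (0, alpha)
theta_v = bisector_hit_direction(s, i, ell, t)
# Caption of Figure Fig:Compatible: the orientation of v is
# (2i+s-ell+2)*alpha - 2t, which lies in C_{zeta} union C_{zeta+1}
# for zeta = 2i+s-ell.
assert abs(theta_v - ((2 * i + s - ell + 2) * alpha - 2 * t)) < 1e-9
zeta = 2 * i + s - ell
assert zeta * alpha < theta_v < (zeta + 2) * alpha
```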
It follows that in Corollary~\ref{Corol:ExtremalPair},
we can require that the indices $0\leq j,\ell\leq k-1$, for which
$(p,q)$ is a (strongly) $(j,\ell)$-extremal pair, satisfy
$\zeta(i,\ell) \le j \le \zeta(i,\ell)+2$.
Indeed, we may require that the vectors $u_j[q],u_{\ell}[p]$ hit $b_{pq}$ at the respective points $x$ and $y$ for which the angle $\angle xpy=\angle xqy$ is at most $\alpha$, which, in turn, happens only if $u_j$ bounds one of the cones $C_{\zeta(i,\ell)},C_{\zeta(i,\ell)+1}$.
For all $0\le i \le s-1$ and $0\le \ell\leq k-1$ we define a set $\Pi_{i,\ell}$ which
consists of all pairs $(p,q)$ of points of $P$ such that there exists a
secondary node $w$ in $\EuScript{T}_i$, and indices $\ell$ and $\zeta(i,\ell)
\le j \le \zeta(i,\ell)+2$, such that $p=\xi_{i,\ell}^B(w)$ and
$q=\xi_{i,j}^R(w)$ or $p=\xi_{i,\ell}^R(w)$
and $q=\xi_{i,j}^B(w)$. We define the set $\EuScript{N}_\ell[p]$ to consist of
all points $q$ such that $(p,q) \in \Pi_{i,\ell}$.
For a point $p$ the set of points that participate in the
{\em reduced} tournament ${\mathcal D}_\ell[p]$ is
$\bigcup_{\ell'=\ell-3}^{\ell+3} \EuScript{N}_{\ell'}[p]$.
(Note that this rule distributes a point $q\in \EuScript{N}_{\ell}[p]$ to only seven nearby tournaments. Nevertheless, when the edge $pq$ is sufficiently long, $q$ will belong to several consecutive neighborhoods $\EuScript{N}_{\ell}[p]$, and therefore will appear in more tournaments, in particular in at least eight consecutive tournaments at which it should win, according to the definition of our $\mathop{\mathrm{SDG}}$.)
We claim that, with this redefinition of the tournaments
${\mathcal D}_\ell[p]$, Theorems \ref{Thm:Completeness} and \ref{Thm:Soundness} still hold.
To verify that Theorem \ref{Thm:Completeness} holds one has to follow its (short) proof and
notice that, by Lemma
\ref{Lemma:Compatible}, the point $q$ belongs to the eight reduced tournaments
which it is supposed to win.
We next indicate the changes required in the proof of Theorem
\ref{Thm:Soundness}. We use the same notation as in the original
proof of Theorem \ref{Thm:Soundness}, and recall that it assumes, for contradiction, that, say,
$N_{\ell+4}[p]\neq q$ even though $q$ wins the tournaments
${\mathcal D}_\ell[p],{\mathcal D}_{\ell+1}[p],\ldots,{\mathcal D}_{\ell+7}[p]$,
and the point $p$ is strongly $(\ell+3)$- and $(\ell+4)$-extremal for
$q$.
We use
Lemma~\ref{Lemma:qExtremal} to establish the existence of some point
$r\in P$ such that $\varphi_{\ell+4}[p,r]<\varphi_{\ell+4}[p,q]$ and $p$ is
$(\ell+4)$-extremal for $r$. Let $i$ be the index for which $r\in C_i[p]$, and let $w$ be the secondary node in $\EuScript{T}_i$ for which $r\in B_w$
and $p\in R_w$. Note that $p=\xi_{i,\ell+4}^R(w)$. We next choose an
index $j$ such that the point $r'=\xi_{i,j}^B(w)$ either satisfies
that
$\varphi_{\ell+7}[p,r']<\varphi_{\ell+7}[p,q]$ if $r$ is to the
right of the line from $p$ to $q$, or that
$\varphi_{\ell+1}[p,r']<\varphi_{\ell+1}[p,q]$ if $r$ is to the
left of the line from $p$ to $q$. To re-establish Theorem \ref{Thm:Soundness} it suffices to show that
$r'$ participates in the reduced tournament ${\mathcal D}_{\ell+7}[p]$ (resp., ${\mathcal D}_{\ell+1}[p]$) if $r$ is to the
right (resp., left) of the line from $p$ to $q$.
It follows from the way we defined $j$ in the original proof and from Lemma \ref{Lemma:Compatible}
that $\zeta(i,\ell+4)-2\le j \le \zeta(i,\ell+4)-1$
(if $r$ is to the
right of the line from $p$ to $q$)
or $\zeta(i,\ell+4)+1\le j \le \zeta(i,\ell+4)+2$ (if $r$ is to the
left of the line from $p$ to $q$).
So $r' \in \EuScript{N}_{\ell+4}[p]$ and therefore $r'$ does participate in the reduced tournament ${\mathcal D}_{\ell+1}[p]$ or ${\mathcal D}_{\ell+7}[p]$.
Indeed, the direction $v$ used in that proof lies in one of the cones $C_{\zeta(i,\ell+4)}, C_{\zeta(i,\ell+4)+1}$. The direction $u_j$ then forms an angle between $\alpha$ and $2\alpha$ with $v$, which lies counterclockwise from $v$ if $r$ lies to the right of the line from $p$ to $q$, or clockwise from $v$ in the other case. This is easily seen to imply the two corresponding constraints on $j$; see Figure \ref{Fig:Clockwise}.
We change our algorithm accordingly to maintain only the reduced tournaments.
Now every secondary node $w$ of any range tree $\EuScript{T}_i$ contributes only seven pairs to each set
$\Pi_{i,\ell}$, for $0\leq \ell\leq k-1$, so the size of each such set is $O(n\log n)$.
Since there
are $O(s^2)$ sets $\Pi_{i,\ell}$, their total size is $O(s^2 n\log
n)$. Each pair in each $\Pi_{i,\ell}$ contributes an item to a constant
number of tournaments, so the total size of the tournaments is
$O(s^2 n \log n)$. Each individual tournament ${\mathcal D}_{\ell}[p]$ is now
of size $O(s\log^2n)$, because $p$ belongs to $O(\log^2 n)$ pairs in each
set $\Pi_{i,\ell'}$ for $0\le i\le s-1$, $0\leq \ell'\leq k-1$, and ${\mathcal D}_\ell[p]$ inherits only those points $q$ that come from pairs $(p,q)\in \Pi_{i,\ell'}$, for $0\leq i\leq s-1$ and $\ell-3\leq \ell'\leq \ell+3$.
When $\xi_{i,\ell}^B(w)$
changes from $p$ to $p'$ for some $0\le i\le s-1$ and $0\le \ell \le
k-1$, at most a constant number of pairs
$(p,\xi_{i,j}^R(w))$ for $\zeta(i,\ell)\leq j\leq \zeta(i,\ell)+2$
are deleted from $\Pi_{i,\ell}$, and a constant number of pairs
$(p',\xi_{i,j}^R(w))$ for $\zeta(i,\ell)\leq j\leq \zeta(i,\ell)+2$
are added to $\Pi_{i,\ell}$.
Similar changes take place in $\Pi_{i,j}$ for those three indices $j$ satisfying
$\zeta(i,j)\leq \ell \leq \zeta(i,j)+2$.
When $\xi_{i,j}^R(w)$
changes from $q$ to $q'$ for
some $0\leq i \le s-1$ and $0\le j \leq k-1$,
at most a constant number of pairs $(\xi_{i,\ell}^B(w),q)$ are deleted
from $\Pi_{i,j}$ for the indices $\ell$ satisfying $\zeta(i,j)\leq \ell \leq
\zeta(i,j)+2$, and a constant number of pairs
$(\xi_{i,\ell}^B(w),q')$ are added for the same values of
$\ell$. Similarly,
at most a constant number of pairs $(\xi_{i,\ell}^B(w),q)$ are deleted
from $\Pi_{i,\ell}$ for the indices $\ell$ satisfying $\zeta(i,\ell)\leq j\leq
\zeta(i,\ell)+2$, and a constant number of pairs
$(\xi_{i,\ell}^B(w),q')$ are added for the same values of
$\ell$.
A change
of a secondary node $w$ in the tree $\EuScript{T}_i$ causes
$O(s)$ pairs in the sets $\Pi_{i,\ell}$
to change.
Any $K_i$-swap changes
$O(\log^2 n)$ nodes in
$\EuScript{T}_i$ and thereby causes $O(s\log^2 n)$ pairs in the sets $\Pi_{i,\ell}$
to change.
Any $L_j$-swap changes $O(s\log^2 n)$ extremal points $\xi_{i,j}^R(w)$,
$\xi_{i,j}^B(w)$ at secondary nodes $w$ of the trees $\EuScript{T}_i$, and thereby causes
$O(s\log^2 n)$ pairs in the sets $\Pi_{i,\ell}$
to change. Since each pair in $\Pi_{i,\ell}$ contributes an item to a constant
number of tournaments it follows that
$O(s\log^2n)$ points are inserted to and deleted from
the tournaments ${\mathcal D}_\ell[p]$ at each swap.
By Theorem \ref{thm:kinetic-tour},
the size of each tournament is proportional to the number
of elements that it contains, i.e., $O(s\log^2 n)$, so the total size of all tournaments
is $O(s^2n\log n)$. In total we get that there are
$O(s^2n^2 \log^2 n)$ updates to tournaments during swaps.
These updates generate
$$
O(s^2n^2\log^2n\beta_{r+2}(s\log n)\log(s\log n))$$
tournament events
that are processed in overall
$$
O(s^2n^2\log^2n\beta_{r+2}(s\log n)\log^2(s\log n))
$$
time.
Each individual tournament event
is processed in $O(\log^2\log n + \log^2 s)$ time and each swap
can be processed in $O(s\log^2n\log^2(s\log n))$
time.
\smallskip
In addition, for each pair $(p,q)\in \Pi_{i,\ell}$
we record whether $p$ is strongly
$\ell$-extremal for $q$.
We maintain this information using the
trees $\EuScript{T}_j$, for $i-3\leq j\leq i+3$, as described above, which allow
for any $p,q\in P$ and $0\leq \ell\leq k-1$ to test, in $O(\log^2n)$
time, if $p$ is strongly $\ell$-extremal for $q$. At each swap event
we spend $O(s\log^4n)$ extra time to compute for
$O(s\log^2n)$ pairs $(p,q)$ which are added to the sets $\Pi_{i,\ell}$
whether $p$ is strongly
$\ell$-extremal for $q$.
Consider a pair $(p,q) \in \Pi_{i,\ell}$. The point $p$ may stop being
strongly
$\ell$-extremal for $q$ only during a swap which involves $p$
or $q$. So, as before, at each swap we find the $O(s\log^2n)$ pairs
containing one of the points involved in the swap, and recompute, in $O(s\log^4n)$
total time,
for each such pair $(p,q)$, whether
the strong extremal relation holds.
We thus obtain the following summary result.
\begin{theorem}\label{Thm:ReducedS}
Let $P$ be a set of $n$ moving points in ${\mathbb R}^2$ under algebraic
motion of bounded degree,
and let $\alpha > 0$ be a sufficiently small parameter. A $(10\alpha,\alpha)$-SDG of $P$
can be maintained using a data structure that requires
$O((n/\alpha^2) \log n)$ space and encounters two types of
events: swap events
and tournament events. There are $O(n^2/\alpha)$ swap events,
each processed in $O(\log^4(n)/\alpha)$ time.
There are
$$O((n/\alpha)^2 \beta_{r+2}(\log (n)/\alpha)\log^2n\log(\log (n)/\alpha))$$
tournament
events, which are handled in a total of
$$O((n/\alpha)^2 \beta_{r+2}(\log (n)/\alpha)\log^2n\log^2(\log (n)/\alpha))$$
processing time. The worst-case processing time of a
tournament event is $O(\log^2(\log (n)/\alpha))$. The data structure is also {\it local}, in the sense that each point
is stored, at any given time, at only $O(\log^2n/\alpha^2)$ places in the structure.
\end{theorem}
Concerning locality, we note that a point participates in $O(s)$ projection tournaments at each of $O(s\log^2n)$ tree nodes. If it wins in at least one of the projection tournaments at a node, it is fed to $O(s)$
directional tournaments. So it appears in $O(s^2\log^2 n)$ places.
\noindent {\bf Remarks:} (1) Comparing this algorithm with the space-inefficient one of Section~\ref{sec:Prelim}, we note that they both use the
same kind of tournaments, but here far fewer pairs of points
($O^*(n/\alpha^2)$ instead of $O(n^2/\alpha)$) participate in the
tournaments. The price we have to pay is that the test for an edge $pq$
to be stable is more involved. Moreover, keeping track of the subset of
pairs that participate in the tournaments requires additional work,
which is facilitated by the range trees $\EuScript{T}_i$.
\medskip\noindent
(2) To be fair, we note that our $O^*(\cdot)$ notation hides polylogarithmic factors in $n$. Hence, comparing the analysis in this section with Theorem \ref{Thm:MaintainSDGPolyg}, we gain when $n$ is smaller than some threshold, which is exponential in $1/\alpha$.
\section{Properties of SDG}\label{Sec:SDGProperties}
We conclude the paper by establishing some of the properties of stable Delaunay graphs.
\paragraph{Near cocircularities do not show up in an SDG.}
Consider a critical event during the kinetic maintenance of the full
Delaunay triangulation, in which four points $a,b,c,d$ become cocircular,
in this order, along their circumcircle, with this circle being empty.
Just before the critical event, the Delaunay triangulation involved
two triangles, say, $abc$, $acd$. The Voronoi edge $e_{ac}$ shrinks
to a point (namely, to the circumcenter of $abcd$ at the critical event),
and, after the critical cocircularity, is replaced by the Voronoi edge
$e_{bd}$, which expands from the circumcenter as time progresses.
Our algorithm will detect the possibility of such an event before the criticality occurs,
when $e_{ac}$ becomes $\alpha$-short (or even before this happens). It will then remove this edge from the stable subgraph,
so the actual cocircularity will not be recorded. The new edge $e_{bd}$
will then be detected by the algorithm only when it becomes sufficiently long
(if this happens at all), and will then enter the stable Delaunay graph. In short,
critical cocircularities do not arise {\em at all} in our scheme.
As noted in the introduction, a Delaunay edge $ab$ (interior to the hull) is just about to become $\alpha$-short or $\alpha$-long when the sum of the opposite angles in its two adjacent Delaunay triangles is $\pi-\alpha$ (see Figure \ref{Fig:LongDelaunay}). This shows that changes in the stable Delaunay graph occur when the
``cocircularity defect'' of a nearly cocircular quadruple (i.e., the difference between $\pi$ and the sum of opposite angles in the quadrilateral spanned by the quadruple) is between
$\alpha$ and $c\alpha$, where $c$ is the constant used in our definitions in Section \ref{sec:ViaPolygonal} or Section \ref{Sec:ReduceS}.
Note that a degenerate case of cocircularity is a collinearity on the convex
hull.
Such collinearities also do not show up in the stable
Delaunay graph.\footnote{Even if they did show up, no real damage would be done, because the number of such collinearities is only $O^*(n^2)$; see, e.g., \cite{SA95}.} A hull collinearity between three nodes $a, b, c$ is
detected before it happens, when (or before) the corresponding Voronoi edge
becomes $\alpha$-short, in which case the angle $\angle acb$, where $c$ is the middle point of the (near-)collinearity, becomes $\pi-\alpha$ (see Figure \ref{collinearity}).
Therefore a hull edge is removed from the $\mathop{\mathrm{SDG}}$ when its adjacent Delaunay triangle becomes
nearly collinear. The edge (or any new edge about to replace it) re-appears in the $\mathop{\mathrm{SDG}}$ when its corresponding
Voronoi edge is long enough, as before.
\begin{figure}
\begin{center}
\input{collinearity.pstex_t}
\caption{\small \sf The near collinearity that corresponds to a Voronoi edge
becoming $\alpha$-short.} \label{collinearity}
\end{center}
\end{figure}
\paragraph{SDGs are not too sparse.}
Consider the Voronoi cell $\mathop{\mathrm{Vor}}(p)$ of a point $p$, and suppose that $p$ has only one $\alpha$-long edge $e_{pq}$. Since the angle at which $p$ sees $e_{pq}$ is at most $\pi$, the sum of the angles at which $p$ sees the other edges is at least $\pi$, so $\mathop{\mathrm{Vor}}(p)$ has at least $\pi/\alpha$ $\alpha$-short edges. Let $m_1$ denote the number of points $p$ with this property. Then the sum of their degrees in $\mathop{\mathrm{DT}}(P)$ is at least $m_1(\pi/\alpha+1)$. Similarly, if $m_0$ points do not have any $\alpha$-long Voronoi edge, then the sum of their degrees is at least $2\pi m_0/\alpha$. Any other point has at least two $\alpha$-long Voronoi edges, and its degree is at least 3 if it is an interior point, or at least 2 otherwise. So the number of $\alpha$-long
edges is at least (recall that each $\alpha$-long edge is counted twice)
\begin{equation}
\label{Eq:long-edge}
n-m_1-m_0+m_1/2=n-(m_1+2m_0)/2 .
\end{equation}
Let $h$ denote the number of hull vertices. Since the sum of the degrees is $6n-2h-6$, we get
$$3(n-h-m_1-m_0)+2h+m_1\left(\frac{\pi}{\alpha}+1\right)+2m_0\frac{\pi}{\alpha}
\leq 6n-2h-6,$$
implying that
$$m_1+2m_0\leq \frac{3n}{\pi/\alpha-2}.$$
Plugging this inequality in (\ref{Eq:long-edge}), we conclude that the number of $\alpha$-long edges is at least
$$ n\left[1-\frac{3}{2(\pi/\alpha-2)}\right].$$
As $\alpha$ decreases, this lower bound on the number of edges in the SDG approaches $n$.
This is nearly tight, since there exist $n$-point sets for which the number of stable edges is only roughly $n$, see Figure \ref{Fig:ShiftedGrid}.
\begin{figure}
\begin{center}
\input{Grid.pstex_t}
\caption{\small \sf If the points of $P$ lie on a sufficiently spaced shifted grid then the number of $\alpha$-long edges in $\mathop{\mathrm{VD}}(P)$ (the vertical ones) is close to $n$.} \label{Fig:ShiftedGrid}
\end{center}
\end{figure}
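To get a feel for the bound, the following sketch (ours; the function name is hypothetical) evaluates the lower bound $n\left[1-\frac{3}{2(\pi/\alpha-2)}\right]$ on the number of $\alpha$-long edges:

```python
import math

def sdg_edge_lower_bound(n, alpha):
    """Lower bound n*[1 - 3/(2*(pi/alpha - 2))] on the number of
    alpha-long (stable) Delaunay edges of an n-point set."""
    return n * (1.0 - 3.0 / (2.0 * (math.pi / alpha - 2.0)))

# The bound improves as alpha decreases, and approaches n:
assert (sdg_edge_lower_bound(1000, 0.01)
        > sdg_edge_lower_bound(1000, 0.1)
        > sdg_edge_lower_bound(1000, 0.3))
assert sdg_edge_lower_bound(1000, 0.001) > 999.0
```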
\paragraph{Closest pairs, crusts, $\beta$-skeleta, and the SDG.}
Let $\beta\geq 1$, and let $P$ be a set of $n$ points in the plane.
The \textit{$\beta$-skeleton} of $P$ is a graph on $P$ that
consists of all the edges $pq$ such that the union of the two disks of
radius $(\beta/2)d(p,q)$, touching $p$ and $q$, does not contain any
point of $P\setminus\{p,q\}$. See, e.g., \cite{Crusts,Skeletons} for
properties of the $\beta$-skeleton, and for its applications in surface reconstruction.
We show that the edges of the $\beta$-skeleton are $\alpha$-stable
in $\mathop{\mathrm{DT}}(P)$, provided that $\beta\geq 1+\Omega(\alpha^2)$; a straightforward proof of this fact is sketched in Figure \ref{Fig:Skeleton}.
\begin{figure}[htbp]
\begin{center}
\input{Skeleton.pstex_t}
\caption {\small \sf An edge $pq$ of the $\beta$-skeleton of $P$ (for $\beta>1$). $c_1$ and $c_2$ are centers of the two $P$-empty disks of radius $(\beta/2)d(p,q)$ touching $p$ and $q$. Clearly, each of $p,q$ sees the Voronoi edge $e_{pq}$ at an angle at least $2\theta=\angle c_1pq+\angle c_2pq$ (so it is $2\theta$-stable). We have $1/\beta=\cos \theta\approx 1-\theta^2/2$ or $\beta=1+\Theta(\theta^2)$. That is, for $\beta\geq 1+\Omega(\alpha^2)$ every edge of the $\beta$-skeleton is $\alpha$-stable.}\label{Fig:Skeleton}
\end{center}
\end{figure}
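The relation in the caption of Figure \ref{Fig:Skeleton} between $\beta$ and the guaranteed stability angle is easy to evaluate numerically; the sketch below (function name ours) computes the angle $2\theta$, with $\cos\theta=1/\beta$, at which each endpoint sees $e_{pq}$:

```python
import math

def skeleton_stability_angle(beta):
    """Each endpoint of a beta-skeleton edge (beta > 1) sees its Voronoi
    edge e_pq at angle >= 2*theta, where cos(theta) = 1/beta."""
    return 2.0 * math.acos(1.0 / beta)

theta = 0.05
beta = 1.0 / math.cos(theta)
# beta = 1/cos(theta) ~ 1 + theta^2/2, i.e. beta = 1 + Theta(theta^2):
assert abs(beta - (1.0 + theta * theta / 2.0)) < 1e-5
# and such an edge is then guaranteed to be (2*theta)-stable:
assert abs(skeleton_stability_angle(beta) - 2.0 * theta) < 1e-9
```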
A similar argument shows that the stable Delaunay graph contains the
closest pair in $P(t)$ as well as the crust of a set of points sampled
sufficiently densely along a 1-dimensional curve (see \cite{Amenta,Crusts} for the definition of crusts and their applications in surface
reconstruction).
We only sketch the argument for closest pairs: If $(p,q)$ is a closest pair then $pq\in \mathop{\mathrm{DT}}(P)$, and the two adjacent Delaunay triangles $\triangle pqr^+,\triangle pqr^-$ are such that their angles of $r^+,r^-$ are at most $\pi/3$ each, so $e_{pq}$ is $(\pi/3)$-long, ensuring that $pq$ belongs to any stable subgraph for $\alpha$ sufficiently small; see \cite{KineticNeighbors} for more details.
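The key fact behind the closest-pair argument is elementary: if $(p,q)$ is the closest pair, then $pq$ is the shortest side of every triangle $pqr$ with $r\in P$, so the angle at $r$ is at most $\pi/3$. A brute-force numeric check (ours, with hypothetical helper names):

```python
import math
import random

def angle_at(r, p, q):
    """Angle at vertex r in the triangle with vertices p, r, q."""
    ax, ay = p[0] - r[0], p[1] - r[1]
    bx, by = q[0] - r[0], q[1] - r[1]
    c = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, c)))

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(40)]
# brute-force closest pair
p, q = min(((a, b) for k, a in enumerate(pts) for b in pts[k + 1:]),
           key=lambda e: math.dist(e[0], e[1]))
# pq is the shortest side of every triangle pqr, so the angle at r <= pi/3
assert all(angle_at(r, p, q) <= math.pi / 3 + 1e-12
           for r in pts if r != p and r != q)
```

In particular, the apices $r^+,r^-$ of the two Delaunay triangles adjacent to $pq$ have angles at most $\pi/3$, which is exactly the $(\pi/3)$-longness of $e_{pq}$ used above.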
We omit the proof for crusts, which is fairly straightforward.
\begin{figure}
\begin{center}
\input{norng.pstex_t}
\caption{\small \sf $ab$ is an edge of the relative neighborhood graph but not of
$\mathop{\mathrm{SDG}}$.}
\label{norng1}
\end{center}
\end{figure}
In contrast, stable Delaunay graphs need not contain all the
edges of several other important subgraphs of the Delaunay
triangulation, including the Euclidean minimum spanning tree, the
Gabriel graph, the relative neighborhood graph, and the
all-nearest-neighbors graph. An illustration for the relative neighborhood graph is given in Figure \ref{norng1}. As a matter of fact, the stable
Delaunay graph need not even be connected, as is illustrated in
Figure~\ref{norng2}.
\begin{figure}
\begin{center}
\input{wheel.pstex_t}
\caption{\small \sf A wheel-like configuration that disconnects $p$ in the
stable Delaunay graph. The Voronoi diagram is drawn with dashed
lines, the stable Delaunay edges are drawn as solid, and the remaining
Delaunay edges as dotted edges. The points of the ``wheel" need not be cocircular.}
\label{norng2}
\end{center}
\end{figure}
\paragraph{Completing SDG into a triangulation.}
As argued above, the Delaunay edges that are missing in the stable
subgraph correspond to nearly cocircular quadruples of points, or
to nearly collinear triples of points near the boundary of the convex
hull. Arguably, these missing edges carry little information, because
they may ``flicker" in and out of the Delaunay triangulation even when the points
move just slightly (so that all angles determined by the triples of points change only slightly). Nevertheless, in many applications it is desirable
(or essential) to complete the stable subgraph into {\em some} triangulation,
preferably one that is also stable in the combinatorial sense---it undergoes
only nearly quadratically many topological changes.
By the analysis in Section \ref{Sec:polygProp} we can achieve part of this goal by maintaining the full Delaunay triangulation $\mathop{\mathrm{DT}}^\diamond(P)$ under the polygonal norm induced by the regular $k$-gon $Q_k$. This diagram experiences only a nearly quadratic number of topological changes, is easy to maintain, and contains all the stable Euclidean Delaunay edges, for an appropriate choice of $k\approx 1/\alpha$. Moreover, the union of its triangles is simply connected --- it has no holes. Unfortunately, in general it is not a triangulation of the entire convex hull of $P$, as illustrated in Figure \ref{Fig:AlmostTriangulation}.
\begin{figure}
\begin{center}
\input{AlmostTriangulation.pstex_t}
\caption{\small \sf The triangulation $\mathop{\mathrm{DT}}^\diamond(P)$ of an 8-point set $P$.
The points $a,b,c,d$, which do not lie on the convex hull of $P$, still lie on the boundary of the union of the triangles of $\mathop{\mathrm{DT}}^\diamond(P)$
because, for each of these points we can place an arbitrary large homothetic interior-empty copy of $Q$ which touches that point.}
\label{Fig:AlmostTriangulation}
\end{center}
\end{figure}
For the time being, we leave it as an open problem to come up with a
simple and ``stable" scheme for filling the gaps between the triangles
of $\mathop{\mathrm{DT}}^\diamond(P)$ and the edges of the convex hull.
It might be possible to extend the kinetic triangulation scheme
developed in \cite{KRS}, so as to kinetically maintain a triangulation of the
``fringes" between $\mathop{\mathrm{DT}}^\diamond(P)$ and the convex hull of $P$, which is simple to define, easy to maintain, and undergoes only nearly quadratically many topological changes.
Of course, if we only want to maintain a triangulation of $P$ that experiences only nearly quadratically many topological changes, then we can use the scheme in \cite{KRS}, or the earlier, somewhat more involved scheme in \cite{AWY}. However, if we want to keep the triangulation ``as Delaunay as possible", we should include in it the stable portion of $\mathop{\mathrm{DT}}$, and then its efficient completion, as mentioned above, becomes an issue that is not yet resolved.
\paragraph{Nearly Euclidean norms and some of their properties.}
One way of interpreting the results of Section 3 is that the stability of Delaunay edges is preserved, in an appropriately defined sense, if we replace the Euclidean norm by the polygonal norm induced by the regular $k$-gon $Q_k$ (for $k\approx 1/\alpha$). That is, stable edges in one Delaunay triangulation are also edges of the other triangulation, and are stable there too. Here we note that there is nothing special about $Q_k$: The same property holds if we replace the Euclidean norm by any sufficiently close norm (or convex distance function \cite{CD}).
Specifically, let $Q$ be a closed convex set in the plane that is contained in the
unit disk $D_0$ and contains the disk $D'_0 = (\cos\alpha) D_0$ that
is concentric with $D_0$ and scaled by the factor $\cos\alpha$.
This
is equivalent to requiring that the Hausdorff distance $H(Q,D_0)$
between $Q$ and $D_0$ be at most $1-\cos\alpha\approx \alpha^2/2$.
We define the center of $Q$ to coincide with the common center of
$D_0$ and $D'_0$.
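As a quick numerical aside (our own sanity check, not part of the argument): for the regular $k$-gon $Q_k$ inscribed in $D_0$, whose inradius is $\cos(\pi/k)$, the containment $D'_0\subseteq Q_k\subseteq D_0$ holds as soon as $k\ge \pi/\alpha$, consistent (up to a constant) with the choice $k\approx 1/\alpha$; the sketch also checks the small-angle approximation $1-\cos\alpha\approx\alpha^2/2$ used above.

```python
import math

def kgon_contains_inner_disk(k, alpha):
    """The regular k-gon inscribed in the unit disk D_0 has inradius
    cos(pi/k); it contains D'_0 = cos(alpha)*D_0 iff cos(pi/k) >= cos(alpha)."""
    return math.cos(math.pi / k) >= math.cos(alpha)

alpha = 0.01
k = math.ceil(math.pi / alpha)            # k >= pi/alpha suffices
assert kgon_contains_inner_disk(k, alpha)
assert not kgon_contains_inner_disk(k // 2, alpha)  # too few vertices fails

# Small-angle approximation used in the text: 1 - cos(alpha) ~ alpha^2 / 2
assert abs((1 - math.cos(alpha)) - alpha ** 2 / 2) < alpha ** 4
```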
$Q$ induces a convex distance function $d_Q$, defined by $d_Q(x,y)=\min \{\lambda\mid y\in x+\lambda Q\}$. Consider the Voronoi diagram $\mathop{\mathrm{Vor}}^Q(P)$
of $P$ induced by $d_Q$, and the corresponding Delaunay triangulation $\mathop{\mathrm{DT}}^Q(P)$. We omit here the detailed analysis of the structure of these diagrams, which is similar to that for the norm induced by $Q_k$, as presented in Section \ref{Sec:polygProp}. See also \cite{Chew,CD} for more details. Call an edge $e_{pq}$ of $\mathop{\mathrm{Vor}}^Q(P)$ $\alpha$-stable if the following property holds: Let $u$ and $v$ be the endpoints of $e_{pq}$, and let $Q_u,Q_v$ be the two homothetic copies of $Q$ that are centered at $u,v$, respectively, and touch $p$ and $q$. Then we require that the angle between the
supporting lines at $p$
(for simplicity, assume that $Q$ is smooth, and so has a unique supporting line at $p$ (and at $q$); otherwise, the condition should hold for any pair of supporting lines at $p$ or at $q$)
to $Q_u$ and $Q_v$ is at least $\alpha$, and that the same holds at $q$.
In this case we refer to the edge $pq$ of $\mathop{\mathrm{DT}}^Q(P)$ as $\alpha$-stable.
Note that $Q_k$-stability was (implicitly) defined in a different manner in Section \ref{Sec:polygProp}, based on the number of breakpoints of the corresponding Voronoi edges. Nevertheless, it is easy to verify that the two definitions are essentially identical.
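To make the definition of $d_Q$ concrete, the following small sketch (our own illustration, not code from the paper) evaluates the convex distance function when $Q$ is represented as an intersection of halfplanes $\{z : a_i\cdot z\le b_i\}$ with its center at the origin (so each $b_i>0$); in that representation $d_Q(x,y)=\max_i a_i\cdot(y-x)/b_i$.

```python
def convex_distance(halfplanes, x, y):
    """d_Q(x, y) = min{lam >= 0 : y in x + lam*Q}, where
    Q = {z : a.z <= b for (a, b) in halfplanes} has 0 in its interior
    (so every b > 0).  Then d_Q(x, y) = max_i a_i.(y-x) / b_i,
    clamped at 0 for the degenerate case y = x."""
    dx, dy = y[0] - x[0], y[1] - x[1]
    return max(0.0, max((a0 * dx + a1 * dy) / b for (a0, a1), b in halfplanes))

# Q = unit square [-1,1]^2, for which d_Q is the L-infinity distance
square = [((1, 0), 1), ((-1, 0), 1), ((0, 1), 1), ((0, -1), 1)]
assert convex_distance(square, (0, 0), (3, -2)) == 3.0
assert convex_distance(square, (1, 1), (1, 1)) == 0.0
```

Note that $d_Q$ is in general asymmetric in its arguments unless $Q$ is centrally symmetric, as is the case for the square above.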
\begin{figure}[hbt]
\begin{center}
\input{norm.pstex_t}
\caption{\small \sf An illustration for Claim \ref{Q1}.
\label{fig:norm1}}
\end{center}
\end{figure}
A useful property of such a set $Q$ is the following:
\begin{claim} \label{Q1}
Let $a$ be a point on ${\partial} Q$ and let
$\ell$ be a supporting line to $Q$ at $a$.
Let $b$ be the point on ${\partial} D_0$
closest to $a$ ($a$ and $b$ lie on the same radius from the center
$o$).
Let $\gamma$ be the arc of ${\partial} D_0$, containing $b$, and bounded by
the intersection points of $\ell$ with ${\partial} D_0$.
Then the angle between $\ell$ and the tangent, $\tau$, to $D_0$ at any point
along $\gamma$,
is at most $\alpha$.
\end{claim}
\begin{proof}
Denote this angle by $\theta$.
Clearly $\theta$ is maximized when $\tau$ is tangent to $D_0$
at an intersection of $\ell$ and ${\partial} D_0$.
See Figure \ref{fig:norm1}.
It is easy to verify that
the distance from
$o$ to $\ell$ is $\cos\theta$. But this distance has to be at least
$\cos\alpha$, or else ${\partial} Q$ would have contained a point inside
$D'_0$, contrary to assumption. Hence we have
$\cos\theta \geq \cos\alpha$, and thus $\theta \leq \alpha$, as claimed.
\end{proof}
We need a few more properties:
\begin{figure}[hbt]
\begin{center}
\input{norm2.pstex_t}
\caption{\small \sf An illustration for Claim \ref{Q2}.
\label{fig:norm2}}
\end{center}
\end{figure}
\begin{claim} \label{Q2}
Let $Q_1$ and $Q_2$ be two homothetic copies of $Q$ and let $w$ be a
point such that (i) $w$ lies on ${\partial} Q_1$ and on ${\partial} Q_2$, and
(ii) $w$ and the respective centers $o_1$, $o_2$ of $Q_1$, $Q_2$
are collinear. Then $Q_1$ and $Q_2$ are tangent to each other at $w$;
more precisely, they have a common supporting line at $w$, and, assuming $\partial Q$ to be smooth, $w$ is the only point of $\partial Q_1\cap \partial Q_2$ (otherwise, $\partial Q_1\cap \partial Q_2$ is a single connected arc containing $w$).
\end{claim}
\begin{proof}
Map each of $Q_1$, $Q_2$ back to the standard placement of $Q$, by
translation and scaling, and note that both transformations map $w$
to the same point $w_0$ on ${\partial} Q$. Let $\ell_0$ be a supporting line
of $Q$ at $w_0$, and let $\ell_1$, $\ell_2$ be the forward images of
$\ell_0$ under the mappings of $Q$ to $Q_1$ and to $Q_2$, respectively.
Clearly, $\ell_1$ and $\ell_2$ coincide, and are a common supporting
line of $Q_1$ and $Q_2$ at $w$.
See Figure \ref{fig:norm2}. The other asserted property follows immediately if $\partial Q$ is smooth, and can easily be shown to hold in the non-smooth case too; we omit the routine argument.
\end{proof}
\begin{claim} \label{Q3}
Let $a$ and $b$ be two points on ${\partial} Q$, and let $\ell_a$ and $\ell_b$
be supporting lines of $Q$ at $a$ and $b$, respectively. Then the
difference between the angles that $\ell_a$ and $\ell_b$ form with
$ab$ is at most $2\alpha$.
\end{claim}
\begin{proof}
Denote the two angles in the claim by $\theta_a$ and $\theta_b$,
respectively.
Let $a'$ (resp., $b'$) be the point on ${\partial} D_0$ nearest to (and
co-radial with) $a$ (resp., $b$). Let $\tau_1$, $\tau_2$ denote the
respective tangents to $D_0$ at $a'$ and at $b'$. Clearly, the
respective angles $\theta_1$, $\theta_2$ between the chord $a'b'$
of $D_0$ and $\tau_1$, $\tau_2$ are equal. By Claim~\ref{Q1}, we
have $|\theta_1-\theta_a|\le\alpha$ and
$|\theta_2-\theta_b|\le\alpha$, and the claim follows.
\end{proof}
\paragraph{The connection between Euclidean stability and $Q$-stability.}
Let $e_{pq}$ be a $t\alpha$-long Voronoi edge of the Euclidean diagram, for $t\ge 9$,
and let $u,v$ denote its endpoints.
Let $D_u$ and $D_v$ denote the disks centered respectively
at $u,v$, whose boundaries pass through $p$ and $q$, and let $D$ be a
disk whose boundary passes through $p$ and $q$, so that
$D\subset D_u\cup D_v$ and the angles between the tangents to $D$ and
to $D_u$ and $D_v$ at $p$ (or at $q$) are at least $m\alpha$ each, where
$m \geq 4$. (Recall that the angle between the tangents to $D_u$ and $D_v$ is at least $t\alpha\geq 9\alpha$.)
\begin{figure}[hbt]
\begin{center}
\input{norm3.pstex_t}
\caption{\small \sf The homothetic copy $Q^{(0)}_c$.
\label{fig:norm3}}
\end{center}
\end{figure}
Let $c$ and $\rho$ denote the center and radius of $D$, respectively.
Note that $c$ lies on $e_{pq}$ ``somewhere in the middle'', because of
the angle condition assumed above.
Let $Q^{(0)}_c$ denote the homothetic copy of $Q$ centered at $c$ and
scaled by $\rho$, so $Q^{(0)}_c$ is fully contained in $D$ and thus
also in $D_u\cup D_v$, implying that $Q^{(0)}_c$ is {\em empty}---it
does not contain any point of $P$ in its interior. (This scaling makes the
unit circle $D_0$ bounding $Q$ coincide with $D$.) See Figure \ref{fig:norm3}.
Expand $Q^{(0)}_c$ about its center $c$ until the first time it
touches either $p$ or $q$. Suppose, without loss of generality,
that it touches $p$. Denote this placement of $Q$ as $Q_c$.
Let $\ell_p$ denote a supporting line of $Q_c$ at $p$. We claim that the angle between $\ell_p$ and the tangent $\tau_p$ to $D$ at $p$ is at most $\alpha$. Indeed, let $\ell_p^-,\ell_p^+$ denote the tangents from $p$ to $Q^{(0)}_c$. By Claim \ref{Q1}, the angles that they form with the tangent $\tau_p$ to $D$ at $p$ are at most $\alpha$ each. As $Q^{(0)}_c$ is expanded to $Q_c$, these tangents rotate towards each other, one clockwise and one counterclockwise, so when they coincide (at $Q_c$) the resulting supporting line $\ell_p$ lies inside the double wedge between them. Since $\tau_p$ also lies inside this double wedge, and forms an angle of at most $\alpha$ with each of them, it follows that $\ell_p$ must form an angle of at most $\alpha$ with $\tau_p$, as claimed.
Since the angle between the tangent $\tau_p$ to $D$ at $p$ and the tangent
$\tau_p^v$
to $D_v$ at $p$ is at least $m\alpha$ it follows that the angle between
$\ell_p$ and $\tau_p^v$ is at least $(m-1) \alpha$.
A similar argument shows that the angle between $\ell_p$ and
the tangent $\tau_p^u$ to $D_u$ at $p$ is at least $(m-1) \alpha$.
\begin{figure}[hbt]
\begin{center}
\input{norm4.pstex_t}
\caption{\small \sf The homothetic copy $Q_c$.
\label{fig:norm4}}
\end{center}
\end{figure}
Now expand $Q_c$ by moving its center along the line passing
through $p$ and $c$, away from $p$, and scale it appropriately so
that its boundary continues to pass through $p$, until it touches
$q$ too. Denote the center of the new placement as $c'$, and
the placement itself as $Q_{c'}$. Let $D_{c'}$ be the
corresponding homothetic copy of $D_0$ centered at $c'$ and bounding
$Q_{c'}$. See Figure \ref{fig:norm4}.
\begin{figure}[hbt]
\begin{center}
\input{norm5.pstex_t}
\caption{\small \sf The homothetic copy $Q_{c'}$.
\label{fig:norm5}}
\end{center}
\end{figure}
We argue that $Q_{c'}$ is empty.
By Claim~\ref{Q2}, $\ell_p$ is also
a supporting line of $Q_{c'}$ at $p$.
Refer to Figure \ref{fig:norm6}.
We denote by $x_p$ and $y_p$ the intersections of the supporting line
$\ell_p$ with $\partial D_{c'}$ and $\partial D_v$, respectively.
We denote by $z$ the intersection of $\partial D_{c'}$ and $\partial D_v$ that lies on the same side of $\ell_p$ as $q$.
The angle $\angle pzx_p$ is at most $\alpha$ since by Claim \ref{Q1}
the angle between $\ell_p$ and the tangent to $D_{c'}$ at $x_p$ is at most
$\alpha$.
On the other hand the
angle $\angle pzy_p$ is at least $(m-1)\alpha$ since the angle
between $\ell_p$ and $\tau_p^v$ at $p$ is at least
$(m-1)\alpha$. So it follows that
the segment $px_p$ is fully contained in $D_v$.
Since the ray $\overline{zp}$ meets $\partial D_v$ (at $p$) before meeting $\partial D_{c'}$, and the ray $\overline{zx_p}$ meets $\partial D_{c'}$ (at $x_p$) before meeting $\partial D_v$, it follows that $\partial D_{c'}$ and $\partial D_v$ intersect at a point on a ray between $\overline{zp}$ and $\overline{zx_p}$.
\begin{figure}[hbt]
\begin{center}
\input{norm6.pstex_t}
\caption{\small \sf The segment $px_p$ is fully contained in $D_v$. The circles
$\partial D_{c'}, \partial D_v$ intersect at a point on a ray emanating from $z$ between $zp$ and $zx_p$.
\label{fig:norm6}}
\end{center}
\end{figure}
Let $\ell_q$ denote a
supporting line of $Q_{c'}$ at $q$. By Claim~\ref{Q3}, the angles
between $pq$ and the lines $\ell_p$, $\ell_q$ differ by at most
$2\alpha$.
Since each of the angles between $\ell_p$ and
the two tangents
$\tau_p^v$
and $\tau_p^u$ is at least $(m-1) \alpha$, it follows that
each of the angles between $\ell_q$ and the two
tangents $\tau_q^u$ and $\tau_q^v$ to $D_u$ and $D_v$,
respectively, at $q$, is at least $(m-3)\alpha$.
Refer now to Figure \ref{fig:norm7}.
We denote by $z'$ the intersection of $\partial D_{c'}$ and $\partial D_v$ distinct from $z$,
and we denote by $x_q,y_q$ the intersections between $\ell_q$ and
$\partial D_{c'},\partial D_v$, respectively. An argument analogous to the one given
before shows that $\angle qz'x_q \le \alpha$ while
$\angle qz'y_q \ge (m-3)\alpha$. It follows that the segment
$qx_q$ is fully contained in $D_v$ and we have an intersection between
$\partial D_{c'}$ and $\partial D_v$ on a ray emanating from $z'$ between the ray from $z'$ to
$q$ and the ray from $z'$ to $x_q$.
\begin{figure}[hbt]
\begin{center}
\input{norm7.pstex_t}
\caption{\small \sf The segment $qx_q$ is fully contained in $D_v$. The circles
$\partial D_{c'}, \partial D_v$ intersect at a point on a ray emanating from $z'$ between $z'q$ and $z'x_q$.
\label{fig:norm7}}
\end{center}
\end{figure}
Our argument about the position of the intersections between
$\partial D_{c'}$ and $\partial D_v$ implies that the entire section of
$\partial D_{c'}$ between $x_p$ and $x_q$ is contained in $D_v$. Therefore
the portion of $Q_{c'}$ to the right of the line through $p$ and $q$ (in the configuration depicted in the figures) is fully contained in $D_v$.
A symmetric argument shows that the portion of
$Q_{c'}$ to the left of the line
through $p$ and $q$ is fully contained in $D_u$. Since $D_u \cup D_v$ is empty we conclude that
$Q_{c'}$ is empty.
The emptiness of $Q_{c'}$ implies that $p$ and
$q$ are neighbors in the $Q$-Voronoi diagram, and that $c'$ lies on
their common $Q$-Voronoi edge $e^Q_{pq}$.
We thus obtain the following theorem.
\begin{theorem}
Let $P$, $\alpha$, and $Q$ be as above. Then (i) every $9\alpha$-stable edge of the Euclidean Delaunay triangulation is an $\alpha$-stable edge of $\mathop{\mathrm{DT}}^Q(P)$. (ii) Conversely, every $9\alpha$-stable edge of $\mathop{\mathrm{DT}}^Q(P)$ is also an $\alpha$-stable edge in the Euclidean norm.
\end{theorem}
Note that parts (i) and (ii) are generalizations of Lemmas \ref{Thm:LongEucPoly} and \ref{Thm:LongPolygEuc}, respectively (with weaker constants).
\begin{proof}
Part (i) follows directly from the preceding analysis. Indeed, let $pq$ be a $t\alpha$-stable Delaunay edge, for $t\geq 9$, whose Voronoi counterpart has endpoints $u$ and $v$. Let $Q_{c'}$ be the homothetic placement of $Q$, with center $c'$, that touches $p$ and $q$. We have shown that $Q_{c'}$ has empty interior if the ray $\rho=\overline{pc'}$ lies between $\overline{pu}$ and $\overline{pv}$ and spans an angle of at least $4\alpha$ with each of them. Assuming $t\geq 9$, such rays $\rho$ form a cone of size $(t-8)\alpha\geq\alpha$, which, in turn, gives the first part of the theorem.
Part (ii) follows from part (i) by repeating, almost verbatim, the proof of Lemma \ref{Thm:LongPolygEuc}.
\end{proof}
There are many interesting open problems that arise here. One of the main problems is to extend the class of sets $Q$ for which a near quadratic bound on the number of topological changes in $\mathop{\mathrm{DT}}^Q(P)$, under algebraic motion of bounded degree of the points of $P$, can be established.
\section{Introduction}
The ATLAS Liquid Argon (LAr) calorimeter consists of the electromagnetic barrel, the electromagnetic end-cap, the hadronic end-cap and the forward calorimeter\cite{atlas-1}\cite{atlas-tdr-1}. The positions of these calorimeters are shown in Figure\,\ref{lar}. In the current LAr trigger readout, for each area of size $\Delta\eta\times\Delta\phi = 0.1\times0.1$, the Layer Sum Boards (LSB), mezzanines on the front-end boards, sum the signals to obtain the energy deposition in each of the four longitudinal layers of the calorimeter. As depicted in Figure\,\ref{geometrical}, the so-called Tower Builder Board then sums these four energies together to form a trigger tower with a granularity of $0.1\times0.1$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{LAr}
\caption{ATLAS Liquid Argon calorimeter}
\label{lar}
\end{figure}
The second long shutdown of the LHC is scheduled for 2019--2020, during which the Phase-I upgrade of the trigger readout electronics of the LAr calorimeters of the ATLAS detector will be installed. The objective of this upgrade is to provide higher granularity, higher resolution and longitudinal shower information\cite{atlas-tdr-2}. As shown in Figure\,\ref{geometrical}, after the upgrade the level-1 trigger granularity will be improved: one current trigger tower will contain 10 so-called super cells. The information from each layer is retained, and the granularity is as fine as $\Delta\eta\times\Delta\phi = 0.025\times0.1$. There will be about 34000 super cells in total, all of which will be sampled at every LHC bunch crossing at a frequency of 40 MHz.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{systems1.pdf}
\caption{Geometrical representation in $\eta$,$\phi$ space of an electromagnetic trigger tower in the current system, where the transverse energy in all four layers is summed (left), and of the super-cells for the Phase-I upgrade, where the transverse energy in each layer is retained in addition to the finer granularity in the front and middle layers (right). Each big square here represents an area of size $\Delta\eta\times\Delta\phi = 0.1\times0.1$.\cite{atlas-tdr-2}}
\label{geometrical}
\end{figure}
As the LHC luminosity increases above the LHC design value, the improved calorimeter trigger electronics will allow ATLAS to deploy more sophisticated algorithms, enhancing the ability
to measure the properties of the newly discovered Higgs boson and the potential for discovering physics beyond the standard model.
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{lar_ph1_dg}
\caption{Schematic block diagram of the upgraded LAr trigger readout architecture: with new components indicated by the red outlines and arrows.}
\label{fig:larph1dg}
\end{figure}
As shown in the architecture of Figure\,\ref{fig:larph1dg}, the LSBs need to be upgraded to output the super cell signals. The new LAr Trigger Digitizer Boards (LTDB) will process and digitize the super cell signals and send the processed data to the back-end electronics, the LAr Digital Processing System (LDPS), from which the data are then transmitted to the trigger processors. Each LTDB will be able to process up to 320 super cell signals, which will be digitized by 80 12-bit quad-channel NEVIS ADCs\cite{adc} on the LTDB. Twenty LOCx2 serializer ASICs will receive the ADC data and output 40 streams of 5.12 Gb/s data via optical fiber links to the LDPS\cite{locx2}. The LDPS will convert the samples to calibrated energies in real time and interface to the FEX processors. With a total of 124 LTDB boards in the system, the total rate to the back-end electronics is approximately 25 Tbps. The LTDB will also output 64 summed signals to the tower builder board, each signal being the sum of 4 inputs in the same middle or front layer.
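The quoted aggregate bandwidth can be checked with a quick back-of-the-envelope calculation (our own arithmetic, using only the numbers stated above):

```python
# 124 LTDB boards, each driving 40 optical links at 5.12 Gb/s
boards, links_per_board, gbps_per_link = 124, 40, 5.12

total_tbps = boards * links_per_board * gbps_per_link / 1000.0
assert round(total_tbps, 1) == 25.4   # ~25 Tbps, as quoted in the text
```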
The control and monitoring of the LTDB is realized via 5 GBT links connected to the Front-End LInks eXchange (FELIX) of the ATLAS TDAQ system\cite{felix-1}\cite{felix-2}. FELIX will distribute the TTC (Timing, Trigger and Control) clock and the BCR (Bunch Counter Reset) signal to the LTDB via the down-links. Besides TTC information, FELIX will also control the GBTx and all ASICs via the SCA (Slow Control Adapter) ASIC on the LTDB.
\section{Design and Test}
\subsection{Design of the LTDB}
The LTDB design has been split into three stages. In the demonstrator stage, forty 8-channel TI ADS5272 ADCs digitize the 320 super cells, and four Kintex-7 FPGAs pack the data and send it to the back end via 40 links at 4.8 Gb/s. The radiation tolerance of the COTS ADCs and power converters has been studied\cite{adctest-1}\cite{adctest-2}. For the pre-prototype, 80 NEVIS ADCs are used, with 10 Xilinx Artix-7 FPGAs for data packing and transmission. From the prototype stage on, all of the ADCs, serializers and optical-electrical converters are custom radiation-hard ASICs. A diagram of the LTDB prototype is shown in Figure\,\ref{fig:diagram}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{ltdb_diagram}
\caption{Diagram of the LTDB prototype}
\label{fig:diagram}
\end{figure}
The prototype board is shown in Figure\,\ref{fig:board}. The analog section occupies the bottom half of the board, and the digital part is at the top.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth, angle=90]{ltdb}
\caption{LTDB prototype}
\label{fig:board}
\end{figure}
The slow control between FELIX and the LTDB is realized via 5 GBT links. The GBT links are connected to the GBTx via MTRx modules. The GBTx can output two kinds of recovered clocks: DCLK, with better quality, and CLKDES, with a programmable phase in steps of 48.8 ps. At 40 MHz, the jitter in a frequency range from 100 Hz to 5 MHz is about 4 ps for DCLK and about 10 ps for CLKDES. The high-quality DCLK is used as the ADC input clock. On the prototype board, LOCx2 also uses DCLK; on the pre-production board, CLKDES\cite{bibGBTx} is used to support the phase calibration required by LOCx2.
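As an aside, the 48.8 ps phase step is consistent with dividing the 25 ns LHC bunch-crossing period into 512 steps; note that the factor of 512 is our own assumption for this consistency check and is not stated in the text.

```python
# Consistency check: 48.8 ps step vs. the 40 MHz bunch-crossing clock.
bunch_period_ps = 1e12 / 40e6      # 40 MHz -> 25 000 ps period
step_ps = bunch_period_ps / 512    # assumed 512 phase steps per period

assert round(step_ps, 1) == 48.8   # matches the quoted CLKDES step size
```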
\subsection{Test Setup and Results}
To test the LTDB boards, the BNL-711 PCIe card has been developed, as shown in the test setup of Figure\,\ref{fig:testsetup}. This 16-lane Gen 3 PCIe card can interface to the ATLAS TTC system and decode the TTC clock and TTC information such as the BCR. It provides 48 bi-directional optical links which can run at up to 14 Gb/s, and its two DDR4 modules support a 32 GB buffer at a speed of up to about 270 Gbps.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{LTDB_Test_Diagram}
\caption{Test setup for LTDB evaluation}
\label{fig:testsetup}
\end{figure}
In the test setup, the firmware implements 6 bidirectional GBT links and 40 GTH receivers for the data links from the LTDB. The low-level software tools from the FELIX project are used to read and write registers in the Kintex UltraScale FPGA on the BNL-711. An optimized low-latency GBT-FPGA core has been developed to support multi-channel GBT links with the LTDB\cite{opt-gbt}. The IC/EC API has been developed to control the GBTx and communicate with the GBT-SCA chips via the GBT links\cite{HDLC-ICEC}; with this API, all ASICs on the LTDB can be configured and calibrated. High-level software monitors the voltages, currents and temperature of the LTDB. Besides the five GBT links to the LTDB, one extra link is connected to a Xilinx ZC706 evaluation board. This link is used to control the DAC3484 evaluation card, which can generate test super cell waveforms for the LTDB. The Xilinx all-digital VCXO (PICXO)\cite{VCXO} is implemented in the ZC706 to synchronize its GBT link to the BNL-711, using the system clock from the BNL-711. The BNL-711 PCIe card is shown in Figure\,\ref{fig:BNL-711}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{flx711}
\caption{BNL-711 PCIe card for the LTDB test}
\label{fig:BNL-711}
\end{figure}
With this test setup, all functions of the LTDB pre-production boards were verified. Following the successful testing, two LTDB pre-production boards were installed on the detector in early 2018. The pedestal and noise distributions of all channels on one board are shown in Figures\,\ref{fig:pedestal} and \ref{fig:noise}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{histo_pedestal}
\caption{Pedestal distribution of the 320 channels}
\label{fig:pedestal}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{histo_noise}
\caption{Noise (standard deviation) distribution of the 320 channels}
\label{fig:noise}
\end{figure}
\section{Summary and outlook}
The new LAr calorimeter trigger readout system is being designed for the Phase-I upgrade, with the LTDB as the key component of the front end. Two LTDB pre-production boards were successfully installed on the detector in 2018.
Preliminary test results show that the total noise level of the crate with the LTDB installed is at the same level as that of the other crates, and the pedestals and noise levels of the super cells are the same as measured in the lab. Data taking with the BNL-711 PCIe card is ongoing during LHC Run 2. Full integration with the LDPS, FELIX and the level-1 calorimeter trigger system is scheduled for the summer of 2018.
\section{Introduction}
The next generation of surveys, using telescopes like the Wide-Field Infrared Survey Telescope (WFIRST) and Euclid, will reveal thousands of strong-lensing galaxy clusters. The ability to create models of these lensing masses will be crucial to fully leveraging these datasets for the study of dark matter, structure formation, cluster physics, early galaxy and star formation, cosmology, and more (e.g., \citealp{Barn03,Mahdi14,Jullo10,CLASH2012,CLASH2014}). However, making such models is resource intensive, requiring significant computing time while also, in particular, requiring a significant investment of researcher time. Furthermore, follow-up imaging in other bands or for longer exposure times may be required to clarify image family identifications, further consuming telescope resources.
Indeed, the identification of image families is an uncertainty of particular importance for lens modelers. Changing the image family to which a single image is assigned can result in significant changes to the resulting lens models (e.g., \citealp{Jau14,Sharon0327}). Currently, color is one of the primary pieces of information used to distinguish members of a multiply-imaged lensing system from galaxies in the foreground. Color has the advantage of being a property of images that can be measured algorithmically and used as a constraint on the image identifications made by various modeling teams. With Hubble Space Telescope (HST) quality imaging, one also uses internal image morphology as a constraint, but this is typically assessed by eye and requires a substantial investment of time by an experienced lens modeler. A number of lens models for Frontier Fields (FF) clusters have recently been published \citep{Jau14,Jau15,Johnson14,Rich14,Grillo15,Ish15,Treu15,Kaw15} where image family identification has been done using such methods. Clearly, though, it is impractical to obtain deep imaging in so many different filters (the FF clusters, for example, are observed in seven different filters) for each of the thousands of strong lens systems that will be discovered in the coming years. Therefore, maximizing the strong lensing constraints available from the survey data that will exist is of particular utility.
Recently, we have found the Gini coefficient to be well-preserved by strong gravitational lensing in HST-quality imaging \citep{Flo15}. Since it is a measurement that can be made in a single filter and in the image plane, it should be possible to gain additional constraints from the Gini coefficient (at least one per filter) to help with image family identification without using any additional observational resources. In this letter, we use the results of the simulations presented in \citet{Flo15} to show that the Gini coefficient can indeed be used in this way.
\section{The Gini Coefficient}
The Gini coefficient was introduced to astronomy by \citet{Ab2003} and has since been used in morphological studies of unlensed galaxies (e.g. \citealt{Lotz06}). It is a measurement of the inequality of the distribution of light in a galaxy. Conceptually, it is calculated by ordering the pixels in a given aperture by flux, producing the cumulative distribution function, and finding the area between that curve and the cumulative distribution function of a galaxy with a perfectly flat profile. In practice, it is calculated in the following way:
\begin{equation}
G = \frac{1}{\overline{X}n(n - 1)} \sum_{i=1}^{n} (2i - n - 1)X_{i}
\end{equation}
where $X_{i}$ is the flux of the $i^{th}$ flux-ordered pixel, $\overline{X}$ is the mean flux, and $n$ is the total number of pixels within the aperture. A Gini coefficient of 0 indicates a perfectly uniform profile and a Gini coefficient of 1 indicates an aperture where all of the light is located in a single pixel. For more details, see \citet{Ab2003}.
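Eq. (1) can be implemented directly; the following is a minimal sketch of the computation (our own code, not from the paper), with the two limiting values described above checked at the end.

```python
def gini(fluxes):
    """Gini coefficient of a set of pixel fluxes, following Eq. (1):
    sort pixels by flux, then G = sum_i (2i - n - 1) X_i / (Xbar * n * (n-1))."""
    x = sorted(fluxes)                      # flux-ordered pixels X_1 <= ... <= X_n
    n = len(x)
    mean = sum(x) / n
    return sum((2 * (i + 1) - n - 1) * xi
               for i, xi in enumerate(x)) / (mean * n * (n - 1))

assert gini([5.0, 5.0, 5.0, 5.0]) == 0.0    # perfectly flat profile
assert gini([0.0, 0.0, 0.0, 12.0]) == 1.0   # all light in a single pixel
assert 0.0 < gini([1.0, 2.0, 3.0, 4.0]) < 1.0
```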
\section{The Simulated Images}
The strong lensing simulations used here are described in \citet{Flo15}. In brief, we selected low redshift galaxies (11 elliptical, 20 spiral, and 2 irregular) from the CANDELS UDS field to be used as source galaxies for a gravitational lensing ray-tracing code. Low redshift galaxies were chosen because direct HST observations of higher redshift galaxies do not contain as much small scale complexity due to the finite resolution of the telescope. These source galaxies were placed at redshift $z=1$ and lensed by an analytical spherical NFW-profile with $M_{200} = 10^{15}M_{\odot}h^{-1}$ and concentration parameter $c=5$ placed at $z=0.2$. For each of the 33 sources, images were produced for 50 unique positions in the source plane near caustics, in each of 3 filters (HST ACS/WFC F814W and F606W, and WFC3/IR F160W). A gravitational lensing ray-tracing code \citep{Li15} was run for each of these configurations, and images of the resulting image plane configurations were produced. These were convolved with appropriate HST-like PSFs and resampled to HST-like pixel scales (0.03 arcseconds per pixel). Finally, Gaussian noise was added to degrade the average S/N per pixel of each arc to 0.1. Gini coefficients and colors were measured only for the arcs that were tangential arcs or counterimages (we did not consider typically demagnified central images or radial arcs because they are often either not seen, or significantly contaminated by cluster galaxy or intracluster light).
For purposes of measuring the Gini coefficient and associated uncertainties, apertures were defined as in \citet{Flo15}, one for each filter. Gini coefficients were calculated according to Eq. 1 and uncertainties were bootstrapped as in \citet{Ab2003}. However, for colors, apertures were obtained from the stack of the images in the F160W and F814W filters. The resulting aperture was then applied to the F160W image and the F814W image separately. The total flux within this aperture was measured in each filter and the results were converted to instrumental magnitudes. Colors were defined as the difference between the magnitudes in each filter (F814W - F160W and F606W - F814W in this paper). Uncertainties in the colors were determined by making many separate noisy realizations of each simulated arc and finding the standard deviations of the resulting distributions. For each arc, an aperture was created. 100 different noise fields were then applied to the original image (with the average SNR remaining 0.1 per pixel), in each of the two filters. Finally, the aperture was applied to each of these 100 realizations and the flux and magnitude were calculated.
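The color measurement and its uncertainty from noisy realizations can be sketched as follows. This is our own illustration: the toy aperture, noise level, and number of realizations below are placeholders, not the values or apertures used in the actual analysis, and instrumental magnitudes are computed without a zero point.

```python
import math
import random

def instrumental_mag(fluxes):
    """Instrumental magnitude from the total flux inside the aperture."""
    return -2.5 * math.log10(sum(fluxes))

def color_with_uncertainty(ap_f814w, ap_f160w, sigma, n_real=100, seed=1):
    """F814W - F160W color, with its uncertainty estimated as the standard
    deviation over n_real noisy realizations of the aperture pixels."""
    rng = random.Random(seed)
    colors = []
    for _ in range(n_real):
        noisy814 = [f + rng.gauss(0.0, sigma) for f in ap_f814w]
        noisy160 = [f + rng.gauss(0.0, sigma) for f in ap_f160w]
        colors.append(instrumental_mag(noisy814) - instrumental_mag(noisy160))
    mean = sum(colors) / n_real
    std = (sum((c - mean) ** 2 for c in colors) / n_real) ** 0.5
    return mean, std

# Toy aperture of 50 pixels per filter, with twice the flux in F160W
color, err = color_with_uncertainty([10.0] * 50, [20.0] * 50, sigma=0.5)
assert abs(color - 2.5 * math.log10(2)) < 0.05   # brighter in F160W
assert err > 0.0
```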
For further details of the simulation, the aperture definitions, or the method of measuring the Gini coefficient, see \citet{Flo15}.
\section{Separation of Source Galaxies in Gini-Color Space}
To determine whether the Gini coefficient can be used, along with color, to help identify the image family to which a given image belongs, we plotted, for each source, the Gini coefficient in the F606W filter against the F606W-F814W color for every lensed tangential arc and counterimage from every model realization of that galaxy. The result is shown in Fig.~\ref{Gini814vColor}. In this figure, the color of each point corresponds to the source galaxy (i.e., all points of a given color are different lensed images of the same source galaxy).
From the figure, it is clear that different lensed images of the same source galaxy clump together in Gini-color space, suggesting that the combination of these measurements can indeed be used to distinguish between members of different image families. It is, of course, possible that two galaxies can have the same color and the same Gini coefficient. However, in cases where the two had the same color, they would not have been easily distinguished by the conventional means of looking at colors only. The strength of the Gini coefficient, therefore, arises from its ability to break that degeneracy. The significance of breaking this degeneracy is explored in Fig.~\ref{purities}.
In Fig.~\ref{purities}, we compare the relative abilities of different pairs of color and Gini coefficient information to distinguish between the 33 source galaxies in our sample. For each combination of Ginis and/or colors, a plot like Fig.~\ref{Gini814vColor} was constructed. We defined regions of the Gini-color, Gini-Gini, or color-color space based on the outlines of each of the 33 clumps. We then calculated the purity of each clump using the following process. For the $i^{th}$ clump, we counted the number of points corresponding to galaxy $i$ inside the clump and divided by the sum of that number and the number of points from any galaxy $j\neq i$ inside that clump, where a purity of 1 indicates a clump that is perfectly separated from all of the others and lower values indicate more contamination from images of other galaxies. We also compared methods using only a single Gini coefficient or only a single color. In these cases, the regions are 1-dimensional and defined by the two images with the least and greatest Gini or color values. Purities were calculated similarly for these 1-dimensional clumps. The distributions of purities arising from each combination of Gini-color, Gini-Gini, color-color, single Gini, or single color are shown in Fig.~\ref{purities}. Red histograms correspond to methods of defining clumps that involve only Gini coefficient information, while blue histograms come from methods that use only color information, and purple histograms include both types of information. The light red and light blue histograms denote methods that use only one piece of data (i.e., a single Gini or a single color) while the darker histograms use two pieces of data (two Ginis, two colors, or in the case of the purple histograms, one of each).
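The purity bookkeeping described above amounts to a single ratio per clump; a minimal sketch (the function name and toy labels are ours):

```python
import numpy as np

def clump_purity(labels_inside, i):
    """Purity of the i-th clump: the number of points from galaxy i found
    inside the clump's region, divided by the total number of points
    (from any galaxy) inside that same region."""
    labels_inside = np.asarray(labels_inside)
    n_own = np.sum(labels_inside == i)
    return n_own / labels_inside.size

# Toy example: the region drawn around clump 3 contains 8 images of
# galaxy 3 and 2 contaminating images of other galaxies.
print(clump_purity([3, 3, 3, 3, 3, 3, 3, 3, 7, 12], 3))  # -> 0.8
```

Applied to every clump, this yields purity distributions analogous to those of Fig.~\ref{purities}.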
We find that methods of separating images into image families that include both a Gini coefficient and a color consistently yield noticeably higher purities than all other methods. This means that the inclusion of morphological information from the Gini coefficient adds considerable power to image family identification methods beyond what is available from colors alone. While we find that the F606W Gini coefficient paired with the F606W-F814W color gave the highest purities most often, it is unclear why these particular filters yielded the best results in this study. It may be entirely due to the sharper PSF in the F606W filter (and the fact that the PSF is sharper for F814W images than for F160W images). It may also be due to some morphological feature that is more prominent in the F606W band at low redshift, or because the F606W and F814W filters capture the 4000\AA\ break in these low-redshift galaxies (typically z$\sim$0.2-0.4), causing the F606W-F814W color to hold significant morphological information that would not be available in the F814W-F160W filter pair at these redshifts, but could be at higher redshifts. Fully investigating this aspect of our result would require a sample of highly spatially detailed images of galaxies with SEDs similar to those seen in the existing strong-lensing sample (i.e., with redshifts in the 1-3 range, or greater), which does not currently exist. However, it may be possible to simulate such a sample using a code like GAMER \citep{Gamer} and a sample of SEDs from known moderate- to high-redshift galaxies that are more directly representative of typical lensed sources. Gini analysis of well-studied strong-lensing clusters with robustly identified multiply-imaged families would allow an in-situ test of the applicability of this method. We will present this analysis in a future paper.
Regardless of the reason for better results with some combinations of filters, it is clear that including the Gini coefficient from \textit{any} filter in attempts to identify image families is a substantial improvement over using colors alone. And while degeneracies still remain even in the best filter combinations tested, it is possible that these degeneracies could be further broken by the inclusion of more Gini or color information (i.e., by using higher dimensional ``clumps'') or by inclusion of some other measurement aside from these that is also preserved by gravitational lensing. This is exciting in the context of lens modeling, where accurate image family identification is required in order to make the best possible models of complex clusters like those in the Frontier Fields (e.g., \citealp{Jau14,Johnson14,Rich14,Grillo15,Ish15}).
\section{Conclusions}
We find that the Gini coefficient is likely to be an effective tool for the identification of image families in strong lensing systems with many images of an unknown number of source galaxies. We have shown that the Gini coefficient, combined with a color, is capable of distinguishing between image families with effectiveness substantially greater than using one or two colors only. This provides the additional benefit of minimizing the total number of observations that one needs to make in order to identify different image families. Using two colors requires making observations in at least three filters, but using a Gini coefficient and a color requires only two. This reduces the amount of observing time required to obtain reliable image family identifications by about a third, which is not insignificant given the tremendous demand for telescope resources.
Moreover, these results show that it is possible to automate a process--namely image family identification by image morphology--that is at present mostly a by-eye process requiring a considerable amount of researcher effort. As we move into an era in which thousands of strong lenses will have imaging from space telescopes, such automation will be of great benefit.
\acknowledgments
Argonne National Laboratory's work was supported under the U.S. Department of Energy contract DE-AC02-06CH11357.
This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli, and by the Strategic Collaborative Initiative administered by the University of Chicago's Office of the Vice President for Research and for National Laboratories.
This research is also supported in part at the University of Chicago by the National Science Foundation under Grant PHY 08-22648 for the Physics Frontier Center ``Joint Institute for Nuclear Astrophysics" (JINA).
\section{Introduction}
\subsection{The Abell 85 Galaxy Cluster}
Abell 85 is a galaxy cluster that is in the process of merging with two
subclusters \citep{ichinohe2015}. As a bright X-ray cluster with a complex structure,
Abell~85 has been intensively studied with all modern X-ray observatories: Chandra,
XMM-Newton and Suzaku \citep[e.g.][]{kempner2002, durret2003, schenck2014}.
The galaxy population of Abell 85 was recently analyzed at optical wavelengths by \citet{agulli2016}
who determined the spectroscopic luminosity function with 460 confirmed cluster members.
\citet{owen1997} and \citet{schenck2014} made clear detections of radio emission emanating from
the Abell 85 cluster and its brightest cluster galaxy (BCG). Radio relics located $\sim$320 kpc from the core of Abell 85
were discovered by \citet{slee2001} -- see also \citet{schenck2014} and \citet{bagchi1998}.
These relics are likely the result of shocks from past cluster mergers.
\subsection{A fanciful trio of black holes}
In the optical and X-rays, SDSS J004150.75-091824.3 is a bright ($g$=14.03 mag), point-like
source located $\sim14\arcsec$ away from the nucleus of the Abell 85 BCG -- see
Fig.~\ref{geminichandra}. SDSS J004150.75-091824.3 is also the closest X-ray point source
to the nucleus of the BCG. The core of the BCG shows a diffuse X-ray emission while
SDSS~J004150.75-091824.3 is bright and point-like.
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.45]{geminiabell85v0-eps-converted-to.pdf} & \includegraphics[scale=0.45]{chandraabell85v0-eps-converted-to.pdf}\\
\end{tabular}
\caption{Central region of Abell 85 as viewed by Gemini (left) and Chandra (right). The center of the Abell 85 BCG
is depicted with a red solid circle. The location of SDSS J004150.75-091824.3 is shown with a blue dotted circle.
These circles have diameters of 4 arcseconds. The optical image was originally presented in \citet{madrid2016}
while the X-ray data was presented in \citet{ichinohe2015}. North is up and east is left. X-ray data kindly provided
by Y. Ichinohe.\label{geminichandra}}
\end{figure*}
SDSS J004150.75-091824.3 was included in the photometric selection of quasars from the SDSS
by \citet{richards2004,richards2009}. SDSS J004150.75-091824.3 was found to have a photometric
redshift of $z$= 0.925 in \citet{richards2004}. This value was later revised to $z$=0.675
in \citet{richards2009}.
SDSS J004150.75-091824.3 was identified as a possible AGN in the detailed X-ray study carried out by
\citet{durret2005}. These authors identify SDSS J004150.75-091824.3 as an X-ray point source on their
Chandra data and also identify an optical counterpart on a SDSS image (see their Fig.\ 2).
SDSS J004150.75-091824.3 is located within an X-ray cavity \citep{durret2005, ichinohe2015}.
Given the projected proximity of SDSS J004150.75-091824.3 to the center of the Abell 85 BCG,
\citet{lopez2014} surmised that stellar light could have biased the redshift and classification
given by \citet{richards2009}. \citet{lopez2014} speculated that SDSS J004150.75-091824.3
could be ``a third'' supermassive black hole associated with the Abell 85 brightest cluster galaxy.
The other two black holes of Abell 85 would have been a hypothetical binary pair in the
core of the BCG. \citet{lopez2014} claimed the presence of a substantial light deficit on the nuclear
region of the Abell 85 BCG. \citet{lopez2014} interpreted this light deficit as evidence
for the presence of an ``ultramassive'' SMBH.
When galaxies merge, their central black holes are thought to consolidate into one entity.
During the process of black hole coalescence, supermassive black hole binaries are thus created
\citep{begelman1980}. Stars that come in close proximity to a binary supermassive black hole
(SMBH) can be slingshot away through gravitational interactions \citep{begelman1980, ebisuzaki1991, milo2001}.
Over time, as this process repeats itself, a SMBH creates a deficit of stars in its vicinity.
The presence of a binary SMBH can then be inferred by the detection of stellar light deficits on
the optical surface brightness profiles of, for instance, elliptical galaxies
\citep[e.g.][]{graham2004,dullo2019}. When light deficits are present in surface brightness
profiles, the steep slope of the profile in the outer parts of the galaxy becomes flatter,
sometimes even close to constant, with decreasing radii \citep[e.g.][]{graham2004}.
Given the small spatial scales involved, the accurate characterization of the properties
of stellar cores and, among them, their light deficits, requires Hubble Space Telescope data
\citep{ferrarese2006}. Light deficits associated with SMBHs occur on spatial scales of a few
hundred parsecs in galaxies located megaparsecs away. For instance, cores in galaxies belonging
to the Virgo cluster have spatial scales of 1 to 4 arcseconds \citep{ferrarese1994}.
With few exceptions \citep[e.g., 3C~75;][]{owen1985}, unequivocal detections of close pairs
of binary AGN remain rare. Binary black holes are elusive, even when dedicated observing
campaigns are carried out \citep{tingay2011}.
In the following sections a new radio map of the Abell 85 BCG with the highest resolution
ever achieved is introduced. This radio map is used to search for the presence of a binary AGN.
A new optical spectrum of SDSS J004150.75-091824.3 is also presented. This spectrum is used
to make a precise determination of the redshift for this source. A new Gemini spectrum of
the Abell 85 BCG is also shown. High-resolution imaging obtained with Gemini and Chandra
are used as ancillary data to place the new observations in a wider context. Finally, by
presenting optical data obtained under different atmospheric conditions we illustrate the
effect of seeing on optical surface brightness profiles.
At the distance of Abell 85 ($D=233.9$ Mpc) 1$\arcsec$ corresponds to 1.075 kpc.
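This scale follows from the small-angle relation $s = \theta D_A$; in the sketch below the angular-diameter distance is an assumed value chosen to reproduce the quoted 1.075 kpc arcsec$^{-1}$, since the distance convention behind $D=233.9$ Mpc is not specified here:

```python
import math

# Small-angle scale: s = theta * D_A. The angular-diameter distance below
# is an ASSUMED value chosen to reproduce the quoted 1.075 kpc/arcsec.
ARCSEC_IN_RAD = math.pi / (180.0 * 3600.0)
D_A_kpc = 221.7e3                    # assumed angular-diameter distance [kpc]

scale = D_A_kpc * ARCSEC_IN_RAD      # kpc per arcsecond
print(round(scale, 3))               # -> 1.075

# e.g., the 14" offset of SDSS J004150.75-091824.3 from the BCG nucleus:
print(round(14 * scale, 2))          # -> 15.05 (kpc, projected)
```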
\section{Observations and data reduction}
\subsection{X-ray Imaging}
The Chandra data shown in Fig.~\ref{geminichandra} was analyzed and presented by \citet{ichinohe2015}.
Details of the observations and data reduction for this dataset are given in that reference.
\citet{ichinohe2015} created a Chandra image that is a factor of $\sim$5 deeper than the initial Chandra
image of $\sim$37 ks analyzed by \citet{kempner2002}. \citet{ichinohe2015} also included XMM–-Newton and
Suzaku data out to the virial radius of the cluster. \citet{ichinohe2015} found evidence that the
Abell 85 cluster is undergoing two mergers and has gas sloshing out to a radius of $r\sim600$ kpc from
its center.
\subsection{Optical Imaging}
Figure \ref{geminichandra} also shows a Gemini image of the Abell 85 BCG. The Gemini $r$-band
image was obtained with the Gemini Multi-Object Spectrograph (GMOS). This dataset was presented
and discussed in depth in \citet{madrid2016}. A new Gemini image obtained under poor seeing conditions
is also presented in section 4 (Gemini program GS-2016B-Q-87). The data reduction for this new
image is identical to the one described in \citet{madrid2016}.
\begin{figure}
\epsscale{1.3}
\plotone{vla_abell85v12.pdf}
\caption{VLA synthesized image of the Abell 85 BCG. This image shows two bipolar jets emanating from the core
of Abell 85. Contours are shown at 0.02, 0.06, and 0.1 mJy. The scale bar on the lower right is 1$\arcsec$ in length.
North is up and east is left.
\label{vlamap}}
\end{figure}
\begin{figure*}
\epsscale{0.85}
\plotone{abell85plotspectrumv1.pdf}
\caption{\textit{Top:} Gemini spectrum of the Abell 85 BCG. \textit{Bottom:} GTC spectrum of SDSS J004150.75-091824.3.
Both spectra are shown on their observed wavelength without correcting for radial velocities. The letter \textsc{T}
denotes a telluric absorption line.
\label{spectra}}
\end{figure*}
\subsection{Optical Spectroscopy}
The Gemini spectrum of the Abell 85 BCG was obtained with GMOS-South on spectroscopic mode.
Five exposures of 900 seconds were obtained on 2016 September 14. The grating used was R400 which
has its blaze wavelength at 764 nanometers. The slit width was 1.$\arcsec$5.
The first task of the data reduction process is to apply {\sc gprepare} to the raw science data
obtained through the Gemini archive. {\sc gprepare} applies a flatfield and performs overscan
subtraction. The spectrum is extracted from the FITS file by using the task {\sc gsextract}.
This task extracts the intensity values along the spectral dimension of the CCD chip.
Calibration is carried out with the task {\sc gswavelength}. The details of the reduction procedure
used here were described in \citet{madrid2013}.\\
The spectroscopic data for SDSS J004150.75-091824.3 was obtained with the Optical System for Imaging and
low Intermediate Resolution Integrated Spectroscopy (OSIRIS) on the Gran Telescopio Canarias. OSIRIS
is an optical imager and spectrograph. A $2\times2$ binning was used for the CCD resulting in a spatial
scale of 0.$\arcsec$26 per pixel. Likewise, the resulting spectral resolution is 2.12~\AA/pixel.
The slit had a width of 1$\arcsec$ and the grating in use was R1000B.
The spectrum of SDSS J004150.75-091824.3 presented here was obtained on 2017 August 17. Raw science data,
bias, flats and arcs were retrieved from the Grantecan public archive.
Both science data extraction and sky subtraction were done using the \textsc{pyraf} task \textsc{apall}.
In the same way, arcs, bias and flats are processed with \textsc{apall}. The science spectra had the bias
removed and were normalized by the flat. The HgAr and Ne calibration arcs were used for wavelength
calibration. The emission lines given on the reference arcs provided by the observatory were used
as part of the wavelength calibration. The spectra were extracted using a spatial width of 7 pixels,
or 1.$\arcsec$8. The spectrophotometric standard star {\sc Feige 110} is used to flux calibrate the spectra
using the tasks \textsc{standard, sensfunction} and \textsc{calibrate}.
\subsection{Radio Data}
Karl G. Jansky Very Large Array (VLA) data was obtained from the National Radio Astronomy
Observatory public archive. Abell 85 was observed with the VLA in its A configuration on
2018 March 25. The total observation time was 4 hours, resulting in 2.8 hours on-source
after overheads. The observations used the X-band receiver with 4 GHz of correlated
bandwidth centered at 10 GHz.
The VLA data were processed with a series of tasks found within the Common Astronomy
Software Applications (CASA). Standard interferometric data reduction procedures were
followed to calibrate the delay, bandpass, phase, amplitude and flux. One round of
phase-only self-calibration was also included. During imaging, a Briggs robust
weighting of $+0.5$ was used to suppress Point Spread Function (PSF) sidelobes and
achieve a resolution of $0.\arcsec29 \times 0.\arcsec19$ at a position angle
of 32 degrees. The final image had a measured sensitivity of 3 $\mu$Jy/beam.
\section{The Abell 85 AGN}
The high resolution and sensitivity of the radio data allow the detection of the central AGN
and bipolar diffuse emission. Compact radio sources are ubiquitous in the core of early type
elliptical galaxies \citep{ekers1973}, and the Abell 85 BCG is no exception. The VLA image
also shows two bipolar jets emanating from the compact core. Both jets are less than $\sim$2 kpc
in length (projected). The image shown in Fig.~\ref{vlamap} is the first radio image
to clearly identify the jets emanating from the Abell 85 BCG.
Interestingly, the southern jet is well aligned with the location of the X-ray cavity
described by \citet{durret2005} and \citet{ichinohe2015}; see Fig.~\ref{geminichandra}.
This X-ray cavity is, however, about 20 kpc south of the core, whereas the southern
radio jet does not show radio emission beyond 2 kpc from the center of the BCG.
The VLA image of the Abell 85 BCG shows a single core and no evidence of double nuclei.
Similarly, the X-ray image only shows diffuse X-ray emission in the core of the galaxy.
Binary black holes, when present, appear as two distinct point sources in X-ray images
\citep{komossa2003, fabbiano2011}.
The null detection of binary black holes in the X-ray and very high resolution radio data
implies that the central massive object in the Abell 85 BCG is likely a single
entity. Should a binary AGN exist in the Abell 85 BCG, its detection would require very
long baseline interferometry (VLBI) given that the pair would be separated by less than
$\sim$400 pc. Using data obtained with the Very Long Baseline Array (VLBA), \citet{rodriguez2006}
discovered a supermassive binary black hole in the radio galaxy IVS 0402$+$379
(B1950)\footnote{IVS stands for International VLBI Service. The VLBA Calibrator Survey
gives the (J2000.0) name for this source: J0405+3803 \citep{beasley2002}.};
the projected separation between the two black holes is only 7.3 pc.\\
\section{SDSS J004150.75-091824.3 is a distant quasar}
Three star-like sources are present in the optical image within 14$\arcsec$ (or 15.05 kpc)
of the nucleus of the BCG. One of these sources is SDSS J004150.75-091824.3, see
Fig.~\ref{geminichandra}. On the other hand, in the X-ray image, SDSS J004150.75-091824.3
is the only point-like source within one arcminute of the center of Abell 85. With the
Gemini image we can derive an accurate position for SDSS J004150.75-091824.3 at R.A.=0:41:50.764
and dec=$-$9:18:24.4.
The core of Abell 85 has a diffuse and extended X-ray emission that envelops SDSS~J004150.75-091824.3.
The presence of this extended X-ray emission was misinterpreted by \citet{lopez2014} as an
indication that SDSS J004150.75-091824.3 was a low redshift AGN.
In Fig.~\ref{spectra}, the spectrum of SDSS J004150.75-091824.3 shows four prominent emission
lines: C\,{\sc iv}, He\,{\sc ii}, C\,{\sc iii}], and Mg\,{\sc ii}. These emission lines are
characteristic of quasar spectra \citep[e.g.,][]{wilkes1986, vandenberk2001}. With the Grantecan
spectrum the redshift of SDSS J004150.75-091824.3 is determined to be $z=1.5603\pm 0.003$.
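The redshift follows from the line centroids via $1+z=\lambda_{\rm obs}/\lambda_{\rm rest}$. In the sketch below, the observed Mg\,{\sc ii} centroid is a hypothetical value chosen to reproduce the quoted redshift, and the rest wavelengths are standard vacuum doublet means:

```python
# Line-based redshift: 1 + z = lambda_obs / lambda_rest. The observed
# Mg II centroid below is a HYPOTHETICAL measurement chosen to reproduce
# the quoted redshift; rest wavelengths are standard vacuum doublet means.
REST = {"CIV": 1549.06, "MgII": 2798.75}   # Angstrom

lam_obs_mgii = 7165.6                      # hypothetical centroid [Angstrom]
z = lam_obs_mgii / REST["MgII"] - 1.0
print(round(z, 4))                         # -> 1.5603

# Predicted observed wavelength of C IV at this redshift:
print(round(REST["CIV"] * (1.0 + z), 1))   # -> 3966.0
```

At this redshift the C\,{\sc iv} line lands well inside the optical window, consistent with its appearance in the spectrum.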
The spectrum of the Abell 85 BCG, also shown in Fig.~\ref{spectra}, has a prominent
H$\alpha~+$ [N\,{\sc ii}] 6584 emission line. The H$\alpha$ line has a [N\,{\sc ii}]
emission line on either side. The forbidden [S\,{\sc ii}] 6717/6730 doublet is also
clearly visible. The Fraunhofer absorption lines for magnesium and sodium are easily seen.
The above emission and absorption lines are used to confirm the redshift of the BCG to
be $z=0.05538 \pm 0.0004$. This value is in agreement with several earlier determinations
of the redshift of the Abell 85 BCG \citep[][among others]{nesci1990}.
It should be noted that the spectrum of the BCG is similar to the archival spectrum of this
galaxy available through the SDSS archive. The spectrum of the Abell 85 BCG is presented
here as a comparison with the spectrum of SDSS J004150.75-091824.3 but also as a baseline
spectrum for studies of this galaxy, e.g.,~\citet{edwards2016}.
\section{Mirage Supermassive Black Holes}
\subsection{The Effect of Seeing on Optical Surface Brightness Profiles}
The presence of a binary black hole in the Abell 85 BCG was postulated based on the
analysis of its surface brightness profile \citep{lopez2014}. However, the obvious effect
of poor seeing has been ignored in recent studies that attempt to use ground-based data
without sufficient resolution to study the surface brightness profile of the Abell 85 BCG.
This section highlights the risks of inferring the presence of a light deficit, or any
other nuclear property of a distant galaxy, using ground-based data.
The BCG of Abell 85 has been reported during recent years as having both a
light deficit and a light excess in its core \citep{lopez2014, bonfini2015, madrid2016, mehrgan2019}.
These apparently contradicting results are a clear example of the effects of data quality
on optical surface brightness profiles.
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[scale=0.4]{goodseeingv100.pdf}\\
\includegraphics[scale=0.4]{badseeingv100.pdf} \\
\end{tabular}
\caption{Surface brightness profile of a model galaxy observed with optimal seeing (\textit{top panel})
or with bad seeing (\textit{bottom panel}). The seeing is represented by a PSF that has been normalized.
Units are arbitrary.
\label{seeingconvolve}
}
\end{figure}
The effect of seeing on surface brightness profiles can be easily modeled, as we do
in this section. Atmospheric turbulence blurs and smears images obtained with ground
based telescopes. Poor seeing degrades and erases the innermost structure of surface brightness
profiles. This effect is not only intuitive but was also well characterized over four decades ago
by \citet{schweizer1979}, among others. More recently, \citet{cote2006} showed
that data with poor seeing miss the presence of structures in galactic nuclei.
Let us consider a galaxy whose light profile can be described by two main components:
a nuclear point spread function and an outer spheroid. The nuclear component can be
accurately modeled as a Gaussian and the main component can be represented by a
S\'ersic function \citep{sersic1968}.
The surface brightness profile of the above galaxy observed with a ground based
telescope can be evaluated by convolving its profile with the seeing. Here, seeing is
considered as the distorting effects of both atmospheric turbulence and telescope optics.
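This convolution experiment can be sketched in a few lines; the profile parameters and seeing widths below are arbitrary choices for illustration, not fits to any real galaxy:

```python
import numpy as np

# 1-D toy model of a galaxy light profile: an n = 1 Sersic spheroid plus
# a Gaussian nuclear component, observed through a Gaussian seeing PSF.
# All parameter values are arbitrary illustrations; units are arbitrary.
r = np.linspace(-30.0, 30.0, 2001)          # radius axis

def sersic(r, I_e=1.0, r_e=10.0, n=1.0):
    b_n = 2.0 * n - 1.0 / 3.0               # common approximation for b_n
    return I_e * np.exp(-b_n * ((np.abs(r) / r_e) ** (1.0 / n) - 1.0))

def gaussian(r, amp, sigma):
    return amp * np.exp(-0.5 * (r / sigma) ** 2)

profile = sersic(r) + gaussian(r, amp=5.0, sigma=0.8)   # intrinsic profile

def observe(intrinsic, seeing_sigma):
    """Convolve the intrinsic profile with a normalized Gaussian PSF."""
    psf = gaussian(r, 1.0, seeing_sigma)
    psf /= psf.sum()
    return np.convolve(intrinsic, psf, mode="same")

good = observe(profile, seeing_sigma=0.5)   # excellent seeing
bad = observe(profile, seeing_sigma=3.0)    # poor seeing

# Poor seeing suppresses the nuclear peak and flattens the inner slope:
center = r.size // 2
print(good[center] > bad[center])                  # -> True
print(good[center] - good[center + 100]
      > bad[center] - bad[center + 100])           # -> True
```

With realistic parameters, the same mechanism produces the flattened inner profile illustrated in Fig.~\ref{seeingconvolve}.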
\begin{figure*}
\epsscale{0.7}
\plotone{abell85sbnologv100.pdf}
\caption{Surface brightness profile of the Abell 85 BCG obtained with Gemini and the KPNO 0.9-m.
Two epochs of observations with Gemini are plotted: one epoch with excellent seeing (0.$\arcsec$56)
and a second obtained under poor weather conditions with a seeing of 1.$\arcsec$60.
\label{abell85sb}}
\end{figure*}
Fig.~\ref{seeingconvolve} shows the effect of seeing on the innermost regions of a surface
brightness profile. Observing a galaxy light profile with excellent seeing recovers
the actual profile with no distortions, as shown in the top panel of Fig.~\ref{seeingconvolve}.
Both the nuclear and spheroidal components are detected when the light profile is obtained
under ideal seeing conditions.
On the other hand, when the same light profile is observed with poor seeing, its central
part is entirely misrepresented, as shown in the bottom panel of Fig.~\ref{seeingconvolve}.
The central nuclear component is fully blurred by seeing effects. Under bad seeing conditions,
instead of detecting a central Gaussian component the slope of the profile becomes
flat toward the nucleus of the galaxy.
A surface brightness profile observed with bad seeing could give the illusion of a light deficit.
Poor seeing also alters the light profile, creating a break in its slope. These effects can mislead
observers into believing they have detected a depleted core, and its associated SMBH, when galaxies
are observed under poor seeing conditions.
\subsection{Optical observations of Abell 85}
\citet{lopez2014} used data obtained with the Kitt Peak National Observatory (KPNO) 0.9-meter
telescope to analyze the central surface brightness profile of the Abell 85 BCG. The seeing
for the KPNO 0.9-m data is reported to be 1.$\arcsec$67. \citet{bonfini2015} published
a detailed analysis of the surface brightness profile of Abell 85 using data obtained with
the Canada-France-Hawaii telescope (CFHT). The CFHT data analyzed by \citet{bonfini2015} has
subarcsecond resolution: 0.$\arcsec$74. \citet{bonfini2015} found that the light deficit
claimed by \citet{lopez2014} did not exist. Moreover, contrary to a light deficit, \citet{bonfini2015}
found a modest light excess in the core of the Abell 85 BCG.
\citet{madrid2016} obtained Gemini data of the Abell 85 BCG with a seeing of 0.$\arcsec$56.
\citet{madrid2016} confirmed the presence of a light excess in the core of this galaxy initially
found by \citet{bonfini2015}. The superior seeing of the Gemini data allowed \citet{madrid2016}
to detect an additional nuclear component within the inner 2 kpc of the center of the Abell 85 BCG.
This nuclear component is well resolved, with a FWHM of 0.$\arcsec$85. At the distance of Abell 85
this nuclear component has a size of $\sim$0.9 kpc \citep{madrid2016}.
The empirical effect of seeing on the central surface brightness profile of the Abell 85 BCG is shown
in Fig.~\ref{abell85sb}. This figure shows the light profile derived with Gemini data obtained
during two different epochs with different seeing conditions. Fig.~\ref{abell85sb} also shows
the KPNO 0.9-m data used by \citet{lopez2014}. Note that the true inner structure is lost to both
Gemini and KPNO 0.9-m under poor seeing conditions.
The Gemini data presented in Fig.~\ref{abell85sb} allow us to quantify the impact of seeing.
The surface brightness profile with excellent seeing and the Gemini profile with poor seeing
have a small, but measurable, difference of 0.01 mag~arcsec$^{-2}$ at $r=4.5$ kpc. This difference
increases to 0.02 mag~arcsec$^{-2}$ at $r=2.0$ kpc, and to 0.04 mag~arcsec$^{-2}$ at $r=1.0$ kpc. As shown in
Fig.~\ref{abell85sb}, the two Gemini profiles begin to diverge at about $\sim$1.5 kpc
from the nucleus of the galaxy.
The fact that the profile obtained with bad seeing fails to accurately record the true
profile of the galaxy at such large radii is crucial. \citet{lopez2014} claims that the
Abell 85 BCG has a cusp radius of $r_{\gamma}=4.57 \pm 0.06$ kpc. This value is within the
range where the surface brightness profile is affected by seeing effects. It is precisely
this cusp radius that is used to infer the presence of a supermassive black hole.
\section{Final remarks}
A new VLA map shows bipolar AGN jets aligned along the north-south direction in the Abell 85 BCG.
X-ray and radio images show no evidence of a binary black hole in the core of this galaxy.
The optical spectra shown here demonstrate that the Abell 85 BCG and SDSS J004150.75-091824.3 are
two entirely different objects with no relation whatsoever, other than their close projected
proximity in the sky.
Studies searching for light deficits associated with SMBHs in the inner regions
of surface brightness profiles must use high resolution data. Poor seeing distorts the central
region of any surface brightness profile creating the mirage hallmarks of a SMBH. When possible,
the ideal surface brightness profile should be created by combining HST data in the core
and ground based data on the outskirts of any given galaxy.
\acknowledgments
We thank Yuto Ichinohe of the Japanese Space Agency (JAXA) for providing the
exquisite X-ray data. We thank the referee for providing a prompt and constructive report.
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France.
Based on data from the GTC Public Archive at CAB (INTA-CSIC).
Based on observations obtained at the Gemini Observatory which is
operated by AURA under a cooperative agreement with the NSF on
behalf of the Gemini partnership.
The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc.
\software{Astropy \citep{astropy2013}, Matplotlib \citep{hunter2007}, Numpy \citep{vanderwalt2011},
Common Astronomy Software Applications (CASA) \citep{mcmullin2016}}
\facilities{Gemini, Chandra, VLA, Grantecan}
\bigskip
\section{Introduction}
Rapid progress in the fields of cavity and circuit quantum electrodynamics (QED) has made it possible to study how many-electron systems interact with quantum light. The interplay between photons
and electrons plays a key role in many fascinating processes in atoms inside optical cavities in cavity-QED~\cite{RaiBruHar2001,MabDoh2002,Walter2006}, or mesoscopic systems such as superconducting qubits and quantum dots embedded in transmission line resonators in circuit-QED \cite{Wallraff2004,Blais2004,Frey2012,Delbecq2011,Petersson2012,Liu2014}.
Furthermore, in the case of molecules, strong coupling of molecular states to microcavity photons has been achieved experimentally \cite{Schwartz2011,Ebbesen2016}, and the modification of photo-chemical landscapes and of charge and energy transport by cavity vacuum fields has been reported \cite{Hutchison2012,Orgiu2015,Zhong2017}. The experimental progress has triggered theoretical activity addressing the ``chemistry-in-cavity'' problem. Indeed, the corresponding processes cannot be captured properly within the usual classical approximation for the light, as the system now includes the quantum degrees of freedom of the photons, and electron-photon correlation enters as a new player influencing the electronic states of the system. In the last few years,
several theoretical approaches have been put forward to describe molecular systems in quantum cavities. These include both mapping to simplified few-level quantum optics models \cite{KowBenMuk2016,GalGarFei2015,GalGarFei2016,GalGarFei2017}, and the
cavity-QED generalizations of {\it ab initio} electronic structure methods, such as (TD)DFT~\cite{Tokatly2013PRL,Ruggenthaler2014PRA,Pellegrini2015PRL,R15,Flick2015,Flick2017a,FSR18,DFRR17}, or the Hedin equations framework in the Green functions theory \cite{TreMil2015}.
As long as the light can be treated classically,
the electronic many-body states are fully described by the non-relativistic electronic Schr\"odinger Equation (SE), i.e.,
\begin{equation}
\label{eq:ese}
\hat{H}_{e} \phi^j({\underline{\underline{\bf r}}})= E_j\phi^j({\underline{\underline{\bf r}}}),
\end{equation}
with the Hamiltonian
\begin{equation}
\label{eq:e_ucH}
\hat{H}_{e}=\hat{T}_e+\hat{V}+\hat{W}_{ee},
\end{equation}
where $\hat{T}_e$, $\hat{V}$ and $\hat{W}_{ee}$ are the usual kinetic energy, the external potential, and the Coulomb interaction energies of the electrons, respectively. The Hamiltonian $\hat{H}_{e}$ acts
on the electronic coordinates collectively denoted by ${{\underline{\underline{\bf r}}}} \equiv \{{\bf r}_1, {\bf r}_2, \ldots, {\bf r}_{N_e}\}$. At this level of theory only the electrons are treated quantum mechanically and only the coordinates of electrons
appear as arguments of the wavefunction. However, if the electronic system is embedded in a microcavity the presence of quantum electromagnetic degrees of freedom (photons) can modify the electronic states
significantly. The complete description of the quantum states of matter that is now coupled to the photons,
in principle, can be provided by the SE of the multi-component system of electrons and photons,
\begin{equation}
\label{eq:se}
\hat{H}_\text{tot} \Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}})= E_j\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}}),
\end{equation}
with the Hamiltonian
\begin{equation}
\label{H_tot}
\hat{H}_\text{tot}=\hat{H}_{e}+\hat{H}_{\text{EM}},
\end{equation}
which now includes an additional term, $\hat{H}_{\text{EM}}$, that results from the quantization of the electromagnetic field and properly accounts for the quantum features of the radiation field and the electron-photon correlation.
A detailed derivation of $\hat{H}_{\text{EM}}$ will be presented shortly in the following section. Here, the degrees of freedom of $N_p$ cavity modes are collectively represented
by ${{\underline{\underline{\bf q}}}} \equiv \{q_1, q_2, \ldots, q_{N_p}\}$. In the length gauge, the ``electromagnetic coordinate'' $q_{\alpha}$ corresponds to the amplitude of the electric displacement in the $\alpha$-mode of the cavity (see Section~\ref{section:QED_Ham} for more details).
A numerically exact solution of the complete electron-photon SE, Eq.~(\ref{eq:se}), can only be obtained for small systems; hence, an accurate description of the electronic
states of matter in the presence of photons requires an efficient treatment of the electronic many-body problem while accounting for the electron-photon correlation. Here, an important question is whether or not the information
on the electronic states that are coupled to photons can be obtained from pure electronic states ${\Phi^j({\underline{\underline{\bf r}}})}$ rather than the more complicated electron-photon states ${\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}})}$. And if yes, what Hamiltonian
gives such electronic states? How does it differ from the uncoupled electronic Hamiltonian of Eq.~(\ref{eq:e_ucH})?
In this work, we will address these questions and will demonstrate that the answer to the first question is indeed yes. By utilizing the Exact Factorization (EF) framework \cite{hunter_IJQC1975,GG_PTRSA2014,AMG_PRL2010}
we will show how pure electronic states, ${\Phi^j({\underline{\underline{\bf r}}})}$, can provide us with important information such as the exact electronic many-body densities and current densities equivalent to those obtained from the complete
electron-photon states ${\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}})}$. Furthermore, we present the additional purely electronic potentials that need to be included in the electronic Hamiltonian in order to account for the electron-photon correlation in a formally exact way. In addition, we derive analytical expressions for these potentials for the electronic states of a one-electron system in an asymmetric double-well potential coupled to
a resonant single-photon mode of a cavity, and we show how well the analytical expressions match the potential obtained from the numerical solution of the SE. We demonstrate that in the resonance regime the effective electronic potential for the excited states develops clear peak and step structures, which are responsible, respectively, for the polaritonic squeezing of the intra-well states and for the photon-assisted inter-well tunneling.
Here, we only study the stationary states. However our results have a direct relevance for understanding the dynamics of electron-photon systems and thus for the development of QED-TDDFT \cite{Tokatly2013PRL,Ruggenthaler2014PRA,Flick2015}. In fact, the step-and-peak structure of the electronic potential for the stationary excited states should show up in the effective time-dependent potential to account for the charge transfer processes supplemented with the photon emission/absorption.
\section{Quantization of electromagnetic field: Hamiltonian for cavity QED}
\label{section:QED_Ham}
The main object of the cavity/circuit QED is a system of non-relativistic
electrons interacting with electromagnetic modes of a microcavity. The QED regime assumes that both electrons and the electromagnetic
field are treated quantum mechanically. However,
to understand better the structure of the quantum theory
it is instructive to analyze first the classical dynamics of the system.
Our starting point is the Maxwell equations for the transverse part
of the electromagnetic field
\begin{eqnarray}
& &\nabla\times{\bf E}_{\perp} = -\frac{1}{c}\partial_{t}{\bf B},\label{Maxwell-EB-1}\\
& &\nabla\times{\bf B} \, \, \, = \frac{1}{c}\partial_{t}{\bf E}_{\perp}+\frac{4\pi}{c}{\bf j}_{\perp},\label{Maxwell-EB-2}
\end{eqnarray}
where ${\bf E}_{\perp}({\bf r},t)$ is the transverse electric field
with $\nabla\cdot{\bf E}_{\perp}=0,$ and ${\bf j}_{\perp}({\bf r},t)$
is the transverse part of electron current that enters as a source
of the radiation field. In general for $N_e$ electrons moving along
trajectories ${\bf r}_{j}(t)$ the current is defined as follows
\begin{equation}
{\bf j}({\bf r},t)=e\sum_{j=1}^{N_e}\dot{{\bf r}}_{j}(t)\delta({\bf r}-{\bf r}_{j}(t)).\label{current}
\end{equation}
In a typical cavity QED setup, the motion of electrons is bounded to
a region around some point ${\bf r}_{0}$ inside the cavity, which
is much smaller than the cavity size and thus much smaller than the
characteristic wavelength $\lambda$ of the field. The condition $|{\bf r}_{j}(t)-{\bf r}_{0}|\ll\lambda$
justifies the replacement of ${\bf r}_{j}\mapsto{\bf r}_{0}$ in the
arguments of the $\delta$-functions
\begin{equation}
{\bf j}({\bf r},t)=e\sum_{j=1}^{N_e}\dot{{\bf r}}_{j}(t)\delta({\bf r}-{\bf r}_{0})=\partial_{t}{\bf P}({\bf r},t),\label{current-P}
\end{equation}
where we introduced the polarization vector ${\bf P}({\bf r},t)=e{\bf R}(t)\delta({\bf r}-{\bf r}_{0})$
with ${\bf R}=\sum_{j=1}^{N_e}{\bf r}_{j}$ being the center-of-mass
coordinate of the electrons. This corresponds to the dipole approximation,
which is fulfilled with very high accuracy in most practical situations.
The transverse current entering the Maxwell equations is determined
by the transverse projection of the polarization vector
\begin{equation}
{\bf P}_{\perp}({\bf r},t)=e{\bf R}(t)\delta^{\perp}({\bf r}-{\bf r}_{0})=\frac{e}{4\pi}\nabla\times\left(\nabla\times\frac{{\bf R}(t)}{|{\bf r}-{\bf r}_{0}|}\right)\label{P-transverse},
\end{equation}
where we have used the identity ${\bf c}\,\delta^{\perp}({\bf r}-{\bf r}_{0})=\frac{1}{4\pi}\nabla\times\left(\nabla\times\frac{{\bf c}}{|{\bf r}-{\bf r}_{0}|}\right)$, which holds for any constant vector ${\bf c}$.
In a quantum theory, the Maxwell equations (\ref{Maxwell-EB-1}), (\ref{Maxwell-EB-2})
should become Heisenberg equations for the corresponding field operators.
The quantum operator algebra can be revealed by representing the classical
theory in a Hamiltonian form. To do so, we introduce a new electric
variable -- the displacement vector
\begin{equation}
{\bf D_{\perp}}={\bf E}_{\perp}+4\pi{\bf P}_{\perp},\label{D}
\end{equation}
and rewrite the Maxwell equations~(\ref{Maxwell-EB-1}), (\ref{Maxwell-EB-2})
as follows
\begin{eqnarray}
& &\partial_{t}{\bf B} \, \, \, \, = -c\nabla\times({\bf D}_{\perp}-4\pi{\bf P}_{\perp}),\label{Maxwell-DB-1}\\
& &\partial_{t}{\bf D}_{\perp} = c\nabla\times{\bf B}.\label{Maxwell-DB-2}
\end{eqnarray}
These equations demonstrate a clear Hamiltonian structure. Indeed,
by considering the standard energy of the transverse electromagnetic
field
\begin{eqnarray}
H_{\text{EM}} & = &\frac{1}{8\pi}\int d{\bf r}\left[{\bf E}_{\perp}^{2}+{\bf B}^{2}\right] \nonumber\\
& = &\frac{1}{8\pi}\int d{\bf r}\left[({\bf D}_{\perp}-4\pi{\bf P}_{\perp})^{2}+{\bf B}^{2}\right],\label{He-m}
\end{eqnarray}
and imposing the following commutation relations
\begin{equation}
[B^{i}({\bf r}),D_{\perp}^{j}({\bf r}')]=-i\,4\pi c \, \varepsilon^{ijk}\partial_{k}\delta({\bf r}-{\bf r}'),\label{BD-commut}
\end{equation}
we recover the Maxwell equations from the canonical Heisenberg equations
\begin{eqnarray}
& &\partial_{t}{\bf B} \, \, \, = i[H_{\text{EM}},{\bf D}_{\perp}],\label{Maxwell-H-1}\\
& &\partial_{t}{\bf D}_{\perp} = i[H_{\text{EM}},{\bf B}].\label{Maxwell-H-2}
\end{eqnarray}
An important outcome of this analysis is that the proper conjugated
Hamiltonian variables for the electromagnetic field are the magnetic
field ${\bf B}$ and the electric displacement ${\bf D}_{\perp}$.
Let us introduce cavity modes as a set of normalized transverse eigenfunctions
${\bf E}_{\alpha}({\bf r})$ of the wave equation inside a metallic
cavity $\Omega$
\begin{eqnarray*}
& &-c^{2}\nabla^{2}{\bf E}_{\alpha}({\bf r}) \, \, = \omega_{\alpha}^{2}{\bf E}_{\alpha}({\bf r}),\quad{\bf r}\in\Omega\\
& &({\bf n\times{\bf E}_{\alpha}})\vert_{\partial\Omega} = 0,
\end{eqnarray*}
where ${\bf n}$ is a unit vector normal to the cavity surface $\partial\Omega$.
Now all transverse functions in the Hamiltonian (\ref{He-m}) can
be expanded in the cavity modes
\begin{eqnarray}
{\bf D}_{\perp}({\bf r}) & = & \sum_{\alpha}d_{\alpha}{\bf E}_{\alpha}({\bf r}),\label{D-expansion}\\
{\bf B}({\bf r}) & = & \sum_{\alpha}b_{\alpha}\frac{c}{\omega_{\alpha}}\nabla\times{\bf E}_{\alpha}({\bf r}),\label{B-expansion}\\
{\bf P}_{\perp}({\bf r}) & = & e\sum_{\alpha}\left({\bf E}_{\alpha}({\bf r}_{0})\cdot{\bf R}\right){\bf E}_{\alpha}({\bf r}).\label{P-expansion}
\end{eqnarray}
Here the expansion coefficients $d_{\alpha}$ and $b_{\alpha}$ are,
respectively, the quantum amplitudes of the electric displacement
and the magnetic field in the $\alpha$-mode. Note that Eq.~(\ref{B-expansion}) ensures that the magnetic field satisfies the proper boundary condition $({\bf B}\cdot {\bf n})\vert_{\partial\Omega}= 0$. By inserting the above expansions into Eqs.~(\ref{He-m}) and~(\ref{BD-commut}) we obtain the following Hamiltonian
\begin{equation}
H_{\text{EM}}=\frac{1}{8\pi}\sum_{\alpha}\left[\left(d_{\alpha}-4\pi e{\bf E}_{\alpha}({\bf r}_{0})\cdot{\bf R}\right)^{2}+b_{\alpha}^{2}\right],\label{H-bd}
\end{equation}
and the commutation relations for the field amplitudes
\begin{equation}
[b_{\alpha},d_{\beta}]=-i4\pi\omega_{\alpha}\delta_{\alpha\beta}.\label{bd-commut}
\end{equation}
Finally we rescale the electric displacement and the magnetic field
amplitudes
\begin{equation}
d_{\alpha}=\sqrt{4\pi}\omega_{\alpha}q_{\alpha},\quad b_{\alpha}=\sqrt{4\pi}p_{\alpha},\label{pq-def}
\end{equation}
so that the new variables $q_{\alpha}$ and $p_{\alpha}$ satisfy
the standard coordinate-momentum commutation relations $[p_{\alpha},q_{\beta}]=-i\delta_{\alpha\beta}$,
while the Hamiltonian (\ref{H-bd}) reduces to that of a set of shifted
harmonic oscillators
\begin{equation}
H_{\text{EM}}=\frac{1}{2}\sum_{\alpha}\left[p_{\alpha}^{2}+\omega_{\alpha}^2\left(q_{\alpha}-\frac{\bm{\lambda}_{\alpha} \cdot \bf R}{\omega_{\alpha}}\right)^{2}\right],\label{Hem-PZW}
\end{equation}
where the ``coupling constant'' $\bm{\lambda}_{\alpha}$ is related
to the electric field of the $\alpha$-mode at the location of the
electron system
\begin{equation}
\bm{\lambda}_{\alpha}=\sqrt{4\pi}e{\bf E}_{\alpha}({\bf r}_{0}).\label{lambda}
\end{equation}
Equation (\ref{Hem-PZW}) corresponds to the description of quantum
electromagnetic field and the electron-photon coupling in a so called
Power-Zienau-Woolley (PZW) gauge \cite{PowZie1959,Woolley1971,BabLou1983}. The total Hamiltonian for the combined system of electrons and the field is a sum of $H_{\text{EM}}$ and the standard Hamiltonian of a non-relativistic
many-electron system
\begin{equation}
\hat{H}_\text{tot}=\hat{H}_{e}+\hat{H}_{\text{EM}}\label{H-fin},
\end{equation}
where $\hat{H}_{e}$ is the electronic Hamiltonian of Eq.~(\ref{eq:e_ucH})
in the absence of the photon field. This Hamiltonian is commonly used
as the starting point in the first-principles approaches to the cavity
QED \cite{Tokatly2013PRL,Ruggenthaler2014PRA,Pellegrini2015PRL,Flick2015,Flick2017a}.
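The reduction of the field Hamiltonian to the shifted oscillators of Eq.~(\ref{Hem-PZW}) is straightforward to verify numerically: for a frozen electronic coordinate, diagonalizing $\frac{1}{2}p^2+\frac{1}{2}\omega^2(q-\bm{\lambda}\cdot{\bf R}/\omega)^2$ on a grid must return the spectrum $\omega(n+\frac{1}{2})$ independently of the shift. A minimal finite-difference sketch (the grid sizes, frequency, and shift are our own illustrative choices; atomic units):

```python
import numpy as np

def mode_spectrum(omega, shift, n_grid=801, q_max=12.0):
    """Eigenvalues of H = -(1/2) d^2/dq^2 + (1/2) omega^2 (q - shift)^2,
    discretized with second-order central differences on a uniform grid."""
    q = np.linspace(-q_max, q_max, n_grid)
    dq = q[1] - q[0]
    kin = (2 * np.eye(n_grid)
           - np.eye(n_grid, k=1)
           - np.eye(n_grid, k=-1)) / (2 * dq**2)
    pot = np.diag(0.5 * omega**2 * (q - shift)**2)
    return np.linalg.eigvalsh(kin + pot)

omega = 0.4
for shift in (0.0, 1.5):   # the shift lambda.R/omega drops out of the spectrum
    print(np.round(mode_spectrum(omega, shift)[:4], 4))   # ~ omega*(n + 1/2)
```

Once ${\bf R}$ is promoted to a dynamical electronic variable the shift no longer drops out; it is precisely what generates the electron-photon coupling in Eq.~(\ref{H-fin}).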
\section{Exact factorization of the complete electron-photon wavefunction}
The framework of the exact factorization (EF) for static \cite{GG_PTRSA2014,cederbaum_jcp2013,hunter_IJQC1975} and time-dependent problems \cite{AMG_PRL2010,AMG_JCP2012,AMG_JCP2013} was originally developed to go beyond
the Born-Oppenheimer treatment of multicomponent systems of electrons and nuclei. Consequently, the original presentation of the framework provides an exact separation of the complete electron-nuclear wavefunction
as a product of a marginal nuclear wavefunction and a conditional electronic wavefunction that parametrically depends on the nuclear configuration. As there is no approximation involved in developing this framework and
the two subsystems are treated on the same footings, in principle, the EF can be extended to exactly factorize any multicomponent many-body wavefunction. In general, the choice of marginal and conditional wavefunctions
is arbitrary and depends on the setting of the problem and its applications. Within the EF approach, the expressions of the coupling potentials
that account for the exact correlation between the two subsystems are given explicitly, and the conditional wavefunction satisfies a partial normalization condition. While the equation of motion (EoM) of the marginal wavefunction has the appealing form of a (TD)SE that includes a scalar and a
vector potential, the EoM of the conditional wavefunction is non-linear and depends on both conditional and marginal wavefunctions.
The EF approach has grown steadily over the past couple of years and has been implemented for fundamental investigations and method developments in various fields such as molecular dynamics~\cite{AASG_PRL2013,AASG_MP2013,AAG_EPL2014,AAG_JCP2014,AASMMG_JCP2015,SAG_JPCA2015,AMAG_JCTC2016,CA_JPCL2017,MATG_JPCL2017,SASGV_PRX2017,HLM_JPCL2018},
geometric phases~\cite{MAKG_PRL2014,RTG_PRA2016,RG_PRL2016} and strong-field dynamics~\cite{AMG_PRL2010,AMG_JCP2012,SAMYG_PRA2014,SAMG_PCCP2015,KAM_PRL2015,KARM_PCCP2017,SG_PRL2017}.
In this section, we present a generalization of the EF approach for the problem of correlated electron-photon states. As our derivation follows closely the procedure given in~\cite{GG_PTRSA2014} for the correlated electron-nuclear
states, here we only present the final outcomes of the derivation and refer the readers to the reference~\cite{GG_PTRSA2014} for more details~\footnote{After submitting
this manuscript we became aware of a recent unpublished work on the dynamical aspects of the light-matter interaction using the time-dependent EF framework in~\cite{HARM18}.}.
Within the EF framework, the ($j$-th) correlated electron-photon state that is an {\it exact} eigenstate of the complete electron-photon SE~(\ref{eq:se}), $\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}})$, can be written as a single product,
of an electronic wavefunction, $\Phi^j({\underline{\underline{\bf r}}})$, and a photonic wavefunction parameterized by the electronic coordinates, $\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})$, i. e.,
\begin{equation}
\label{eq:EF1}
\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}})=\Phi^j({\underline{\underline{\bf r}}})\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}}),
\end{equation}
that satisfies the partial normalization condition (PNC)
\begin{equation}
\label{eq:PNC}
\int d{\underline{\underline{\bf q}}} \, \vert\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})\vert^2=1 ~\text {for every}~~{\underline{\underline{\bf r}}}.
\end{equation}
Here, we emphasize again the vital role of the PNC in making this product physically meaningful. Indeed, one can come up with many different decompositions that satisfy Eq.~(\ref{eq:EF1}) but {\it do not}
fulfill the PNC. As discussed previously, for instance in Ref.~\cite{AMG_JCP2013}, it is the PNC that makes the decomposition physically meaningful and unique up to a gauge-like transformation, and that allows
$\Phi^j({\underline{\underline{\bf r}}})$ and $\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})$ to be interpreted as a marginal and a conditional probability amplitude, leading to their identification as electronic and photonic wavefunctions. Here, it is important to
note that unlike the electron that is a subatomic particle, the photon is an excitation of the quantized electromagnetic radiation in the cavity. Therefore, the concept of photonic wavefunction used here is meant to describe
the electric displacement amplitude of the radiation field in the cavity (see Section~\ref{section:QED_Ham}).
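Given any correlated wavefunction on a grid, the two factors obeying the PNC can be constructed explicitly: $|\Phi({\underline{\underline{\bf r}}})|=\left(\int|\Psi|^2 d{\underline{\underline{\bf q}}}\right)^{1/2}$ and $\chi_{{\underline{\underline{\bf r}}}}=\Psi/\Phi$. A toy numerical sketch for one electronic and one photonic coordinate, using an arbitrary correlated Gaussian (not an eigenstate of any particular Hamiltonian; all parameters are illustrative):

```python
import numpy as np

# toy correlated "electron-photon" wavefunction Psi(x, q) on a grid
x = np.linspace(-6.0, 6.0, 241)
q = np.linspace(-6.0, 6.0, 241)
dx, dq = x[1] - x[0], q[1] - q[0]
X, Q = np.meshgrid(x, q, indexing="ij")
psi = np.exp(-0.5 * (X**2 + Q**2 + 0.8 * X * Q))   # correlation via the X*Q term
psi /= np.sqrt(np.sum(psi**2) * dx * dq)           # normalize to 1

# exact factorization in the real positive gauge:
phi = np.sqrt(np.sum(psi**2, axis=1) * dq)         # marginal |Phi(x)|
chi = psi / phi[:, None]                           # conditional chi_x(q)

pnc = np.sum(chi**2, axis=1) * dq                  # partial normalization condition
print(pnc.min(), pnc.max())                        # ~ 1.0 for every x
```

By construction $|\Phi(x)|^2$ reproduces the marginal density and $\int|\chi_x(q)|^2\,dq=1$ holds pointwise, which is exactly what makes the two factors interpretable as probability amplitudes.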
It can be proved~\cite{GG_PTRSA2014} that the photonic conditional wavefunction, $\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})$, satisfies
\begin{equation}
\label{eq:exact_photon}
\hat{H}^{ph,j}_{{\underline{\underline{\bf r}}}}\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})= V^j_{e-ph}({\underline{\underline{\bf r}}}) \chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}}),
\end{equation}
with the photonic Hamiltonian
\begin{equation}
\label{eq:photon_ham}
\begin{split}
\hat{H}^{ph,j}_{{\underline{\underline{\bf r}}}} = \hat{H}_{\text{EM}} &+ \sum_{k=1}^{N_e}\frac{1}{m} \Big[\frac{(-i\nabla_k-{\bf S}^j_k({\underline{\underline{\bf r}}}))^2}{2} \\
&+ \Big(\frac{-i\nabla_k \Phi^{j}}{\Phi^{j}}+{\bf S}^j_k({\underline{\underline{\bf r}}})\Big)\left(-i\nabla_k-{\bf S}^j_k({\underline{\underline{\bf r}}})\right)\Big],
\end{split}
\end{equation}
while the electronic wavefunction $\Phi^j({\underline{\underline{\bf r}}})$ satisfies a Schr\"odinger-like equation:
\begin{eqnarray}
\label{eq:exact_e}
\Bigl(\sum_{k=1}^{N_e}\frac{1}{2m}(-i\nabla_k+{\bf S}^j_k({\underline{\underline{\bf r}}}))^2 +\hat{V}({\underline{\underline{\bf r}}})+\hat{W}_{ee}({\underline{\underline{\bf r}}}) &+& \nonumber \\
V^j_{e-ph}({\underline{\underline{\bf r}}})\Bigr)\Phi^j({\underline{\underline{\bf r}}})&=&E_j \Phi^j({\underline{\underline{\bf r}}}),\nonumber \\
\end{eqnarray}
where $m$ is the electronic mass. As it can be seen in Eq.~(\ref{eq:exact_e}), as a result of the coupling to the cavity photons, the electronic subsystem contains two additional potentials compared to the independent uncoupled electronic SE~(\ref{eq:ese}).
The influence of electron-photon correlation on the $j$-th electronic state is formally exactly taken care of by addition of a scalar potential,
\begin{equation}
\label{eq:exact_epes}
V^j_{e-ph}({\underline{\underline{\bf r}}}) = \left\langle\chi^j_{{\underline{\underline{\bf r}}}} \right\vert\hat{H}^{ph,j}_{{\underline{\underline{\bf r}}}} \left\vert\chi^j_{{\underline{\underline{\bf r}}}}\right\rangle_{\underline{\underline{\bf q}}},
\end{equation}
and a vector potential,
\begin{equation}
\label{eq:exact_evect}
{\bf S}^j_k({\underline{\underline{\bf r}}})=\left\langle\chi^j_{{\underline{\underline{\bf r}}}}\right\vert\left.-i\nabla_k\chi^j_{\underline{\underline{\bf r}}} \right\rangle_{\underline{\underline{\bf q}}},
\end{equation}
to the uncoupled electronic SE~(\ref{eq:ese}). Here, $\langle ...|...|...\rangle_{\underline{\underline{\bf q}}}$ denotes an inner product over all photonic variables only.
Similar to the other extensions of the EF framework, the marginal electronic wavefunction and the conditional photonic wavefunction and their corresponding equations have the
following properties:
\begin{itemize}
\item {Eqs.~(\ref{eq:exact_photon})--(\ref{eq:exact_e}) are form-invariant under
the following gauge-like transformation,
\begin{equation}
\label{eq:gt_wfs}
\begin{split}
&\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})\rightarrow\tilde{\chi}^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})=\exp(i\theta_j({\underline{\underline{\bf r}}}))\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}} ) \\
&\Phi^j({\underline{\underline{\bf r}}} )\rightarrow\tilde{\Phi}^j({\underline{\underline{\bf r}}} )=\exp(-i\theta_j({\underline{\underline{\bf r}}} ))\Phi^j({\underline{\underline{\bf r}}}).
\end{split}
\end{equation}
}
\item{The scalar potential given in Eq.~(\ref{eq:exact_epes}) is also gauge invariant under the above-mentioned gauge transformation while
the vector potential is transformed as
\begin{equation}
{\bf S}^j_k({\underline{\underline{\bf r}}})\rightarrow\tilde{\bf S}^j_k({\underline{\underline{\bf r}}})={\bf S}^j_k({\underline{\underline{\bf r}}})+\nabla_k\theta_j({\underline{\underline{\bf r}}}).
\end{equation}
}
\item{The wavefunctions $\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})$ and $\Phi^j({\underline{\underline{\bf r}}})$ are unique up to this $({\underline{\underline{\bf r}}})$-dependent gauge transformation and yield the given solution, $\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}} )$,
of Eq.~(\ref{eq:se}).}
\item{The electronic wavefunction, $|\Phi^j({\underline{\underline{\bf r}}} )|^{2}=\int |\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}} )|^{2}d{\underline{\underline{\bf q}}}$, gives the probability density of finding
the electronic configuration ${\underline{\underline{\bf r}}}$ of the $j$-th correlated electron-photon state and the photonic conditional wavefunction,\\ $|\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}} )|^{2}=|\Psi^j({\underline{\underline{\bf r}}},{\underline{\underline{\bf q}}} )|^{2}/|\Phi^j({\underline{\underline{\bf r}}} )|^{2}$,
provides the conditional probability of finding the displacement amplitudes of the cavity at ${\underline{\underline{\bf q}}}$ for a given electronic configuration ${\underline{\underline{\bf r}}}$. Furthermore, the exact electronic $N_e$-body current-density can be obtained
from $\Im (\Phi^{j*}\nabla_k\Phi^j)+|\Phi^j({\underline{\underline{\bf r}}})|^{2}{\bf S}^j_k$. Therefore, $\chi^j_{{\underline{\underline{\bf r}}}}({\underline{\underline{\bf q}}})$ and $\Phi^j({\underline{\underline{\bf r}}})$ can be interpreted as photonic and electronic wavefunctions.}
\end{itemize}
One of the main results of this work is Eq.~(\ref{eq:exact_e}), which can be regarded as the {\it exact electronic equation} for the $j$-th electronic state of the correlated electron-photon system:
the Hamiltonian formed by adding the scalar potential (Eq.~(\ref{eq:exact_epes})) and the vector potential (Eq.~(\ref{eq:exact_evect})), which are {\it unique} up to
a gauge transformation, to the uncoupled electronic Hamiltonian provides us with the $j$-th electronic state, $\Phi^j({\underline{\underline{\bf r}}})$, that yields the true electronic ($N_e$-body) density and current density
of the full electron-photon problem.
The scalar electron-photon ({\it e-ph}) correlation potential~(\ref{eq:exact_epes}) can also be written as
\begin{eqnarray}
\label{eq:exact_epes2}
V^j_{e-ph}({\underline{\underline{\bf r}}}) &=& \left\langle\chi^j_{{\underline{\underline{\bf r}}}} \right\vert\hat{H}_\text{EM}({\underline{\underline{\bf q}}},{\underline{\underline{\bf r}}}) \left\vert\chi^j_{{\underline{\underline{\bf r}}}}\right\rangle_{\underline{\underline{\bf q}}}\nonumber\\
&+&\frac{1}{2m}\sum_{k=1}^{N_e}\left(\left\langle\nabla_k\chi^j_{{\underline{\underline{\bf r}}}}\vert\nabla_k\chi^j_{{\underline{\underline{\bf r}}}}\right\rangle_{\underline{\underline{\bf q}}}-{\bf S}^j_k({\underline{\underline{\bf r}}})^2\right),
\end{eqnarray}
which for our purposes here has a more convenient form.
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{\includegraphics{DWePot.pdf}}
\end{center}
\caption{Asymmetric double-well potential (red) together with its ground-state density (black solid-line) and its 1st-excited state density (blue solid-line) as well as the electronic ground-state (green dashed-line)
and 1st-excited state (magenta dashed-line) densities of the complete electron-photon system. The densities have been enlarged four times. The acronyms ``c'' and ``uc'' in the plot labels stand for ``coupled'' and ``uncoupled'',
respectively.}
\label{fig:fig1}
\end{figure}
\section{Example: photon-assisted delocalization of the electronic states}
In this section, we investigate a particular situation in which the character of the electronic excited states of the system undergoes a fundamental change through electron-photon coupling,
i. e. they become delocalized as a result of the coupling to a cavity mode with a resonance frequency. Our study is based on a model system in which
we consider a single electron in an Asymmetric Double-Well (ADW) potential
\begin{equation}
\label{eq:ADWpot}
V(x)=\frac{1}{2} \omega_e^2 (|x|-a)^2 + E x,
\end{equation}
with the Hamiltonian
\begin{equation}
\label{eq:DWeH}
\hat{H}^\text{ADW}_{e} = -\frac{1}{2}\frac{\partial^2}{\partial x^2} + V(x).
\end{equation}
Here, $\omega_e=1.6$~a.u., $a=2.35$~a.u., the static electric field $E=0.08$~a.u., and the electronic mass $m$ is set to $1$. These parameters are chosen such that this electronic system is
practically a two-level system. In Fig.~\ref{fig:fig1} the asymmetric double-well potential (Eq.~\ref{eq:ADWpot}) and the first two electronic states are shown.
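The two-level character of the model can be checked by direct diagonalization: the wells are harmonic with frequency $\omega_e$ and offset by $\pm Ea$, so the lowest splitting should lie close to $2Ea\approx0.376$~a.u., far below the intra-well spacing $\omega_e$. A finite-difference sketch (the grid parameters are our own choice; atomic units):

```python
import numpy as np

# 1D asymmetric double well: V(x) = 0.5*omega_e^2*(|x|-a)^2 + E*x
omega_e, a, E_field = 1.6, 2.35, 0.08

x = np.linspace(-7.0, 7.0, 1201)
dx = x[1] - x[0]
V = 0.5 * omega_e**2 * (np.abs(x) - a)**2 + E_field * x

# central-difference kinetic energy plus diagonal potential
n = x.size
T = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx**2)
ev = np.linalg.eigvalsh(T + np.diag(V))

# wells offset by +-E*a => splitting of the two lowest levels ~ 2*E*a
print(ev[:3], ev[1] - ev[0])
```

The third level lies roughly $\omega_e$ higher, confirming that the low-energy physics is effectively two-level.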
Due to the asymmetric nature of the potential, the electronic ground state and the first excited state are each localized in one of the wells centered at $\pm a$, with a relatively small overlap; hence, the probability of inter-well tunneling is very small. When this system is coupled to a single-photon mode of a cavity with frequency $\omega_{c}$ and coupling constant $\lambda_{c}$, the full description
is given by the electron-photon Hamiltonian
\begin{equation}
\label{eq:Ham_DWpC}
\hat{H}_\text{tot} = \hat{H}^\text{ADW}_{e} + \hat{H}_\text{EM},
\end{equation}
that now contains
\begin{equation}
\label{eq:Hq}
\hat{H}_\text{EM} = \frac{-1}{2}\frac{\partial^2}{\partial q^2}+\frac{1}{2}\omega_{c}^2\left(q-\frac{\lambda_{c}}{\omega_{c}}x\right)^{2},
\end{equation}
as a result of the quantization of the electromagnetic field and the subsequent electron-photon coupling (\ref{Hem-PZW}). If the frequency of the cavity photons
is tuned to bring the first two electronic states of the asymmetric double well into resonance, the primarily localized electronic excited state becomes delocalized (see Fig.~\ref{fig:fig1}), independent of the value of the coupling constant. The resonance frequency depends on $\lambda_c$: for small $\lambda_c$, $\omega_c\sim 2 E a$, while for larger coupling constants the $\lambda_c$-dependence becomes more pronounced. Table~\ref{table:T1} lists the resonance frequencies corresponding to the three values of $\lambda_c$ considered in this work.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$\lambda_c$ & 0.1 & 0.5 & 0.9 \\
\hline
$\omega_c$ & 0.37676260 & 0.39495042 & 0.43442993 \\
\hline
\end{tabular}
\caption{The coupling constants and the corresponding resonance frequencies used in this work, in atomic units.}
\label{table:T1}
\end{table}
In the following, we investigate how this photon-assisted delocalization of the electronic excited state can be captured by adding the {\it e-ph} correlation potential~(\ref{eq:exact_epes2}) to the electronic
Hamiltonian~(\ref{eq:DWeH}). Here, we are not aiming at solving the EF equations~(\ref{eq:exact_photon})--(\ref{eq:exact_e}). In fact, these coupled equations need to be solved self-consistently, which appears to be even more
complicated than solving the full SE~(\ref{eq:se}). Therefore, we solve the full SE~(\ref{eq:se}) and, similar to our previous studies~\cite{AMG_PRL2010,AMG_JCP2013,AASG_PRL2013}, extract the numerically exact
{\it e-ph} potential by inverting the full SE. In addition, we derive an approximate analytical expression for the exact {\it e-ph} correlation potential~(\ref{eq:exact_epes}) and show how the step and peak features of the
{\it e-ph} correlation potential are captured analytically. These features lead to the polaritonic localization of the electronic ground state and the delocalization of the electronic excited states.
For the one dimensional model (Eq.~\ref{eq:DWeH}), the {\it e-ph} correlation potential reads
\begin{equation}
\label{eq:exact_epes_1d}
V^j_{e-ph}(x) = \left\langle\chi^j_{x} \right\vert\hat{H}_\text{EM}\left\vert\chi^j_{x}\right\rangle_q+\frac{1}{2m}\left\langle\frac{\partial\chi^j_{x}}{\partial x}\vert\frac{\partial\chi^j_{x}}{\partial x}\right\rangle_q,
\end{equation}
while the vector potential can be gauged away~\cite{AMG_JCP2012}.
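A simple limit makes the structure of this one-dimensional potential transparent: for $\lambda_c=0$ the exact eigenstates factorize as $\Psi(x,q)=\phi(x)\xi_0(q)$, the conditional factor is $x$-independent, the kinetic-like term drops out, and $V^j_{e-ph}$ must reduce to the constant vacuum energy $\omega_c/2$. A numerical sketch of this sanity check (the electronic factor $\phi$ and all parameters are toy choices; atomic units):

```python
import numpy as np

omega_c = 0.4
x = np.linspace(-4.0, 4.0, 161)
q = np.linspace(-10.0, 10.0, 401)
dq = q[1] - q[0]

# uncoupled (lambda_c = 0) eigenstate: Psi(x, q) = phi(x) * xi_0(q)
phi = np.exp(-0.5 * x**2) / np.pi**0.25
xi0 = (omega_c / np.pi)**0.25 * np.exp(-0.5 * omega_c * q**2)
psi = phi[:, None] * xi0[None, :]

chi = psi / phi[:, None]                   # conditional factor (= xi_0 for every x)

# <chi_x| -1/2 d^2/dq^2 + 1/2 omega_c^2 q^2 |chi_x>, evaluated on the grid
d2chi = np.gradient(np.gradient(chi, dq, axis=1), dq, axis=1)
hq_chi = -0.5 * d2chi + 0.5 * omega_c**2 * q[None, :]**2 * chi
V_eph = np.sum(chi * hq_chi, axis=1) * dq  # the d(chi)/dx term vanishes here
print(V_eph.min(), V_eph.max())            # ~ omega_c/2 = 0.2 for every x
```

Any $x$-dependence of $V_{e-ph}$ beyond this constant is therefore a genuine signature of electron-photon correlation.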
\subsection{Combining atomic orbitals and displaced harmonic oscillator basis}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{Schematic-2harm-new2.pdf}
\end{center}
\caption{(Left) Schematic representation of the displaced harmonic oscillator basis with the usual eigenstates. Eigenstates with energy quantum number $N$ on the left are degenerate with eigenstates with energy
quantum number $N-1$ on the right. (Right) Schematic representation of the (diabatic) approximation in which the $N$-th state of the displaced oscillator on the left is allowed to mix with the $(N-1)$-th state of the displaced
oscillator on the right (similar to the symmetric case of Irish et al.~\cite{irish2005dynamics}).}
\label{fig:fig2}
\end{figure}
In order to work out an accurate analytical approximation for the complete electron-photon wavefunction, $\Psi^j(x,q)$, that is the $j$-th eigenstate of Eq.~(\ref{eq:Ham_DWpC}), we first expand it in a basis set. Given the fact that
the uncoupled electronic system is practically a two-level system with two localized states, it can be accurately described by localized atomic orbitals (AO) that are the ground states of the harmonic potentials
centered at $\pm a$, i. e.,
\begin{equation}
\label{eq:AO_se}
\left[\hat{T}_e+\hat{V}^{\pm}_{at}\right]\phi^{\pm}(x) = E^{\pm} \phi^{\pm}(x),
\end{equation}
where $\hat{V}^{\pm}_{at}=\frac{1}{2}\omega_e^2\left(x \mp a\right)^2$. To form our basis set we combine these atomic orbitals with the so-called displaced harmonic oscillator (DHO) basis. The DHO basis consists of
the eigenstates of two harmonic oscillator Hamiltonians centered at $\pm \frac{a\lambda_c}{\omega_c}$, i. e.,
\begin{equation}
\hat{H}_q^{\pm} \xi^{\pm}_N(q) = V_{\textrm{ad}}^{N \pm } \xi^{\pm}_N (q)
\end{equation}
where $\hat{H}_q^{\pm} = \frac{-1}{2}\frac{\partial^2}{\partial q^2}+\frac{1}{2} \omega_{c}^{2}\left( q\, \mp \frac{\lambda_c}{\omega_c} a\right) ^2 \pm\Delta$ and
$V_{\textrm{ad}}^{N \pm } = \pm \Delta + (N + \frac{1}{2}) \omega_c$~(with $\Delta=E\, a$). We then choose the localized basis as products of the AO on the left (right) and the DHO states on the left (right), $\{\xi^{-}_N (q) \phi^{-}(x)\}$ ($\{\xi^{+}_N (q) \phi^{+}(x)\}$), and expand the full wavefunction in terms of them, i. e.,
\begin{equation}
\label{eq:expansion}
\Psi(x,q) = \sum_{N=0}^{\infty} \left[ A _{N}^{-} \xi^{-}_N (q) \phi^{-}(x) + A _{N}^{+} \xi^{+}_N (q) \phi^{+}(x) \right],
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Exact_eeps_edens_GS_final.pdf}
\end{center}
\caption{Right: ground-state full electronic potential, including the {\it e-ph} correlation potential, for various $\lambda_c$ values as indicated on the plots, calculated from the numerically exact solution of the SE (top) and
from our analytical approximation (bottom). The asymmetric double-well potential of the uncoupled electronic system is plotted (black dashed line) in both panels for reference. Left: difference between the
electronic density of the uncoupled electronic system and the electronic density of the coupled electron-photon system, calculated from the numerically exact solution of the SE (top) and from our Analytical Approximation (AN) (bottom),
for various $\lambda_c$ values as indicated on the plots.}
\label{fig:fig3}
\end{figure}
where $A _{N}^{-}$ ($A _{N}^{+}$) are the expansion coefficients of the left (right) states. This is analogous to the expansion of the full molecular wavefunction in terms of a diabatic basis.
Our choice of basis was inspired by the work of Irish et al.~\cite{irish2005dynamics}, in which the DHO basis was implemented to solve the two-level Rabi model.
We now insert the expansion~(\ref{eq:expansion}) into the complete electron-photon SE~(\ref{eq:se}) with the Hamiltonian~(\ref{eq:Ham_DWpC}) and project onto ${\langle \xi^{-}_M}\phi^{-}\vert$ (and ${\langle \xi^{+}_M}\phi^{+}\vert$),
which leads to
\begin{eqnarray}
\label{eq:projection}
& &A_{M}^{-} \left[ \alpha + \hat{V}_{\textrm{add}}^{M-} \right]+\nonumber\\
& &\sum_{N=0}^{\infty} A _{N}^{+} \left[ \beta \langle\xi^{-}_M(q) \vert \xi^{+}_N(q)\rangle_{q} + \hat{V}_{\textrm{add}}^{N+} \, \langle\xi^{-}_M(q) \vert \xi^{+}_N(q)\rangle_{q} S \right] \nonumber\\
& & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,= E \left [A_M^{-} + \sum_{N=0}^{\infty} A _N^{+} \, \langle\xi^{-}_M(q) \vert \xi^{+}_N(q)\rangle_{q} S \right],\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}
\alpha &=& \langle \phi^{\pm}(x)\vert \left[ \hat{T}_e + \frac{1}{2} \omega_e^2 (|x|-a)^2 \right] \vert \phi^{\pm}(x)\rangle, \nonumber\\
\beta &=& \langle \phi^{\pm}\vert \left[ \hat{T}_e + \frac{1}{2} \omega_e^2 (|x|-a)^2 \right] \vert \phi^{\mp} \rangle,\nonumber \\
S &=& \langle \phi^{-}\vert \phi^{+} \rangle,
\end{eqnarray}
are the typical elements of the LCAO technique. The outcome of the projection onto ${\langle \xi^{+}_M}\phi^{+}\vert$ is the symmetric counterpart of Eq.~(\ref{eq:projection}).
In Fig.~\ref{fig:fig2} we present the DHO basis schematically. As can be seen in the figure, at the resonance frequencies the states of the left DHO with energy quantum number $N^{-}$ are
degenerate with the states of the right DHO with energy quantum number $(N-1)^{+}$. In this work we only allow for the mixing of these degenerate states and neglect the coupling to all other states,
which can be regarded as an adiabatic approximation. We discuss how to go beyond this limit in a separate publication~\cite{AKT_IP2018}.
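The LCAO overlap $S = \langle \phi^{-}\vert \phi^{+} \rangle$ has a simple closed form for the Gaussian AOs of Eq.~(\ref{eq:AO_se}): with unit mass and well frequency $\omega_e$, one finds $S = e^{-\omega_e a^2}$. A short numerical check of this (Python; $\omega_e = a = 1$ are hypothetical illustrative values):

```python
import numpy as np

omega_e, a = 1.0, 1.0  # hypothetical parameters

x = np.linspace(-15.0, 15.0, 30001)

def phi(c):
    # ground state of the harmonic well centered at c (hbar = mass = 1)
    return (omega_e / np.pi) ** 0.25 * np.exp(-omega_e * (x - c) ** 2 / 2.0)

S_numeric = np.trapz(phi(-a) * phi(+a), x)   # quadrature overlap
S_closed = np.exp(-omega_e * a**2)           # closed form
print(S_numeric, S_closed)
```

The exponential decay of $S$ with $\omega_e a^2$ is what makes the two-level (localized-AO) picture accurate for well-separated wells.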
\subsubsection{Ground state}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Exact_eeps_edens_es1.pdf}
\end{center}
\caption{First excited-state electronic densities (blue solid line) for three different $\lambda_c$ values (indicated on the plots), together with their corresponding full electronic potentials (green solid line), obtained from
the numerical integration of the electron-photon SE~(\ref{eq:se}). The electronic potential of the uncoupled electronic system (asymmetric double well) is plotted (red dashed line) as a reference. Step and peak features
are highlighted on all the plots.}
\label{fig:fig4}
\end{figure}
The full electronic potential of the EF framework for the ground state, $V(x)+V^{0}_{e-ph}(x)$, with the {\it e-ph} correlation potential~(\ref{eq:exact_epes2}) obtained from the numerically exact solution of the SE
of the combined system~(\ref{eq:se}), is plotted in Fig.~\ref{fig:fig3} (top-right) for $\lambda_c = 0.1,\, 0.5,\, 0.9$. For the smallest coupling constant, $\lambda_c=0.1$, the
full electronic potential differs only slightly from the asymmetric double-well potential of the uncoupled electronic system. As $\lambda_c$ increases, the well on the right side is lifted up and both wells are squeezed,
leading to a polaritonic squeezing of the electronic states relative to those of the uncoupled electronic system, as is clear in the top-left panel of Fig.~\ref{fig:fig3}: the larger the coupling
constant, the more squeezed the electronic density.
We now turn to our approximate evaluation of the {\it e-ph} correlation potential to see whether, and how, this effect is reflected in our approximation.
Due to the asymmetry of the potential in Hamiltonian~(\ref{eq:DWeH}), there is no state on the right side ($+$) in resonance with the localized state $\phi^{-}(x)\xi^{-}_{0} (q)$ on the left side. Therefore, the conditional
photonic wavefunction of the ground state can be approximated as $\xi^{-}_{0} (q)$. The corresponding {\it e-ph} correlation potential reads
\begin{equation}
\label{eq:epes_gs}
V^{0}_{e-ph}(x) = \left\langle\xi^{-}_{0} (q) \right\vert\hat{H}_\text{EM}\left\vert\xi^{-}_{0} (q)\right\rangle_q = \frac{\omega_c}{2} + \frac{\lambda_c^2}{2}\left(x+a\right)^2,
\end{equation}
as the second term in Eq.~(\ref{eq:exact_epes2}) is zero. The final result in Eq.~(\ref{eq:epes_gs}) is obtained by replacing $\hat{H}_\text{EM}$ with
\begin{equation}
\hat{H}_q^{-}+\frac{1}{2}\, \omega_{c}^{2}\left[\left(q-\frac{\lambda_{c}}{\omega_{c}}x\right)^{2}-\left( q\, + \frac{\lambda_c}{\omega_c} a\right) ^2\right]+\Delta. \nonumber
\end{equation}
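A direct consequence of Eq.~(\ref{eq:epes_gs}) is that the left well minimum at $x=-a$ only acquires the photon zero-point shift $\omega_c/2$, while the right well is raised by $V^{0}_{e-ph}(a)-V^{0}_{e-ph}(-a) = 2\lambda_c^2 a^2$, i.e., the elevation grows quadratically with the coupling. A minimal numerical sketch of this (Python; $\omega_c = a = 1$ are hypothetical values):

```python
# e-ph correlation potential of the ground state:
# V0_eph(x) = omega_c/2 + (lambda_c^2/2) * (x + a)^2
omega_c, a = 1.0, 1.0  # hypothetical parameters

def v_eph_ground(x, lam):
    return omega_c / 2.0 + 0.5 * lam**2 * (x + a) ** 2

for lam in (0.1, 0.5, 0.9):
    lift = v_eph_ground(+a, lam) - v_eph_ground(-a, lam)
    print(lam, lift)  # equals 2 * lam**2 * a**2
```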
In Fig.~\ref{fig:fig3} (bottom-right), we present the full electronic potential that results from adding this approximate analytical {\it e-ph} correlation potential~(\ref{eq:epes_gs}) to the asymmetric double-well potential
of the uncoupled electronic system. Furthermore, we use this potential to calculate the electronic states from Eq.~(\ref{eq:exact_e}) and show the difference between the
electronic densities of the uncoupled electronic system and the approximate electronic densities of the coupled electron-photon system in Fig.~\ref{fig:fig3} (bottom-left). The potentials and densities
corresponding to the approximate analytical {\it e-ph} correlation potential follow the same trend as the numerically exact results and exhibit the main features: the right well is elevated while both wells are
squeezed, and the densities are squeezed compared to the electronic density of the uncoupled electronic system, following the same trend as the exact electronic densities. In both the potentials and the
densities, however, the squeezing is somewhat exaggerated, and the approximate {\it e-ph} correlation potential shifts the right well noticeably above the exact result. Nevertheless, given that the
simplest possible approximation for the conditional photonic wavefunction has been used here, the agreement with the exact results is satisfying.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{NablaNabla_final.pdf}
\end{center}
\caption{Second term of Eq.~(\ref{eq:exact_epes2}) for the first (upper panel) and second (lower panel) excited state, calculated from the numerically exact solution of the SE (blue solid line) and from the
approximate analytical expression given in Eq.~(\ref{eq:nabla_nabla}) (red dashed line).}
\label{fig:fig5}
\end{figure}
\subsubsection{Excited states}
Now we turn to the {\it e-ph} correlation potential of the electronic excited states of the coupled electron-photon system at the resonance photon frequencies. In contrast to the ground state, the excited-state electronic
densities are fully delocalized. In Fig.~\ref{fig:fig4} we plot the first excited-state electronic densities for three different coupling constants, together with their corresponding full electronic potentials,
$V(x)+V^{1}_{e-ph}(x)$, obtained from the numerical integration of the electron-photon SE~(\ref{eq:se}). As highlighted in the figure, the addition of the {\it e-ph} correlation potential~(\ref{eq:exact_epes2}) to the electronic potential $V(x)$ significantly modifies the electronic potential
in two ways. First, it brings the two wells to the same level, symmetrizing the asymmetric electronic potential; this is marked as steps on the plots. Second, it increases the barrier between the two wells, which
stabilizes the delocalization of the electronic densities over both sides of the double-well potential. In addition, the wells are squeezed as the coupling constant increases, which leads to a polaritonic squeezing of the delocalized
electronic excited states.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{nn_an.pdf}
\end{center}
\caption{Height of the peak at $x=0$ as a function of $\lambda_c$, estimated from Eq.~(\ref{eq:nabla_nabla_0}). The blue arrows indicate the height estimates for the coupling constants $\lambda_c$ used in this work.}
\label{fig:fig6}
\end{figure}
We shall now investigate how these features are captured by our approximate analytical treatment. While we give general analytical expressions for the {\it e-ph} correlation potential, our discussion
focuses on $N=1$. Furthermore, our results are valid as long as the energies of the coupled electron-photon system are well below the second excited state of the uncoupled electronic system.
As we only allow for the mixing of states with the same energy in the expansion~(\ref{eq:expansion}), within our approximate treatment the full electron-photon excited states, which are eigenstates of the Hamiltonian~(\ref{eq:Ham_DWpC}),
may be written as
\begin{equation}
\label{eq:el_ph_wf}
\Psi^{\pm}_{N}(x,q)= \frac{1}{\sqrt{2\nu^{\pm}_N}} \left[ \xi^{-}_{N} \phi^{-} \pm \xi^{+}_{N-1} \phi^{+} \right],
\end{equation}
with the corresponding excited-state energies
\begin{equation}
E^{\pm}_N= N \, \omega_c+ \frac{\alpha \pm \beta \langle \xi_{N}^{-} \vert\xi_{N-1}^{+} \rangle_q}{\nu^{\pm}_N},
\end{equation}
where
\begin{equation}
\nu^{\pm}_N = 1\pm S \, \langle \xi_{N}^{-} \vert\xi_{N-1}^{+} \rangle_q,
\end{equation}
and
\begin{equation}
\langle \xi_{N}^{-} \vert\xi_{N-1}^{+} \rangle_q = \exp(- \frac{a^2 \, \lambda_c^2}{2 \, \omega_c^2}) \left( \frac{- a \, \lambda_c} {\omega_c} \right) \sqrt{\frac{1}{N}}\,\mathcal{L}_{N}^{1}\left(\frac{a^2 \lambda_{c}^2}{ \omega_c^2}\right).
\end{equation}
Here $\mathcal{L}_{i}^{j}$ is an associated Laguerre polynomial~\cite{irish2005dynamics}. From these electron-photon wavefunctions we can obtain the corresponding electronic wavefunction,
\begin{equation}
\Phi^{\pm}_{N}(x) = \frac{1}{\sqrt{2\nu^{\pm}_N}} \left[ \vert \phi^{-} \vert^2 + \vert \phi^{+} \vert^2 \pm 2 \langle \xi^{-}_{N} \vert\xi^{+}_{N-1} \rangle_{q} \phi^{-} \phi^{+} \right]^{1/2},
\label{eq:phi_app}
\end{equation}
and conditional photonic wavefunction,
\begin{equation}
\chi^{\pm}_{N} (q\vert x) = \frac{1}{\sqrt{2}} \frac{\xi^{-}_{N} \phi^{-} \pm \xi^{+}_{N-1} \phi^{+} }{\Phi^{\pm}_{N}(x)}.
\label{eq:chi_app}
\end{equation}
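The displaced-oscillator overlaps $\langle \xi_{N}^{-} \vert\xi_{N-1}^{+} \rangle_q$ entering these expressions are Franck-Condon-type factors. As a generic sanity check of such factors (in simplified conventions with unit mass, a single frequency $\omega$, and displacement $d$ between the two centers, so the prefactors differ from the model-specific formula above), the standard closed form $e^{-\alpha^2/2}\sqrt{n!/m!}\,\alpha^{m-n}\mathcal{L}_n^{m-n}(\alpha^2)$ with $\alpha = d\sqrt{\omega/2}$ can be compared against direct quadrature:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval
from scipy.special import genlaguerre

def ho_eigenstate(n, x, omega=1.0):
    """n-th harmonic-oscillator eigenfunction (hbar = mass = 1)."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    norm = (omega / np.pi) ** 0.25 / np.sqrt(2.0**n * factorial(n))
    y = np.sqrt(omega) * x
    return norm * hermval(y, coef) * np.exp(-(y**2) / 2.0)

def overlap_closed(m, n, d, omega=1.0):
    """Franck-Condon factor <m | n displaced by d>, for m >= n."""
    alpha = d * np.sqrt(omega / 2.0)
    return (np.exp(-(alpha**2) / 2.0) * np.sqrt(factorial(n) / factorial(m))
            * alpha ** (m - n) * genlaguerre(n, m - n)(alpha**2))

x = np.linspace(-20.0, 20.0, 40001)
d = 1.3  # hypothetical displacement
pairs = [(1, 0), (2, 1), (3, 1)]
numeric = [np.trapz(ho_eigenstate(m, x) * ho_eigenstate(n, x - d), x) for m, n in pairs]
closed = [overlap_closed(m, n, d) for m, n in pairs]
print(numeric, closed)
```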
We then insert the conditional photonic wavefunction into the {\it e-ph} correlation potential~(\ref{eq:exact_epes2}). We first discuss the second term in~(\ref{eq:exact_epes2}), which
takes the following analytical form in our approximation (setting $m=1$):
\begin{equation}
\label{eq:nabla_nabla}
\frac{1}{2}\langle \partial_x \chi^{\pm}_{N} \vert \partial_x \chi^{\pm}_{N} \rangle_q = \frac{a^2\,\omega_e^2\, \vert \phi^{+} \vert^2 \vert \phi^{-} \vert^2 (1-\langle\xi^{-}_{N} \vert\xi^{+}_{N-1} \rangle_{q}^2) }{2\vert\Phi_{N}^{\pm}\vert^4}.
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Exact_eeps_Step.pdf}
\end{center}
\caption{Estimation of the step from the second term of Eq.~(\ref{eq:e_approx}) for three different $\lambda_c$-s.}
\label{fig:fig7}
\end{figure}
In Fig.~\ref{fig:fig5}, we plot the expression above for the first (upper panel) and second (lower panel) excited state, together with the numerically exact results for the same term (the second term of~(\ref{eq:exact_epes2})). Our analytical expression produces a peak at the same position as the numerically exact results, reproducing the peak feature of the
{\it e-ph} correlation potential qualitatively well. We can further simplify the expression~(\ref{eq:nabla_nabla}) at the center of the peak, which is located at the crossover of the two AOs, i.e., at $x=0$.
Hence, the approximate expression predicts the height of the peak at $x=0$ as
\begin{equation}
\label{eq:nabla_nabla_0}
\frac{1}{2} \langle \partial_x \chi^{\pm}_{N} \vert \partial_x \chi^{\pm}_{N} \rangle_q\vert_{x=0} = \frac{a^2\,\omega_e^2}{2}\left(\frac{1\mp\langle\xi^{-}_{N} \vert\xi^{+}_{N-1} \rangle_{q}}{1\pm\langle\xi^{-}_{N} \vert\xi^{+}_{N-1} \rangle_q}\right).
\end{equation}
In Fig.~\ref{fig:fig6} we plot this expression for $N=1$, which gives a prediction for the height of the peak for the first ($1^{-}$) and second ($1^{+}$) excited states. According to this expression,
the heights of the peaks for the first two excited states evolve in opposite directions: while the height of the peak for the lower state ($-$) descends to a minimum (around $\lambda_c=0.17$), the height of the peak for
the upper state ($+$) ascends to a maximum around the same point. For large $\lambda_c$, however, the heights for both states converge to $\frac{a^2\,\omega_e^2}{2}$.
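Equation~(\ref{eq:nabla_nabla_0}) is easy to evaluate directly for $N=1$, using $\mathcal{L}_{1}^{1}(y) = 2 - y$ in the overlap formula above. The sketch below (Python; the parameter values $a = \omega_e = \omega_c = 1$ are hypothetical, so the location of the extrema differs from the $\lambda_c \approx 0.17$ quoted for the model parameters) also checks a simple invariant: since the two branches are reciprocal in $(1\mp s)/(1\pm s)$, the product of the two peak heights is always $(a^2\omega_e^2/2)^2$.

```python
import numpy as np

a, omega_e, omega_c = 1.0, 1.0, 1.0  # hypothetical parameters

def dho_overlap_1(lam):
    """<xi_1^- | xi_0^+>_q for N = 1, with L_1^1(y) = 2 - y."""
    y = (a * lam / omega_c) ** 2
    return np.exp(-y / 2.0) * (-a * lam / omega_c) * (2.0 - y)

def peak_height(lam, sign):
    """Peak height at x = 0 from the approximate expression."""
    s = dho_overlap_1(lam)
    return 0.5 * a**2 * omega_e**2 * (1.0 - sign * s) / (1.0 + sign * s)

for lam in (0.1, 0.5, 0.9, 5.0):
    print(lam, peak_height(lam, +1), peak_height(lam, -1))
```

For large $\lambda_c$ the overlap vanishes and both branches approach $a^2\omega_e^2/2$, consistent with the convergence noted above.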
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Exact_eeps_ExVxAn_final.pdf}
\end{center}
\caption{Full electronic potentials obtained from the approximate analytical expression (red dashed line) versus the ones calculated numerically exactly (blue solid line) for three different $\lambda_c$-s as indicated in the plots.}
\label{fig:fig8}
\end{figure}
Now let us investigate the analytical expression for the first term of Eq.~(\ref{eq:exact_epes2}), which within our approximation is
\begin{equation}
\label{eq:e_approx}
\left\langle\chi^{\pm}_{N} \right\vert\hat{H}_\text{EM}\left\vert\chi^{\pm}_{N}\right\rangle_q = N \omega_{c} + \frac{\omega_{c}}{4}\left(\frac{\vert \phi^{-} \vert^2 - \vert \phi^{+} \vert^2 }{\vert\Phi_{N}^{\pm}\vert^2} \right) + f(\lambda_{c})
\end{equation}
where
\begin{eqnarray}\label{eq:e_approx_ldependent}
& &f(\lambda_{c}) = \frac{\lambda_{c}^2 a \, x}{2} \left(\frac{\vert \phi^{-} \vert^2 - \vert \phi^{+} \vert^2 }{\vert\Phi_{N}^{\pm}\vert^2} \right) + \nonumber\\
& &\frac{\lambda_{c}^2 x^2}{2}+ \frac{\lambda_{c}^2 a^2}{2} \left(\frac{\Phi_{N}^{\mp}}{\Phi_{N}^{\pm}}\right)^2 \mp \frac{\lambda_{c} \, \omega_{c} \, x \langle \xi^{-}_{N} \vert q \vert \xi^{+}_{N-1} \rangle_{q}}{\vert\Phi_{N}^{\pm}\vert^2} \phi^{-} \phi^{+}\nonumber\\
\end{eqnarray}
While the first term of~(\ref{eq:e_approx}) gives a constant overall shift, the second term gives a step-like function of height $\omega_c$ ($=2 \Delta$), as shown in Fig.~\ref{fig:fig7}. As can be seen
in the figure, the larger the coupling constant $\lambda_c$, the sharper the step around $x=0$. This demonstrates that the approximation captures yet another essential feature of the exact {\it e-ph} correlation potential, which was shown in
Fig.~\ref{fig:fig4} and discussed earlier in this section.
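The step behavior of the second term of Eq.~(\ref{eq:e_approx}) can also be checked numerically: far to the left of the barrier $\vert\phi^{-}\vert^2$ dominates, so with the normalization of Eq.~(\ref{eq:phi_app}) the term plateaus at $+\omega_c\nu^{\pm}_N/2$, and symmetrically at $-\omega_c\nu^{\pm}_N/2$ on the right, giving a total jump of $\omega_c\nu^{\pm}_N \approx \omega_c$ when the overlaps are small. A sketch with hypothetical parameter values ($a = \omega_e = \omega_c = 1$, $\lambda_c = 0.5$, the $+$ combination, $N=1$):

```python
import numpy as np

a, omega_e, omega_c, lam = 1.0, 1.0, 1.0, 0.5  # hypothetical parameters
sign = +1  # the (+) combination of the N = 1 excited state

def phi(x, c):
    # Gaussian AO centered at c (ground state of the well, hbar = mass = 1)
    return (omega_e / np.pi) ** 0.25 * np.exp(-omega_e * (x - c) ** 2 / 2.0)

S = np.exp(-omega_e * a**2)                                 # AO overlap <phi-|phi+>
y = (a * lam / omega_c) ** 2
s_q = np.exp(-y / 2.0) * (-a * lam / omega_c) * (2.0 - y)   # <xi_1^-|xi_0^+>_q
nu = 1.0 + sign * S * s_q

def step_term(x):
    """Second term of the first-term expression: (omega_c/4)(|phi-|^2-|phi+|^2)/|Phi|^2."""
    pm, pp = phi(x, -a), phi(x, +a)
    Phi2 = (pm**2 + pp**2 + sign * 2.0 * s_q * pm * pp) / (2.0 * nu)
    return (omega_c / 4.0) * (pm**2 - pp**2) / Phi2

print(step_term(-4.0 * a), step_term(+4.0 * a))  # plateaus near +/- omega_c*nu/2
```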
Finally, as the concluding result of this work, we compare in Fig.~\ref{fig:fig8} the full electronic potentials $V(x)+V^{1(2)}_{e-ph}$ obtained from the approximate analytical expression with the ones calculated numerically exactly, for three different $\lambda_c$-s
as indicated in the figure. As discussed previously, and as can be seen in Fig.~\ref{fig:fig8}, the approximate analytical expression for the {\it e-ph} correlation potential captures the essential step and peak
features of the exact {\it e-ph} correlation potential very well. It also captures the squeezing of the wells, although it overestimates this feature as $\lambda_c$ increases.
\section{Summary and Outlook}
We have extended the EF framework to study the electronic states of correlated electron-photon systems and have shown that, within this new approach, the electronic states are
uniquely determined by the addition of a scalar potential (the {\it e-ph} correlation potential) and a vector potential to the uncoupled electronic Hamiltonian. For a one-dimensional asymmetric double-well potential coupled to a
single photon mode of a cavity at the resonance frequency, we have calculated the {\it e-ph} correlation potential numerically exactly and discussed its significant features, namely steps, intra-well peaks, and
squeezing of the wells of the double-well potential. These features of the {\it e-ph} correlation potential are responsible for the polaritonic squeezing of the electronic ground state, as well as the photon-assisted delocalization
and polaritonic squeezing of the electronic excited states. Although not directly related, the step-and-peak structure of the {\it e-ph} correlation potential investigated in this work is reminiscent of
the step-and-peak structure of the Kohn-Sham potential of density functional theory in the dissociation limit~\cite{helbig2009exact,Gritsenko1997,tempel2009revisiting}.
We have furthermore derived an approximate analytical expression for the {\it e-ph} correlation potential of the model system studied, by combining the atomic orbitals with displaced harmonic oscillator states.
For the ground electronic state of the coupled electron-photon system, we have shown how our analytical approximation captures the key features of the exact {\it e-ph} correlation potential, i.e., the squeezing of
both wells and the elevation of the right well, which lead to the polaritonic squeezing of the electronic ground state in the left well that is enhanced as the coupling constant increases. For the first two
excited states, the analytical approximation reproduces the step and peak features of the exact {\it e-ph} correlation potential, which are essential to capture the photon-assisted delocalization of the electronic
excited states. For both the ground state and the first two excited states discussed in this work, we have shown that while the analytical approximation captures the polaritonic squeezing of the electronic states, which is
enhanced as $\lambda_c$ increases, it overestimates this effect. In our upcoming work we will discuss how to go beyond the approximation presented here~\cite{AKT_IP2018}. One of the main motivations of these analytical
investigations is to set the stage for utilizing the time-dependent EF framework to study correlated electron-photon dynamics~\cite{AKT2_IP2018}. This approach to coupled electron-photon dynamics is complementary
to the recently developed TDDFT approach to cavity QED. Indeed, the {\it e-ph} correlation potential at the heart of our investigation is closely related to the correlation potential of cavity QED (TD)DFT; therefore,
the features of the {\it e-ph} correlation potential discussed in this work, together with the analytical approximation of the potential presented here, are particularly relevant for developing
cavity QED (TD)DFT exchange-correlation functionals~\cite{FSR18,DFRR17}. Another interesting avenue to explore is the connection between the approach proposed here and
the Born-Huang expansion approach to cavity QED that has been proposed very recently~\cite{SRR18}.
\section{Acknowledgments}
It is our great pleasure to dedicate this work to Hardy Gross who has revived and established the Exact Factorization approach through his persistent developments and exceptional presentations. We wish Hardy the best
for the decades to come!
A. A. and E. K. acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreements no. 702406 and 704218, respectively.
I. T. acknowledges funding from the Spanish Ministerio de Economía y Competitividad (MINECO), project No. FIS2016-79464-P, and from the “Grupos Consolidados UPV/EHU del Gobierno Vasco” (Grant No. IT578-13).
\addcontentsline{toc}{section}{References}
\bibliographystyle{ieeetr}